Science.gov

Sample records for accuracy precision sensitivity

  1. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As a first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, O3 chemistry can be studied in relative isolation, without the complications of heterogeneous chemistry and dynamical variations, because of the short lifetime of O3 in this region. Mesospheric ozone is produced by the photodissociation of O2 and the subsequent reaction of O with O2. Diurnal and semidiurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and is mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES successfully measured the vertical distributions and diurnal variations of various atmospheric species in the latitude range 38°S to 65°N from October 2009 to April 2010. 
A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  2. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and a secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only at the lowest angles of incidence was performance better when either the ellipse or the lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval of the specific measurement. PMID:27044032
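
    The ellipse method mentioned above derives the angle of incidence from the shape of the elliptical defect a bullet leaves in a flat target. As a minimal sketch (not the authors' implementation), the standard relation sin(θ) = defect width / defect length can be coded as:

```python
import math

def ellipse_angle_of_incidence(width_mm: float, length_mm: float) -> float:
    """Estimate the angle of incidence (degrees, measured from the target
    surface) from the minor and major axes of an elliptical bullet defect,
    using the standard relation sin(theta) = width / length."""
    if not 0 < width_mm <= length_mm:
        raise ValueError("width must be positive and no larger than length")
    return math.degrees(math.asin(width_mm / length_mm))

# A defect twice as long as it is wide implies a ~30 degree impact angle.
print(round(ellipse_angle_of_incidence(10.0, 20.0), 1))  # 30.0
```

    Systematic deviations of such estimates from the true angle are exactly the accuracy errors the study quantifies per target material.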

  3. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J.; Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  4. Accuracy and precision of an IGRT solution.

    PubMed

    Webster, Gareth J; Rowbottom, Carl G; Mackay, Ranald I

    2009-01-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  5. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single-station guidance program, since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high-ratio beamsplitting results could be achieved repeatedly, and what issues were involved in calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg through the use of a straightforward lookup-table concept. This corresponds to a cross-range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position, plus a lookup table for pointing errors relative to broadside. In the event that recalibration is required, it was found that only 1% of the model would need to be corrected.
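
    The reported cross-range figure can be checked with small-angle geometry: cross-range error ≈ range × angular error in radians. A minimal sketch (an illustrative check, not code from the report):

```python
import math

def cross_range_error(angle_err_deg: float, range_m: float) -> float:
    """Small-angle approximation: cross-range error is the range
    multiplied by the angular error expressed in radians."""
    return range_m * math.radians(angle_err_deg)

# A 0.006 deg angular error at a 200 km range gives roughly 21 m,
# matching the figure quoted in the abstract.
print(round(cross_range_error(0.006, 200e3)))  # 21
```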

  6. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual-energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging, followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA were then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients for the new method ranging from 0.743 to 0.945, and a Levene's test showing significantly lower variance in the data generated by the new method, both verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416
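
    The Bland-Altman agreement analysis used in the study above can be sketched in a few lines: compute the paired differences, their mean (the bias), and the 95% limits of agreement. The numbers below are invented for illustration, not taken from the study:

```python
import statistics

def bland_altman(a, b):
    """Return (mean difference, 95% limits of agreement) for two paired
    measurement series, e.g. DXA BMC vs micro-CT BMC."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

dxa_bmc     = [10.2, 11.0, 9.8, 10.5, 10.9]  # hypothetical values
microct_bmc = [10.0, 11.1, 9.9, 10.3, 10.8]  # hypothetical values
bias, (lo, hi) = bland_altman(dxa_bmc, microct_bmc)
print(round(bias, 2))  # 0.06
```

    Good agreement in a Bland-Altman sense means a bias near zero and limits of agreement narrow enough to be clinically acceptable.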

  7. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from the standpoint of system modeling. A functional relation between the imaging system output and the luminescent emission from PSP is obtained based on studies of radiative energy transport in PSP and photodetector response to luminescence. This relation provides insight into the physical origins of various elemental error sources and allows estimation of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and for the upper bounds on the elemental errors needed to meet a required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flow is given to illustrate uncertainty estimates in PSP measurements.
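
    The error propagation the abstract refers to is the standard first-order formula: the total variance is the sum of squared (sensitivity coefficient × elemental uncertainty) terms. A minimal sketch with made-up sensitivity coefficients and elemental errors (the values are purely illustrative):

```python
import math

def combined_uncertainty(sensitivities, sigmas):
    """First-order error propagation: total uncertainty is the root of
    the sum of (sensitivity coefficient * elemental uncertainty)^2."""
    return math.sqrt(sum((s * u) ** 2 for s, u in zip(sensitivities, sigmas)))

# Illustrative elemental errors (e.g. camera shot noise, illumination
# drift, temperature sensitivity) -- values invented for the example.
print(round(combined_uncertainty([1.0, 0.5, 2.0], [0.01, 0.02, 0.005]), 4))  # 0.0173
```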

  8. T1-mapping in the heart: accuracy and precision

    PubMed Central

    2014-01-01

    The longitudinal relaxation time constant (T1) of the myocardium is altered in various disease states due to increased water content or other changes to the local molecular environment. Changes in both native T1 and T1 following administration of gadolinium (Gd) based contrast agents are considered important biomarkers and multiple methods have been suggested for quantifying myocardial T1 in vivo. Characterization of the native T1 of myocardial tissue may be used to detect and assess various cardiomyopathies while measurement of T1 with extracellular Gd based contrast agents provides additional information about the extracellular volume (ECV) fraction. The latter is particularly valuable for more diffuse diseases that are more challenging to detect using conventional late gadolinium enhancement (LGE). Both T1 and ECV measures have been shown to have important prognostic significance. T1-mapping has the potential to detect and quantify diffuse fibrosis at an early stage provided that the measurements have adequate reproducibility. Inversion recovery methods such as MOLLI have excellent precision and are highly reproducible when using tightly controlled protocols. The MOLLI method is widely available and is relatively mature. The accuracy of inversion recovery techniques is affected significantly by magnetization transfer (MT). Despite this, the estimate of apparent T1 using inversion recovery is a sensitive measure, which has been demonstrated to be a useful tool in characterizing tissue and discriminating disease. Saturation recovery methods have the potential to provide a more accurate measurement of T1 that is less sensitive to MT as well as other factors. Saturation recovery techniques are, however, noisier and somewhat more artifact prone and have not demonstrated the same level of reproducibility at this point in time. 
This review article focuses on the technical aspects of key T1-mapping methods and imaging protocols and describes their limitations including
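
    For inversion-recovery mapping such as MOLLI, a common three-parameter model is S(t) = A − B·exp(−t/T1*), with the apparent T1* corrected to T1 via the Look-Locker relation T1 = T1*·(B/A − 1). A sketch of that correction step with illustrative fit parameters (not values from the article):

```python
def molli_t1(a: float, b: float, t1_star: float) -> float:
    """Look-Locker correction used with 3-parameter inversion-recovery
    fits S(t) = A - B*exp(-t/T1*): true T1 = T1* * (B/A - 1)."""
    return t1_star * (b / a - 1.0)

# Illustrative fit parameters; in practice A, B and T1* come from a
# nonlinear fit to the sampled recovery curve.
print(round(molli_t1(a=1.0, b=1.9, t1_star=600.0)))  # 540
```

    Systematic deviation of this apparent T1 from the true value, e.g. due to magnetization transfer, is exactly the accuracy limitation the review discusses.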

  9. Precision and Accuracy Studies with Kajaani Fiber Length Analyzers

    NASA Astrophysics Data System (ADS)

    Copur, Yalcin; Makkonen, Hannu

    The aim of this study was to test the measurement precision and accuracy of the Kajaani FS-100, giving attention to possible machine error in the measurements. The fiber lengths of pine pulps produced using polysulfide, kraft, biokraft, and soda methods were determined using both the FS-100 and FiberLab automated fiber length analyzers, and the measured length values were compared. Measurement precision and accuracy were tested by replicated measurements using rayon staple fibers. Measurements performed on pulp samples showed typical length distributions for both analyzers. Results obtained from the Kajaani FS-100 and FiberLab showed a significant correlation. The shorter length measurement with the FiberLab was found to be mainly due to instrument calibration. The measurement repeatability tested for the Kajaani FS-100 indicated that the measurements are precise.

  10. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    NASA Astrophysics Data System (ADS)

    Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.

    2016-04-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf, low-cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much-needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline, using precision-made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We focus on the influence of calibration design parameters, the calibration procedure, and the encoding strategy, and present our findings. Finally, we compare our setup to a state-of-the-art metrology-grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.

  11. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  12. Precision and accuracy in the reproduction of simple tone sequences.

    PubMed

    Vos, P G; Ellermann, H H

    1989-02-01

    In four experiments we investigated the precision and accuracy with which amateur musicians are able to reproduce sequences of tones varied only temporally, with tone and rest durations constant within each sequence and the tempo varied over the musically meaningful range of 5-0.5 tones per second. Experiments 1 and 2 supported the hypothesis of an attentional bias toward having the attack moments, rather than the departure moments, precisely timed. Experiment 3 corroborated the hypothesis that inaccurate timing of short interattack intervals is manifested in a lengthening of rests, rather than tones, as a result of greater motor activity during the reproduction of rests. Experiment 4 gave some support to the hypothesis that the shortening of long interattack intervals is due to mnemonic constraints affecting the rests rather than the tones. Both theoretical and practical consequences of the various findings, particularly with respect to timing in musical performance, are discussed. PMID:2522528

  13. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  14. Accuracy, Precision, and Resolution in Strain Measurements on Diffraction Instruments

    NASA Astrophysics Data System (ADS)

    Polvino, Sean M.

    Diffraction stress analysis is a commonly used technique to evaluate the properties and performance of different classes of materials, from engineering materials such as steels and alloys to electronic materials such as silicon chips. To better understand the performance of these materials under operating conditions, they are also commonly subjected to elevated temperatures and different loading conditions. The validity of any measurement under these conditions is only as good as the control of the conditions and the accuracy and precision of the instrument being used to measure the properties. What is the accuracy and precision of a typical diffraction system, and what is the best way to evaluate these quantities? Is there a way to remove systematic and random errors in the data that are due to problems with the control system used? With the advent of device engineering employing internal stress as a method for increasing performance, the measurement of stress in microelectronic structures has become increasingly important. X-ray diffraction provides an ideal method for measuring these small areas without the need to modify the sample and possibly change the strain state. Micro- and nano-diffraction experiments on silicon-on-insulator samples revealed changes to the material under investigation and raised significant concerns about the usefulness of these techniques. This damage process and the application of micro- and nano-diffraction are discussed.

  15. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in a few millimetres of positioning rms error in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity-resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing of the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
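
    Step 2 of the error-budget approach above, transforming ranging error to position error via DOP, follows the standard rule of thumb: position error ≈ DOP × user equivalent range error (UERE). A minimal sketch with illustrative numbers (not figures from the study):

```python
def position_error(uere_m: float, dop: float) -> float:
    """Rule-of-thumb mapping from ranging error to position error:
    position error ~= DOP * user equivalent range error (UERE)."""
    return dop * uere_m

# Illustrative: a 0.1 m residual ranging error with an HDOP of 1.2
# maps to roughly 0.12 m of horizontal position error.
print(round(position_error(0.1, 1.2), 2))  # 0.12
```

    The study's third step, scaling DOP through the filter, is what makes the PPP budget harder than this epoch-by-epoch picture.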

  16. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

    Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development which promise to provide satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth-order reflected light from a grating of lines. Normal-incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectra matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports on work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  17. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS--1988

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies (SLAMS) during 1988 are analyzed. Pooled site variances and average biases, which are relevant quantities to both precision and accuracy determinations, are statistically compared within and between states to assess ...

  18. Accuracy and precision of alternative estimators of ectoparasiticide efficacy.

    PubMed

    Schall, Robert; Burger, Divan A; Luus, Herman G

    2016-06-15

    While there is consensus that the efficacy of parasiticides is properly assessed using the Abbott formula, there is as yet no general consensus on the use of arithmetic versus geometric mean numbers of surviving parasites in the formula. The purpose of this paper is to investigate the accuracy and precision of various efficacy estimators based on the Abbott formula which alternatively use arithmetic mean, geometric mean, and median numbers of surviving parasites; we also consider a maximum likelihood estimator. Our study shows that the best estimators using geometric means are competitive, with respect to root mean square error, with the conventional Abbott estimator using arithmetic means, as they have lower average and lower median root mean square error over the parameter scenarios we investigated. However, our study confirms that Abbott estimators using geometric means are potentially biased upwards, and this upward bias is substantial in particular when the test product has substandard efficacy (90% and below). For this reason, we recommend that the Abbott estimator be calculated using arithmetic means. PMID:27198777
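
    The Abbott formula under discussion is efficacy = 100 × (1 − mean surviving parasites in the treated group / mean in the control group). A sketch comparing the arithmetic- and geometric-mean variants on invented counts; the +1 offset before taking the geometric mean is a common convention for handling zero counts, assumed here rather than taken from the paper:

```python
import statistics

def abbott_efficacy(control_counts, treated_counts, mean="arithmetic"):
    """Abbott efficacy: 100 * (1 - mean(treated) / mean(control)).
    The choice of mean (arithmetic vs geometric) is the point of
    contention discussed in the abstract."""
    if mean == "arithmetic":
        mc = statistics.mean(control_counts)
        mt = statistics.mean(treated_counts)
    else:  # geometric; counts offset by 1 so zeros are defined
        mc = statistics.geometric_mean(c + 1 for c in control_counts)
        mt = statistics.geometric_mean(t + 1 for t in treated_counts)
    return 100.0 * (1.0 - mt / mc)

control = [50, 60, 55, 45]  # surviving parasites, untreated animals
treated = [2, 5, 0, 3]      # surviving parasites, treated animals
print(round(abbott_efficacy(control, treated), 1))  # 95.2 (arithmetic)
print(round(abbott_efficacy(control, treated, "geometric"), 1))
```

    On skewed count data the two variants can diverge noticeably, which is why the paper's upward-bias finding for geometric means matters near the 90% efficacy threshold.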

  19. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, three more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  20. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand, King et al. (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica, can be distinguished to ±3 m using a 3 MHz radar system (which has a quarter wavelength of 14 m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria for radar range accuracy. A dataset from Pine Island Glacier, West Antarctica, will be used to illustrate these points, as well as the repeatability (precision) of radar range measurements and the influence of gridding parameters and positioning accuracy on the final DEM product.
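
    The distinction drawn above can be made concrete: the quarter-wavelength criterion limits the separation of two comparable reflectors, while ice thickness is a range measurement to one strong reflector, limited by timing rather than wavelength. A sketch using the standard radio-wave speed in ice (about 1.68 × 10^8 m/s; the travel time below is illustrative):

```python
def quarter_wavelength_m(freq_hz: float, v_ice: float = 1.68e8) -> float:
    """Quarter wavelength in ice: the conventional criterion for
    resolving two targets of similar reflection strength."""
    return v_ice / freq_hz / 4.0

def ice_thickness_m(two_way_time_s: float, v_ice: float = 1.68e8) -> float:
    """Range to a single strong basal reflector: wave speed times
    half the two-way travel time."""
    return v_ice * two_way_time_s / 2.0

# A 3 MHz system has a quarter wavelength of 14 m in ice, as quoted...
print(quarter_wavelength_m(3e6))  # 14.0
# ...yet a 25 microsecond echo still yields a well-defined thickness,
# whose accuracy depends on timing, not on the 14 m resolution limit.
print(round(ice_thickness_m(25e-6), 1))  # 2100.0
```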

  1. Numerical planetary and lunar ephemerides - Present status, precision and accuracies

    NASA Technical Reports Server (NTRS)

    Standish, E. Myles, Jr.

    1986-01-01

    Features of the ephemeris creation process are described, with attention given to the equations of motion, the numerical integration, and the least-squares fitting process. Observational data are presented and ephemeris accuracies are estimated. It is believed that radio measurements, VLBI, occultations, and the Space Telescope and Hipparcos will improve ephemerides in the near future. Limitations to accuracy are considered as well as relativity features. The export procedure, by which an outside user may obtain and use the JPL ephemerides, is discussed.

  2. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  3. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis, and also therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient’s body temperature in the intensive care units because of its high accuracy and acceptable precision. PMID:27621673
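
    The sensitivity and specificity compared across methods above are simple ratios of the confusion-matrix counts underlying a ROC analysis. A sketch with invented counts for a fever-detection cutoff (not the study's data):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP) -- the
    metrics used to compare thermometry methods via ROC analysis."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: how often a peripheral method flags a fever
# that the central nasopharyngeal reference confirms (or rules out).
sens, spec = sensitivity_specificity(tp=45, fn=5, tn=170, fp=17)
print(round(sens, 2), round(spec, 2))  # 0.9 0.91
```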

  4. The precision and accuracy of a portable heart rate monitor.

    PubMed

    Seaward, B L; Sleamaker, R H; McAuliffe, T; Clapp, J F

    1990-01-01

A device that would comfortably and accurately measure exercise heart rate during field performance could be valuable for athletes, fitness participants, and investigators in the field of exercise physiology. Such a device, a portable telemeterized microprocessor, was compared with direct EKG measurements in a laboratory setting under several conditions to assess its accuracy. Twenty-four subjects were studied at rest and during light-, moderate-, high-, and maximal-intensity endurance activities (walking, running, aerobic dancing, and Nordic Track simulated cross-country skiing). Differences between values obtained by the two measuring devices were not statistically significant, with correlation coefficient (r) values ranging from 0.998 to 0.999. The two methods proved equally reliable for measuring heart rate in a host of varied aerobic activities at varying intensities. PMID:2306564

  5. Factors affecting accuracy and precision in PET volume imaging

    SciTech Connect

Karp, J.S.; Daube-Witherspoon, M.E.; Muehllehner, G.

    1991-03-01

    Volume imaging positron emission tomographic (PET) scanners with no septa and a large axial acceptance angle offer several advantages over multiring PET scanners. A volume imaging scanner combines high sensitivity with fine axial sampling and spatial resolution. The fine axial sampling minimizes the partial volume effect, which affects the measured concentration of an object. Even if the size of an object is large compared to the slice spacing in a multiring scanner, significant variation in the concentration is measured as a function of the axial position of the object. With a volume imaging scanner, it is necessary to use a three-dimensional reconstruction algorithm in order to avoid variations in the axial resolution as a function of the distance from the center of the scanner. In addition, good energy resolution is needed in order to use a high energy threshold to reduce the coincident scattered radiation.
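The axial partial-volume effect this record describes can be illustrated numerically: when a slice only partly overlaps an object, the measured peak concentration drops, and the result shifts with the object's axial position. The slice-overlap model and all dimensions below are invented for illustration:

```python
import numpy as np

def measured_peak(object_extent_mm, slice_thickness_mm, z_offset_mm=0.0):
    """Fraction of the true concentration recovered in the hottest slice,
    for a uniform object centred at z = 0 sampled by contiguous slices."""
    edges = np.arange(-100.0, 100.0, slice_thickness_mm) + z_offset_mm
    # Overlap of each slice [edge, edge + thickness] with the object.
    overlaps = np.clip(
        np.minimum(edges + slice_thickness_mm, object_extent_mm / 2)
        - np.maximum(edges, -object_extent_mm / 2),
        0, None)
    return overlaps.max() / slice_thickness_mm

fine   = measured_peak(8.0, 2.0)   # fine axial sampling: a slice fits inside
coarse = measured_peak(8.0, 12.0)  # coarse sampling: hottest slice only 2/3 full
```

With fine sampling the hottest slice lies entirely inside the object and recovers the full concentration; with a 12 mm slice over an 8 mm object, at best 8/12 of the true value is measured, which is the variation with slice spacing the abstract refers to.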

  6. Milling precision and fitting accuracy of Cerec Scan milled restorations.

    PubMed

    Arnetzl, G; Pongratz, D

    2005-10-01

The milling accuracy of the Cerec Scan system was examined under standard practice conditions. For this purpose, one and the same 3D design, similar to an inlay, was milled 30 times from Vita Mark II ceramic blocks. Cylindrical diamond burs with 1.2 or 1.6 mm diameter were used. Each milled body was measured to 0.1 microm at five defined sections with a Zeiss coordinate measuring instrument. In the statistical evaluation, both the diamond bur diameter and the extent of material removal from the ceramic blank were taken into consideration; sections with large substance removal and sections with low substance removal were defined. The standard deviation for the 1.6-mm burs was clearly greater than that for the 1.2-mm burs in the sections with large substance removal. This difference was significant according to the Levene test for variance equality. In sections with low substance removal, no difference between the 1.6-mm and 1.2-mm burs was shown. The measured results ranged between 0.053 and 0.14 mm. The distances with large substance removal were larger than those with low substance removal. The t-test for paired samples showed that the distance with large substance removal when using the 1.6-mm bur was significantly larger than the distance with low substance removal. The difference was not significant for the small burs. Statistical analysis repeatedly showed that the 1.6-mm cylindrical diamond bur led to greater inaccuracies than the 1.2-mm bur, especially at sites with large material removal. PMID:16689028

  7. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  8. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

The advent and development of new imaging techniques and image-guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what lies beneath the surface, or function. These systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking system is thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system is assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy is dependent on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error has characteristics and values similar to those of the previous work, thus validating this method for tracker analysis.

  9. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  10. S193 radiometer brightness temperature precision/accuracy for SL2 and SL3

    NASA Technical Reports Server (NTRS)

    Pounds, D. J.; Krishen, K.

    1975-01-01

    The precision and accuracy with which the S193 radiometer measured the brightness temperature of ground scenes is investigated. Estimates were derived from data collected during Skylab missions. Homogeneous ground sites were selected and S193 radiometer brightness temperature data analyzed. The precision was expressed as the standard deviation of the radiometer acquired brightness temperature. Precision was determined to be 2.40 K or better depending on mode and target temperature.

  11. Precision and sensitivity optimization of quantitative measurements in solid state NMR

    NASA Astrophysics Data System (ADS)

    Ziarelli, Fabio; Viel, Stéphane; Sanchez, Stéphanie; Cross, David; Caldarelli, Stefano

    2007-10-01

This work presents a methodology for optimizing the precision, accuracy and sensitivity of quantitative solid state NMR measurements based on the external reference method. It is shown that the sample must be exclusively located within, and completely span, the coil region where the NMR response is directly proportional to the sample amount. We describe two methods to determine this "quantitative" coil volume, depending on whether or not the probe is equipped with a gradient coil. In addition, to improve the sensitivity and the accuracy, an optimum rotor packing design is described, which allows the sample volume of the rotor to be matched to the quantitative coil volume. Experiments conducted on adamantane and NaCl, representative of a soft and a hard material, respectively, show that an order-of-magnitude increase in experimental precision can be achieved with this methodology. Interestingly, the precision can be further improved by using the ERETIC™ method to compensate for most instrumental instabilities.

  12. Accuracy of differential sensitivity for one-dimensional shock problems

    SciTech Connect

    Henninger, R.J.; Maudlin, P.J.; Rightley, M.L.

    1998-07-01

The technique called Differential Sensitivity has been applied to the system of Eulerian continuum mechanics equations solved by a hydrocode. Differential Sensitivity uses forward and adjoint techniques to obtain output response sensitivity to input parameters. Previous papers have described application of the technique to two-dimensional, multi-component problems. Inaccuracies in the adjoint solutions have prompted us to examine our numerical techniques in more detail. Here we examine one-dimensional, one-material shock problems. Solution accuracy is assessed by comparison to sensitivities obtained by automatic differentiation and a code-based adjoint differentiation technique. © 1998 American Institute of Physics.

  13. David Weston--Ocean science of invariant principles, total accuracy, and appropriate precision

    NASA Astrophysics Data System (ADS)

    Roebuck, Ian

    2002-11-01

    David Weston's entire professional career was as a member of the Royal Navy Scientific Service, working in the field of ocean acoustics and its applications to maritime operations. The breadth of his interests has often been remarked upon, but because of the sensitive nature of his work at the time, it was indeed much more diverse than his published papers showed. This presentation, from the successors to the laboratories he illuminated for many years, is an attempt to fill in at least some of the gaps. The presentation also focuses on the underlying scientific philosophy of David's work, rooted in the British tradition of applicable mathematics and physics. A deep appreciation of the role of invariants and dimensional methods, and awareness of the sensitivity of any models to changes to the input assumptions, was at the heart of his approach. The needs of the Navy kept him rigorous in requiring accuracy, and clear about the distinction between it and precision. Examples of these principles are included, still as relevant today as they were when he insisted on applying them 30 years ago.

  14. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations. PMID:15832575

  15. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

Opto-electronic plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacements in the range of human chest wall displacement during quiet breathing. OEP performance was investigated using a fully programmable chest wall simulator (CWS). The CWS was programmed to move its eight shafts 10 times in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, and (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m³ calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504

  16. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In the first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are analyzed separately for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominance changes seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological models.
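The flow duration curve that the second step pairs with the daily sensitivities is simply the exceedance-probability view of sorted discharges. A minimal sketch follows; the discharge values and the Weibull plotting position are illustrative choices, not taken from the study:

```python
import numpy as np

def flow_duration_curve(discharge):
    """Return (exceedance probability, sorted discharge): the FDC used to
    relate parameter sensitivities to specific discharge magnitudes."""
    q = np.sort(np.asarray(discharge, dtype=float))[::-1]  # descending
    # Weibull plotting position: rank / (n + 1)
    p = np.arange(1, q.size + 1) / (q.size + 1)
    return p, q

# Hypothetical daily discharges (m^3/s); segment boundaries (e.g. the five
# FDC segments in the abstract) would then be chosen on p.
p, q = flow_duration_curve([2.0, 8.5, 1.2, 4.4, 3.1, 0.9, 6.7])
```

Slicing `p` into five ranges and averaging the daily sensitivities within each range reproduces the segment-wise analysis the abstract describes.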

  17. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  18. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1984

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies during 1984 are summarized and compared to data reported earlier for the period 1981-1983. A continual improvement in the completeness of the data is evident. Improvement is also evident in the size of the precisi...

  19. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1983

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1983 are summarized and evaluated. Some comparisons are made with the results previously reported for 1981 and 1982 to determine the indication of any trends. Some trends indicated improvement in the comple...

  20. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1985

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1985 are summarized and evaluated. Some comparisons are made with the results reported for prior years to determine any trends. Some trends indicated continued improvement in the completeness of reporting o...

  1. ASSESSMENT OF THE PRECISION AND ACCURACY OF SAM AND MFC MICROCOSMS EXPOSED TO TOXICANTS

    EPA Science Inventory

The results of 30 mixed flask culture (MFC) and four standardized aquatic microcosm (SAM) experiments were used to describe the precision and accuracy of these two protocols. Coefficients of variation (CV) for chemical measurements (DO, pH) were generally less than 7%, f...

  2. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures are suitable as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The largest contribution to mechanical variation is table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be considered in the treatment planning volume margins.

  3. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System

    PubMed Central

    Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.

    2015-01-01

Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has generally been overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as the control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal to or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show a significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence of extra-articular deformity.

  4. Accuracy, precision and economics: The cost of cutting-edge chemical analyses

    NASA Astrophysics Data System (ADS)

    Hamilton, B.; Hannigan, R.; Jones, C.; Chen, Z.

    2002-12-01

Otolith (fish ear bone) chemistry has proven to be an exceptional tool for the identification of essential fish habitats in marine and freshwater environments. These measurements, which explore variations in the trace element content of otoliths relative to calcium (e.g., Ba/Ca, Mg/Ca), provide data to resolve differences in habitat water chemistry on the watershed to catchment scale. The vast majority of these analyses are performed by laser ablation ICP-MS using a high-resolution instrument. However, few laboratories are equipped with this configuration, and many researchers measure the trace element chemistry of otoliths by whole digestion ICP-MS using lower-resolution quadrupole instruments. This study examines the differences in accuracy and precision between three elemental analysis methods using whole otolith digestion on a low-resolution ICP-MS (ELAN 9000). The first, and most commonly used, technique is external calibration with internal standardization. This technique is the most cost-effective but also has limitations in terms of method detection limits. The second, standard addition, is more costly in terms of time and use of standard materials but offers gains in precision and accuracy. The third, isotope dilution, is the least cost-effective but the most accurate of the elemental analysis techniques. Based on the results of this study, which seeks to identify the technique that is the easiest to implement yet has the precision and accuracy necessary to resolve spatial variations in habitats, we conclude that external calibration with internal standardization can be sufficient to resolve spatial and temporal variations in marine and estuarine environments (+/- 6-8% accuracy). Standard addition increases the accuracy of measurements to 2-5% and is ideal for freshwater studies. While there is a gain in accuracy and precision with isotope dilution, the spatial and temporal resolution is no greater with this technique than with the others.
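External calibration with internal standardization, the first technique compared in this record, amounts to ratioing the analyte signal to an internal-standard signal before fitting a calibration line. A minimal sketch with invented signal counts (not the study's data) follows:

```python
import numpy as np

# Calibration standards: known concentrations plus an internal standard
# spiked at a constant level. All numbers are illustrative assumptions.
std_conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])       # standard concentrations
analyte  = np.array([0.02, 1.05, 2.03, 5.10, 9.95])   # analyte counts (arb.)
internal = np.full(5, 1.00)                           # internal-standard counts

# Ratioing to the internal standard cancels drift common to both signals.
ratio = analyte / internal
slope, intercept = np.polyfit(std_conc, ratio, 1)

def quantify(sample_analyte, sample_internal):
    """Concentration of an unknown from its signal ratio."""
    return ((sample_analyte / sample_internal) - intercept) / slope

c = quantify(3.0, 1.0)
```

Standard addition and isotope dilution replace this external line with per-sample spikes, which is why they cost more standard material and instrument time, as the abstract notes.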

  5. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

Reference scanners are used in dental medicine to verify many procedures. The main interest is verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan the original tooth surface from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arch changes. PMID:24836007

  6. Sensitivity and accuracy of whole effluent toxicity (WET) tests

    SciTech Connect

    Chapman, G.; Lussier, S.; Norberg-King, T.; Poucher, S.; Thursby, G.

    1995-12-31

Direct measurement of effluent toxicity is a critical tool in controlling ambient toxicity. Water quality criteria are lacking for many commonly discharged chemicals, and complicated toxicological interactions in complex effluents are common. Complex effluent toxicity tests should provide limits as protective as National Criteria for single chemicals. One way to evaluate WET test sensitivity and accuracy is to compare WET test results with single chemicals to the National Criteria for those chemicals. A study of eight criteria chemicals (ammonia, aniline, cadmium, carbaryl, copper, lead, methyl parathion, and zinc), two freshwater WET tests (Pimephales and Ceriodaphnia), and four marine WET tests (Arbacia, Champia, Menidia, and Mysidopsis) was conducted to provide this comparison. The most sensitive of the freshwater and marine WET tests with each chemical were generally less protective than the National Final Chronic Value (FCV) concentration by factors ranging from 1.09 to 44. Less-sensitive WET tests with each chemical represented significant underestimation of chronic toxicity, by factors often in the range of 100-10,000. In two instances WET test results (copper and Champia, zinc and Ceriodaphnia) were below the FCV, by factors of 1.85 and 3.4, respectively. It is recognized that Champia is extremely sensitive to copper and that daphnids may not be protected by the current National zinc criterion, thus these results are not surprising. Adequate accuracy for WET tests usually requires that the most sensitive WET-test species be utilized. Although the bias of WET tests to underestimate chronic toxicity is relatively small, it should be considered in the selection of test endpoints and application of WET data.

  7. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using PhotoModeler software. The accuracy of the PhotoModeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  8. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324

  9. Measuring the accuracy and precision of quantitative coronary angiography using a digitally simulated test phantom

    NASA Astrophysics Data System (ADS)

    Morioka, Craig A.; Whiting, James S.; LeFree, Michelle T.

    1998-06-01

Quantitative coronary angiography (QCA) diameter measurements have been used as an endpoint in clinical studies involving therapies to reduce coronary atherosclerosis. The accuracy and precision of the QCA measurement can affect the sample size and conclusions of a clinical study. Measurements using x-ray test phantoms may not reflect the precision and accuracy achieved for actual arteries in clinical digital angiograms because phantoms do not contain complex patient structures. Determining the performance of QCA algorithms under clinical conditions is difficult because: (1) no gold standard test object exists in clinical images, and (2) phantom images do not have any structured background noise. We propose the use of computer-simulated arteries as a replacement for traditional angiographic test phantoms to evaluate QCA algorithm performance.
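A digitally simulated test object of the kind this record proposes can be as simple as a blurred top-hat density profile whose diameter is then recovered by a full-width-at-half-maximum rule. The profile generator and threshold below are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def simulated_vessel_profile(true_diameter_px, blur_sigma, n=200):
    """1-D density profile of a simulated artery: a top-hat of the true
    diameter convolved with a Gaussian modelling x-ray system blur."""
    x = np.arange(n, dtype=float)
    center = n / 2
    profile = (np.abs(x - center) <= true_diameter_px / 2).astype(float)
    # Normalised Gaussian kernel applied by direct convolution.
    k = np.exp(-0.5 * (np.arange(-25, 26) / blur_sigma) ** 2)
    k /= k.sum()
    return np.convolve(profile, k, mode="same")

def fwhm_diameter(profile):
    """Measure diameter as the full width at half maximum, in pixels."""
    half = profile.max() / 2
    above = np.where(profile >= half)[0]
    return above[-1] - above[0] + 1

measured = fwhm_diameter(simulated_vessel_profile(20, blur_sigma=3.0))
```

Because the true diameter is known by construction, the bias and spread of `measured` over many such profiles (optionally with added background texture) characterize a QCA algorithm in a way a physical phantom cannot.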

  10. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
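
The core idea, that small-scale variables need not carry 64 bits, can be illustrated with a quick comparison of storage precisions (a generic sketch, not the authors' Lorenz '96 code):

```python
import numpy as np

# Relative rounding error introduced when a variable is stored in half
# precision (16 bits) instead of double precision (64 bits).
x = np.float64(0.123456789)
x_half = np.float16(x)

rel_err = abs(float(x_half) - float(x)) / float(x)
print(rel_err)  # ~1e-4: half precision keeps only ~3 decimal digits
```

An error of order 10^-4 is negligible for a small-scale atmospheric variable whose value is only constrained to within a few percent by observations and modelling, which is the waste of bits the abstract points to.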

  11. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.

  12. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  13. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the {sup 75}Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  14. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  15. Increasing the precision and accuracy of top-loading balances:  application of experimental design.

    PubMed

    Bzik, T J; Henderson, P B; Hobbs, J P

    1998-01-01

    The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
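
The combined-weighing idea can be sketched as a least-squares problem. The design below is an illustrative 7-object circulant scheme built from an m-sequence, not the paper's modified Plackett-Burman design for eight objects, but it shows the same kind of precision gain without extra weighings:

```python
import numpy as np

# Each row of X is one weighing; a 1 means that object is on the pan.
# Circulant design from an m-sequence: 7 objects, 7 weighings, 4 at a time.
first_row = [1, 1, 1, 0, 1, 0, 0]
X = np.array([np.roll(first_row, k) for k in range(7)])

sigma = 1.0  # balance noise SD, assumed identical for single/combined loads

# Least-squares estimates w_hat = (X'X)^-1 X' y have covariance
# sigma^2 (X'X)^-1; the diagonal gives the per-object variance.
var_combined = sigma**2 * np.diag(np.linalg.inv(X.T @ X))
var_single = sigma**2 * np.ones(7)  # identity design: one object per weighing

print(var_combined)  # 0.4375 for every object, vs 1.0 individually
print(1 - np.sqrt(var_combined.mean() / var_single.mean()))  # ~34% SD gain
```

The gain depends on the design matrix; it holds only under the abstract's stated conditions (combined loads weighed with roughly the same noise as single loads, and a stable scale bias).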

  16. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, focal plane flatness of less than 25 μm (0.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of detector integration into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include: datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced in the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the equipment's accuracy, repeatability, and precision of the measurements.

  17. High precision, high sensitivity distributed displacement and temperature measurements using OFDR-based phase tracking

    NASA Astrophysics Data System (ADS)

    Gifford, Dawn K.; Froggatt, Mark E.; Kreger, Stephen T.

    2011-05-01

    Optical Frequency Domain Reflectometry is used to measure distributed displacement and temperature change with very high sensitivity and precision by measuring the phase change of an optical fiber sensor as a function of distance with high spatial resolution and accuracy. A fiber containing semi-continuous Bragg gratings was used as the sensor. The effective length change, or displacement, in the fiber caused by small temperature changes was measured as a function of distance with a precision of 2.4 nm and a spatial resolution of 1.5 mm. The temperature changes calculated from this displacement were measured with precision of 0.001 °C with an effective sensor gauge length of 12 cm. These results demonstrate that the method employed of continuously tracking the phase change along the length of the fiber sensor enables high resolution distributed measurements that can be used to detect very small displacements, temperature changes, or strains.

  18. Accuracy and precision of quantitative 31P-MRS measurements of human skeletal muscle mitochondrial function.

    PubMed

    Layec, Gwenael; Gifford, Jayson R; Trinity, Joel D; Hart, Corey R; Garten, Ryan S; Park, Song Y; Le Fur, Yann; Jeong, Eun-Kee; Richardson, Russell S

    2016-08-01

    Although theoretically sound, the accuracy and precision of (31)P-magnetic resonance spectroscopy ((31)P-MRS) approaches to quantitatively estimate mitochondrial capacity are not well documented. Therefore, employing four differing models of respiratory control [linear, kinetic, and multipoint adenosine diphosphate (ADP) and phosphorylation potential], this study sought to determine the accuracy and precision of (31)P-MRS assessments of peak mitochondrial adenosine-triphosphate (ATP) synthesis rate utilizing directly measured peak respiration (State 3) in permeabilized skeletal muscle fibers. In 23 subjects of different fitness levels, (31)P-MRS during a 24-s maximal isometric knee extension and high-resolution respirometry in muscle fibers from the vastus lateralis was performed. Although significantly correlated with State 3 respiration (r = 0.72), both the linear (45 ± 13 mM/min) and phosphorylation potential (47 ± 16 mM/min) models grossly overestimated the calculated in vitro peak ATP synthesis rate (P < 0.05). Of the ADP models, the kinetic model was well correlated with State 3 respiration (r = 0.72, P < 0.05), but moderately overestimated ATP synthesis rate (P < 0.05), while the multipoint model, although being somewhat less well correlated with State 3 respiration (r = 0.55, P < 0.05), most accurately reflected peak ATP synthesis rate. Of note, the PCr recovery time constant (τ), a qualitative index of mitochondrial capacity, exhibited the strongest correlation with State 3 respiration (r = 0.80, P < 0.05). Therefore, this study reveals that each of the (31)P-MRS data analyses, including PCr τ, exhibits precision in terms of mitochondrial capacity. As only the multipoint ADP model did not overestimate the peak skeletal muscle mitochondrial ATP synthesis rate, the multipoint ADP model is the only quantitative approach to exhibit both accuracy and precision. PMID:27302751
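
The PCr recovery time constant τ mentioned in the abstract is conventionally obtained by fitting a mono-exponential to post-exercise PCr data. A noise-free synthetic sketch (all values are illustrative, not the study's measurements; real spectra require nonlinear fitting):

```python
import numpy as np

tau_true = 35.0                  # s, a plausible value for healthy muscle
t = np.arange(0.0, 180.0, 6.0)   # one spectrum every 6 s after exercise
pcr_end, delta = 20.0, 15.0      # end-exercise PCr and depletion (mM)
pcr = pcr_end + delta * (1.0 - np.exp(-t / tau_true))  # mono-exponential recovery

# Linearize: log(1 - recovered fraction) = -t/tau, then least-squares slope.
frac = (pcr - pcr_end) / delta
mask = frac < 0.999                     # avoid log(0) at full recovery
slope = np.polyfit(t[mask], np.log(1.0 - frac[mask]), 1)[0]
tau_est = -1.0 / slope
print(round(tau_est, 1))  # 35.0
```

With noise-free data the log-linearization recovers τ exactly; with real, noisy spectra a direct nonlinear fit is preferred because the log transform distorts the error structure.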

  19. Sensitivity and precision of whole effluent toxicity (WET) tests

    SciTech Connect

    Denton, D.; Chapman, G.; Fulk, F.

    1995-12-31

    The US Environmental Protection Agency's test method manuals recommend that reference toxicant tests be performed to determine test sensitivity and precision within a test and among tests over time. The levels of intra- and interlaboratory precision with two reference toxicants (zinc and copper) were examined with six marine test species (Macrocystis pyrifera, Haliotis rufescens, Strongylocentrotus purpuratus, Dendraster excentricus, Mytilus spp., and Menidia beryllina) and with two freshwater test species (Ceriodaphnia dubia and Pimephales promelas). Data from the EPA's Reference Toxicant Database were used in the analysis. Data were used if the tests met the specified test acceptability criteria. Test sensitivity was examined by calculation of the minimum significant difference (MSD) and will be discussed. Results were compared to the national final chronic value (FCV) for copper and zinc. Greater than 99% of the EC25 values were above the FCV for copper with Strongylocentrotus purpuratus, Dendraster excentricus and Macrocystis pyrifera. However, greater than 99% of the EC25 values were below the FCV for zinc with Haliotis rufescens.

  20. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter in assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when it is properly applied, but a common problem is the camera lens orientation while taking a picture. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that using 2 rulers placed in parallel below and above the wound for calibration increases the precision of area measurement on average 3.8-fold in comparison with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry with 2 rulers than with the Visitrak device, the Silhouette Mobile device, or the AreaMe software-based method. Conclusion Calibration with 2 rulers in digital planimetry remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
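
One way to picture the two-ruler idea (assumed geometry; all numbers below are hypothetical, not the paper's calibration procedure in detail): a ruler above and a ruler below the wound give two pixel scales, and averaging them approximates the scale at the wound plane, compensating first-order perspective tilt.

```python
# Hypothetical pixel scales measured from the two rulers in one photo.
px_per_cm_top = 41.0      # from the ruler placed above the wound
px_per_cm_bottom = 37.0   # from the ruler placed below the wound
scale = (px_per_cm_top + px_per_cm_bottom) / 2.0   # px/cm at the wound plane

wound_px_area = 6200.0                # traced wound outline area in pixels
wound_cm2 = wound_px_area / scale**2  # convert pixel area to cm^2
print(round(wound_cm2, 2))            # 4.08
```

Calibrating from a single tilted ruler would use either 41 or 37 px/cm alone, biasing the computed area by roughly 10% in this example.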

  1. Accuracy of 3D white light scanning of abutment teeth impressions: evaluation of trueness and precision

    PubMed Central

    Jeon, Jin-Hun; Kim, Hae-Young; Kim, Ji-Hwan

    2014-01-01

    PURPOSE This study aimed to evaluate the accuracy of digitizing dental impressions of abutment teeth using a white light scanner and to compare the findings among teeth types. MATERIALS AND METHODS To assess precision, impressions of the canine, premolar, and molar prepared to receive all-ceramic crowns were repeatedly scanned to obtain five sets of 3-D data (STL files). Point clouds were compared and error sizes were measured (n=10 per type). Next, to evaluate trueness, impressions of teeth were rotated by 10°-20° and scanned. The obtained data were compared with the first set of data for precision assessment, and the error sizes were measured (n=5 per type). The Kruskal-Wallis test was performed to evaluate precision and trueness among three teeth types, and post-hoc comparisons were performed using the Mann-Whitney U test with Bonferroni correction (α=.05). RESULTS Precision discrepancies for the canine, premolar, and molar were 3.7 µm, 3.2 µm, and 7.3 µm, respectively, indicating the poorest precision for the molar (P<.001). Trueness discrepancies for teeth types were 6.2 µm, 11.2 µm, and 21.8 µm, respectively, indicating the poorest trueness for the molar (P=.007). CONCLUSION With respect to accuracy, the molar showed the largest discrepancies compared with the canine and premolar. Digitizing of dental impressions of abutment teeth using a white light scanner was assessed to be a highly accurate method and provided discrepancy values in a clinically acceptable range. Further study is needed to improve digitizing performance of white light scanning in the axial wall. PMID:25551007

  2. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  3. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method, and it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. It is important to consider accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; too often, and with consequence, this consideration is an afterthought that occurs during the data analysis process.
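
The accuracy-versus-precision contrast can be made concrete by decomposing estimator error into bias and spread. A generic simulation sketch (not an N-mixture model; the two estimators and their error characteristics are hypothetical, standing in for the two sampling scenarios):

```python
import numpy as np

rng = np.random.default_rng(1)
true_n = 1000  # true abundance in the study area

# Estimator A: accurate (unbiased) but imprecise; Estimator B: precise but biased.
est_a = rng.normal(true_n, 30, 10000)        # e.g. many small sample units
est_b = rng.normal(0.9 * true_n, 10, 10000)  # e.g. few large sample units

for name, est in [("A", est_a), ("B", est_b)]:
    bias = est.mean() - true_n       # accuracy: closeness to truth
    sd = est.std()                   # precision: spread of estimates
    rmse = np.sqrt(bias**2 + sd**2)  # overall error (bias-variance decomposition)
    print(name, round(bias), round(sd), round(rmse))
```

The RMSE line shows why neither property alone suffices: a tightly clustered but biased estimator can still carry a large overall error, which is the tradeoff the study quantifies for sample-unit size.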

  4. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  5. Statistical precision and sensitivity of measures of dynamic gait stability.

    PubMed

    Bruijn, Sjoerd M; van Dieën, Jaap H; Meijer, Onno G; Beek, Peter J

    2009-04-15

    Recently, two methods for quantifying a system's dynamic stability have been applied to human locomotion: local stability (quantified by finite time maximum Lyapunov exponents, lambda(S-stride) and lambda(L-stride)) and orbital stability (quantified as maximum Floquet multipliers, MaxFm). Thus far, however, it has remained unclear how many data points are required to obtain precise estimates of these measures during walking, and to what extent these estimates are sensitive to changes in walking behaviour. To resolve these issues, we collected long data series of healthy subjects (n=9) walking on a treadmill in three conditions (normal walking at 0.83 m/s (3 km/h) and 1.38 m/s (5 km/h), and walking at 1.38 m/s (5 km/h) while performing a Stroop dual task). Data series from 0.83 and 1.38 m/s trials were submitted to a bootstrap procedure and paired t-tests for samples of different data series lengths were performed between 0.83 and 1.38 m/s and between 1.38 m/s with and without Stroop task. Longer data series led to more precise estimates for lambda(S-stride), lambda(L-stride), and MaxFm. All variables showed an effect of data series length. Thus, when estimating and comparing these variables across conditions, data series covering an equal number of strides should be analysed. lambda(S-stride), lambda(L-stride), and MaxFm were sensitive to the change in walking speed while only lambda(S-stride) and MaxFm were sensitive enough to capture the modulations of walking induced by the Stroop task. Still, these modulations could only be detected when using a substantial number of strides (>150). PMID:19135478
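
The dependence of estimate precision on data-series length can be illustrated with a generic bootstrap over per-stride values (an arbitrary statistic on synthetic data, not an actual Lyapunov exponent or Floquet multiplier computation):

```python
import numpy as np

rng = np.random.default_rng(2)
series = rng.normal(1.5, 0.4, 2000)  # hypothetical per-stride values

def bootstrap_sd(n_strides, n_boot=2000):
    """SD of the statistic across bootstrap resamples of n_strides strides."""
    stats = [rng.choice(series, n_strides).mean() for _ in range(n_boot)]
    return float(np.std(stats))

sd_50, sd_300 = bootstrap_sd(50), bootstrap_sd(300)
print(sd_50, sd_300)  # longer series -> smaller SD -> more precise estimate
```

This is the logic behind the paper's recommendation: because precision depends on series length, comparisons across conditions should use data series covering an equal number of strides.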

  6. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  7. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT™ global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media
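
The finding that relative area error decreases as area increases follows from propagating a fixed horizontal GPS error through the polygon (shoelace) area formula. A sketch with assumed error magnitudes (the 0.5 m corner error and square habitats are illustrative, not the study's data):

```python
import numpy as np

def shoelace(xy):
    """Polygon area from vertex coordinates via the shoelace formula."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

rng = np.random.default_rng(3)

def mean_rel_error(side, gps_sd=0.5, trials=2000):
    """Mean relative area error for a square habitat of the given side (m),
    with the stated per-coordinate horizontal GPS error (m) at each corner."""
    square = np.array([[0, 0], [side, 0], [side, side], [0, side]], float)
    errs = []
    for _ in range(trials):
        noisy = square + rng.normal(0, gps_sd, square.shape)
        errs.append(abs(shoelace(noisy) - side**2) / side**2)
    return float(np.mean(errs))

e_small, e_large = mean_rel_error(2), mean_rel_error(20)
print(round(e_small, 3), round(e_large, 3))  # relative error shrinks with area
```

Because the positional noise is roughly constant while the area grows quadratically with feature size, small habitat units suffer the largest percentage errors, consistent with the 0.1% to 110.1% range reported.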

  8. In silico instrumental response correction improves precision of label-free proteomics and accuracy of proteomics-based predictive models.

    PubMed

    Lyutvinskiy, Yaroslav; Yang, Hongqian; Rutishauser, Dorothea; Zubarev, Roman A

    2013-08-01

    In the analysis of proteome changes arising during the early stages of a biological process (e.g. disease or drug treatment) or from the indirect influence of an important factor, the biological variations of interest are often small (∼10%). The corresponding requirements for the precision of proteomics analysis are high, and this often poses a challenge, especially when employing label-free quantification. One of the main contributors to the inaccuracy of label-free proteomics experiments is the variability of the instrumental response during LC-MS/MS runs. Such variability might include fluctuations in the electrospray current, transmission efficiency from the air-vacuum interface to the detector, and detection sensitivity. We have developed an in silico post-processing method of reducing these variations, and have thus significantly improved the precision of label-free proteomics analysis. For abundant blood plasma proteins, a coefficient of variation of approximately 1% was achieved, which allowed for sex differentiation in pooled samples and ≈90% accurate differentiation of individual samples by means of a single LC-MS/MS analysis. This method improves the precision of measurements and increases the accuracy of predictive models based on the measurements. The post-acquisition nature of the correction technique and its generality promise its widespread application in LC-MS/MS-based methods such as proteomics and metabolomics. PMID:23589346
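
A minimal sketch of the kind of post-acquisition response correction described (not the authors' actual algorithm): rescale each run by its median intensity ratio to a reference run, which removes a common multiplicative drift in instrumental response.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic label-free data: 500 peptide intensities, constant across runs,
# multiplied by a per-run instrumental response drift plus 1% measurement noise.
true_abund = rng.lognormal(10, 1, 500)
drift = [1.00, 0.85, 1.12]  # hypothetical run-to-run response factors
runs = np.array([true_abund * d * rng.normal(1, 0.01, 500) for d in drift])

# Correction: divide each run by its median intensity ratio to the first run.
ref = runs[0]
corrected = np.array([run / np.median(run / ref) for run in runs])

cv_before = np.std(runs, axis=0) / np.mean(runs, axis=0)
cv_after = np.std(corrected, axis=0) / np.mean(corrected, axis=0)
print(np.median(cv_before), np.median(cv_after))  # CV drops after correction
```

In this toy case the per-peptide coefficient of variation falls from the ~10% level set by the drift to the ~1% level set by the residual noise, mirroring the order of improvement the abstract reports for abundant plasma proteins.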

  9. THE PRECISION AND ACCURACY OF EARLY EPOCH OF REIONIZATION FOREGROUND MODELS: COMPARING MWA AND PAPER 32-ANTENNA SOURCE CATALOGS

    SciTech Connect

    Jacobs, Daniel C.; Bowman, Judd; Aguirre, James E.

    2013-05-20

    As observations of the Epoch of Reionization (EoR) in redshifted 21 cm emission begin, we assess the accuracy of the early catalog results from the Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison Widefield Array (MWA). The MWA EoR approach derives much of its sensitivity from subtracting foregrounds to <1% precision, while the PAPER approach relies on the stability and symmetry of the primary beam. Both require an accurate flux calibration to set the amplitude of the measured power spectrum. The two instruments are very similar in resolution, sensitivity, sky coverage, and spectral range and have produced catalogs from nearly contemporaneous data. We use a Bayesian Markov Chain Monte Carlo fitting method to estimate that the two instruments are on the same flux scale to within 20% and find that the images are mostly in good agreement. We then investigate the source of the errors by comparing two overlapping MWA facets, where we find that the differences stem primarily from an inaccurate model of the primary beam, but also from correlated errors in bright sources introduced by CLEAN. We conclude with suggestions for mitigating and better characterizing these effects.

  10. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised an interest in precise and accurate pH measurements in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. However, until now the method has been tested only in oceanic systems with a relatively stable and high salinity and a small pH range. This paper describes the first application of such a pH measurement system under the conditions of the Baltic Sea, which is characterized by wide salinity and pH ranges. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges of the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes in the Baltic Sea. Given these encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, the spectrophotometric method is also recommended for pH monitoring and trend detection in the Baltic Sea.
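The spectrophotometric principle can be sketched as below. R is the absorbance ratio of m-cresol purple at its two peak wavelengths, and E1-E3 are the dye's molar absorptivity ratios; the constants are the commonly cited Clayton and Byrne (1993) values, and the pK used in the example is an arbitrary placeholder, so treat all numbers as illustrative:

```python
import math

# Molar absorptivity ratios for m-cresol purple (values commonly cited
# from Clayton & Byrne, 1993; treat as illustrative here)
E1, E2, E3 = 0.00691, 2.2220, 0.1331

def ph_from_ratio(r, pk):
    """pH on the 'total' scale from the m-cresol purple absorbance
    ratio r = A578/A434, given the indicator pK appropriate for the
    sample's salinity and temperature (supplied by the caller)."""
    return pk + math.log10((r - E1) / (E2 - E3 * r))

def ratio_from_ph(ph, pk):
    """Algebraic inverse of ph_from_ratio, used for consistency checks."""
    x = 10.0 ** (ph - pk)
    return (E2 * x + E1) / (1.0 + E3 * x)
```

The two functions are exact inverses, which makes a simple internal consistency check possible regardless of the constants chosen.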

  11. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented to provide a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. The result is better, faster, cheaper characterization of multiphase systems. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and found to exceed those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.
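Backing a void fraction out of pressure-drop data can be sketched as follows. The abstract does not give the authors' novel form of the Ergun equation, so this uses the standard form, and the fluid/particle values are illustrative:

```python
def ergun_pressure_gradient(eps, u, dp, mu, rho):
    """Pressure gradient (Pa/m) across a packed bed from the standard
    Ergun equation: void fraction eps, superficial velocity u (m/s),
    particle diameter dp (m), fluid viscosity mu (Pa s), density rho
    (kg/m^3)."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * dp ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * dp)
    return viscous + inertial

def void_fraction_from_dp(dpdl, u, dp, mu, rho):
    """Invert the Ergun equation for eps by bisection; the gradient is
    monotonically decreasing in eps on (0, 1)."""
    lo, hi = 0.05, 0.99
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ergun_pressure_gradient(mid, u, dp, mu, rho) > dpdl:
            lo = mid    # gradient too high: true eps is larger
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip with illustrative air/particle values
g = ergun_pressure_gradient(0.45, 0.05, 200e-6, 1.8e-5, 1.2)
eps = void_fraction_from_dp(g, 0.05, 200e-6, 1.8e-5, 1.2)
```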

  12. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min-1 (100 g)-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min-1 (100 g)-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min-1 (100 g)-1 and R2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min-1 (100 g)-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
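The Bland-Altman comparison used above can be sketched as follows. Purely for illustration, the three tissue-mean HABF pairs quoted in the abstract are treated as paired observations; the study's reported -0.13 mean difference comes from the full per-animal data, so these numbers will not reproduce it:

```python
import statistics

def bland_altman(method_a, method_b):
    """Mean difference and 95% limits of agreement between two paired
    measurement methods (Bland-Altman statistics)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    md = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return md, md - 1.96 * sd, md + 1.96 * sd

# Tissue-mean HABF values quoted in the abstract, ml/min/(100 g):
# normal tissue, tumor core, tumor rim
dce_ct      = [50.0, 37.1, 99.8]
microsphere = [51.9, 40.7, 99.7]
md, lo, hi = bland_altman(dce_ct, microsphere)
```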

  13. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. 
As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  14. Effects of shortened acquisition time on accuracy and precision of quantitative estimates of organ activity1

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Purpose: Quantitative estimation of in vivo organ uptake is an essential part of treatment planning for targeted radionuclide therapy. This usually involves the use of planar or SPECT scans with acquisition times chosen more on the basis of image quality considerations than on the minimum needed for precise quantification. In previous simulation studies at clinical count levels (185 MBq 111In), the authors observed larger variations in accuracy of organ activity estimates resulting from anatomical and uptake differences than from statistical noise. This suggests that it is possible to reduce the acquisition time without substantially increasing the variation in accuracy. Methods: To test this hypothesis, the authors compared the accuracy and variation in accuracy of organ activity estimates obtained from planar and SPECT scans at various count levels. A simulated phantom population with realistic variations in anatomy and biodistribution was used to model variability in a patient population. Planar and SPECT projections were simulated using previously validated Monte Carlo simulation tools. The authors simulated the projections at count levels approximately corresponding to 1.5–30 min of total acquisition time. The projections were processed using previously described quantitative SPECT (QSPECT) and planar (QPlanar) methods. The QSPECT method was based on the OS-EM algorithm with compensations for attenuation, scatter, and collimator-detector response. The QPlanar method is based on the ML-EM algorithm using the same model-based compensation for all the image degrading effects as the QSPECT method. The volumes of interest (VOIs) were defined based on the true organ configuration in the phantoms. The errors in organ activity estimates from different count levels and processing methods were compared in terms of mean and standard deviation over the simulated phantom population. Results: There was little degradation in quantitative reliability when the acquisition time was
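The trade-off between acquisition time and statistical noise underlying the study follows Poisson counting statistics; a minimal sketch, with an illustrative count rate:

```python
import math

def relative_poisson_noise(count_rate_cps, acquisition_s):
    """Fractional statistical uncertainty of a Poisson count:
    sigma/N = 1/sqrt(N), with N = rate * time."""
    n = count_rate_cps * acquisition_s
    return 1.0 / math.sqrt(n)

# Shortening a 30 min acquisition to 1.5 min (at an illustrative
# 1000 counts/s) raises the relative noise by sqrt(20), about 4.5x
noise_short = relative_poisson_noise(1000.0, 1.5 * 60)
noise_long = relative_poisson_noise(1000.0, 30 * 60)
```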

  15. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The
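A minimal sketch of a background correction of this general kind, modelling the negative m/z 47 background as proportional to the m/z 44 beam. Note the abstract's actual procedure scales the contributions to the m/z 49 beams, and the slope and intensities here are invented:

```python
def correct_m47(i47_measured, i44, slope):
    """Remove a negative background from the measured m/z 47 intensity,
    modelled here as `slope` (negative) times the m/z 44 beam intensity.
    The slope would be determined empirically for the instrument."""
    return i47_measured - slope * i44

# Synthetic check: inject a known negative background and recover it
TRUE_I47 = 1.0          # arbitrary units
SLOPE = -2.0e-4         # invented background per unit m/z 44 beam
I44 = 50.0
measured = TRUE_I47 + SLOPE * I44
```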

  16. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.

  17. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  18. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGESBeta

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.
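The sample-standard bracketing used in these RIMS measurements can be sketched as follows; the ratio values are invented for illustration:

```python
def bracketed_ratio(sample_meas, std_before, std_after, std_true):
    """Sample-standard bracketing: divide the measured sample ratio by
    the bias factor inferred from standard runs before and after it."""
    bias = 0.5 * (std_before + std_after) / std_true
    return sample_meas / bias

# Invented 240Pu/239Pu values: standard certified at 0.240, measured
# slightly high both before and after the sample run
corrected = bracketed_ratio(0.305, 0.246, 0.242, 0.240)
```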

  19. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or severely broken and missing teeth.

  20. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

    This study aimed to create an evidence base for detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) for five published algorithms for the detection of hoof-on and hoof-off using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) Fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the Centre of Mass and; (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot we derived a combination of method A for hoof-on and method E for hoof-off resulting in a bias of -6.2% of stance in the front limbs and method B for the hind limbs with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot. PMID:24529754
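A velocity-threshold rule in the spirit of the horizontal-hoof-velocity algorithms (A, D) compared above can be sketched as follows; the threshold and trajectory are illustrative, not the published parameter values:

```python
def detect_hoof_on(times, hoof_x, v_thresh=0.1):
    """Return the first time at which the hoof's horizontal speed drops
    below v_thresh (m/s): a simple stance-onset rule in the spirit of
    the velocity-based algorithms compared in the study."""
    for i in range(1, len(times)):
        v = abs(hoof_x[i] - hoof_x[i - 1]) / (times[i] - times[i - 1])
        if v < v_thresh:
            return times[i]
    return None

# Illustrative trajectory: the hoof translates quickly, then stops
times = [0.0, 0.01, 0.02, 0.03, 0.04]
x = [0.0, 0.02, 0.04, 0.0405, 0.0406]
hoof_on = detect_hoof_on(times, x)
```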

  1. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is a classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option to determine the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
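Fitting a curved (exponential) background to multiple off-peak points, as the MPB method does, can be sketched as a log-linear least-squares fit; outlier exclusion and interference checks are omitted, and the data are synthetic:

```python
import math

def fit_exponential_background(positions, counts):
    """Least-squares fit of counts = a * exp(b * position) to off-peak
    background measurements, via log-linear regression."""
    n = len(positions)
    ly = [math.log(c) for c in counts]
    sx, sy = sum(positions), sum(ly)
    sxx = sum(x * x for x in positions)
    sxy = sum(x * y for x, y in zip(positions, ly))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = math.exp((sy - b * sx) / n)
    return a, b

# Synthetic off-peak points drawn from a known exponential background
xs = [0.0, 1.0, 2.0, 3.0]
ys = [100.0 * math.exp(-0.5 * x) for x in xs]
a, b = fit_exponential_background(xs, ys)
```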

  2. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
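One common efficiency-corrected relative quantification model of the kind analysed is the Pfaffl-type ratio; whether this exact form is among the models evaluated in the paper is an assumption, and the efficiencies and Ct shifts below are illustrative:

```python
def relative_expression(e_target, dct_target, e_ref, dct_ref):
    """Efficiency-corrected relative quantification (Pfaffl-type):
    ratio = E_target**dCt_target / E_ref**dCt_ref, where
    dCt = Ct(calibrator) - Ct(sample)."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# Perfect doubling (E = 2.0) for both genes, illustrative Ct shifts
ratio = relative_expression(2.0, 3.0, 2.0, 1.0)
```

Error propagation enters through the uncertainties of the measured efficiencies and Ct values, which is why sample-specific efficiencies inflate the random error as the abstract describes.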

  3. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point
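The registration-error metric described above, i.e. the disagreement between locations of the same known points in different scans, can be sketched as an RMS over shared control points; the coordinates below are hypothetical:

```python
import math

def registration_rms(points_a, points_b):
    """RMS 3-D distance between the same control points as located in
    two independently registered scans: a simple internal-consistency
    metric for a mosaiced point cloud."""
    sq = [(ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
          for (ax, ay, az), (bx, by, bz) in zip(points_a, points_b)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical control points (metres) from two scan registrations
scan1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
scan2 = [(0.0, 0.0, 0.03), (1.0, 0.04, 0.0)]
rms = registration_rms(scan1, scan2)
```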

  4. Indium-111 white blood cell scans: Sensitivity, specificity, accuracy, and normal patterns of distribution

    SciTech Connect

    Guze, B.H.; Webber, M.M.; Hawkins, R.A.; Sinha, K.

    1990-01-01

    The UCLA Hospital experience with indium-111 labeled white blood cells was reviewed. There were a total of 345 consecutive cases covering a broad range of clinical indications. The overall sensitivity of the method was 79%, specificity was 62%, and accuracy was 73%. The sensitivity for suspected osteomyelitis cases was 84%, with a specificity of 65% and an accuracy of 75%. For other cases sensitivity was 77%, specificity was 60%, and accuracy was 72%. Furthermore, patterns of normal distribution were reviewed.
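The three summary statistics reported above come from a standard 2x2 table. The counts below are hypothetical, chosen so that sensitivity and specificity match the reported 79% and 62% (accuracy then depends on the actual case mix, so it will not match 73%):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from a 2x2 table of
    true/false positives and negatives."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sens, spec, acc

# Hypothetical counts, not the study's actual 2x2 table
sens, spec, acc = diagnostic_metrics(tp=79, fp=38, tn=62, fn=21)
```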

  5. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering.

    PubMed

    Kelley, Shana O; Mirkin, Chad A; Walt, David R; Ismagilov, Rustem F; Toner, Mehmet; Sargent, Edward H

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  6. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    PubMed Central

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2015-01-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  7. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    NASA Astrophysics Data System (ADS)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  8. Precision, accuracy, and application of diver-towed underwater GPS receivers.

    PubMed

    Schories, Dirk; Niedzwiedz, Gerd

    2012-04-01

    Diver-towed global positioning system (GPS) handhelds have been used for a few years in underwater monitoring studies. We modeled the accuracy of this method using the software KABKURR, originally developed by the University of Rostock for fishing and marine engineering. Additionally, three field experiments were conducted to estimate the precision of the method and apply it in the field: (1) an experiment of underwater transects from 5 to 35 m in the Southern Chile fjord region, (2) a transect from 5 to 30 m under extreme climatic conditions in the Antarctic, and (3) an underwater tracking experiment at Lake Ranco, Southern Chile. The coiled cable length in relation to water depth is the main error source besides the signal quality of the GPS under calm weather conditions. The forces used in the model resulted in a displacement of 2.3 m at a depth of 5 m, 3.2 m at 10 m, 4.6 m at 20 m, 5.5 m at 30 m, and 6.8 m at 40 m, when the cable was only 0.5 m longer than the water depth. The GPS buoy requires good buoyancy in order to keep its position at the water surface while the diver is trying to minimize any additional cable extension error. The diver has to apply a tensile force to shorten the cable length at the lower cable end. Repeated diving along transect lines from 5 to 35 m resulted in only small deviations independent of water depth, indicating the precision of the method for monitoring studies. Routing of given reference points with a Garmin 76CSx handheld placed in an underwater housing resulted in mean deviations of less than 6 m at a water depth of 10 m. Thus, we can confirm that diver-towed GPS handhelds give promising results when used for underwater research in shallow water and open a wide field of applicability, although submeter accuracy is not possible due to the different error sources. PMID:21614620
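The cable-length error source has a simple geometric component that can be sketched as below. The KABKURR model adds drag and current forces, so this is only the straight-cable upper bound, not the paper's model:

```python
import math

def max_horizontal_offset(cable_len, depth):
    """Upper bound on the diver's horizontal displacement from the
    surface buoy, assuming a straight, taut cable of length cable_len
    (m) at the given depth (m)."""
    return math.sqrt(cable_len ** 2 - depth ** 2)

# Cable 0.5 m longer than the water depth, as in the abstract
offset_5m = max_horizontal_offset(5.5, 5.0)
```

At 5 m depth this bound is about 2.29 m, close to the 2.3 m the force model reports; at 40 m the geometric bound (about 6.3 m) falls below the modelled 6.8 m, suggesting the force terms dominate at depth.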

  9. Welcome detailed data, but with a grain of salt: accuracy, precision, uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Di Baldassarre, Giuliano; Todini, Ezio

    2013-04-01

    New survey techniques are providing a huge amount of highly detailed and accurate data that can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to provide reliable flood risk mapping and predictions. We think that these data should form the basis of hydraulic modelling whenever they are available. However, high expectations regarding these datasets should be tempered, as several important issues must be considered. These include: the large number of uncertainty sources in model structure and available data; the difficulty of evaluating model results, given the scarcity of observed data; computational efficiency; and the false confidence that can be conveyed by high-resolution results, since the accuracy of results is not necessarily increased by higher precision. We briefly discuss these issues and existing approaches for managing highly detailed data. In our opinion, methods based on sub-grid and roughness-upscaling treatments are in many instances an appropriate way to maintain consistency with the uncertainty inherent in the model structure and in the data available for model building and evaluation.

  10. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E. )

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had a 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
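For readers unfamiliar with the algorithm under study, the classic ML-EM multiplicative update (Shepp & Vardi) can be sketched on a toy emission system. The tiny dense matrix here merely stands in for a PET projector; all names and sizes are illustrative, not the authors' simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 4))                     # toy system matrix: 8 detector bins, 4 pixels
x_true = np.array([1.0, 4.0, 2.0, 0.5])    # "true" activity image
y = rng.poisson(A @ x_true * 50) / 50.0    # noisy sinogram (Poisson counting noise)

x = np.ones(4)                             # uniform initial image
sens = A.sum(axis=0)                       # sensitivity image A^T 1
for _ in range(500):
    # ML-EM update: x <- x/sens * A^T ( y / A x ); preserves positivity,
    # monotonically increases the Poisson log-likelihood
    x *= (A.T @ (y / (A @ x))) / sens

print(x)
```

The update never produces negative pixels, which is one reason ML-EM behaves so differently from FBP in low-count regions.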

  11. Modeling precision and accuracy of a LWIR microgrid array imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Boger, James K.; Tyo, J. Scott; Ratliff, Bradley M.; Fetrow, Matthew P.; Black, Wiley T.; Kumar, Rakesh

    2005-08-01

    Long-wave infrared (LWIR) imaging is a prominent and useful technique for remote sensing applications. Moreover, polarization imaging has been shown to provide additional information about the imaged scene. However, polarization estimation requires that multiple measurements be made of each observed scene point under optically different conditions. This challenging measurement strategy makes the polarization estimates prone to error, and the sources of this error differ depending on the type of measurement scheme used. In this paper, we examine one particular measurement scheme, namely a simultaneous multiple-measurement imaging polarimeter (SIP) using a microgrid polarizer array. The imager is composed of a microgrid polarizer masking a LWIR HgCdTe focal plane array (operating at 8.3-9.3 μm) and is able to make simultaneous modulated scene measurements. We present an analytical model used to predict the performance of the system and to help interpret real results. The model is radiometrically accurate and accounts for the temperature of the camera system optics, spatial nonuniformity and drift, optical resolution, and other noise sources. The model is then validated in simulation against laboratory measurements, and the precision and accuracy of the SIP instrument are studied.
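The microgrid measurement scheme can be illustrated in its idealized form: a 2×2 superpixel carries micropolarizers at 0°, 45°, 90° and 135°, each detecting I(θ) = ½(S0 + S1 cos 2θ + S2 sin 2θ), so the linear Stokes parameters follow from sums and differences of neighbouring pixels. This is a noise-free textbook sketch, not the authors' radiometric model:

```python
import math

def microgrid_intensities(S0, S1, S2):
    """Ideal detected intensities behind linear polarizers at 0, 45, 90, 135 deg."""
    return {th: 0.5 * (S0 + S1 * math.cos(2 * math.radians(th))
                          + S2 * math.sin(2 * math.radians(th)))
            for th in (0, 45, 90, 135)}

def estimate_stokes(I):
    """Invert the 2x2 superpixel: S0 from an opposite pair, S1/S2 from differences."""
    S0 = I[0] + I[90]            # equals I[45] + I[135] in the ideal case
    S1 = I[0] - I[90]
    S2 = I[45] - I[135]
    return S0, S1, S2

I = microgrid_intensities(2.0, 0.6, -0.3)
print(estimate_stokes(I))    # recovers (S0, S1, S2) up to float rounding
```

In a real microgrid the four pixels view slightly different scene points, which is exactly the spatial-nonuniformity error source the paper's model has to capture.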

  12. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized. PMID:25868451

  13. Evaluation of Precise Point Positioning accuracy under large total electron content variations in equatorial latitudes

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bilbao, I.; Moreno Monge, B.; Rodríguez-Caderot, G.; Herraiz, M.; Radicella, S. M.

    2015-01-01

    The ionosphere is one of the largest contributors to errors in GNSS positioning. Although in Precise Point Positioning (PPP) the ionospheric delay is corrected to first order through the 'iono-free combination', significant errors may still be observed when large electron density gradients are present. To confirm this phenomenon, the temporal behavior of intense fluctuations of total electron content (TEC) and PPP altitude accuracy at equatorial latitudes is analyzed over four years of differing solar activity. For this purpose, equatorial plasma irregularities are identified with periods of a high rate of change of TEC (ROT). The largest ROT values are observed from 19:00 to 01:00 LT, especially around magnetic equinoxes, although some differences exist between the stations depending on their location; the highest ROT values are observed in the American and African regions. In general, large ROT events are accompanied by frequent satellite signal losses and an increase in the PPP altitude error during the years 2001, 2004 and 2011. A significant increase in the PPP altitude error RMS is observed in epochs of high ROT with respect to epochs of low ROT in those years, reaching up to 0.26 m in the 19:00-01:00 LT period.
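The ROT index used to flag irregularities is simply the first difference of the TEC series, conventionally expressed in TECU/min. A minimal sketch with invented numbers (detection thresholds vary between studies and are not taken from this paper):

```python
def rate_of_tec(tec_tecu, dt_s=30.0):
    """Rate of change of TEC (ROT) in TECU/min from a TEC series
    sampled every dt_s seconds."""
    scale = 60.0 / dt_s
    return [(b - a) * scale for a, b in zip(tec_tecu, tec_tecu[1:])]

tec = [25.0, 25.2, 26.1, 24.8, 24.9]     # TECU, 30 s sampling (made-up values)
rot = rate_of_tec(tec)
print([round(r, 1) for r in rot])        # [0.4, 1.8, -2.6, 0.2]
```

Large |ROT| excursions like the −2.6 TECU/min step above are the kind of event the study associates with signal losses and degraded PPP altitude.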

  14. Sub-nm accuracy metrology for ultra-precise reflective X-ray optics

    NASA Astrophysics Data System (ADS)

    Siewert, F.; Buchheim, J.; Zeschke, T.; Brenner, G.; Kapitzki, S.; Tiedtke, K.

    2011-04-01

    The transport and monochromatization of synchrotron light from a highly brilliant, laser-like source to the experimental station without significant loss of brilliance and coherence is a challenging task in X-ray optics and requires optical elements of the utmost accuracy. These are wave-front-preserving plane mirrors with lengths of up to 1 m, characterized by residual slope errors in the range of 0.05 μrad (rms) and values of 0.1 nm (rms) for micro-roughness. In the case of focusing optical elements such as elliptical cylinders, the required residual slope error is in the range of 0.25 μrad rms or better. In addition, the alignment of optical elements is a critical, beamline-performance-limiting topic. Thus the characterization of ultra-precise reflective optical elements for FEL-beamline application in the free and mounted states is of significant importance. We discuss recent metrology results achieved at the BESSY-II Optics Laboratory (BOL) of the Helmholtz Zentrum Berlin (HZB) using the Nanometer Optical Component Measuring Machine (NOM). Different types of mirrors have been inspected by line-scan and slope mapping in the free and mounted states. Based on these results, the mirror clamping of a combined mirror/grating set-up for the BL-beamlines at FLASH was improved.

  15. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    SciTech Connect

    Cleveland, Mathew A. Brunner, Thomas A.; Gentile, Nicholas A.; Keasler, Jeffrey A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility, at the cost of lost accuracy, by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
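The integer-tally idea can be demonstrated in miniature: scale each double to a fixed-point integer, sum with exact integer arithmetic (which is order-independent), and convert back, trading trailing significant digits for bit-for-bit reproducibility. This is a toy sketch of the principle, not the production scheme described in the paper:

```python
import random

def fixed_point_sum(values, scale=2 ** 40):
    """Order-independent sum: round each double to a fixed-point integer first.
    Python integers are arbitrary precision, so the integer sum is exact."""
    return sum(round(v * scale) for v in values) / scale

rng = random.Random(1)
vals = [rng.uniform(-1.0, 1.0) for _ in range(10000)]
shuffled = vals[:]
random.Random(2).shuffle(shuffled)

# A plain float sum can differ between orderings (rounding is not associative)...
print(sum(vals) == sum(shuffled))
# ...but the fixed-point tally is bit-for-bit identical in any order.
print(fixed_point_sum(vals) == fixed_point_sum(shuffled))   # True
```

The accuracy cost is the rounding to 1/scale resolution per term, which mirrors the paper's trade-off between reproducibility and retained significant digits.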

  16. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P radial orbit accuracy expectation and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  17. Measurement Precision and Accuracy of the Centre Location of AN Ellipse by Weighted Centroid Method

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2015-03-01

    Circular targets are often utilized in photogrammetry, and a circle on a plane is projected as an ellipse onto an oblique image. This paper reports an analysis conducted in order to investigate the measurement precision and accuracy of the centre location of an ellipse on a digital image by an intensity-weighted centroid method. An ellipse with a semi-major axis a, a semi-minor axis b, and a rotation angle θ of the major axis is investigated. In the study an equivalent radius r = (a²cos²θ + b²sin²θ)^(1/2) is adopted as a measure of the dimension of an ellipse. First an analytical expression representing a measurement error (ϵx, ϵy) is obtained. Then variances Vx of ϵx are obtained at 1/256-pixel intervals from 0.5 to 100 pixels in r by numerical integration, because a formula representing Vx cannot be obtained analytically when r > 0.5. The results of the numerical integration indicate that Vx would oscillate in a 0.5-pixel cycle in r and that Vx excluding the oscillation component would be inversely proportional to the cube of r. Finally an effective approximate formula for Vx from 0.5 to 100 pixels in r is obtained by least squares adjustment. The obtained formula is a fractional expression whose numerator is a fifth-degree polynomial in {r − 0.5×int(2r)}, expressing the oscillation component, and whose denominator is the cube of r. Here int(x) is the function returning the integer part of the value x. The coefficients of the fifth-degree polynomial in the numerator can be expressed by a quadratic polynomial in {0.5×int(2r) + 0.25}.
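The intensity-weighted centroid estimator studied here is just the first moment of the pixel intensities. A toy version on a synthetic binary ellipse (our own rasterization, not the paper's analytical error model) shows the sub-pixel recovery:

```python
import math

def weighted_centroid(a, b, theta, xc, yc, size=64):
    """Intensity-weighted centroid of a binary ellipse rasterized
    on a size x size pixel grid."""
    c, s = math.cos(theta), math.sin(theta)
    sx = sy = w = 0.0
    for y in range(size):
        for x in range(size):
            # rotate into the ellipse frame and test membership
            u = (x - xc) * c + (y - yc) * s
            v = -(x - xc) * s + (y - yc) * c
            if (u / a) ** 2 + (v / b) ** 2 <= 1.0:
                sx += x; sy += y; w += 1.0
    return sx / w, sy / w

cx, cy = weighted_centroid(a=14.0, b=7.0, theta=0.4, xc=31.3, yc=30.8)
print(cx, cy)   # close to (31.3, 30.8), within a small fraction of a pixel
```

The residual error comes from boundary-pixel quantization, which is exactly the effect behind the 0.5-pixel oscillation and the 1/r³ variance decay the paper derives.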

  18. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity, and the central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  19. Accuracy, precision and response time of consumer bimetal and digital thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Three models each of consumer instant-read bimetal and digital thermometers were tested for accuracy, precision and response time compared to a calibrated thermocouple in cooked 80 percent and 90 percent lean ground beef patties and boneless and bone-in split chicken breasts. At the recommended inse...

  20. Accuracy of critical-temperature sensitivity coefficients predicted by multilayered composite plate theories

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Burton, Scott

    1992-01-01

    An assessment is made of the accuracy of the critical-temperature sensitivity coefficients of multilayered plates predicted by different modeling approaches, based on two-dimensional shear-deformation theories. The sensitivity coefficients considered measure the sensitivity of the critical temperatures to variations in different lamination and material parameters of the plate. The standard of comparison is taken to be the sensitivity coefficients obtained by the three-dimensional theory of thermoelasticity. Numerical studies are presented showing the effects of variation in the geometric and lamination parameters of the plate on the accuracy of both the sensitivity coefficients and the critical temperatures predicted by the different modeling approaches.

  1. Double Precision Differential/Algebraic Sensitivity Analysis Code

    1995-06-02

    DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
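The "direct concurrent parametric sensitivity analysis" mentioned above can be illustrated with the simplest possible case: augment dy/dt = −ky with the forward sensitivity equation for s = ∂y/∂k and integrate both together. This forward-Euler toy is nothing like DDASAC's stiff implicit machinery, only a sketch of the idea:

```python
import math

# Model: dy/dt = -k*y.  The forward sensitivity s = dy/dk obeys
# ds/dt = (df/dy)*s + df/dk = -k*s - y, integrated alongside the state.
def integrate(k, y0, t_end, dt=1e-4):
    y, s, t = y0, 0.0, 0.0
    while t < t_end - 1e-12:
        y, s = y + dt * (-k * y), s + dt * (-k * s - y)
        t += dt
    return y, s

k, y0 = 2.0, 3.0
y, s = integrate(k, y0, t_end=1.0)
y_exact = y0 * math.exp(-k)           # analytic y(1) = y0*exp(-k)
s_exact = -1.0 * y0 * math.exp(-k)    # analytic dy/dk at t=1: -t*y0*exp(-k*t)
print(y, s)
```

Solving the sensitivity concurrently with the state, as DDASAC does, lets the same Jacobian factorization serve both systems.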

  2. Using statistics and software to maximize precision and accuracy in U-Pb geochronological measurements

    NASA Astrophysics Data System (ADS)

    McLean, N.; Bowring, J. F.; Bowring, S. A.

    2009-12-01

    Uncertainty in U-Pb geochronology results from a wide variety of factors, including isotope ratio determinations, common Pb corrections, initial daughter product disequilibria, instrumental mass fractionation, isotopic tracer calibration, and U decay constants and isotopic composition. The relative contribution of each depends on the proportion of radiogenic to common Pb, the measurement technique, and the quality of systematic error determinations. Random and systematic uncertainty contributions may be propagated into individual analyses or for an entire population, and must be propagated correctly to accurately interpret data. Tripoli and U-Pb_Redux comprise a new data reduction and error propagation software package that combines robust cycle measurement statistics with rigorous multivariate data analysis and presents the results graphically and interactively. Maximizing the precision and accuracy of a measurement begins with correct appraisal and codification of the systematic and random errors for each analysis. For instance, a large dataset of total procedural Pb blank analyses defines a multivariate normal distribution, describing the mean of and variation in isotopic composition (IC) that must be subtracted from each analysis. Uncertainty in the size and IC of each Pb blank is related to the (random) uncertainty in ratio measurements and the (systematic) uncertainty involved in tracer subtraction. Other sample and measurement parameters can be quantified in the same way, represented as statistical distributions that describe their uncertainty or variation, and are input into U-Pb_Redux as such before the raw sample isotope ratios are measured. During sample measurement, U-Pb_Redux and Tripoli can relay cycle data in real time, calculating a date and uncertainty for each new cycle or block. The results are presented in U-Pb_Redux as an interactive user interface with multiple visualization tools. One- and two-dimensional plots of each calculated date and
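The kind of propagation described, a measured ratio corrected for a Pb blank whose amount and isotopic composition are themselves uncertain, can be sketched with a simple Monte Carlo. All numbers here are invented for illustration; U-Pb_Redux propagates these uncertainties analytically, not by simulation:

```python
import random, statistics

rng = random.Random(42)

def corrected_ratio_draw():
    """One Monte Carlo draw of a blank-corrected 206/204-style ratio
    (all means and sigmas are hypothetical)."""
    measured_206 = rng.gauss(1000.0, 2.0)   # measured 206 signal
    measured_204 = rng.gauss(5.0, 0.05)     # measured 204 signal
    blank_204 = rng.gauss(0.50, 0.05)       # uncertain blank amount
    blank_ratio = rng.gauss(18.5, 0.5)      # uncertain blank 206/204 composition
    blank_206 = blank_204 * blank_ratio
    return (measured_206 - blank_206) / (measured_204 - blank_204)

draws = [corrected_ratio_draw() for _ in range(20000)]
print(statistics.mean(draws), statistics.stdev(draws))
```

The spread of the corrected ratio is dominated here by the 204 measurement and the blank amount, mirroring how the relative contribution of each error source depends on the radiogenic-to-common-Pb proportion.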

  3. High Sensitive Precise 3D Accelerometer for Solar System Exploration with Unmanned Spacecrafts

    NASA Astrophysics Data System (ADS)

    Savenko, Y. V.; Demyanenko, P. O.; Zinkovskiy, Y. F.

    measuring, by analogue FOS, has been ~10⁻⁴ %. Practically attainable values are still 2-3 orders of magnitude worse. The reason for the poor precision of measurers based on analogue fibre-optic sensors (FOS) is the metrologically poor quality of the stream of optical radiation that plays the role of carrier and receptor of the information: a high level of photon noise combined with a low overall intensity. The first reflects the discreteness of the flow of high-energy photons and is a consequence of the second, namely the small absolute power that available radiation sources (RS) can couple into an optical fibre (OF). Work on improving fibre-optic elements is under way, and radiation sources able to couple sufficient power into standard OF will certainly appear. However, simply increasing the optical power in the measuring path of a FOS cannot radically solve the problem of measurement precision: the photon-noise power grows in proportion to the square root of the optical power, so a 1000-fold increase in power promises only a 30-fold increase in precision, while coupling larger powers (~1 W for standard silica OF) into the fibre causes non-linear effects that destroy the operating principle of an analogue FOS. We must therefore conclude that, at present, analogue FOS-based measurers competitive in precision with traditional (electrical) measurers cannot be built. At the same time, the advantages of fibre optics as a basis for building FO measuring devices (MD) demand that ways around these problems be found. Our analysis of the sensitivity problem of conventional (analogue) FOS has led us to conclude that the principles of information-signal formation in FOS, and of its subsequent electronic processing, must be revised.
For a radical increase in the accuracy of measurements using FOS it is necessary to abandon analogue modulation of the optical flow and move to discrete modulation, thereby introducing into the optical flow new, non-optical parameters, which will

  4. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement is affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95%CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision is less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short-stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016. PMID:26553748

  5. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course.

    PubMed

    Sohn, Hansem; Lee, Sang-Hun

    2013-01-01

    Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing. PMID:23076112
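The Bayesian observer model invoked here combines a prior over intervals with a measurement likelihood; in the standard Gauss-Gauss case the posterior mean shrinks the noisy measurement toward the prior, and narrowing the prior (as the model suggests happens over training) tightens the estimates. This is a generic textbook sketch, not the authors' fitted model:

```python
def posterior_interval(t_measured, mu_prior, sd_prior, sd_likelihood):
    """Posterior mean and sd for a Gaussian prior times a Gaussian likelihood."""
    w = sd_likelihood ** 2 / (sd_prior ** 2 + sd_likelihood ** 2)  # weight on prior
    mean = w * mu_prior + (1 - w) * t_measured
    sd = (1 / (1 / sd_prior ** 2 + 1 / sd_likelihood ** 2)) ** 0.5
    return mean, sd

# A noisy 900 ms measurement against a prior centred on 600 ms (made-up numbers):
print(posterior_interval(900, mu_prior=600, sd_prior=120, sd_likelihood=80))
# A narrower prior pulls the estimate harder toward 600 ms and lowers the sd:
print(posterior_interval(900, mu_prior=600, sd_prior=40, sd_likelihood=80))
```

This captures the paper's conclusion qualitatively: shrinking the prior width over sessions improves timing precision without requiring any change in the likelihood.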

  6. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.; Elome, Andrew J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and the Normalized Difference Vegetation Index (NDVI) applied to Landsat TM data, are assessed quantitatively using high-precision in situ data. In Owens Valley, California we have 6 years of continuous field data (1991-1996) for 33 sites acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0%LC and estimates of change in live cover have a precision of ±3.8%LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
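In its simplest two-endmember form, the SMA step amounts to a least-squares fit of each pixel spectrum as a mixture of a vegetation and a soil endmember; with fractions constrained to sum to one, the solution is closed-form. The endmember reflectances below are invented for illustration, not the study's Landsat TM library:

```python
def unmix_fraction(pixel, veg, soil):
    """Least-squares live-cover fraction f minimizing
    ||f*veg + (1-f)*soil - pixel||^2 over the spectral bands."""
    num = sum((p - s) * (v - s) for p, v, s in zip(pixel, veg, soil))
    den = sum((v - s) ** 2 for v, s in zip(veg, soil))
    return num / den

veg  = [0.05, 0.04, 0.03, 0.50, 0.25, 0.12]   # hypothetical vegetation reflectance (6 bands)
soil = [0.12, 0.16, 0.22, 0.30, 0.40, 0.35]   # hypothetical bare-soil reflectance

# A synthetic 30% vegetation pixel unmixes back to ~0.30:
pixel = [0.3 * v + 0.7 * s for v, s in zip(veg, soil)]
print(round(unmix_fraction(pixel, veg, soil), 3))
```

Unlike NDVI, the recovered fraction is directly a physical abundance estimate, which is why SMA can be validated against %LC field data.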

  7. Accuracy and precision of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

    Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion / optimisation approach a forward model is used to simulate the water reflectance spectra from a set of parameters and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper the Particle Swarm Optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depends on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band specific empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality for freshwater impoundments.
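A minimal global-best particle swarm optimiser of the kind used to search the hydrosol solution space can be sketched as follows. The two-parameter quadratic "misfit" here merely stands in for the comparison between a simulated Hydrolight spectrum and the observed one; all parameter values are generic PSO defaults, not the paper's configuration:

```python
import random

def pso(cost, lo, hi, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Standard global-best PSO with inertia weight w and
    cognitive/social coefficients c1, c2. Positions clamped to [lo, hi]."""
    rng = random.Random(seed)
    dim = len(lo)
    pos = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo[d]), hi[d])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Stand-in misfit with "true" (chlorophyll, TSM) concentrations at (3.2, 8.5):
misfit = lambda p: (p[0] - 3.2) ** 2 + (p[1] - 8.5) ** 2
best, best_cost = pso(misfit, lo=[0.0, 0.0], hi=[20.0, 50.0])
print(best, best_cost)
```

In the paper the cost function is one of seven trialled similarity measures between spectra, and its choice, together with the modelled sensor and correction noise, sets the accuracy and precision of the retrieved concentrations.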

  8. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combined with the advantages of the PPLTP and the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. A precise system-alignment method using a newly developed CCD position-monitor system (PMS) assured significant thermal stability and, together with our optimized noise-reduction analysis method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.

  9. Analysis and improvement of accuracy, sensitivity, and resolution of the coherent gradient sensing method.

    PubMed

    Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin

    2016-06-10

    The coherent gradient sensing (CGS) method, one kind of shear interferometry sensitive to surface slope, has been applied to full-field curvature measuring for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution for the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035

  10. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket, and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting cost and keeping the speed of measurement similar to that of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit-level holder; b) more accurate sighting of geological surfaces by tracing them with the height-adjustable, rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e., away from the dip direction) within the trace of the laser plane; and d) improved measurement of bedding dip and direction, necessary to orient the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field, thus allowing determination of surfaces that cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  11. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM-driven offline robotic programming system) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with actual robot performance, because variations in individual robot repeatability and thermal expansion of robot linkages are not taken into account. This is especially important when robots are controlled and programmed remotely (e.g., over the Internet or Ethernet), since remote users have no physical contact with the robotic systems. Current Internet-based manufacturing technology, limited to a web camera for live image transfer, poses a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly have to be performed on-site, and robot positioning accuracy cannot be verified remotely. In the worst case, remote users must assume the robot performance envelope provided by the manufacturer, which may cause a potentially serious hazard of system crashes and damage to parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with a specific aim at accuracy calibration of micro precision robotic systems, through the development of a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  12. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    PubMed Central

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods used in previous studies. In the OE and EM data collections, specific landmarks on the rigid bodies were determined using digitization. In the EM+CT and OE+CT data sets, the landmark locations were obtained from the CT images. 3D linear distances and 3D curvilinear path distances were calculated for the points. The accuracy and precision for all 4 methods were evaluated (EM, OE, EM+CT and OE+CT). In addition, results were compared with and without the CT imaging (EM vs. EM+CT, OE vs. OE+CT). All systems overestimated the actual 3D curvilinear path lengths. All systems also underestimated the actual rotation values. The accuracy of all methods was within 0.5 mm for 3D curvilinear path calculations, 0.05 mm for 3D linear distance calculations, and 0.2° for rotation calculations. In addition, Bland-Altman plots for each configuration of the systems suggest that measurements obtained from either system are repeatable and comparable. PMID:18617178
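    The Bland-Altman analysis mentioned above computes the bias (mean difference) and 95% limits of agreement between paired measurements from two methods. A minimal sketch (not the authors' code; the paired sample arrays below are hypothetical):

```python
import statistics

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    methods measuring the same quantity on the same cases."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    spread = 1.96 * statistics.stdev(diffs)
    return bias, (bias - spread, bias + spread)

# Hypothetical paired path-length measurements (mm) from two systems:
bias, (lower, upper) = bland_altman([1.0, 2.0, 3.0, 4.0],
                                    [1.1, 1.9, 3.2, 3.8])
```

Narrow limits of agreement centred near zero are what supports the abstract's conclusion that the systems are repeatable and comparable.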

  13. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  14. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

    Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  15. The accuracy and precision of DXA for assessing body composition in team sport athletes.

    PubMed

    Bilsborough, Johann Christopher; Greenway, Kate; Opar, David; Livingstone, Steuart; Cordy, Justin; Coutts, Aaron James

    2014-01-01

    This study determined the precision of pencil and fan beam dual-energy X-ray absorptiometry (DXA) devices for assessing body composition in professional Australian Football players. Thirty-six professional Australian Football players, in two groups (fan DXA, N = 22; pencil DXA, N = 25), underwent two consecutive DXA scans. A whole body phantom with known values for fat mass, bone mineral content and fat-free soft tissue mass was also used to validate each DXA device. Additionally, the criterion phantom was scanned 20 times by each DXA to assess reliability. Test-retest reliability of DXA anthropometric measures was derived from repeated fan and pencil DXA scans. Fat-free soft tissue mass and bone mineral content from both DXA units showed strong correlations with, and trivial differences to, the criterion phantom values. Fat mass from both DXA units showed moderate correlations with criterion measures (pencil: r = 0.64; fan: r = 0.67) and moderate differences with the criterion value. The limits of agreement were similar for both fan beam DXA and pencil beam DXA (fan: fat-free soft tissue mass = -1650 ± 179 g, fat mass = -357 ± 316 g, bone mineral content = 289 ± 122 g; pencil: fat-free soft tissue mass = -1701 ± 257 g, fat mass = -359 ± 326 g, bone mineral content = 177 ± 117 g). DXA also showed excellent precision for bone mineral content (coefficient of variation (%CV) fan = 0.6%; pencil = 1.5%) and fat-free soft tissue mass (%CV fan = 0.3%; pencil = 0.5%) and acceptable reliability for fat measures (%CV fan: fat mass = 2.5%, percent body fat = 2.5%; pencil: fat mass = 5.9%, percent body fat = 5.7%). Both DXA devices provide precise measures of fat-free soft tissue mass and bone mineral content in lean Australian Football players. DXA-derived fat-free soft tissue mass and bone mineral content are suitable for assessing body composition in lean team sport athletes. PMID:24914773
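    The coefficient of variation (%CV) used above as the precision metric is simply the standard deviation of repeated measurements expressed as a percentage of their mean. A minimal sketch (the function name and sample values are illustrative, not from the study):

```python
import statistics

def percent_cv(repeats):
    """Coefficient of variation (%) of repeated measurements, the
    test-retest precision metric quoted in the abstract."""
    return 100.0 * statistics.stdev(repeats) / statistics.mean(repeats)

# Hypothetical repeated bone-mineral-content scans (g) of one phantom:
cv = percent_cv([2850.0, 2862.0, 2855.0])
```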

  16. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC, inducing fission in the target. The 4π acceptance for fission fragments and complete charged-particle track reconstruction are powerful features of the fissionTPC, which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  17. The Precision and Accuracy of AIRS Level 1B Radiances for Climate Studies

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Gaiser, Steve; Pagano, Tom; Aumann, Hartmut

    2004-01-01

    We investigate uncertainties in the Atmospheric Infrared Sounder (AIRS) radiances based on in-flight and preflight calibration algorithms and observations. The global coverage and spectral resolution (λ/Δλ ≈ 1200) of AIRS enable it to produce a data set that can be used as a climate data record over the lifetime of the instrument. Therefore, we examine the effects of the uncertainties in the calibration and the detector stability on future climate studies. The uncertainties of the parameters that go into the AIRS radiometric calibration are propagated to estimate the accuracy of the radiances and any climate data record created from AIRS measurements. The calculated radiance uncertainties are consistent with observations. Algorithm enhancements may be able to reduce the radiance uncertainties by as much as 7%. We find that the orbital variation of the gain contributes a brightness temperature bias of < 0.01 K.

  18. Quantification and visualization of carotid segmentation accuracy and precision using a 2D standardized carotid map

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Ukwatta, Eranga; Shavakh, Shadi; Fenster, Aaron

    2013-06-01

    This paper describes a framework for vascular image segmentation evaluation. Since the size of vessel wall and plaque burden is defined by the lumen and wall boundaries in vascular segmentation, these two boundaries should be considered as a pair in statistical evaluation of a segmentation algorithm. This work proposed statistical metrics to evaluate the difference of local vessel wall thickness (VWT) produced by manual and algorithm-based semi-automatic segmentation methods (ΔT) with the local segmentation standard deviation of the wall and lumen boundaries considered. ΔT was further approximately decomposed into the local wall and lumen boundary differences (ΔW and ΔL respectively) in order to provide information regarding which of the wall and lumen segmentation errors contribute more to the VWT difference. In this study, the lumen and wall boundaries in 3D carotid ultrasound images acquired for 21 subjects were each segmented five times manually and by a level-set segmentation algorithm. The (absolute) difference measures (i.e., ΔT, ΔW, ΔL and their absolute values) and the pooled local standard deviation of manually and algorithmically segmented wall and lumen boundaries were computed for each subject and represented in a 2D standardized map. The local accuracy and variability of the segmentation algorithm at each point can be quantified by the average of these metrics for the whole group of subjects and visualized on the 2D standardized map. Based on the results shown on the 2D standardized map, a variety of strategies, such as adding anchor points and adjusting weights of different forces in the algorithm, can be introduced to improve the accuracy and variability of the algorithm.

  19. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. 
Roadblocks to this vision of integration and interoperability include ethical, legal

  20. Enhancing detection sensitivity and accuracy on “Candidatus Liberibacter asiaticus” through next generation sequencing technology

    Technology Transfer Automated Retrieval System (TEKTRAN)

    “Candidatus Liberibacter asiaticus” is associated with citrus Huanglongbing (HLB, yellow shoot disease). Detection of this unculturable bacterium exclusively depends on polymerase chain reaction (PCR). Appropriate primer design is key to assure detection sensitivity and accuracy, which depend on qua...

  1. Sensitivity of grass and alfalfa reference evapotranspiration to weather station sensor accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1991 to 2008 from an autom...

  2. Precise and Continuous Time and Frequency Synchronisation at the 5×10-19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385
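    Frequency-transfer stability figures such as 5×10⁻¹⁹/day are conventionally characterized with the Allan deviation of fractional-frequency data. A minimal sketch of the textbook non-overlapping estimator (the paper's own stability analysis may use a different estimator):

```python
import math

def allan_deviation(fractional_freq):
    """Non-overlapping Allan deviation of a series of fractional-frequency
    samples at the basic averaging time: sqrt(mean of squared successive
    differences / 2)."""
    diffs = [b - a for a, b in zip(fractional_freq, fractional_freq[1:])]
    return math.sqrt(sum(d * d for d in diffs) / (2.0 * len(diffs)))
```

In practice the estimate is repeated at longer averaging times (by block-averaging the data first) to trace out the full stability curve.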

  3. Towards the next decades of precision and accuracy in a 87Sr optical lattice clock

    NASA Astrophysics Data System (ADS)

    Martin, Michael; Lin, Yige; Swallows, Matthew; Bishof, Michael; Blatt, Sebastian; Benko, Craig; Chen, Licheng; Hirokawa, Takako; Rey, Ana Maria; Ye, Jun

    2011-05-01

    Optical lattice clocks based on ensembles of neutral atoms have the potential to operate at the highest levels of stability due to the parallel interrogation of many atoms. However, the control of systematic shifts in these systems is correspondingly difficult due to potential collisional atomic interactions. By tightly confining samples of ultracold fermionic 87Sr atoms in a two-dimensional optical lattice, as opposed to the conventional one-dimensional geometry, we increase the collisional interaction energy to be the largest relevant energy scale, thus entering the strongly interacting regime of clock operation. We show both theoretically and experimentally that this increase in interaction energy results in a paradoxical decrease in the collisional shift, reducing this key systematic to the 10-17 level. We also present work towards next-generation ultrastable lasers to attain quantum-limited clock operation, potentially enhancing clock precision by an order of magnitude. This work was supported by a grant from the ARO with funding from the DARPA OLE program, NIST, NSF, and AFOSR.

  4. Tedlar bag sampling technique for vertical profiling of carbon dioxide through the atmospheric boundary layer with high precision and accuracy.

    PubMed

    Schulz, Kristen; Jensen, Michael L; Balsley, Ben B; Davis, Kenneth; Birks, John W

    2004-07-01

    Carbon dioxide is the most important greenhouse gas other than water vapor, and its modulation by the biosphere is of fundamental importance to our understanding of global climate change. We have developed a new technique for vertical profiling of CO2 and meteorological parameters through the atmospheric boundary layer and well into the free troposphere. Vertical profiling of CO2 mixing ratios allows estimates of landscape-scale fluxes characteristic of approximately 100 km² of an ecosystem. The method makes use of a powered parachute as a platform and a new Tedlar bag air sampling technique. Air samples are returned to the ground where measurements of CO2 mixing ratios are made with high precision (≤0.1%) and accuracy (≤0.1%) using a conventional nondispersive infrared analyzer. Laboratory studies are described that characterize the accuracy and precision of the bag sampling technique and that measure the diffusion coefficient of CO2 through the Tedlar bag wall. The technique has been applied in field studies in the proximity of two AmeriFlux sites, and results are compared with tower measurements of CO2. PMID:15296321

  5. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    Systematic review of literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their CBCT average measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation (over- or underestimation). However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  6. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924
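    The correction described above, regressing pupil size out of the gaze-position signal, amounts to subtracting the component of gaze linearly predicted by pupil size. A minimal ordinary-least-squares sketch (not the authors' implementation; the sample data are hypothetical):

```python
import statistics

def regress_out(pupil, gaze):
    """Return gaze positions with the component linearly predicted by
    pupil size removed (simple ordinary-least-squares regression)."""
    mp, mg = statistics.mean(pupil), statistics.mean(gaze)
    slope = (sum((p - mp) * (g - mg) for p, g in zip(pupil, gaze))
             / sum((p - mp) ** 2 for p in pupil))
    return [g - slope * (p - mp) for p, g in zip(pupil, gaze)]

# Hypothetical samples in which the gaze drift perfectly tracks pupil
# size, so the corrected trace is flat:
corrected = regress_out([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

Removing this predictable component is what lets the residual signal resolve the 0.1° eye-position differences reported in the abstract.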

  7. Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models

    NASA Astrophysics Data System (ADS)

    Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin

    2016-04-01

    This study investigates the accuracy and repeatability of the PPP technique at different latitudes using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1-hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients, in order to investigate the contribution of troposphere delay gradients to the positioning. Second, the Saastamoinen model was used to eliminate troposphere path delays, with standard atmosphere parameters extrapolated to all station levels. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. Results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean rms errors 0.09% and 65% better than the Niell model without horizontal gradients and the RTCA-MOPS model, respectively. The mean rms errors of the Saastamoinen model were approximately 4 times larger than those of the Niell model with horizontal gradients.
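    For context, the Saastamoinen zenith hydrostatic delay compared in this study is commonly written as ZHD = 0.0022768·P / (1 − 0.00266·cos 2φ − 0.00028·h), with pressure P in hPa, latitude φ, and height h in km. A minimal sketch using these commonly quoted constants (an illustration of the model family only, not the study's processing chain):

```python
import math

def saastamoinen_zhd(pressure_hpa, latitude_rad, height_km):
    """Zenith hydrostatic delay (metres) from the Saastamoinen model,
    with the constants commonly quoted in GNSS texts."""
    return 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * math.cos(2.0 * latitude_rad) - 0.00028 * height_km)

# Roughly 2.3 m of zenith delay at sea level on the equator:
zhd = saastamoinen_zhd(1013.25, 0.0, 0.0)
```

Because the delay scales with surface pressure, feeding the model extrapolated standard-atmosphere pressures rather than measured ones, as the study does, directly degrades its rms errors.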

  8. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computer tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  9. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    Systematic review of literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their CBCT average measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation (over- or underestimation). However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  10. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.

  11. A method of determining accuracy and precision for dosimeter systems using accreditation data.

    PubMed

    Cummings, Frederick; Flood, John R

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively. PMID:21068596

  12. Decoding Accuracy in Supplementary Motor Cortex Correlates with Perceptual Sensitivity to Tactile Roughness

    PubMed Central

    Kim, Junsuk; Chung, Yoon Gi; Park, Jang-Yeon; Chung, Soon-Cheol; Wallraven, Christian; Bülthoff, Heinrich H.; Kim, Sung-Phil

    2015-01-01

    Perceptual sensitivity to tactile roughness varies across individuals for the same degree of roughness. A number of neurophysiological studies have investigated the neural substrates of tactile roughness perception, but the neural processing underlying the strong individual differences in perceptual roughness sensitivity remains unknown. In this study, we explored the human brain activation patterns associated with the behavioral discriminability of surface texture roughness using functional magnetic resonance imaging (fMRI). First, a whole-brain searchlight multi-voxel pattern analysis (MVPA) was used to find brain regions from which we could decode roughness information. The searchlight MVPA revealed four brain regions showing significant decoding results: the supplementary motor area (SMA), contralateral postcentral gyrus (S1), and superior portion of the bilateral temporal pole (STP). Next, we evaluated the behavioral roughness discrimination sensitivity of each individual using the just-noticeable difference (JND) and correlated this with the decoding accuracy in each of the four regions. We found that only the SMA showed a significant correlation between neuronal decoding accuracy and JND across individuals; participants with a smaller JND (i.e., better discrimination ability) exhibited higher decoding accuracy from their voxel response patterns in the SMA. Our findings suggest that multivariate voxel response patterns presented in the SMA represent individual perceptual sensitivity to tactile roughness and people with greater perceptual sensitivity to tactile roughness are likely to have more distinct neural representations of different roughness levels in their SMA. PMID:26067832
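    The across-participant correlation between decoding accuracy and JND reported above is, in essence, a Pearson correlation. A minimal sketch follows; the per-participant JNDs and classifier accuracies are invented, not the study's data.

```python
# Pearson correlation between per-participant behavioral JND and per-ROI
# decoding accuracy, as a stand-in for the brain-behavior analysis described.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

jnd = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2]                 # smaller = better discrimination
decoding_acc = [0.61, 0.52, 0.66, 0.45, 0.58, 0.50]  # toy SMA classifier accuracies
r = pearson_r(jnd, decoding_acc)
print(f"r = {r:.2f}")  # negative in this toy data: smaller JND, higher accuracy
```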

  13. Video image analysis in the Australian meat industry - precision and accuracy of predicting lean meat yield in lamb carcasses.

    PubMed

    Hopkins, D L; Safari, E; Thompson, J M; Smith, C R

    2004-06-01

    A wide selection of lamb types of mixed sex (ewes and wethers) was slaughtered at a commercial abattoir and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R^2=0.19, r.s.d.=2.80%), which could be improved markedly when PGR was replaced by NGR (R^2=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures then greater prediction accuracy could be achieved (R^2=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R^2=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. Those models based on PCs were superior to those based on regression. It is demonstrated that, with appropriate modeling, the VIAScan® system offers a workable method for predicting lean meat yield automatically. PMID:22061323
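    The accuracy and precision figures quoted above (R^2 and residual standard deviation, r.s.d.) come from ordinary least-squares fits. A minimal single-predictor sketch is below; the GR depths and yield percentages are invented illustration data, not the study's measurements.

```python
# Compute R^2 and df-adjusted residual SD (r.s.d.) for a simple regression
# predicting lean meat yield % from GR tissue depth.
import math

def simple_ols(x, y):
    """Least-squares slope/intercept plus R^2 and residual SD."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid = [b - (intercept + slope * a) for a, b in zip(x, y)]
    ss_res = sum(r * r for r in resid)
    ss_tot = sum((b - my) ** 2 for b in y)
    r2 = 1.0 - ss_res / ss_tot
    rsd = math.sqrt(ss_res / (n - 2))        # residual df = n - 2
    return slope, intercept, r2, rsd

gr_mm = [6, 8, 10, 12, 14, 16, 18, 20]       # GR tissue depth, mm (invented)
yield_pct = [57.1, 56.0, 55.4, 53.9, 53.5, 52.0, 51.8, 50.2]
slope, intercept, r2, rsd = simple_ols(gr_mm, yield_pct)
print(f"R^2 = {r2:.2f}, r.s.d. = {rsd:.2f}%")
```

    The study's multi-predictor and principal-component models follow the same pattern, just with more regressors.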

  14. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using the Multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks, including stations all over the world. The statistical analysis of the 6-h predicted orbits shows that the radial and cross-track root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions for GPS satellites are about 0.10 ns, while the RMS values for BeiDou, Galileo and GLONASS are 0.13, 0.13 and 0.14 ns, respectively. The addition of the BeiDou, Galileo and GLONASS systems to standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly degraded, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rates of the GPS-only solution drop significantly to only around 70% and 40%, respectively. However, multi-GNSS PPP can provide precise position estimates continuously (availability rate of more than 99%).

  15. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebra and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392
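    The bias (SD) figures above are simply the mean and standard deviation of the paired timing differences between the IMU detections and the force-plate reference. A minimal sketch, with invented event timings:

```python
# Accuracy (bias) = mean of IMU-minus-reference timing differences;
# precision = SD of those differences.
import statistics

def bias_sd(imu_times_ms, reference_times_ms):
    diffs = [i - r for i, r in zip(imu_times_ms, reference_times_ms)]
    return statistics.mean(diffs), statistics.stdev(diffs)

fp_hoof_on  = [102, 517, 934, 1350, 1763]   # force-plate reference events, ms
imu_hoof_on = [ 95, 512, 940, 1338, 1760]   # IMU-detected events, ms (invented)
bias, sd = bias_sd(imu_hoof_on, fp_hoof_on)
print(f"hoof-on bias {bias:.1f} ms, SD {sd:.1f} ms")
```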

  16. Fast and sensitive detection of indels induced by precise gene targeting.

    PubMed

    Yang, Zhang; Steentoft, Catharina; Hauge, Camilla; Hansen, Lars; Thomsen, Allan Lind; Niola, Francesco; Vester-Christensen, Malene B; Frödin, Morten; Clausen, Henrik; Wandall, Hans H; Bennett, Eric P

    2015-05-19

    Nuclease-based gene editing tools are rapidly transforming capabilities for altering the genome of cells and organisms with great precision and in high-throughput studies. A major limitation in the application of precise gene editing lies in the lack of sensitive and fast methods to detect and characterize the induced DNA changes. Precise gene editing induces double-stranded DNA breaks that are repaired by error-prone non-homologous end joining, leading to the introduction of insertions and deletions (indels) at the target site. These indels are often small and difficult and laborious to detect by traditional methods. Here we present a method for fast, sensitive and simple indel detection that accurately defines indel sizes down to ±1 bp. The method, coined IDAA for Indel Detection by Amplicon Analysis, is based on tri-primer amplicon labelling and DNA capillary electrophoresis detection, and IDAA is amenable to high-throughput analysis. PMID:25753669

  17. The extended tracking network and indications of baseline precision and accuracy in the North Andes

    NASA Technical Reports Server (NTRS)

    Freymueller, Jeffrey T.; Kellogg, James N.

    1990-01-01

    The CASA Uno Global Positioning System (GPS) experiment (January-February 1988) included an extended tracking network which covered three continents in addition to the network of scientific interest in Central and South America. The repeatability of long baselines (400-1000 km) in South America is improved by up to a factor of two in the horizontal vector baseline components by using tracking stations in the Pacific and Europe to supplement stations in North America. In every case but one, the differences between the mean solutions obtained using different tracking networks were equal to or smaller than day-to-day rms repeatabilities for the same baselines. The mean solutions obtained by using tracking stations in North America and the Pacific agreed at the 2-3 millimeter level with those using tracking stations in North America and Europe. The agreement of the extended tracking network solutions suggests that a broad distribution of tracking stations provides better geometric constraints on the satellite orbits and that solutions are not sensitive to changes in tracking network configuration when an extended network is used. A comparison of the results from the North Andes and a baseline in North America suggests that the use of a geometrically strong extended tracking network is most important when the network of interest is far from North America.

  18. Risk sensitivity in a motor task with speed-accuracy trade-off

    PubMed Central

    Braun, Daniel A.; Wolpert, Daniel M.

    2011-01-01

    When a racing driver steers a car around a sharp bend, there is a trade-off between speed and accuracy, in that high speed can lead to a skid whereas a low speed increases lap time, both of which can adversely affect the driver's payoff function. While speed-accuracy trade-offs have been studied extensively, their susceptibility to risk sensitivity is much less understood, since most theories of motor control are risk neutral with respect to payoff, i.e., they only consider mean payoffs and ignore payoff variability. Here we investigate how individual risk attitudes impact a motor task that involves such a speed-accuracy trade-off. We designed an experiment where a target had to be hit and the reward (given in points) increased as a function of both subjects' endpoint accuracy and endpoint velocity. As faster movements lead to poorer endpoint accuracy, the variance of the reward increased for higher velocities. We tested subjects on two reward conditions that had the same mean reward but differed in the variance of the reward. A risk-neutral account predicts that subjects should only maximize the mean reward and hence perform identically in the two conditions. In contrast, we found that some (risk-averse) subjects chose to move with lower velocities and other (risk-seeking) subjects with higher velocities in the condition with higher reward variance (risk). This behavior is suboptimal with regard to maximizing the mean number of points but is in accordance with a risk-sensitive account of movement selection. Our study suggests that individual risk sensitivity is an important factor in motor tasks with speed-accuracy trade-offs. PMID:21430284
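    The risk-sensitivity contrast described above can be illustrated with a mean-variance utility U = mean − θ·variance, where θ > 0 is risk-averse and θ < 0 is risk-seeking. This is a toy illustration of the concept, not the paper's model; the reward lotteries and θ values are invented.

```python
# Two reward lotteries with equal mean but different variance, scored with a
# mean-variance utility. A risk-neutral agent (theta=0) is indifferent;
# risk-averse and risk-seeking agents split on the high-variance option.
import statistics

def mean_variance_utility(rewards, theta):
    return statistics.mean(rewards) - theta * statistics.pvariance(rewards)

low_var  = [49, 50, 51, 50]    # mean 50, small spread
high_var = [20, 80, 30, 70]    # mean 50, large spread

for theta, label in [(0.0, "risk-neutral"), (0.05, "risk-averse"), (-0.05, "risk-seeking")]:
    u_low = mean_variance_utility(low_var, theta)
    u_high = mean_variance_utility(high_var, theta)
    if u_high > u_low:
        pick = "high-variance"
    elif u_low > u_high:
        pick = "low-variance"
    else:
        pick = "indifferent"
    print(f"{label}: prefers {pick}")
```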

  19. Risk sensitivity in a motor task with speed-accuracy trade-off.

    PubMed

    Nagengast, Arne J; Braun, Daniel A; Wolpert, Daniel M

    2011-06-01

    When a racing driver steers a car around a sharp bend, there is a trade-off between speed and accuracy, in that high speed can lead to a skid whereas a low speed increases lap time, both of which can adversely affect the driver's payoff function. While speed-accuracy trade-offs have been studied extensively, their susceptibility to risk sensitivity is much less understood, since most theories of motor control are risk neutral with respect to payoff, i.e., they only consider mean payoffs and ignore payoff variability. Here we investigate how individual risk attitudes impact a motor task that involves such a speed-accuracy trade-off. We designed an experiment where a target had to be hit and the reward (given in points) increased as a function of both subjects' endpoint accuracy and endpoint velocity. As faster movements lead to poorer endpoint accuracy, the variance of the reward increased for higher velocities. We tested subjects on two reward conditions that had the same mean reward but differed in the variance of the reward. A risk-neutral account predicts that subjects should only maximize the mean reward and hence perform identically in the two conditions. In contrast, we found that some (risk-averse) subjects chose to move with lower velocities and other (risk-seeking) subjects with higher velocities in the condition with higher reward variance (risk). This behavior is suboptimal with regard to maximizing the mean number of points but is in accordance with a risk-sensitive account of movement selection. Our study suggests that individual risk sensitivity is an important factor in motor tasks with speed-accuracy trade-offs. PMID:21430284

  20. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
Given the

  1. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration
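    The accuracy metric used above is the percent error of a method's reading against the reference titration. A minimal sketch, with invented quintuplicate strip readings of a 0.5% solution:

```python
# Percent error of a test method's mean chlorine reading vs. the reference.
def percent_error(measured, reference):
    return abs(measured - reference) / reference * 100.0

reference = 0.50                                # % chlorine by reference method
strip_readings = [0.55, 0.60, 0.50, 0.65, 0.55]  # invented quintuplicate readings
mean_reading = sum(strip_readings) / len(strip_readings)
print(f"mean error: {percent_error(mean_reading, reference):.1f}%")
```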

  2. Sensitivity to light weakly-coupled new physics at the precision frontier

    NASA Astrophysics Data System (ADS)

    Le Dall, Matthias; Pospelov, Maxim; Ritz, Adam

    2015-07-01

    Precision measurements of rare particle physics phenomena (flavor oscillations and decays, electric dipole moments, etc.) are often sensitive to the effects of new physics encoded in higher-dimensional operators with Wilson coefficients given by C/(Λ_NP)^n, where C is dimensionless, n ≥ 1, and Λ_NP is an energy scale. Many extensions of the Standard Model predict that Λ_NP should be at the electroweak scale or above, and the search for new short-distance physics is often stated as the primary goal of experiments at the precision frontier. In rather general terms, we investigate the alternative possibility, C ≪ 1 and Λ_NP ≪ m_W, to identify classes of precision measurements sensitive to light new physics (hidden sectors) that do not require an ultraviolet completion with additional states at or above the electroweak scale. We find that hadronic electric dipole moments, lepton number and flavor violation, nonuniversality, as well as the lepton g−2 can be induced at interesting levels by hidden sectors with light degrees of freedom. In contrast, many hadronic flavor- and baryon-number-violating observables, and precision probes of charged currents, typically require new physics with Λ_NP ≳ m_W. Among the leptonic observables, we find that a nonzero electron electric dipole moment near the current level of sensitivity would point to the existence of new physics at or above the electroweak scale.
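    The scaling argument can be made concrete numerically: the same effective operator strength C/Λ_NP^n can arise either from O(1) couplings to a heavy sector or from tiny couplings to a light hidden sector. The parameter values below are invented purely for illustration.

```python
# Two parameter choices giving the same dimension-6 operator strength
# C / Lambda^n (n = 2), one "heavy" and one "light, weakly coupled".
def operator_strength(C, Lambda_GeV, n):
    return C / Lambda_GeV ** n

heavy = operator_strength(C=1.0,   Lambda_GeV=1e4, n=2)  # Lambda_NP >> m_W, C ~ 1
light = operator_strength(C=1e-10, Lambda_GeV=0.1, n=2)  # Lambda_NP << m_W, C << 1
print(f"heavy: {heavy:.1e} GeV^-2, light: {light:.1e} GeV^-2")
```

    A given precision measurement bounds only the ratio, which is why it cannot by itself distinguish the two scenarios.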

  3. Detailed data is welcome, but with a pinch of salt: Accuracy, precision, and uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, F.; Di Baldassarre, G.; Todini, E.

    2013-09-01

    New survey techniques provide a large amount of high-resolution data, which can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to effectively improve flood risk mapping and predictions. In this paper, we discuss a number of important issues that should be taken into account in flood modeling studies. These include the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, due to the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution outputs, as accuracy is not necessarily increased by higher precision. Finally, we briefly review and discuss a number of existing approaches, such as subgrid parameterization and roughness upscaling methods, which can be used to incorporate highly detailed data into flood inundation models, balancing efficiency and reliability.

  4. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions corresponding to samples of three different ages, to be run as internal

  5. Accuracy, Precision, Sensitivity, and Specificity of Noninvasive ICP Absolute Value Measurements.

    PubMed

    Krakauskaite, Solventa; Petkus, Vytautas; Bartusis, Laimonas; Zakelis, Rolandas; Chomskis, Romanas; Preiksaitis, Aidanas; Ragauskas, Arminas; Matijosaitis, Vaidas; Petrikonis, Kestutis; Rastenyte, Daiva

    2016-01-01

    An innovative absolute intracranial pressure (ICP) value measurement method has been validated by multicenter comparative clinical studies. The method is based on two-depth transcranial Doppler (TCD) technology and uses intracranial and extracranial segments of the ophthalmic artery as pressure sensors. The ophthalmic artery is used as a natural pair of "scales" that compares ICP with controlled pressure Pe, which is externally applied to the orbit. To balance the scales (ICP = Pe), a special two-depth TCD device was used as a pressure balance indicator. The proposed method is the only noninvasive ICP measurement method that does not need patient-specific calibration. PMID:27165929
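    The balancing idea above (adjust Pe until the two ophthalmic-artery segments behave identically, at which point Pe ≈ ICP) can be sketched as a root-finding loop. This is a conceptual illustration only: the flow-asymmetry function below is an invented monotone stand-in, not the device's actual algorithm.

```python
# Conceptual sketch: bisection on the externally applied pressure Pe until a
# (toy) flow-asymmetry indicator between the intracranial and extracranial
# ophthalmic-artery segments crosses zero, i.e. Pe balances ICP.
def flow_asymmetry(pe_mmHg, true_icp_mmHg):
    """Invented indicator: positive while Pe < ICP, zero at balance."""
    return true_icp_mmHg - pe_mmHg

def balance_pe(true_icp, lo=0.0, hi=60.0, tol=0.1):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if flow_asymmetry(mid, true_icp) > 0:
            lo = mid          # Pe still below ICP: push up
        else:
            hi = mid          # Pe at or above ICP: pull down
    return (lo + hi) / 2

print(f"estimated ICP ~ {balance_pe(14.0):.1f} mmHg")
```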

  6. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  7. Assessment of accuracy and precision of 3D reconstruction of unicompartmental knee arthroplasty in upright position using biplanar radiography.

    PubMed

    Tsai, Tsung-Yuan; Dimitriou, Dimitris; Hosseini, Ali; Liow, Ming Han Lincoln; Torriani, Martin; Li, Guoan; Kwon, Young-Min

    2016-07-01

    This study aimed to evaluate the precision and accuracy of 3D reconstruction of UKA component position, contact location and lower limb alignment in the standing position using biplanar radiography. Two human specimens with 4 medial UKAs were implanted with beads for radiostereometric analysis (RSA). The specimens were frozen in a standing position and CT-scanned to obtain the relative positions between the beads, bones and UKA components. The specimens were then imaged using biplanar radiography (EOS). The positions of the femur, tibia, UKA components and UKA contact locations were obtained using RSA- and EOS-based techniques. The intraclass correlation coefficient (ICC) was calculated for inter-observer reliability of the EOS technique. The average (standard deviation) of the differences between the two techniques in translations and rotations were less than 0.18 (0.29) mm and 0.39° (0.66°) for UKA components. The root-mean-square errors (RMSE) of contact location along the anterior/posterior and medial/lateral directions were 0.84 mm and 0.30 mm. The RMSEs of the knee rotations were less than 1.70°. The ICCs for the EOS-based segmental orientations between two raters were larger than 0.98. The results suggest the EOS-based 3D reconstruction technique can precisely determine component position, contact location and lower limb alignment for UKA patients in the weight-bearing standing position. PMID:27117422
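    The RMSE comparison above is computed from paired EOS- and RSA-derived values. A minimal sketch, with invented contact-location values:

```python
# Root-mean-square error between paired measurements from two techniques.
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

rsa_ap = [2.1, -0.5, 1.3, 0.8]   # anterior/posterior contact location, mm (invented)
eos_ap = [2.9,  0.4, 0.6, 1.5]   # same steps measured with the EOS technique
print(f"AP RMSE: {rmse(eos_ap, rsa_ap):.2f} mm")
```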

  8. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ
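    The VOI-shift experiment described above can be illustrated in one dimension: shift a volume of interest by sub-voxel amounts over a synthetic activity map and record the error in the summed-activity estimate. The phantom, VOI and shift values below are invented; the study used 3D NCAT phantoms and full reconstructions.

```python
# 1D toy phantom: "organ" activity 10 in voxels 40..59, background 1 elsewhere.
activity = [10.0 if 40 <= v < 60 else 1.0 for v in range(100)]
true_total = sum(activity[40:60])

def shifted_estimate(shift_voxels):
    """Sum activity over the VOI shifted by a (possibly fractional) amount,
    using linear interpolation between neighbouring voxels."""
    total = 0.0
    for v in range(40, 60):
        x = v + shift_voxels
        i = int(x)               # x stays positive for the shifts used here
        frac = x - i
        total += (1 - frac) * activity[i] + frac * activity[i + 1]
    return total

for shift in (-1.0, -0.5, 0.0, 0.5, 1.0):
    err = (shifted_estimate(shift) - true_total) / true_total * 100
    print(f"VOI shift {shift:+.1f} voxel -> activity error {err:+.1f}%")
```

    Even sub-voxel misregistration biases the estimate low here, because VOI edge voxels start averaging organ activity with background.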

  9. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is a common practice in the world at both a commercial and academic research level. Since last ten years, Continuous Operating Reference Stations (CORSs) networks were born in order to achieve the possibility to extend a precise positioning more than 15 km far from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments in order to evaluate the achievable precision obtainable with different GNSS receivers (geodetic and mass-market) and antennas if a CORSs network is considered. This work starts from the research above described, in particular focusing the attention on the usefulness of single frequency permanent stations in order to thicken the existing CORSs, especially for monitoring purposes. Two different types of CORSs network are available today in Italy: the first one is the so called "regional network" and the second one is the "national network", where the mean inter-station distances are about 25/30 and 50/70 km respectively. These distances are useful for many applications (e.g. mobile mapping) if geodetic instruments are considered but become less useful if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover's positioning in terms of quality, accuracy and reliability both in real-time and post-processing approach. The use of single frequency GNSS receivers leads to have some limits, especially due to a limited baseline length, the possibility to obtain a correct fixing of the phase ambiguity for the network and to fix the phase ambiguity correctly also for the rover. These factors play a crucial role in order to reach a positioning with a good level of accuracy (as centimetric o better) in a short time and with an high reliability. The goal of this work is to investigate about the

  10. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of
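The two precision summaries compared in the study (the mean of the operators' standard deviations, and the spread of those standard deviations between operators) can be sketched in a few lines. The readings below are purely illustrative values, not data from the paper:

```python
from statistics import mean, stdev

# Hypothetical replicate readings, 10 plates per operator (illustrative only).
readings = {
    "op1": [24.1, 24.3, 23.9, 24.0, 24.2, 24.1, 23.8, 24.4, 24.0, 24.1],
    "op2": [24.6, 24.2, 24.9, 24.1, 24.7, 24.3, 24.8, 24.0, 24.5, 24.4],
    "op3": [23.8, 24.0, 23.7, 24.1, 23.9, 24.0, 23.6, 24.2, 23.9, 23.8],
}

# Within-operator precision: one standard deviation per operator.
sds = {op: stdev(v) for op, v in readings.items()}

# The two summary statistics contrasted in the study:
mean_of_sds = mean(sds.values())                       # overall within-operator precision
spread_of_sds = max(sds.values()) - min(sds.values())  # interoperator difference
```

Automated streaking in the study reduced the interoperator spread (the second statistic) without necessarily changing the first.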

  11. Accuracy and precision of MR blood oximetry based on the long paramagnetic cylinder approximation of large vessels.

    PubMed

    Langham, Michael C; Magland, Jeremy F; Epstein, Charles L; Floyd, Thomas F; Wehrli, Felix W

    2009-08-01

    An accurate noninvasive method to measure the hemoglobin oxygen saturation (%HbO(2)) of deep-lying vessels without catheterization would have many clinical applications. Quantitative MRI may be the only imaging modality that can address this difficult and important problem. MR susceptometry-based oximetry for measuring blood oxygen saturation in large vessels models the vessel as a long paramagnetic cylinder immersed in an external field. The intravascular magnetic susceptibility relative to surrounding muscle tissue is a function of oxygenated hemoglobin (HbO(2)) and can be quantified with a field-mapping pulse sequence. In this work, the method's accuracy and precision were investigated theoretically on the basis of an analytical expression for the arbitrarily oriented cylinder, as well as experimentally in phantoms and in vivo in the femoral artery and vein at 3T field strength. Errors resulting from vessel tilt, noncircularity of vessel cross-section, and induced magnetic field gradients were evaluated, and methods for correction were designed and implemented. Hemoglobin saturation was measured at successive vessel segments differing in geometry, such as eccentricity and vessel tilt, but with constant blood oxygen saturation levels, as a means to evaluate measurement consistency. The average standard error and coefficient of variation of measurements in phantoms were <2% with tilt correction alone, in agreement with theory, suggesting that high accuracy and reproducibility can be achieved while ignoring noncircularity for tilt angles up to about 30 degrees. In vivo, repeated measurements of %HbO(2) in the femoral vessels yielded a coefficient of variation of less than 5%. In conclusion, the data suggest that %HbO(2) can be measured reproducibly in vivo in large vessels of the peripheral circulation on the basis of the paramagnetic cylinder approximation of the incremental field. PMID:19526517
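The long-cylinder model above can be sketched as a forward phase model and its inversion. All constants are illustrative assumptions (SI-unit susceptibility difference for fully deoxygenated vs oxygenated blood, 3 T field), not values from the paper:

```python
import math

GAMMA = 2.675e8                   # proton gyromagnetic ratio (rad s^-1 T^-1)
DCHI_DO = 4 * math.pi * 0.27e-6   # assumed deoxy-oxy susceptibility difference (SI)
B0 = 3.0                          # main field strength (T)

def phase_shift(y, hct, theta_deg, d_te):
    """Forward model: intra/extravascular phase difference for a long cylinder
    tilted by theta, with oxygen saturation y (fraction) and hematocrit hct."""
    dchi = DCHI_DO * hct * (1.0 - y)   # susceptibility vs surrounding tissue
    geom = (3 * math.cos(math.radians(theta_deg)) ** 2 - 1) / 6.0
    return GAMMA * d_te * B0 * dchi * geom

def hbo2_from_phase(dphi, hct, theta_deg, d_te):
    """Invert the model: estimate %HbO2 (as a fraction) from the measured phase."""
    geom = (3 * math.cos(math.radians(theta_deg)) ** 2 - 1) / 6.0
    return 1.0 - dphi / (GAMMA * d_te * B0 * DCHI_DO * hct * geom)
```

The tilt correction discussed in the abstract corresponds to the (3cos²θ − 1) geometry factor; ignoring it biases the inverted saturation.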

  12. Methods in Use for Sensitivity Analysis, Uncertainty Evaluation, and Target Accuracy Assessment

    SciTech Connect

    G. Palmiotti; M. Salvatores; G. Aliberti

    2007-10-01

    Sensitivity coefficients can be used for different objectives, such as uncertainty estimates, design optimization, determination of target accuracy requirements, adjustment of input parameters, and evaluation of the representativity of an experiment with respect to a reference design configuration. In this paper the theory, based on the adjoint approach, that is implemented in the ERANOS fast reactor code system is presented, along with some unique tools and features related to specific types of problems, as is the case for nuclide transmutation, reactivity loss during the cycle, decay heat, the neutron source associated with fuel fabrication, and experiment representativity.
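A minimal sketch of the uncertainty-propagation ("sandwich") rule that such sensitivity coefficients feed into, var(R) = SᵀCS, where S holds relative sensitivities of an integral response R to input parameters and C is their relative covariance matrix. The numbers are illustrative, not ERANOS output:

```python
# Relative sensitivity coefficients (dR/R per dp/p) for three input parameters.
S = [0.8, -0.3, 0.1]

# Relative covariance matrix of the input parameters (illustrative values).
C = [[4.0e-4, 1.0e-4, 0.0],
     [1.0e-4, 9.0e-4, 0.0],
     [0.0,    0.0,    1.0e-4]]

# Sandwich rule: var(R) = S^T C S
CS = [sum(C[i][j] * S[j] for j in range(len(S))) for i in range(len(S))]
var_R = sum(S[i] * CS[i] for i in range(len(S)))
rel_uncertainty = var_R ** 0.5   # relative 1-sigma uncertainty on R
```

Target accuracy assessment runs the same machinery in reverse: it asks which entries of C must shrink for `rel_uncertainty` to meet a design goal.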

  13. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensors-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374
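The correction step described above (a per-measurement regression of MMC against MBS values) can be sketched with an ordinary least-squares fit. The paired measurements below are illustrative, not data from the study:

```python
from statistics import mean

# Hypothetical paired measurements (cm) of one body segment.
mmc = [52.1, 55.4, 49.8, 58.0, 53.2, 56.7, 51.0, 54.3]   # Kinect (MMC)
mbs = [50.0, 53.1, 47.9, 55.6, 51.1, 54.4, 49.0, 52.2]   # stereophotogrammetry (MBS)

# Ordinary least squares: mbs ~ a * mmc + b
mx, my = mean(mmc), mean(mbs)
a = sum((x - mx) * (y - my) for x, y in zip(mmc, mbs)) / sum((x - mx) ** 2 for x in mmc)
b = my - a * mx

# Apply the regression equation to correct new MMC readings.
corrected = [a * x + b for x in mmc]
```

A systematic offset between the two systems is harmless as long as it is reproducible, which is exactly what makes this correction possible.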

  14. ICan: An Optimized Ion-Current-Based Quantification Procedure with Enhanced Quantitative Accuracy and Sensitivity in Biomarker Discovery

    PubMed Central

    2015-01-01

    The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spiked-in experiment was used to evaluate the ability of ICan to detect small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by an IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined to be significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
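The significance cutoff used in the spike-in evaluation (≥1.3-fold change and p ≤ 0.05) can be sketched as a simple filter. The protein names, ratios, and p-values below are illustrative stand-ins, not results from the paper:

```python
# (name, measured ratio, p-value) — illustrative values only.
proteins = [
    ("MAP_1", 1.71, 0.001),
    ("MAP_2", 1.45, 0.020),
    ("ECOLI_1", 1.02, 0.800),   # reference protein, expected unchanged
    ("ECOLI_2", 0.97, 0.650),   # reference protein, expected unchanged
]

def is_altered(ratio, p, fc_cut=1.3, p_cut=0.05):
    """Call a protein altered when it passes both cutoffs."""
    fold = ratio if ratio >= 1 else 1 / ratio   # treat up/down symmetrically
    return fold >= fc_cut and p <= p_cut

altered = [name for name, r, p in proteins if is_altered(r, p)]
```

In the spike-in design, a good pipeline flags the spiked (MAP) proteins and leaves the unchanged reference proteins alone, which is what the fold-change plus p-value pair tests jointly.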

  15. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  16. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC
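The final quality selection described above (keep a retrieval only when χ² < 1 and the noise-to-information-content ratio is below 0.15%) can be sketched as a simple filter. The record field names and example numbers are assumptions; 0.15% is written as the fraction 0.0015:

```python
# Illustrative retrieval records; field names are assumed for this sketch.
retrievals = [
    {"chi2": 0.8, "rms_noise": 0.0010, "info_content": 1.8},
    {"chi2": 1.3, "rms_noise": 0.0009, "info_content": 1.9},   # fails chi^2 test
    {"chi2": 0.7, "rms_noise": 0.0040, "info_content": 1.2},   # fails noise ratio test
]

def passes_qc(r, ratio_cut=0.0015):
    """Apply both quality thresholds to one retrieval."""
    return r["chi2"] < 1.0 and (r["rms_noise"] / r["info_content"]) < ratio_cut

good = [r for r in retrievals if passes_qc(r)]
```

Screening on both criteria rejects retrievals that fit the spectrum poorly as well as those where noise overwhelms the information actually retrieved.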

  17. Airborne Laser CO2 Column Measurements: Evaluation of Precision and Accuracy Under a Wide Range of Surface and Atmospheric Conditions

    NASA Astrophysics Data System (ADS)

    Browell, E. V.; Dobler, J. T.; Kooi, S. A.; Fenn, M. A.; Choi, Y.; Vay, S. A.; Harrison, F. W.; Moore, B.

    2011-12-01

    This paper discusses the latest flight test results of a multi-frequency intensity-modulated (IM) continuous-wave (CW) laser absorption spectrometer (LAS) that operates near 1.57 μm for remote CO2 column measurements. This IM-LAS system is under development for a future space-based mission to determine the global distribution of regional-scale CO2 sources and sinks, which is the objective of the NASA Active Sensing of CO2 Emissions during Nights, Days, and Seasons (ASCENDS) mission. A prototype of the ASCENDS system, called the Multi-frequency Fiber Laser Lidar (MFLL), has been flight tested in eleven airborne campaigns since May 2005. This paper compares the most recent results obtained during the 2010 and 2011 UC-12 and DC-8 flight tests, where MFLL remote CO2 column measurements were evaluated against airborne in situ CO2 profile measurements traceable to World Meteorological Organization standards. The major change to the MFLL system in 2011 was the implementation of several different IM modes, which could be quickly changed in flight, to directly compare the precision and accuracy of MFLL CO2 measurements in each mode. The different IM modes that were evaluated included "fixed" IM frequencies near 50, 200, and 500 kHz; frequencies changed in short time steps (Stepped); continuously swept frequencies (Swept); and a pseudo noise (PN) code. The Stepped, Swept, and PN modes were generated to evaluate the ability of these IM modes to desensitize MFLL CO2 column measurements to intervening optically thin aerosols/clouds. MFLL was flown on the NASA Langley UC-12 aircraft in May 2011 to evaluate the newly implemented IM modes and their impact on CO2 measurement precision and accuracy, and to determine which IM mode provided the greatest thin cloud rejection (TCR) for the CO2 column measurements. Within the current hardware limitations of the MFLL system, the "fixed" 50 kHz results produced similar SNR values to those found previously. The SNR decreased as expected

  18. Correlated fluorescence and 3D electron microscopy with high sensitivity and spatial precision

    PubMed Central

    Kukulski, Wanda; Schorb, Martin; Welsch, Sonja; Picco, Andrea

    2011-01-01

    Correlative electron and fluorescence microscopy has the potential to elucidate the ultrastructural details of dynamic and rare cellular events, but has been limited by low precision and sensitivity. Here we present a method for direct mapping of signals originating from ∼20 fluorescent protein molecules to 3D electron tomograms with a precision of less than 100 nm. We demonstrate that this method can be used to identify individual HIV particles bound to mammalian cell surfaces. We also apply the method to image microtubule end structures bound to mal3p in fission yeast, and demonstrate that growing microtubule plus-ends are flared in vivo. We localize Rvs167 to endocytic sites in budding yeast, and show that scission takes place halfway through a 10-s time period during which amphiphysins are bound to the vesicle neck. This new technique opens the door for direct correlation of fluorescence and electron microscopy to visualize cellular processes at the ultrastructural scale. PMID:21200030
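The registration step underlying such correlation can be sketched as a least-squares rigid fit from matched fiducials. For brevity this sketch is 2D (in-plane rotation plus translation) with synthetic coordinates; the paper's method works in 3D tomograms and includes further refinements:

```python
import math

# Fiducial positions in the fluorescence image (illustrative, nm).
fm = [(100.0, 200.0), (400.0, 220.0), (250.0, 500.0), (120.0, 430.0)]

# Same beads in the tomogram, here generated by a known rotation + translation.
ang, tx, ty = math.radians(12.0), 35.0, -18.0
em = [(math.cos(ang) * x - math.sin(ang) * y + tx,
       math.sin(ang) * x + math.cos(ang) * y + ty) for x, y in fm]

# Least-squares rigid fit (2D Kabsch): centroids, then the rotation angle.
cx_f = sum(p[0] for p in fm) / len(fm); cy_f = sum(p[1] for p in fm) / len(fm)
cx_e = sum(p[0] for p in em) / len(em); cy_e = sum(p[1] for p in em) / len(em)
num = sum((x - cx_f) * (v - cy_e) - (y - cy_f) * (u - cx_e)
          for (x, y), (u, v) in zip(fm, em))
den = sum((x - cx_f) * (u - cx_e) + (y - cy_f) * (v - cy_e)
          for (x, y), (u, v) in zip(fm, em))
theta = math.atan2(num, den)

def map_point(x, y):
    """Map a fluorescence coordinate into tomogram coordinates."""
    u = math.cos(theta) * (x - cx_f) - math.sin(theta) * (y - cy_f) + cx_e
    v = math.sin(theta) * (x - cx_f) + math.cos(theta) * (y - cy_f) + cy_e
    return u, v
```

With real data the fiducial positions carry localization noise, and the residuals of the fit set the achievable mapping precision.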

  19. Precisely Controlled Ultrathin Conjugated Polymer Films for Large Area Transparent Transistors and Highly Sensitive Chemical Sensors.

    PubMed

    Khim, Dongyoon; Ryu, Gi-Seong; Park, Won-Tae; Kim, Hyunchul; Lee, Myungwon; Noh, Yong-Young

    2016-04-01

    A uniform ultrathin polymer film is deposited over a large area with molecular-level precision by the simple wire-wound bar-coating method. The bar-coated ultrathin films not only exhibit high transparency of up to 90% in the visible wavelength range but also high charge carrier mobility with a high degree of percolation through the uniformly covered polymer nanofibrils. They are capable of realizing highly sensitive multigas sensors and represent the first successful report of ethylene detection using a sensor based on organic field-effect transistors. PMID:26849096

  20. Evaluation of precision and accuracy of the Borgwaldt RM20S(®) smoking machine designed for in vitro exposure.

    PubMed

    Kaur, Navneet; Lacasse, Martine; Roy, Jean-Philippe; Cabral, Jean-Louis; Adamson, Jason; Errington, Graham; Waldron, Karen C; Gaça, Marianna; Morin, André

    2010-12-01

    The Borgwaldt RM20S(®) smoking machine enables the generation, dilution, and transfer of fresh cigarette smoke to cell exposure chambers, for in vitro analyses. We present a study confirming the precision (repeatability r, reproducibility R) and accuracy of smoke dose generated by the Borgwaldt RM20S(®) system and delivery to exposure chambers. Due to the aerosol nature of cigarette smoke, the repeatability of the dilution of the vapor phase in air was assessed by quantifying two reference standard gases: methane (CH(4), r between 29.0 and 37.0 and RSD between 2.2% and 4.5%) and carbon monoxide (CO, r between 166.8 and 235.8 and RSD between 0.7% and 3.7%). The accuracy of dilution (percent error) for CH(4) and CO was between 6.4% and 19.5% and between 5.8% and 6.4%, respectively, over a 10-1000-fold dilution range. To corroborate our findings, a small inter-laboratory study was carried out for CH(4) measurements. The combined dilution repeatability had an r between 21.3 and 46.4, R between 52.9 and 88.4, RSD between 6.3% and 17.3%, and error between 4.3% and 13.1%. Based on the particulate component of cigarette smoke (3R4F), the repeatability (RSD = 12%) of the undiluted smoke generated by the Borgwaldt RM20S(®) was assessed by quantifying solanesol using high-performance liquid chromatography with ultraviolet detection (HPLC/UV). Finally, the repeatability (r between 0.98 and 4.53 and RSD between 8.8% and 12%) of the dilution of generated smoke particulate phase was assessed by quantifying solanesol following various dilutions of cigarette smoke. The findings in this study suggest the Borgwaldt RM20S(®) smoking machine is a reliable tool to generate and deliver repeatable and reproducible doses of whole smoke to in vitro cultures. PMID:21126153
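The repeatability (r) and reproducibility (R) limits reported above can be sketched in the ISO 5725 sense, where r = 2.8·s_r (pooled within-lab SD) and R = 2.8·s_R with s_R² combining within- and between-lab variance. This is a simplified sketch (the full ISO 5725 estimator also adjusts the between-lab variance for within-lab scatter), and the CH4 values are illustrative:

```python
from statistics import mean, stdev

# Hypothetical replicate CH4 measurements from two labs (illustrative units).
labs = {
    "lab_A": [30.1, 31.4, 29.8, 30.6],
    "lab_B": [33.0, 32.2, 33.5, 32.8],
}

within_var = mean(stdev(v) ** 2 for v in labs.values())   # pooled s_r^2
lab_means = [mean(v) for v in labs.values()]
between_var = stdev(lab_means) ** 2                       # simplified s_L^2

s_r = within_var ** 0.5
s_R = (between_var + within_var) ** 0.5
r_limit, R_limit = 2.8 * s_r, 2.8 * s_R   # 95% limits for duplicate results
```

By construction R ≥ r: two results from different labs can differ by more than two results from the same lab.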

  1. Effect of modulation frequency bandwidth on measurement accuracy and precision for digital diffuse optical spectroscopy (dDOS)

    NASA Astrophysics Data System (ADS)

    Jung, Justin; Istfan, Raeef; Roblyer, Darren

    2014-03-01

    Near-infrared (NIR) frequency-domain Diffuse Optical Spectroscopy (DOS) is an emerging technology with a growing number of potential clinical applications. In an effort to reduce DOS system complexity and improve portability, we recently demonstrated a direct digital sampling method that utilizes digital signal generation and detection as a replacement for more traditional analog methods. In our technique, a fast analog-to-digital converter (ADC) samples the detected time-domain radio frequency (RF) waveforms at each modulation frequency in a broad-bandwidth sweep (50-300 MHz). While we have shown this method provides comparable results to other DOS technologies, the process is data intensive, as digital samples must be stored and processed for each modulation frequency and wavelength. Here we explore the effect of reducing the modulation frequency bandwidth on the accuracy and precision of extracted optical properties. To accomplish this, the performance of the digital DOS (dDOS) system was compared to a gold-standard network-analyzer-based DOS system. With a starting frequency of 50 MHz, the input signal of the dDOS system was swept to 100, 150, 250, or 300 MHz in 4 MHz increments, and results were compared to full 50-300 MHz network-analyzer DOS measurements. The average errors in extracted μa and μs' with dDOS were lowest for the full 50-300 MHz sweep (less than 3%) and were within 3.8% for frequency bandwidths as narrow as 50-150 MHz. The errors increased to as much as 9.0% when a bandwidth of 50-100 MHz was tested. These results demonstrate the possibility of reduced data collection with dDOS without critically compromising optical property extraction.

  2. Pulp responses to precise thermal stimuli in dentin-sensitive teeth.

    PubMed

    Leffingwell, Clifford S; Meinberg, Trudy A; Wagner, Joshua G; Gound, Tom G; Marx, David B; Reinhardt, Richard A

    2004-06-01

    The purpose of this study was to determine whether pulpal responses to cold temperatures applied to enamel, using a method that precisely controls the intensity of the cold stimulus or measures the response time, could distinguish dentin-sensitive teeth from nonsensitive teeth. Eighteen human subjects were stimulated with cold temperatures decreasing in 5 degree C intervals (and with tetrafluoroethane) on exposed root and enamel of a dentin-sensitive tooth and enamel of a contralateral nonsensitive tooth. Pain threshold, intensity of pain, time to pain onset, and duration of pain at baseline, 4 h, 8 h, and 1 week were measured. Responses to enamel stimulation of sensitive teeth compared with the nonsensitive teeth usually were highly correlated and not significantly different. The exception was a longer duration of pain in the dentin-sensitive teeth (4.62 +/- 0.47 s) compared with nonsensitive teeth (2.92 +/- 0.49 s; p = 0.016) after enamel stimulation with tetrafluoroethane. Longitudinal studies are necessary to determine whether these slight increases in pain duration indicate an increased probability of pulpal degeneration or need for dentin protection. PMID:15167462

  3. Ultrahigh precise and sensitive measurement of optical rotation based on photo-elastic modulation

    NASA Astrophysics Data System (ADS)

    Li, Kewu; Wang, Zhibin; Wang, Liming

    2015-11-01

    A novel technique for improving the measurement sensitivity of optical rotation, based on photo-elastic modulation, is presented. The probe laser passes in sequence through a polarizer, the sample under measurement, a photo-elastic modulator (PEM), and an analyzer before reaching the detector. Using the minimum number of optical elements avoids measurement errors that might be introduced by additional optics in the detection path. In addition, a reference light path is incorporated into the measurement system, and a differential balanced-detection method is employed to obtain the AC and DC signals, which efficiently eliminates the common-mode noise of the light source; the AC signal is then preamplified and read out by a lock-in amplifier, further enhancing the measurement sensitivity. Verification experiments show that the precision reaches 0.4% and the sensitivity reaches 3.17×10-7 rad, realizing a more accurate and sensitive measurement of optical rotation than previously reported.

  4. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  5. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…
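The MDES calculation underlying such a power analysis can be sketched for a balanced two-arm group-randomized trial, using the standard approximation MDES ≈ M·√(4(ρ + (1 − ρ)/n)/J), where J is the number of randomized groups, n the group size, ρ the intraclass correlation, and M ≈ 2.8 for α = .05 (two-tailed) at 80% power. Covariate adjustment is omitted for simplicity and the inputs are illustrative:

```python
import math

def mdes(J, n, rho, M=2.8):
    """Minimum detectable effect size, balanced two-arm cluster design
    (no covariates; M approximates the t-distribution multiplier)."""
    return M * math.sqrt(4 * (rho + (1 - rho) / n) / J)

# Doubling the number of randomized groups shrinks the MDES by sqrt(2),
# while enlarging the groups helps far less once rho dominates.
mdes_40 = mdes(J=40, n=60, rho=0.15)
mdes_80 = mdes(J=80, n=60, rho=0.15)
```

This is why the precision of a group-randomized trial hinges mainly on J, the number of units actually randomized, rather than on total sample size.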

  6. Deconvolution improves the accuracy and depth sensitivity of time-resolved measurements

    NASA Astrophysics Data System (ADS)

    Diop, Mamadou; St. Lawrence, Keith

    2013-03-01

    Time-resolved (TR) techniques have the potential to distinguish early- from late-arriving photons. Since light travelling through superficial tissue is detected earlier than photons that penetrate the deeper layers, time-windowing can in principle be used to improve the depth sensitivity of TR measurements. However, TR measurements also contain instrument contributions - referred to as the instrument-response-function (IRF) - which cause temporal broadening of the measured temporal-point-spread-function (TPSF). In this report, we investigate the influence of the IRF on pathlength-resolved absorption changes (Δμa) retrieved from TR measurements using the microscopic Beer-Lambert law (MBLL). TPSFs were acquired on homogeneous and two-layer tissue-mimicking phantoms with varying optical properties. The measured IRF and TPSFs were deconvolved to recover the distribution of time-of-flights (DTOFs) of the detected photons. The microscopic Beer-Lambert law was applied to early and late time-windows of the TPSFs and DTOFs to assess the effects of the IRF on pathlength-resolved Δμa. The analysis showed that the late part of the TPSFs contains substantial contributions from early-arriving photons, due to the smearing effects of the IRF, which reduced its sensitivity to absorption changes occurring in deep layers. We also demonstrated that the effects of the IRF can be efficiently eliminated by applying a robust deconvolution technique, thereby improving the accuracy and sensitivity of TR measurements to deep-tissue absorption changes.
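The relationship above (measured TPSF = true DTOF convolved with the IRF) and its inversion can be sketched with a pure-Python Richardson-Lucy deconvolution, one robust, non-negativity-preserving choice among several; the circular convolution and the synthetic signals are simplifying assumptions of this sketch, not the paper's algorithm:

```python
def cconv(a, b):
    """Circular convolution of two equal-length sequences."""
    n = len(a)
    return [sum(a[j] * b[(k - j) % n] for j in range(n)) for k in range(n)]

def richardson_lucy(measured, irf, iters=500):
    """Multiplicative (EM-style) deconvolution; keeps the estimate non-negative."""
    n = len(measured)
    s = sum(irf)
    irf = [v / s for v in irf]            # normalize the IRF to unit area
    est = [1.0] * n                       # flat, non-negative starting guess
    for _ in range(iters):
        pred = cconv(est, irf)
        ratio = [m / max(p, 1e-12) for m, p in zip(measured, pred)]
        # Correlate the ratio with the IRF (flipped-kernel convolution).
        corr = [sum(ratio[k] * irf[(k - j) % n] for k in range(n)) for j in range(n)]
        est = [e * c for e, c in zip(est, corr)]
    return est

# Synthetic example: a narrow "true" DTOF broadened by an asymmetric IRF.
n = 32
true = [0.0] * n; true[10] = 1.0; true[11] = 0.6
irf = [0.0] * n; irf[0], irf[1], irf[2] = 0.5, 0.3, 0.2
measured = cconv(true, irf)               # the TPSF the instrument would record
recovered = richardson_lucy(measured, irf)
```

On noiseless data the iteration converges toward the true DTOF; with real, noisy TPSFs the iteration count acts as a regularizer and must be chosen with care.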

  7. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    PubMed Central

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  8. A Simple, High-Precision, High-Sensitivity Tracer Assay for N2 Fixation

    PubMed Central

    Montoya, J. P.; Voss, M.; Kahler, P.; Capone, D. G.

    1996-01-01

    We describe a simple, precise, and sensitive experimental protocol for direct measurement of N2 fixation using the conversion of 15N2 to organic N. Our protocol greatly reduces the limit of detection for N2 fixation by taking advantage of the high sensitivity of a modern, multiple-collector isotope ratio mass spectrometer. This instrument allowed measurement of N2 fixation by natural assemblages of plankton in incubations lasting several hours in the presence of relatively low-level (ca. 10 atom%) tracer additions of 15N2 to the ambient pool of N2. The sensitivity and precision of this tracer method are comparable to or better than those associated with the C2H2 reduction assay. Data obtained in a series of experiments in the Gotland Basin of the Baltic Sea showed excellent agreement between 15N2 tracer and C2H2 reduction measurements, with the largest discrepancies between the methods occurring at very low fixation rates. The ratio of C2H2 reduced to N2 fixed was 4.68 ± 0.11 (mean ± standard error, n = 39). In these experiments, the rate of C2H2 reduction was relatively insensitive to assay volume. Our results, the first for planktonic diazotroph populations of the Baltic, confirm the validity of the C2H2 reduction method as a quantitative measure of N2 fixation in this system. Our 15N2 protocols are comparable to standard C2H2 reduction procedures, which should promote use of direct 15N2 fixation measurements in other systems. PMID:16535283
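The rate calculation behind such a 15N2 tracer assay can be sketched as follows: fixation is inferred from the increase in 15N atom% of particulate nitrogen (PN) over the incubation, scaled by the enrichment of the dissolved N2 pool. The function and all input values are illustrative assumptions (natural-abundance PN near 0.366 atom%, a ~10 atom% enriched N2 pool), not the paper's exact formulation:

```python
def n2_fixation_rate(atpct_pn_final, atpct_pn_initial, atpct_n2, pn_conc, hours):
    """Return the N2 fixation rate in the same N units as pn_conc, per hour."""
    excess_pn = atpct_pn_final - atpct_pn_initial    # tracer taken up into PN
    excess_n2 = atpct_n2 - atpct_pn_initial          # enrichment of the source pool
    return (excess_pn / excess_n2) * pn_conc / hours

rate = n2_fixation_rate(
    atpct_pn_final=0.395,    # 15N atom% of PN after incubation (illustrative)
    atpct_pn_initial=0.366,  # natural abundance of 15N
    atpct_n2=10.0,           # ~10 atom% tracer addition to the N2 pool
    pn_conc=5.0,             # particulate N concentration, umol N per liter
    hours=6.0,
)
```

The small atom%-excess signal is why the multiple-collector mass spectrometer's sensitivity matters: it sets how small a change in PN enrichment the assay can resolve.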

  9. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  10. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  11. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  12. Millimeter-accuracy GPS landslide monitoring using Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution: a case study in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.

    2013-03-01

    Continuous Global Positioning System (GPS) monitoring is essential for establishing the rate and pattern of superficial movements of landslides. This study demonstrates a technique which uses a stand-alone GPS station to conduct millimeter-accuracy landslide monitoring. The Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution employed by the GIPSY/OASIS software package (V6.1.2) was applied in this study. Two years of continuous GPS data collected at a creeping landslide were used to evaluate the accuracy of the PPP-SRPA solutions. The criterion for accuracy was the root-mean-square (RMS) of residuals of the PPP-SRPA solutions with respect to "true" landslide displacements over the two-year period. RMS is often regarded as repeatability or precision in GPS literature. However, when contrasted with a known "true" position or displacement it could be termed RMS accuracy or simply accuracy. This study indicated that the PPP-SRPA resolution can provide an accuracy of 2 to 3 mm horizontally and 8 mm vertically for 24-hour sessions with few outliers (< 1%) in the Puerto Rico region. Horizontal accuracy below 5 mm can be stably achieved with 4-hour or longer sessions, provided that data collection during extreme weather conditions is avoided. Vertical accuracy below 10 mm can be achieved with 8-hour or longer sessions. This study indicates that the PPP-SRPA resolution is competitive with the conventional carrier-phase double-difference network resolution for static (longer than 4 hours) landslide monitoring while maintaining many advantages. The PPP-SRPA method is thus an attractive alternative to the conventional carrier-phase double-difference method for landslide monitoring, notably in remote areas or developing countries.
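
    The RMS-accuracy criterion above is simply the root-mean-square of the solution residuals against the known "true" displacement. A minimal sketch of that computation (the daily solutions and creep trend below are hypothetical, not GIPSY/OASIS output):

```python
import math

def rms_accuracy(estimates, truths):
    """Root-mean-square of residuals against known 'true' displacements (mm)."""
    residuals = [e - t for e, t in zip(estimates, truths)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Hypothetical daily north-component solutions (mm) vs. a steady creep trend
daily_solutions = [0.2, 1.1, 2.3, 2.9, 4.2]
true_creep = [0.0, 1.0, 2.0, 3.0, 4.0]
rms = rms_accuracy(daily_solutions, true_creep)
```

    When the reference is a known position, this RMS measures accuracy; computed against the solutions' own mean it would measure repeatability (precision), which is the distinction the abstract draws.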

  13. High precision and high accuracy isotopic measurement of uranium using lead and thorium calibration solutions by inductively coupled plasma-multiple collector-mass spectrometry

    SciTech Connect

    Bowen, I.; Walder, A.J.; Hodgson, T.; Parrish, R.R.

    1998-12-31

    A novel method for the high accuracy and high precision measurement of uranium isotopic composition by Inductively Coupled Plasma-Multiple Collector-Mass Spectrometry is discussed. Uranium isotopic samples are spiked with either thorium or lead for use as internal calibration reference materials. This method eliminates the necessity to periodically measure uranium standards to correct for changing mass bias when samples are measured over long time periods. This technique has generated among the highest levels of analytical precision on both the major and minor isotopes of uranium. Sample throughput has also been demonstrated to exceed Thermal Ionization Mass Spectrometry by a factor of four to five.

  14. A material sensitivity study on the accuracy of deformable organ registration using linear biomechanical models

    SciTech Connect

    Chi, Y.; Liang, J.; Yan, D.

    2006-02-15

    Model-based deformable organ registration techniques using the finite element method (FEM) have recently been investigated intensively and applied to image-guided adaptive radiotherapy (IGART). These techniques assume that human organs are linearly elastic material, and their mechanical properties are predetermined. Unfortunately, the accurate measurement of the tissue material properties is challenging and the properties usually vary between patients. A common issue is therefore the achievable accuracy of the calculation due to the limited access to tissue elastic material constants. In this study, we performed a systematic investigation on this subject based on tissue biomechanics and computer simulations to establish the relationships between achievable registration accuracy and tissue mechanical and organ geometrical properties. Primarily we focused on image registration for three organs: rectal wall, bladder wall, and prostate. The tissue anisotropy due to orientation preference in tissue fiber alignment is captured by using an orthotropic or a transversely isotropic elastic model. First we developed biomechanical models for the rectal wall, bladder wall, and prostate using simplified geometries and investigated the effect of varying material parameters on the resulting organ deformation. Then computer models based on patient image data were constructed, and image registrations were performed. The sensitivity of registration errors was studied by perturbing the tissue material properties from their mean values while fixing the boundary conditions. The simulation results demonstrated that registration error for a subvolume increases as its distance from the boundary increases. Also, a variable associated with material stability was found to be a dominant factor in registration accuracy in the context of material uncertainty. For hollow thin organs such as rectal walls and bladder walls, the registration errors are limited. Given 30% in material uncertainty

  15. Digital PCR methods improve detection sensitivity and measurement precision of low abundance mtDNA deletions

    PubMed Central

    Belmonte, Frances R.; Martin, James L.; Frescura, Kristin; Damas, Joana; Pereira, Filipe; Tarnopolsky, Mark A.; Kaufman, Brett A.

    2016-01-01

    Mitochondrial DNA (mtDNA) mutations are a common cause of primary mitochondrial disorders, and have also been implicated in a broad collection of conditions, including aging, neurodegeneration, and cancer. Prevalent among these pathogenic variants are mtDNA deletions, which show a strong bias for the loss of sequence in the major arc between, but not including, the heavy and light strand origins of replication. Because individual mtDNA deletions can accumulate focally, occur with multiple mixed breakpoints, and in the presence of normal mtDNA sequences, methods that detect broad-spectrum mutations with enhanced sensitivity and limited costs have both research and clinical applications. In this study, we evaluated semi-quantitative and digital PCR-based methods of mtDNA deletion detection using double-stranded reference templates or biological samples. Our aim was to describe key experimental assay parameters that will enable the analysis of low levels or small differences in mtDNA deletion load during disease progression, with limited false-positive detection. We determined that the digital PCR method significantly improved mtDNA deletion detection sensitivity through absolute quantitation, improved precision and reduced assay standard error. PMID:27122135
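
    Absolute quantitation in digital PCR rests on Poisson statistics over the partitions: the fraction of positive partitions gives the mean number of target copies per partition. A sketch of that standard calculation, with hypothetical partition counts (not data from this study):

```python
import math

def poisson_copies(positive, total):
    """Total target copies in a digital PCR reaction, assuming Poisson
    loading of targets into partitions (standard dPCR quantitation)."""
    p = positive / total
    lam = -math.log(1.0 - p)  # mean copies per partition
    return lam * total

# Hypothetical partition counts for deletion-specific and total-mtDNA assays
deletion = poisson_copies(positive=150, total=20000)
wild_type = poisson_copies(positive=12000, total=20000)
deletion_load = deletion / (deletion + wild_type)  # fraction of deleted mtDNA
```

    Because quantitation comes from counting partitions rather than from a standard curve, low deletion loads can be resolved with smaller standard error, which is the precision advantage the study reports.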

  16. Detailed high-accuracy megavoltage transmission measurements: A sensitive experimental benchmark of EGSnrc

    SciTech Connect

    Ali, E. S. M.; McEwen, M. R.; Rogers, D. W. O.

    2012-10-15

    Purpose: There are three goals for this study: (a) to perform detailed megavoltage transmission measurements in order to identify the factors that affect the measurement accuracy, (b) to use the measured data as a benchmark for the EGSnrc system in order to identify the computational limiting factors, and (c) to provide data for others to benchmark Monte Carlo codes. Methods: Transmission measurements are performed at the National Research Council Canada on a research linac whose incident electron parameters are independently known. Automated transmission measurements are made on-axis, down to a transmission value of ~1.7%, for eight beams between 10 MV (the lowest stable MV beam on the linac) and 30 MV, using fully stopping Be, Al, and Pb bremsstrahlung targets and no flattening filters. To diversify energy differentiation, data are acquired for each beam using low-Z and high-Z attenuators (C and Pb) and Farmer chambers with low-Z and high-Z buildup caps. Experimental corrections are applied for beam drifts (2%), polarity (2.5% typical maximum, 6% extreme), ion recombination (0.2%), leakage (0.3%), and room scatter (0.8%); the values in parentheses are the largest corrections applied. The experimental setup and the detectors are modeled using EGSnrc, with the newly added photonuclear attenuation included (up to a 5.6% effect). A detailed sensitivity analysis is carried out for the measured and calculated transmission data. Results: The developed experimental protocol allows for transmission measurements with 0.4% uncertainty on the smallest signals. Suggestions for accurate transmission measurements are provided. Measurements and EGSnrc calculations agree typically within 0.2% for the sensitivity of the transmission values to the detector details, to the bremsstrahlung target material, and to the incident electron energy. Direct comparison of the measured and calculated transmission data shows agreement better than 2% for C (3.4% for the 10 MV beam) and

  17. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The primary mission objective of the US Navy's GEOSAT Follow-On spacecraft (GFO) is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  18. Updating Mars-GRAM to Increase the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hiliary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). During the Mars Science Laboratory (MSL) site selection process, it was discovered that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear set to 0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This has resulted in an imprecise atmospheric density at all altitudes. As a preliminary fix to this pressure-density problem, density factor values were determined for tau=0.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. Currently, these density factors are fixed values for all latitudes and Ls. Results will be presented from work being done to derive better multipliers by including variation with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data. The addition of these more precise density factors to Mars-GRAM 2005 Release 1.4 will improve the results of the sensitivity studies done for large optical depths.
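
    The density-factor fix described above is a multiplicative correction of the MapYear-0 MGCM output toward TES-consistent values. A toy illustration with invented numbers (not actual Mars-GRAM or TES values):

```python
# Hypothetical densities (kg/m^3) at one altitude bin: MGCM MapYear-0 run
# with fixed tau=3 versus a TES-based estimate at comparable dust loading
rho_mgcm = 1.2e-2
rho_tes = 1.5e-2

density_factor = rho_tes / rho_mgcm       # fixed multiplier for this tau
rho_adjusted = rho_mgcm * density_factor  # recovers the observed density
```

    Deriving the factors as functions of latitude and Ls, as the abstract proposes, would replace this single constant with a lookup over those variables.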

  19. Analysis of the accuracy and precision of the McMaster method in detection of the eggs of Toxocara and Trichuris species (Nematoda) in dog faeces.

    PubMed

    Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew

    2013-07-01

    The aim of this study was to determine the accuracy and precision of McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in faeces of dogs. Four variants of McMaster method were used for counting: in one grid, two grids, the whole McMaster chamber and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis in 1 g of faeces. To compare the influence of kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched only with A. suum eggs in the same way. The highest limit of detection (the lowest level of eggs that were detected in at least 50% of repetitions) in all McMaster chamber variants were obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best results of the limit of detection, sensitivity and the lowest coefficients of variation were obtained with the use of the whole McMaster chamber variant. There was no significant impact of properties of faeces on the obtained results. Multiplication factors for the whole chamber were calculated on the basis of the transformed equation of the regression line, illustrating the relationship between the number of detected eggs and that of the eggs added to the sample. Multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected using McMaster method with Raynaud modification. PMID:23951934
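
    The multiplication factor is obtained from the regression of detected counts on added egg counts: the inverse of the recovery slope converts a chamber count back to eggs per gram. A sketch with invented counts (not the study's data), using a through-origin least-squares slope:

```python
def slope_through_origin(added, detected):
    """Least-squares slope of detected vs. added egg counts (no intercept)."""
    num = sum(a * d for a, d in zip(added, detected))
    den = sum(a * a for a in added)
    return num / den

added = [15, 25, 50, 100, 150, 200, 250, 300]  # eggs/g spiked into faeces
detected = [2, 5, 11, 22, 35, 44, 57, 68]      # hypothetical mean chamber counts
recovery = slope_through_origin(added, detected)
multiplication_factor = 1.0 / recovery         # scales a count back to eggs/g
```

    A recovery slope well below 1, as in this toy data, is why the empirically derived multiplication factors exceed the nominal ones of the Raynaud modification.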

  20. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.

  1. The sensitivity and accuracy of a cone beam CT in detecting the chorda tympani.

    PubMed

    Hiraumi, Harukazu; Suzuki, Ryo; Yamamoto, Norio; Sakamoto, Tatsunori; Ito, Juichi

    2016-04-01

    The facial recess approach through posterior tympanotomy is the standard approach in cochlear implantation surgery. The size of the facial recess is highly variable, depending on the course of the chorda tympani. Despite their clinical importance, little is known about the sensitivity and accuracy of imaging studies in the detection of the chorda tympani. A total of 13 human temporal bones were included in this study. All of the temporal bones were submitted to a cone beam CT (Accuitomo, Morita, Japan). The multi-planar reconstruction images were rotated around the mastoid portion of the facial nerve to locate the branches of the facial nerve. A branch was diagnosed as the chorda tympani when it entered the tympanic cavity near the notch of Rivinus. The distance between the bifurcation and the tip of the short crus of the incus was measured. In all temporal bones, the canal of the chorda tympani or the posterior canaliculus was detected. In the CT-based evaluation, the average distance from the bifurcation to the incus short crus was 12.6 mm (8.3-15.8 mm). The actual distance after dissection was 12.4 mm (8.2-16.4 mm). The largest difference between the distances evaluated with the two procedures was 1.1 mm. Cone beam CT is very useful in detecting the course of the chorda tympani within the temporal bone. The measured distance is accurate. PMID:25956616

  2. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
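
    The accuracy (RMSE against the known position) and precision (SD of the tracking errors) reported above can be computed as follows; the measured positions here are invented for illustration:

```python
import math
import statistics

def accuracy_and_precision(measured, reference):
    """RMSE of errors vs. reference (accuracy) and their SD (precision)."""
    errors = [m - r for m, r in zip(measured, reference)]
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return rmse, statistics.stdev(errors)

# Hypothetical static test: a marker held at a known 10.00 mm position
measured = [10.12, 9.98, 10.20, 9.91, 10.15]
rmse, sd = accuracy_and_precision(measured, [10.00] * len(measured))
```

    RMSE folds any systematic offset into the error, while the SD captures only scatter, which is why the study reports the two numbers separately.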

  3. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  4. Improving Mars-GRAM: Increasing the Accuracy of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    Extensively utilized for numerous mission applications, the Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model. In a Monte-Carlo mode, Mars-GRAM's perturbation modeling capability is used to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). Mars-GRAM has been found to be inexact when used during the Mars Science Laboratory (MSL) site selection process for sensitivity studies for MapYear=0 and large optical depth values such as tau=3. Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM) from the surface to 80 km altitude. Mars-GRAM with the MapYear parameter set to 0 utilizes results from a MGCM run with a fixed value of tau=3 at all locations for the entire year. Imprecise atmospheric density and pressure at all altitudes is a consequence of this use of MGCM with tau=3. Density factor values have been determined for tau=0.3, 1 and 3 as a preliminary fix to this pressure-density problem. These factors adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with Thermal Emission Spectrometer (TES) observations for MapYears 1 and 2 at comparable dust loading. These density factors are fixed values for all latitudes and Ls and are included in Mars-GRAM Release 1.3. Work currently being done, to derive better multipliers by including variations with latitude and/or Ls by comparison of MapYear 0 output directly against TES limb data, will be highlighted in the presentation. The TES limb data utilized in this process has been validated by a comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS). This comparison study was undertaken for locations on Mars of varying latitudes, Ls, and LTST. The more precise density factors will be included in Mars-GRAM 2005 Release 1.4 and thus improve the results of future sensitivity studies done for large optical depths.

  5. Spatial variability in sensitivity of reference crop ET to accuracy of climate data in the Texas High Plains

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A detailed sensitivity analysis was conducted to determine the relative effects of measurement errors in climate data input parameters on the accuracy of calculated reference crop evapotranspiration (ET) using the ASCE-EWRI Standardized Reference ET Equation. Data for the period of 1995 to 2008, fro...

  6. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  7. Precision electron polarimetry

    SciTech Connect

    Chudakov, E.

    2013-11-07

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  8. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    SciTech Connect

    Kaminski, Radoslaw; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-06-24

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3-4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample.

  9. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3–4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  10. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
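
    Agreement between the two model types was assessed with mean differences and the Bland-Altman method; a minimal sketch of that analysis on hypothetical paired measurements (not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between methods."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired arch-width measurements (mm): plaster vs. PUT models
plaster = [35.1, 42.0, 28.4, 51.2, 33.7]
put = [35.3, 42.3, 28.5, 51.1, 34.0]
bias, lo, hi = bland_altman(plaster, put)
```

    Good agreement shows as a bias near zero with limits of agreement narrower than the clinically acceptable difference.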

  11. Accuracy and precision of end-expiratory lung-volume measurements by automated nitrogen washout/washin technique in patients with acute respiratory distress syndrome

    PubMed Central

    2011-01-01

    Introduction End-expiratory lung volume (EELV) is decreased in acute respiratory distress syndrome (ARDS), and bedside EELV measurement may help to set positive end-expiratory pressure (PEEP). Nitrogen washout/washin for EELV measurement is available at the bedside, but assessments of accuracy and precision in real-life conditions are scant. Our purpose was to (a) assess EELV measurement precision in ARDS patients at two PEEP levels (three pairs of measurements), and (b) compare the changes (Δ) induced by PEEP for total EELV with the PEEP-induced changes in lung volume above functional residual capacity measured with passive spirometry (ΔPEEP-volume). The minimal predicted increase in lung volume was calculated from compliance at low PEEP and ΔPEEP to ensure the validity of lung-volume changes. Methods Thirty-four patients with ARDS were prospectively included in five university-hospital intensive care units. ΔEELV and ΔPEEP volumes were compared between 6 and 15 cm H2O of PEEP. Results After exclusion of three patients, variability of the nitrogen technique was less than 4%, and the largest difference between measurements was 81 ± 64 ml. ΔEELV and ΔPEEP-volume were only weakly correlated (r2 = 0.47; 95% confidence interval limits, -414 to 608 ml). In four patients with the highest PEEP (≥ 16 cm H2O), ΔEELV was lower than the minimal predicted increase in lung volume, suggesting flawed measurements, possibly due to leaks. Excluding those from the analysis markedly strengthened the correlation between ΔEELV and ΔPEEP volume (r2 = 0.80). Conclusions In most patients, the EELV technique has good reproducibility and accuracy, even at high PEEP. At high pressures, its accuracy may be limited in case of leaks. The minimal predicted increase in lung volume may help to check for accuracy. PMID:22166727
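
    The validity check described above (minimal predicted increase = static compliance at low PEEP × ΔPEEP) can be sketched as below; the patient values are hypothetical.

```python
def min_predicted_volume_increase(compliance_ml_per_cmh2o, peep_low, peep_high):
    """Minimal predicted lung-volume increase (ml): static compliance at low
    PEEP times the PEEP change, used to sanity-check a measured ΔEELV."""
    return compliance_ml_per_cmh2o * (peep_high - peep_low)

# Hypothetical patient: compliance 35 ml/cmH2O, PEEP raised from 6 to 15 cmH2O
predicted = min_predicted_volume_increase(35, 6, 15)
measured_delta_eelv = 250  # ml, hypothetical nitrogen washout/washin result
flagged = measured_delta_eelv < predicted  # suggests a flawed measurement (e.g. leak)
print(predicted, flagged)
```

    A measured ΔEELV below this floor is physiologically implausible and flags the measurement for review.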

  12. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision to better than 0.1% on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ashbed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is its relatively coarse spatial resolution compared to in situ techniques, which averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. 
The same grains are then removed from grain-mount, fragmented, and

  13. Technical note: precision and accuracy of in vitro digestion of neutral detergent fiber and predicted net energy of lactation content of fibrous feeds.

    PubMed

    Spanghero, M; Berzaghi, P; Fortina, R; Masoero, F; Rapetti, L; Zanfi, C; Tassone, S; Gallo, A; Colombini, S; Ferlito, J C

    2010-10-01

    The objective of this study was to test the precision and agreement with in situ data (accuracy) of neutral detergent fiber degradability (NDFD) obtained with the rotating jar in vitro system (Daisy(II) incubator, Ankom Technology, Fairport, NY). Moreover, the precision of the chemical assays requested by the National Research Council (2001) for feed energy calculations and the estimated net energy of lactation contents were evaluated. Precision was measured as standard deviation (SD) of reproducibility (S(R)) and repeatability (S(r)) (between- and within-laboratory variability, respectively), which were expressed as coefficients of variation (SD/mean × 100, S(R) and S(r), respectively). Ten fibrous feed samples (alfalfa dehydrated, alfalfa hay, corn cob, corn silage, distillers grains, meadow hay, ryegrass hay, soy hulls, wheat bran, and wheat straw) were analyzed by 5 laboratories. Analyses of dry matter (DM), ash, crude protein (CP), neutral detergent fiber (NDF), and acid detergent fiber (ADF) had satisfactory S(r), from 0.4 to 2.9%, and S(R), from 0.7 to 6.2%, with the exception of ether extract (EE) and CP bound to NDF or ADF. Extending the fermentation time from 30 to 48 h increased the NDFD values (from 42 to 54% on average across all tested feeds) and improved the NDFD precision, in terms of both S(r) (12 and 7% for 30 and 48 h, respectively) and S(R) (17 and 10% for 30 and 48 h, respectively). The net energy for lactation (NE(L)) predicted from 48-h incubation NDFD data approximated well the tabulated National Research Council (2001) values for several feeds, and the improvement in NDFD precision given by longer incubations (48 vs. 30 h) also improved precision of the NE(L) estimates from 11 to 8%. Data obtained from the rotating jar in vitro technique compared well with in situ data. 
In conclusion, the adoption of a 48-h period of incubation improves repeatability and reproducibility of NDFD and accuracy and reproducibility of the associated calculated
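
    The precision measures used above (repeatability and reproducibility SDs expressed as coefficients of variation) can be sketched with a simplified one-way layout; the NDFD replicate values below are hypothetical.

```python
from statistics import mean, stdev

def repeatability_reproducibility_cv(results_by_lab):
    """Simplified repeatability (within-lab) and reproducibility (overall)
    coefficients of variation, in % (SD/mean x 100).
    results_by_lab: one list of replicate results per laboratory."""
    all_vals = [v for lab in results_by_lab for v in lab]
    grand = mean(all_vals)
    pooled_within_var = mean(stdev(lab) ** 2 for lab in results_by_lab)
    total_var = stdev(all_vals) ** 2
    s_r = 100 * pooled_within_var ** 0.5 / grand   # repeatability CV
    s_cap_r = 100 * total_var ** 0.5 / grand       # reproducibility CV
    return s_r, s_cap_r

# Hypothetical 48-h NDFD results (%) from 3 labs, duplicate runs each
labs = [[53.0, 54.0], [56.0, 55.0], [52.5, 53.5]]
s_r, s_cap_r = repeatability_reproducibility_cv(labs)
print(round(s_r, 2), round(s_cap_r, 2))
```

    This is a sketch of the idea only; formal collaborative-study statistics (e.g. ISO 5725) partition the variance components more carefully.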

  14. The sensitivity limitation by the recording ADC to Laser Fiducial Line and Precision Laser Inclinometer

    NASA Astrophysics Data System (ADS)

    Batusov, V.; Budagov, J.; Lyablin, M.; Shirkov, G.; Gayde, J.-Ch.; Mergelkuhl, D.

    2015-12-01

    For metrology set-ups using a laser beam (the Laser Fiducial Line, the Precision Laser Inclinometer), the recording noise has been determined. This noise limits the measurement precision of the beam displacement Δx and, consequently, the precision Δψ of the measurement of the beam inclination angle. For a 10 mm laser beam diameter, Δx = ±2.9 × 10⁻⁹ m has been obtained. For a one-mode laser beam with a primary diameter of 10 mm and with subsequent focusing, a value of Δψ = ±1.7 × 10⁻¹¹ rad has been found.

  15. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrast of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of the information gathered from leaf vein systems. PMID:25096977

  16. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at http://croc.gsfc.nasa.gov/shadoz. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, the Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  17. Sensitivity and accuracy analysis of CT image in PRISM autocontouring using confusion matrix and ROC/AUC curve methods

    NASA Astrophysics Data System (ADS)

    Yunisa, Regina; Haryanto, Freddy

    2015-09-01

    The research was conducted to evaluate and analyze the results of CT image autocontouring in the Prism TPS using confusion matrix and ROC methods. The study begins by processing thoracic CT images in grayscale format using the Prism TPS software. Autocontouring was done in the areas of the spinal cord and right lung with appropriate window parameter settings. The average sensitivity, specificity, and accuracy for 23 slices of the spinal cord were 0.93, 0.99, and 0.99, respectively; for two slices of the right lung, they were 0.99, 0.99, and 0.99. These values are classified as `Excellent'.
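
    The per-slice metrics above follow from standard binary confusion-matrix definitions; a minimal sketch, with hypothetical voxel counts chosen to reproduce values close to those reported:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true positive rate
    specificity = tn / (tn + fp)            # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical voxel counts: autocontoured slice vs. manual reference contour
sens, spec, acc = confusion_metrics(tp=930, fp=50, tn=9000, fn=70)
print(round(sens, 2), round(spec, 2), round(acc, 2))
```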

  18. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and for model improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen. Yet, there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing levels of complexity. We tested four sensitivity analysis methods: Regional Sensitivity Analysis, the Method of Morris, a density-based method (PAWN), and a variance-based method (Sobol). The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistic.
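
    The bootstrapping step can be illustrated with a percentile bootstrap around a Morris-style mean absolute elementary effect; the effect values and index choice below are illustrative, not from the study.

```python
import random
from statistics import mean

def bootstrap_ci(values, stat, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a statistic of `values`."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(values) for _ in values]) for _ in range(n_boot)
    )
    lo = stats[int(alpha / 2 * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical elementary effects of one parameter (Morris-style screening)
effects = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0, 1.2, 0.95, 1.05, 0.85]
mu_star = mean(abs(e) for e in effects)  # point estimate of the index
lo, hi = bootstrap_ci(effects, lambda xs: mean(abs(x) for x in xs))
print(round(mu_star, 3), round(lo, 3), round(hi, 3))
```

    A wide bootstrap interval signals that the sensitivity index has not converged and more model runs are needed.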

  19. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  1. A New, Rapid, Precise and Sensitive Method for Chlorine Stable Isotope Analysis of Chlorinated Aliphatic Hydrocarbons

    NASA Astrophysics Data System (ADS)

    van Acker, M. R.; Shahar, A.; Young, E. D.; Coleman, M. L.

    2005-12-01

    Chlorinated aliphatic hydrocarbons (CAH) are recognized as common groundwater contaminants. Because of their physico-chemical properties, their lifespan in groundwater is on the order of decades (Pankow and Cherry, 1996). Stable isotopes can play a role in determining the rate and extent of CAH attenuation (Slater, 2003). The use of chlorine isotopes has been hampered by the current time-consuming and insensitive analytical methods. We present a new analytical procedure to measure chlorine stable isotope values using a gas chromatograph coupled to a multi-collector inductively coupled plasma mass spectrometer (GC-MC-ICP-MS). The GC has a Porapack Q packed column. The carrier gas was helium and the temperature was constant at 160°C. The GC was coupled to the MC-ICP-MS by heated stainless steel tubing. Our high resolution spectra showed that 37Cl is free of its main interference 36Ar-H over a range of 0.004 amu. Two pure CAH, trichloroethene (TCE) and tetrachloroethene (PCE), were used for zero enrichment (sample relative to itself) and standard-sample difference measurements. Integrations and background corrections of transient signals were performed using Microsoft Excel after import of the raw data from the MC-ICPMS acquisition software. Zero enrichment tests with TCE and PCE yielded δ37Cl of -0.04±0.16‰ and -0.03±0.17‰, respectively, for sample injections of 0.12 to 0.02 microliters. Accuracy was tested by injecting 0.24 microliters of a 50/50 mixture of TCE and PCE of known isotopic compositions, as the difference between the two solvents was of paramount interest. The δ37Cl(TCE) value of PCE was -1.99±0.16‰. A highly satisfactory comparison with the conventional method is shown by published values for TCE and PCE, -2.04±0.12‰ and -0.30±0.14‰, respectively (Jendrzejewski et al., 2001), giving a δ37Cl(TCE) value for PCE of -2.34±0.18‰. These tests of the GC-MC-ICP-MS method showed that we can obtain reproducible and accurate Cl isotope values using an
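
    The δ37Cl values quoted above use standard delta notation: the per mil deviation of the sample 37Cl/35Cl ratio from a reference ratio. A minimal sketch, where the reference ratio is an assumed approximation of SMOC (Standard Mean Ocean Chloride):

```python
def delta37cl(r_sample, r_standard):
    """delta-37Cl in per mil: relative deviation of the sample 37Cl/35Cl
    ratio from the reference ratio."""
    return (r_sample / r_standard - 1) * 1000

# Assumed approximate SMOC 37Cl/35Cl ratio (for illustration only)
r_std = 0.31963
r_sample = r_std * (1 - 2.04e-3)  # hypothetical sample depleted by ~2.04 per mil
print(round(delta37cl(r_sample, r_std), 2))
```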

  2. Effect of mesh distortion on the accuracy of transverse shear stresses and their sensitivity coefficients in multilayered composites

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Yong H.

    1995-01-01

    A study is made of the effect of mesh distortion on the accuracy of transverse shear stresses and their first-order and second-order sensitivity coefficients in multilayered composite panels subjected to mechanical and thermal loads. The panels are discretized by using a two-field degenerate solid element, with the fundamental unknowns consisting of both displacement and strain components, and the displacement components having a linear variation throughout the thickness of the laminate. A two-step computational procedure is used for evaluating the transverse shear stresses. In the first step, the in-plane stresses in the different layers are calculated at the numerical quadrature points for each element. In the second step, the transverse shear stresses are evaluated by using piecewise integration, in the thickness direction, of the three-dimensional equilibrium equations. The same procedure is used for evaluating the sensitivity coefficients of transverse shear stresses. Numerical results are presented showing no noticeable degradation in the accuracy of the in-plane stresses and their sensitivity coefficients with mesh distortion. However, such degradation is observed for the transverse shear stresses and their sensitivity coefficients. The standard of comparison is taken to be the exact solution of the three-dimensional thermoelasticity equations of the panel.

  3. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: To quantify the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons in the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1% for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  4. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2010-01-01

    The poster provides an overview of techniques to improve Mars Global Reference Atmospheric Model (Mars-GRAM) sensitivity studies. During the Mars Science Laboratory (MSL) site selection process it was discovered that Mars-GRAM, when used for sensitivity studies with TES MapYear = 0 and large optical depth values such as tau = 3, produces unrealistic results. A preliminary fix has been made to Mars-GRAM by adding a density factor value that was determined for tau = 0.3, 1, and 3.

  5. Factors influencing accuracy and precision in the determination of the elemental composition of defense waste glass by ICP-emission spectrometry

    SciTech Connect

    Goode, S.R.

    1995-12-31

    The influence of instrumental factors on the accuracy and precision of the determination of the composition of glass and glass feedstock is presented. In addition, the effects of different methods of sampling, dissolution methods, and standardization procedures and their effect on the quality of the chemical analysis will also be presented. The target glass simulates the material that will be prepared by the vitrification of highly radioactive liquid defense waste. The glass and feedstock streams must be well characterized to ensure a durable glass; current models estimate a 100,000 year lifetime. The elemental composition will be determined by ICP-emission spectrometry with radiation exposure issues requiring a multielement analysis for all constituents, on a single analytical sample, using compromise conditions.

  6. Approaches for achieving long-term accuracy and precision of δ18O and δ2H for waters analyzed using laser absorption spectrometers.

    PubMed

    Wassenaar, Leonard I; Coplen, Tyler B; Aggarwal, Pradeep K

    2014-01-21

    The measurement of δ(2)H and δ(18)O in water samples by laser absorption spectroscopy (LAS) is increasingly adopted in hydrologic and environmental studies. Although LAS instrumentation is easy to use, its incorporation into laboratory operations is not as easy, owing to the extensive offline data manipulation required for outlier detection, derivation and application of between-sample memory corrections, correction for linear and nonlinear instrumental drift, VSMOW-SLAP scale normalization, and maintenance of long-term QA/QC audits. Here we propose a series of standardized water-isotope LAS performance tests and routine sample analysis templates, recommended procedural guidelines, and new data processing software (LIMS for Lasers) that altogether enable new and current LAS users to achieve and sustain long-term δ(2)H and δ(18)O accuracy and precision for these important isotopic assays. PMID:24328223
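
    The VSMOW-SLAP scale normalization mentioned above is a two-point linear rescaling so that the two reference waters read their defined values (0‰ for VSMOW and -55.5‰ for δ18O of SLAP). A minimal sketch; the raw instrument readings are hypothetical:

```python
def normalize_vsmow_slap(delta_measured, vsmow_measured, slap_measured,
                         slap_true=-55.5):
    """Two-point normalization of a measured delta-18O value (per mil) so that
    VSMOW reads 0 and SLAP reads its defined value (-55.5 for delta-18O)."""
    slope = slap_true / (slap_measured - vsmow_measured)
    return (delta_measured - vsmow_measured) * slope

# Hypothetical raw laser-spectrometer readings of the two reference waters
vsmow_raw, slap_raw = 0.8, -54.1
print(round(normalize_vsmow_slap(-10.0, vsmow_raw, slap_raw), 3))
```

    By construction the normalized scale pins both anchors: a raw reading equal to the VSMOW reading maps to 0, and one equal to the SLAP reading maps to -55.5.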

  7. Quantifying precision and accuracy of measurements of dissolved inorganic carbon stable isotopic composition using continuous-flow isotope-ratio mass spectrometry

    PubMed Central

    Waldron, Susan; Marian Scott, E; Vihermaa, Leena E; Newton, Jason

    2014-01-01

    RATIONALE We describe an analytical procedure that allows sample collection and measurement of carbon isotopic composition (δ13CV-PDB value) and dissolved inorganic carbon concentration, [DIC], in aqueous samples without further manipulation post field collection. By comparing outputs from two different mass spectrometers, we quantify with statistical rigour the uncertainty associated with the estimation of an unknown measurement. This is rarely undertaken, but it is needed to understand the significance of field data and to interpret quality assurance exercises. METHODS Immediate acidification of field samples during collection in evacuated, pre-acidified vials removed the need for toxic chemicals to inhibit continued bacterial activity that might compromise isotopic and concentration measurements. Aqueous standards mimicked the sample matrix and avoided headspace fractionation corrections. Samples were analysed using continuous-flow isotope-ratio mass spectrometry, but for low DIC concentrations the mass spectrometer response could be non-linear. This had to be corrected for. RESULTS Mass spectrometer non-linearity exists. Rather than estimating precision from repeat analysis of an internal standard, we have adopted inverse linear calibrations to quantify the precision and 95% confidence intervals (CI) of the δ13CDIC values. The response for [DIC] estimation was always linear. For 0.05–0.5 mM DIC internal standards, however, changes in mass spectrometer linearity resulted in estimations of the precision in the δ13CV-PDB value of an unknown ranging from ±0.44‰ to ±1.33‰ (mean values) and a mean 95% CI half-width of ±1.1–3.1‰. CONCLUSIONS Mass spectrometer non-linearity should be considered in estimating uncertainty in measurement. Similarly, statistically robust estimates of precision and accuracy should also be adopted. Such estimations do not inhibit research advances: our consideration of small-scale spatial variability at two points on a

  8. High sensitivity 1.06 micron optical receiver for precision laser range finding. [YAG laser design

    NASA Technical Reports Server (NTRS)

    Scholl, F. W.; Harris, J. S., Jr.

    1977-01-01

    Aluminum gallium antimonide avalanche photodiodes with average gain of 10, internal quantum efficiency of greater than 60%, capacitance less than 0.2pf, and dark current of less than 1 micron were designed and fabricated for use in a low noise optical receiver suitable for 2 cm accuracy rangefinding. Topics covered include: (1) design of suitable photodetector structures; (2) epitaxial growth of AlGaSb devices; (3) fabrication of photodetectors; and (4) electro-optics characterization.

  9. A Study of the Accuracy and Precision Among XRF, ICP-MS, and PIXE on Trace Element Analyses of Small Water Samples

    NASA Astrophysics Data System (ADS)

    Naik, Sahil; Patnaik, Ritish; Kummari, Venkata; Phinney, Lucas; Dhoubhadel, Mangal; Jesseph, Aaron; Hoffmann, William; Verbeck, Guido; Rout, Bibhudutta

    2010-10-01

    The study aimed to compare the viability, precision, and accuracy of three popular instruments - X-ray Fluorescence (XRF), Inductively Coupled Plasma Mass Spectrometry (ICP-MS), and Particle-Induced X-ray Emission (PIXE) - used to analyze the trace elemental composition of small water samples. Ten-milliliter water samples from public tap water sources in seven different localities in India (Bangalore, Kochi, Bhubaneswar, Cuttack, Puri, Hospet, and Pipili) were prepared through filtration and dilution for analysis. The project hypothesizes that ICP-MS will give the most accurate and precise trace elemental analysis, followed by PIXE and XRF. XRF is expected to be a portable and affordable instrument that can analyze samples on-site, while ICP-MS is an extremely accurate but expensive option for off-site analyses. PIXE is expected to be too expensive and cumbersome for on-site analysis; however, laboratories with a PIXE accelerator can use the instrument to obtain accurate analyses.

  10. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion to obtain accurate detector time lag. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer sized uranium particles (1-3.5 μm). Linear regression slope (LRS) (one isotope plotted over the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using LRS calculation, time lag, and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645

  11. Vibrational sensitivity of a measuring instrument and methods of increasing the accuracy of its determinations

    NASA Technical Reports Server (NTRS)

    Mironov, Y. S.

    1973-01-01

    The properties and peculiarities of two groups of measuring systems reacting to vibrations are discussed. Specifically, results of the action of a three-dimensional, cophasal, monoharmonic vibration on the linear system of a measuring instrument were analyzed. Data are also given on the connection between vibration sensitivity and vibration resistance for instruments, methods for estimating vibration resistance, and formulas for expressing test results of vibration resistance. Experimental data are also given for decreasing errors in nonlinear systems during vibrations.

  12. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of analyte element (Hg), and internal standard (Tl)-used for mass discrimination correction purposes-and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN nor with CVG was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation that affected the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the

  13. Dual-energy X-ray absorptiometry for measuring total bone mineral content in the rat: study of accuracy and precision.

    PubMed

    Casez, J P; Muehlbauer, R C; Lippuner, K; Kelly, T; Fleisch, H; Jaeger, P

    1994-07-01

    Sequential studies of osteopenic bone disease in small animals require the availability of non-invasive, accurate and precise methods to assess bone mineral content (BMC) and bone mineral density (BMD). Dual-energy X-ray absorptiometry (DXA), which is currently used in humans for this purpose, can also be applied to small animals by means of adapted software. Precision and accuracy of DXA was evaluated in 10 rats weighing 50-265 g. The rats were anesthetized with a mixture of ketamine-xylazine administered intraperitoneally. Each rat was scanned six times consecutively in the antero-posterior incidence after repositioning using the rat whole-body software for determination of whole-body BMC and BMD (Hologic QDR 1000, software version 5.52). Scan duration was 10-20 min depending on rat size. After the last measurement, rats were sacrificed and soft tissues were removed by dermestid beetles. Skeletons were then scanned in vitro (ultra high resolution software, version 4.47). Bones were subsequently ashed and dissolved in hydrochloric acid and total body calcium directly assayed by atomic absorption spectrophotometry (TBCa[chem]). Total body calcium was also calculated from the DXA whole-body in vivo measurement (TBCa[DXA]) and from the ultra high resolution measurement (TBCa[UH]) under the assumption that calcium accounts for 40.5% of the BMC expressed as hydroxyapatite. Precision error for whole-body BMC and BMD (mean +/- S.D.) was 1.3% and 1.5%, respectively. Simple regression analysis between TBCa[DXA] or TBCa[UH] and TBCa[chem] revealed tight correlations (r = 0.991 and 0.996, respectively), with slopes and intercepts which were significantly different from 1 and 0, respectively.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7950505
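The conversion from DXA BMC to total body calcium, and the reported precision error, reduce to simple arithmetic; a minimal sketch, with function names of our choosing:

```python
import statistics

def tbca_from_bmc(bmc_g, ca_fraction=0.405):
    """Total body calcium (g) from DXA bone mineral content expressed as
    hydroxyapatite, under the abstract's assumption that calcium accounts
    for 40.5% of BMC."""
    return bmc_g * ca_fraction

def precision_error_pct(repeat_scans):
    """Precision error as the coefficient of variation (%) over repeated
    scans of the same animal (six consecutive scans in the study)."""
    return 100.0 * statistics.stdev(repeat_scans) / statistics.mean(repeat_scans)
```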

  14. The accuracy and precision of two non-invasive, magnetic resonance-guided focused ultrasound-based thermal diffusivity estimation methods

    PubMed Central

    Dillon, Christopher R.; Payne, Allison; Christensen, Douglas A.; Roemer, Robert B.

    2016-01-01

    Purpose The use of correct tissue thermal diffusivity values is necessary for making accurate thermal modeling predictions during magnetic resonance-guided focused ultrasound (MRgFUS) treatment planning. This study evaluates the accuracy and precision of two non-invasive thermal diffusivity estimation methods, a Gaussian Temperature method published by Cheng and Plewes in 2002 and a Gaussian specific absorption rate (SAR) method published by Dillon et al. in 2012. Materials and Methods Both methods utilize MRgFUS temperature data obtained during cooling following a short (<25s) heating pulse. The Gaussian SAR method can also use temperatures obtained during heating. Experiments were performed at low heating levels (ΔT~10°C) in ex vivo pork muscle and in vivo rabbit back muscle. The non-invasive MRgFUS thermal diffusivity estimates were compared with measurements from two standard invasive methods. Results Both non-invasive methods accurately estimate thermal diffusivity when using MR-temperature cooling data (overall ex vivo error<6%, in vivo<12%). Including heating data in the Gaussian SAR method further reduces errors (ex vivo error<2%, in vivo<3%). The significantly lower standard deviation values (p<0.03) of the Gaussian SAR method indicate that it has better precision than the Gaussian Temperature method. Conclusions With repeated sonications, either MR-based method could provide accurate thermal diffusivity values for MRgFUS therapies. Fitting to more data simultaneously likely makes the Gaussian SAR method less susceptible to noise, and using heating data helps it converge more consistently to the FUS fitting parameters and thermal diffusivity. These effects lead to the improved precision of the Gaussian SAR method. PMID:25198092
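A minimal sketch of the physical idea behind cooling-based diffusivity estimation, assuming a purely conductive Gaussian temperature profile whose squared width grows linearly in time, sigma^2(t) = sigma0^2 + 4*alpha*t. This simplified form is our assumption for illustration; the published methods fit the full MR temperature data.

```python
import numpy as np

def diffusivity_from_cooling(times_s, sigma_sq_mm2):
    """Estimate thermal diffusivity alpha (mm^2/s) from the linear growth
    of the squared Gaussian width of the temperature profile during
    cooling: sigma^2(t) = sigma0^2 + 4*alpha*t, so alpha = slope / 4."""
    slope, _intercept = np.polyfit(times_s, sigma_sq_mm2, 1)
    return slope / 4.0
```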

  15. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  16. Improved optical axis determination accuracy for fiber-based polarization-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Lu, Zenghai; Matcher, Stephen J.

    2013-03-01

    We report on a new calibration technique that permits the accurate extraction of the sample Jones matrix, and hence fast-axis orientation, by fiber-based polarization-sensitive optical coherence tomography (PS-OCT) built entirely on non-polarization-maintaining fiber such as SMF-28. In this technique, two quarter waveplates are used to completely specify the parameters of the system fibers in the sample arm so that the Jones matrix of the sample can be determined directly. The device was validated on measurements of a quarter waveplate and an equine tendon sample by a single-mode fiber-based swept-source PS-OCT system.

  17. The effect of biomechanical variables on force sensitive resistor error: Implications for calibration and improved accuracy.

    PubMed

    Schofield, Jonathon S; Evans, Katherine R; Hebert, Jacqueline S; Marasco, Paul D; Carey, Jason P

    2016-03-21

    Force Sensitive Resistors (FSRs) are commercially available thin film polymer sensors commonly employed in a multitude of biomechanical measurement environments. Reasons for such widespread usage lie in the versatility, small profile, and low cost of these sensors. Yet FSRs have limitations. It is commonly accepted that temperature, curvature and biological tissue compliance may impact sensor conductance and resulting force readings. The effect of these variables and the degree to which they interact has yet to be comprehensively investigated and quantified. This work systematically assesses varying levels of temperature, sensor curvature and surface compliance using a full factorial design-of-experiments approach. Three models of Interlink FSRs were evaluated. Calibration equations under 12 unique combinations of temperature, curvature and compliance were determined for each sensor. Root mean squared error, mean absolute error, and maximum error were quantified as measures of the impact these thermo/mechanical factors have on sensor performance. It was found that all three variables have the potential to affect FSR calibration curves. The FSR model and corresponding sensor geometry are sensitive to these three mechanical factors at varying levels. Experimental results suggest that reducing sensor error requires calibration of each sensor in an environment as close to its intended use as possible and if multiple FSRs are used in a system, they must be calibrated independently. PMID:26903413
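A hedged sketch of per-sensor calibration and the three error measures quantified in the study. The power-law force-conductance model is a common FSR assumption of ours, not a detail taken from the paper.

```python
import numpy as np

def fit_power_law_calibration(conductance, force_n):
    """Fit force = a * G**b by linear regression in log space; each
    sensor (and each temperature/curvature/compliance condition) would
    get its own (a, b) pair."""
    b, log_a = np.polyfit(np.log(conductance), np.log(force_n), 1)
    return np.exp(log_a), b

def error_metrics(predicted, measured):
    """Root mean squared error, mean absolute error, and maximum error:
    the three measures used to assess sensor performance."""
    err = np.asarray(predicted, float) - np.asarray(measured, float)
    return (float(np.sqrt(np.mean(err ** 2))),
            float(np.mean(np.abs(err))),
            float(np.max(np.abs(err))))
```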

  18. Bias, precision and accuracy in the estimation of cuticular and respiratory water loss: a case study from a highly variable cockroach, Perisphaeria sp.

    PubMed

    Gray, Emilie M; Chown, Steven L

    2008-01-01

    We compared the precision, bias and accuracy of two techniques that were recently proposed to estimate the contributions of cuticular and respiratory water loss to total water loss in insects. We performed measurements of VCO2 and VH2O in normoxia, hyperoxia and anoxia using flow through respirometry on single individuals of the highly variable cockroach Perisphaeria sp. to compare estimates of cuticular and respiratory water loss (CWL and RWL) obtained by the VH2O-VCO2 y-intercept method with those obtained by the hyperoxic switch method. Precision was determined by assessing the repeatability of values obtained whereas bias was assessed by comparing the methods' results to each other and to values for other species found in the literature. We found that CWL was highly repeatable by both methods (R > 0.88) and resulted in similar values to measures of CWL determined during the closed-phase of discontinuous gas exchange (DGE). Repeatability of RWL was much lower (R=0.40) and significant only in the case of the hyperoxic method. RWL derived from the hyperoxic method is higher (by 0.044 micromol min(-1)) than that obtained from the method traditionally used for measuring water loss during the closed-phase of DGE, suggesting that in the past RWL may have been underestimated. The very low cuticular permeability of this species (3.88 microg cm(-2) h(-1) Torr(-1)) is reasonable given the seasonally hot and dry habitat where it lives. We also tested the hygric hypothesis proposed to account for the evolution of discontinuous gas exchange cycles and found no effect of respiratory pattern on RWL, although the ratio of mean VH2O to VCO2 was higher for continuous patterns compared with discontinuous ones. PMID:17949739
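The VH2O-VCO2 y-intercept method amounts to a linear regression; a minimal sketch, with variable names of our choosing:

```python
import numpy as np

def y_intercept_method(vco2, vh2o):
    """Regress water loss (VH2O) on CO2 release (VCO2) across respiratory
    states; the y-intercept (water loss extrapolated to zero gas exchange)
    estimates cuticular water loss, and the excess above it is attributed
    to respiratory water loss."""
    slope, intercept = np.polyfit(vco2, vh2o, 1)
    cwl = intercept
    return cwl, slope
```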

  19. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.; Badger, Andrew M.

    2009-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). It has been discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3 is less than realistic. A comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS) has been undertaken for locations of varying latitudes, Ls, and LTST on Mars. The preliminary results from this study have validated the Thermal Emission Spectrometer (TES) limb data. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear=0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. Unrealistic energy absorption by uniform atmospheric dust leads to an unrealistic thermal energy balance on the polar caps. The outcome is an inaccurate cycle of condensation/sublimation of the polar caps and, as a consequence, an inaccurate cycle of total atmospheric mass and global-average surface pressure. Under an assumption of unchanged temperature profile and hydrostatic equilibrium, a given percentage change in surface pressure would produce a corresponding percentage change in density at all altitudes. Consequently, the final result of a change in surface pressure is an imprecise atmospheric density at all altitudes. To solve this pressure-density problem, a density factor value was determined for tau=0.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear=0 with MapYears 1 and 2 MGCM output.
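The proportionality argument above implies that a single multiplicative density factor can rescale MGCM MapYear-0 pressure and density together; a minimal sketch, with illustrative numbers and a function form that is our assumption:

```python
def apply_density_factor(surface_pressure, density_profile, factor):
    """Scale MGCM MapYear-0 surface pressure and the density profile by
    one density factor: under an unchanged temperature profile and
    hydrostatic equilibrium, a given percentage change in surface
    pressure produces the same percentage change in density at every
    altitude."""
    return (surface_pressure * factor,
            [rho * factor for rho in density_profile])
```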

  20. Strategies to Improve the Accuracy of Mars-GRAM Sensitivity Studies at Large Optical Depths

    NASA Astrophysics Data System (ADS)

    Justh, H. L.; Justus, C. G.; Badger, A. M.

    2009-12-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM’s perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). It has been discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3 is less than realistic. A comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS) has been undertaken for locations of varying latitudes, Ls, and LTST on Mars. The preliminary results from this study have validated the Thermal Emission Spectrometer (TES) limb data. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear=0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. Unrealistic energy absorption by uniform atmospheric dust leads to an unrealistic thermal energy balance on the polar caps. The outcome is an inaccurate cycle of condensation/sublimation of the polar caps and, as a consequence, an inaccurate cycle of total atmospheric mass and global-average surface pressure. Under an assumption of unchanged temperature profile and hydrostatic equilibrium, a given percentage change in surface pressure would produce a corresponding percentage change in density at all altitudes. Consequently, the final result of a change in surface pressure is an imprecise atmospheric density at all altitudes. 
To solve this pressure-density problem, a density factor value was determined for tau=0.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with MapYears 1 and 2 MGCM output.

  1. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  2. Evidence mapping for decision making: feasibility versus accuracy – when to abandon high sensitivity in electronic searches

    PubMed Central

    Buchberger, Barbara; Krabbe, Laura; Lux, Beate; Mattivi, Jessica Tajana

    2016-01-01

    Background: Mapping the evidence is a relatively new methodological approach and may be helpful for the development of research questions and decisions about their relevance and priority. However, the amount of data available today leads to challenges for scientists, who are sometimes confronted with literature searches retrieving over 30,000 results for screening. Objectives: We conducted an evidence mapping of the topic “diabetes and driving” to investigate its suitability for an evidence-based national clinical guideline. In addition, we compared a highly sensitive search with a highly specific one. Methods: Based on a systematic review, our database searches were limited to publications from 2002 to present in English and German language. Results: Due to the strongly focused topic and the limits, our sensitive search identified a manageable number of references including sufficient evidence to answer our research question. Using the specific search strategy, we achieved a reduction of citations by 25%, while concurrently identifying 88% of relevant references. Conclusions: Evidence mapping with the intention of gaining an overview of a research field does not require the high level of accuracy demanded by systematic reviews. Keeping this distinction in mind, a mass of extraneous information can be avoided by using specific instead of highly sensitive search strategies. PMID:27499726
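The reported trade-off is simple arithmetic; a sketch with hypothetical citation counts chosen to reproduce the 25% workload reduction and 88% recall:

```python
def search_strategy_tradeoff(n_sensitive, n_specific,
                             relevant_specific, relevant_sensitive):
    """Workload reduction and recall when replacing a highly sensitive
    search with a highly specific one. Argument names and the example
    counts are illustrative, not taken from the study."""
    reduction = 1.0 - n_specific / n_sensitive
    recall = relevant_specific / relevant_sensitive
    return reduction, recall
```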

  3. Sensitive Identification of Nearby Debris Disks via Precise Calibration of WISE Data

    NASA Astrophysics Data System (ADS)

    Patel, Rahul; Metchev, Stanimir; Heinze, Aren; Trollo, Joe

    2016-01-01

    Using data from the WISE All-Sky Survey, we have found >100 new infrared excess sources around main-sequence Hipparcos stars within 75 pc. Our empirical calibration of WISE photospheric colors and removal of non-trivial false-positive sources are responsible for the high confidence (>99.5%) of detections, while our corrections to saturated W1 and W2 photometry have for the first time allowed us to search for new infrared excess sources around bright field stars in WISE. The careful calibration and filtering of the WISE data have allowed us to probe excess fluxes down to roughly 8% of the photospheric emission at 22μm around saturated stars in WISE. We expect that the increased sensitivity of our survey will not only aid in understanding the evolution of debris disks, but will also benefit future studies using WISE.

  4. Mars-GRAM: Increasing the Precision of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). It has been discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. A comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS) has been undertaken for locations of varying latitudes, Ls, and LTST on Mars. The preliminary results from this study have validated the Thermal Emission Spectrometer (TES) limb data. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear=0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This has resulted in an imprecise atmospheric density at all altitudes. To solve this pressure-density problem, density factor values were determined for tau=.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with TES observations for MapYears 1 and 2 at comparable dust loading. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths.

  5. Routine OGTT: A Robust Model Including Incretin Effect for Precise Identification of Insulin Sensitivity and Secretion in a Single Individual

    PubMed Central

    De Gaetano, Andrea; Panunzi, Simona; Matone, Alice; Samson, Adeline; Vrbikova, Jana; Bendlova, Bela; Pacini, Giovanni

    2013-01-01

    In order to provide a method for precise identification of insulin sensitivity from clinical Oral Glucose Tolerance Test (OGTT) observations, a relatively simple mathematical model (Simple Interdependent glucose/insulin MOdel SIMO) for the OGTT, which coherently incorporates commonly accepted physiological assumptions (incretin effect and saturating glucose-driven insulin secretion) has been developed. OGTT data from 78 patients in five different glucose tolerance groups were analyzed: normal glucose tolerance (NGT), impaired glucose tolerance (IGT), impaired fasting glucose (IFG), IFG+IGT, and Type 2 Diabetes Mellitus (T2DM). A comparison with the 2011 Salinari (COntinuous GI tract MOdel, COMO) and the 2002 Dalla Man (Dalla Man MOdel, DMMO) models was made with particular attention to insulin sensitivity indices ISCOMO, ISDMMO and kxgi (the insulin sensitivity index for SIMO). ANOVA on kxgi values across groups was significant overall (P<0.001), and post-hoc comparisons highlighted the presence of three different groups: NGT (8.62×10−5±9.36×10−5 min−1pM−1), IFG (5.30×10−5±5.18×10−5) and combined IGT, IFG+IGT and T2DM (2.09×10−5±1.95×10−5, 2.38×10−5±2.28×10−5 and 2.38×10−5±2.09×10−5 respectively). No significance was obtained when comparing ISCOMO or ISDMMO across groups. Moreover, kxgi presented the lowest sample average coefficient of variation over the five groups (25.43%), with average CVs for ISCOMO and ISDMMO of 70.32% and 57.75% respectively; kxgi also presented the strongest correlations with all considered empirical measures of insulin sensitivity. While COMO and DMMO appear over-parameterized for fitting single-subject clinical OGTT data, SIMO provides a robust, precise, physiologically plausible estimate of insulin sensitivity, with which habitual empirical insulin sensitivity indices correlate well. The kxgi index, reflecting insulin secretion dependency on glycemia, also significantly differentiates clinically

  6. Routine OGTT: a robust model including incretin effect for precise identification of insulin sensitivity and secretion in a single individual.

    PubMed

    De Gaetano, Andrea; Panunzi, Simona; Matone, Alice; Samson, Adeline; Vrbikova, Jana; Bendlova, Bela; Pacini, Giovanni

    2013-01-01

    In order to provide a method for precise identification of insulin sensitivity from clinical Oral Glucose Tolerance Test (OGTT) observations, a relatively simple mathematical model (Simple Interdependent glucose/insulin MOdel SIMO) for the OGTT, which coherently incorporates commonly accepted physiological assumptions (incretin effect and saturating glucose-driven insulin secretion) has been developed. OGTT data from 78 patients in five different glucose tolerance groups were analyzed: normal glucose tolerance (NGT), impaired glucose tolerance (IGT), impaired fasting glucose (IFG), IFG+IGT, and Type 2 Diabetes Mellitus (T2DM). A comparison with the 2011 Salinari (COntinuous GI tract MOdel, COMO) and the 2002 Dalla Man (Dalla Man MOdel, DMMO) models was made with particular attention to insulin sensitivity indices ISCOMO, ISDMMO and kxgi (the insulin sensitivity index for SIMO). ANOVA on kxgi values across groups was significant overall (P<0.001), and post-hoc comparisons highlighted the presence of three different groups: NGT (8.62×10(-5)±9.36×10(-5) min(-1)pM(-1)), IFG (5.30×10(-5)±5.18×10(-5)) and combined IGT, IFG+IGT and T2DM (2.09×10(-5)±1.95×10(-5), 2.38×10(-5)±2.28×10(-5) and 2.38×10(-5)±2.09×10(-5) respectively). No significance was obtained when comparing ISCOMO or ISDMMO across groups. Moreover, kxgi presented the lowest sample average coefficient of variation over the five groups (25.43%), with average CVs for ISCOMO and ISDMMO of 70.32% and 57.75% respectively; kxgi also presented the strongest correlations with all considered empirical measures of insulin sensitivity. While COMO and DMMO appear over-parameterized for fitting single-subject clinical OGTT data, SIMO provides a robust, precise, physiologically plausible estimate of insulin sensitivity, with which habitual empirical insulin sensitivity indices correlate well. The kxgi index, reflecting insulin secretion dependency on glycemia, also significantly differentiates clinically

  7. High-Precision Surface Inspection: Uncertainty Evaluation within an Accuracy Range of 15μm with Triangulation-based Laser Line Scanners

    NASA Astrophysics Data System (ADS)

    Dupuis, Jan; Kuhlmann, Heiner

    2014-06-01

    Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a great amount of 3D-points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainties during the measurement process regarding the scanner warm-up, the impact of laser power and exposure time as well as scanner’s reaction to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaption of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.

  8. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    PubMed

    Yamamoto, Takehisa; Hayama, Yoko; Hidano, Arata; Kobayashi, Sota; Muroga, Norihiko; Ishikawa, Kiyoyasu; Ogura, Aki; Tsutsui, Toshiyuki

    2014-01-01

    Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm. PMID:24466335
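A sketch of the hierarchical bootstrap described above, resampling farms and then animals within farms with replacement and recording the resistance prevalence of each replicate; the data layout (a mapping from farm to per-animal resistance flags) is an assumption for illustration.

```python
import random

def bootstrap_prevalence(farm_isolates, animals_per_farm, reps=1000, seed=1):
    """Hierarchical bootstrap: for each replicate, draw farms with
    replacement, then draw `animals_per_farm` animals from each chosen
    farm with replacement, and record the resistance prevalence.
    `farm_isolates` maps farm id -> list of True/False resistance flags."""
    rng = random.Random(seed)
    farms = list(farm_isolates)
    prevalences = []
    for _ in range(reps):
        flags = []
        for _ in farms:
            farm = rng.choice(farms)
            animals = farm_isolates[farm]
            flags += [rng.choice(animals) for _ in range(animals_per_farm)]
        prevalences.append(sum(flags) / len(flags))
    return prevalences
```

The spread (standard deviation, percentile interval) of the returned prevalences is what the study compared across the 1-, 2-, 3-, 4- and 6-animals-per-farm strategies.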

  9. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure of feedback controllers is described and the considerations for the selection of the design parameters are given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance comparing favorably with a much more complicated digital aided tracker. The parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  10. Using Lunar Observations to Validate Pointing Accuracy and Geolocation, Detector Sensitivity Stability and Static Point Response of the CERES Instruments

    NASA Technical Reports Server (NTRS)

    Daniels, Janet L.; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-01-01

    Validation of in-orbit instrument performance is a function of stability in both instrument and calibration source. This paper describes a method using lunar observations scanning near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, these in-orbit observations have become standardized and compiled for the Flight Models -1 and -2 aboard the Terra satellite, for Flight Models -3 and -4 aboard the Aqua satellite, and, beginning 2012, for Flight Model -5 aboard Suomi-NPP. The instrument performance measurements studied are detector sensitivity stability, pointing accuracy, and the static detector point response function. This validation method also shows trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2° in azimuth and 0.17° in elevation, which corresponds to a geolocation error near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to present. All alignment errors were within 0.1°, with most detector telescopes showing a consistent alignment offset of less than 0.02°.
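    The quoted geolocation error is consistent with small-angle geometry from orbit; a sketch assuming Terra's approximate 705 km altitude (an assumption, not stated in the record):

```python
import math

def nadir_geolocation_error_km(pointing_error_deg, altitude_km=705.0):
    """Ground displacement near nadir caused by a small telescope pointing
    error; 705 km is Terra's approximate orbit altitude (an assumption)."""
    return altitude_km * math.tan(math.radians(pointing_error_deg))
```

    A 0.17 degree elevation error maps to roughly 2.1 km on the ground, matching the 2.09 km quoted above.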

  11. Using lunar observations to validate pointing accuracy and geolocation, detector sensitivity stability and static point response of the CERES instruments

    NASA Astrophysics Data System (ADS)

    Daniels, Janet; Smith, G. Louis; Priestley, Kory J.; Thomas, Susan

    2014-10-01

    Validation of in-orbit instrument performance is a function of stability in both instrument and calibration source. This paper describes a method using lunar observations scanning near full moon by the Clouds and Earth Radiant Energy System (CERES) instruments. The Moon offers an external source whose signal variance is predictable and non-degrading. From 2006 to present, these in-orbit observations have become standardized and compiled for the Flight Models -1 and -2 aboard the Terra satellite, for Flight Models -3 and -4 aboard the Aqua satellite, and, beginning 2012, for Flight Model -5 aboard Suomi-NPP. The instrument performance measurements studied are detector sensitivity stability, pointing accuracy, and the static detector point response function. This validation method also shows trends per CERES data channel of 0.8% per decade or less for Flight Models 1-4. Using instrument gimbal data and the computed lunar position, the pointing error of each detector telescope and the accuracy and consistency of the alignment between the detectors can be determined. The maximum pointing error was 0.2° in azimuth and 0.17° in elevation, which corresponds to a geolocation error near nadir of 2.09 km. With the exception of one detector, all instruments were found to have consistent detector alignment from 2006 to present. All alignment errors were within 0.1°, with most detector telescopes showing a consistent alignment offset of less than 0.02°.

  12. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
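    The RMS percent difference statistic used above is straightforward to reproduce; a minimal sketch with made-up readings:

```python
import math

def rms_percent_diff(measured, reference):
    """Root-mean-square of pairwise percent differences, the statistic used
    to compare OSL CTDI100 readings against the ion-chamber reference."""
    pct = [100.0 * (m - r) / r for m, r in zip(measured, reference)]
    return math.sqrt(sum(d * d for d in pct) / len(pct))
```

    For example, readings of 110 and 90 against references of 100 each are +10% and -10% differences, giving an RMS percent difference of exactly 10%.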

  13. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  14. Accuracy and precision of porosity estimates based on velocity inversion of surface ground-penetrating radar data: A controlled experiment at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Bradford, J.; Clement, W.

    2006-12-01

    Although rarely acquired, ground penetrating radar (GPR) data acquired in continuous multi-offset geometries can substantially improve our understanding of the subsurface compared to conventional single-offset surveys. This improvement arises because multi-offset data enable full use of the information that the GPR signal can carry. The added information allows us to maximize the material property information extracted from a GPR survey. Of the array of potential multi-offset GPR measurements, traveltime vs. offset information enables laterally and vertically continuous electromagnetic (EM) velocity measurements. In turn, the EM velocities provide estimates of water content via petrophysical relationships such as the CRIM or Topp's equations. In fully saturated media the water content is a direct measure of bulk porosity. The Boise Hydrogeophysical Research Site (BHRS) is an experimental wellfield located in a shallow alluvial aquifer near Boise, Idaho. In July 2006 we conducted a controlled 3D multi-offset GPR experiment at the BHRS designed to test the accuracy of state-of-the-art velocity analysis methodologies. We acquired continuous multi-offset GPR data over an approximately 20 m x 30 m 3D area. The GPR system was a Sensors and Software pulseEkko Pro multichannel system with 100 MHz antennas and was configured with 4 receivers and a single transmitter. Data were acquired in off-end geometry for a total of 16 offsets with a 1 m offset interval and 1 m near offset. The data were acquired on a 1 m x 1 m grid in four passes, each consisting of a 3 m range of equally spaced offsets. The survey encompassed 13 wells finished to the ~20 m depth of the unconfined aquifer. We established velocity control by acquiring vertical radar profiles (VRPs) in all 13 wells. Preliminary velocity measurements using an established method of reflection tomography were within about 1 percent of local 1D velocity distributions determined from the VRPs. 
Vertical velocity precision from the
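    The step from EM velocity to porosity via the CRIM relation mentioned above can be sketched as follows; the matrix and water permittivities are typical assumed values, not the study's:

```python
import math

C_M_PER_NS = 0.299792458  # speed of light in m/ns

def porosity_crim(velocity_m_ns, eps_matrix=5.0, eps_water=80.0):
    """Porosity from EM velocity in a fully saturated medium via CRIM:
    sqrt(eps_bulk) = (1 - phi) * sqrt(eps_matrix) + phi * sqrt(eps_water),
    where eps_bulk = (c / v)^2; solved here for phi."""
    sqrt_eps_bulk = C_M_PER_NS / velocity_m_ns
    return ((sqrt_eps_bulk - math.sqrt(eps_matrix))
            / (math.sqrt(eps_water) - math.sqrt(eps_matrix)))
```

    Because porosity is read off a difference of square-root permittivities, the roughly 1 percent velocity accuracy quoted above translates into a correspondingly small porosity error.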

  15. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-04-01

    Background and purpose - We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods - Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results - Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation - CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843
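    The heart of the comparison described above, composing one rigid registration with the inverse of the other to obtain relative stem-bone motion, can be sketched with plain rotation matrices (the transforms below are hypothetical):

```python
import math

def invert(T):
    """Inverse of a rigid transform T = (R, t): (R^T, -R^T t)."""
    R, t = T
    Rt = [[R[j][i] for j in range(3)] for i in range(3)]
    return Rt, [-sum(Rt[i][j] * t[j] for j in range(3)) for i in range(3)]

def compose(A, B):
    """Rigid transform applying B first, then A."""
    RA, tA = A
    RB, tB = B
    R = [[sum(RA[i][k] * RB[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
    t = [sum(RA[i][k] * tB[k] for k in range(3)) + tA[i] for i in range(3)]
    return R, t

def rotation_angle_deg(R):
    """Total rotation angle recovered from the trace of a rotation matrix."""
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    return math.degrees(math.acos(c))

# Relative stem-bone motion between two scans, with the stem as reference:
# here the stem registration is identity and the bone has rotated 5 degrees
# about z and shifted 0.3 mm along x (made-up values).
a = math.radians(5.0)
T_stem = ([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]],
          [0.0, 0.0, 0.0])
T_bone = ([[math.cos(a), -math.sin(a), 0.0],
           [math.sin(a), math.cos(a), 0.0],
           [0.0, 0.0, 1.0]], [0.3, 0.0, 0.0])
R_rel, t_rel = compose(invert(T_stem), T_bone)
```

    The real method additionally re-expresses this relative transform in an anatomical coordinate system and reports the point of maximal migration; this sketch only shows the transform algebra.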

  16. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA)

    PubMed Central

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    Background and purpose — We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods — Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results — Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation — CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  17. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation of a species' status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than the other treatments. Quantitative PCR results for column purified samples were more precise than those for the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentration estimates while retaining the greatest assay sensitivity.
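    Back-calculating a diluted sample through a qPCR standard curve works as follows; the curve parameters here are hypothetical (a slope of -3.32 corresponds to roughly 100% amplification efficiency):

```python
def concentration_from_cq(cq, dilution_factor=1, slope=-3.32, intercept=38.0):
    """Back-calculate template concentration from a qPCR quantification cycle
    via the standard curve Cq = slope * log10(conc) + intercept, then undo
    the dilution applied to relieve inhibition."""
    return (10.0 ** ((cq - intercept) / slope)) * dilution_factor
```

    Multiplying by the dilution factor is what restores comparability between diluted and undiluted samples; any residual inhibition shifts Cq upward and biases the back-calculated concentration low.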

  18. Sensitivity of Flux Accuracy to Setup of Fossil Fuel and Biogenic CO2 Inverse System in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Wu, K.; Lauvaux, T.; Deng, A.; Lopez-Coto, I.; Gurney, K. R.; Patarasuk, R.; Turnbull, J. C.; Davis, K. J.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) aims to utilize a variety of measurements and a high resolution inversion system to estimate the spatial distribution and the temporal variation of anthropogenic greenhouse gas (GHG) emissions from the city of Indianapolis. We separated biogenic and fossil fuel CO2 fluxes and tested the sensitivity of inverse flux estimates to inverse system configurations by performing Observing System Simulation Experiments (OSSEs). The a priori CO2 emissions from Hestia were aggregated to 1 km resolution to represent emissions from the Indianapolis metropolitan area and its surroundings. With the Weather Research and Forecasting (WRF) model coupled to a Lagrangian Particle Dispersion Model (LPDM), the physical relations between concentrations at the tower locations and emissions at the surface were simulated hourly at 1 km spatial resolution. Within a Bayesian synthesis inversion framework, we tested the effect of multiple parameters on our ability to infer fossil fuel CO2 fluxes: the presence of biogenic CO2 fluxes in the optimization procedure, the use of fossil fuel CO2 concentration measurements, the impact of reduced transport errors, the sensitivity to observation density, and the spatio-temporal properties of prior errors. The results indicate that the presence of biogenic CO2 fluxes markedly weakens the ability to invert for fossil fuel CO2 emissions in an urban environment, but relatively accurate fossil fuel CO2 concentration measurements can effectively compensate for the interference from the biogenic flux component. Reducing transport error and densifying the measurement network are two possible ways to retrieve the spatial pattern of the fluxes and decrease the bias in inferred whole-city fossil fuel CO2 emissions. The accuracy of posterior fluxes is also very sensitive to the spatial correlation length in the prior flux errors, which, when present, can significantly enhance our ability to recover the known fluxes.
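    The Bayesian synthesis update at the core of such an inversion can be shown in scalar form, one flux and one tower observation, with invented numbers:

```python
def bayesian_update(x_prior, var_prior, y, h, var_obs):
    """One scalar Bayesian synthesis step: observation y = h * x + noise.
    The gain weighs prior flux uncertainty against measurement noise."""
    gain = var_prior * h / (h * h * var_prior + var_obs)
    x_post = x_prior + gain * (y - h * x_prior)
    var_post = (1.0 - gain * h) * var_prior
    return x_post, var_post
```

    With equal prior and observation variances the posterior lands halfway between prior and data; the full system replaces these scalars with flux vectors, a transport operator from WRF-LPDM, and spatially correlated error covariance matrices.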

  19. Sensitive and precise HPLC method with back-extraction clean-up step for the determination of sildenafil in rat plasma and its application to a pharmacokinetic study.

    PubMed

    Strach, Beata; Wyska, Elżbieta; Pociecha, Krzysztof; Krupa, Anna; Jachowicz, Renata

    2015-10-01

    A sensitive HPLC method was developed and validated for the determination of sildenafil concentrations in rat plasma (200 μL) using a liquid-liquid extraction procedure and paroxetine as an internal standard. In order to eliminate interferences and improve the peak shape, a back-extraction into an acidic solution was utilized. Chromatographic separation was achieved on a cyanopropyl bonded-phase column with a mobile phase composed of 50 mM potassium dihydrogen phosphate buffer (pH 4.5) and acetonitrile (75:25, v/v), pumped at a flow rate of 1 mL/min. A UV detector was set at 230 nm. A calibration curve was constructed within a concentration range from 10 to 1500 ng/mL. The limit of detection was 5 ng/mL. The inter- and intra-day precisions of the assay were in the ranges 2.91-7.33% and 2.61-6.18%, respectively, and the accuracies for inter- and intra-day runs were within 0.14-3.92% and 0.44-2.96%, respectively. The recovery of sildenafil was 85.22 ± 4.54%. Tests confirmed the stability of sildenafil in plasma during three freeze-thaw cycles and during long-term storage at -20 and -80°C for up to 2 months. The proposed method was successfully applied to a pharmacokinetic study in rats. PMID:25864807
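    Quantification against a calibration curve of this kind is inverse prediction from a least-squares line; a generic sketch with hypothetical peak-area data:

```python
def fit_line(conc, response):
    """Least-squares calibration line: response = slope * conc + intercept."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(response) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, response))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

def back_calculate(response, slope, intercept):
    """Concentration read back from a measured peak-area ratio."""
    return (response - intercept) / slope

# Hypothetical calibration standards spanning the 10-1500 ng/mL range.
slope, intercept = fit_line([10, 100, 500, 1000, 1500],
                            [0.05, 0.5, 2.5, 5.0, 7.5])
```

    The reported inter- and intra-day precision and accuracy figures describe how tightly such back-calculated concentrations cluster around, and match, the nominal values of quality-control samples.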

  20. High Dynamics and Precision Optical Measurement Using a Position Sensitive Detector (PSD) in Reflection-Mode: Application to 2D Object Tracking over a Smart Surface

    PubMed Central

    Ivan, Ioan Alexandru; Ardeleanu, Mihai; Laurent, Guillaume J.

    2012-01-01

    When used with a single, high-contrast object or a laser spot, position-sensing (or position-sensitive) detectors (PSDs) have a series of advantages over classical camera sensors, including good positioning accuracy for a fast response time and very simple signal-conditioning circuits. To test the performance of this kind of sensor for microrobotics, we made a comparative analysis between a precise but slow video camera and a custom-made fast PSD system applied to the tracking of a diffuse-reflectivity object transported by a pneumatic microconveyor called Smart-Surface. Until now, the fast system dynamics prevented full control of the smart surface by visual servoing, unless a very expensive high-frame-rate camera was used. We built and tested a custom, low-cost PSD-based embedded circuit, optically connected with a camera to a single objective by means of a beam splitter. A stroboscopic light source enhanced the resolution. The obtained results showed good linearity and a fast (over 500 frames per second) response time, which will enable future closed-loop control using the PSD. PMID:23223078
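    The speed and intensity-independence of a lateral-effect PSD follow from its position formula, a normalized difference of two electrode currents; a 1-D sketch:

```python
def psd_position_mm(i_left, i_right, active_length_mm):
    """1-D lateral-effect PSD: spot position is proportional to the
    normalized difference of the two electrode photocurrents, so overall
    light intensity cancels out of the result."""
    return 0.5 * active_length_mm * (i_right - i_left) / (i_left + i_right)
```

    Because the readout is just two (or four, in 2-D) analog currents rather than a pixel array, frame rates well beyond a standard camera's are easy to reach, which is the property exploited above.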

  1. The 1998-2000 SHADOZ (Southern Hemisphere ADditional OZonesondes) Tropical Ozone Climatology: Ozonesonde Precision, Accuracy and Station-to-Station Variability

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, Anne M.; McPeters, R. D.; Oltmans, S. J.; Schmidlin, F. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    As part of the SAFARI-2000 campaign, additional launches of ozonesondes were made at Irene, South Africa and at Lusaka, Zambia. These represent campaign augmentations to the SHADOZ database described in this paper. This network of 10 southern hemisphere tropical and subtropical stations, designated the Southern Hemisphere ADditional OZonesondes (SHADOZ) project and established from operational sites, provided over 1000 profiles from ozonesondes and radiosondes during the period 1998-2000. (Since that time, two more stations, one in southern Africa, have joined SHADOZ.) Archived data are available at http://code916.gsfc.nasa.gov/Data-services/shadoz. Uncertainties and accuracies within the SHADOZ ozone data set are evaluated by analyzing: (1) imprecisions in stratospheric ozone profiles and in methods of extrapolating ozone above balloon burst; (2) comparisons of column-integrated total ozone from sondes with total ozone from the Earth-Probe/TOMS (Total Ozone Mapping Spectrometer) satellite and ground-based instruments; (3) possible biases from station to station due to variations in ozonesonde characteristics. The key results are: (1) Ozonesonde precision is 5%; (2) Integrated total ozone column amounts from the sondes are in good agreement (2-10%) with independent measurements from ground-based instruments at five SHADOZ sites and with overpass measurements from the TOMS satellite (version 7 data). (3) Systematic variations in TOMS-sonde offsets and in ground-based-sonde offsets from station to station reflect biases in sonde technique as well as in satellite retrieval. Discrepancies are present in both stratospheric and tropospheric ozone. (4) There is evidence for a zonal wave-one pattern in total and tropospheric ozone, but not in stratospheric ozone.
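    Column-integrated total ozone from a sonde profile is a numerical integration of ozone number density over altitude; a trapezoidal sketch with hypothetical profile values (real processing also extrapolates above balloon burst, as noted above):

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K
DU = 2.687e20       # molecules per m^2 in one Dobson unit

def column_ozone_du(z_m, p_o3_pa, t_k):
    """Trapezoidal integration of an ozone profile: number density
    n = p_O3 / (k_B * T) at each level, integrated over altitude and
    expressed in Dobson units."""
    n = [p / (K_B * t) for p, t in zip(p_o3_pa, t_k)]
    col = sum(0.5 * (n[i] + n[i + 1]) * (z_m[i + 1] - z_m[i])
              for i in range(len(z_m) - 1))
    return col / DU
```

    Columns computed this way from each sonde are what get compared against TOMS overpasses and ground-based instruments in the 2-10% agreement figure quoted above.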

  2. Application of U-Pb ID-TIMS dating to the end-Triassic global crisis: testing the limits on precision and accuracy in a multidisciplinary whodunnit (Invited)

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Schaltegger, U.; Guex, J.; Bartolini, A.

    2010-12-01

    The ca. 201.4 Ma Triassic-Jurassic boundary is characterized by one of the most devastating mass-extinctions in Earth history, subsequent biologic radiation, rapid carbon cycle disturbances and enormous flood basalt volcanism (Central Atlantic Magmatic Province - CAMP). Considerable uncertainty remains regarding the temporal and causal relationship between these events though this link is important for understanding global environmental change under extreme stresses. We present ID-TIMS U-Pb zircon geochronology on volcanic ash beds from two marine sections that span the Triassic-Jurassic boundary and from the CAMP in North America. To compare the timing of the extinction with the onset of the CAMP, we assess the precision and accuracy of ID-TIMS U-Pb zircon geochronology by exploring random and systematic uncertainties, reproducibility, open-system behavior, and pre-eruptive crystallization of zircon. We find that U-Pb ID-TIMS dates on single zircons can be internally and externally reproducible at 0.05% of the age, consistent with recent experiments coordinated through the EARTHTIME network. Increased precision combined with methods alleviating Pb-loss in zircon reveals that these ash beds contain zircon that crystallized between 10^5 and 10^6 years prior to eruption. Mineral dates older than eruption ages are prone to affect all geochronologic methods and therefore new tools exploring this form of “geologic uncertainty” will lead to better time constraints for ash bed deposition. In an effort to understand zircon dates within the framework of a magmatic system, we analyzed zircon trace elements by solution ICPMS for the same volume of zircon dated by ID-TIMS. In one example we argue that zircon trace element patterns as a function of time result from a mix of xeno-, ante-, and autocrystic zircons in the ash bed, and approximate eruption age with the youngest zircon date. In a contrasting example from a suite of Cretaceous andesites, zircon trace elements
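    Internal reproducibility of single-zircon dates of the kind discussed here is conventionally tested with an inverse-variance weighted mean and its MSWD; a generic sketch with hypothetical dates:

```python
import math

def weighted_mean(dates_ma, sigmas_ma):
    """Inverse-variance weighted mean with MSWD; an MSWD near 1 indicates
    the scatter is consistent with the quoted analytical uncertainties,
    while MSWD >> 1 flags excess (e.g. geologic) scatter."""
    w = [1.0 / s ** 2 for s in sigmas_ma]
    mean = sum(wi * d for wi, d in zip(w, dates_ma)) / sum(w)
    sigma = math.sqrt(1.0 / sum(w))
    mswd = sum(((d - mean) / s) ** 2
               for d, s in zip(dates_ma, sigmas_ma)) / (len(dates_ma) - 1)
    return mean, sigma, mswd

# Hypothetical single-zircon dates (Ma) with 1-sigma uncertainties.
mean, sigma, mswd = weighted_mean([201.38, 201.42, 201.40, 201.41],
                                  [0.10, 0.10, 0.10, 0.10])
```

    Pre-eruptive zircon crystallization of the sort described above inflates the spread of dates beyond analytical scatter, which is why the youngest date, rather than the weighted mean, can be the better eruption-age estimate.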

  3. Precision control of an invasive ant on an ecologically sensitive tropical island: a principle with wide applicability.

    PubMed

    Gaigher, R; Samways, M J; Jolliffe, K G; Jolliffe, S

    2012-07-01

    Effective management of invasive ants is an important priority for many conservation programs but can be difficult to achieve, especially within ecologically sensitive habitats. This study assesses the efficacy and nontarget risk of a precision ant baiting method aiming to reduce a population of the invasive big-headed ant Pheidole megacephala on a tropical island of great conservation value. Area-wide application of a formicidal bait, delivered in bait stations, resulted in the rapid decline of P. megacephala across 8 ha. Effective suppression remained throughout the succeeding 11-month monitoring period. We detected no negative effects of baiting on nontarget arthropods. Indeed, species richness of nontarget ants and abundance of other soil-surface arthropods increased significantly after P. megacephala suppression. This bait station method minimized bait exposure to nontarget organisms and was cost effective and adaptable to target species density. However, it was only effective over short distances and required thorough bait placement. This method would therefore be most appropriate for localized P. megacephala infestations where the prevention of nontarget impacts is essential. The methodology used here would be applicable to other sensitive tropical environments. PMID:22908700

  4. High sensitivity and accuracy dissolved oxygen (DO) detection by using PtOEP/poly(MMA-co-TFEMA) sensing film.

    PubMed

    Zhang, Ke; Zhang, Honglin; Wang, Ying; Tian, Yanqing; Zhao, Jiupeng; Li, Yao

    2017-01-01

    Fluorinated acrylate polymers have received great interest in recent years due to extraordinary characteristics such as high oxygen permeability, good stability, low surface energy, and low refractive index. In this work, a platinum octaethylporphyrin/poly(methyl methacrylate-co-trifluoroethyl methacrylate) (PtOEP/poly(MMA-co-TFEMA)) oxygen-sensing film was prepared by immobilizing PtOEP in a poly(MMA-co-TFEMA) matrix, and its optical response was characterized based on the principle of luminescence quenching. It was found that the oxygen-sensing performance could be improved by optimizing the monomer ratio (MMA/TFEMA = 1:1) and the tributyl phosphate (TBP, 0.05 mL) and PtOEP (5 μg) contents. Under these conditions, the maximum quenching ratio I0/I100 of the oxygen-sensing film is about 8.16, and the Stern-Volmer equation is I0/I = 1.003 + 2.663[O2] (R² = 0.999), exhibiting a linear relationship, good photo-stability, and high sensitivity and accuracy. Finally, the synthesized PtOEP/poly(MMA-co-TFEMA) sensing film was used for DO detection in different water samples. PMID:27450122
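    The reported Stern-Volmer fit can be inverted to read dissolved oxygen from a pair of luminescence readings:

```python
def oxygen_from_intensities(i0, i, intercept=1.003, k_sv=2.663):
    """Invert the reported Stern-Volmer fit I0/I = 1.003 + 2.663 * [O2] to
    recover dissolved oxygen from the luminescence intensity without (i0)
    and with (i) quenching."""
    return (i0 / i - intercept) / k_sv
```

    The near-unity intercept and high R² quoted above are what make this simple linear inversion accurate across the film's working range.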

  5. KLY5 Kappabridge: High sensitivity susceptibility and anisotropy meter precisely decomposing in-phase and out-of-phase components

    NASA Astrophysics Data System (ADS)

    Pokorny, Petr; Pokorny, Jiri; Chadima, Martin; Hrouda, Frantisek; Studynka, Jan; Vejlupek, Josef

    2016-04-01

    The KLY5 Kappabridge provides, in addition to the standard measurement of in-phase magnetic susceptibility and its anisotropy, precise and calibrated measurement of out-of-phase susceptibility and its anisotropy. The phase angle is measured in "absolute" terms, i.e., without any residual phase error. The measured value of the out-of-phase susceptibility is independent of both the magnitude of the complex susceptibility and the intensity of the driving magnetic field. The precise decomposition of the complex susceptibility into the in-phase and out-of-phase components is verified through the presumably zero out-of-phase susceptibility of pure gadolinium oxide. The outstanding sensitivity in the measurement of weak samples is achieved by a newly developed drift-compensation routine in addition to the latest models of electronic devices. In rocks, soils, and environmental materials, in which it usually arises from viscous relaxation, the out-of-phase susceptibility can substitute for the more laborious frequency-dependent susceptibility routinely used in magnetic granulometry. Another new feature is measurement of the anisotropy of out-of-phase magnetic susceptibility (opAMS), which is also performed simultaneously and automatically with the standard (in-phase) AMS measurement. The opAMS enables the direct determination of the magnetic sub-fabrics of minerals that show non-zero out-of-phase susceptibility, whether due to viscous relaxation (ultrafine grains of magnetite or maghemite), weak-field hysteresis (titanomagnetite, hematite, pyrrhotite), or eddy currents (in conductive minerals). Using the 3D rotator, the instrument performs the measurement of both the AMS and opAMS with only one insertion of the specimen into the specimen holder. In addition, fully automated measurement of the field variation of the AMS and opAMS is possible. The instrument is able to measure, in conjunction with the CS-4 Furnace and CS-L Cryostat, the temperature variation of
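    Decomposing the complex susceptibility amounts to projecting the measured signal onto the driving field's phase; a minimal sketch:

```python
import math

def decompose_susceptibility(magnitude, phase_deg):
    """Split a complex susceptibility into in-phase (real) and out-of-phase
    (imaginary) components from its magnitude and phase angle. A lossless
    sample has zero phase angle and hence zero out-of-phase component."""
    phase = math.radians(phase_deg)
    return magnitude * math.cos(phase), magnitude * math.sin(phase)
```

    This is why removing residual instrumental phase error matters: any uncorrected phase offset leaks a fraction of the (usually much larger) in-phase signal into the out-of-phase channel.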

  6. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
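    The reported blank masses feed a simple carbon-mass balance when correcting a sample's fraction modern (Fm); a sketch that assumes the extraneous modern carbon has Fm = 1 (an approximation):

```python
def blank_corrected_fm(fm_measured, sample_ug,
                       modern_blank_ug=0.8, fossil_blank_ug=0.67,
                       fm_modern_blank=1.0):
    """Mass-balance blank correction: the measured Fm mixes the sample's
    carbon with extraneous modern carbon (Fm ~ 1) and fossil carbon (Fm = 0)
    picked up during processing. Solves the balance for the sample's own Fm:
    fm_measured * (m + mb + fb) = fm_sample * m + fm_modern_blank * mb."""
    total_ug = sample_ug + modern_blank_ug + fossil_blank_ug
    return (fm_measured * total_ug
            - fm_modern_blank * modern_blank_ug) / sample_ug
```

    Because the blanks are fixed masses near 1 μg C, their relative impact, and hence the correction uncertainty, grows rapidly as sample size shrinks, which is why the blank assessment above is done as a function of sample size.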

  7. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
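The blank assessment described in this record reduces to a two-component mass balance on the measured carbon. A minimal Python sketch, assuming a constant-mass blank model; the function names, the contemporary reference value of F14C = 1.06, and the example numbers are illustrative assumptions, not values prescribed by the abstract (only the blank masses, 0.8 and 0.67 μg C, come from it):

```python
def blank_corrected_f14c(f_meas, m_meas, f_mod_blank=1.0,
                         m_mod_blank=0.8, m_fossil_blank=0.67):
    """Correct a measured fraction-modern (F14C) value for constant
    modern and fossil extraneous-carbon blanks via mass balance.

    f_meas: measured F14C of sample plus blanks
    m_meas: total measured carbon mass (ug C)
    Blank masses default to the setup values reported in the abstract
    (0.8 ug C modern, 0.67 ug C fossil; fossil carbon has F14C = 0).
    """
    m_sample = m_meas - m_mod_blank - m_fossil_blank
    # measured 14C = sample 14C + modern-blank 14C (+ none from fossil blank)
    return (f_meas * m_meas - f_mod_blank * m_mod_blank) / m_sample

def fossil_fraction(f14c_sample, f14c_contemporary=1.06):
    """Apportion carbon into fossil vs non-fossil sources.
    f14c_contemporary is an assumed F14C of fully modern carbon;
    the appropriate reference depends on the sampling year."""
    return 1.0 - f14c_sample / f14c_contemporary
```

For example, a 50 μg C sample with true F14C = 0.5 measured together with the default blanks recovers F14C = 0.5 after correction.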

  8. The effects of temporal-precision and time-minimization constraints on the spatial and temporal accuracy of aimed hand movements.

    PubMed

    Carlton, L G

    1994-03-01

Discrete aimed hand movements, made by subjects given temporal-accuracy and time-minimization task instructions, were compared. Movements in the temporal-accuracy task were made to a point target with a goal movement time of 400 ms. A circular target was then manufactured that incorporated the measured spatial errors from the temporal-accuracy task, and subjects attempted to contact the target with a minimum movement time and without missing the circular target (time-minimization task instructions). This procedure resulted in equal movement amplitude and approximately equal spatial accuracy for the two task instructions. Movements under the time-minimization instructions were completed rapidly (M = 307 ms) without target misses, and tended to be made up of two submovements. In contrast, movements under temporal-accuracy instructions were made more slowly (M = 397 ms), matching the goal movement time, and were typically characterized by a single submovement. These data support the hypothesis that movement times, at a fixed movement amplitude versus target width ratio, decrease as the number of submovements increases, and that movements produced under temporal-accuracy and time-minimization instructions have different control characteristics. These control differences are related to the linear and logarithmic speed-accuracy relations observed for temporal-accuracy and time-minimization tasks, respectively. PMID:15757833

  9. CP Violation and Beauty Decays - A Case Study of High Impact, High Sensitivity and Even High Precision Physics

    NASA Astrophysics Data System (ADS)

    Bigi, I. I.

The narrative of these lectures contains three main threads: (i) CP violation, despite having so far been observed only in the decays of neutral kaons, has been recognized as a phenomenon of truly fundamental importance. The KM ansatz constitutes the minimal implementation of CP violation: without requiring unknown degrees of freedom it can reproduce the known CP phenomenology in a nontrivial way. (ii) The physics of beauty hadrons - in particular their weak decays - opens a novel window onto fundamental dynamics: they usher in a new quark family (presumably the last one); they allow us to determine fundamental quantities of the Standard Model like the b quark mass and the CKM parameters V(cb), V(ub), V(ts) and V(td); they exhibit speedy or even rapid B0-B̄0 oscillations. (iii) Heavy Quark Expansions allow us to treat B decays with an accuracy that would not have been thought possible a mere decade ago. These three threads are joined together in the following manner: (a) Huge CP asymmetries are predicted in B decays, which represents a decisive test of the KM paradigm for CP violation. (b) Some of these predictions are made with high parametric reliability, which (c) can be translated into numerical precision through the judicious employment of novel theoretical technologies. (d) Beauty decays thus provide us with a rich and promising field to search for New Physics and even study some of its salient features. At the end of it there might quite possibly be a New Paradigm for High Energy Physics. There will be some other threads woven into this tapestry: electric dipole moments, and CP violation in other strange and in charm decays.

  10. A Sensor Array Using Multi-functional Field-effect Transistors with Ultrahigh Sensitivity and Precision for Bio-monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Do-Il; Quang Trung, Tran; Hwang, Byeong-Ung; Kim, Jin-Su; Jeon, Sanghun; Bae, Jihyun; Park, Jong-Jin; Lee, Nae-Eung

    2015-07-01

    Mechanically adaptive electronic skins (e-skins) emulate tactition and thermoception by cutaneous mechanoreceptors and thermoreceptors in human skin, respectively. When exposed to multiple stimuli including mechanical and thermal stimuli, discerning and quantifying precise sensing signals from sensors embedded in e-skins are critical. In addition, different detection modes for mechanical stimuli, rapidly adapting (RA) and slowly adapting (SA) mechanoreceptors in human skin are simultaneously required. Herein, we demonstrate the fabrication of a highly sensitive, pressure-responsive organic field-effect transistor (OFET) array enabling both RA- and SA- mode detection by adopting easily deformable, mechano-electrically coupled, microstructured ferroelectric gate dielectrics and an organic semiconductor channel. We also demonstrate that the OFET array can separate out thermal stimuli for thermoreception during quantification of SA-type static pressure, by decoupling the input signals of pressure and temperature. Specifically, we adopt piezoelectric-pyroelectric coupling of highly crystalline, microstructured poly(vinylidene fluoride-trifluoroethylene) gate dielectric in OFETs with stimuli to allow monitoring of RA- and SA-mode responses to dynamic and static forcing conditions, respectively. This approach enables us to apply the sensor array to e-skins for bio-monitoring of humans and robotics.

  11. A Sensor Array Using Multi-functional Field-effect Transistors with Ultrahigh Sensitivity and Precision for Bio-monitoring

    PubMed Central

    Kim, Do-Il; Quang Trung, Tran; Hwang, Byeong-Ung; Kim, Jin-Su; Jeon, Sanghun; Bae, Jihyun; Park, Jong-Jin; Lee, Nae-Eung

    2015-01-01

    Mechanically adaptive electronic skins (e-skins) emulate tactition and thermoception by cutaneous mechanoreceptors and thermoreceptors in human skin, respectively. When exposed to multiple stimuli including mechanical and thermal stimuli, discerning and quantifying precise sensing signals from sensors embedded in e-skins are critical. In addition, different detection modes for mechanical stimuli, rapidly adapting (RA) and slowly adapting (SA) mechanoreceptors in human skin are simultaneously required. Herein, we demonstrate the fabrication of a highly sensitive, pressure-responsive organic field-effect transistor (OFET) array enabling both RA- and SA- mode detection by adopting easily deformable, mechano-electrically coupled, microstructured ferroelectric gate dielectrics and an organic semiconductor channel. We also demonstrate that the OFET array can separate out thermal stimuli for thermoreception during quantification of SA-type static pressure, by decoupling the input signals of pressure and temperature. Specifically, we adopt piezoelectric-pyroelectric coupling of highly crystalline, microstructured poly(vinylidene fluoride-trifluoroethylene) gate dielectric in OFETs with stimuli to allow monitoring of RA- and SA-mode responses to dynamic and static forcing conditions, respectively. This approach enables us to apply the sensor array to e-skins for bio-monitoring of humans and robotics. PMID:26223845

  12. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many
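The distinction this record draws between precision (scatter of repeated sonications) and accuracy (offset from the image-guided target) can be illustrated with a short sketch. The function and the particular statistics chosen here (mean Euclidean error for accuracy, RMS scatter about the mean error for precision) are illustrative assumptions, not the authors' exact analysis:

```python
import numpy as np

def targeting_stats(targets, hits):
    """Separate accuracy from precision for 2-D targeting data.

    targets, hits: (N, 2) arrays of intended and achieved positions (mm).
    accuracy  = mean Euclidean distance between hit and target
                (captures systematic plus random error)
    precision = RMS radial scatter of the errors about their own mean
                (captures reproducibility of the positioning alone)
    """
    err = np.asarray(hits, float) - np.asarray(targets, float)
    accuracy = float(np.linalg.norm(err, axis=1).mean())
    centered = err - err.mean(axis=0)
    precision = float(np.sqrt((centered ** 2).sum(axis=1).mean()))
    return accuracy, precision
```

A purely systematic 0.5 mm offset, for instance, yields accuracy 0.5 mm but precision 0: the sonications are reproducible even though they miss the target.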

  13. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  14. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
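A Monte Carlo power analysis of the kind described can be sketched as follows; all parameters (number of years, coefficient of variation of the counts, the log-linear trend test) are hypothetical stand-ins, not the study's actual survey model:

```python
import numpy as np

def power_to_detect_decline(n_years=15, decline=0.25, cv=0.3,
                            n_sims=2000, seed=1):
    """Monte Carlo power sketch: simulate a log-linear abundance decline
    of `decline` over `n_years`, add lognormal observation error with
    coefficient of variation `cv`, and count how often a two-sided
    t-test on the regression slope (alpha = 0.05) flags a decrease."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years, dtype=float)
    true_log = np.log(1.0 - decline) * t / (n_years - 1)  # log abundance index
    sigma = np.sqrt(np.log(1.0 + cv ** 2))  # lognormal sd matching the CV
    t_crit = 2.160  # t(0.975, df=13); valid for the default n_years = 15
    tc = t - t.mean()
    hits = 0
    for _ in range(n_sims):
        y = true_log + rng.normal(0.0, sigma, n_years)
        slope = (tc @ y) / (tc @ tc)          # OLS slope (tc sums to zero)
        resid = y - y.mean() - slope * tc
        se = np.sqrt((resid @ resid) / ((n_years - 2) * (tc @ tc)))
        if slope / se < -t_crit:              # significant negative trend
            hits += 1
    return hits / n_sims
```

With a realistic sampling CV the power to detect a 25% decline over 15 annual surveys is modest, consistent with the abstract's conclusion that detecting moderate changes requires considerable effort; shrinking the observation error raises the power sharply.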

  15. Intra- and inter-laboratory reproducibility and accuracy of the LuSens assay: A reporter gene-cell line to detect keratinocyte activation by skin sensitizers.

    PubMed

    Ramirez, Tzutzuy; Stein, Nadine; Aumann, Alexandra; Remus, Tina; Edwards, Amber; Norman, Kimberly G; Ryan, Cindy; Bader, Jackie E; Fehr, Markus; Burleson, Florence; Foertsch, Leslie; Wang, Xiaohong; Gerberick, Frank; Beilstein, Paul; Hoffmann, Sebastian; Mehling, Annette; van Ravenzwaay, Bennard; Landsiedel, Robert

    2016-04-01

    Several non-animal methods are now available to address the key events leading to skin sensitization as defined by the adverse outcome pathway. The KeratinoSens assay addresses the cellular event of keratinocyte activation and is a method accepted under OECD TG 442D. In this study, the results of an inter-laboratory evaluation of the "me-too" LuSens assay, a bioassay that uses a human keratinocyte cell line harboring a reporter gene construct composed of the rat antioxidant response element (ARE) of the NADPH:quinone oxidoreductase 1 gene and the luciferase gene, are described. Earlier in-house validation with 74 substances showed an accuracy of 82% in comparison to human data. When used in a battery of non-animal methods, even higher predictivity is achieved. To meet European validation criteria, a multicenter study was conducted in 5 laboratories. The study was divided into two phases, to assess 1) transferability of the method, and 2) reproducibility and accuracy. Phase I was performed by testing 8 non-coded test substances; the results showed a good transferability to naïve laboratories even without on-site training. Phase II was performed with 20 coded test substances (performance standards recommended by OECD, 2015). In this phase, the intra- and inter-laboratory reproducibility as well as accuracy of the method was evaluated. The data demonstrate a remarkable reproducibility of 100% and an accuracy of over 80% in identifying skin sensitizers, indicating a good concordance with in vivo data. These results demonstrate good transferability, reliability and accuracy of the method thereby achieving the standards necessary for use in a regulatory setting to detect skin sensitizers. PMID:26796489
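The accuracy and predictivity figures quoted above are standard confusion-matrix statistics. A minimal sketch of how such values are computed from paired in vitro calls and in vivo reference classifications (the encoding and the example data are hypothetical, not the study's 20 coded substances):

```python
def classification_metrics(truth, pred):
    """Accuracy, sensitivity and specificity for paired binary calls
    (1 = sensitizer, 0 = non-sensitizer)."""
    pairs = list(zip(truth, pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    accuracy = (tp + tn) / len(pairs)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return accuracy, sensitivity, specificity
```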

  16. Compact diffraction grating laser wavemeter with sub-picometer accuracy and picowatt sensitivity using a webcam imaging sensor.

    PubMed

    White, James D; Scholten, Robert E

    2012-11-01

    We describe a compact laser wavelength measuring instrument based on a small diffraction grating and a consumer-grade webcam. With just 1 pW of optical power, the instrument achieves absolute accuracy of 0.7 pm, sufficient to resolve individual hyperfine transitions of the rubidium absorption spectrum. Unlike interferometric wavemeters, the instrument clearly reveals multimode laser operation, making it particularly suitable for use with external cavity diode lasers and atom cooling and trapping experiments. PMID:23206048

  17. Compact diffraction grating laser wavemeter with sub-picometer accuracy and picowatt sensitivity using a webcam imaging sensor

    NASA Astrophysics Data System (ADS)

    White, James D.; Scholten, Robert E.

    2012-11-01

    We describe a compact laser wavelength measuring instrument based on a small diffraction grating and a consumer-grade webcam. With just 1 pW of optical power, the instrument achieves absolute accuracy of 0.7 pm, sufficient to resolve individual hyperfine transitions of the rubidium absorption spectrum. Unlike interferometric wavemeters, the instrument clearly reveals multimode laser operation, making it particularly suitable for use with external cavity diode lasers and atom cooling and trapping experiments.

  18. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  19. Precision Fabrication of a Large-Area Sinusoidal Surface Using a Fast-Tool-Servo Technique ─Improvement of Local Fabrication Accuracy

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Tano, Makoto; Araki, Takeshi; Kiyono, Satoshi

This paper describes a diamond turning fabrication system for a sinusoidal grid surface. The wavelength and amplitude of the sinusoidal wave in each direction are 100 µm and 100 nm, respectively. The fabrication system, which is based on a fast-tool-servo (FTS), has the ability to generate the angle grid surface over an area of φ150 mm. This paper focuses on the improvement of the local fabrication accuracy. The areas considered are each approximately 1 × 1 mm, and can be imaged by an interference microscope. Specific fabrication errors of the manufacturing process, caused by the round nose geometry of the diamond cutting tool and the data digitization, are successfully identified by Discrete Fourier Transform of the microscope images. Compensation processes are carried out to reduce the errors. As a result, the fabrication errors in local areas of the angle grid surface are reduced to approximately 1/10 of their original values.
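The target geometry can be written as orthogonal sinusoids with the quoted 100 µm wavelength and 100 nm amplitude. The sketch below assumes a simple sum-of-sines form, which the abstract does not state explicitly:

```python
import numpy as np

def angle_grid_surface(nx=256, ny=256, pitch_um=100.0, amp_nm=100.0):
    """Hypothetical sinusoidal grid profile over one period:
    z(x, y) = A sin(2*pi*x/lambda) + A sin(2*pi*y/lambda),
    with lambda = 100 um and A = 100 nm as quoted in the abstract.
    Returns heights in nanometres on an (ny, nx) grid."""
    x = np.linspace(0.0, pitch_um, nx, endpoint=False)
    y = np.linspace(0.0, pitch_um, ny, endpoint=False)
    X, Y = np.meshgrid(x, y)
    return amp_nm * (np.sin(2 * np.pi * X / pitch_um)
                     + np.sin(2 * np.pi * Y / pitch_um))
```

Under this form the peak-to-mean excursion is 2A = 200 nm (where both sinusoids crest together) and the surface averages to zero over a full period.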

  20. Preliminary assessment of the accuracy and precision of TOPEX/POSEIDON altimeter data with respect to the large-scale ocean circulation

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1994-01-01

TOPEX/POSEIDON sea surface height measurements are examined for quantitative consistency with known elements of the oceanic general circulation and its variability. Project-provided corrections were accepted but are in effect tested as part of the overall results. The ocean was treated as static over each 10-day repeat cycle and maps constructed of the absolute sea surface topography from simple averages in 2 deg x 2 deg bins. A hybrid geoid model formed from a combination of the recent Joint Gravity Model-2 and the project-provided Ohio State University geoid was used to estimate the absolute topography in each 10-day period. Results are examined in terms of the annual average, seasonal average, seasonal variations, and variations near the repeat period. Conclusions are as follows: the orbit error is now difficult to observe, having been reduced to a level at or below the level of other error sources; the geoid dominates the error budget of the estimates of the absolute topography; the estimated seasonal cycle is consistent with prior estimates; shorter-period variability is dominated on the largest scales by an oscillation near 50 days in the spherical harmonics Y_1^m(theta, lambda) with an amplitude near 10 cm, close to the simplest alias of the M_2 tide. This spectral peak and others visible in the periodograms support the hypothesis that the largest remaining time-dependent errors lie in the tidal models. Though discrepancies attributed to the geoid are within the formal uncertainties of the geoid estimates, their removal is urgent for circulation studies. Current gross accuracy of the TOPEX/POSEIDON mission is in the range of 5-10 cm, distributed over a broad band of frequencies and wavenumbers. In finite bands, accuracies approach the 1-cm level, and expected improvements arising from extended mission duration should reduce these numbers by nearly an order of magnitude.

  1. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  2. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  3. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  4. Tailoring Catalytic Activity of Pt Nanoparticles Encapsulated Inside Dendrimers by Tuning Nanoparticle Sizes with Subnanometer Accuracy for Sensitive Chemiluminescence-Based Analyses.

    PubMed

    Lim, Hyojung; Ju, Youngwon; Kim, Joohoon

    2016-05-01

    Here, we report the size-dependent catalysis of Pt dendrimer-encapsulated nanoparticles (DENs) having well-defined sizes over the range of 1-3 nm with subnanometer accuracy for the highly enhanced chemiluminescence of the luminol/H2O2 system. This size-dependent catalysis is ascribed to the differences in the chemical states of the Pt DENs as well as in their surface areas depending on their sizes. Facile and versatile applications of the Pt DENs in diverse oxidase-based assays are demonstrated as efficient catalysts for sensitive chemiluminescence-based analyses. PMID:27032992

  5. Precise Determination of Enantiomeric Excess by a Sensitivity Enhanced Two-Dimensional Band-Selective Pure-Shift NMR.

    PubMed

    Rachineni, Kavitha; Kakita, Veera Mohana Rao; Dayaka, Satyanarayana; Vemulapalli, Sahithya Phani Babu; Bharatam, Jagadeesh

    2015-07-21

Unambiguous identification and precise quantification of enantiomers in chiral mixtures is crucial for enantiomer specific synthesis as well as chemical analysis. The task is often challenging for mixtures with high enantiomeric excess and for complex molecules with strong (1)H-(1)H scalar (J) coupling networks. The recent advancements in (1)H-(1)H decoupling strategies to suppress the J-interactions offered new possibilities for NMR-based unambiguous discrimination and quantification of enantiomers. Herein, we discuss a high resolution two-dimensional pure-shift zCOSY NMR method with homonuclear band-selective decoupling in both the F1 and F2 dimensions (F1F2-HOBS-zCOSY). This advanced method shows a sharp improvement in resolution over the other COSY methods and also eliminates the problems associated with the overlapping decoupling sidebands. The efficacy of this method has been exploited for precise quantification of enantiomeric excess (ee) ratio (R/S) up to 99:1 in the presence of very low concentrations of chiral lanthanide shift reagents (CLSR) or chiral solvating agents (CSA). The F1F2-HOBS-zCOSY is simple and can be easily implemented on any modern NMR spectrometer, as a routine analytical tool. PMID:26091767
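The quoted R/S ratio maps onto enantiomeric excess through a simple ratio of integrated peak areas; a one-line sketch (the function name is illustrative):

```python
def enantiomeric_excess(i_r, i_s):
    """Enantiomeric excess from integrated NMR peak areas of the
    R and S resonances: ee = |R - S| / (R + S)."""
    return abs(i_r - i_s) / (i_r + i_s)
```

An R/S ratio of 99:1, as in the abstract, corresponds to ee = 98%.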

  6. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  7. Living cell dry mass measurement using quantitative phase imaging with quadriwave lateral shearing interferometry: an accuracy and sensitivity discussion

    NASA Astrophysics Data System (ADS)

    Aknoun, Sherazade; Savatier, Julien; Bon, Pierre; Galland, Frédéric; Abdeladim, Lamiae; Wattellier, Benoit; Monneret, Serge

    2015-12-01

    Single-cell dry mass measurement is used in biology to follow cell cycle, to address effects of drugs, or to investigate cell metabolism. Quantitative phase imaging technique with quadriwave lateral shearing interferometry (QWLSI) allows measuring cell dry mass. The technique is very simple to set up, as it is integrated in a camera-like instrument. It simply plugs onto a standard microscope and uses a white light illumination source. Its working principle is first explained, from image acquisition to automated segmentation algorithm and dry mass quantification. Metrology of the whole process, including its sensitivity, repeatability, reliability, sources of error, over different kinds of samples and under different experimental conditions, is developed. We show that there is no influence of magnification or spatial light coherence on dry mass measurement; effect of defocus is more critical but can be calibrated. As a consequence, QWLSI is a well-suited technique for fast, simple, and reliable cell dry mass study, especially for live cells.
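Dry mass is conventionally obtained from a quantitative phase image by integrating the optical path difference (OPD) over the cell and dividing by the specific refraction increment. The sketch below assumes the commonly used value α ≈ 0.18 µm³/pg, which is an assumption, not a number from the abstract:

```python
import numpy as np

def dry_mass_pg(opd_um, pixel_area_um2, alpha_um3_per_pg=0.18):
    """Cell dry mass from a quantitative-phase image via the standard
    relation m = (1/alpha) * integral of OPD over the cell area.

    opd_um: OPD map in micrometres (background subtracted, zero
            outside the cell); pixel_area_um2: area of one pixel.
    alpha ~= 0.18 um^3/pg is a commonly assumed specific refraction
    increment for cellular dry matter (an illustrative value)."""
    opd = np.asarray(opd_um, float)
    return float(opd.sum() * pixel_area_um2 / alpha_um3_per_pg)
```

For example, a uniform 0.09 µm OPD over 100 pixels of 1 µm² each gives 0.09 × 100 / 0.18 = 50 pg.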

  8. Broadband and highly sensitive comb-assisted cavity ring down spectroscopy of CO near 1.57 μm with sub-MHz frequency accuracy

    NASA Astrophysics Data System (ADS)

    Mondelain, D.; Sala, T.; Kassi, S.; Romanini, D.; Marangoni, M.; Campargue, A.

    2015-03-01

    A self-referenced frequency comb has been combined with a cavity ring down (CRD) spectrometer to achieve a sub-MHz accuracy on the derived positions of the absorption lines. The frequency emitted by the distributed feedback (DFB) laser diode used in the spectrometer was obtained from the frequency of its beat note with the closest mode of the frequency comb. This delivers excellent frequency accuracy over a broad spectral region with sensitivity (noise equivalent absorption) of 1×10-11 cm-1 Hz-1/2. This setup is used to measure the absorption spectrum of CO over a wide range corresponding to the 3-0 band (6172.5-6418.0 cm-1). Accurate values of line centers are measured for a total of 184 lines of four CO isotopologues, namely 12C16O, 13C16O, 12C18O and 12C17O present in "natural" abundances in our sample. The measurements include the first extensive study of the 3-0 band of 12C18O and 12C17O, of the 4-1 hot band of 12C16O and the detection of new high-J transitions of the 3-0 band of 12C16O up to J=34. The line centers were corrected for the self-pressure shift and used to derive the upper state spectroscopic parameters. The obtained standard deviation of about 300 kHz and 500 kHz for the 3-0 band of 12C16O and of the minor isotopologues, respectively, is a good estimate of the average accuracy of the reported line centers. The resulting 3-0 line list of 12C16O provided as Supplementary material includes 69 reference line positions with a 300 kHz accuracy for the 6183-6418 cm-1 region.
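
    The beat-note referencing described here amounts to reconstructing the DFB laser's absolute frequency from the comb parameters. A hedged sketch (the function name and example numbers are invented; in a real experiment the mode number n and the two signs must be resolved independently, e.g. with a wavemeter):

```python
def absolute_frequency_hz(n, f_rep_hz, f_ceo_hz, f_beat_hz,
                          sign_ceo=+1, sign_beat=+1):
    """Absolute optical frequency of a CW laser referenced to a
    self-referenced frequency comb:
        f = (+/-)f_ceo + n * f_rep (+/-) f_beat
    n is the index of the nearest comb mode; f_ceo is the carrier-
    envelope offset; f_beat is the measured beat note."""
    return sign_ceo * f_ceo_hz + n * f_rep_hz + sign_beat * f_beat_hz

# Illustrative numbers only: mode 760,000 of a 250 MHz comb,
# f_ceo = 20 MHz, beat note at 35 MHz.
f = absolute_frequency_hz(760_000, 250e6, 20e6, 35e6)
```

Since f_rep and f_ceo are locked to a radio-frequency standard, the uncertainty of f is dominated by the beat-note measurement, which is how sub-MHz line-position accuracy becomes possible.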

  9. Improving sensitivity and accuracy of pore structural characterisation using scanning curves in integrated gas sorption and mercury porosimetry experiments.

    PubMed

    Hitchcock, Iain; Lunel, Marie; Bakalis, Serafim; Fletcher, Robin S; Holt, Elizabeth M; Rigby, Sean P

    2014-03-01

    Gas sorption scanning curves are increasingly used as a means to supplement the pore structural information implicit in boundary adsorption and desorption isotherms to obtain more detailed pore space descriptors for disordered solids. However, co-operative adsorption phenomena set fundamental limits to the level of information that conventional scanning curve experiments can deliver. In this work, we use the novel integrated gas sorption and mercury porosimetry technique to show that crossing scanning curves are obtained for some through ink-bottle pores within a disordered solid, thence demonstrating that their shielded pore bodies are undetectable using conventional scanning experiments. While gas sorption alone was not sensitive enough to detect these pore features, the integrated technique was, and, thence, this synergistic method is more powerful than the two individual techniques applied separately. The integrated method also showed how the appropriate filling mechanism equation (e.g. meniscus geometry for capillary condensation equations), to use to convert filling pressure to pore size, varied with position along the adsorption branch, thereby enabling avoidance of the further systematic error introduced into PSDs by assuming a single filling mechanism for disordered solids. PMID:24407663
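
    The dependence of the pressure-to-size conversion on meniscus geometry can be illustrated with the classical Kelvin equation, where a hemispherical meniscus (pore emptying) carries twice the curvature of a cylindrical one (adsorption in an open cylinder). A minimal sketch for nitrogen at 77 K; the function name and the neglect of the adsorbed t-layer are simplifying assumptions, not this paper's full treatment:

```python
import math

def kelvin_radius_nm(p_rel, gamma=8.85e-3, molar_vol=3.47e-5,
                     T=77.0, meniscus="hemispherical"):
    """Kelvin core radius (nm) at relative pressure p_rel for N2 at 77 K.

    gamma: liquid surface tension (N/m); molar_vol: liquid molar
    volume (m^3/mol). Hemispherical meniscus:
        r = -2 * gamma * Vm / (R * T * ln(p_rel));
    a cylindrical meniscus halves the curvature term.
    """
    R = 8.314  # J/(mol K)
    factor = 2.0 if meniscus == "hemispherical" else 1.0
    r_m = -factor * gamma * molar_vol / (R * T * math.log(p_rel))
    return r_m * 1e9
```

Choosing the wrong meniscus geometry thus doubles (or halves) the inferred pore radius, which is the systematic error the integrated gas-sorption/porosimetry method helps to avoid.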

  10. Precision Pointing Control System (PPCS) star tracker test

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Tests performed on the TRW precision star tracker are described. The unit tested was a two-axis gimballed star tracker designed to provide star LOS data to an accuracy of 1 to 2 arc-sec. The tracker features a unique bearing system and utilizes thermal and mechanical symmetry techniques to achieve high precision which can be demonstrated in a one-g environment. The test program included a laboratory evaluation of tracker functional operation, sensitivity, repeatability, and thermal stability.

  11. A sensitive identification of warm debris disks in the solar neighborhood through precise calibration of saturated wise photometry

    NASA Astrophysics Data System (ADS)

    Patel, R.; Metchev, S.; Heinze, A.

    2014-09-01

    We present a sensitive search for WISE W3 (12 μm) and W4 (22 μm) excesses from warm optically thin dust around Hipparcos main sequence stars within 75 pc. Our approach uses all combinations of WISE colors to empirically identify high confidence (>99.5%) excesses as small as 8% above the stellar photosphere in either W3 or W4. The high fidelity of our detections even at faint excess levels stems from our well-behaved empirical calibration of WISE photospheric colors, and from the removal of stars with discrepant photometry between the WISE All-Sky Catalog and the WISE Single Exposure Source Table. In addition, we derive empirical corrections to saturated stellar photometry in W1 and W2, which allows us to extend our debris disk identification to previously overlooked saturated stars in WISE. We have identified over 200 Hipparcos debris disk-host stars within 75 pc, of which 110 are new excess identifications at 10-30 μm. Altogether, we have expanded the number of known debris disks within 75 pc by 114, or 29%, compared to previous studies on IRAS, ISO, Spitzer, AKARI, and WISE.
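
    The excess-detection logic can be sketched as a signal-to-noise test on the flux above the predicted photosphere (a generic illustration; the function name and the example thresholds are assumptions, not the paper's actual empirical calibration):

```python
def excess_significance(f_obs, f_obs_err, f_phot, f_phot_err):
    """Fractional excess above the predicted photospheric flux and its
    signal-to-noise, combining observed and model uncertainties in
    quadrature. A disc candidate might then be required to pass, say,
    SNR > 3 and a fractional excess of a few percent."""
    excess = f_obs - f_phot
    sigma = (f_obs_err**2 + f_phot_err**2) ** 0.5
    return excess / f_phot, excess / sigma

# An 8% excess measured with 1% photometric and 2% calibration error:
frac, snr = excess_significance(1.08, 0.01, 1.00, 0.02)
```

The paper's 8%-level sensitivity corresponds to keeping f_phot_err small through a well-behaved photospheric color calibration, since sigma is dominated by whichever error term is larger.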

  12. Precision and sensitivity of the measurement of 15N enrichment in D-alanine from bacterial cell walls using positive/negative ion mass spectrometry

    NASA Technical Reports Server (NTRS)

    Tunlid, A.; Odham, G.; Findlay, R. H.; White, D. C.

    1985-01-01

    Sensitive detection of cellular components from specific groups of microbes can be utilized as 'signatures' in the examination of microbial consortia from soils, sediments or biofilms. Utilizing capillary gas chromatography/mass spectrometry and stereospecific derivatizing agents, D-alanine, a component localized in the prokaryotic (bacterial) cell wall, can be detected reproducibly. Enrichments of D-[15N]alanine in E. coli grown with [15N]ammonia can be determined with a precision of 1.0 atom%. Chemical ionization with methane gas and the detection of negative ions (M - HF)- and (M - F or M + H - HF)- formed from the heptafluorobutyryl D-2-butanol ester of D-alanine allowed as little as 8 pg (90 fmol) to be detected reproducibly. This method can be utilized to define the metabolic activity, in terms of 15N incorporation, at the level of 10^3-10^4 cells, as a function of the 15N/14N ratio.
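
    The 15N enrichment itself follows from the measured isotope-abundance ratio by a one-line conversion (a generic sketch; the function name is an assumption):

```python
def atom_percent_15n(ratio_15_to_14):
    """Convert a measured 15N/14N ion-abundance ratio into atom% 15N:
    atom% = 100 * 15N / (15N + 14N)."""
    return 100.0 * ratio_15_to_14 / (1.0 + ratio_15_to_14)

# Natural abundance, ratio ~ 0.00367, gives ~0.37 atom% 15N;
# enrichment above this baseline reflects 15N incorporation.
natural = atom_percent_15n(0.00367)
```

The 1.0 atom% precision quoted in the record bounds how small an enrichment above the natural baseline the mass-spectrometric ratio measurement can resolve.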

  13. What can we learn from European continuous atmospheric CO2 measurements to quantify regional fluxes - Part 2: Sensitivity of flux accuracy to inverse setup

    NASA Astrophysics Data System (ADS)

    Carouge, C.; Peylin, P.; Rayner, P. J.; Bousquet, P.; Chevallier, F.; Ciais, P.

    2008-10-01

    An inverse model using atmospheric CO2 observations from a European network of stations to reconstruct daily CO2 fluxes and their uncertainties over Europe at 50 km resolution has been developed within a Bayesian framework. We use the pseudo-data or identical twin approach in which we try to recover known fluxes using a range of perturbations to the input. In this second part, the focus is put on the sensitivity of flux accuracy to the inverse setup, varying the prior flux errors, the pseudo-data errors and the network of stations. We show that, under a range of assumptions about prior error and data error we can recover fluxes reliably at the scale of 1000 km and 10 days. At smaller scales the performance is highly sensitive to details of the inverse set-up. The use of temporal correlations in the flux domain appears to be of the same importance as the spatial correlations. We also note that the use of simple, isotropic correlations on the prior flux errors is more reliable than the use of apparently physically-based errors. Finally, increasing the European atmospheric network density improves the area with significant error reduction in the flux retrieval.
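
    The Bayesian framework referred to here can be sketched with the standard analytic update for a linear observation operator (a minimal illustration with invented names; the paper's actual system is far larger and builds spatial and temporal correlations into the prior error covariance B):

```python
import numpy as np

def bayesian_inversion(x_prior, B, H, y, R):
    """Analytic Bayesian (best linear unbiased) flux update:
        x_post = x_prior + B H^T (H B H^T + R)^-1 (y - H x_prior)
    B: prior flux-error covariance (correlations encode the prior
       assumptions the paper varies); R: data-error covariance;
    H: transport/observation operator mapping fluxes to concentrations.
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    x_post = x_prior + K @ (y - H @ x_prior)
    A_post = (np.eye(len(x_prior)) - K @ H) @ B  # posterior covariance
    return x_post, A_post

# One flux, one observation, equal prior and data error: the posterior
# sits halfway between prior (0) and observation (1).
x_post, A_post = bayesian_inversion(
    np.array([0.0]), np.array([[1.0]]),
    np.array([[1.0]]), np.array([1.0]), np.array([[1.0]]))
```

The "error reduction" quoted in such studies is the shrinkage from the prior variance (diagonal of B) to the posterior variance (diagonal of A_post).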

  14. Precision and accuracy in fluorescent short tandem repeat DNA typing: assessment of benefits imparted by the use of allelic ladders with the AmpF/STR Profiler Plus kit.

    PubMed

    Leclair, Benoît; Frégeau, Chantal J; Bowen, Kathy L; Fourney, Ron M

    2004-03-01

    Base-calling precision of short tandem repeat (STR) allelic bands on dynamic slab-gel electrophoresis systems was evaluated. Data were collected from over 6000 population database allele peaks generated from 468 population database samples amplified with the AmpF/STR Profiler Plus (PP) kit and electrophoresed on ABD 377 DNA sequencers. Precision was measured by way of standard deviations and was shown to be essentially the same whether using fixed or floating bin genotyping. However, the allelic ladders proved more sensitive to electrophoretic variations than the database samples, which on occasion caused some floating bins of D18S51 to shift. This observation prompted the investigation of polyacrylamide gel formulations in order to stabilize allelic ladder migration. The results demonstrate that, although alleles contained in allelic ladders and questioned samples run on the same gel should migrate in an identical manner, this premise needs to be verified for any given electrophoresis platform and gel formulation. We show that the compilation of base-calling data is a very informative and useful tool for assessing the performance stability of dynamic gel electrophoresis systems, stability on which genotyping result quality depends. PMID:15004837

  15. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which together reflect a result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
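
    The precision and recall of a query result against a reference ("clean") answer set can be sketched as follows (a generic illustration; the names are assumptions, not the paper's framework):

```python
def precision_recall(returned, relevant):
    """Precision and recall of a query result against a reference set:
    precision = |returned & relevant| / |returned|,
    recall    = |returned & relevant| / |relevant|."""
    returned, relevant = set(returned), set(relevant)
    hits = len(returned & relevant)
    precision = hits / len(returned) if returned else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Query returned {a, b, c, d}; reference answer is {b, c, e}:
# precision = 2/4 = 0.5, recall = 2/3
p, r = precision_recall({"a", "b", "c", "d"}, {"b", "c", "e"})
```

A query over a mostly clean part of a low-accuracy data set would score high on both measures, which is the "relative accuracy" intuition the abstract describes.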

  16. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which together reflect a result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  17. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices installed on the floor and ceiling of the treatment room. This system determines the positioning error, in the form of three translations and three rotations, by registering two X-ray images acquired online with the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. The phantom was set up on the treatment table with correct position and various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-Board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences between the results of the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09mm, 0.03±0.25mm, and 0.04±0.31mm, respectively. The maximum difference was only 0.51mm in any direction, and the angular discrepancy between the two systems was 0.3±0.5°. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal, and there is general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.

  18. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has a limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven non-parametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and an area under the ROC curve of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with a high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with an acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). 
The remaining classifiers showed
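
    The reported pattern of metrics can be reproduced from confusion-matrix counts (a generic sketch; the example counts are invented to mirror the high-specificity/low-sensitivity case described above):

```python
def binary_metrics(tp, fp, tn, fn):
    """Overall accuracy, sensitivity (true-positive rate) and
    specificity (true-negative rate) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# A classifier that misses most positives but never raises a false
# alarm: sensitivity = 0.3, specificity = 1.0, accuracy = 0.65
acc, sens, spec = binary_metrics(tp=30, fp=0, tn=100, fn=70)
```

This is why overall accuracy alone is a poor summary for screening tasks: the example classifier looks acceptable on accuracy while missing 70% of true cases.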

  19. Contribution of apamin-sensitive SK channels to the firing precision but not to the slow afterhyperpolarization and spike frequency adaptation in snail neurons.

    PubMed

    Vatanparast, Jafar; Janahmadi, Mahyar

    2009-02-19

    Apamin-sensitive small conductance Ca(2+)-dependent K(+)(SK) channels are generally accepted as responsible for the medium afterhyperpolarization (mAHP) after single or train of action potentials. Here, we examined the functional involvement of these channels in the firing precision, post train AHP and spike frequency adaptation (SFA) in neurons of snail Caucasotachea atrolabiata. Apamin, a selective SK channel antagonist, reduced the duration of single-spike AHP and disrupted the spontaneous rhythmic activity. High frequency trains of evoked action potentials showed a time-dependent decrease in the action potential discharge rate (spike frequency adaptation) and followed by a prominent post stimulus inhibitory period (PSIP) as a marker of slow AHP (sAHP). Neither sAHP nor SFA was attenuated by apamin, suggesting that apamin-sensitive SK channels can strongly affect the rhythmicity, but are probably not involved in the SFA and sAHP. Nifedipine, antagonist of L-type Ca(2+) channels, decreased the firing frequency and neuronal rhythmicity. When PSIP was normalized to the background interspike interval, a suppressing effect of nifedipine on PSIP was also observed. Intracellular iontophoretic injection of BAPTA, a potent Ca(2+) chelator, dramatically suppressed PSIP that confirms the intracellular Ca(2+) dependence of the sAHP, but had no discernable effect on the SFA. During train-evoked activity a reduction in the action potential overshoot and maximum depolarization rate was also observed, along with a decrease in the firing frequency, while the action potential threshold increased, which indicated that Na(+) channels, rather than Ca(2+)-dependent K(+) channels, are involved in the SFA. PMID:19100724

  20. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
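
    The random-subsampling design can be sketched as follows (a hedged illustration; the function name and seed handling are assumptions, not the AFINCH implementation):

```python
import random

def censored_subsets(gages, censor_fraction, n_subsets, seed=0):
    """Generate random subsets of a streamflow-gage list with roughly
    `censor_fraction` of the gages removed, mimicking the study design
    of 30 random subsets per censoring level."""
    rng = random.Random(seed)
    keep = max(1, round(len(gages) * (1.0 - censor_fraction)))
    return [rng.sample(gages, keep) for _ in range(n_subsets)]

# 20% censoring of a 75-gage network, 30 subsets of 60 gages each:
subsets = censored_subsets(list(range(75)), censor_fraction=0.20,
                           n_subsets=30)
```

Re-estimating streamflow at the same ungaged flow lines across the 30 subsets then yields an empirical distribution of estimates whose spread measures the precision lost at each censoring level.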

  1. Online image-guided intensity-modulated radiotherapy for prostate cancer: How much improvement can we expect? A theoretical assessment of clinical benefits and potential dose escalation by improving precision and accuracy of radiation delivery

    SciTech Connect

    Ghilezan, Michel; Yan, Di; Liang, Jian; Jaffray, David; Wong, John; Martinez, Alvaro

    2004-12-01

    Purpose: To quantify the theoretical benefit, in terms of improvement in precision and accuracy of treatment delivery and in dose increase, of using online image-guided intensity-modulated radiotherapy (IG-IMRT) performed with onboard cone-beam computed tomography (CT), in an ideal setting of no intrafraction motion/deformation, in the treatment of prostate cancer. Methods and materials: Twenty-two prostate cancer patients treated with conventional radiotherapy underwent multiple serial CT scans (median 18 scans per patient) during their treatment. We assumed that these data sets were equivalent to image sets obtainable by an onboard cone-beam CT. Each patient treatment was simulated with conventional IMRT and online IG-IMRT separately. The conventional IMRT plan was generated on the basis of pretreatment CT, with a clinical target volume to planning target volume (CTV-to-PTV) margin of 1 cm, and the online IG-IMRT plan was created before each treatment fraction on the basis of the CT scan of the day, without CTV-to-PTV margin. The inverse planning process was similar for both conventional IMRT and online IG-IMRT. Treatment dose for each organ of interest was quantified, including patient daily setup error and internal organ motion/deformation. We used generalized equivalent uniform dose (EUD) to compare the two approaches. The generalized EUD (percentage) of each organ of interest was scaled relative to the prescription dose at treatment isocenter for evaluation and comparison. On the basis of bladder wall and rectal wall EUD, a dose-escalation coefficient was calculated, representing the potential increment of the treatment dose achievable with online IG-IMRT as compared with conventional IMRT. Results: With respect to radiosensitive tumor, the average EUD for the target (prostate plus seminal vesicles) was 96.8% for conventional IMRT and 98.9% for online IG-IMRT, with standard deviations (SDs) of 5.6% and 0.7%, respectively (p < 0.0001). The average EUDs of
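
    The generalized EUD used for this comparison is conventionally defined as gEUD = (Σ_i v_i D_i^a)^(1/a). A minimal sketch (the function name is an assumption; the exponent a is an organ-specific parameter):

```python
def generalized_eud(doses, volumes, a):
    """Generalized equivalent uniform dose:
        gEUD = (sum_i v_i * D_i**a) ** (1/a)
    with v_i fractional volumes summing to 1. Large positive `a`
    approaches the maximum dose (serial organs such as rectal wall);
    a = 1 gives the mean dose (parallel organs)."""
    assert abs(sum(volumes) - 1.0) < 1e-9, "volumes must sum to 1"
    return sum(v * d**a for v, d in zip(volumes, doses)) ** (1.0 / a)

# With a = 1, gEUD reduces to the mean dose:
eud = generalized_eud([60.0, 70.0], [0.5, 0.5], a=1.0)  # 65.0
```

Scaling each organ's gEUD to the prescription dose, as in the study, gives the percentage EUDs that the dose-escalation coefficient is derived from.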

  2. Ultra-precision: enabling our future.

    PubMed

    Shore, Paul; Morantz, Paul

    2012-08-28

    This paper provides a perspective on the development of ultra-precision technologies: What drove their evolution and what do they now promise for the future as we face the consequences of consumption of the Earth's finite resources? Improved application of measurement is introduced as a major enabler of mass production, and its resultant impact on wealth generation is considered. This paper identifies the ambitions of the defence, automotive and microelectronics sectors as important drivers of improved manufacturing accuracy capability and ever smaller feature creation. It then describes how science fields such as astronomy have presented significant precision engineering challenges, illustrating how these fields of science have achieved unprecedented levels of accuracy, sensitivity and sheer scale. Notwithstanding their importance to science understanding, many science-driven ultra-precision technologies became key enablers for wealth generation and other well-being issues. Specific ultra-precision machine tools important to major astronomy programmes are discussed, as well as the way in which subsequently evolved machine tools made at the beginning of the twenty-first century, now provide much wider benefits. PMID:22802499

  3. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID

  4. Precision digital control systems

    NASA Astrophysics Data System (ADS)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  5. Chemical Sensors: Precisely Controlled Ultrathin Conjugated Polymer Films for Large Area Transparent Transistors and Highly Sensitive Chemical Sensors (Adv. Mater. 14/2016).

    PubMed

    Khim, Dongyoon; Ryu, Gi-Seong; Park, Won-Tae; Kim, Hyunchul; Lee, Myungwon; Noh, Yong-Young

    2016-04-01

    A precise control over the film thickness is a vital requirement for achievement of high performance in thin-film electronic devices. On page 2752, Y.-Y. Noh and co-workers develop an effective way to deposit a large-area and uniform ultrathin polymer film with a molecular-level precision via a simple wire-wound bar-coating method for high-performance organic transistors and gas sensors. PMID:27062168

  6. Precision Nova operations

    NASA Astrophysics Data System (ADS)

    Ehrlich, Robert B.; Miller, John L.; Saunders, Rodney L.; Thompson, Calvin E.; Weiland, Timothy L.; Laumann, Curt W.

    1995-12-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the 'foot' and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 micrometer rms. We have also added a 'fail-safe chirp' system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  7. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the 'foot' and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 μm rms. We have also added a 'fail-safe chirp' system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  8. Ultra-rare Disease and Genomics-Driven Precision Medicine

    PubMed Central

    Lee, Sangmoon

    2016-01-01

    Since next-generation sequencing (NGS) technique was adopted into clinical practices, revolutionary advances in diagnosing rare genetic diseases have been achieved through translating genomic medicine into precision or personalized management. Indeed, several successful cases of molecular diagnosis and treatment with personalized or targeted therapies of rare genetic diseases have been reported. Still, there are several obstacles to be overcome for wider application of NGS-based precision medicine, including high sequencing cost, incomplete variant sensitivity and accuracy, practical complexities, and a shortage of available treatment options. PMID:27445646

  9. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 × 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 × 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics of this system and the system sensitivity. Dynamic performance was obtained using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft, which was tracked out to a maximum range of 15 km, confirming the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite, including a +7.5 magnitude star; the signal-to-noise ratio in this experiment indicates that it should be possible to track a +12.5 magnitude star. PMID:20048888

  10. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. The system provides environmental radiation measurements with an accuracy and stability on the order of 1 nGy/h in an hour, corresponding to roughly 1% of the natural cosmic background at sea level. An advanced electronic front-end was designed and produced for use with industry-standard High Pressure Ionization Chamber detector hardware, including a new, highly sensitive readout circuit that measures charge from the virtually suspended ion-collecting electrode of the ionization chamber. A new signal-processing technique and dedicated data acquisition were tested together with the new readout. The system collects data on a remote Linux workstation connected to the detectors over a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit, 192 kHz analog-to-digital converter. The major features of the design are extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to changes in the radiation level. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.
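
    The charge-integrating principle described above can be illustrated with a toy calculation: a continuously sampled ion-chamber current is summed over time to yield accumulated charge. The sampling rate mirrors the 192 kHz figure in the abstract; the current value and everything else here is hypothetical:

    ```python
    # Toy sketch of charge integration from a continuously sampled current
    # signal, loosely mirroring the 192 kHz sampling described above. The
    # 1 pA current is a hypothetical stand-in for an ion-chamber signal.

    FS_HZ = 192_000          # ADC sampling rate
    DT = 1.0 / FS_HZ         # sample period, seconds

    def integrate_charge(current_samples_amps):
        """Accumulate charge (coulombs) by summing current samples over time."""
        return sum(current_samples_amps) * DT

    # One second of a constant 1 pA current:
    samples = [1e-12] * FS_HZ
    q = integrate_charge(samples)
    print(q)  # ~1e-12 C collected in one second
    ```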

  11. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to mapping, precise regional land-coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors require high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives; ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy with SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  12. Sensitivity and accuracy of high-throughput metabarcoding methods used to describe aquatic communities for early detection of invasive fish species

    EPA Science Inventory

    For early detection biomonitoring of aquatic invasive species, sensitivity to rare individuals and accurate, high-resolution taxonomic classification are critical to minimize Type I and II detection errors. Given the great expense and effort associated with morphological identifi...

  13. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration

    2016-03-01

    Detection of gravitational waves involves extracting extremely weak signals from noisy data, and detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence come from solving Einstein's equations numerically, without approximation; however, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms, but the work of Nitz et al. (2013) demonstrated that these models disagree with one another. We present a careful follow-up study of the accuracies of different waveform families for spinning neutron star-black hole binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter bias. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.

  14. Making Precise Antenna Reflectors For Millimeter Wavelengths

    NASA Technical Reports Server (NTRS)

    Sharp, G. Richard; Wanhainen, Joyce S.; Ketelsen, Dean A.

    1994-01-01

    In an improved method of fabricating precise, lightweight antenna reflectors for millimeter wavelengths, the required precise contours of the reflecting surfaces are obtained by computer-numerically-controlled machining of surface layers bonded to lightweight, rigid structures. The achievable precision is greater than that of an older, more expensive fabrication method involving multiple steps of low- and high-temperature molding, in which some accuracy is lost at each step.

  15. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and maintains its adjustment even under rough handling.

  16. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and maintains its adjustment even under rough handling.

  17. Do We Know Who Will Drop out?: A Review of the Predictors of Dropping out of High School--Precision, Sensitivity, and Specificity

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Sprott, Ryan; Taff, Sherry A.

    2013-01-01

    The purpose of this study is to review the literature on the most accurate indicators of students at risk of dropping out of high school. We used Relative Operating Characteristic (ROC) analysis to compare the sensitivity and specificity of 110 dropout flags across 36 studies. Our results indicate that 1) ROC analysis provides a means to compare…
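
    The sensitivity/specificity comparison that underlies the ROC analysis described above can be sketched for a single dropout flag. The flag and outcomes below are invented for illustration, not data from the reviewed studies:

    ```python
    # Sketch of the sensitivity/specificity computation behind an ROC
    # comparison of dropout flags. The flag ("failed a core course") and
    # outcome data are hypothetical.

    def confusion(flagged, dropped_out):
        tp = sum(1 for f, d in zip(flagged, dropped_out) if f and d)
        fp = sum(1 for f, d in zip(flagged, dropped_out) if f and not d)
        fn = sum(1 for f, d in zip(flagged, dropped_out) if not f and d)
        tn = sum(1 for f, d in zip(flagged, dropped_out) if not f and not d)
        return tp, fp, fn, tn

    def sensitivity_specificity(flagged, dropped_out):
        tp, fp, fn, tn = confusion(flagged, dropped_out)
        return tp / (tp + fn), tn / (tn + fp)

    flag    = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]   # hypothetical flag values
    dropout = [1, 1, 0, 0, 0, 0, 1, 1, 0, 0]   # hypothetical outcomes
    sens, spec = sensitivity_specificity(flag, dropout)
    print(sens, spec)   # 0.75 0.8333...
    ```

    Plotting each flag's (1 − specificity, sensitivity) pair in ROC space is what allows the cross-study comparison the abstract describes.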

  18. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on research in the area of precise ephemerides for GPS satellites, the following observations can be made on the status of, and future work needed for, orbit accuracy. Several aspects bear on the determination of precise orbits, including force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to force modeling are addressed here.

  19. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves are commonly generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, an SRM can also be run as an unknown (i.e., not included as a data point in the calibration); the experimentally derived composition of the SRM can then be compared to the certified (or otherwise accepted) value, giving a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. Flux fusion ensures complete digestion of a variety of sample types, is quick, and involves little use of hazardous acids; however, it is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. Acid digestion (using a cocktail of concentrated nitric, hydrochloric, and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks, but it results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of more than two years (30+ analyses by each technique). We show that both digestion
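
    The accuracy and external-reproducibility bookkeeping described above can be sketched for one element. The accepted value and replicate measurements below are illustrative placeholders, not the study's data:

    ```python
    # Sketch of accuracy (% deviation of the mean from the accepted value)
    # and external reproducibility (relative standard deviation across
    # sessions) for an SRM run as an unknown. All numbers are illustrative.

    from statistics import mean, stdev

    accepted_tio2_wt_pct = 2.73                      # illustrative accepted value
    runs = [2.71, 2.75, 2.74, 2.70, 2.76, 2.72]      # SRM analyzed as an unknown

    accuracy_pct = 100 * (mean(runs) - accepted_tio2_wt_pct) / accepted_tio2_wt_pct
    rsd_pct = 100 * stdev(runs) / mean(runs)         # external reproducibility

    print(f"accuracy: {accuracy_pct:+.2f}%  reproducibility (RSD): {rsd_pct:.2f}%")
    ```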

  20. Facile realization of efficient blocking from ZnO/TiO2 mismatch interface in dye-sensitized solar cells and precise microscopic modeling adapted by circuit analysis

    NASA Astrophysics Data System (ADS)

    Ameri, Mohsen; Samavat, Feridoun; Mohajerani, Ezeddin; Fathollahi, Mohammad-Reza

    2016-06-01

    In the present research, the effect of ZnO-based blocking layers on the operational features of TiO2-based dye-sensitized solar cells is investigated. A facile solution-based coating method is applied to prepare an interfacial, highly transparent ZnO compact blocking layer (CBL) that enhances the efficiency of dye-sensitized solar cells. Different precursor molar concentrations were tested to find the optimum concentration. Optical and electrical measurements were carried out to confirm the operation of the CBLs, and morphological characterization by scanning electron microscopy (SEM) and atomic force microscopy (AFM) was performed to investigate the structure of the compact layers. We have also developed a set of modeling procedures to extract the effective electrical parameters, including the parasitic resistances and charge-carrier profiles, to investigate the effect of CBLs on dye-sensitized solar cell (DSSC) performance. The adopted modeling approach establishes a versatile framework for the diagnosis of DSSCs and facilitates the exploration of critical factors influencing device performance.

  1. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of the beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis, contributing less than 5 MeV to the error in the measurement of the SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely aligned fiducial wire arrays overlaid on phosphorescent screens has achieved this accuracy. Detectors of synchrotron radiation that use the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are also being developed. 4 refs., 5 figs., 1 tab.

  2. Precise Measurement for Manufacturing

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A metrology instrument known as PhaseCam supports a wide range of applications, from testing large optics to controlling factory production processes. This dynamic interferometer system enables precise measurement of three-dimensional surfaces in the manufacturing industry, delivering speed and high-resolution accuracy in even the most challenging environments. Compact and reliable, PhaseCam enables users to make interferometric measurements right on the factory floor. The system can be configured for many different applications, including mirror phasing, vacuum/cryogenic testing, motion/modal analysis, and flow visualization.

  3. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  4. Assessing the beginning to end-of-mission sensitivity change of the PREcision MOnitor Sensor total solar irradiance radiometer (PREMOS/PICARD)

    NASA Astrophysics Data System (ADS)

    Ball, William T.; Schmutz, Werner; Fehlmann, André; Finsterle, Wolfgang; Walter, Benjamin

    2016-08-01

    The switching of the total solar irradiance (TSI) backup radiometer (PREMOS-B) to a primary role for 2 weeks at the end of the PICARD mission provides a unique opportunity to test the fundamental hypothesis of radiometer experiments in space, which is that the sensitivity change of instruments due to the space environment is identical for the same instrument type as a function of solar-exposure time of the instruments. We verify this hypothesis for the PREMOS TSI radiometers within the PREMOS experiment on the PICARD mission. We confirm that the sensitivity change of the backup instrument, PREMOS-B, is similar to that of the identically-constructed primary radiometer, PREMOS-A. The extended exposure of the backup instrument at the end of the mission allows for the assessment, with an uncertainty estimate, of the sensitivity change of the primary radiometer from the beginning of the PICARD mission compared to the end, and of the degradation of the backup over the mission. We correct six sets of PREMOS-B observations connecting October 2011 with February 2014, using six ratios from simultaneous PREMOS-A and PREMOS-B exposures during the first days of PREMOS-A operation in 2010. These ratios are then used, without indirect estimates or assumptions, to evaluate the stability of SORCE/TIM and SOHO/VIRGO TSI measurements, which have both operated for more than a decade and now show different trends over the time span of the PICARD mission, namely from 2010 to 2014. We find that by February 2014 relative to October 2011 PREMOS-B supports the SORCE/TIM TSI time evolution, which in May 2014 relative to October 2011 is ~0.11 W m-2, or ~84 ppm, higher than SOHO/VIRGO. Such a divergence between SORCE/TIM and SOHO/VIRGO over this period is a significant fraction of the estimated decline of 0.2 W m-2 between the solar minima of 1996 and 2008, and questions the reliability of that estimated trend. 
Extrapolating the uncertainty indicated by the disagreement of SORCE/TIM and PREMOS

  5. High precision optical surface metrology using deflectometry

    NASA Astrophysics Data System (ADS)

    Huang, Run

    The Software Configurable Optical Test System (SCOTS), developed at the University of Arizona, is a highly efficient optical metrology technique based on the principle of deflectometry; it can achieve accuracy comparable to interferometry but with low-cost hardware. In a SCOTS test, an LCD display generates a structured light pattern that illuminates the test optic, and the reflected light is captured by a digital camera. The surface slope of the test optic is determined by triangulation among the display pixels, the test optic, and the camera, and the surface shape is obtained by integration of the slopes. Compared to interferometry, which has long served as an accurate non-contact optical metrology technology, SCOTS overcomes limitations of dynamic range and sensitivity to the environment, achieving high-dynamic-range slope measurement without requiring null optics. In this dissertation, the sensitivity and performance of the test system are analyzed comprehensively. Sophisticated calibrations of the system components have been investigated and implemented in different metrology projects to push this technology to higher accuracy, including the low-order terms. A compact on-axis SCOTS system lowered the sensitivity of the test geometry in the metrology of the 1-meter, highly aspheric secondary mirror of the Large Binocular Telescope. Sub-nm accuracy was achieved in testing a high-precision elliptical X-ray mirror by using reference calibration. A well-calibrated SCOTS was successfully constructed and is, at the time of writing this dissertation, being used to provide surface metrology feedback for the fabrication of the primary mirror of the Daniel K. Inouye Solar Telescope, a 4-meter off-axis parabola with more than 8 mm of aspheric departure.
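
    The slope-to-shape step mentioned above (surface shape obtained by integrating measured slopes) can be illustrated in one dimension. The "measured" slopes here are synthetic, generated from a parabola of known curvature; real SCOTS data are two-dimensional slope maps:

    ```python
    # Minimal 1-D illustration of deflectometry's integration step: surface
    # slopes are integrated (cumulative trapezoid) to recover surface height.
    # Slopes are synthetic, from z = x^2 / (2R) with an assumed R.

    N, width = 101, 0.1                      # samples across a 0.1 m part
    dx = width / (N - 1)
    xs = [i * dx for i in range(N)]
    R = 2.0                                  # assumed radius of curvature, m
    slopes = [x / R for x in xs]             # exact slope dz/dx = x/R

    # Cumulative trapezoidal integration of slope -> height.
    z = [0.0]
    for i in range(1, N):
        z.append(z[-1] + 0.5 * (slopes[i - 1] + slopes[i]) * dx)

    exact_sag = width**2 / (2 * R)
    print(z[-1], exact_sag)   # reconstructed sag agrees with the analytic value
    ```

    The trapezoid rule is exact for a linear slope profile, which is why the reconstruction matches; real data additionally require 2-D integration and careful calibration of the display/camera geometry, as the abstract emphasizes.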

  6. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The emphasis of this grant was focused on precision ephemerides for the Global Positioning System (GPS) satellites for geodynamics applications. During the period of this grant, major activities were in the areas of thermal force modeling, numerical integration accuracy improvement for eclipsing satellites, analysis of GIG '91 campaign data, and the Southwest Pacific campaign data analysis.

  7. Precision orbit computations for Starlette

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Williamson, R. G.

    1976-01-01

    The Starlette satellite, launched in February 1975 by the French Centre National d'Etudes Spatiales, was designed to minimize the effects of nongravitational forces and to obtain the highest possible accuracy for laser range measurements. Analyses of the first four months of global laser tracking data confirmed the stability of the orbit and the precision to which the satellite's position is established.

  8. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1982-01-01

    The kinematic errors in spiral-bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, whereas for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral-bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion, so precision of mounting and manufacture is most crucial for the gear and less so for the pinion.

  9. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1983-01-01

    The kinematic errors in spiral-bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I), and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, whereas for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral-bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion, so precision of mounting and manufacture is most crucial for the gear and less so for the pinion. Previously announced in STAR as N82-30552

  10. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, -0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ -0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
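
    The mismatch quoted in results like "(i)" above is one minus a normalized overlap between two waveforms. The toy version below uses a flat (white-noise) inner product and omits the maximization over time and phase shifts that real matched-filter matches include; the sinusoids are stand-ins for two waveform models of the same binary:

    ```python
    # Sketch of an un-maximized, white-noise overlap between two waveforms:
    # mismatch = 1 - <h1,h2> / sqrt(<h1,h1><h2,h2>). Real analyses use
    # noise-weighted inner products maximized over time and phase.

    import math

    def overlap(h1, h2):
        dot = sum(a * b for a, b in zip(h1, h2))
        n1 = math.sqrt(sum(a * a for a in h1))
        n2 = math.sqrt(sum(b * b for b in h2))
        return dot / (n1 * n2)

    # Two sinusoids with a small relative dephasing:
    ts = [i / 4096 for i in range(4096)]
    h1 = [math.sin(2 * math.pi * 30 * t) for t in ts]
    h2 = [math.sin(2 * math.pi * 30 * t + 0.1) for t in ts]

    print(1 - overlap(h1, h2))   # small mismatch from the 0.1 rad dephasing
    ```

    For an integer number of cycles the overlap reduces to cos(0.1) ≈ 0.995, i.e. a mismatch of about 0.5%, comparable in scale to the "few percent" model disagreements the abstract reports.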

  11. Preschoolers Monitor the Relative Accuracy of Informants

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Koenig, Melissa; Harris, Paul L.

    2007-01-01

    In 2 studies, the sensitivity of 3- and 4-year-olds to the previous accuracy of informants was assessed. Children viewed films in which 2 informants labeled familiar objects with differential accuracy (across the 2 experiments, children were exposed to the following rates of accuracy by the more and less accurate informants, respectively: 100% vs.…

  12. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  13. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ/μp, the lepton mass ratio mμ/me, and the proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  14. Meditation Experience Predicts Introspective Accuracy

    PubMed Central

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790
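
    The comparison underlying "introspective accuracy" above is a rank correlation between subjective reports and objective benchmarks across body regions. A minimal sketch with invented data (six regions, no tied ranks) follows; the study itself used 20 regions and published somatosensory benchmarks:

    ```python
    # Sketch of introspective-accuracy scoring: Spearman rank correlation
    # between subjective sensitivity ratings and an objective benchmark.
    # All data values are invented for illustration.

    def rank(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0] * len(xs)
        for pos, i in enumerate(order):
            r[i] = pos + 1
        return r

    def spearman(xs, ys):
        # No-ties formula: rho = 1 - 6 * sum(d^2) / (n (n^2 - 1))
        rx, ry = rank(xs), rank(ys)
        n = len(xs)
        d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
        return 1 - 6 * d2 / (n * (n * n - 1))

    subjective = [7, 9, 4, 8, 2, 5]                 # rated sensitivity, 6 regions
    objective  = [1.2, 1.9, 0.8, 1.6, 0.5, 1.4]     # e.g. cortical area (cm^2)

    print(spearman(subjective, objective))   # high rho = accurate introspection
    ```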

  15. Towards high accuracy calibration of electron backscatter diffraction systems.

    PubMed

    Mingard, Ken; Day, Austin; Maurice, Claire; Quested, Peter

    2011-04-01

    For precise orientation and strain measurements, advanced Electron Backscatter Diffraction (EBSD) techniques require both accurate calibration and reproducible measurement of the system geometry; in many cases the pattern centre (PC) needs to be determined to sub-pixel accuracy. The mechanical insertion and retraction, through the Scanning Electron Microscope (SEM) chamber wall, of the electron-sensitive part of modern EBSD detectors also causes alignment and positioning problems and requires frequent monitoring of the PC. Optical alignment and lens distortion within the scintillator, lens, and charge-coupled device (CCD) camera combination of an EBSD detector need accurate measurement for each individual EBSD system. This paper highlights and quantifies these issues and demonstrates determination of the pattern centre using a novel shadow-casting technique with a precision of ∼10 μm, or ∼1/3 of a CCD pixel. PMID:21396526

  16. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate quantum-electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser-spectroscopic measurements in atomic helium. In this paper we review measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and then to determine the fine-structure constant by comparison with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, the nuclear structure of the He isotopes, and in particular the nuclear charge radius, is determined using hyperfine-structure and isotope-shift measurements.

  17. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over the temperature range 85 to 95 K. At the boiling point of liquid argon (87.3 K), an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. The high-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
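
    The quoted figures are enough to reconstruct an illustrative Clausius-Clapeyron curve: with a constant latent heat, P(T) = P0 · exp(-(L/R)(1/T - 1/T0)). This sketch anchors the curve at the stated measurement (0.0403 Torr at 87.3 K) and converts the quoted 82.7 cal/g using M = 48 g/mol for O3; it is a reconstruction from the abstract's numbers, not the paper's fitted equation:

    ```python
    # Clausius-Clapeyron sketch anchored at the quoted measurement.
    # L = 82.7 cal/g is converted to J/mol assuming M(O3) = 48 g/mol.

    import math

    R = 8.314                    # gas constant, J/(mol K)
    M = 48.0                     # molar mass of O3, g/mol
    L = 82.7 * 4.184 * M         # latent heat of vaporization, J/mol
    T0, P0 = 87.3, 0.0403        # anchor point: K, Torr

    def vapor_pressure_torr(T):
        """Vapor pressure assuming constant latent heat near the anchor."""
        return P0 * math.exp(-(L / R) * (1.0 / T - 1.0 / T0))

    print(vapor_pressure_torr(95.0))  # evaluated within the 85-95 K range
    ```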

  18. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
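
    The reduced-precision-sweep idea has a classical analogue in mixed-precision iterative refinement, sketched below for a tiny linear system: the correction solves run in (emulated) single precision while residuals are accumulated in double, yet the iterate converges to double-precision accuracy. This is an analogy to illustrate the principle, not the SDC scheme or the S3D implementation, and all numeric values are illustrative:

    ```python
    # Mixed-precision iterative refinement for a 2x2 system A x = b:
    # low-precision (float32-emulated) solves, double-precision residuals.
    # Analogy for "cheap sweeps in reduced precision, full final accuracy".

    import struct

    def f32(x):
        """Round to IEEE single precision to emulate a reduced-precision unit."""
        return struct.unpack('f', struct.pack('f', x))[0]

    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]

    def solve2_lowprec(r):
        # Cramer's rule with every intermediate rounded to float32.
        det = f32(f32(A[0][0] * A[1][1]) - f32(A[0][1] * A[1][0]))
        x0 = f32(f32(f32(r[0] * A[1][1]) - f32(A[0][1] * r[1])) / det)
        x1 = f32(f32(f32(A[0][0] * r[1]) - f32(r[0] * A[1][0])) / det)
        return [x0, x1]

    x = solve2_lowprec(b)                    # initial low-precision solve
    for _ in range(3):                       # refinement sweeps
        r = [b[i] - (A[i][0] * x[0] + A[i][1] * x[1]) for i in range(2)]  # double
        d = solve2_lowprec(r)                # correction in reduced precision
        x = [x[i] + d[i] for i in range(2)]

    exact = [1.0 / 11.0, 7.0 / 11.0]         # analytic solution
    print(max(abs(x[i] - exact[i]) for i in range(2)))  # far below single precision
    ```

    As in the SDC setting, the cheap low-precision work does most of the iteration while the high-precision residual evaluation preserves the accuracy of the final answer.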

  19. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
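A cup anemometer's transfer function is a linear map from rotation frequency to wind speed, so a change of default coefficients shifts every reported speed. The coefficients below are invented placeholders, not the actual Maximum No. 40 calibrations; the sketch only shows how such a shift is computed.

```python
# Hypothetical sketch of how two default transfer functions diverge in
# reported speed. Coefficients are placeholders, NOT Maximum No. 40 values.
def wind_speed(freq_hz, slope, offset):
    """Wind speed (m/s) from cup rotation frequency (Hz), v = slope*f + offset."""
    return slope * freq_hz + offset

f = 20.0                                  # example rotation frequency (Hz)
v_old = wind_speed(f, 0.765, 0.35)        # hypothetical old default
v_new = wind_speed(f, 0.789, 0.29)        # hypothetical new default
pct_diff = 100.0 * (v_new - v_old) / v_old
```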

  20. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. ^233U-^236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with the interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing ^235U-^207Pb with ^238U-^206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the ^235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link between petrology, microtextures, microchemistry and the isotope record, but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times the self-assessed uncertainty. Systematic errors may have analytical and petrological causes. Metrological traceability is not yet implemented (substantial advances may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. 
Finally, sample size reduction is often associated with reduced clean-up time to increase the sample/blank ratio; this may be self-defeating, as "dry blanks" [4] represent neither the isotopic composition nor the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived of petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. The petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does; its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719

  1. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  2. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its limited ability to image the three-dimensional structure of the mantle in detail or to determine the strengths of anomalies with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. 
Our knowledge of the size, distribution and location of protoliths, of the metasomatism of magma sources, of the nature of the partial-melting and melt-extraction processes, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more when presenting interpretations so that the viability of the models can be assessed more reliably.

  3. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.

  4. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L^-1 SO2 or 0.3 μL L^-1 ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R^2 = 0.89-0.91) of the visual assessments against the grid assessments.
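The bias-adjustment step described above (regressing each reader's visual scores against the unbiased grid scores, then inverting the fitted line) can be sketched with synthetic data; the bias parameters below (slope 0.9, offset 5) are hypothetical.

```python
import numpy as np

# Synthetic sketch of per-reader bias correction via linear regression.
rng = np.random.default_rng(1)
grid = rng.uniform(0, 100, 40)                     # unbiased grid assessment (%)
reader = 5.0 + 0.9 * grid + rng.normal(0, 3, 40)   # biased visual assessment (%)

slope, intercept = np.polyfit(grid, reader, 1)     # per-reader bias line
adjusted = (reader - intercept) / slope            # bias-corrected scores

r2 = float(np.corrcoef(grid, reader)[0, 1] ** 2)   # fit quality
```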

  5. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong, G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on delivery accuracy at the meter fix. The 10-second-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.

  6. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6-month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  7. High precision anatomy for MEG☆

    PubMed Central

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  8. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended-mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  9. Precision Polarimetry for Cold Neutrons

    NASA Astrophysics Data System (ADS)

    Barron-Palos, Libertad; Bowman, J. David; Chupp, Timothy E.; Crawford, Christopher; Danagoulian, Areg; Gentile, Thomas R.; Jones, Gordon; Klein, Andreas; Penttila, Seppo I.; Salas-Bacci, Americo; Sharma, Monisha; Wilburn, W. Scott

    2007-10-01

    The abBA and PANDA experiments, currently under development, aim to measure the correlation coefficients of polarized free-neutron beta decay at the FnPB at the SNS. The polarization of the neutron beam, produced with a ^3He spin filter, must be known with high precision in order to achieve the accuracy goals of these experiments. In the NPDGamma experiment, where a ^3He spin filter was used, it was observed that backgrounds play an important role in the precision to which the polarization can be determined. An experiment that focuses on reducing background sources, in order to establish techniques and find the upper limit on polarization accuracy with these spin filters, is currently in progress at LANSCE. A description of the measurement and results will be presented.

  10. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  11. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and their influence on topographic elevation measurements. From a nominal operating altitude of 500 to 750 m above the ice surface, ATM elevation measurements were found to have a horizontal accuracy of 74 cm, horizontal precision of 14 cm, vertical accuracy of 6.6 cm, and vertical precision of 3 cm.

  12. Francis M. Pipkin Award Talk - Precision Measurement with Atom Interferometry

    NASA Astrophysics Data System (ADS)

    Müller, Holger

    2015-05-01

    Atom interferometers are relatives of Young's double-slit experiment that use matter waves. They leverage light-atom interactions to measure fundamental constants, test fundamental symmetries, sense weak fields such as gravity and the gravity gradient, search for elusive ``fifth forces,'' and potentially test properties of antimatter and detect gravitational waves. We will discuss large (multiphoton-) momentum transfer that can enhance the sensitivity and accuracy of atom interferometers several thousand fold. We will discuss measuring the fine structure constant to sub-part-per-billion precision and how it tests the standard model of particle physics. Finally, there has been interest in light bosons as candidates for dark matter and dark energy; atom interferometers have favorable sensitivity in searching for those fields. As a first step, we present our experiment ruling out chameleon fields and a broad class of other theories that would reproduce the observed dark energy density.

  13. Environment-Assisted Precision Measurement

    SciTech Connect

    Goldstein, G.; Maze, J. R.; Lukin, M. D.; Cappellaro, P.; Hodges, J. S.; Jiang, L.; Soerensen, A. S.

    2011-04-08

    We describe a method to enhance the sensitivity of precision measurements that takes advantage of the environment of a quantum sensor to amplify the response of the sensor to weak external perturbations. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas that are coupled strongly to the sensor qubit; it does not depend on the exact values of the coupling strengths and is resilient to many forms of decoherence. The method achieves nearly Heisenberg-limited precision measurement, using a novel class of entangled states. We discuss specific applications to improve clock sensitivity using trapped ions and magnetic sensing based on electronic spins in diamond.
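As a numerical aside on the scaling at stake (an illustration only, not a simulation of the protocol described above): averaging N independent probes improves precision as 1/sqrt(N), the standard quantum limit, while Heisenberg-limited schemes scale as 1/N, a sqrt(N) advantage.

```python
import numpy as np

# Illustrative scaling comparison: standard-quantum-limit precision falls
# as 1/sqrt(N), Heisenberg-limited precision as 1/N, so the relative
# advantage of the latter grows as sqrt(N).
N = np.array([1, 10, 100, 1000, 10000])
sql = 1.0 / np.sqrt(N)      # standard quantum limit
heisenberg = 1.0 / N        # Heisenberg limit
advantage = sql / heisenberg
```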

  14. Environment-assisted precision measurement.

    PubMed

    Goldstein, G; Cappellaro, P; Maze, J R; Hodges, J S; Jiang, L; Sørensen, A S; Lukin, M D

    2011-04-01

    We describe a method to enhance the sensitivity of precision measurements that takes advantage of the environment of a quantum sensor to amplify the response of the sensor to weak external perturbations. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas that are coupled strongly to the sensor qubit; it does not depend on the exact values of the coupling strengths and is resilient to many forms of decoherence. The method achieves nearly Heisenberg-limited precision measurement, using a novel class of entangled states. We discuss specific applications to improve clock sensitivity using trapped ions and magnetic sensing based on electronic spins in diamond. PMID:21561175

  15. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  16. GPS and Glonass Combined Static Precise Point Positioning (ppp)

    NASA Astrophysics Data System (ADS)

    Pandey, D.; Dwivedi, R.; Dikshit, O.; Singh, A. K.

    2016-06-01

    With the rapid development of multi-constellation Global Navigation Satellite Systems (GNSSs), satellite navigation is undergoing drastic changes. Presently, more than 70 satellites are already available, and nearly 120 more will become available in the coming years once all four systems, GPS, GLONASS, Galileo and BeiDou, reach complete constellations. The significant improvement in satellite visibility, spatial geometry, dilution of precision and accuracy motivates combining multi-GNSS observations for Precise Point Positioning (PPP), especially in constrained environments. Currently, PPP is usually performed by processing GPS observations alone. Static and kinematic PPP solutions based only on GPS observations are limited by satellite visibility, which is often insufficient in mountainous and open-pit mining areas. One of the easiest options available to enhance positioning reliability is to integrate GPS and GLONASS observations. This research investigates the efficacy of combining GPS and GLONASS observations for achieving a static PPP solution and its sensitivity to different processing methodologies. Two static PPP solutions, namely standalone GPS and combined GPS-GLONASS solutions, are compared. The datasets are processed using the open-source GNSS processing environment gLAB 2.2.7 as well as the magicGNSS software package. The results reveal that the addition of GLONASS observations improves the static positioning accuracy, including the three-dimensional positioning accuracy, in comparison with standalone GPS point positioning. It is also shown that adding the GLONASS constellation improves the total number of visible satellites by more than 60%, which improves the satellite geometry, represented by the Position Dilution of Precision (PDOP), by more than 30%.
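The PDOP improvement from added satellites follows directly from the least-squares geometry matrix. A minimal sketch, with hypothetical satellite directions rather than the paper's data:

```python
import numpy as np

def to_unit(az_el_deg):
    """Unit line-of-sight vectors (east, north, up) from (azimuth, elevation) in degrees."""
    out = []
    for az, el in az_el_deg:
        a, e = np.radians(az), np.radians(el)
        out.append([np.cos(e) * np.sin(a), np.cos(e) * np.cos(a), np.sin(e)])
    return np.array(out)

def pdop(unit_vectors):
    """PDOP from the geometry matrix H = [unit vectors | 1 (clock column)]."""
    H = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(H.T @ H)
    return float(np.sqrt(np.trace(Q[:3, :3])))  # root sum of position variances

# Hypothetical constellations: four satellites vs. the same four plus
# four more at low elevation, spread in azimuth.
few = [(0, 90), (0, 30), (120, 30), (240, 30)]
many = few + [(45, 20), (135, 20), (225, 20), (315, 20)]

pdop_few, pdop_many = pdop(to_unit(few)), pdop(to_unit(many))
```

More visible satellites with better sky coverage yield a smaller PDOP, mirroring the >30% improvement reported above.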

  17. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on the measurement principle, the CCD detector used, the construction of the sensor head, and the operation of the sensor electronics. Tests on a development model show that the main aim, a 0.01-arcsec rms stability over a 10-minute period, is closely approached. Remaining problem areas are the sensor's sensitivity to illumination-level variations, the shielding of the detector, and the test and calibration equipment.

  18. Construction concepts for precision segmented reflectors

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Withnell, Peter R.

    1993-01-01

    Three construction concepts for deployable precision segmented reflectors are presented. The designs produce reflectors with very high surface accuracies and diameters three to five times the width of the launch vehicle shroud. Of primary importance is the reliability of both the deployment process and the reflector operation. This paper is conceptual in nature, and uses these criteria to present beneficial design concepts for deployable precision segmented reflectors.

  19. High-precision arithmetic in mathematical physics

    DOE PAGESBeta

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights the facilities required to support future computation, in light of emerging developments in computer architecture.

  20. The Magsat precision vector magnetometer

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1980-01-01

    This paper examines the Magsat precision vector magnetometer which is designed to measure projections of the ambient field in three orthogonal directions. The system contains a highly stable and linear triaxial fluxgate magnetometer with a dynamic range of ±2000 nT (1 nT = 10^-9 weber per sq m). The magnetometer electronics, analog-to-digital converter, and digitally controlled current sources are implemented with redundant designs to avoid a loss of data in case of failures. Measurements are carried out with an accuracy of ±1 part in 64,000 in magnitude and 5 arcsec in orientation (1 arcsec = 0.00028 deg).

  1. Precise Countersinking Tool

    NASA Technical Reports Server (NTRS)

    Jenkins, Eric S.; Smith, William N.

    1992-01-01

    Tool countersinks holes precisely with only a portable drill; does not require a costly machine tool. Replaceable pilot stub aligns the axis of the tool with the centerline of the hole, ensuring a precise cut even with an imprecise drill. Designed for relatively low cutting speeds.

  2. Estimating Software-Development Costs With Greater Accuracy

    NASA Technical Reports Server (NTRS)

    Baker, Dan; Hihn, Jairus; Lum, Karen

    2008-01-01

    COCOMOST is a computer program for use in estimating software development costs. The goal in the development of COCOMOST was to increase estimation accuracy in three ways: (1) develop a set of sensitivity software tools that return not only estimates of costs but also the estimation error; (2) using the sensitivity software tools, precisely define the quantities of data needed to adequately tune cost estimation models; and (3) build a repository of software-cost-estimation information that NASA managers can retrieve to improve the estimates of costs of developing software for their projects. COCOMOST implements a methodology, called '2cee', in which a unique combination of well-known pre-existing data-mining and software-development-effort-estimation techniques is used to increase the accuracy of estimates. COCOMOST utilizes multiple models to analyze historical data pertaining to software-development projects and performs an exhaustive data-mining search over the space of model parameters to improve the performance of effort-estimation models. Thus, it is possible to both calibrate and generate estimates at the same time. COCOMOST is written in the C language for execution in the UNIX operating system.
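COCOMOST's calibrated internals are not spelled out in the abstract; as background, the basic COCOMO-81 model that this family of estimators builds on computes effort in person-months as E = a * KLOC^b, with mode-dependent coefficients:

```python
# Basic COCOMO-81 effort model (background sketch only; COCOMOST's actual
# calibrated models are not reproduced here).
COEFFS = {
    "organic": (2.4, 1.05),        # small teams, familiar problem domains
    "semi-detached": (3.0, 1.12),  # intermediate projects
    "embedded": (3.6, 1.20),       # tight hardware/operational constraints
}

def cocomo_effort(kloc, mode="organic"):
    """Estimated effort in person-months for `kloc` thousand lines of code."""
    a, b = COEFFS[mode]
    return a * kloc ** b

effort = cocomo_effort(32, "organic")  # roughly 91 person-months
```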

  3. "Precision" drug development?

    PubMed

    Woodcock, J

    2016-02-01

    The concept of precision medicine has entered broad public consciousness, spurred by a string of targeted drug approvals, highlighted by the availability of personal gene sequences, and accompanied by some remarkable claims about the future of medicine. It is likely that precision medicines will require precision drug development programs. What might such programs look like? PMID:26331240

  4. Precision agricultural systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture is a new farming practice that has been developing since late 1980s. It has been variously referred to as precision farming, prescription farming, site-specific crop management, to name but a few. There are numerous definitions for precision agriculture, but the central concept...

  5. Mapmaking for precision 21 cm cosmology

    NASA Astrophysics Data System (ADS)

    Dillon, Joshua S.; Tegmark, Max; Liu, Adrian; Ewall-Wice, Aaron; Hewitt, Jacqueline N.; Morales, Miguel F.; Neben, Abraham R.; Parsons, Aaron R.; Zheng, Haoxuan

    2015-01-01

    In order to study the "Cosmic Dawn" and the Epoch of Reionization with 21 cm tomography, we need to statistically separate the cosmological signal from foregrounds known to be orders of magnitude brighter. Over the last few years, we have learned much about the role our telescopes play in creating a putatively foreground-free region called the "EoR window." In this work, we examine how an interferometer's effects can be taken into account in a way that allows for the rigorous estimation of 21 cm power spectra from interferometric maps while mitigating foreground contamination and thus increasing sensitivity. This requires a precise understanding of the statistical relationship between the maps we make and the underlying true sky. While some of these calculations would be computationally infeasible if performed exactly, we explore several well-controlled approximations that make mapmaking and the calculation of map statistics much faster, especially for compact and highly redundant interferometers designed specifically for 21 cm cosmology. We demonstrate the utility of these methods and the parametrized trade-offs between accuracy and speed using one such telescope, the upcoming Hydrogen Epoch of Reionization Array, as a case study.
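    The linear map-to-sky relationship that underlies this kind of power-spectrum estimation can be illustrated with the standard noise-weighted mapmaking normal equations, in a toy dense-matrix form. The paper's point is precisely that realistic instrument matrices need well-controlled approximations; the sizes and matrices below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_pix = 60, 20
A = rng.normal(size=(n_vis, n_pix))        # instrument response: sky -> visibilities
noise_var = rng.uniform(0.5, 2.0, n_vis)   # diagonal visibility noise covariance N
x_true = rng.normal(size=n_pix)
d = A @ x_true + rng.normal(size=n_vis) * np.sqrt(noise_var)

# Noise-weighted least-squares map: x_hat = [A^T N^-1 A]^-1 A^T N^-1 d.
# The resulting map is a linear function of the true sky, which is what lets
# map statistics be propagated rigorously into power-spectrum estimates.
AtNinv = A.T / noise_var
x_hat = np.linalg.solve(AtNinv @ A, AtNinv @ d)
```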

  6. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    Precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantages of discriminating against other optical sources and of simultaneously measuring range.

  7. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  8. Highly precise measurement of HIV DNA by droplet digital PCR.

    PubMed

    Strain, Matthew C; Lada, Steven M; Luong, Tiffany; Rought, Steffney E; Gianella, Sara; Terry, Valeri H; Spina, Celsa A; Woelk, Christopher H; Richman, Douglas D

    2013-01-01

    Deoxyribonucleic acid (DNA) of the human immunodeficiency virus (HIV) provides the most sensitive measurement of residual infection in patients on effective combination antiretroviral therapy (cART). Droplet digital PCR (ddPCR) has recently been shown to provide highly accurate quantification of DNA copy number, but its application to quantification of HIV DNA, or other equally rare targets, has not been reported. This paper demonstrates and analyzes the application of ddPCR to measure the frequency of total HIV DNA (pol copies per million cells), and episomal 2-LTR (long terminal repeat) circles in cells isolated from infected patients. Analysis of over 300 clinical samples, including over 150 clinical samples assayed in triplicate by ddPCR and by real-time PCR (qPCR), demonstrates a significant increase in precision, with an average 5-fold decrease in the coefficient of variation of pol copy numbers and a >20-fold accuracy improvement for 2-LTR circles. Additional benefits of the ddPCR assay over qPCR include absolute quantification without reliance on an external standard and relative insensitivity to mismatches in primer and probe sequences. These features make digital PCR an attractive alternative for measurement of HIV DNA in clinical specimens. The improved sensitivity and precision of measurement of these rare events should facilitate measurements to characterize the latent HIV reservoir and interventions to eradicate it. PMID:23573183
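    The absolute quantification without an external standard rests on standard digital-PCR Poisson statistics: the fraction of positive droplets gives the mean copies per droplet after correcting for droplets that hold more than one copy. A minimal sketch (the droplet volume is a typical value, not a number from this paper):

```python
import math

def ddpcr_concentration(positive, total, droplet_volume_ul=0.00085):
    """Poisson-corrected target concentration from droplet counts.
    positive/total: fluorescence-positive and total accepted droplets.
    droplet_volume_ul: per-droplet volume (~0.85 nL is typical; assumed here).
    Returns (mean copies per droplet, copies per microliter of reaction)."""
    p = positive / total
    lam = -math.log(1.0 - p)        # corrects for droplets holding >1 copy
    return lam, lam / droplet_volume_ul

lam, conc = ddpcr_concentration(positive=120, total=15000)
```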

  9. Highly Precise Measurement of HIV DNA by Droplet Digital PCR

    PubMed Central

    Strain, Matthew C.; Lada, Steven M.; Luong, Tiffany; Rought, Steffney E.; Gianella, Sara; Terry, Valeri H.; Spina, Celsa A.; Woelk, Christopher H.; Richman, Douglas D.

    2013-01-01

    Deoxyribonucleic acid (DNA) of the human immunodeficiency virus (HIV) provides the most sensitive measurement of residual infection in patients on effective combination antiretroviral therapy (cART). Droplet digital PCR (ddPCR) has recently been shown to provide highly accurate quantification of DNA copy number, but its application to quantification of HIV DNA, or other equally rare targets, has not been reported. This paper demonstrates and analyzes the application of ddPCR to measure the frequency of total HIV DNA (pol copies per million cells), and episomal 2-LTR (long terminal repeat) circles in cells isolated from infected patients. Analysis of over 300 clinical samples, including over 150 clinical samples assayed in triplicate by ddPCR and by real-time PCR (qPCR), demonstrates a significant increase in precision, with an average 5-fold decrease in the coefficient of variation of pol copy numbers and a >20-fold accuracy improvement for 2-LTR circles. Additional benefits of the ddPCR assay over qPCR include absolute quantification without reliance on an external standard and relative insensitivity to mismatches in primer and probe sequences. These features make digital PCR an attractive alternative for measurement of HIV DNA in clinical specimens. The improved sensitivity and precision of measurement of these rare events should facilitate measurements to characterize the latent HIV reservoir and interventions to eradicate it. PMID:23573183

  10. The Seasat Precision Orbit Determination Experiment

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Born, G. H.

    1980-01-01

    The objectives and conclusions reached during the Seasat Precision Orbit Determination Experiment are discussed. It is noted that the activities of the experiment team included extensive software calibration and validation and an intense effort to validate and improve the dynamic models which describe the satellite's motion. Significant improvement in the gravitational model was obtained during the experiment, and it is pointed out that the current accuracy of the Seasat altitude ephemeris is 1.5 m rms. An altitude ephemeris for the Seasat spacecraft with an accuracy of 0.5 m rms is seen as possible with further improvements in the geopotential, atmospheric drag, and solar radiation pressure models. It is concluded that since altimetry missions with a 2-cm precision altimeter are contemplated, the precision orbit determination effort initiated under the Seasat Project must be continued and expanded.

  11. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters and into the precision repeatability of performance. All lamps are not equal. There are a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of precision performance lamps includes (must include) the factors of a precision lamp. But what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered: not individually, but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, system production manager, and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics actually aids in system product design and manufacturing to build and market more market-acceptable products...

  12. Precision optical metrology without lasers

    NASA Astrophysics Data System (ADS)

    Bergmann, Ralf B.; Burke, Jan; Falldorf, Claas

    2015-07-01

    Optical metrology is a key technique when it comes to precise and fast measurement with a resolution down to the micrometer or even nanometer regime. The choice of a particular optical metrology technique and the quality of results depends on sample parameters such as size, geometry and surface roughness as well as user requirements such as resolution, measurement time and robustness. Interferometry-based techniques are well known for their low measurement uncertainty in the nm range, but usually require careful isolation against vibration and a laser source that often needs shielding for reasons of eye-safety. In this paper, we concentrate on high precision optical metrology without lasers by using the gradient based measurement technique of deflectometry and the finite difference based technique of shear interferometry. Careful calibration of deflectometry systems allows one to investigate virtually all kinds of reflecting surfaces including aspheres or free-form surfaces with measurement uncertainties below the μm level. Computational Shear Interferometry (CoSI) allows us to combine interferometric accuracy with the possibility of using cheap, eye-safe, low-brilliance light sources such as fiber-coupled LEDs or even liquid crystal displays. We use CoSI, e.g., for quantitative phase contrast imaging in microscopy. We highlight the advantages of both methods, discuss their transfer functions and present results on the precision of both techniques.

  13. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site-specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for it is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variable ra...

  14. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for PI/SSI is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variabl...

  15. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  16. Precision Test of Mass-Ratio Variations with Lattice-Confined Ultracold Molecules

    SciTech Connect

    Zelevinsky, T.; Ye Jun; Kotochigova, S.

    2008-02-01

    We propose a precision measurement of time variations of the proton-electron mass ratio using ultracold molecules in an optical lattice. Vibrational energy intervals are sensitive to changes of the mass ratio. In contrast to measurements that use hyperfine-interval-based atomic clocks, the scheme discussed here is model independent and does not require separation of time variations of different physical constants. The possibility of applying the zero-differential-Stark-shift optical lattice technique is explored to measure vibrational transitions at high accuracy.
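    The sensitivity of vibrational intervals to the mass ratio follows from the harmonic scaling of vibrational frequencies with the reduced mass, a textbook Born-Oppenheimer estimate rather than a result quoted in this record:

```latex
\omega_{\mathrm{vib}} \propto \sqrt{k/\mu}, \qquad \mu \propto m_p
\quad\Longrightarrow\quad
\frac{\delta\omega_{\mathrm{vib}}}{\omega_{\mathrm{vib}}}
  = -\frac{1}{2}\,\frac{\delta(m_p/m_e)}{m_p/m_e}
```

so a fractional drift of the proton-electron mass ratio appears at half its size in any vibrational transition frequency.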

  17. Precision test of mass-ratio variations with lattice-confined ultracold molecules.

    PubMed

    Zelevinsky, T; Kotochigova, S; Ye, Jun

    2008-02-01

    We propose a precision measurement of time variations of the proton-electron mass ratio using ultracold molecules in an optical lattice. Vibrational energy intervals are sensitive to changes of the mass ratio. In contrast to measurements that use hyperfine-interval-based atomic clocks, the scheme discussed here is model independent and does not require separation of time variations of different physical constants. The possibility of applying the zero-differential-Stark-shift optical lattice technique is explored to measure vibrational transitions at high accuracy. PMID:18352267

  18. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  19. Improving the precision matrix for precision cosmology

    NASA Astrophysics Data System (ADS)

    Paz, Dante J.; Sánchez, Ariel G.

    2015-12-01

    The estimation of cosmological constraints from observations of the large-scale structure of the Universe, such as the power spectrum or the correlation function, requires the knowledge of the inverse of the associated covariance matrix, namely the precision matrix, Ψ . In most analyses, Ψ is estimated from a limited set of mock catalogues. Depending on how many mocks are used, this estimation has an associated error which must be propagated into the final cosmological constraints. For future surveys such as Euclid and Dark Energy Spectroscopic Instrument, the control of this additional uncertainty requires a prohibitively large number of mock catalogues. In this work, we test a novel technique for the estimation of the precision matrix, the covariance tapering method, in the context of baryon acoustic oscillation measurements. Even though this technique was originally devised as a way to speed up maximum likelihood estimations, our results show that it also reduces the impact of noisy precision matrix estimates on the derived confidence intervals, without introducing biases on the target parameters. The application of this technique can help future surveys to reach their true constraining power using a significantly smaller number of mock catalogues.
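    A minimal sketch of covariance tapering for precision-matrix estimation from mocks: the sample covariance is multiplied elementwise by a compactly supported taper that suppresses noisy long-range entries before inversion. The taper form, scale, and data below are generic illustrative choices, not the paper's.

```python
import numpy as np

def tapered_precision(mocks, taper_scale):
    """Precision matrix from mock catalogues with covariance tapering.
    mocks: (n_mocks, n_bins) array of mock measurements.
    taper_scale: tapering length in bin units (illustrative parameter)."""
    n_bins = mocks.shape[1]
    cov = np.cov(mocks, rowvar=False)
    sep = np.abs(np.subtract.outer(np.arange(n_bins), np.arange(n_bins)))
    x = np.minimum(sep / taper_scale, 1.0)
    taper = (1.0 - x) ** 4 * (4.0 * x + 1.0)   # Wendland taper: 1 on the diagonal,
    return np.linalg.inv(cov * taper)          # zero beyond taper_scale bins

rng = np.random.default_rng(0)
mocks = rng.normal(size=(500, 20))             # 500 hypothetical mocks, 20 bins
psi = tapered_precision(mocks, taper_scale=5.0)
```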

  20. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  1. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.
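    The strictly geometric reduction can be illustrated by recovering a point from ranges to known stations with Gauss-Newton least squares. This is a toy sketch with invented coordinates, not the system described in the record.

```python
import numpy as np

def multilaterate(stations, ranges, guess, iters=20):
    """Gauss-Newton solution for a 3-D position from ranges to known points.
    stations: (n, 3) known positions; ranges: (n,) measured distances."""
    x = np.array(guess, dtype=float)
    for _ in range(iters):
        diffs = x - stations
        dists = np.linalg.norm(diffs, axis=1)     # modeled ranges
        J = diffs / dists[:, None]                # Jacobian d(range)/d(position)
        x += np.linalg.lstsq(J, ranges - dists, rcond=None)[0]
    return x

stations = np.array([[0., 0., 0.], [100., 0., 0.], [0., 100., 0.], [0., 0., 100.]])
truth = np.array([30., 40., 50.])
ranges = np.linalg.norm(stations - truth, axis=1)   # noise-free synthetic ranges
est = multilaterate(stations, ranges, guess=[10., 10., 10.])
```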

  2. Do saccharide doped PAGAT dosimeters increase accuracy?

    NASA Astrophysics Data System (ADS)

    Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.

    2015-01-01

    To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose if all other uncertainties are conserved. However, temperature variations during the magnetic resonance scanning of polymer gels result in one of the highest contributions to dosimetric uncertainties. The purpose of this project was to study the dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced by up to 0.3 Gy for all saccharide-loaded gels compared to PAGAT gel. Higher concentrations of glucose and sucrose deteriorate the accuracy of PAGAT dosimeters for doses above 9 Gy.

  3. Precision Spectroscopy of Atomic Hydrogen

    NASA Astrophysics Data System (ADS)

    Beyer, A.; Parthey, Ch G.; Kolachevsky, N.; Alnis, J.; Khabarova, K.; Pohl, R.; Peters, E.; Yost, D. C.; Matveev, A.; Predehl, K.; Droste, S.; Wilken, T.; Holzwarth, R.; Hänsch, T. W.; Abgrall, M.; Rovera, D.; Salomon, Ch; Laurent, Ph; Udem, Th

    2013-12-01

    Precise determinations of transition frequencies of simple atomic systems are required for a number of fundamental applications such as tests of quantum electrodynamics (QED), the determination of fundamental constants and nuclear charge radii. The sharpest transition in atomic hydrogen occurs between the metastable 2S state and the 1S ground state. Its transition frequency has now been measured with almost 15 digits accuracy using an optical frequency comb and a cesium atomic clock as a reference [1]. A recent measurement of the 2S - 2P3/2 transition frequency in muonic hydrogen is in significant contradiction to the hydrogen data if QED calculations are assumed to be correct [2, 3]. We hope to contribute to this so-called "proton size puzzle" by providing additional experimental input from hydrogen spectroscopy.

  4. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus for enabling accurate retention of a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in-situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 μm.

  5. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure composed of an oxide layer of high electrical permittivity (higher-k dielectric constant) that separates the metal gate and the semiconductor channel of a field emitter transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  6. Quick contrast sensitivity measurements in the periphery.

    PubMed

    Rosén, Robert; Lundström, Linda; Venkataraman, Abinaya Priya; Winter, Simon; Unsbo, Peter

    2014-01-01

    Measuring the contrast sensitivity function (CSF) in the periphery of the eye is complicated. The lengthy measurement time precludes all but the most determined subjects. The aim of this study was to implement and evaluate a faster routine based on the quick CSF method (qCSF) but adapted to work in the periphery. Additionally, normative data is presented on neurally limited peripheral CSFs. A peripheral qCSF measurement using 100 trials can be performed in 3 min. The precision and accuracy were tested for three subjects under different conditions (number of trials, peripheral angles, and optical corrections). The precision for estimates of contrast sensitivity at individual spatial frequencies was 0.07 log units when three qCSF measurements of 100 trials each were averaged. Accuracy was estimated by comparing the qCSF results with a more traditional measure of CSF. Average accuracy was 0.08 log units with no systematic error. In the second part of the study, we collected three CSFs of 100 trials for six persons in the 20° nasal, temporal, inferior, and superior visual fields. The measurements were performed in an adaptive optics system running in a continuous closed loop. The Tukey HSD test showed significant differences (p < 0.05) between all fields except between the nasal and the temporal fields. Contrast sensitivity was higher in the horizontal fields, and the inferior field was better than the superior. This modified qCSF method decreases the measurement time significantly and allows otherwise unfeasible studies of the peripheral CSF. PMID:24993017
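    The qCSF method achieves its speed by estimating the CSF through a low-dimensional parametric form; the standard choice is a truncated log-parabola. A sketch of that form (truncation at low frequencies omitted for brevity; parameter values are illustrative, not from this study):

```python
import math

def log_parabola_csf(f, peak_gain, peak_freq, bandwidth):
    """Log-parabola contrast sensitivity function used by qCSF-style methods.
    f: spatial frequency in cycles/degree; returns log10 contrast sensitivity.
    bandwidth: width of the parabola in octaves."""
    return math.log10(peak_gain) - 4 * math.log10(2) * (
        math.log10(f / peak_freq) / bandwidth) ** 2

s_peak = log_parabola_csf(4.0, peak_gain=100.0, peak_freq=4.0, bandwidth=1.0)
s_off = log_parabola_csf(8.0, peak_gain=100.0, peak_freq=4.0, bandwidth=1.0)
```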

  7. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention on internal somatic cues has been labeled interoception. The present study examined the role of physiological arousal and subjective anxiety on interoceptive accuracy. Infrequent panickers and nonanxious participants participated in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were scored against polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462
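    Mental heartbeat-tracking accuracy is conventionally scored by comparing counted to recorded beats per interval. The Schandry-style score below is a common convention; the paper's exact coding scheme is not given in the abstract, so the formula and numbers are illustrative.

```python
def heartbeat_accuracy(counted, recorded):
    """Mean interoceptive accuracy over intervals:
    1 - |recorded - counted| / recorded, so 1.0 is perfect perception.
    counted: participant's reported beats; recorded: polygraph beats."""
    scores = [1 - abs(r - c) / r for c, r in zip(counted, recorded)]
    return sum(scores) / len(scores)

acc = heartbeat_accuracy(counted=[28, 40, 55], recorded=[30, 45, 60])
```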

  8. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are requested for high precision results; however, real life situations do not always let us collect 24 h data. Thus repeated GPS surveys of 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently, some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now usually regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of the NASA/JPL's GIPSY/OASIS II software and globally distributed GPS stations' data of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, incorporating two years in succession, is used in the analysis. Our major conclusion is that a reformulation for the GPS positioning accuracy is necessary when taking into account the seasonal effects, and the typical one-term accuracy formulation is expanded to a two-term one.
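    The expansion from a one-term to a two-term accuracy formulation can be illustrated by fitting session-duration models to position repeatability data. The abstract does not give the authors' functional form or numbers, so the models and data below are invented for illustration.

```python
import numpy as np

# Hypothetical position repeatabilities (mm) versus session length (hours).
T = np.array([2.0, 4.0, 6.0, 8.0, 12.0, 24.0])
sigma = np.array([11.0, 8.0, 6.6, 5.8, 4.9, 3.9])

# One-term model: sigma = a / sqrt(T).
a_one, = np.linalg.lstsq(1.0 / np.sqrt(T)[:, None], sigma, rcond=None)[0]

# Two-term model adds a duration-independent floor b, e.g. to absorb an
# unmodeled seasonal/annual contribution: sigma = a / sqrt(T) + b.
design = np.column_stack([1.0 / np.sqrt(T), np.ones_like(T)])
a_two, b_two = np.linalg.lstsq(design, sigma, rcond=None)[0]
```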

  9. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  10. Note: Precise radial distribution of charged particles in a magnetic guiding field

    NASA Astrophysics Data System (ADS)

    Backe, H.

    2015-07-01

    Current high precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can numerically be evaluated with arbitrary precision.

  11. Note: Precise radial distribution of charged particles in a magnetic guiding field.

    PubMed

    Backe, H

    2015-07-01

    Current high precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can numerically be evaluated with arbitrary precision. PMID:26233418

  12. Note: Precise radial distribution of charged particles in a magnetic guiding field

    SciTech Connect

    Backe, H.

    2015-07-15

    Current high precision beta decay experiments of polarized neutrons, employing magnetic guiding fields in combination with position sensitive and energy dispersive detectors, resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can numerically be evaluated with arbitrary precision.

  13. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described which enables real-time accountability over the applied displacement feedback system for precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  14. Expressing precision and bias in calorimetry

    SciTech Connect

    Hauck, Danielle K; Croft, Stephen; Bracken, David S

    2010-01-01

    The calibration and calibration verification of a nuclear calorimeter represents a substantial investment of time, in part because a single calorimeter measurement takes of the order of 2 to 24 h to complete. The time to complete a measurement generally increases with the size of the calorimeter measurement well. It is therefore important to plan the sequence of measurements rather carefully so as to cover the dynamic range and achieve the required accuracy within a reasonable time frame. This work will discuss how calibrations and their verification have been done in the past and what we consider to be good general practice in this regard. A proposed approach to calibration and calibration verification is presented which, in the final analysis, makes use of all the available data - both calibration and verification collectively - in order to obtain the best (in a best-fit sense) possible calibration. The combination of sample variance and percent recovery is traditionally taken as sufficient to capture the random (precision) and systematic (bias) contributions to the uncertainty in a calorimetric assay. These terms have been defined as well as formulated for a basic calibration. It has been traditional to assume that sensitivity is a linear function of power. However, the availability of computer power and statistical packages should be utilized to fit the response function as accurately as possible using whatever functions are deemed most suitable. Allowing for more flexibility in the response-function fit will enable the calibration to be updated according to the results from regular validation measurements through the year. In a companion paper, to be published elsewhere, we plan to discuss alternative fitting functions.
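    The traditional pairing of sample variance (precision) and percent recovery (bias) mentioned above can be computed directly from replicate measurements against a known reference power; the measurement values here are invented for illustration.

```python
import statistics

def precision_and_bias(measured_watts, reference_watts):
    """Random and systematic components of a calorimetric assay:
    relative standard deviation (%) of replicate measurements, and
    mean percent recovery against a known reference power."""
    mean = statistics.fmean(measured_watts)
    rsd = 100.0 * statistics.stdev(measured_watts) / mean   # precision
    recovery = 100.0 * mean / reference_watts               # bias indicator
    return rsd, recovery

rsd, rec = precision_and_bias([0.99, 1.01, 1.00, 1.02], reference_watts=1.00)
```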

  15. Precision optical device of freeform defects inspection

    NASA Astrophysics Data System (ADS)

    Meguellati, S.

    2015-09-01

    The optical scanning method presented in this paper is used for precision measurement of shape deformations, or of absolute form by comparison with a reference component, on optical or mechanical components over surface areas on the order of a few mm2 and more. The principle of the method is to project the image of a source grating so as to optically palpate the surface to be inspected; after reflection, the image of the source grating carries the imprint of the object topography and is then projected onto the plane of a reference grating to generate moiré fringes for defect detection. The optical device used allows a significant dimensional magnification, up to 1000 times the inspected area for micro-surfaces, which allows easy processing and reaches exceptional nanometric measurement precision. According to the measurement principle, the sensitivity of displacement measurement with the moiré technique depends on the grating frequency, which can be raised to increase the detection resolution. This measurement technique can be used advantageously to measure the deformations generated by the production process, or by constraints on functional parts, and the influence of these variations on their function. The optical device, and the optical principle on which it is based, can be used for automated inspection of industrially produced goods. It can also be used for dimensional control, for example to decide whether a part is good or scrap: it then suffices to compare a moiré fringe figure with one previously recorded from a part considered the standard, which saves time and money while preserving accuracy. The technique has found various applications in diverse fields, from biomedical to industrial and scientific applications.
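    The dependence of sensitivity on grating frequency comes from fringe counting: one fringe corresponds to one grating pitch of relative displacement, so finer gratings resolve smaller displacements. A first-order illustration (the actual projection geometry adds a scale factor not modeled here):

```python
def moire_displacement_mm(fringe_count, grating_freq_lp_per_mm):
    """Displacement resolved by counting moire fringes: each fringe
    corresponds to one pitch (1/frequency) of relative grating shift.
    grating_freq_lp_per_mm: grating frequency in line pairs per mm."""
    pitch_mm = 1.0 / grating_freq_lp_per_mm
    return fringe_count * pitch_mm

d = moire_displacement_mm(fringe_count=5, grating_freq_lp_per_mm=100.0)
```

Doubling the grating frequency halves the pitch, halving the displacement per fringe and thereby doubling the sensitivity.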

  16. Prints for precision engineering research lathe (Engineering Materials)

    SciTech Connect

    Not Available

    1982-12-01

    The precision engineering research lathe (PERL) is a small two-axis, ultra-high-precision turning machine used for turning very small contoured parts. Housed in a laminar-flow enclosure for temperature control, called a clean air envelope, PERL is maintained at a constant 68 degrees F (plus or minus 1 degree). The size of the lathe is minimized to reduce sensitivity to temperature variations. This, combined with internal water cooling of the spindle motor, the only major heat source on the machine, permits the use of air-shower temperature control. (This approach is a departure from previous designs for larger machines where liquid shower systems are used.) Major design features include the use of a T-configuration, hydrostatic oil slides, capstan slide drives, air-bearing spindles, and laser interferometer position feedback. The following features are particularly noteworthy: (1) to obtain the required accuracy and friction characteristics, the two linear slides are supported by 10-cm-travel hydrostatic bearings developed at LLNL; (2) to minimize backlash and friction, capstan drives are used to provide the slide motions; and (3) to obtain the best surface finish possible, asynchronous (nonrepeatable) spindle motion is minimized by driving the spindle directly with a brushless dc torque motor. PERL operates in single-axis mode. Using facing cuts on copper with a diamond tool, surface finishes of 7.5 nm peak-to-valley (1.5 nm rms) have been achieved.

  17. High-precision hydraulic Stewart platform

    NASA Astrophysics Data System (ADS)

    van Silfhout, Roelof G.

    1999-08-01

    We present a novel design for a Stewart platform (or hexapod), an apparatus which performs positioning tasks with high accuracy. The platform, which is supported by six hydraulic telescopic struts, provides six degrees of freedom with 1 μm resolution. Rotations about user defined pivot points can be specified for any axis of rotation with microradian accuracy. Motion of the platform is performed by changing the strut lengths. Servo systems set and maintain the length of the struts to high precision using proportional hydraulic valves and incremental encoders. The combination of hydraulic actuators and a design which is optimized in terms of mechanical stiffness enables the platform to manipulate loads of up to 20 kN. Sophisticated software allows direct six-axis positioning including true path control. Our platform is an ideal support structure for a large variety of scientific instruments that require a stable alignment base with high-precision motion.
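
    Setting the pose by commanding strut lengths is the standard hexapod inverse kinematics. A minimal sketch, with an illustrative joint geometry rather than the actual design, is:

```python
import numpy as np

def strut_lengths(base_pts, plat_pts, t, rpy):
    """Strut lengths realizing a platform pose given by translation t and
    roll/pitch/yaw angles rpy: each length is |R p_i + t - b_i|."""
    r, p, y = rpy
    Rx = np.array([[1, 0, 0], [0, np.cos(r), -np.sin(r)], [0, np.sin(r), np.cos(r)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    R = Rz @ Ry @ Rx
    return np.linalg.norm(plat_pts @ R.T + t - base_pts, axis=1)

# Illustrative geometry: joints on unit circles, platform 0.5 m above.
ang = np.deg2rad([0, 60, 120, 180, 240, 300])
base_pts = np.column_stack([np.cos(ang), np.sin(ang), np.zeros(6)])
plat_pts = base_pts.copy()
lengths = strut_lengths(base_pts, plat_pts, np.array([0, 0, 0.5]), (0, 0, 0))
# In this neutral pose all six struts come out 0.5 m long.
```

    The servo loop then only has to hold each commanded length; the pose accuracy follows from the length resolution and the geometry.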

  18. Exact-to-precision generalized perturbation for neutron transport calculation

    SciTech Connect

    Wang, C.; Abdel-Khalik, H. S.

    2013-07-01

    This manuscript extends the exact-to-precision generalized perturbation theory (EPGPT), introduced previously, to neutron transport calculations, where previous developments focused on neutron diffusion calculations only. EPGPT collectively denotes new developments in generalized perturbation theory (GPT) that place a premium on computational efficiency and defensible accuracy in order to render GPT a standard analysis tool in routine design and safety reactor calculations. EPGPT constructs a surrogate model with quantifiable accuracy which can replace the original neutron transport model for subsequent engineering analysis, e.g. functionalization of the homogenized few-group cross sections in terms of various core conditions, sensitivity analysis, and uncertainty quantification. This is achieved by reducing the effective dimensionality of the state variable (i.e. the neutron angular flux) by projection onto an active subspace. Confining the state variations to the active subspace allows one to construct a small number of what are referred to as 'active' responses, which are solely dependent on the physics model rather than on the responses of interest, the number of input parameters, or the number of points in the state phase space. (authors)
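
    The dimensionality-reduction step can be sketched with a generic SVD-based projection onto an active subspace. This is not the EPGPT algorithm itself, only an illustration of confining state variations to a low-rank subspace built from snapshot solutions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'state' snapshots: 500-dimensional flux vectors that in fact
# vary in only 5 directions (an exactly low-rank toy model).
modes = rng.standard_normal((500, 5))
snapshots = modes @ rng.standard_normal((5, 40))

# Active subspace: leading left-singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
Q = U[:, :5]

# A new state from the same model is reproduced by its projection, so a
# small set of subspace coordinates suffices to characterize it.
new_state = modes @ rng.standard_normal(5)
rel_err = (np.linalg.norm(new_state - Q @ (Q.T @ new_state))
           / np.linalg.norm(new_state))
```

    In the toy model the projection error is at machine precision; for a real transport model the rank would be chosen to meet a quantified accuracy target.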

  19. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  20. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones where they exist and are relevant. To apply this new approach to accuracy measurement, we introduce anatomy-aware extensions of both the Dice coefficient and the Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only can the measurement for individual users change, but the ranking of users' segmentation skills may also require reordering.
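
    A minimal sketch of the idea, assuming a simple per-pixel zone-weight map; this is an illustration in the spirit of the paper, not its exact anatomy-aware Dice definition:

```python
import numpy as np

def weighted_dice(seg, gold, weights):
    """Dice-style overlap in which every pixel counts with the weight of
    its anatomical zone; uniform weights recover the ordinary Dice."""
    inter = np.sum(weights * (seg & gold))
    return 2.0 * inter / (np.sum(weights * seg) + np.sum(weights * gold))

# Toy 4x4 image: ground truth and a segmentation that leaks rightwards.
gold = np.zeros((4, 4), dtype=bool); gold[1:3, 1:3] = True
seg = np.zeros((4, 4), dtype=bool);  seg[1:3, 1:4] = True

uniform = np.ones((4, 4))
zones = np.ones((4, 4)); zones[:, 3] = 5.0   # critical zone at the right edge

d_plain = weighted_dice(seg, gold, uniform)  # ordinary Dice: 0.8
d_zoned = weighted_dice(seg, gold, zones)    # lower: the leak hits the zone
```

    The same segmentation is penalized more once the overlap error falls inside an anatomically significant zone, which is exactly what can reorder user rankings.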

  1. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√(N_sim) rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√(N_sim) limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
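
    The 1/√(N_sim) behaviour of the naive estimator is easy to reproduce: invert the sample covariance of N Gaussian draws and watch the error shrink with N. A toy check with a sparse (tridiagonal) truth, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 20
# Sparse (tridiagonal) true precision matrix and its covariance.
true_prec = (np.diag(np.full(p, 2.0))
             + np.diag(np.full(p - 1, -0.5), 1)
             + np.diag(np.full(p - 1, -0.5), -1))
cov = np.linalg.inv(true_prec)

def sample_prec_err(n):
    """Frobenius error of the inverse sample covariance from n draws."""
    x = rng.multivariate_normal(np.zeros(p), cov, size=n)
    return np.linalg.norm(np.linalg.inv(np.cov(x, rowvar=False)) - true_prec)

err_small, err_large = sample_prec_err(500), sample_prec_err(50000)
# err_large should come out roughly sqrt(100) = 10 times smaller.
```

    A sparsity-exploiting estimator would instead fit only the few nonzero entries, reaching the same error with far fewer simulations.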

  2. Precision Higgs Physics

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja

    2015-04-01

    The future of the high energy physics program will increasingly rely upon precision studies looking for deviations from the Standard Model. Run I of the Large Hadron Collider (LHC) triumphantly discovered the long-awaited Higgs boson, and there is great hope in the particle physics community that this new state will open a portal onto a new theory of Nature at the smallest scales. A precision study of Higgs boson properties is needed in order to test whether this belief is true. New theoretical ideas and high-precision QCD tools are crucial to fulfill this goal. They become even more important as larger data sets from LHC Run II further reduce the experimental errors and theoretical uncertainties begin to dominate. In this talk, I will review recent progress in understanding Higgs properties, including the calculation of precision predictions needed to identify possible physics beyond the Standard Model in the Higgs sector. New ideas for measuring the Higgs couplings to light quarks as well as bounding the Higgs width in a model-independent way will be discussed. Precision predictions for Higgs production in association with jets and ongoing efforts to calculate the inclusive N3LO cross section will be reviewed.

  5. Precise Indoor Localization for Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kaijaluoto, R.; Hyyppä, A.

    2015-05-01

    Accurate 3D data is of high importance for indoor modeling in various applications in construction, engineering, and cultural heritage documentation. Because the lack of GNSS signals hampers the use of kinematic platforms indoors, TLS is currently the most accurate and precise method for collecting such data. Owing to its static, single-viewpoint data collection, however, excessive time and data redundancy are needed to guarantee the integrity and coverage of the data. Localization methods with affordable scanners can instead be used to solve the mobile-platform pose problem. The aim of this study was to investigate what level of trajectory accuracy can be achieved with high-quality sensors and freely available state-of-the-art planar SLAM algorithms, and how well this trajectory translates to a point cloud collected with a secondary scanner. In this study high-precision laser scanners were used, with a novel way of combining the strengths of two SLAM algorithms into a functional method for precise localization. We collected five datasets using the Slammer platform with two laser scanners and processed them with altogether 20 different parameter sets. The results were validated against a TLS reference. They show that increasing the scan frequency improves the trajectory, reaching 20 mm RMSE levels for the best-performing parameter sets. Further analysis of the 3D point cloud showed good agreement with the TLS reference, with 17 mm positional RMSE. With precision scanners the obtained point cloud provides highly detailed data for indoor modeling, with accuracies close to TLS at best and vastly improved data collection efficiency.

  6. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  7. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from doing so. A cover is positioned over the base, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  8. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular, ground-state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at the highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  9. Attaining the Photometric Precision Required by Future Dark Energy Projects

    SciTech Connect

    Stubbs, Christopher

    2013-01-21

    This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere; 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high-precision silicon photodiode; and 3) using the self-consistency of the spectra of stars to achieve precise color calibrations.

  10. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  11. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality training data (e.g., sparse or unrepresentative data) is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  12. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  13. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  14. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  15. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth mass can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but the full orbital parameters can be determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will also be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  16. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    Prototype precision bolometer calibration bridge is manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  17. Precision metal molding

    NASA Technical Reports Server (NTRS)

    Townhill, A.

    1967-01-01

    Method provides precise alignment for metal-forming dies while permitting minimal thermal expansion without die warpage or cavity space restriction. The interfacing dowel bars and die side facings are arranged so that the dies are restrained in one orthogonal direction and permitted to expand thermally in the perpendicular direction.

  18. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  19. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  20. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has given multiple definitions often resulting in students learning principles that are actually false is presented with an example of the new term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  1. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  2. Next generation KATRIN high precision voltage divider for voltages up to 65kV

    NASA Astrophysics Data System (ADS)

    Bauer, S.; Berendes, R.; Hochschulz, F.; Ortjohann, H.-W.; Rosendahl, S.; Thümmler, T.; Schmidt, M.; Weinheimer, C.

    2013-10-01

    The KATRIN (KArlsruhe TRItium Neutrino) experiment aims to determine the mass of the electron antineutrino with a sensitivity of 200 meV by precisely measuring the electron spectrum of the tritium beta decay. This will be done by the use of a retarding spectrometer of the MAC-E-Filter type. To achieve the desired sensitivity the stability of the retarding potential of -18.6 kV has to be monitored with a precision of 3 ppm over at least two months. Since this is not feasible with commercial devices, two ppm-class high voltage dividers were developed, following the concept of the standard divider for DC voltages of up to 100 kV of the Physikalisch-Technische Bundesanstalt (PTB). In order to reach such high accuracies different effects have to be considered. The two most important ones are the temperature dependence of resistance and leakage currents, caused by insulators or corona discharges. For the second divider improvements were made concerning the high-precision resistors and the thermal design of the divider. The improved resistors are the result of a cooperation with the manufacturer. The design improvements, the investigation and the selection of the resistors, the built-in ripple probe and the calibrations at PTB will be reported here. The latter demonstrated a stability of about 0.1 ppm/month over a period of two years.
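
    The impact of the resistors' temperature dependence on a ppm-class divider can be sketched with the elementary ratio formula. The resistor values and temperature coefficients below are illustrative, not the KATRIN divider's actual parameters:

```python
def divider_ratio(r_high, r_low):
    """Output fraction of a resistive HV divider: V_out / V_in."""
    return r_low / (r_high + r_low)

def ppm_drift(r_high, r_low, tc_high, tc_low, dT):
    """Change of the divider ratio, in ppm, for a temperature step dT
    when the two arms have mismatched temperature coefficients (1/K)."""
    r0 = divider_ratio(r_high, r_low)
    r1 = divider_ratio(r_high * (1 + tc_high * dT),
                       r_low * (1 + tc_low * dT))
    return (r1 - r0) / r0 * 1e6

# 1.8 GOhm : 1 MOhm divider, arms mismatched by 0.5 ppm/K, 1 K excursion:
drift = ppm_drift(1.8e9, 1.0e6, 1.0e-6, 0.5e-6, 1.0)
# about -0.5 ppm, i.e. a 1 K swing with this mismatch would already eat
# a sixth of a 3 ppm stability budget.
```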

  3. Precision Measurements in 37K

    NASA Astrophysics Data System (ADS)

    Anholm, Melissa; Ashery, Daniel; Behling, Spencer; Fenker, Benjamin; Melconian, Dan; Mehlman, Michael; Behr, John; Gorelov, Alexandre; Olchanski, Konstantin; Preston, Claire; Warner, Claire; Gwinner, Gerald

    2015-10-01

    We have performed precision measurements of the kinematics of the daughter particles in the decay of 37K. This isotope decays by β+ emission in a mixed Fermi/Gamow-Teller transition to its isobaric analog, 37Ar. Because the higher-order standard model corrections to this decay process are well understood, it is an ideal candidate for improving constraints on interactions beyond the standard model. Our setup utilizes a magneto-optical trap to confine and cool samples of 37K, which are then spin-polarized by optical pumping. This allows us to perform measurements on both polarized and unpolarized nuclei, which is valuable for a complete understanding of systematic effects. Precision measurements of this decay are expected to be sensitive to the presence of right-handed vector currents, as well as a linear combination of scalar and tensor currents. Progress towards a final result is presented here. Support provided by: NSERC, NRC through TRIUMF, DOE ER40773, Early Career ER41747, Israel Science Foundation.

  4. Precision calibration and systematic error reduction in the long trace profiler

    SciTech Connect

    Qian, Shinan; Sostero, Giovanni; Takacs, Peter Z.

    2000-01-01

    The long trace profiler (LTP) has become the instrument of choice for surface figure testing and slope error measurement of mirrors used for synchrotron radiation and x-ray astronomy optics. In order to achieve highly accurate measurements with the LTP, systematic errors need to be reduced by precise angle calibration and accurate focal plane position adjustment. A self-scanning method is presented to adjust the focal plane position of the detector with high precision by use of a pentaprism scanning technique. The focal plane position can be set to better than 0.25 mm for a 1250-mm-focal-length Fourier-transform lens using this technique. A 0.03-arcsec-resolution theodolite combined with the sensitivity of the LTP detector system can be used to calibrate the angular linearity error very precisely. Some suggestions are introduced for reducing the systematic errors. With these precision calibration techniques, accuracy in the measurement of figure and slope error on meter-long mirrors is now at a level of about 1 µrad rms over the whole testing range of the LTP. (c) 2000 Society of Photo-Optical Instrumentation Engineers.

  5. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in the azimuth and range dimensions, and elongated in the cross-range dimension with a precision in the order of meters (the ellipsoid axis lengths are in the ratio 1 : 3 : 213). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
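
    The error-ellipsoid description is just the eigen-decomposition of the 3x3 variance-covariance matrix of the position estimate. A sketch with illustrative numbers mimicking the cigar shape reported (centimetres in azimuth and range, metres in cross-range):

```python
import numpy as np

def error_ellipsoid(cov):
    """Semi-axis lengths and directions of the 1-sigma error ellipsoid
    for a 3x3 position variance-covariance matrix."""
    w, v = np.linalg.eigh(cov)    # eigenvalues in ascending order
    return np.sqrt(w), v          # axis lengths, unit axis columns

# Illustrative covariance (m^2), not the paper's estimate:
cov = np.diag([0.01**2, 0.03**2, 2.13**2])
lengths, axes = error_ellipsoid(cov)
# lengths ~ [0.01, 0.03, 2.13] m: a 1 : 3 : 213 axis ratio.
```

    Intersecting an ellipsoid of this shape with a 3D building model is what allows a scatterer to be attached to a specific real-world object.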

  6. Precision spectroscopy with reactor antineutrinos

    NASA Astrophysics Data System (ADS)

    Huber, Patrick; Schwetz, Thomas

    2004-09-01

In this work we present an accurate parameterization of the antineutrino flux produced by the isotopes 235U, 239Pu, and 241Pu in nuclear reactors. We determine the coefficients of this parameterization, as well as their covariance matrix, by performing a fit to spectra inferred from experimentally measured beta spectra. Subsequently we show that flux shape uncertainties play only a minor role in the KamLAND experiment; however, we find that future reactor-neutrino experiments to measure the mixing angle θ13 are sensitive to the fine details of the reactor-neutrino spectra. Finally, we investigate the possibility of determining the isotopic composition in nuclear reactors through an antineutrino measurement. We find that with a three month exposure of a 1 ton detector the isotope fractions and the thermal reactor power can be determined at a few percent accuracy, which may open the possibility of an application for safeguards or nonproliferation objectives.
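The fit described above can be sketched in miniature: assuming a log-polynomial flux form (a common parameterization in this literature), a least-squares fit to the inferred spectrum returns both the coefficients and their covariance matrix, which is what propagates the flux-shape uncertainty downstream. The energy grid, coefficients, and noise level below are synthetic, not the paper's data:

```python
import numpy as np

# Synthetic "inferred spectrum": phi(E) = exp(a0 + a1*E + a2*E^2),
# perturbed with small Gaussian noise in log space.
rng = np.random.default_rng(0)
E = np.linspace(2.0, 8.0, 25)            # antineutrino energy, MeV (illustrative)
true_a = np.array([0.9, 0.2, -0.08])     # invented coefficients
log_phi = true_a[0] + true_a[1] * E + true_a[2] * E**2
log_phi_obs = log_phi + rng.normal(0.0, 0.01, E.size)

# Least-squares fit; cov=True also returns the coefficient
# covariance matrix used for error propagation.
coef, coef_cov = np.polyfit(E, log_phi_obs, deg=2, cov=True)
# np.polyfit orders coefficients highest power first: [a2, a1, a0]
```

The off-diagonal entries of `coef_cov` matter here: the polynomial coefficients are strongly correlated, so quoting only their individual variances would overstate the flux-shape uncertainty.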

  7. High-Accuracy Ring Laser Gyroscopes: Earth Rotation Rate and Relativistic Effects

    NASA Astrophysics Data System (ADS)

    Beverini, N.; Di Virgilio, A.; Belfi, J.; Ortolan, A.; Schreiber, K. U.; Gebauer, A.; Klügel, T.

    2016-06-01

The Gross Ring G is a square ring laser gyroscope, built as a monolithic Zerodur structure with sides 4 m in length. It has demonstrated that a large ring laser provides a sensitivity high enough to measure the rotational rate of the Earth with a high precision of ΔΩ_E < 10^-8. It is possible to show that further improvement in accuracy could allow the observation of metric frame dragging, produced by the rotating mass of the Earth (the Lense-Thirring effect), as predicted by General Relativity. Furthermore, it can provide a local measurement of the Earth's rotational rate with a sensitivity near to that provided by the international IERS system. The GINGER project intends to push this level of sensitivity further and to improve the accuracy and the long-term stability. A monolithic structure similar to the G ring laser is not available for GINGER, so the preliminary goal is to demonstrate the feasibility of a larger gyroscope structure in which mechanical stability is obtained through active control of the geometry. A moderate-size prototype gyroscope (GP-2) has been set up in Pisa to test this active control of the ring geometry, while a second structure (GINGERino) has been installed inside the Gran Sasso underground laboratory to investigate the properties of a deep underground site in view of a future installation of the GINGER apparatus. Preliminary data from these two instruments are presented.
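The measurement principle is the Sagnac effect: a ring laser of area A and perimeter L produces a beat frequency Δf = 4 A Ω_n / (λ L), where Ω_n is the rotation-rate component along the ring normal. A quick check with the 4 m side length from the abstract; the HeNe wavelength and the Wettzell latitude are assumptions on my part, used only to show the order of magnitude:

```python
import math

# Sagnac beat frequency of a square ring laser:
#   delta_f = 4 * A * Omega_n / (lambda * L)
side = 4.0                      # m, side length from the abstract
A = side**2                     # enclosed area, m^2
L = 4 * side                    # perimeter, m
lam = 632.8e-9                  # m, HeNe laser wavelength (assumed)
omega_e = 7.2921e-5             # rad/s, Earth rotation rate
lat = math.radians(49.14)       # Wettzell, Germany (assumed site latitude)

# Projection of Earth's rotation onto the (horizontal) ring normal
omega_n = omega_e * math.sin(lat)
delta_f = 4 * A * omega_n / (lam * L)   # beat frequency, Hz (a few hundred Hz)
```

Resolving ΔΩ_E at the 10^-8 level thus means resolving this beat frequency to a few μHz, which is why mechanical and geometric stability dominate the design.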

  8. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  9. Improving the accuracy of phase-shifting techniques

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Redondo-Galvan, Arturo

    2015-05-01

The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and computation of the absolute phase map. Recently, a method was proposed that applies phase interpolation to corner detection at subpixel accuracy in the projector image, improving the camera-projector calibration. We propose a general strategy to improve the accuracy of the correspondence search that can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can outperform the precision of the phase-shifting method.

  10. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  11. A passion for precision

    SciTech Connect

    2010-05-19

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  12. Precision Polarization of Neutrons

    NASA Astrophysics Data System (ADS)

    Martin, Elise; Barron-Palos, Libertad; Couture, Aaron; Crawford, Christopher; Chupp, Tim; Danagoulian, Areg; Estes, Mary; Hona, Binita; Jones, Gordon; Klein, Andi; Penttila, Seppo; Sharma, Monisha; Wilburn, Scott

    2009-05-01

Determining polarization of a cold neutron beam to high precision is required for the next generation neutron decay correlation experiments at the SNS, such as the proposed abBA and PANDA experiments. Precision polarimetry measurements were conducted at Los Alamos National Laboratory with the goal of determining the beam polarization to the level of 10^-3 or better. The cold neutrons from FP12 were polarized using optically polarized ^3He gas as a spin filter, which has a highly spin-dependent absorption cross section. A second ^3He spin filter was used to analyze the neutron polarization after passing through a resonant RF spin rotator. A discussion of the experiment and results will be given.

  13. A passion for precision

    ScienceCinema

    None

    2011-10-06

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  14. Precision disablement aiming system

    DOEpatents

    Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott

    2016-02-16

    A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.

  15. Precise linear sun sensor

    NASA Technical Reports Server (NTRS)

    Johnston, D. D.

    1972-01-01

    An evaluation of the precise linear sun sensor relating to future mission applications was performed. The test procedures, data, and results of the dual-axis, solid-state system are included. Brief descriptions of the sensing head and of the system's operational characteristics are presented. A unique feature of the system is that multiple sensor heads with various fields of view may be used with the same electronics.

  16. Precision laser aiming system

    SciTech Connect

    Ahrens, Brandon R.; Todd, Steven N.

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  17. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  18. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
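All five statistics compared in the abstract can be computed from a single 2x2 judgment table (judged correct vs. actually correct). A minimal sketch with invented counts, using the signal-detection labels from the medical-diagnosis literature; the numbers are illustrative, not the study's data:

```python
from statistics import NormalDist

# Invented 2x2 table: hits (TP), misses (FN), false alarms (FP),
# correct rejections (TN).
tp, fn, fp, tn = 40, 10, 15, 35
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)               # hit rate
specificity = tn / (tn + fp)               # correct-rejection rate
g_index = (tp + tn - fp - fn) / n          # Holley-Guilford G: 2*accuracy - 1
gamma = (tp * tn - fp * fn) / (tp * tn + fp * fn)  # Goodman-Kruskal gamma (Yule's Q)

# Signal-detection d': distance between z-transformed hit and
# false-alarm rates (undefined at rates of exactly 0 or 1).
z = NormalDist().inv_cdf
d_prime = z(sensitivity) - z(fp / (fp + tn))
```

The comparison in the article hinges on the fact that these indices respond differently to base rates: G and gamma are computed from the raw table, while d' corrects for response bias via the z transform.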

  19. Precise Point Positioning in the Airborne Mode

    NASA Astrophysics Data System (ADS)

    El-Mowafy, Ahmed

    2011-01-01

The Global Positioning System (GPS) is widely used for positioning in the airborne mode, such as in navigation as a supplementary system and for geo-referencing of cameras in mapping and surveillance by aircraft and Unmanned Aerial Vehicles (UAVs). The Precise Point Positioning (PPP) approach is an attractive positioning approach based on processing of undifferenced observations from a single GPS receiver. It employs precise satellite orbits and satellite clock corrections. These data can be obtained via the internet from several sources, e.g. the International GNSS Service (IGS). The data can also be broadcast from satellites, such as via the LEX signal of the new Japanese satellite system QZSS. PPP can achieve positioning precision and accuracy at the sub-decimetre level. In this paper, the functional and stochastic mathematical modelling used in PPP is discussed. Results of applying the PPP method in an airborne test using a small fixed-wing aircraft are presented. To evaluate the performance of the PPP approach, a reference trajectory was established by differential positioning of the same GPS observations with data from a ground reference station. The coordinate results from the two approaches, PPP and differential positioning, were compared and statistically evaluated. For the test at hand, positioning accuracy at the cm-to-decimetre level was achieved for latitude and longitude coordinates, and about double that value for height estimation.

  20. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
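The scheme described above builds on double-exponential (tanh-sinh) quadrature, which converges rapidly even for integrands with endpoint singularities or infinite derivatives. A machine-precision sketch of the core idea follows; the arbitrary-precision and parallel aspects of the paper are omitted, and the step size and truncation bound below are ad hoc choices of mine:

```python
import math

def tanh_sinh(f, h=2.0**-5, t_max=4.0):
    """Integrate f over (0, 1) with the double-exponential rule.

    The change of variable x = tanh((pi/2) sinh(t)) maps (-1, 1) to the
    real line and makes the quadrature weights decay doubly
    exponentially, taming endpoint singularities.
    """
    total = 0.0
    n = int(t_max / h)
    for k in range(-n, n + 1):
        t = k * h
        g = 0.5 * math.pi * math.sinh(t)
        u = 1.0 / (1.0 + math.exp(-2.0 * g))        # (1 + tanh(g)) / 2, in (0, 1)
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(g) ** 2
        total += w * f(u)
    return 0.5 * h * total   # factor 0.5 from mapping (-1, 1) onto (0, 1)

# 1/sqrt(u) has an infinite derivative at u = 0, yet the rule
# converges to the exact value 2 at machine precision.
result = tanh_sinh(lambda u: 1.0 / math.sqrt(u))
```

Pushing this to the thousands of digits reported in the paper amounts to swapping floats for arbitrary-precision arithmetic, halving h per refinement level, and distributing the (independent) abscissa evaluations across processors.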

  1. The Impact of Ionospheric Disturbances on High Accuracy Positioning in Brazil

    NASA Astrophysics Data System (ADS)

    Yang, L.; Park, J.; Susnik, A.; Aquino, M. H.; Dodson, A.

    2013-12-01

High positioning accuracy is a key requirement for a number of applications with a high economic impact, such as precision agriculture, surveying, geodesy, land management, and off-shore operations. Global Navigation Satellite Systems (GNSS) carrier phase measurement based techniques, such as Real Time Kinematic (RTK), Network-RTK (NRTK) and Precise Point Positioning (PPP), have played an important role in providing centimetre-level positioning accuracy, and have become the core of the above applications. However these techniques are especially sensitive to ionospheric perturbations, in particular scintillation. Brazil sits in one of the most affected regions of the Earth and can be regarded as a test-bed for severe ionospheric conditions. Over the Brazilian territory, the ionosphere behaves in a considerably unpredictable way and scintillation activity is very prominent, occurring especially after sunset hours. NRTK services may not be able to provide satisfactory accuracy, or even continuous positioning, during strong scintillation periods. CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil), a project funded by the GSA (European GNSS Agency) and the European Commission under Framework Programme 7, started in late 2012 to deliver improvements on carrier phase based high accuracy algorithms and their implementation in GNSS receivers, aiming to counter the adverse ionospheric effects over Brazil. As the first stage of this project, the ionospheric disturbances which affect the applications of RTK, NRTK or PPP are characterized. Typical problems include degraded positioning accuracy, difficulties in ambiguity fixing, NRTK network interpolation errors, long PPP convergence time, etc. It will identify how GNSS observables and existing algorithms are degraded by ionosphere related phenomena, evaluating the impact on positioning techniques in terms of accuracy, integrity and availability. Through the

  2. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  3. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 ±6 percent for evergreen woodland to 81 ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
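The phase I / phase II refinement described above is a classical double-sampling regression estimator: cheap photointerpreted error rates are observed at all clusters, ground truth at a subsample, and a regression of ground truth on photointerpretation adjusts the phase I mean. A toy sketch with synthetic error rates (the 160/80 split matches the abstract; every rate below is invented):

```python
import numpy as np

rng = np.random.default_rng(2)

# Phase I: photointerpreted error rates at all 160 clusters (cheap).
x_all = rng.uniform(0.1, 0.6, 160)

# Phase II: ground-truth error rates at an 80-cluster subsample
# (expensive), assumed linearly related to the photo rates plus noise.
idx = rng.choice(160, size=80, replace=False)
x2 = x_all[idx]
y2 = 0.9 * x2 + 0.05 + rng.normal(0.0, 0.03, 80)

# Regression (double-sampling) estimator of the overall error rate:
# adjust the phase II mean by the slope times the phase I / phase II
# mean difference in the cheap variable.
b = np.polyfit(x2, y2, 1)[0]                      # OLS slope
y_reg = y2.mean() + b * (x_all.mean() - x2.mean())
```

When the photo and ground rates are well correlated, this estimator has a much smaller variance than using the 80 ground observations alone, which is what lets the design hit a ±10 percent precision target at modest ground-survey cost.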

  4. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693
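The reported deviation fractions are threshold counts over the weighed half-tablets relative to the ideal half weight. A sketch with simulated weights; the 100 mg ideal half weight and the 12% spread are invented for illustration and are not the study's data:

```python
import random

random.seed(1)

IDEAL = 100.0   # mg, hypothetical ideal half-tablet weight
N = 1752        # number of split portions, as in the study

# Simulated half-tablet weights (Gaussian spread is an assumption).
halves = [random.gauss(IDEAL, 12.0) for _ in range(N)]

# Fraction of portions deviating from ideal by more than 10% and 20%.
dev = [abs(w - IDEAL) / IDEAL for w in halves]
over_10 = sum(d > 0.10 for d in dev) / N
over_20 = sum(d > 0.20 for d in dev) / N
```

With this spread the simulated fractions land near the study's 41.3% and 12.4%, which shows how large the weight variability of hand-split halves must be to produce those figures.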

  5. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  6. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity.

    PubMed

    Baird, Emily; Fernandez, Diana C; Wcislo, William T; Warrant, Eric J

    2015-01-01

Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion, a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  7. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity

    PubMed Central

    Baird, Emily; Fernandez, Diana C.; Wcislo, William T.; Warrant, Eric J.

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion—a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  8. Activity monitor accuracy in persons using canes.

    PubMed

    Wendland, Deborah Michael; Sprigle, Stephen H

    2012-01-01

    The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested on both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. Accuracy of the StepWatch on the stairs was significantly less accurate (p < 0.001) when comparing surfaces using repeated measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use. PMID:23341318

  9. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  10. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2010-09-01

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  11. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  12. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  13. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

Recent electroweak precision measurements from e+e- and pp̄ colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  14. A hyperspectral imager for high radiometric accuracy Earth climate studies

    NASA Astrophysics Data System (ADS)

    Espejo, Joey; Drake, Ginger; Heuerman, Karl; Kopp, Greg; Lieber, Alex; Smith, Paul; Vermeer, Bill

    2011-10-01

We demonstrate a visible and near-infrared prototype pushbroom hyperspectral imager for Earth climate studies that is capable of using direct solar viewing for on-orbit cross calibration and degradation tracking. Direct calibration to solar spectral irradiances allows the Earth-viewing instrument to achieve required climate-driven absolute radiometric accuracies of <0.2% (1σ). A solar calibration requires viewing scenes having radiances 10^5 times higher than typical Earth scenes. To facilitate this calibration, the instrument features an attenuation system that uses an optimized combination of different precision aperture sizes, neutral density filters, and variable integration timing for Earth and solar viewing. The optical system consists of a three-mirror anastigmat telescope and an Offner spectrometer. The as-built system has a 12.2° cross track field of view with 3 arcmin spatial resolution and covers a 350-1050 nm spectral range with 10 nm resolution. A polarization compensated configuration using the Offner in an out of plane alignment is demonstrated as a viable approach to minimizing polarization sensitivity. The mechanical design takes advantage of relaxed tolerances in the optical design by using rigid, non-adjustable diamond-turned tabs for optical mount locating surfaces. We show that this approach achieves the required optical performance. A prototype spaceflight unit is also demonstrated to prove the applicability of these solar cross calibration methods to on-orbit environments. This unit is evaluated for optical performance prior to and after GEVS shake, thermal vacuum, and lifecycle tests.
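The attenuation budget is multiplicative: aperture area ratio times filter transmission times integration-time ratio must reach the ~10^5 solar/Earth radiance ratio. A back-of-envelope check with invented stage values (not the instrument's actual design numbers):

```python
# Each attenuation stage contributes a multiplicative factor; the
# numbers below are illustrative, not the flight configuration.
aperture_ratio = (1.0 / 10.0) ** 2   # 10x smaller aperture diameter -> 1e-2 in area
nd_transmission = 1.0e-2             # neutral-density filter transmission
integration_ratio = 1.0 / 10.0       # 10x shorter integration time

attenuation = aperture_ratio * nd_transmission * integration_ratio
# Combined signal reduction of ~1e-5 offsets the ~1e5 solar radiance excess.
```

Splitting the factor across three independent mechanisms, rather than one very dark filter, is what allows each stage to be calibrated separately, which is the point of the "optimized combination" in the abstract.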

  15. Aerial multi-camera systems: Accuracy and block triangulation issues

    NASA Astrophysics Data System (ADS)

    Rupnik, Ewelina; Nex, Francesco; Toschi, Isabella; Remondino, Fabio

    2015-03-01

    Oblique photography has reached its maturity and has now been adopted for several applications. The number and variety of multi-camera oblique platforms available on the market is continuously growing. So far, few attempts have been made to study the influence of the additional cameras on the behaviour of the image block and comprehensive revisions to existing flight patterns are yet to be formulated. This paper looks into the precision and accuracy of 3D points triangulated from diverse multi-camera oblique platforms. Its coverage is divided into simulated and real case studies. Within the simulations, different imaging platform parameters and flight patterns are varied, reflecting both current market offerings and common flight practices. Attention is paid to the aspect of completeness in terms of dense matching algorithms and 3D city modelling - the most promising application of such systems. The experimental part demonstrates the behaviour of two oblique imaging platforms in real-world conditions. A number of Ground Control Point (GCP) configurations are adopted in order to point out the sensitivity of tested imaging networks and arising block deformations. To stress the contribution of slanted views, all scenarios are compared against a scenario in which exclusively nadir images are used for evaluation.

  16. Density Variations Observable by Precision Satellite Orbits

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T.; Hiatt, A.

    2008-12-01

This research uses precision satellite orbits from the Challenging Minisatellite Payload (CHAMP) satellite to produce a new data source for studying density changes that occur on time scales of less than a day. Precision orbit derived density is compared to accelerometer derived density. In addition, the precision orbit derived densities are used to examine whether density variations previously observed in accelerometer data, in particular geomagnetic storm time changes and polar cusp features, are also detectable. Currently, highly accurate density data are available from three satellites with accelerometers, and much lower accuracy data are available from hundreds of satellites for which two-line element sets are available from the Air Force. This paper explores a new data source that is more accurate and has better temporal resolution than the two-line element sets, and provides better spatial coverage than satellites with accelerometers. This data source will be valuable for studying atmospheric phenomena over short periods, for long-term studies of the atmosphere, and for validating and improving complex coupled models that include neutral density. The precision orbit derived densities are very similar to the accelerometer derived densities, but the accelerometer can observe features with shorter temporal variations. This research will quantify the time scales observable by precision orbit derived density. The technique for estimating density is optimal orbit determination; the estimates are optimal in the least squares or minimum variance sense. Precision orbit data from CHAMP are used as measurements in a sequential measurement processing and filtering scheme, and the atmospheric density is estimated as a correction to an atmospheric model.
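The closing sentences describe estimating density as a correction to a model. A toy least-squares version of that idea, estimating a single scalar correction factor from synthetic drag accelerations (this is an illustration only, not the paper's sequential orbit-determination filter; all numbers are invented):

```python
import numpy as np

# Toy sketch: estimate a density correction as a scalar multiplier on a
# model density, from noisy synthetic "observed" drag accelerations.

rng = np.random.default_rng(0)
model_density = np.full(100, 4e-12)   # kg/m^3, assumed model values
true_correction = 1.3                 # truth used only to synthesize the data
drag_coeff = 2.2e-5                   # lumped ballistic/velocity factor (hypothetical)

observed_accel = (drag_coeff * true_correction * model_density
                  * (1 + 0.05 * rng.standard_normal(100)))

# Linear least squares for c in a_obs = c * (k * rho_model)
predicted = drag_coeff * model_density
c_hat = np.dot(predicted, observed_accel) / np.dot(predicted, predicted)
print(c_hat)   # close to the true value 1.3
```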

  17. Diagnostic Accuracy of Transvaginal Sonography in the Detection of Uterine Abnormalities in Infertile Women

    PubMed Central

    Niknejadi, Maryam; Haghighi, Hadieh; Ahmadi, Firoozeh; Niknejad, Fatemeh; Chehrazi, Mohammad; Vosough, Ahmad; Moenian, Deena

    2012-01-01

Background Accurate diagnosis of uterine abnormalities has become a core part of the fertility work-up. A variety of modalities can be used for the diagnosis of uterine abnormalities. Objectives This study was designed to assess the diagnostic accuracy of transvaginal ultrasonography (TVS) in uterine pathologies of infertile patients using hysteroscopy as the gold standard. Patients and Methods This was a cross-sectional study carried out in the Department of Reproductive Imaging at Royan Institute from October 2007 to October 2008. In this study, the medical documents of 719 infertile women who were investigated with transvaginal ultrasound (TVS) and then hysteroscopy were reviewed. All women underwent hysteroscopy at the same cycle time after TVS. Seventy-six of the 719 patients were excluded from the study and 643 patients were studied. TVS was performed in the follicular phase after cessation of bleeding. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) were calculated for TVS, with hysteroscopy serving as the gold standard. Results The overall sensitivity, specificity, positive and negative predictive values for TVS in the diagnosis of uterine abnormality were 79%, 82%, 84% and 71%, respectively. The sensitivity and PPV of TVS in the detection of polyps were 88.3% and 81.6%, respectively. These indices were 89.2% and 92.5% for fibroma, 67% and 98.3% for subseptated uterus, and 90.9% and 100% for septated uterus. Adhesion and unicornuate uterus had the lowest sensitivity (35%), with a PPV of 57.1%. Conclusion TVS is a cost-effective and non-invasive method for diagnosis of intrauterine lesions such as polyps, submucosal fibroids and septum. It is a valuable adjunct to hysteroscopy, with high accuracy for identification and characterization of intrauterine abnormalities. This may lead to more precise surgical planning and performance. PMID:23329979
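The four indices reported above follow directly from a 2×2 confusion matrix. A minimal sketch with hypothetical counts (not the study's data):

```python
# Standard definitions of the four diagnostic indices, computed from a
# 2x2 confusion matrix. Counts below are hypothetical, not the study's.

def diagnostic_indices(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives among diseased
    specificity = tn / (tn + fp)   # true negatives among healthy
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

sens, spec, ppv, npv = diagnostic_indices(tp=79, fp=15, fn=21, tn=85)
print(f"{sens:.2f} {spec:.2f} {ppv:.2f} {npv:.2f}")
```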

  18. Fundamental Symmetries Probed by Precision Nuclear Mass Measurements at ISOLTRAP

    NASA Astrophysics Data System (ADS)

    Bollen, Georg

    2005-04-01

Mass measurements on rare isotopes can play an important role in testing the nature of fundamental interactions. Precise mass values together with decay data are required for critical tests of the conserved vector current (CVC) hypothesis and the standard model. Substantial progress in Penning trap mass spectrometry has made this technique the best choice for precision measurements on rare isotopes, by providing high accuracy and sensitivity even for short-lived nuclides. The pioneering facility in this field is ISOLTRAP at ISOLDE/CERN. ISOLTRAP is a mass spectrometer capable of determining nuclear binding energies with a relative uncertainty of 10^-8 on nuclides that are produced with yields as low as a few hundred ions/s and with half-lives well below 100 ms. It is used for mass measurements relevant for a better understanding of nuclear structure and the nucleosynthesis of the elements. It is also used for the determination of masses that are important for the test of CVC, the unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, and for putting constraints on the existence of scalar currents. Measurements along this line include ^74Rb (T1/2=65 ms), which is the shortest-lived nuclide studied in a Penning trap. The QEC value of ^74Rb, determined with a precision of 6·10^-8, serves as a test of CVC or of related theoretical corrections [1]. Masses of ^32Ar and ^33Ar have been determined with uncertainties of 6.0·10^-8 and 1.4·10^-8 [2]. The improved mass for ^32Ar helps to provide a better constraint on scalar contributions to the weak interaction, and both argon data serve as the most stringent test of the isobaric multiplet mass equation (IMME). ^34Ar, another CVC test candidate, has been studied with an uncertainty of 1.1·10^-8 (δm = 0.41 keV). Similar precision has been achieved for ^22Mg and neighboring ^21Na and ^22Na [4]. The importance of these results is twofold: First, an Ft value has been obtained for the super-allowed β decay of ^22Mg to further test the CVC hypothesis

  19. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS, has previously been used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.
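The prediction of rotational frequencies from rovibrational data mentioned above reduces to a combination difference: two transitions sharing a common upper level differ by a pure rotational spacing in the lower state, with their uncertainties added in quadrature. The frequencies below are hypothetical placeholders, not measured H_3^+ lines:

```python
import math

# Combination differences: two rovibrational transitions to a common upper
# level differ by a pure rotational spacing in the lower state. All numbers
# here are hypothetical placeholders.

f1, u1 = 2725.898e9, 0.5e6   # transition from lower level A (Hz, 1-sigma)
f2, u2 = 2691.444e9, 0.7e6   # transition from lower level B to the same upper level

rot_freq = f1 - f2               # rotational spacing between A and B (Hz)
rot_unc = math.hypot(u1, u2)     # uncertainties add in quadrature
print(rot_freq / 1e9, rot_unc / 1e6)
```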

  20. High precision kinematic surveying with laser scanners

    NASA Astrophysics Data System (ADS)

    Gräfe, Gunnar

    2007-12-01

The kinematic survey of roads and railways is becoming a much more common data acquisition method. The development of the Mobile Road Mapping System (MoSES) has reached a level that allows the use of kinematic survey technology for high precision applications. The system is equipped with cameras and laser scanners. For high accuracy requirements, the scanners become the main sensor group because of their geometric precision and reliability. To guarantee reliable survey results, specific calibration procedures have to be applied, which can be divided into the scanner sensor calibration as step 1, and the estimation of the geometric transformation parameters with respect to the vehicle coordinate system as step 2. Both calibration steps include new methods for sensor behavior modeling and multisensor system integration. To verify the laser scanner quality of the MoSES system, the results are regularly checked along different test routes. It can be shown that a standard deviation of 0.004 m in height is obtained for the scanner points if the specific calibration and data processing methods are applied. This level of accuracy opens new possibilities to serve engineering survey applications using kinematic measurement techniques. The key feature of scanner technology is the full digital coverage of the road area. Three application examples illustrate the capabilities. Digital road surface models generated from MoSES data are used especially for road surface reconstruction tasks along highways. Compared to static surveys, the method offers comparable accuracy at higher speed, lower costs, much higher grid resolution and greater safety. The system's capability of capturing 360° profiles leads to other complex applications like kinematic tunnel surveys or the precise analysis of bridge clearances.
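The second calibration step described above, estimating the geometric transformation into the vehicle coordinate system, amounts to a rigid rotation plus a lever-arm offset applied to every scan point. A minimal sketch with assumed (not MoSES) mounting parameters:

```python
import numpy as np

# Rigid transform from scanner frame to vehicle frame:
# p_vehicle = R @ p_scanner + t. All parameters are hypothetical.

def scanner_to_vehicle(points, R, t):
    """Apply the rigid transform to an Nx3 array of scanner points."""
    return points @ R.T + t

yaw = np.deg2rad(1.5)                       # assumed mounting misalignment
R = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
              [np.sin(yaw),  np.cos(yaw), 0.0],
              [0.0,          0.0,         1.0]])
t = np.array([0.85, 0.10, 2.30])            # assumed lever arm (m)

pts = np.array([[10.0, 0.0, -2.0]])
print(scanner_to_vehicle(pts, R, t))
```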

  1. Precision estimates for tomographic nondestructive assay

    SciTech Connect

    Prettyman, T.H.

    1995-12-31

One technique being applied to improve the accuracy of assays of waste in large containers is computerized tomography (CT). Research on the application of CT to improve both neutron and gamma-ray assays of waste is being carried out at LANL. For example, tomographic gamma scanning (TGS) is a single-photon emission CT technique that corrects for the attenuation of gamma rays emitted from the sample using attenuation images from transmission CT. By accounting for the distribution of emitting material and correcting for the attenuation of the emitted gamma rays, TGS is able to achieve highly accurate assays of radionuclides in medium-density wastes. It is important to develop methods to estimate the precision of such assays, and this paper explores this problem by examining the precision estimators for TGS.
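As a toy illustration of precision estimation for a counting assay, the sketch below propagates Poisson counting uncertainty through a background-subtracted net count; a real TGS precision estimator must also propagate the attenuation-correction and image-reconstruction uncertainties. All counts are hypothetical:

```python
import math

# Poisson counting statistics for a background-subtracted net count:
# the variances of gross and background counts add. Counts are hypothetical.

gross, background = 40000, 2500
net = gross - background
net_sigma = math.sqrt(gross + background)    # Poisson variances add
rel_precision = net_sigma / net
print(net, round(100 * rel_precision, 2))    # net count, percent relative precision
```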

  2. Multi-level modeling for sensitivity assessment of springback in sheet metal forming

    NASA Astrophysics Data System (ADS)

    Lebon, J.; Lequilliec, G.; Coelho, R. Filomeno; Breitkopf, P.; Villon, P.

    2013-05-01

In this work, we highlight that sensitivity analysis of the metal forming process requires both high precision and low cost numerical models. We propose a two-pronged methodology to address these challenges. The deep drawing simulation process is performed using an original low cost semi-analytical approach based on a bending-under-tension (B-U-T) model with good accuracy for small random perturbations of the physical and process parameters. The springback sensitivity analysis is based on the Sobol indices approach and performed using a non-intrusive, efficient methodology based on the post-treatment of the polynomial chaos coefficients.
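The post-treatment of polynomial chaos coefficients mentioned above can be sketched as follows: with an orthonormal basis, the first-order Sobol index of each input is the variance contributed by basis terms involving only that input, divided by the total variance. The multi-indices and coefficients below are hypothetical:

```python
# First-order Sobol indices from polynomial-chaos coefficients, assuming an
# orthonormal basis. Multi-indices and coefficients are hypothetical.

# Each entry: (multi-index over 2 inputs, coefficient)
pc_terms = [((0, 0), 5.0),                     # constant term, excluded from variance
            ((1, 0), 0.8), ((2, 0), 0.3),      # terms in x1 only
            ((0, 1), 0.5), ((0, 2), 0.1),      # terms in x2 only
            ((1, 1), 0.2)]                     # interaction term

variance = sum(c**2 for idx, c in pc_terms if any(idx))
S1 = sum(c**2 for idx, c in pc_terms if idx[0] and not idx[1]) / variance
S2 = sum(c**2 for idx, c in pc_terms if idx[1] and not idx[0]) / variance
print(round(S1, 3), round(S2, 3))   # first-order indices; the remainder is interaction
```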

  3. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e^+e^- linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  4. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  5. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  6. Progressive Precision Surface Design

    SciTech Connect

    Duchaineau, M; Joy, KJ

    2002-01-11

    We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.

  7. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  8. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, the present study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in the focusing accuracy. In contrast to conventional optics-based autofocusing microscopes using the centroid method, the proposed microscope incorporates a high-speed rotating optical diffuser that reduces the variation of the image centroid position and consequently improves the focusing response. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in the focusing accuracy. Consequently, the proposed microscope represents another solution for both existing and emerging industrial applications of automated vision inspection.
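The centroid method referenced above, and the way frame averaging suppresses centroid jitter, can be sketched with synthetic images; this illustrates the principle only, not the paper's implementation:

```python
import numpy as np

# Centroid-based spot localization on synthetic frames. Averaging centroids
# over many frames (as captured while a diffuser rotates) suppresses
# noise-induced centroid jitter. All image data are synthetic.

def centroid(img):
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

rng = np.random.default_rng(1)
frames = []
for _ in range(50):
    ys, xs = np.indices((64, 64))
    spot = np.exp(-((xs - 32.4) ** 2 + (ys - 30.1) ** 2) / 18.0)  # sigma = 3 px
    noise = 0.01 * (rng.random((64, 64)) - 0.5)                    # zero-mean noise
    frames.append(spot + noise)

estimates = np.array([centroid(f) for f in frames])
print(estimates.mean(axis=0))   # averaged centroid, near (32.4, 30.1)
```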

  9. Precision laser spectroscopy in fundamental studies

    NASA Astrophysics Data System (ADS)

    Kolachevsky, N. N.; Khabarova, K. Yu

    2014-12-01

    The role of precision spectroscopic measurements in the development of fundamental theories is discussed, with particular emphasis on the hydrogen atom, the simplest stable atomic system amenable to the accurate calculation of energy levels from quantum electrodynamics. Research areas that greatly benefited from the participation of the Lebedev Physical Institute are reviewed, including the violation of fundamental symmetries, the stability of the fine-structure constant α, and sensitive tests of quantum electrodynamics.

  10. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  11. High-precision laser-assisted absolute determination of x-ray diffraction angles

    SciTech Connect

    Kubicek, K.; Braun, J.; Bruhns, H.; Crespo Lopez-Urrutia, J. R.; Mokler, P. H.; Ullrich, J.

    2012-01-15

A recently introduced technique for absolute wavelength determination in high-precision crystal x-ray spectroscopy has been upgraded, reaching unprecedented accuracies. The method combines visible laser beams with the Bond method, where Bragg angles (θ and −θ) are determined without any x-ray reference lines. Using flat crystals, this technique makes absolute x-ray wavelength measurements feasible even at low x-ray fluxes. The upgraded spectrometer has been used in combination with first experiments on the 1s2p ^1P_1 → 1s^2 ^1S_0 w-line in He-like argon. By resolving a minute curvature of the x-ray lines, the accuracy there reaches 1.5 ppm, the best value ever reported. The result is sensitive to predicted second-order QED contributions at the level of two-electron screening and two-photon radiative diagrams and will, for the first time, allow benchmarking of predicted binding energies for He-like ions at this level of precision.
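A minimal sketch of the Bond-method geometry described above: the Bragg angle follows from two crystal positions alone, and the wavelength then follows from Bragg's law. The sign convention and all numbers are assumptions for illustration:

```python
import math

# Bond-method sketch: the crystal diffracts on both sides of the incident
# beam, and the Bragg angle follows from the two crystal positions alone,
# with no x-ray reference line. Geometry convention and numbers are assumed.

d_spacing = 3.1356e-10                       # Si(111) plane spacing in m (approx.)
omega_plus, omega_minus = 51.237, -51.237    # crystal positions (deg, hypothetical)

theta = abs(omega_plus - omega_minus) / 2    # Bragg angle (deg)
wavelength = 2 * d_spacing * math.sin(math.radians(theta))   # Bragg's law
print(theta, wavelength)
```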

  12. Visual inspection reliability for precision manufactured parts

    DOE PAGESBeta

    See, Judi E.

    2015-09-04

Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  13. Digital image centering. I. [for precision astrometry

    NASA Technical Reports Server (NTRS)

    Van Altena, W. F.; Auer, L. H.

    1975-01-01

    A series of parallax plates have been measured on a PDS microdensitometer to assess the possibility of using the PDS for precision relative astrometry and to investigate centering algorithms that might be used to analyze digital images obtained with the Large Space Telescope. The basic repeatability of the PDS is found to be plus or minus 0.6 micron, with the potential for reaching plus or minus 0.2 micron. A very efficient centering algorithm has been developed which fits the marginal density distributions of the image with a Gaussian profile and a sloping background. The accuracy is comparable with the best results obtained with a photoelectric image bisector.
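The centering algorithm described above fits each marginal density distribution with a Gaussian profile on a sloping background. The sketch below is a simplified stand-in: the background slope is estimated from the profile wings and the center from background-subtracted moments, rather than the full nonlinear fit, and the profile is synthetic:

```python
import numpy as np

# Simplified stand-in for Gaussian-plus-sloping-background centering of a
# marginal density profile. Profile parameters are synthetic; the actual
# algorithm performs a full nonlinear fit.

x = np.arange(64, dtype=float)
profile = 0.02 * x + 1.0 + 5.0 * np.exp(-((x - 30.7) ** 2) / (2 * 3.0 ** 2))

# Estimate the sloping background from the profile wings (assumed signal-free)
wings_x = np.r_[x[:10], x[-10:]]
wings_y = np.r_[profile[:10], profile[-10:]]
slope, intercept = np.polyfit(wings_x, wings_y, 1)
signal = profile - (slope * x + intercept)

center = (x * signal).sum() / signal.sum()   # moment-based center estimate
print(round(center, 2))   # close to the true center 30.7
```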

  14. Precise and automated microfluidic sample preparation.

    SciTech Connect

    Crocker, Robert W.; Patel, Kamlesh D.; Mosier, Bruce P.; Harnett, Cindy K.

    2004-07-01

Autonomous bio-chemical agent detectors require sample preparation involving multiplex fluid control. We have developed a portable microfluidic pump array for metering sub-microliter volumes at flow rates of 1-100 μL/min. Each pump is composed of an electrokinetic (EK) pump and high-voltage power supply with 15-Hz feedback from flow sensors. The combination of high pump fluid impedance and active control results in precise fluid metering with nanoliter accuracy. Automated sample preparation will be demonstrated by labeling proteins with fluorescamine and subsequent injection to a capillary gel electrophoresis (CGE) chip.
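The 15-Hz feedback metering described above can be illustrated with a toy proportional-integral loop; the plant model and gains are invented, and the real system drives a high-voltage supply rather than this idealized static plant:

```python
# Toy proportional-integral control loop holding a pump at a flow setpoint
# with 15-Hz sensor feedback. Plant model and gains are hypothetical.

dt = 1.0 / 15.0                  # 15-Hz feedback interval (s)
setpoint = 10.0                  # target flow (uL/min)
kp, ki = 0.08, 0.9               # hypothetical controller gains
pump_gain = 2.5                  # flow per unit drive (hypothetical static plant)

flow, integral = 0.0, 0.0
for _ in range(300):             # 20 s of simulated closed-loop operation
    error = setpoint - flow
    integral += error * dt
    drive = kp * error + ki * integral
    flow = pump_gain * drive     # static plant: flow follows drive directly
print(round(flow, 2))            # settles at the setpoint
```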

  15. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50GHz. Our ultimate goal is to extend the observing frequency limit up to 115GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210 μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are currently targeting the effects of wind as the next, currently uncompensated, source of error.

  16. The Effect of Strength Training on Fractionalized Accuracy.

    ERIC Educational Resources Information Center

    Gronbech, C. Eric

    The role of the strength factor in the accomplishment of precision tasks was investigated. Forty adult males weight trained to develop physical strength in several muscle groups, particularly in the elbow flexor area. Results indicate a decrease in incidence of accuracy concurrent with an increase in muscle strength. This suggests that in order to…

  17. Precise Orbit Determination for Altimeter Satellites

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Lemoine, F. G.; Beckley, B. B.; Wang, Y.; Chinn, D. S.

    2002-05-01

Orbit error remains a critical component in the error budget for all radar altimeter missions. This paper describes the ongoing work at GSFC to improve orbits for three radar altimeter satellites: TOPEX/POSEIDON (T/P), Jason, and Geosat Follow-On (GFO). T/P has demonstrated that the time variation of ocean topography can be determined with an accuracy of a few centimeters, thanks to the availability of highly accurate orbits (2-3 cm radially) produced at GSFC. Jason, the T/P follow-on, is intended to continue measurement of the ocean surface with the same, if not better, accuracy. Reaching the Jason centimeter accuracy orbit goal would greatly benefit the knowledge of ocean circulation. Several new POD strategies which promise significant improvement to the current T/P orbit are evaluated over one year of data. Also, preliminary but very promising Jason POD results are presented. Orbit improvement for GFO has been dramatic, and has allowed this mission to provide a POSEIDON-class altimeter product. The GFO Precise Orbit Ephemeris (POE) orbits are based on satellite laser ranging (SLR) tracking supplemented with GFO/GFO altimeter crossover data. The accuracy of these orbits was evaluated using several tests, including independent TOPEX/GFO altimeter crossover data. The orbit improvements are shown over the years 2000 and 2001, for which the POEs have been completed.

  18. Light leptonic new physics at the precision frontier

    NASA Astrophysics Data System (ADS)

    Le Dall, Matthias

    2016-06-01

Precision probes of new physics are often interpreted through their indirect sensitivity to short-distance scales. In this proceedings contribution, we focus on the question of which precision observables, at current sensitivity levels, allow for an interpretation via either short-distance new physics or consistent models of long-distance new physics weakly coupled to the Standard Model. The electroweak scale is chosen to set the dividing line between these scenarios. In particular, we find that inverse see-saw models of neutrino mass allow for light new physics interpretations of most precision leptonic observables, such as lepton universality and lepton flavor violation, but not for the electron EDM.

  19. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower-precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the differences between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  20. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

The paper shall review the history and development of real-time DGPS services before describing the design of a high-accuracy commercial GPS augmentation system and service currently delivering precise positioning products to users over a wide area. The infrastructure and system shall be explained in relation to users' need for high accuracy and high integrity of positioning. A comparison of the different techniques for the delivery of data shall be provided to outline the technical approach taken. Examples of the performance of the real-time system shall be shown in various regions and modes to outline the currently achievable accuracies. Having described and established the current GPS-based situation, a review of the potential of the Galileo system shall be presented. Following brief contextual information relating to the Galileo project, core system, and services, the paper will identify possible key applications and the main user communities for sub-decimetre-level precise positioning. The paper will address the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future and will discuss the implications for precise positioning performance. An outline of the proposed architecture shall be described, with pointers towards a successful implementation. Central to this discussion will be an assessment of the likely evolution of system infrastructure and user equipment implementation, the prospects for new applications, and their effect upon the business case for precise positioning services.

  1. Impaired gas exchange: accuracy of defining characteristics in children with acute respiratory infection1

    PubMed Central

    Pascoal, Lívia Maia; Lopes, Marcos Venícios de Oliveira; Chaves, Daniel Bruno Resende; Beltrão, Beatriz Amorim; da Silva, Viviane Martins; Monteiro, Flávia Paula Magalhães

    2015-01-01

    OBJECTIVE: to analyze the accuracy of the defining characteristics of the Impaired gas exchange nursing diagnosis in children with acute respiratory infection. METHOD: open prospective cohort study conducted with 136 children monitored for a consecutive period of at least six days and not more than ten days. An instrument based on the defining characteristics of the Impaired gas exchange diagnosis and on literature addressing pulmonary assessment was used to collect data. The accuracy means of all the defining characteristics under study were computed. RESULTS: the Impaired gas exchange diagnosis was present in 42.6% of the children in the first assessment. Hypoxemia was the characteristic that presented the best measures of accuracy. Abnormal breathing presented high sensitivity, while restlessness, cyanosis, and abnormal skin color showed high specificity. All the characteristics presented negative predictive values of 70% and cyanosis stood out by its high positive predictive value. CONCLUSION: hypoxemia was the defining characteristic that presented the best predictive ability to determine Impaired gas exchange. Studies of this nature enable nurses to minimize variability in clinical situations presented by the patient and to identify more precisely the nursing diagnosis that represents the patient's true clinical condition. PMID:26155010
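The accuracy measures reported in this study (sensitivity, specificity, positive and negative predictive values) all follow from a 2×2 table of each defining characteristic against the reference-standard diagnosis. A minimal sketch, with purely illustrative counts rather than the study's data:

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV from a 2x2 table."""
    sensitivity = tp / (tp + fn)   # true positives among diseased
    specificity = tn / (tn + fp)   # true negatives among non-diseased
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for one defining characteristic.
sens, spec, ppv, npv = diagnostic_measures(tp=40, fp=10, fn=8, tn=78)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```

A characteristic like hypoxemia scores well on all four measures, whereas cyanosis in this study showed high specificity and PPV but lower sensitivity.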

  2. Precision gravimetric survey at the conditions of urban agglomerations

    NASA Astrophysics Data System (ADS)

    Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander

    2014-05-01

The growth and aging of large cities lead to irreversible negative changes in the underground environment. The study of these changes in urban areas is mainly based on shallow geophysical methods, whose extensive use is restricted by technogenic noise. Among these methods, precision gravimetry stands out for its good resistance to urban noise. The main objects of an urban gravimetric survey are zones of soil decompaction, which lead to degraded rock strength and to karst formation. Their gravity effects are very small, so their investigation requires modern high-precision equipment and special measurement methods. The Gravimetry division of Lomonosov Moscow State University has been examining modern precision Scintrex CG-5 Autograv gravimeters since 2006. The main performance characteristics of over 20 precision gravimeters were examined in various operational modes. Stationary mode: long-term gravimetric measurements were carried out at a base station. The records obtained differ in their high-frequency and mid-frequency (period 5-12 hours) components. The high-frequency component, determined as the standard deviation of the measurements, characterizes the sensitivity of the system to external noise and varies between devices from 2 to 5-7 μGal. The mid-frequency component, which corresponds closely to the residual nonlinearity of the gravimeter drift, is partially compensated by the equipment. This factor is very important in the case of gravimetric monitoring or repeated observations, when mid-frequency anomalies are the target ones. For the examined gravimeters, the amplitude deviations associated with this parameter may reach 10 μGal. Various transportation modes were also tested: walking (the softest mode), lift (vertical overload), vehicle (horizontal overloads), boat (vertical plus horizontal overloads), and helicopter. The survey quality was compared by the variance of the measurement results and the internal convergence of the series. The measurement results variance (from ±2 to ±4 μGals) and its
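The high-frequency noise figure quoted for a stationary record (a standard deviation of a few μGal) can be estimated while suppressing the slow drift component. A minimal sketch, not the authors' actual processing, assuming readings in μGal:

```python
import statistics

def hf_noise_level(readings_ugal):
    """Estimate the high-frequency noise of a stationary gravimeter
    record as the standard deviation of successive differences, which
    suppresses the slow (mid-frequency) drift component.  For white
    noise of standard deviation sigma, successive differences have
    standard deviation sigma * sqrt(2), hence the division below."""
    diffs = [b - a for a, b in zip(readings_ugal, readings_ugal[1:])]
    return statistics.pstdev(diffs) / 2 ** 0.5
```

Applied to a long base-station record, an estimator of this kind yields the per-instrument 2 to 5-7 μGal figures the abstract cites.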

  3. Sensitivity analysis of reference evapotranspiration to sensor accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Meteorological sensor networks are often used across agricultural regions to calculate the ASCE Standardized Reference ET Equation, and inaccuracies in individual sensors can lead to inaccuracies in ET estimates. Multiyear datasets from the semi-arid Colorado Agricultural Meteorological (CoAgMet) an...

  4. A Comparative Evaluation of Full-text, Concept-based, and Context-sensitive Search

    PubMed Central

    Moskovitch, Robert; Martins, Susana B.; Behiri, Eytan; Weiss, Aviram; Shahar, Yuval

    2007-01-01

Objectives To comparatively study (1) concept-based search, using documents pre-indexed by a conceptual hierarchy; (2) context-sensitive search, using structured, labeled documents; and (3) traditional full-text search. The hypotheses were: (1) more contexts lead to better retrieval accuracy; and (2) adding concept-based search to the other searches would improve upon their baseline performances. Design We used our Vaidurya architecture for search and retrieval evaluation of structured documents classified by a conceptual hierarchy, on a clinical-guidelines test collection. Measurements Precision was computed at different levels of recall to assess the contribution of the retrieval methods. Comparisons of precision were done with recall set at 0.5, using t-tests. Results Performance increased monotonically with the number of query context elements. Adding context-sensitive elements, the mean improvement was 11.1% at recall 0.5. With three contexts, mean query precision was 42% ± 17% (95% confidence interval [CI], 31% to 53%); with two contexts, 32% ± 13% (95% CI, 27% to 38%); and with one context, 20% ± 9% (95% CI, 15% to 24%). Adding context-based queries to full-text queries monotonically improved precision beyond the 0.4 level of recall. The mean improvement was 4.5% at recall 0.5. Adding concept-based search to full-text search improved precision to 19.4% at recall 0.5. Conclusions The study demonstrated the usefulness of concept-based and context-sensitive queries for enhancing the precision of retrieval from a digital library of semi-structured clinical guideline documents. Concept-based searches outperformed free-text queries, especially when baseline precision was low. In general, the more ontological elements used in the query, the greater the resulting precision. PMID:17213502
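Precision at a fixed recall level, the headline metric of this evaluation, can be computed by walking a ranked result list until the recall target is reached. A minimal sketch with hypothetical relevance judgments, not the Vaidurya implementation:

```python
def precision_at_recall(ranked_relevance, target_recall):
    """Walk a ranked result list (True = relevant) and report precision
    at the first rank where recall reaches the target level."""
    total_relevant = sum(ranked_relevance)
    hits = 0
    for rank, rel in enumerate(ranked_relevance, start=1):
        hits += rel
        if total_relevant and hits / total_relevant >= target_recall:
            return hits / rank
    return 0.0

# 4 relevant documents in the collection; recall 0.5 is reached at rank 3,
# where 2 of the 3 retrieved documents are relevant.
print(precision_at_recall([True, False, True, False, True, True], 0.5))
```

Averaging this quantity over a query set gives the per-condition precisions (e.g. 42% with three contexts) that the t-tests compare.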

  5. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advanced joining processes for the NWC and private industry.

  6. MATS and LaSpec: High-precision experiments using ion traps and lasers at FAIR

    NASA Astrophysics Data System (ADS)

    Rodríguez, D.; Blaum, K.; Nörtershäuser, W.; Ahammed, M.; Algora, A.; Audi, G.; Äystö, J.; Beck, D.; Bender, M.; Billowes, J.; Block, M.; Böhm, C.; Bollen, G.; Brodeur, M.; Brunner, T.; Bushaw, B. A.; Cakirli, R. B.; Campbell, P.; Cano-Ott, D.; Cortés, G.; Crespo López-Urrutia, J. R.; Das, P.; Dax, A.; de, A.; Delheij, P.; Dickel, T.; Dilling, J.; Eberhardt, K.; Eliseev, S.; Ettenauer, S.; Flanagan, K. T.; Ferrer, R.; García-Ramos, J.-E.; Gartzke, E.; Geissel, H.; George, S.; Geppert, C.; Gómez-Hornillos, M. B.; Gusev, Y.; Habs, D.; Heenen, P.-H.; Heinz, S.; Herfurth, F.; Herlert, A.; Hobein, M.; Huber, G.; Huyse, M.; Jesch, C.; Jokinen, A.; Kester, O.; Ketelaer, J.; Kolhinen, V.; Koudriavtsev, I.; Kowalska, M.; Krämer, J.; Kreim, S.; Krieger, A.; Kühl, T.; Lallena, A. M.; Lapierre, A.; Le Blanc, F.; Litvinov, Y. A.; Lunney, D.; Martínez, T.; Marx, G.; Matos, M.; Minaya-Ramirez, E.; Moore, I.; Nagy, S.; Naimi, S.; Neidherr, D.; Nesterenko, D.; Neyens, G.; Novikov, Y. N.; Petrick, M.; Plaß, W. R.; Popov, A.; Quint, W.; Ray, A.; Reinhard, P.-G.; Repp, J.; Roux, C.; Rubio, B.; Sánchez, R.; Schabinger, B.; Scheidenberger, C.; Schneider, D.; Schuch, R.; Schwarz, S.; Schweikhard, L.; Seliverstov, M.; Solders, A.; Suhonen, M.; Szerypo, J.; Taín, J. L.; Thirolf, P. G.; Ullrich, J.; van Duppen, P.; Vasiliev, A.; Vorobjev, G.; Weber, C.; Wendt, K.; Winkler, M.; Yordanov, D.; Ziegler, F.

    2010-05-01

Nuclear ground state properties including mass, charge radii, spins and moments can be determined by applying atomic physics techniques such as Penning-trap based mass spectrometry and laser spectroscopy. The MATS and LaSpec setups at the low-energy beamline at FAIR will allow us to extend the knowledge of these properties further into the region far from stability. The mass and its inherent connection with the nuclear binding energy is a fundamental property of a nuclide, a unique “fingerprint”. Thus, precise mass values are important for a variety of applications, ranging from nuclear-structure studies like the investigation of shell closures and the onset of deformation, tests of nuclear mass models and mass formulas, to tests of the weak interaction and of the Standard Model. The required relative accuracy ranges from 10^-5 to below 10^-8 for radionuclides, which most often have half-lives well below 1 s. Substantial progress in Penning trap mass spectrometry has made this method a prime choice for precision measurements on rare isotopes. The technique has the potential to provide high accuracy and sensitivity even for very short-lived nuclides. Furthermore, ion traps can be used for precision decay studies and offer advantages over existing methods. With MATS (Precision Measurements of very short-lived nuclei using an Advanced Trapping System for highly-charged ions) at FAIR we aim to apply several techniques to very short-lived radionuclides: High-accuracy mass measurements, in-trap conversion electron and alpha spectroscopy, and trap-assisted spectroscopy. The experimental setup of MATS is a unique combination of an electron beam ion trap for charge breeding, ion traps for beam preparation, and a high-precision Penning trap system for mass measurements and decay studies. For the mass measurements, MATS offers both a high accuracy and a high sensitivity. A relative mass uncertainty of 10^-9 can be reached by employing highly-charged ions and a non

  7. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve a high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine-drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  8. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.025 Å has been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively. PMID:21033925
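In a sine-drive spectrometer the screw position is proportional to sin θ and hence, to first order in a single grating order, to wavelength, so a minimal calibration reduces to a linear least-squares fit of wavelength against drive position for known lamp lines. This is only a first-order sketch; the calibration described in the paper fits all spectrometer parameters over multiple calibration spectra:

```python
def fit_sine_drive(positions, wavelengths):
    """Least-squares fit of the first-order sine-drive model
    lambda = a + b * x, where x is the drive (step) position and the
    (a, b) pair are illustrative calibration constants."""
    n = len(positions)
    sx, sy = sum(positions), sum(wavelengths)
    sxx = sum(x * x for x in positions)
    sxy = sum(x * y for x, y in zip(positions, wavelengths))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

# Hypothetical lamp lines (drive steps vs. wavelength in Angstroms).
a, b = fit_sine_drive([0, 100, 200], [4000.0, 4500.0, 5000.0])
print(a, b)
```

The fit residuals against many reference lines are what set the quoted ~0.025 Å (and, with the encoder, ~0.005 Å) accuracy figures.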

  9. Precision Neutron Polarimetry

    NASA Astrophysics Data System (ADS)

    Sharma, Monisha; Barron-Palos, L.; Bowman, J. D.; Chupp, T. E.; Crawford, C.; Danagoulian, A.; Klein, A.; Penttila, S. I.; Salas-Bacci, A. F.; Wilburn, W. S.

    2008-04-01

The proposed PANDA and abBA experiments aim to measure the correlation coefficients in polarized neutron beta decay at the SNS. The goal of these experiments is a 0.1% measurement, which will require neutron polarimetry at the 0.1% level. The FnPB neutron beam will be polarized using either a ^3He spin filter or a supermirror polarizer, and the neutron polarization will be measured using a ^3He spin filter. An experiment to establish the accuracy to which neutron polarization can be determined using ^3He spin filters was performed at Los Alamos National Laboratory in Summer 2007, and the analysis is in progress. The details of the experiment and the results will be presented.

  10. Laser interferometric high-precision angle monitor for JASMINE

    NASA Astrophysics Data System (ADS)

    Niwa, Yoshito; Arai, Koji; Sakagami, Masaaki; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Yano, Taihei

    2006-06-01

The JASMINE instrument uses a beam combiner to observe two different fields of view, separated by 99.5 degrees, simultaneously. This separation is the so-called basic angle. The basic angle of JASMINE should be stabilized, and its fluctuations should be monitored with an accuracy of 10 microarcsec in root-mean-square over the satellite revolution period of 5 hours. For this purpose, a high-precision interferometric laser metrology system is employed. One of the available techniques for measuring fluctuations of the basic angle is a method known as wave-front sensing, using a Fabry-Perot type laser interferometer. This technique detects fluctuations of the basic angle as a displacement of the optical axis in the Fabry-Perot cavity. One advantage of the technique is that the sensor can be made sensitive only to the relative fluctuations of the basic angle, which are what JASMINE needs to know, and insensitive to common-mode fluctuations; to enhance the optical-axis displacement caused by relative motion, the Fabry-Perot cavity is formed by two mirrors with a long radius of curvature. To verify this principle, an experiment was performed using a 0.1 m long Fabry-Perot cavity with a mirror curvature of 20 m. The mirrors of the cavity were artificially actuated in either a relative or a common way, and the resultant outputs from the sensor were compared.

  11. Precision tests of parity violation over cosmological distances

    NASA Astrophysics Data System (ADS)

    Kaufman, Jonathan P.; Keating, Brian G.; Johnson, Bradley R.

    2016-01-01

    Recent measurements of the cosmic microwave background (CMB) B-mode polarization power spectrum by the BICEP2 and POLARBEAR experiments have demonstrated new precision tools for probing fundamental physics. Regardless of origin, the detection of sub-μK CMB polarization represents a technological tour de force. Yet more information may be latent in the CMB's polarization pattern. Because of its tensorial nature, CMB polarization may also reveal parity-violating physics via a detection of cosmic polarization rotation. Although current CMB polarimeters are sensitive enough to measure one degree-level polarization rotation with >5σ statistical significance, they lack the ability to differentiate this effect from a systematic instrumental polarization rotation. Here, we motivate the search for cosmic polarization rotation from current CMB data as well as independent radio galaxy and quasar polarization measurements. We argue that an improvement in calibration accuracy would allow the unambiguous measurement of parity- and Lorentz-violating effects. We describe the CalSat space-based polarization calibrator that will provide stringent control of systematic polarization angle calibration uncertainties to 0.05° - an order of magnitude improvement over current CMB polarization calibrators. CalSat-based calibration could be used with current CMB polarimeters searching for B-mode polarization, effectively turning them into probes of cosmic parity violation, `for free' - i.e. without the need to build dedicated instruments.

  12. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, the lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. The time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions in either the horizontal plane or the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
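The ground-truth comparison described here amounts to differencing the GPS position estimates against the commanded table motion and examining the residuals in both the time and frequency domains. A minimal illustration, not the authors' analysis (a direct DFT is used for brevity; an FFT library is preferable for long records):

```python
import cmath
import math

def rms_error(estimated, truth):
    """Root-mean-square difference between GPS position estimates and
    the commanded (ground-truth) table motion."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimated, truth))
                     / len(truth))

def error_spectrum(estimated, truth, rate_hz):
    """One-sided amplitude spectrum of the residuals via a direct DFT,
    giving the 'GPS error spectrum' as a function of frequency."""
    r = [e - t for e, t in zip(estimated, truth)]
    n = len(r)
    freqs = [k * rate_hz / n for k in range(n // 2 + 1)]
    amps = [abs(sum(r[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) / n for k in range(n // 2 + 1)]
    return freqs, amps
```

For a 5 Hz record, the resolvable error spectrum extends to the 2.5 Hz Nyquist frequency, covering much of the seismic band of interest.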

  13. Precise renal artery segmentation for estimation of renal vascular dominant regions

    NASA Astrophysics Data System (ADS)

    Wang, Chenglong; Kagajo, Mitsuru; Nakamura, Yoshihiko; Oda, Masahiro; Yoshino, Yasushi; Yamamoto, Tokunori; Mori, Kensaku

    2016-03-01

This paper presents a novel renal artery segmentation method combining graph-cut and template-based tracking methods, and its application to the estimation of renal vascular dominant regions. For computer-assisted diagnosis in kidney surgery planning, it is important to obtain the correct topological structure of the renal artery for estimation of renal vascular dominant regions. The renal artery has low contrast, and its precise extraction is a difficult task. Previous methods utilizing a vesselness measure based on Hessian analysis still cannot extract the tiny blood vessels in low-contrast areas. Although model-based methods, including superellipsoid models and cylindrical intensity models, are sensitive to tiny blood vessels in low-contrast areas, problems including over-segmentation and poor bifurcation detection remain. In this paper, we propose a novel blood vessel segmentation method combining a new Hessian-based graph-cut with a template-model tracking method. First, a graph-cut algorithm is utilized to obtain a rough segmentation result. Then a template-model tracking method is utilized to improve the accuracy of the tiny blood vessel segmentation. The rough segmentation using graph-cut solves the bifurcation detection problem effectively, while the precise segmentation using template-model tracking focuses on the tiny blood vessels. By combining these two approaches, our proposed method segmented 70% of the renal arteries of 1 mm in diameter or larger. In addition, we demonstrate that such precise segmentation can be used to divide the renal region into a set of blood-vessel dominant regions using a Voronoi diagram method.
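The Voronoi step can be sketched as a nearest-branch assignment: each voxel of the kidney is labeled with the artery branch whose closest point is nearest, partitioning the organ into vascular dominant regions. A minimal illustration with hypothetical branch ids and coordinates, not the paper's implementation:

```python
def dominant_regions(voxels, branch_points):
    """Assign each voxel to the branch of the segmented renal artery
    whose nearest point is closest -- a discrete Voronoi partition.
    branch_points maps a branch id to a list of (x, y, z) points."""
    def d2(p, q):
        # Squared Euclidean distance (monotone in distance, so no sqrt).
        return sum((a - b) ** 2 for a, b in zip(p, q))

    labels = {}
    for v in voxels:
        labels[v] = min(branch_points,
                        key=lambda b: min(d2(v, p) for p in branch_points[b]))
    return labels
```

The quality of this partition depends directly on the completeness of the extracted artery tree, which is why precise segmentation of the tiny branches matters.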

  14. PRECISE ANGLE MONITOR BASED ON THE CONCEPT OF PENCIL-BEAM INTERFEROMETRY

    SciTech Connect

    QIAN,S.; TAKACS,P.

    2000-07-30

Precise angle monitoring is an important metrology task for research, development, and industrial applications. The autocollimator, based on the principles of geometric optics, is one of the most powerful and widely applied instruments for small-angle monitoring. In this paper the authors introduce a new precise angle monitoring system, the Pencil-beam Angle Monitor (PAM), based on pencil-beam interferometry. Its principle of operation is a combination of physical and geometrical optics. The angle calculation method is similar to that of the autocollimator; however, where the autocollimator creates a cross image, the pencil-beam angle monitoring system produces an interference fringe on the focal plane. The advantages of the PAM are: high angular sensitivity; long-term stability, making angle monitoring over long time periods possible; high measurement accuracy, on the order of sub-microradians; the ability to measure simultaneously in two perpendicular directions or on two different objects; the possibility of dynamic measurement; insensitivity to vibration and air turbulence; automatic display, storage, and analysis by computer; a small beam diameter, making alignment extremely easy; and a longer test distance. Some test examples are presented.
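The autocollimator-style angle calculation mentioned above follows from the small-angle relation x = 2fθ: a mirror tilt θ deflects the returned beam by 2θ, shifting the image (cross or fringe) on the focal plane of a lens of focal length f. A minimal sketch with illustrative numbers:

```python
def autocollimation_angle_urad(spot_shift_um, focal_length_mm):
    """Small-angle tilt recovered from the focal-plane image shift:
    a mirror tilt of theta deflects the return beam by 2 * theta, so
    the image moves x = 2 * f * theta, i.e. theta = x / (2 * f)."""
    theta_rad = (spot_shift_um * 1e-6) / (2 * focal_length_mm * 1e-3)
    return theta_rad * 1e6  # microradians

# A 1 micron image shift with a 500 mm focal length corresponds to a
# 1 microradian tilt -- the sub-microradian regime cited above.
print(autocollimation_angle_urad(1.0, 500.0))
```

The PAM replaces the cross image with an interference fringe, whose centroid can be located more precisely, but the angle recovery is geometrically the same.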

  15. Few-Nucleon Charge Radii and a Precision Isotope Shift Measurement in Helium

    NASA Astrophysics Data System (ADS)

    Hassan Rezaeian, Nima; Shiner, David

    2015-10-01

Recent improvements in atomic theory and experiment provide a valuable method to precisely determine few-nucleon charge radii, complementing the more direct scattering approaches and providing sensitive tests of few-body nuclear theory. Some puzzles with respect to this method exist, particularly in the muonic and electronic measurements of the proton radius, known as the proton puzzle. Perhaps this puzzle will also exist in nuclear size measurements in helium. Muonic helium measurements are ongoing, while our new electronic results are discussed here. We precisely measured the isotope shift of the 2^3S - 2^3P transitions in 3He and 4He. The result is almost an order of magnitude more accurate than previously measured values. To achieve this accuracy, we implemented various experimental techniques. We used a tunable laser frequency discriminator and an electro-optic modulation technique to precisely control the frequency and intensity. We select and stabilize the intensity of the required sideband and eliminate unused sidebands. The technique uses a MEMS fiber switch (ts = 10 ms) and several temperature-stabilized narrow-band (3 GHz) fiber gratings. A beam with both species of helium is achieved using a custom fiber laser for simultaneous optical pumping. A servo-controlled retro-reflected laser beam eliminates Doppler effects. Careful detection design and software are essential for unbiased data collection. Our new results will be compared to previous measurements.

  16. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  17. Precision Astronomy with Imperfect Deep Depletion CCDs

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher; LSST Sensor Team; PanSTARRS Team

    2014-01-01

While thick CCDs do provide definite advantages in terms of increased quantum efficiency at wavelengths 700 nm < λ < 1.1 μm and reduced fringing from atmospheric emission lines, these devices also exhibit undesirable features that pose a challenge to the precision determination of the positions, fluxes, and shapes of astronomical objects, and to the precision extraction of features in astronomical spectra. For example, the assumptions of a perfectly rectilinear pixel grid and of an intensity-independent point spread function become increasingly invalid as we push to higher precision measurements. Many of the effects seen in these devices arise from lateral electrical fields within the detector, which produce charge transport anomalies that have been previously misinterpreted as quantum efficiency variations. Performing simplistic flat-fielding therefore introduces systematic errors in the image processing pipeline. One measurement challenge we face is devising a combination of calibration methods and algorithms that can distinguish genuine quantum efficiency variations from charge transport effects. These device imperfections also confront spectroscopic applications, such as line centroid determination for precision radial velocity studies. Given the scientific benefits of improving both the precision and accuracy of astronomical measurements, we need to identify, characterize, and overcome these various detector artifacts. In retrospect, many of the detector features first identified in thick CCDs also afflict measurements made with more traditional CCD detectors, albeit often at a reduced level since the photocharge is subject to the perturbing influence of lateral electric fields for a shorter time interval. I provide a qualitative overview of the physical effects we think are responsible for the observed device properties, and provide some perspective for the work that lies ahead.

  18. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.

  19. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

The IC lithography roadmap foresees manufacturing of devices with critical dimensions < 20 nm. Overlay specifications of single-digit nanometers ask for nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material in critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), at the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® Lithography Roadmap for CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented, predicting the long-term CTE behavior of ZERODUR® components to optimize the dimensional stability of precision positioning devices. CTE data of several low-thermal-expansion materials are compared with regard to their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of +/- 10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
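The ±10 ppb/K tolerance translates directly into dimensional stability via ΔL = α·L·ΔT; a worked example with illustrative numbers:

```python
def max_expansion_nm(length_m, cte_ppb_per_k, delta_t_k):
    """Worst-case thermal length change Delta_L = alpha * L * Delta_T.
    Since 1 ppb of a metre is exactly 1 nm, a CTE bound in ppb/K times
    the length in metres and the temperature excursion in kelvin gives
    the expansion directly in nanometres."""
    return cte_ppb_per_k * length_m * delta_t_k

# A 1 m component within the +/-10 ppb/K tolerance band moves at most
# 10 nm per kelvin of temperature drift.
print(max_expansion_nm(1.0, 10.0, 1.0))  # → 10.0
```

This is why single-digit-nanometer overlay budgets force both tight CTE tolerances and tight thermal control of the stage environment.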

  20. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November 1980, dealing with Landsat classification accuracy assessment procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.
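The standard machinery behind the "statistical analysis techniques" topic above is the error (confusion) matrix. A sketch with an invented 3-class matrix, computing overall, user's, and producer's accuracy plus Cohen's kappa:

```python
import numpy as np

# hypothetical 3-class error matrix (rows: map classes, cols: reference classes)
cm = np.array([[50,  3,  2],
               [ 5, 60,  5],
               [ 2,  2, 71]])

n = cm.sum()
overall = np.trace(cm) / n                  # fraction of correctly mapped samples
users = np.diag(cm) / cm.sum(axis=1)        # user's accuracy, per map class
producers = np.diag(cm) / cm.sum(axis=0)    # producer's accuracy, per reference class

# Cohen's kappa: agreement beyond what chance alone would produce
p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
kappa = (overall - p_e) / (1 - p_e)
```

User's accuracy answers "how often is a mapped pixel of this class right?", producer's accuracy "how often is a reference pixel of this class found?"; the distinction between the two was a recurring theme of the assessment literature of this period.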

  1. High accuracy die mechanical stress measurement with the ATC04 Assembly Test Chip

    NASA Astrophysics Data System (ADS)

    Sweet, J. N.; Peterson, D. W.

    1993-07-01

    We have designed and manufactured a new CMOS piezoresistive stress sensing chip, ATC04, with an advanced cell design which enables stress measurement to much higher accuracy and precision than any other known die.

  2. High precision laser processing of sensitive materials by Microjet

    NASA Astrophysics Data System (ADS)

    Sibailly, Ochelio D.; Wagner, Frank R.; Mayor, Laetitia; Richerzhagen, Bernold

    2003-11-01

    Laser cutting of materials is well known and widely used in industrial processes, including microfabrication. Nevertheless, an increasing number of applications require a higher machining quality than can be achieved with this method. One possibility for increasing cut quality is the water-jet-guided laser technology. In this technique the laser beam is guided to the workpiece by total internal reflection in a thin, stable water jet, comparable to the core of an optical fiber. The water-jet-guided laser technique was originally developed to reduce the heat-damaged zone near the cut, but many other advantages were observed that arise from using a water jet instead of the assist gas stream applied in conventional laser cutting. In brief, the advantages are threefold: the absence of divergence due to light guiding, efficient melt expulsion, and optimum workpiece cooling. In this presentation we give an overview of several industrial applications of the water-jet-guided laser technique. These applications range from the cutting of CBN or ferrite cores to the dicing of thin wafers and the manufacturing of stencils, each illustrating the important impact of the water jet.

  3. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  4. Electrosurgery with cellular precision.

    PubMed

    Palanker, Daniel V; Vankov, Alexander; Huie, Philip

    2008-02-01

    Electrosurgery, one of the most-often used surgical tools, is a robust but somewhat crude technology that has changed surprisingly little since its invention almost a century ago. Continuous radiofrequency is still used for tissue cutting, with thermal damage extending to hundreds of micrometers. In contrast, lasers, developed 70 years later, have been constantly perfected, and the laser-tissue interactions explored in great detail, which has allowed tissue ablation with cellular precision in many laser applications. We discuss mechanisms of tissue damage by electric field, and demonstrate that electrosurgery with properly optimized waveforms and microelectrodes can rival many advanced lasers. Pulsed electric waveforms with burst durations ranging from 10 to 100 μs, applied via insulated planar electrodes with 12 μm wide exposed edges, produced plasma-mediated dissection of tissues with a collateral damage zone ranging from 2 to 10 μm. The length of the electrodes can vary from micrometers to centimeters, and all types of soft tissue, from membranes to cartilage and skin, could be dissected in liquid medium and in a dry field. This technology may allow for major improvements in the outcomes of current surgical procedures and the development of much more refined surgical techniques. PMID:18270030

  5. [Contrast sensitivity in glaucoma].

    PubMed

    Bartos, D

    1989-05-01

    The author reports the results of contrast sensitivity examinations, using the Cambridge low-contrast grating test supplied by Clement Clarke International Ltd., in patients with open-angle glaucoma and ocular hypertension. In glaucoma patients a statistically significant decrease of contrast sensitivity was observed. In patients with ocular hypertension, contrast sensitivity was decreased only in those with corresponding changes of the visual field and of the optic disc. The main advantages of the Cambridge low-contrast grating test were the simplicity, rapidity, and precision of its performance. PMID:2743444
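Low-contrast grating tests of this kind present stimuli of decreasing Michelson contrast until the patient's threshold is reached. As a minimal illustration of the quantity being thresholded (the luminance values are invented):

```python
def michelson_contrast(l_max, l_min):
    # Michelson contrast of a grating from its peak and trough luminances
    return (l_max - l_min) / (l_max + l_min)

full = michelson_contrast(100.0, 0.0)    # a maximum-contrast grating: 1.0
faint = michelson_contrast(55.0, 45.0)   # a faint grating: 0.1
```

A contrast sensitivity deficit means the patient's threshold contrast rises, i.e. gratings like `faint` become invisible while remaining visible to healthy observers.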

  6. Improving the Precision of Astrometry for Space Debris

    NASA Astrophysics Data System (ADS)

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method, based on mathematical morphology operators, was developed to improve the astrometric precision for space debris. Variable structuring elements along multiple directions are adopted for the image transformation, and all of the resultant images are then stacked to obtain the final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the improvement in astrometric accuracy was assessed by comparison with reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and that the position accuracy of both the objects and the field stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
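A minimal sketch of the morphological idea described above: grey-level openings with line structuring elements along several directions, stacked by pointwise maximum, so that elongated image features (trailed stars or debris streaks) survive in at least one direction while isolated noise pixels survive in none. The structuring-element length, test image, and direction count below are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def line_se_shifts(length, theta):
    # pixel offsets of a centred line structuring element at angle theta
    c = length // 2
    return [(int(round(t * np.sin(theta))), int(round(t * np.cos(theta))))
            for t in range(-c, c + 1)]

def grey_opening(img, shifts):
    # erosion (min over the SE) followed by dilation (max over the SE)
    eroded = np.full(img.shape, np.inf)
    for di, dj in shifts:
        eroded = np.minimum(eroded, np.roll(img, (di, dj), axis=(0, 1)))
    opened = np.full(img.shape, -np.inf)
    for di, dj in shifts:
        opened = np.maximum(opened, np.roll(eroded, (-di, -dj), axis=(0, 1)))
    return opened

def stacked_opening(img, length=9, n_dirs=8):
    # open along several directions, stack results by pointwise maximum
    result = np.full(img.shape, -np.inf)
    for k in range(n_dirs):
        shifts = line_se_shifts(length, np.pi * k / n_dirs)
        result = np.maximum(result, grey_opening(img, shifts))
    return result

# a horizontal streak survives, an isolated hot pixel is suppressed
img = np.zeros((32, 32))
img[16, 8:24] = 1.0   # streak-like image of a trailed source
img[5, 5] = 1.0       # isolated cosmic-ray-like noise pixel
clean = stacked_opening(img)
```

This sketch uses periodic boundary handling via `np.roll` for brevity; a production implementation would pad the image instead.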

  7. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method, based on mathematical morphology operators, was developed to improve the astrometric precision for space debris. Variable structuring elements along multiple directions are adopted for the image transformation, and all of the resultant images are then stacked to obtain the final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the improvement in astrometric accuracy was assessed by comparison with reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and that the position accuracy of both the objects and the field stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.

  8. Accuracy of NHANES periodontal examination protocols.

    PubMed

    Eke, P I; Thornton-Evans, G O; Wei, L; Borgnakke, W S; Dye, B A

    2010-11-01

    This study evaluates the accuracy of periodontitis prevalence determined by the National Health and Nutrition Examination Survey (NHANES) partial-mouth periodontal examination protocols. True periodontitis prevalence was determined in a new convenience sample of 454 adults ≥ 35 years old, by a full-mouth "gold standard" periodontal examination. This actual prevalence was compared with prevalence resulting from analysis of the data according to the protocols of NHANES III and NHANES 2001-2004, respectively. Both NHANES protocols substantially underestimated the prevalence of periodontitis by 50% or more, depending on the periodontitis case definition used, and thus performed below threshold levels for moderate-to-high levels of validity for surveillance. Adding measurements from lingual or interproximal sites to the NHANES 2001-2004 protocol did not improve the accuracy sufficiently to reach acceptable sensitivity thresholds. These findings suggest that NHANES protocols produce high levels of misclassification of periodontitis cases and thus have low validity for surveillance and research. PMID:20858782
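The underestimation mechanism described above (a case is missed whenever none of its diseased sites falls in the examined subset of sites) can be shown with a toy Monte Carlo sketch. The site counts, number of diseased sites per case, and true prevalence below are invented for illustration and are not the study's data:

```python
import random

random.seed(1)

def simulate(n_people=10_000, n_sites=168, diseased_sites=4, prevalence=0.40):
    """Toy partial-mouth protocol: only a fixed subset of sites is examined,
    so a true case is detected only if at least one of its diseased sites
    happens to fall in that subset."""
    sampled = set(random.sample(range(n_sites), 28))   # examined subset (assumed)
    true_cases = surveyed_cases = 0
    for _ in range(n_people):
        if random.random() < prevalence:               # a true periodontitis case
            true_cases += 1
            sites = set(random.sample(range(n_sites), diseased_sites))
            if sites & sampled:                        # detected only on overlap
                surveyed_cases += 1
    return true_cases / n_people, surveyed_cases / n_people

true_p, svy_p = simulate()
```

With these invented parameters roughly half of the true cases are missed, which is the same order of underestimation the study reports; the point of the sketch is only that sparse, localized disease plus partial-mouth sampling mechanically biases prevalence downward.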

  9. Precision positioning of earth orbiting remote sensing systems

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.; Yunck, T. P.; Wu, S. C.

    1987-01-01

    Decimeter tracking accuracy is sought for a number of precise earth sensing satellites to be flown in the 1990's. This accuracy can be achieved with techniques which use the Global Positioning System (GPS) in a differential mode. A precisely located global network of GPS ground receivers and a receiver aboard the user satellite are needed, and all techniques simultaneously estimate the user and GPS satellite states. Three basic navigation approaches include classical dynamic, wholly nondynamic, and reduced dynamic or hybrid formulations. The first two are simply special cases of the third, which promises to deliver subdecimeter accuracy for dynamically unpredictable vehicles down to the lowest orbit altitudes. The potential of these techniques for tracking and gravity field recovery will be demonstrated on NASA's Topex satellite beginning in 1991. Applications to the Shuttle, Space Station, and dedicated remote sensing platforms are being pursued.

  10. Investigation of Practical and Theoretical Accuracy of Wireless Indoor Positioning System Ubisense

    NASA Astrophysics Data System (ADS)

    Woźniak, Marek; Odziemczyk, Waldemar; Nagórski, Kamil

    2013-12-01

    This paper presents the results of an accuracy investigation and the functionality of the Ubisense RTLS positioning system. Three kinds of studies were conducted: a test of calibration accuracy, an analysis of the theoretical accuracy of coordinate determination, and accuracy measurements under field conditions. The calibration accuracy test was made with several different geometric constellations of reference points (tag positions). We determined changes in the orientation parameters of the receivers and disturbances in the coordinates of positioned points for the chosen reference point constellations. The analysis of theoretical accuracy was made for several receiver spatial positions and orientations, which allowed favourable and unfavourable measurement areas to be identified with respect to accuracy and reliability. The real positioning accuracy of the Ubisense system was determined by comparison with coordinates measured using a precise tacheometer (TCRP1201+). The results of the experiments and the accuracy analysis of the test measurements are presented in figures and diagrams.

  11. Composite-light-pulse technique for high-precision atom interferometry.

    PubMed

    Berg, P; Abend, S; Tackmann, G; Schubert, C; Giese, E; Schleich, W P; Narducci, F A; Ertmer, W; Rasel, E M

    2015-02-13

    We realize beam splitters and mirrors for atom waves by employing a sequence of light pulses rather than individual ones. In this way we can tailor atom interferometers with improved sensitivity and accuracy. We demonstrate our method of composite pulses by creating a symmetric matter-wave interferometer which combines the advantages of conventional Bragg- and Raman-type concepts. This feature leads to an interferometer with a high immunity to technical noise, allowing us to devise a large-area Sagnac gyroscope yielding a phase shift of 6.5 rad due to the Earth's rotation. With this device we achieve a rotation rate precision of 120 nrad s⁻¹ Hz⁻¹/² and determine the Earth's rotation rate with a relative uncertainty of 1.2%. PMID:25723216
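To put the quoted sensitivity in context, a white-noise-limited rotation-rate uncertainty averages down as 1/√T. The sketch below expresses it as a fraction of the Earth's rotation rate; this is the statistical limit only, since the 1.2% total uncertainty quoted above also includes systematic effects:

```python
import math

OMEGA_EARTH = 7.292115e-5   # rad/s, sidereal rotation rate of the Earth
NOISE = 120e-9              # rad s^-1 Hz^-1/2, the quoted sensitivity

def rate_uncertainty(T):
    # white-noise-limited statistical uncertainty after T seconds of averaging
    return NOISE / math.sqrt(T)

one_second = rate_uncertainty(1.0) / OMEGA_EARTH     # fraction of Earth rate at 1 s
one_hour = rate_uncertainty(3600.0) / OMEGA_EARTH    # 60x smaller after one hour
```

Already at one second of averaging the statistical uncertainty is well below 1% of the Earth rate, which is why the final 1.2% figure is dominated by effects other than shot-to-shot noise.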

  12. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker.

    PubMed

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-01-01

    This paper proposes a software and hardware solution for real-time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibrations of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real-time tracker is that it provides GPS and GSM positioning with a precision of 10 m or less. To allow the condition of the merchandise to be interpreted, data acquisition, analysis, and visualization are done with 0.1 °C accuracy for the temperature sensor and 10 levels of shock sensitivity for the acceleration sensor. In addition, the architecture allows the number and types of sensors to be increased, so that companies can use this flexible solution to monitor a large percentage of their fleet. PMID:26978360

  13. XpertTrack: Precision Autonomous Measuring Device Developed for Real Time Shipments Tracker

    PubMed Central

    Viman, Liviu; Daraban, Mihai; Fizesan, Raul; Iuonas, Mircea

    2016-01-01

    This paper proposes a software and hardware solution for real-time condition monitoring applications. The proposed device, called XpertTrack, exchanges data through the GPRS protocol over a GSM network and monitors the temperature and vibrations of critical merchandise during commercial shipments anywhere on the globe. Another feature of this real-time tracker is that it provides GPS and GSM positioning with a precision of 10 m or less. To allow the condition of the merchandise to be interpreted, data acquisition, analysis, and visualization are done with 0.1 °C accuracy for the temperature sensor and 10 levels of shock sensitivity for the acceleration sensor. In addition, the architecture allows the number and types of sensors to be increased, so that companies can use this flexible solution to monitor a large percentage of their fleet. PMID:26978360

  14. Precision absolute measurement and alignment of laser beam direction and position.

    PubMed

    Schütze, Daniel; Müller, Vitali; Heinzel, Gerhard

    2014-10-01

    For the construction of high-precision optical assemblies, measurement and control of the direction and position of the laser beams involved are essential. While optical components such as beamsplitters and mirrors can be positioned and oriented accurately using coordinate measuring machines (CMMs), controlling the position and direction of laser beams is a much more intricate task, since the beams cannot be physically contacted. We present an easy-to-implement method to both align and measure the direction and position of a laser beam using a CMM in conjunction with a position-sensitive quadrant photodiode. By comparing our results to calibrated angular and positional measurements we conclude that, with the proposed method, a laser beam can be both measured and aligned to the desired direction and position with 10 μrad angular and 3 μm positional accuracy. PMID:25322238
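The position readout of a quadrant photodiode is formed from normalized differences of the four photocurrents. A minimal sketch, in which the quadrant layout and calibration factor are assumptions, and the linear relation holds only for offsets small compared with the beam radius:

```python
def beam_position(qa, qb, qc, qd, k=1.0):
    """Estimate beam-centre offsets from four quadrant photocurrents.
    Assumed quadrant layout, looking into the beam:
        B | A
        -----
        C | D
    k is a calibration factor (metres per unit normalised imbalance)."""
    s = qa + qb + qc + qd                 # total power, normalises out intensity
    x = k * ((qa + qd) - (qb + qc)) / s   # right-minus-left imbalance
    y = k * ((qa + qb) - (qc + qd)) / s   # top-minus-bottom imbalance
    return x, y

centred = beam_position(0.25, 0.25, 0.25, 0.25)   # (0.0, 0.0)
shifted = beam_position(0.30, 0.20, 0.20, 0.30)   # beam moved to the right
```

Normalising by the total power makes the readout insensitive to laser intensity fluctuations, which is what allows micrometre-level position resolution in practice.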

  15. Accuracy Evaluation of Electron-Probe Microanalysis as Applied to Semiconductors and Silicates

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2003-01-01

    An evaluation of precision and accuracy will be presented for representative semiconductor and silicate compositions. The accuracy of electron-probe analysis depends on high precision measurements and instrumental calibration, as well as correction algorithms and fundamental parameter data sets. A critical assessment of correction algorithms and mass absorption coefficient data sets can be made using the alpha factor technique. Alpha factor analysis can be used to identify systematic errors in data sets and also of microprobe standards used for calibration.
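The alpha-factor technique mentioned above rests, in its binary Bence-Albee form, on the relation C/k = α + (1 − α)C between the concentration C and the measured k-ratio k. A sketch of the forward relation and its algebraic inversion (the numeric values are illustrative, not from the paper):

```python
def k_ratio(C, alpha):
    # binary alpha-factor relation (Bence-Albee form): C / k = alpha + (1 - alpha) * C
    return C / (alpha + (1.0 - alpha) * C)

def concentration(k, alpha):
    # inversion: recover the concentration from a measured k-ratio
    return k * alpha / (1.0 - (1.0 - alpha) * k)
```

Systematic errors in correction algorithms or mass absorption coefficients show up as alpha factors that drift with composition instead of staying constant, which is the diagnostic use made of them in the abstract.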

  16. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  17. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are far from individually precise at present, partially due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity, and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  18. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building in 3D structures at micron precision are also summarized.

  19. Multiple ion counting ICPMS double spike method for precise U isotopic analysis at ultra-trace levels

    NASA Astrophysics Data System (ADS)

    Snow, Jonathan E.; Friedrich, Jon M.

    2005-04-01

    Of the various methods for the measurement of the isotopic composition of U in solids and solutions, few offer both sensitivity and precision. In recent years, the use of ICPMS technology for this determination has become increasingly prevalent. Here we describe a method for the determination of the 235U/238U ratio in very small quantities (≤350 pg) with an accuracy of better than 3‰. We measured several terrestrial standard materials and repeated analyses of the U960 isotopic composition standard. We used a 233U/236U double spike, with multiple ion counting on an unmodified Nu Instruments multicollector ICPMS and a non-standard detector configuration that allows an approximately 20-fold sensitivity gain over the best conventional techniques. This technique shows promise for the detection of isotopic tracers in the environment (for example anthropogenic 238U) at very extreme dilutions, or in cases where the total amount of analyte is necessarily limited.
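The double-spike correction can be sketched with the exponential mass-fractionation law: the known 233U/236U spike ratio fixes the instrumental fractionation exponent, which is then applied to the measured 235U/238U ratio. The synthetic values below (spike ratio, sample ratio, fractionation exponent) are illustrative assumptions, not the paper's data:

```python
import math

# approximate atomic masses of the uranium isotopes involved
M = {233: 233.039635, 235: 235.043930, 236: 236.045568, 238: 238.050788}

def beta_from_spike(r_meas, r_true):
    # exponential-law fractionation exponent from the 233U/236U double spike:
    # r_meas = r_true * (M233/M236)**beta
    return math.log(r_meas / r_true) / math.log(M[233] / M[236])

def correct_ratio(r_meas, beta, a=235, b=238):
    # remove the same instrumental mass bias from a measured a/b ratio
    return r_meas / (M[a] / M[b]) ** beta

# synthetic round trip: fractionate known ratios, then recover them
beta_true = 1.5
spike_meas = 1.0 * (M[233] / M[236]) ** beta_true     # assumed unity spike ratio
sample_meas = 0.0072526 * (M[235] / M[238]) ** beta_true

beta = beta_from_spike(spike_meas, 1.0)
recovered = correct_ratio(sample_meas, beta)           # ~ 0.0072526
```

Because the spike pair and the sample pair are measured simultaneously, the correction tracks the mass bias shot by shot, which is what makes per-mille accuracy possible at sub-nanogram levels.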

  20. High Precision Spectroscopy of Neutral Beryllium-9

    NASA Astrophysics Data System (ADS)

    Lau, Chui Yu; Williams, Will

    2015-05-01

    We report on the progress of high precision spectroscopy of the 2s2p singlet and triplet states in beryllium-9. Our goal is to improve the experimental precision on the energy levels of the 2s2p triplet J = 0, 1, and 2 states by factors of 500, 100, and 500, respectively, in order to delineate various theoretical predictions. The goal for the 2s2p singlet (J = 1) state is to improve the experimental precision on the energy level by a factor of 600 as a test of quantum electrodynamics. Our experimental setup consists of an oven capable of 1400 °C that produces a collimated beam of neutral beryllium-9. The triplet states are probed with a 455 nm ECDL stabilized to a tellurium-210 line. The singlet state is probed with 235 nm light from a frequency-quadrupled titanium sapphire laser, where the frequency-doubled light at 470 nm is stabilized to another tellurium-210 line. We also present our progress on improving the absolute accuracy of our frequency reference by using an ultrastable, low-drift fiber-coupled cavity.