Science.gov

Sample records for accuracy precision robustness

  1. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value of the precision by means of a confidence interval of the specific measurement. PMID:27044032
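
    As background for the ellipse method named above, the sketch below shows the textbook geometry it builds on: the angle of incidence is estimated from the ratio of the defect's width to its length, assuming an ideally elliptical defect. This is a generic illustration, not the authors' calibrated variants; the function name and example dimensions are invented.

```python
import math

def impact_angle_ellipse(width_mm: float, length_mm: float) -> float:
    """Estimate the angle of incidence (degrees from the target surface)
    from the axes of an elliptical bullet defect, via sin(a) = width/length.
    Real defects need the empirical corrections the study quantifies."""
    if not 0 < width_mm <= length_mm:
        raise ValueError("width must be positive and no larger than length")
    return math.degrees(math.asin(width_mm / length_mm))

# Example: a 9.0 mm wide, 18.0 mm long defect suggests ~30 degrees.
print(f"{impact_angle_ellipse(9.0, 18.0):.1f} deg")
```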

  2. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J.; Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  3. Accuracy and precision of an IGRT solution.

    PubMed

    Webster, Gareth J; Rowbottom, Carl G; Mackay, Ranald I

    2009-01-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  4. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single station guidance program since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg. through the use of a straightforward lookup table concept. This corresponds to a cross range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.

  5. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
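
    The three-step error-budget recipe at the end of this abstract can be made concrete for step 2, the mapping from ranging error to position error via DOP. The sketch below is a generic point-positioning DOP computation with a hypothetical satellite geometry; it ignores the filtering step entirely.

```python
import numpy as np

def dops(los_enu: np.ndarray) -> dict:
    """Dilution-of-precision factors from receiver-to-satellite unit
    vectors (one row per satellite, local ENU frame)."""
    # Geometry matrix of a point-position solution: (E, N, U, clock bias).
    A = np.hstack([los_enu, np.ones((los_enu.shape[0], 1))])
    q = np.diag(np.linalg.inv(A.T @ A))  # cofactor-matrix diagonal
    return {"HDOP": np.sqrt(q[0] + q[1]),
            "VDOP": np.sqrt(q[2]),
            "PDOP": np.sqrt(q[0] + q[1] + q[2]),
            "GDOP": np.sqrt(q.sum())}

# Hypothetical geometry: four satellites at 30 deg elevation on the
# cardinal azimuths, plus one at zenith.
los = np.array([[0.866, 0.0, 0.5], [0.0, 0.866, 0.5],
                [-0.866, 0.0, 0.5], [0.0, -0.866, 0.5],
                [0.0, 0.0, 1.0]])
print({k: round(v, 2) for k, v in dops(los).items()})
# Step 2 rule of thumb: position error ~ DOP x ranging error, so a
# 1 m ranging error maps to roughly PDOP metres of 3D position error.
```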

  6. Precision and Accuracy Studies with Kajaani Fiber Length Analyzers

    NASA Astrophysics Data System (ADS)

    Copur, Yalcin; Makkonen, Hannu

    The aim of this study was to test the measurement precision and accuracy of the Kajaani FS-100, giving attention to possible machine error in the measurements. The fiber length of pine pulps produced using polysulfide, kraft, biokraft and soda methods was determined using both the FS-100 and FiberLab automated fiber length analyzers. The measured length values were compared for both methods. The measurement precision and accuracy were tested by replicated measurements using rayon staple fibers. Measurements performed on pulp samples showed typical length distributions for both analyzers. Results obtained from the Kajaani FS-100 and FiberLab showed a significant correlation. The shorter length measurement with the FiberLab was found to be mainly due to the instrument calibration. The measurement repeatability tested for the Kajaani FS-100 indicated that the measurements are precise.

  7. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    NASA Astrophysics Data System (ADS)

    Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.

    2016-04-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure and encoding strategy and present our findings. Finally, we compare our setup to a state of the art metrology grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.

  8. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  9. Precision and accuracy in the reproduction of simple tone sequences.

    PubMed

    Vos, P G; Ellermann, H H

    1989-02-01

    In four experiments we investigated the precision and accuracy with which amateur musicians are able to reproduce sequences of tones varied only temporally, so as to have tone and rest durations constant over sequences, and the tempo varied over the musically meaningful range of 5-0.5 tones per second. Experiments 1 and 2 supported the hypothesis of an attentional bias toward having the attack moments, rather than the departure moments, precisely timed. Experiment 3 corroborated the hypothesis that inaccurate timing of short interattack intervals is manifested in a lengthening of rests, rather than tones, as a result of larger motor activity during the reproduction of rests. Experiment 4 gave some support to the hypothesis that the shortening of long interattack intervals is due to mnemonic constraints affecting the rests rather than the tones. Both theoretical and practical consequences of the various findings, particularly with respect to timing in musical performance, are discussed. PMID:2522528

  10. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  11. Accuracy, Precision, and Resolution in Strain Measurements on Diffraction Instruments

    NASA Astrophysics Data System (ADS)

    Polvino, Sean M.

    Diffraction stress analysis is a commonly used technique to evaluate the properties and performance of different classes of materials, from engineering materials, such as steels and alloys, to electronic materials like silicon chips. To better understand the performance of these materials at operating conditions, they are also commonly subjected to elevated temperatures and different loading conditions. The validity of any measurement under these conditions is only as good as the control of the conditions and the accuracy and precision of the instrument being used to measure the properties. What is the accuracy and precision of a typical diffraction system and what is the best way to evaluate these quantities? Is there a way to remove systematic and random errors in the data that are due to problems with the control system used? With the advent of device engineering employing internal stress as a method for increasing performance, the measurement of stress in microelectronic structures has become increasingly important. X-ray diffraction provides an ideal method for measuring these small areas without the need to modify the sample and possibly change the strain state. Micro- and nano-diffraction experiments on Silicon-on-Insulator samples revealed changes to the material under investigation and raised significant concerns about the usefulness of these techniques. This damage process and the application of micro- and nano-diffraction are discussed.

  12. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

    Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development which promise to provide satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth-order reflected light from a grating of lines. Normal incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectrum matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports upon work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  13. T1-mapping in the heart: accuracy and precision

    PubMed Central

    2014-01-01

    The longitudinal relaxation time constant (T1) of the myocardium is altered in various disease states due to increased water content or other changes to the local molecular environment. Changes in both native T1 and T1 following administration of gadolinium (Gd) based contrast agents are considered important biomarkers and multiple methods have been suggested for quantifying myocardial T1 in vivo. Characterization of the native T1 of myocardial tissue may be used to detect and assess various cardiomyopathies while measurement of T1 with extracellular Gd based contrast agents provides additional information about the extracellular volume (ECV) fraction. The latter is particularly valuable for more diffuse diseases that are more challenging to detect using conventional late gadolinium enhancement (LGE). Both T1 and ECV measures have been shown to have important prognostic significance. T1-mapping has the potential to detect and quantify diffuse fibrosis at an early stage provided that the measurements have adequate reproducibility. Inversion recovery methods such as MOLLI have excellent precision and are highly reproducible when using tightly controlled protocols. The MOLLI method is widely available and is relatively mature. The accuracy of inversion recovery techniques is affected significantly by magnetization transfer (MT). Despite this, the estimate of apparent T1 using inversion recovery is a sensitive measure, which has been demonstrated to be a useful tool in characterizing tissue and discriminating disease. Saturation recovery methods have the potential to provide a more accurate measurement of T1 that is less sensitive to MT as well as other factors. Saturation recovery techniques are, however, noisier and somewhat more artifact prone and have not demonstrated the same level of reproducibility at this point in time. This review article focuses on the technical aspects of key T1-mapping methods and imaging protocols and describes their limitations including
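
    As background to the inversion-recovery methods discussed here, the sketch below fits the standard three-parameter model S(TI) = A - B exp(-TI/T1*) and applies the Look-Locker correction T1 = T1*(B/A - 1), the form used by MOLLI-type fits. It assumes scipy is available; the inversion times, parameter values and noise level are synthetic, not from any cited protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti, a, b, t1_star):
    """Three-parameter inversion-recovery model: S = A - B*exp(-TI/T1*)."""
    return a - b * np.exp(-ti / t1_star)

# Synthetic inversion times (ms) and noisy signal, illustrative only.
ti = np.array([100, 180, 260, 1100, 1900, 2700, 3500, 4300], float)
rng = np.random.default_rng(0)
s = ir_signal(ti, 300.0, 570.0, 700.0) + rng.normal(0.0, 3.0, ti.size)

(a, b, t1_star), _ = curve_fit(ir_signal, ti, s, p0=(300, 600, 800))
t1 = t1_star * (b / a - 1)  # Look-Locker correction
print(f"apparent T1* = {t1_star:.0f} ms, corrected T1 = {t1:.0f} ms")
```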

  14. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS--1988

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies (SLAMS) during 1988 are analyzed. Pooled site variances and average biases, which are relevant quantities to both precision and accuracy determinations, are statistically compared within and between states to assess ...

  15. Robust and precise baseline determination of distributed spacecraft in LEO

    NASA Astrophysics Data System (ADS)

    Allende-Alba, Gerardo; Montenbruck, Oliver

    2016-01-01

    Recent experience with prominent formation flying missions in Low Earth Orbit (LEO), such as GRACE and TanDEM-X, has shown the feasibility of precise relative navigation at millimeter and sub-millimeter levels using GPS carrier phase measurements with fixed integer ambiguities. However, the robustness and availability of the solutions provided by current algorithms may be highly dependent on the mission profile. The main challenges faced in the LEO scenario are the resulting short continuous carrier phase tracking arcs along with the observed rapidly changing ionospheric conditions, which in the particular situation of long baselines increase the difficulty of correct integer ambiguity resolution. To reduce the impact of these factors, the present study proposes a strategy based on a reduced-dynamics filtering of dual-frequency GPS measurements for precise baseline determination along with a dedicated scheme for integer ambiguity resolution, consisting of a hybrid sequential/batch algorithm based on the maximum a posteriori and integer aperture estimators. The algorithms have been tested using flight data from the GRACE, TanDEM-X and Swarm missions in order to assess their robustness to different formation and baseline configurations. Results with the GRACE mission show an average 0.7 mm consistency with the K/Ka-band ranging measurements over a period of more than two years in a baseline configuration of 220 km. Results with TanDEM-X data show an average of 3.8 mm consistency of kinematic and reduced-dynamic solutions in the along-track component over a period of 40 days in baseline configurations of 500 m and 75 km. Data from Swarm A and Swarm C spacecraft are largely affected by atmospheric scintillation and contain half cycle ambiguities. The results obtained under such conditions show an overall consistency between kinematic and reduced-dynamic solutions of 1.7 cm in the along-track component over a period of 30 days in a variable baseline of approximately 60

  16. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  17. Accuracy and precision of alternative estimators of ectoparasiticide efficacy.

    PubMed

    Schall, Robert; Burger, Divan A; Luus, Herman G

    2016-06-15

    While there is consensus that the efficacy of parasiticides is properly assessed using the Abbott formula, there is as yet no general consensus on the use of arithmetic versus geometric mean numbers of surviving parasites in the formula. The purpose of this paper is to investigate the accuracy and precision of various efficacy estimators based on the Abbott formula which alternatively use arithmetic mean, geometric mean and median numbers of surviving parasites; we also consider a maximum likelihood estimator. Our study shows that the best estimators using geometric means are competitive, with respect to root mean squared error, with the conventional Abbott estimator using arithmetic means, as they have lower average and lower median root mean square error over the parameter scenarios which we investigated. However, our study confirms that Abbott estimators using geometric means are potentially biased upwards, and this upward bias is substantial in particular when the test product has substandard efficacy (90% and below). For this reason, we recommend that the Abbott estimator be calculated using arithmetic means. PMID:27198777
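
    The competing estimators can be written down in a few lines. The sketch below applies the Abbott formula with an interchangeable summary statistic; the +1 offset in the geometric mean (a common device for zero counts) and the parasite counts are assumptions for illustration, not the authors' exact variants.

```python
import numpy as np

def abbott_efficacy(control, treated, statistic="arithmetic"):
    """Abbott-formula efficacy: 100 * (1 - m(treated) / m(control)),
    where m is the chosen summary statistic of surviving parasites."""
    summaries = {
        "arithmetic": np.mean,
        # log-transform with a +1 offset to tolerate zero counts
        "geometric": lambda x: np.expm1(np.mean(np.log1p(np.asarray(x, float)))),
        "median": np.median,
    }
    m = summaries[statistic]
    return 100.0 * (1.0 - m(treated) / m(control))

control = [48, 52, 61, 45, 58, 50]  # surviving parasites, untreated group
treated = [6, 0, 3, 11, 1, 4]       # surviving parasites, treated group
for kind in ("arithmetic", "geometric", "median"):
    print(kind, round(abbott_efficacy(control, treated, kind), 1))
```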

  18. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  19. Numerical planetary and lunar ephemerides - Present status, precision and accuracies

    NASA Technical Reports Server (NTRS)

    Standish, E. Myles, Jr.

    1986-01-01

    Features of the ephemeris creation process are described, with attention given to the equations of motion, the numerical integration, and the least-squares fitting process. Observational data are presented and ephemeris accuracies are estimated. It is believed that radio measurements, VLBI, occultations, and the Space Telescope and Hipparcos will improve ephemerides in the near future. Limitations to accuracy are considered, as well as relativity features. The export procedure, by which an outside user may obtain and use the JPL ephemerides, is discussed.

  20. Robustness and Accuracy in Sea Urchin Developmental Gene Regulatory Networks

    PubMed Central

    Ben-Tabou de-Leon, Smadar

    2016-01-01

    Developmental gene regulatory networks robustly control the timely activation of regulatory and differentiation genes. The structure of these networks underlies their capacity to buffer intrinsic and extrinsic noise and maintain embryonic morphology. Here I illustrate how the use of specific architectures by the sea urchin developmental regulatory networks enables the robust control of cell fate decisions. The Wnt-β-catenin signaling pathway patterns the primary embryonic axis, while the BMP signaling pathway patterns the secondary embryonic axis, in the sea urchin embryo and across Bilateria. Interestingly, in the sea urchin, in both cases the signaling pathway that defines the axis directly controls the expression of a set of downstream regulatory genes. I propose that this direct activation of a set of regulatory genes enables a uniform regulatory response and a clear-cut cell fate decision in the endoderm and in the dorsal ectoderm. The specification of the mesodermal pigment cell lineage is activated by Delta signaling that initiates a triple positive feedback loop that locks down the pigment specification state. I propose that the use of compound positive feedback circuitry provides the endodermal cells enough time to turn off mesodermal genes and ensures a correct mesoderm vs. endoderm fate decision. Thus, I argue that understanding the control properties of repeatedly used regulatory architectures illuminates their role in embryogenesis and provides possible explanations for their resistance to evolutionary change. PMID:26913048

  1. Robust adhesive precision bonding in automated assembly cells

    NASA Astrophysics Data System (ADS)

    Müller, Tobias; Haag, Sebastian; Bastuck, Thomas; Gisler, Thomas; Moser, Hansruedi; Uusimaa, Petteri; Axt, Christoph; Brecher, Christian

    2014-03-01

    Diode lasers are gaining importance, advancing to higher output powers along with improved BPP. The assembly of micro-optics for diode laser systems carries the highest requirements for assembly precision. Assembly costs for micro-optics are driven by the need for submicron alignment and the corresponding challenges induced by adhesive bonding. For micro-optic assembly tasks, a major challenge in adhesive bonding at the highest precision level is that the bonding process is irreversible. Accordingly, the first bonding attempt needs to be successful. Today's UV-curing adhesives exhibit shrinkage effects that are critical for submicron tolerances of, e.g., FACs. The impact of the shrinkage effects can be tackled by a suitable bonding-area design, such as minimal adhesive gaps and a shrinkage offset value adapted to the specific assembly parameters. Compensating shrinkage effects is difficult, as the shrinkage of UV-curing adhesives is not constant between two different lots and, as first test results indicate, varies over the storage period even under ideal circumstances. An up-to-date characterization of the adhesive therefore appears necessary for maximum precision in optics assembly, to reach the highest output yields, minimal tolerances and ideal beam-shaping results. A measurement setup to precisely determine the current level of shrinkage has therefore been set up. The goal is to provide this information on current shrinkage to the operator or assembly cell, so that the compensation offset can be adjusted on a daily basis. The expected impacts of this information are an improved beam-shaping result and first-time-right production.

  2. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  3. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not yet well understood. We address this question here using neural simulations and whole-cell intracellular recordings in combination with information-theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive threshold case if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526
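
    To make the mechanism concrete, here is a minimal leaky integrate-and-fire sketch in which the spike threshold tracks the recent membrane potential, so slow depolarization raises the threshold while brief coincident input still fires the cell. All parameters and inputs are illustrative inventions, not values from the paper or a fit to its recordings.

```python
import numpy as np

def lif_adaptive_threshold(drive, dt=0.1, tau_m=10.0, tau_th=30.0,
                           v_rest=-70.0, th0=-50.0, alpha=0.4):
    """LIF neuron whose threshold relaxes toward th0 + alpha*(v - v_rest),
    so the effective threshold adapts to the subthreshold membrane state."""
    v, th, spikes = v_rest, th0, []
    for n, i_ext in enumerate(drive):
        v += dt / tau_m * (v_rest - v) + i_ext * dt
        th += dt / tau_th * (th0 + alpha * (v - v_rest) - th)
        if v >= th:
            spikes.append(n * dt)
            v = v_rest  # reset after the spike
    return spikes

rng = np.random.default_rng(1)
slow = np.full(5000, 1.9)                          # sustained depolarizing drive
fast = slow + (rng.random(5000) < 0.002) * 120.0   # plus rare coincident bursts
# Slow drive alone never crosses the adapted threshold; bursts still spike.
print(len(lif_adaptive_threshold(slow)), "vs", len(lif_adaptive_threshold(fast)))
```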

  4. Robustness versus accuracy in shock-wave computations

    NASA Astrophysics Data System (ADS)

    Gressier, Jérémie; Moschetta, Jean-Marc

    2000-06-01

    Despite constant progress in the development of upwind schemes, some failings still remain. Quirk recently reported (Quirk JJ. A contribution to the great Riemann solver debate. International Journal for Numerical Methods in Fluids 1994; 18: 555-574) that approximate Riemann solvers, which share the exact capture of contact discontinuities, generally suffer from such failings. One of these is the odd-even decoupling that occurs along planar shocks aligned with the mesh. First, a few results on some failings are given, namely the carbuncle phenomenon and the kinked Mach stem. Then, following Quirk's analysis of Roe's scheme, general criteria are derived to predict the odd-even decoupling. This analysis is applied to Roe's scheme (Roe PL, Approximate Riemann solvers, parameter vectors, and difference schemes, Journal of Computational Physics 1981; 43: 357-372), the Equilibrium Flux Method (Pullin DI, Direct simulation methods for compressible inviscid ideal gas flow, Journal of Computational Physics 1980; 34: 231-244), the Equilibrium Interface Method (Macrossan MN, Oliver RI, A kinetic theory solution method for the Navier-Stokes equations, International Journal for Numerical Methods in Fluids 1993; 17: 177-193) and the AUSM scheme (Liou MS, Steffen CJ, A new flux splitting scheme, Journal of Computational Physics 1993; 107: 23-39). Strict stability is shown to be desirable to avoid most of these flaws. Finally, the link between marginal stability and accuracy on shear waves is established.

  5. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Ugur Sanli, D.

    2014-05-01

    In this paper, we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revisiting the accuracy of static GPS from the latest version of this well-established research software, the first of its kind. Although the results do not differ significantly from the previous version, version 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling introduced with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among the solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis from version 6.1.1, we argued that the elimination of outliers was cumbersome with the traditional method, since repeated trials were needed and subjectivity that could affect the statistical significance of the solutions might have crept into the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability, and at the same time arguably the most efficient, with its highest possible breakdown point. In our analysis, we used a slightly modified version of the median method as introduced in Tut et al. (2013). Hence, we were able to remove suspected outliers in a single run, outliers that were more problematic to remove with the traditional methods from the solutions produced using the latest version of the software. References: Hayal AG, Sanli DU, Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42 (2013). Tut İ, Sanli DU, Erdogan B, Hekimoglu S, Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331
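
    A generic version of such a filter is easy to state. The sketch below uses the standard median/MAD rule, which flags every suspect point in a single pass thanks to the median's 50% breakdown point; the authors use a slightly modified median method (Tut et al. 2013), so treat this as the textbook version, with invented residuals.

```python
import numpy as np

def mad_outliers(x, k=3.0):
    """Flag points more than k robust standard deviations from the median,
    using the median absolute deviation (MAD) scaled for normal data."""
    x = np.asarray(x, float)
    med = np.median(x)
    sigma = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > k * sigma

# Height residuals (cm) from short observation sessions, one gross outlier.
h = np.array([0.8, -0.4, 1.1, 0.2, -0.9, 12.5, 0.5, -0.3])
print(h[~mad_outliers(h)])  # the outlier is removed in one run
```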

  6. Highly precise and robust packaging of optical components

    NASA Astrophysics Data System (ADS)

    Leers, Michael; Winzen, Matthias; Liermann, Erik; Faidel, Heinrich; Westphalen, Thomas; Miesner, Jörn; Luttmann, Jörg; Hoffmann, Dieter

    2012-03-01

    In this paper we present the development of a compact, thermo-optically stable and vibration and mechanical shock resistant mounting technique by soldering of optical components. Based on this technique a new generation of laser sources for aerospace applications is designed. In these laser systems solder technique replaces the glued and bolted connections between optical component, mount and base plate. Alignment precision in the arc second range and realization of long term stability of every single part in the laser system is the main challenge. At the Fraunhofer Institute for Laser Technology ILT a soldering and mounting technique has been developed for high precision packaging. The specified environmental boundary conditions (e.g. a temperature range of -40 °C to +50 °C) and the required degrees of freedom for the alignment of the components have been taken into account for this technique. In general the advantage of soldering compared to gluing is that there is no outgassing. In addition no flux is needed in our special process. The joining process allows multiple alignments by remelting the solder. The alignment is done in the liquid phase of the solder by a 6 axis manipulator with a step width in the nm range and a tilt in the arc second range. In a next step the optical components have to pass the environmental tests. The total misalignment of the component to its adapter after the thermal cycle tests is less than 10 arc seconds. The mechanical stability tests regarding shear, vibration and shock behavior are well within the requirements.

  7. The precision and accuracy of a portable heart rate monitor.

    PubMed

    Seaward, B L; Sleamaker, R H; McAuliffe, T; Clapp, J F

    1990-01-01

    A device that would comfortably and accurately measure exercise heart rate during field performance could be valuable for athletes, fitness participants, and investigators in the field of exercise physiology. Such a device, a portable telemeterized microprocessor, was compared with direct EKG measurements in a laboratory setting under several conditions to assess its accuracy. Twenty-four subjects were studied at rest and during light-, moderate-, high-, and maximal-intensity endurance activities (walking, running, aerobic dancing, and Nordic Track simulated cross-country skiing). Differences between values obtained by the two measuring devices were not statistically significant, with correlation coefficient (r) values ranging from 0.998 to 0.999. The two methods proved equally reliable for measuring heart rate in a host of varied aerobic activities at varying intensities. PMID:2306564

  8. Milling precision and fitting accuracy of Cerec Scan milled restorations.

    PubMed

    Arnetzl, G; Pongratz, D

    2005-10-01

    The milling accuracy of the Cerec Scan system was examined under standard practice conditions. For this purpose, one and the same 3D design similar to an inlay was milled 30 times from Vita Mark II ceramic blocks. Cylindrical diamond burs with 1.2 or 1.6 mm diameter were used. Each individual milled body was measured to within 0.1 µm at five defined sections with a coordinate measuring instrument from the Zeiss company. In the statistical evaluation, both the different diamond bur diameters and the extent of material removal from the ceramic blank were taken into consideration; sections with large substance removal and sections with low substance removal were defined. The standard deviation for the 1.6-mm burs was clearly greater than that for the 1.2-mm burs for the sections with large substance removal. This difference was significant according to the Levene test for variance equality. In sections with low substance removal, no difference between the use of the 1.6-mm or 1.2-mm bur was shown. The measured results ranged between 0.053 and 0.14 mm. The scatter of the distances with large substance removal was larger than that with low substance removal. The t-test for paired samples showed that the distance with large substance removal when using the 1.6-mm bur was significantly larger than the distance with low substance removal. The difference was not significant for the small burs. It was shown statistically, several times over, that the use of the cylindrical diamond bur with 1.6-mm diameter led to greater inaccuracies than the use of the 1.2-mm cylindrical diamond bur, especially at sites with large material removal. PMID:16689028
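
    For readers who want to reproduce the statistics described here, the variance comparison is a one-call Levene test in scipy; the deviation values below are invented for illustration.

```python
from scipy import stats

# Hypothetical milling deviations (mm) for the two bur diameters at
# sections with large substance removal; illustrative values only.
bur_12 = [0.061, 0.058, 0.066, 0.059, 0.063, 0.060]
bur_16 = [0.055, 0.082, 0.120, 0.064, 0.098, 0.071]

# Levene's test for equality of variances (variance ~ milling precision).
w, p = stats.levene(bur_12, bur_16)
print(f"Levene W = {w:.2f}, p = {p:.3f}")  # small p suggests unequal variances
```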

  9. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  10. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  11. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

    The advent and development of new imaging techniques and image-guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what is beneath the surface, or function. These systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking system is thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system are assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy is dependent on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error has similar characteristics and values to the previous work, thus validating this method for tracker analysis.

  12. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  13. S193 radiometer brightness temperature precision/accuracy for SL2 and SL3

    NASA Technical Reports Server (NTRS)

    Pounds, D. J.; Krishen, K.

    1975-01-01

    The precision and accuracy with which the S193 radiometer measured the brightness temperature of ground scenes is investigated. Estimates were derived from data collected during Skylab missions. Homogeneous ground sites were selected and S193 radiometer brightness temperature data analyzed. The precision was expressed as the standard deviation of the radiometer acquired brightness temperature. Precision was determined to be 2.40 K or better depending on mode and target temperature.

  14. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized. PMID:25868451

  15. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations. PMID:15832575

  16. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacement in the range of human chest wall displacement during quiet breathing. OEP performances were investigated by the use of a fully programmable chest wall simulator (CWS). The CWS was programmed to move its eight shafts 10 times in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed with the aim to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m³ calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504

  17. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  18. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1984

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies during 1984 are summarized and compared to data reported earlier for the period 1981-1983. A continual improvement in the completeness of the data is evident. Improvement is also evident in the size of the precisi...

  19. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1983

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1983 are summarized and evaluated. Some comparisons are made with the results previously reported for 1981 and 1982 to determine the indication of any trends. Some trends indicated improvement in the comple...

  20. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1985

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1985 are summarized and evaluated. Some comparisons are made with the results reported for prior years to determine any trends. Some trends indicated continued improvement in the completeness of reporting o...

  1. ASSESSMENT OF THE PRECISION AND ACCURACY OF SAM AND MFC MICROCOSMS EXPOSED TO TOXICANTS

    EPA Science Inventory

    The results of 30 mixed flask culture (MFC) and four standardized aquatic microcosm (SAM) experiments were used to describe the precision and accuracy of these two protocols. Coefficients of variation (CV) for chemical measurements (DO, pH) were generally less than 7%, f...

  2. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures in the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures are suitable as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, the isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision would allow the statistical errors to be considered in the treatment planning volume margins.

  3. Increasing average period lengths by switching of robust chaos maps in finite precision

    NASA Astrophysics Data System (ADS)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits as compared to simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
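
    The finite-precision effect being exploited here is easy to demonstrate. The sketch below rounds each iterate to a fixed number of decimal digits and counts steps until a (state, phase) pair repeats, comparing a single map against simple alternation between two maps. It uses the logistic and tent maps rather than the paper's robust-chaos generalization, and deterministic alternation rather than the random or chaotic switching the authors find works best.

```python
def cycle_length(x0, maps, precision=4):
    """Iterate maps cyclically at finite precision until the orbit closes.
    Rounding makes every orbit eventually periodic, the effect behind the
    Grebogi-Ott-Yorke scaling T ~ eps^(-d/2)."""
    seen, x, n = set(), round(x0, precision), 0
    while (x, n % len(maps)) not in seen:
        seen.add((x, n % len(maps)))
        x = round(maps[n % len(maps)](x), precision)
        n += 1
    return n

def logistic(x):
    return 4.0 * x * (1.0 - x)

def tent(x):
    return 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)

print("logistic alone      :", cycle_length(0.123, [logistic]))
print("alternating two maps:", cycle_length(0.123, [logistic, tent]))
```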

  4. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System

    PubMed Central

    Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.

    2015-01-01

    Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has generally been overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal to or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show a significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence

  5. Accuracy, precision and economics: The cost of cutting-edge chemical analyses

    NASA Astrophysics Data System (ADS)

    Hamilton, B.; Hannigan, R.; Jones, C.; Chen, Z.

    2002-12-01

    Otolith (fish ear bone) chemistry has proven to be an exceptional tool for the identification of essential fish habitats in marine and freshwater environments. These measurements, which explore the variations in trace element content of otoliths relative to calcium (e.g., Ba/Ca, Mg/Ca), provide data to resolve the differences in habitat water chemistry on the watershed to catchment scale. The vast majority of these analyses are performed by laser ablation ICP-MS using a high-resolution instrument. However, few laboratories are equipped with this configuration and many researchers measure the trace element chemistry of otoliths by whole digestion ICP-MS using lower resolution quadrupole instruments. This study examines the differences in accuracy and precision between three elemental analysis methods using whole otolith digestion on a low resolution ICP-MS (ELAN 9000). The first, and most commonly used, technique is external calibration with internal standardization. This technique is the most cost-effective but is also limited in terms of method detection limits. The second, standard addition, is more costly in terms of time and use of standard materials but offers gains in precision and accuracy. The third, isotope dilution, is the least cost-effective but the most accurate of the elemental analysis techniques. Based on the results of this study, which seeks to identify the technique that is easiest to implement yet has the precision and accuracy necessary to resolve spatial variations in habitats, we conclude that external calibration with internal standardization can be sufficient to resolve spatial and temporal variations in marine and estuarine environments (+/- 6-8% accuracy). Standard addition increases the accuracy of measurements to 2-5% and is ideal for freshwater studies. While there is a gain in accuracy and precision with isotope dilution, the spatial and temporal resolution is no greater with this technique than with the others.
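
    For readers unfamiliar with the middle option, the sketch below shows the arithmetic of a standard-addition determination: the sample is measured with increasing known spikes, and the native concentration is the magnitude of the x-intercept of the fitted line. The element, spike levels and signal values are hypothetical.

      import numpy as np

      # Hypothetical standard-addition data: instrument signal for a digested
      # otolith aliquot spiked with increasing amounts of a Ba standard.
      added = np.array([0.0, 5.0, 10.0, 20.0])             # spike added, ug/L
      signal = np.array([1250.0, 2210.0, 3180.0, 5090.0])  # counts/s

      # Fit signal = slope * added + intercept; the unspiked (native)
      # concentration is the magnitude of the x-intercept.
      slope, intercept = np.polyfit(added, signal, 1)
      print(f"native concentration ~ {-intercept / slope:.2f} ug/L")  # ~6.5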

  6. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

    Reference scanners are used in dental medicine to verify a wide range of procedures. The main interest is in verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is the lack of accuracy when scanning large objects such as full dental arches, or the limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with an even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many dental research fields. The different magnification levels, combined with a high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arch changes. PMID:24836007

  7. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target from camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  8. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324

  9. Measuring the accuracy and precision of quantitative coronary angiography using a digitally simulated test phantom

    NASA Astrophysics Data System (ADS)

    Morioka, Craig A.; Whiting, James S.; LeFree, Michelle T.

    1998-06-01

    Quantitative coronary angiography (QCA) diameter measurements have been used as an endpoint measurement in clinical studies involving therapies to reduce coronary atherosclerosis. The accuracy and precision of the QCA measure can affect the sample size and study conclusions of a clinical study. Measurements using x-ray test phantoms can underestimate the precision and accuracy of the actual arteries in clinical digital angiograms because they do not contain complex patient structures. Determining the clinical performance of QCA algorithms under clinical conditions is difficult because: (1) no gold standard test object exists in clinical images, (2) phantom images do not have any structured background noise. We propose the use of computer-simulated arteries as a replacement for traditional angiographic test phantoms to evaluate QCA algorithm performance.

  10. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating-point numbers - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low-resolution (single-tier) double-precision models and similar-cost high-resolution (two-tier) models in mixed precision to produce accurate forecasts of this 'truth' are compared. The high-resolution models outperform the low-resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
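
    The flavour of such an experiment can be reproduced with the standard single-tier Lorenz '96 system, comparing a float64 trajectory against one carried in float16. This sketch is not the authors' three-tier model: the crude forward-Euler integrator, time step, forcing and step count are all assumptions chosen only to make the precision-dependent divergence visible.

      import numpy as np

      def l96_step(x, dt, F, dtype):
          # One forward-Euler step of the single-tier Lorenz '96 model
          # dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, carried out in
          # the requested floating-point precision.
          x = x.astype(dtype)
          dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + dtype(F)
          return x + dtype(dt) * dx

      rng = np.random.default_rng(0)
      x0 = 8.0 + 0.1 * rng.standard_normal(40)

      x64 = x0.copy()
      x16 = x0.copy()
      for _ in range(1000):                 # 5 model time units at dt = 0.005
          x64 = l96_step(x64, 0.005, 8.0, np.float64)
          x16 = l96_step(x16, 0.005, 8.0, np.float16)

      rms = np.sqrt(np.mean((x64 - x16.astype(np.float64)) ** 2))
      print(f"RMS divergence between float64 and float16 runs: {rms:.3f}")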

  11. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.
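
    The abstract does not reproduce the combinatorial rule itself; a common convention for machine-tool error budgets, shown below purely as an illustration, is to combine uncorrelated random terms in root-sum-square and add systematic terms linearly as a worst case. All entries are hypothetical.

      import math

      # Hypothetical error-budget entries for a measuring machine, in micrometres.
      random_terms = {"scale nonlinearity": 0.05, "thermal drift": 0.08, "servo jitter": 0.03}
      systematic_terms = {"Abbe offset": 0.04, "squareness": 0.02}

      rss = math.sqrt(sum(e ** 2 for e in random_terms.values()))  # uncorrelated terms
      predicted = rss + sum(systematic_terms.values())             # worst-case systematics
      print(f"predicted overall accuracy: +/-{predicted:.2f} um")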

  12. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the 75Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  13. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  14. Increasing the precision and accuracy of top-loading balances:  application of experimental design.

    PubMed

    Bzik, T J; Henderson, P B; Hobbs, J P

    1998-01-01

    The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
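
    The sketch below illustrates the principle on seven objects: the same number of weighings, but with four objects on the pan at a time according to a cyclic 2-(7,4,2) design, recovers each weight with a noticeably smaller standard deviation than weighing the objects one at a time. The design, noise level and weights are illustrative assumptions, not the paper's modified Plackett-Burman scheme.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 7
      true_w = rng.uniform(10.0, 20.0, size=n)   # hypothetical true weights, grams
      sigma = 0.05                               # balance noise (1 SD) per weighing

      # Scheme A: each object weighed alone -- the identity design matrix.
      A_single = np.eye(n)

      # Scheme B: seven weighings with four objects on the pan at a time; rows
      # are cyclic shifts of the block {0, 3, 5, 6}, a 2-(7,4,2) design.
      base = np.zeros(n)
      base[[0, 3, 5, 6]] = 1.0
      A_combo = np.array([np.roll(base, i) for i in range(n)])

      def mean_per_object_std(A, trials=20000):
          # Simulate noisy weighings, solve the linear system, and report the
          # average standard deviation of the recovered weights.
          Ainv = np.linalg.inv(A)
          noise = rng.normal(0.0, sigma, size=(trials, n))
          estimates = (A @ true_w + noise) @ Ainv.T
          return estimates.std(axis=0).mean()

      print("single weighings  :", mean_per_object_std(A_single))
      print("combined weighings:", mean_per_object_std(A_combo))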

  15. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, focal plane flatness of less than 25 µm (.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of integrating the detector into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing the alignment accuracy include: datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduces the error terms by minimizing the measurement transfers to the housing. In the design, proper material selection requires coefficient-of-expansion-matched materials, which minimize both the physical shift over temperature and the stress induced in the detector. When required, the co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability, and precision of the measurements.

  16. Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kohen, Hamid

    1997-01-01

    This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
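
    The following toy sketch shows the shape of such an optimization: a plain genetic algorithm evolves a command profile to minimize a landing-error cost averaged over Monte Carlo perturbations. The cost function here is a quadratic stand-in, not the paper's entry-dynamics simulation, and all GA settings are arbitrary.

      import random

      def landing_error(commands, perturbation):
          # Toy surrogate for the landing-error cost, NOT an entry-dynamics model.
          return sum((c - 0.3 - perturbation) ** 2 for c in commands)

      def robust_cost(commands, n_mc=20):
          # Monte Carlo average over uncertain entry conditions / atmosphere.
          return sum(landing_error(commands, random.gauss(0.0, 0.1))
                     for _ in range(n_mc)) / n_mc

      def evolve(pop_size=40, genes=10, generations=80):
          pop = [[random.uniform(-1.0, 1.0) for _ in range(genes)]
                 for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=robust_cost)          # noisy fitness; fine for a demo
              elite = pop[: pop_size // 2]
              children = []
              while len(elite) + len(children) < pop_size:
                  a, b = random.sample(elite, 2)
                  cut = random.randrange(1, genes)         # one-point crossover
                  child = a[:cut] + b[cut:]
                  child[random.randrange(genes)] += random.gauss(0.0, 0.05)
                  children.append(child)
              pop = elite + children
          return min(pop, key=lambda c: robust_cost(c, n_mc=200))

      best = evolve()
      print("best robust cost:", round(robust_cost(best, n_mc=1000), 4))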

  17. Using statistics and software to maximize precision and accuracy in U-Pb geochronological measurements

    NASA Astrophysics Data System (ADS)

    McLean, N.; Bowring, J. F.; Bowring, S. A.

    2009-12-01

    Uncertainty in U-Pb geochronology results from a wide variety of factors, including isotope ratio determinations, common Pb corrections, initial daughter product disequilibria, instrumental mass fractionation, isotopic tracer calibration, and U decay constants and isotopic composition. The relative contribution of each depends on the proportion of radiogenic to common Pb, the measurement technique, and the quality of systematic error determinations. Random and systematic uncertainty contributions may be propagated into individual analyses or for an entire population, and must be propagated correctly to accurately interpret data. Tripoli and U-Pb_Redux comprise a new data reduction and error propagation software package that combines robust cycle measurement statistics with rigorous multivariate data analysis and presents the results graphically and interactively. Maximizing the precision and accuracy of a measurement begins with correct appraisal and codification of the systematic and random errors for each analysis. For instance, a large dataset of total procedural Pb blank analyses defines a multivariate normal distribution, describing the mean of and variation in isotopic composition (IC) that must be subtracted from each analysis. Uncertainty in the size and IC of each Pb blank is related to the (random) uncertainty in ratio measurements and the (systematic) uncertainty involved in tracer subtraction. Other sample and measurement parameters can be quantified in the same way, represented as statistical distributions that describe their uncertainty or variation, and are input into U-Pb_Redux as such before the raw sample isotope ratios are measured. During sample measurement, U-Pb_Redux and Tripoli can relay cycle data in real time, calculating a date and uncertainty for each new cycle or block. The results are presented in U-Pb_Redux as an interactive user interface with multiple visualization tools. One- and two-dimensional plots of each calculated date and
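
    As a schematic of the blank-correction step described above, the sketch below draws the blank isotopic composition from a multivariate normal and propagates it, together with hypothetical measurement noise, into a blank-corrected signal by Monte Carlo. All numbers are invented, and the bookkeeping is deliberately simplified relative to a full reduction such as U-Pb_Redux.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000

      # Blank isotopic composition as a multivariate normal over
      # (206Pb/204Pb, 207Pb/204Pb); all values invented for illustration.
      blank_ic = rng.multivariate_normal(mean=[18.5, 15.6],
                                         cov=[[0.25, 0.10],
                                              [0.10, 0.16]], size=n)

      # Hypothetical measured totals with their own random uncertainties.
      total_206 = rng.normal(100.0, 0.3, size=n)   # arbitrary units
      blank_204 = rng.normal(1.0, 0.05, size=n)    # amount of blank 204Pb

      # Blank 206Pb follows from the blank amount and its sampled IC;
      # subtracting it yields the blank-corrected (radiogenic) signal.
      radiogenic_206 = total_206 - blank_204 * blank_ic[:, 0]

      print(f"blank-corrected 206Pb = {radiogenic_206.mean():.1f} "
            f"+/- {radiogenic_206.std():.1f} (1 SD)")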

  18. Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping

    NASA Astrophysics Data System (ADS)

    Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta

    2012-10-01

    A mobile mapping system (MMS) is the geoinformation community's answer to the exponentially growing demand for various geospatial data with increasingly higher accuracies, captured by multiple sensors. As mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual-information-based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests are done under various lighting conditions, which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.

  19. Accuracy and precision of quantitative 31P-MRS measurements of human skeletal muscle mitochondrial function.

    PubMed

    Layec, Gwenael; Gifford, Jayson R; Trinity, Joel D; Hart, Corey R; Garten, Ryan S; Park, Song Y; Le Fur, Yann; Jeong, Eun-Kee; Richardson, Russell S

    2016-08-01

    Although theoretically sound, the accuracy and precision of (31)P-magnetic resonance spectroscopy ((31)P-MRS) approaches to quantitatively estimate mitochondrial capacity are not well documented. Therefore, employing four differing models of respiratory control [linear, kinetic, and multipoint adenosine diphosphate (ADP) and phosphorylation potential], this study sought to determine the accuracy and precision of (31)P-MRS assessments of peak mitochondrial adenosine-triphosphate (ATP) synthesis rate utilizing directly measured peak respiration (State 3) in permeabilized skeletal muscle fibers. In 23 subjects of different fitness levels, (31)P-MRS during a 24-s maximal isometric knee extension and high-resolution respirometry in muscle fibers from the vastus lateralis was performed. Although significantly correlated with State 3 respiration (r = 0.72), both the linear (45 ± 13 mM/min) and phosphorylation potential (47 ± 16 mM/min) models grossly overestimated the calculated in vitro peak ATP synthesis rate (P < 0.05). Of the ADP models, the kinetic model was well correlated with State 3 respiration (r = 0.72, P < 0.05), but moderately overestimated ATP synthesis rate (P < 0.05), while the multipoint model, although being somewhat less well correlated with State 3 respiration (r = 0.55, P < 0.05), most accurately reflected peak ATP synthesis rate. Of note, the PCr recovery time constant (τ), a qualitative index of mitochondrial capacity, exhibited the strongest correlation with State 3 respiration (r = 0.80, P < 0.05). Therefore, this study reveals that each of the (31)P-MRS data analyses, including PCr τ, exhibits precision in terms of mitochondrial capacity. As only the multipoint ADP model did not overestimate the peak skeletal muscle mitochondrial ATP synthesis, the multipoint ADP model is the only quantitative approach to exhibit both accuracy and precision. PMID:27302751

  20. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand, King et al. (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to +/- 3 m using a 3 MHz radar system (which has a quarter wavelength of 14 m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
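
    The numbers behind this distinction are easy to check. The sketch below reproduces the 14 m quarter-wavelength figure and contrasts it with the range uncertainty implied by a purely hypothetical 30 ns travel-time picking uncertainty, which is of the order of the +/- 3 m claimed for single-reflector ranging.

      # Quarter-wavelength criterion vs. single-target range precision.
      c_ice = 1.68e8   # radio-wave speed in ice, m/s
      f = 3e6          # radar centre frequency, Hz

      quarter_wavelength = c_ice / f / 4.0
      print(f"quarter wavelength in ice: {quarter_wavelength:.0f} m")  # 14 m

      # Range to one strong reflector is limited by timing instead; assume a
      # (hypothetical) 30 ns two-way travel-time picking uncertainty.
      dt = 30e-9
      print(f"single-target range uncertainty: {c_ice * dt / 2.0:.1f} m")  # ~2.5 m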

  1. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter in assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when it is properly applied, but a common problem is the camera lens orientation while taking a picture. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that using 2 rulers placed in parallel below and above the wound for calibration increases the precision of area measurement on average 3.8-fold compared with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry with 2 rulers than with the Visitrak device, the Silhouette Mobile device, or the AreaMe software-based method. Conclusion Calibration with 2 rulers in digital planimetry remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747

  2. Accuracy of 3D white light scanning of abutment teeth impressions: evaluation of trueness and precision

    PubMed Central

    Jeon, Jin-Hun; Kim, Hae-Young; Kim, Ji-Hwan

    2014-01-01

    PURPOSE This study aimed to evaluate the accuracy of digitizing dental impressions of abutment teeth using a white light scanner and to compare the findings among teeth types. MATERIALS AND METHODS To assess precision, impressions of the canine, premolar, and molar prepared to receive all-ceramic crowns were repeatedly scanned to obtain five sets of 3-D data (STL files). Point clouds were compared and error sizes were measured (n=10 per type). Next, to evaluate trueness, impressions of teeth were rotated by 10°-20° and scanned. The obtained data were compared with the first set of data for precision assessment, and the error sizes were measured (n=5 per type). The Kruskal-Wallis test was performed to evaluate precision and trueness among three teeth types, and post-hoc comparisons were performed using the Mann-Whitney U test with Bonferroni correction (α=.05). RESULTS Precision discrepancies for the canine, premolar, and molar were 3.7 µm, 3.2 µm, and 7.3 µm, respectively, indicating the poorest precision for the molar (P<.001). Trueness discrepancies for the teeth types were 6.2 µm, 11.2 µm, and 21.8 µm, respectively, indicating the poorest trueness for the molar (P=.007). CONCLUSION With respect to accuracy, the molar showed the largest discrepancies compared with the canine and premolar. Digitizing dental impressions of abutment teeth using a white light scanner was assessed to be a highly accurate method and provided discrepancy values in a clinically acceptable range. Further study is needed to improve the digitizing performance of white light scanning on axial walls. PMID:25551007

  3. An efficient camera calibration technique offering robustness and accuracy over a wide range of lens distortion.

    PubMed

    Rahman, Taufiqur; Krouglicof, Nicholas

    2012-02-01

    In the field of machine vision, camera calibration refers to the experimental determination of a set of parameters that describe the image formation process for a given analytical model of the machine vision system. Researchers working with low-cost digital cameras and off-the-shelf lenses generally favor camera calibration techniques that do not rely on specialized optical equipment, modifications to the hardware, or an a priori knowledge of the vision system. Most of the commonly used calibration techniques are based on the observation of a single 3-D target or multiple planar (2-D) targets with a large number of control points. This paper presents a novel calibration technique that offers improved accuracy, robustness, and efficiency over a wide range of lens distortion. This technique operates by minimizing the error between the reconstructed image points and their experimentally determined counterparts in "distortion free" space. This facilitates the incorporation of the exact lens distortion model. In addition, expressing spatial orientation in terms of unit quaternions greatly enhances the proposed calibration solution by formulating a minimally redundant system of equations that is free of singularities. Extensive performance benchmarking consisting of both computer simulation and experiments confirmed higher accuracy in calibration regardless of the amount of lens distortion present in the optics of the camera. This paper also experimentally confirmed that a comprehensive lens distortion model including higher order radial and tangential distortion terms improves calibration accuracy. PMID:21843988
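
    To make the "comprehensive lens distortion model" concrete, the sketch below applies a Brown-Conrady model with higher-order radial terms (k1, k2, k3) and tangential terms (p1, p2) to normalised image points; calibration then amounts to minimising the residual between observed points and this forward model over the coefficients and pose (the paper parameterises rotation with unit quaternions). The coefficient values are invented for illustration.

      import numpy as np

      def distort(xy, k=(0.12, -0.04, 0.002), p=(1e-3, -5e-4)):
          # Brown-Conrady model on normalised image points: radial terms
          # k1, k2, k3 and tangential terms p1, p2 (coefficients invented).
          x, y = xy[:, 0], xy[:, 1]
          r2 = x ** 2 + y ** 2
          radial = 1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3
          xd = x * radial + 2 * p[0] * x * y + p[1] * (r2 + 2 * x ** 2)
          yd = y * radial + p[0] * (r2 + 2 * y ** 2) + 2 * p[1] * x * y
          return np.stack([xd, yd], axis=1)

      # Calibration minimises the residual between observed (distorted) points
      # and this forward model, over the coefficients and the camera pose.
      pts = np.array([[0.1, 0.2], [0.5, -0.4], [-0.3, 0.3]])
      print(distort(pts))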

  4. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  5. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequence, such consideration is often an afterthought that occurs during the data analysis process.

  6. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with l:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  7. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients' body temperatures were measured by four peripheral methods: oral, axillary, tympanic, and forehead, along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results There was a statistically significant correlation between all the peripheral methods when compared with the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membranes, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than for other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient's body temperature in the intensive care units because of high accuracy and acceptable precision. PMID:27621673

  8. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  9. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  10. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. Given the

  11. Constructing a precise and robust chronology for the varved sediment record of Lake Czechowskie (Poland)

    NASA Astrophysics Data System (ADS)

    Ott, Florian; Brauer, Achim; Słowiński, Michał; Wulf, Sabine; Putyrskaya, Victoria; Blaszkiewicz, Miroslaw

    2014-05-01

    Annually laminated (varved) sediment records are essential for detailed investigations of past climate and environmental changes as they function as a natural memory far beyond instrumental datasets. However, reliable reconstructions of past changes need a robust chronology. In order to determine Holocene inter-annual and decadal-scale variability and to establish a precise time scale we investigated varved sediments of Lake Czechowskie (53°52' N/ 18°14' E, 108 m a.s.l.), northern Poland. During two coring campaigns in 2009 and 2012 we recovered several long and short cores, with the longest core reaching 14.25 m. Here we present a multiple dating approach for the Lake Czechowskie sediments. The chronology comprises varve counting for the Holocene time period and AMS 14C dating (19 plant macro remains and two bulk samples) for the entire sediment record reaching back to 14.0 cal ka BP. Varve counting between 14C dated samples and Bayesian age modeling helped to identify and omit samples that were either too old (due to redeposition) or too young (due to too low carbon contents). The good agreement between the varve chronology and the modeled age based on radiocarbon dates demonstrates robust age control for the sediment profile. Additionally, independent chronological anchor points derived from (i) 137Cs activity concentration measurements for the last ca. 50 years and (ii) newly detected tephra layers of the Askja AD 1875 eruption and the Laacher See Tephra (12880 varve yrs BP) are used as precisely dated isochrones. These volcanic ash layers can be further used as tie points to synchronize and correlate different lake records and to investigate local and regional differences in climatic and environmental changes over a wider geographic region on a common age scale. This study is a contribution to the Virtual Institute of Integrated Climate and Landscape Evolution Analysis -ICLEA- of the Helmholtz Association and the Helmholtz Association climate initiative REKLIM topic 8 "Rapid

  12. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised interest in precise and accurate pH measurement in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. However, until now the method had only been tested in oceanic systems with a relatively stable and high salinity and a small pH range. This paper describes the first application of such a pH measurement system under the conditions of the Baltic Sea, which is characterized by wide salinity and pH ranges. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as the indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges in the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes in the Baltic Sea. Because of the encouraging tests, the ease of operation and the fact that the measurements refer to the internationally accepted “total” pH scale, it is recommended to use the spectrophotometric method also for pH monitoring and trend detection in the Baltic Sea.
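
    For reference, the core of the spectrophotometric method is a closed-form calculation: pH follows from the indicator's absorbance ratio and its dissociation constant. The sketch below uses the widely cited m-cresol purple constants of Clayton and Byrne (1993), which were characterised for oceanic salinities; re-characterising this relationship over the Baltic's wider salinity and pH ranges is precisely the issue the paper addresses.

      import math

      def spec_pH(R, salinity=35.0, temp_K=298.15):
          # pH on the total scale from the m-cresol purple absorbance ratio
          # R = A578/A434, after Clayton & Byrne (1993). Valid only for the
          # oceanic salinity and temperature ranges the constants were fitted to.
          e1, e2, e3 = 0.00691, 2.2220, 0.1331      # molar absorptivity ratios
          pK2 = 1245.69 / temp_K + 3.8275 + 0.00211 * (35.0 - salinity)
          return pK2 + math.log10((R - e1) / (e2 - R * e3))

      print(f"pH at R = 1.0, S = 35, 25 C: {spec_pH(1.0):.3f}")  # ~7.68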

  13. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed superior to those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future desig
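
    The abstract does not give the novel form of the Ergun equation, but the classical form it modifies is standard and is sketched below for a packed bed of spheres; the particle size, void fraction and gas properties are illustrative.

      def ergun_dp_per_m(U, d_p, eps, mu=1.8e-5, rho=1.2):
          # Classical Ergun equation for pressure drop per unit bed length:
          #   dP/L = 150 mu (1-eps)^2 U / (eps^3 d_p^2)
          #        + 1.75 rho (1-eps) U^2 / (eps^3 d_p)
          # Defaults are ambient air; all inputs in SI units.
          viscous = 150.0 * mu * (1 - eps) ** 2 * U / (eps ** 3 * d_p ** 2)
          inertial = 1.75 * rho * (1 - eps) * U ** 2 / (eps ** 3 * d_p)
          return viscous + inertial

      # e.g. 200 um particles, void fraction 0.42, superficial velocity 2 cm/s
      print(f"{ergun_dp_per_m(U=0.02, d_p=200e-6, eps=0.42):.0f} Pa per metre of bed")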

  14. Automated optogenetic feedback control for precise and robust regulation of gene expression and cell growth

    PubMed Central

    Milias-Argeitis, Andreas; Rullan, Marc; Aoki, Stephanie K.; Buchmann, Peter; Khammash, Mustafa

    2016-01-01

    Dynamic control of gene expression can have far-reaching implications for biotechnological applications and biological discovery. Thanks to the advantages of light, optogenetics has emerged as an ideal technology for this task. Current state-of-the-art methods for optical expression control fail to combine precision with repeatability and cannot withstand changing operating culture conditions. Here, we present a novel fully automatic experimental platform for the robust and precise long-term optogenetic regulation of protein production in liquid Escherichia coli cultures. Using a computer-controlled light-responsive two-component system, we accurately track prescribed dynamic green fluorescent protein expression profiles through the application of feedback control, and show that the system adapts to global perturbations such as nutrient and temperature changes. We demonstrate the efficacy and potential utility of our approach by placing a key metabolic enzyme under optogenetic control, thus enabling dynamic regulation of the culture growth rate with potential applications in bacterial physiology studies and biotechnology. PMID:27562138

  17. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min^-1 (100 g)^-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min^-1 (100 g)^-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min^-1 (100 g)^-1 and R^2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min^-1 (100 g)^-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
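
    A minimal sketch of the agreement analysis reported above, computing the bias (mean difference) and 95% limits of agreement between two techniques measuring the same quantity:

        import numpy as np

        def bland_altman(method_a, method_b):
            # Paired measurements of the same quantity by two techniques,
            # e.g. DCE-CT vs microsphere HABF values.
            diff = np.asarray(method_a, float) - np.asarray(method_b, float)
            bias = diff.mean()                      # mean difference
            half_width = 1.96 * diff.std(ddof=1)    # 95% limits of agreement
            return bias, bias - half_width, bias + half_width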

  18. Effects of shortened acquisition time on accuracy and precision of quantitative estimates of organ activity

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Purpose: Quantitative estimation of in vivo organ uptake is an essential part of treatment planning for targeted radionuclide therapy. This usually involves the use of planar or SPECT scans with acquisition times chosen based more on image quality considerations than on the minimum needed for precise quantification. In previous simulation studies at clinical count levels (185 MBq 111In), the authors observed larger variations in accuracy of organ activity estimates resulting from anatomical and uptake differences than from statistical noise. This suggests that it is possible to reduce the acquisition time without substantially increasing the variation in accuracy. Methods: To test this hypothesis, the authors compared the accuracy and variation in accuracy of organ activity estimates obtained from planar and SPECT scans at various count levels. A simulated phantom population with realistic variations in anatomy and biodistribution was used to model variability in a patient population. Planar and SPECT projections were simulated using previously validated Monte Carlo simulation tools. The authors simulated the projections at count levels approximately corresponding to 1.5-30 min of total acquisition time. The projections were processed using previously described quantitative SPECT (QSPECT) and planar (QPlanar) methods. The QSPECT method was based on the OS-EM algorithm with compensations for attenuation, scatter, and collimator-detector response. The QPlanar method is based on the ML-EM algorithm using the same model-based compensation for all the image-degrading effects as the QSPECT method. The volumes of interest (VOIs) were defined based on the true organ configuration in the phantoms. The errors in organ activity estimates from different count levels and processing methods were compared in terms of mean and standard deviation over the simulated phantom population. Results: There was little degradation in quantitative reliability when the acquisition time was
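
    Both QSPECT and QPlanar are built on the EM family of estimation algorithms. A bare ML-EM update for a Poisson measurement model y ~ Poisson(A x) looks as follows; the compensations for attenuation, scatter and collimator-detector response named above live inside the system matrix A and are not modelled in this sketch.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            # ML-EM for y ~ Poisson(A @ x); A: (n_bins, n_voxels) system matrix.
            x = np.ones(A.shape[1])
            sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
            for _ in range(n_iter):
                proj = A @ x                   # forward projection
                ratio = y / np.maximum(proj, eps)
                x *= (A.T @ ratio) / np.maximum(sens, eps)
            return x

    OS-EM accelerates this by applying the same multiplicative update over disjoint subsets of projection bins in turn.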

  19. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but it is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons, after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  20. Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2011-12-01

    Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
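
    As a minimal sketch of the kind of adaptive robustness measure described above, a fully coupled Newton-style solve can be relaxed to a Picard (fixed-point) iteration, in which each step solves the governing equations linearized about the previous iterate. Here solve_linearized is a hypothetical, problem-specific callable, not part of any particular library.

        import numpy as np

        def picard(solve_linearized, x0, tol=1e-8, max_iter=100):
            # Fixed-point iteration: x_{k+1} = G(x_k), where G solves the
            # system with its nonlinear coefficients frozen at x_k.
            x = np.asarray(x0, float)
            for k in range(max_iter):
                x_new = solve_linearized(x)
                if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
                    return x_new, k + 1        # converged
                x = x_new
            raise RuntimeError("Picard iteration did not converge")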

  1. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias is canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
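
    The bias cancellation works because the receiver hardware delay is common to every satellite observed by one receiver, so differencing between satellites removes it. A minimal sketch:

        def satellite_differenced(slant_delays, ref_sat):
            # slant_delays: mapping sat_id -> estimated slant ionosphere delay (m),
            # each contaminated by the same receiver hardware delay bias b.
            # (d_s + b) - (d_ref + b) = d_s - d_ref, so b cancels.
            ref = slant_delays[ref_sat]
            return {s: d - ref for s, d in slant_delays.items() if s != ref_sat}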

  2. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In addition, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.
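
    Bracketing with an isotopic standard corrects the measured ratio for instrumental bias by interpolating the standard's measured ratio to the time of the sample run. A generic sketch of the idea, not necessarily the authors' specific data reduction:

        def bracket_correct(r_sample, r_std_before, r_std_after, r_std_true):
            # Interpolate the standard's measured ratio across the sample run,
            # then scale the sample ratio by the standard's known/measured ratio.
            r_std_meas = 0.5 * (r_std_before + r_std_after)
            return r_sample * (r_std_true / r_std_meas)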

  4. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

    Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, few published echoPIV studies are clinical, demonstrating that the method is not yet broadly accepted within the medical community. This is because contrast agents are typically reserved for subjects whose initial evaluation produced very low quality recordings. High background noise and low contrast levels therefore characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.
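
    The idea behind an ensemble correlation is to average the cross-correlation planes of several consecutive frame pairs before locating the displacement peak, so that random noise peaks cancel while the true displacement peak reinforces. A minimal FFT-based sketch under simplifying assumptions (circular correlation, no windowing, no subpixel peak fit), not the paper's full pipeline:

        import numpy as np

        def ensemble_correlation(a_windows, b_windows):
            # a_windows, b_windows: sequences of 2-D interrogation windows
            # (numpy arrays) from consecutive frame pairs at the same location.
            acc = 0.0
            for a, b in zip(a_windows, b_windows):
                fa, fb = np.fft.rfft2(a), np.fft.rfft2(b)
                acc = acc + np.fft.fftshift(np.fft.irfft2(fa.conj() * fb, s=a.shape))
            return acc / len(a_windows)   # averaged correlation plane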

  5. On accuracy, robustness, and security of bag-of-word search systems

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Svyatoslav; Diephuis, Maurits; Kostadinov, Dimche; Farhadzadeh, Farzad; Holotyak, Taras

    2014-02-01

    In this paper, we present a statistical framework for the analysis of the performance of Bag-of-Words (BOW) systems. The paper aims at establishing a better understanding of the impact of different elements of BOW systems such as the robustness of descriptors, accuracy of assignment, descriptor compression and pooling, and finally decision making. We also study the impact of geometrical information on the BOW system performance and compare the results with different pooling strategies. The proposed framework can also be of interest for a security and privacy analysis of BOW systems. The experimental results on real images and descriptors confirm our theoretical findings. Notation: we use capital letters X to denote scalar random variables and bold capitals X to denote vector random variables, with the corresponding small letters x and x denoting their realisations. X ~ p_X(x), or simply X ~ p(x), indicates that a random variable X is distributed according to p_X(x). N(μ, σ_X^2) stands for the Gaussian distribution with mean μ and variance σ_X^2. B(L, P_b) denotes the binomial distribution with sequence length L and probability of success P_b. ||·|| denotes the Euclidean vector norm, Q(·) stands for the Q-function, D(·||·) denotes the divergence, and E{·} denotes the expectation.

  6. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or with severely broken and missing teeth.

  7. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

    This study aimed to create an evidence base for the detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) of five published algorithms for the detection of hoof-on and hoof-off, using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration, with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot we derived a combination of method A for hoof-on and method E for hoof-off, resulting in a bias of -6.2% of stance in the front limbs, and method B for the hind limbs, with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground, and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot. PMID:24529754
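
    Most of the listed algorithms reduce to thresholding a kinematic signal. An illustrative sketch of the simplest family (method A-style velocity thresholding); the 0.05 m/s threshold is a hypothetical value, not taken from the paper:

        import numpy as np

        def stance_mask(hoof_x, t, v_thresh=0.05):
            # Numerically differentiate horizontal hoof position, then flag
            # samples where |velocity| stays below the threshold as stance.
            v = np.gradient(np.asarray(hoof_x, float), np.asarray(t, float))
            return np.abs(v) < v_thresh   # boolean per-sample stance mask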

  8. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach to background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, background interferences, if present, can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity as the analyst has to select areas in the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
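
    A minimal sketch of the underlying idea: fit a curved (here exponential) background through many off-peak measurements and statistically reject outlying points before the final fit. The 2-sigma rejection rule is an assumption for illustration, not the paper's exact criterion; counts must be positive for the log-linear fit.

        import numpy as np

        def fit_exponential_background(pos, counts, n_sigma=2.0):
            # Log-linear least squares gives an exponential background model.
            def fit(p, c):
                slope, intercept = np.polyfit(p, np.log(c), 1)
                return lambda x: np.exp(intercept + slope * x)
            pos, counts = np.asarray(pos, float), np.asarray(counts, float)
            model = fit(pos, counts)
            resid = counts - model(pos)
            keep = np.abs(resid) <= n_sigma * resid.std(ddof=1)
            return fit(pos[keep], counts[keep])   # refit on accepted points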

  9. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, a key requirement for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. The primary variables we are testing include: 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point

  10. Precision, accuracy, and application of diver-towed underwater GPS receivers.

    PubMed

    Schories, Dirk; Niedzwiedz, Gerd

    2012-04-01

    Diver-towed global positioning system (GPS) handhelds have been used for a few years in underwater monitoring studies. We modeled the accuracy of this method using the software KABKURR, originally developed by the University of Rostock for fishing and marine engineering. Additionally, three field experiments were conducted to estimate the precision of the method and apply it in the field: (1) an experiment with underwater transects from 5 to 35 m in the Southern Chile fjord region, (2) a transect from 5 to 30 m under extreme climatic conditions in the Antarctic, and (3) an underwater tracking experiment at Lake Ranco, Southern Chile. The coiled cable length in relation to water depth is the main error source, besides the signal quality of the GPS, under calm weather conditions. The forces used in the model resulted in a displacement of 2.3 m at a depth of 5 m, 3.2 m at 10 m, 4.6 m at 20 m, 5.5 m at 30 m, and 6.8 m at 40 m, when the cable was extended only 0.5 m beyond the water depth. The GPS buoy requires good buoyancy in order to keep its position at the water surface while the diver tries to minimize any additional cable extension error. The diver has to apply a tensile force to shorten the cable length at the lower cable end. Repeated diving along transect lines from 5 to 35 m resulted in only small deviations independent of water depth, indicating the precision of the method for monitoring studies. Routing of given reference points with a Garmin 76CSx handheld placed in an underwater housing resulted in mean deviations of less than 6 m at a water depth of 10 m. Thus, we can confirm that diver-towed GPS handhelds give promising results when used for underwater research in shallow water and open a wide field of applicability, but no submeter accuracy is possible due to the different error sources. PMID:21614620
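
    The depth-dependent displacements above are close to what simple right-triangle geometry predicts for a taut cable that is 0.5 m longer than the water depth; the small remaining differences presumably reflect the forces in the authors' model.

        import math

        def buoy_offset(depth_m, extra_cable_m=0.5):
            # Horizontal offset of the surface buoy when the straight cable
            # is extra_cable_m longer than the water depth.
            cable = depth_m + extra_cable_m
            return math.sqrt(cable ** 2 - depth_m ** 2)

        # buoy_offset(5) -> 2.29 m, buoy_offset(10) -> 3.20 m,
        # buoy_offset(30) -> 5.50 m (compare 2.3, 3.2 and 5.5 m above)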

  11. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space systems designs as well as many new commercial products. The image that one conjures up when the word appears is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environmental fluctuations), and operational approaches. These must be traded, together with concepts, materials, and fabrication approaches, against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design. First, however, he must understand the definition and goals of robustness. This paper deals with these issues as well as the rationale for requiring robustness.

  12. Welcome detailed data, but with a grain of salt: accuracy, precision, uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Di Baldassarre, Giuliano; Todini, Ezio

    2013-04-01

    New survey techniques are providing a huge amount of highly detailed and accurate data which can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to provide reliable flood risk mapping and predictions. We think that these data should form the basis of hydraulic modelling whenever they are available. However, high expectations regarding these datasets should be tempered, as some important issues should be considered. These include: the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, due to the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution results, as the accuracy of results is not necessarily increased by higher precision. We briefly discuss these issues and existing approaches which can be used to manage highly detailed data. In our opinion, methods based on sub-grid and roughness-upscaling treatments would in many instances be an appropriate solution to maintain consistency with the uncertainty related to model structure and the data available for model building and evaluation.

  13. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E. )

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm^2 gray matter ROIs showed negative biases of 6% ± 2% which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.

  14. Modeling precision and accuracy of a LWIR microgrid array imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Boger, James K.; Tyo, J. Scott; Ratliff, Bradley M.; Fetrow, Matthew P.; Black, Wiley T.; Kumar, Rakesh

    2005-08-01

    Long-wave infrared (LWIR) imaging is a prominent and useful technique for remote sensing applications. Moreover, polarization imaging has been shown to provide additional information about the imaged scene. However, polarization estimation requires that multiple measurements be made of each observed scene point under optically different conditions. This challenging measurement strategy makes the polarization estimates prone to error. The sources of this error differ depending upon the type of measurement scheme used. In this paper, we examine one particular measurement scheme, namely, a simultaneous multiple-measurement imaging polarimeter (SIP) using a microgrid polarizer array. The imager is composed of a microgrid polarizer masking a LWIR HgCdTe focal plane array (operating at 8.3-9.3 μm), and is able to make simultaneous modulated scene measurements. We present an analytical model that is used to predict the performance of the system in order to help interpret real results. This model is radiometrically accurate and accounts for the temperature of the camera system optics, spatial nonuniformity and drift, optical resolution, and other sources of noise. The model is then used in simulation and validated against laboratory measurements. The precision and accuracy of the SIP instrument are then studied.
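
    In a microgrid polarimeter, each 2x2 super-pixel carries polarizers at four orientations, from which the linear Stokes images follow directly. A sketch assuming a [[0, 45], [135, 90]] degree layout; actual layouts vary by device and are not specified in the abstract.

        import numpy as np

        def stokes_from_microgrid(img):
            # Demosaic by striding over the 2x2 super-pixel pattern
            # (img must have even dimensions).
            i0   = img[0::2, 0::2].astype(float)   # 0 degrees
            i45  = img[0::2, 1::2].astype(float)   # 45 degrees
            i135 = img[1::2, 0::2].astype(float)   # 135 degrees
            i90  = img[1::2, 1::2].astype(float)   # 90 degrees
            s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity
            s1 = i0 - i90                          # 0/90 linear difference
            s2 = i45 - i135                        # 45/135 linear difference
            return s0, s1, s2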

  15. Evaluation of Precise Point Positioning accuracy under large total electron content variations in equatorial latitudes

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bilbao, I.; Moreno Monge, B.; Rodríguez-Caderot, G.; Herraiz, M.; Radicella, S. M.

    2015-01-01

    The ionosphere is one of the largest contributors to errors in GNSS positioning. Although in Precise Point Positioning (PPP) the ionospheric delay is corrected to first order through the 'iono-free combination', significant errors may still be observed when large electron density gradients are present. To confirm this phenomenon, the temporal behavior of intense fluctuations of total electron content (TEC) and PPP altitude accuracy at equatorial latitudes are analyzed during four years of different solar activity. For this purpose, equatorial plasma irregularities are identified with periods of a high rate of change of TEC (ROT). The largest ROT values are observed from 19:00 to 01:00 LT, especially around the magnetic equinoxes, although some differences exist between the stations depending on their location. The highest ROT values are observed in the American and African regions. In general, large ROT events are accompanied by frequent satellite signal losses and an increase in the PPP altitude error during the years 2001, 2004 and 2011. A significant increase in the PPP altitude error RMS is observed in epochs of high ROT with respect to epochs of low ROT in the years 2001, 2004 and 2011, reaching up to 0.26 m in the 19:00-01:00 LT period.
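
    ROT is simply the first difference of a TEC time series, conventionally expressed in TECU per minute; a minimal sketch:

        import numpy as np

        def rate_of_tec(tec_tecu, sample_minutes=1.0):
            # ROT in TECU/min from TEC samples taken every sample_minutes.
            return np.diff(np.asarray(tec_tecu, float)) / sample_minutes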

  16. David Weston--Ocean science of invariant principles, total accuracy, and appropriate precision

    NASA Astrophysics Data System (ADS)

    Roebuck, Ian

    2002-11-01

    David Weston spent his entire professional career in the Royal Navy Scientific Service, working in the field of ocean acoustics and its applications to maritime operations. The breadth of his interests has often been remarked upon, but because of the sensitive nature of his work at the time, it was indeed much more diverse than his published papers showed. This presentation, from the successors to the laboratories he illuminated for many years, is an attempt to fill in at least some of the gaps. The presentation also focuses on the underlying scientific philosophy of David's work, rooted in the British tradition of applicable mathematics and physics. A deep appreciation of the role of invariants and dimensional methods, and an awareness of the sensitivity of any model to changes in its input assumptions, were at the heart of his approach. The needs of the Navy kept him rigorous in requiring accuracy, and clear about the distinction between accuracy and precision. Examples of these principles are included, still as relevant today as they were when he insisted on applying them 30 years ago.

  17. Sub-nm accuracy metrology for ultra-precise reflective X-ray optics

    NASA Astrophysics Data System (ADS)

    Siewert, F.; Buchheim, J.; Zeschke, T.; Brenner, G.; Kapitzki, S.; Tiedtke, K.

    2011-04-01

    The transport and monochromatization of synchrotron light from a highly brilliant, laser-like source to the experimental station without significant loss of brilliance and coherence is a challenging task in X-ray optics and requires optical elements of the utmost accuracy. These are wave-front preserving plane mirrors with lengths of up to 1 m, characterized by residual slope errors in the range of 0.05 μrad (rms) and values of 0.1 nm (rms) for micro-roughness. In the case of focusing optical elements like elliptical cylinders, the required residual slope error is in the range of 0.25 μrad rms and better. In addition, the alignment of optical elements is a critical, beamline-performance-limiting topic. Thus the characterization of ultra-precise reflective optical elements for FEL-beamline application, in the free and mounted states, is of significant importance. We will discuss recent results in the field of metrology achieved at the BESSY-II Optics Laboratory (BOL) of the Helmholtz Zentrum Berlin (HZB) by use of the Nanometer Optical Component Measuring Machine (NOM). Different types of mirror have been inspected by line-scan and slope mapping in the free and mounted states. Based on these results, the mirror clamping of a combined mirror/grating set-up for the BL-beamlines at FLASH was improved.

  18. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    SciTech Connect

    Cleveland, Mathew A. Brunner, Thomas A.; Gentile, Nicholas A.; Keasler, Jeffrey A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo simulations, both domain-replicated and domain-decomposed, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy, by rounding double precision numbers to fewer significant digits. This integer approach, and other extended- and reduced-precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary-precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
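
    The order-dependence is easy to demonstrate, as is one extended-precision remedy: Python's math.fsum tracks partial sums exactly and returns the correctly rounded double result regardless of summation order, analogous in spirit to the high-precision accumulators described above.

        import math, random

        rng = random.Random(0)
        values = [rng.uniform(-1e12, 1e12) for _ in range(100_000)]
        shuffled = list(values)
        rng.shuffle(shuffled)

        # Naive left-to-right sums usually differ after reordering...
        print(sum(values) == sum(shuffled))              # often False
        # ...while exactly rounded sums are order-independent.
        print(math.fsum(values) == math.fsum(shuffled))  # True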

  19. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  20. Measurement Precision and Accuracy of the Centre Location of an Ellipse by Weighted Centroid Method

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2015-03-01

    Circular targets are often utilized in photogrammetry, and a circle on a plane is projected as an ellipse onto an oblique image. This paper reports an analysis conducted in order to investigate the measurement precision and accuracy of the centre location of an ellipse on a digital image by an intensity-weighted centroid method. An ellipse with a semi-major axis a, a semi-minor axis b, and a rotation angle θ of the major axis is investigated. In the study an equivalent radius r = (a^2 cos^2 θ + b^2 sin^2 θ)^(1/2) is adopted as a measure of the dimension of an ellipse. First an analytical expression representing a measurement error (ε_x, ε_y) is obtained. Then variances V_x of ε_x are obtained at 1/256 pixel intervals from 0.5 to 100 pixels in r by numerical integration, because a formula representing V_x is unable to be obtained analytically when r > 0.5. The results of the numerical integration indicate that V_x would oscillate in a 0.5 pixel cycle in r and V_x excluding the oscillation component would be inversely proportional to the cube of r. Finally an effective approximate formula of V_x from 0.5 to 100 pixels in r is obtained by least-squares adjustment. The obtained formula is a fractional expression whose numerator is a fifth-degree polynomial of {r - 0.5 × int(2r)} expressing the oscillation component and whose denominator is the cube of r. Here int(x) is the function that returns the integer part of the value x. Coefficients of the fifth-degree polynomial of the numerator can be expressed by a quadratic polynomial of {0.5 × int(2r) + 0.25}.
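
    For reference, the estimator under analysis is straightforward; a sketch of an intensity-weighted centroid over an image window containing the target:

        import numpy as np

        def weighted_centroid(window):
            # Centroid of pixel coordinates weighted by pixel intensity.
            w = np.asarray(window, float)
            ys, xs = np.mgrid[0:w.shape[0], 0:w.shape[1]]
            total = w.sum()
            return (xs * w).sum() / total, (ys * w).sum() / total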

  1. Accuracy, precision and response time of consumer bimetal and digital thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Three models each of consumer instant-read bimetal and digital thermometers were tested for accuracy, precision and response time compared to a calibrated thermocouple in cooked 80 percent and 90 percent lean ground beef patties and boneless and bone-in split chicken breasts. At the recommended inse...

  2. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As a first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, O3 chemistry can be studied in isolation, without the complications of heterogeneous chemistry and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between the predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japanese Aerospace eXploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and is mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range of 38S to 65N from October 2009 to April 2010. A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  3. Analysis of the Accuracy and Robustness of the Leap Motion Controller

    PubMed Central

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-01-01

    The Leap Motion Controller is a new device for hand-gesture-controlled user interfaces with a declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of its accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. A deviation between the desired 3D position and the average measured position below 0.2 mm was thereby obtained for static setups, and of 1.2 mm for dynamic setups. The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction. PMID:23673678

  4. Precise Point Positioning for the Efficient and Robust Analysis of GPS Data from Large Networks

    NASA Technical Reports Server (NTRS)

    Zumberge, J. F.; Heflin, M. B.; Jefferson, D. C.; Watkins, M. M.; Webb, F. H.

    1997-01-01

    Networks of dozens to hundreds of permanently operating precision Global Positioning System (GPS) receivers are emerging at spatial scales that range from 10^0 to 10^3 km. To keep the computational burden associated with the analysis of such data economically feasible, one approach is to first determine precise GPS satellite positions and clock corrections from a globally distributed network of GPS receivers. Then, data from the local network are analyzed by estimating receiver-specific parameters with receiver-specific data; satellite parameters are held fixed at their values determined in the global solution. This "precise point positioning" allows analysis of data from hundreds to thousands of sites every day with 40-Mflop computers, with results comparable in quality to the simultaneous analysis of all data. The reference frames for the global and network solutions can be free of distortion imposed by erroneous fiducial constraints on any sites.

  6. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse-engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement are affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95% CI of the bias ranged from -0.28 mm to 0.30 mm for translation and from -2.3° to 2.5° for rotation. In the clinical setting, precision was better than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short stem, individual models are not necessary. J Orthop Res 34:903-910, 2016. PMID:26553748

  7. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course.

    PubMed

    Sohn, Hansem; Lee, Sang-Hun

    2013-01-01

    Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing. PMID:23076112
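
    The Bayesian-observer account combines a prior over intervals with a measurement likelihood. For the Gaussian case, shown here as a generic sketch (the paper's actual prior and likelihood parameterizations may differ), the posterior has a closed form, and shrinking the prior width over sessions mimics the reported slow improvement in precision.

        def gaussian_posterior(x_measured, mu_prior, var_prior, var_likelihood):
            # Precision-weighted fusion of prior belief and noisy measurement.
            w = var_prior / (var_prior + var_likelihood)   # weight on the data
            mu_post = w * x_measured + (1.0 - w) * mu_prior
            var_post = var_prior * var_likelihood / (var_prior + var_likelihood)
            return mu_post, var_post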

  8. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Accuracy Analysis

    NASA Astrophysics Data System (ADS)

    Sarrazin, F.; Pianosi, F.; Hartmann, A. J.; Wagener, T.

    2014-12-01

    Sensitivity analysis aims to characterize the impact that changes in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). It is a valuable diagnostic tool for model understanding and improvement, it enhances calibration efficiency, and it supports uncertainty and scenario analysis. It is of particular interest for environmental models because they are often complex, non-linear, non-monotonic, and exhibit strong interactions between their parameters. However, sensitivity analysis has to be carefully implemented to produce reliable results at moderate computational cost. For example, sample size can have a strong impact on the results and has to be carefully chosen, yet there is little guidance available for this step in environmental modelling. The objective of the present study is to provide guidelines for a robust sensitivity analysis, in order to support modellers in making appropriate choices for its implementation and in interpreting its outcome. We considered hydrological models with increasing levels of complexity. We tested four sensitivity analysis methods: Regional Sensitivity Analysis, the Method of Morris, a density-based method (PAWN) and a variance-based method (Sobol). The convergence and variability of sensitivity indices were investigated. We used bootstrapping to assess and improve the robustness of sensitivity indices even for limited sample sizes. Finally, we propose a quantitative validation approach for sensitivity analysis based on the Kolmogorov-Smirnov statistic.
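
    A minimal sketch of the bootstrapping step: resample the Monte Carlo sample with replacement, recompute the sensitivity index each time, and report a percentile confidence interval. Here index_fn is a hypothetical callable standing in for any of the four methods.

        import numpy as np

        def bootstrap_ci(samples, index_fn, n_boot=1000, alpha=0.05, seed=0):
            # samples: (n, ...) numpy array of the inputs/outputs index_fn needs.
            rng = np.random.default_rng(seed)
            n = samples.shape[0]
            stats = [index_fn(samples[rng.integers(0, n, n)])
                     for _ in range(n_boot)]
            return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))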

  9. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and the Normalized Difference Vegetation Index (NDVI), applied to Landsat TM data are assessed quantitatively using high-precision in situ data. In Owens Valley, California, we have 6 years of continuous field data (1991-1996) for 33 sites acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0% LC and estimates of change in live cover have a precision of ±3.8% LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
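
    The two estimators compared above are easy to state: NDVI is a band ratio, while SMA solves a small per-pixel least-squares unmixing problem. The sketch below shows the unconstrained form; operational SMA typically adds sum-to-one and non-negativity constraints on the fractions.

        import numpy as np

        def ndvi(red, nir):
            # Normalized Difference Vegetation Index.
            return (nir - red) / (nir + red)

        def sma_fractions(pixel_spectrum, endmember_spectra):
            # Solve pixel ~ E @ f for endmember fractions f by least squares;
            # endmember_spectra E has one column per endmember (e.g. green
            # vegetation, soil, shade).
            f, *_ = np.linalg.lstsq(endmember_spectra, pixel_spectrum, rcond=None)
            return f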

  10. Accuracy and precision of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

    Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion/optimisation approach, a forward model is used to simulate the water reflectance spectra from a set of parameters, and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper, Particle Swarm Optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depend on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band-specific, empirically estimated values for the anisotropy in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality for freshwater impoundments.
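
    A minimal PSO sketch of the inversion step: objective(x) would wrap the forward model and one of the similarity measures, returning the mismatch between simulated and observed spectra. All names, bounds, and the standard inertia/acceleration constants here are illustrative, not taken from the paper.

        import numpy as np

        def pso(objective, lo, hi, n_particles=30, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            # lo, hi: per-parameter bounds (1-D arrays of hydrosol concentrations).
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, (n_particles, lo.size))   # positions
            v = np.zeros_like(x)                              # velocities
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, lo.size))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()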

  11. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combining the advantages of the PPLTP with the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, along with our optimized noise-reduction analytic method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.

  12. Accuracy and robustness of a simple algorithm to measure vessel diameter from B-mode ultrasound images.

    PubMed

    Hunt, Brian E; Flavin, Daniel C; Bauschatz, Emily; Whitney, Heather M

    2016-06-01

    Measurement of changes in arterial vessel diameter can be used to assess the state of cardiovascular health, but the use of such measurements as biomarkers is contingent upon the accuracy and robustness of the measurement. This work presents a simple algorithm for measuring diameter from B-mode images derived from vascular ultrasound. The algorithm is based upon Gaussian curve fitting and a Viterbi search process. We assessed the accuracy of the algorithm by measuring the diameter of a digital reference object (DRO) and ultrasound-derived images of a carotid artery. We also assessed the robustness of the algorithm by manipulating the quality of the image. Across a broad range of signal-to-noise ratios and with varying image edge error, the algorithm measured vessel diameter to within 0.7% of the creation dimensions of the DRO. A similar level of difference (0.8%) was observed when an ultrasound image was used. When SNR dropped to 18 dB, measurement error increased to 1.3%. When edge position was varied by as much as 10%, measurement error remained between 0.68% and 0.75%. All these errors fall well within the margin of error established by the medical physics community for quantitative ultrasound measurements. We conclude that this simple algorithm provides consistent and accurate measurement of lumen diameter from B-mode images across a broad range of image quality. PMID:27055985
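
    The published algorithm couples Gaussian curve fitting with a Viterbi search across scan lines; the sketch below illustrates only the Gaussian sub-pixel step, locating two wall echoes in a synthetic B-mode intensity profile and taking their separation as the diameter. The profile shape and parameters are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) + c

def wall_position(profile, guess, half_width=10):
    """Fit a Gaussian to one wall echo within a window of the transverse
    intensity profile; the fitted centre is a sub-pixel wall position."""
    x = np.arange(guess - half_width, guess + half_width)
    seg = profile[x]
    p0 = (seg.max() - seg.min(), float(guess), 2.0, seg.min())
    popt, _ = curve_fit(gaussian, x, seg, p0=p0)
    return popt[1]

# Synthetic B-mode-like line profile across a vessel: two bright wall
# echoes (centres 30 and 70 px) around a dark lumen, plus noise.
x = np.arange(100, dtype=float)
profile = (gaussian(x, 1.0, 30.0, 2.0, 0.1)
           + gaussian(x, 1.0, 70.0, 2.0, 0.0)
           + np.random.default_rng(1).normal(0, 0.02, x.size))

near, far = wall_position(profile, 30), wall_position(profile, 70)
print(f"diameter = {far - near:.2f} px")   # ~40 px
```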

  13. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436
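
    The two gKL mechanisms described above can be caricatured with a toy point-neuron simulation: the conductance both shortens the resting membrane time constant and grows with depolarisation, so late-arriving inputs are suppressed. This is a sketch with arbitrary parameters, not the silicon or biological model itself.

```python
import numpy as np

# Toy point-neuron illustration of the two gKL mechanisms (all values
# arbitrary): a static contribution at rest plus dynamic activation.
dt, t = 0.01, np.arange(0.0, 20.0, 0.01)          # ms
EL, EK, Vth = -65.0, -90.0, -50.0                 # mV
C, gL, gKL_max, tau_w = 1.0, 0.1, 0.4, 1.0        # nF, uS, uS, ms

def simulate(input_times, dynamic_gkl=True):
    V, w, spikes = EL, 0.0, []
    for ti in t:
        I = sum(15.0 * np.exp(-(ti - ts) / 0.5)
                for ts in input_times if ti >= ts)    # nA, fast EPSCs
        if dynamic_gkl:
            w_inf = 1.0 / (1.0 + np.exp(-(V + 60.0) / 6.0))
            w += dt * (w_inf - w) / tau_w             # gKL activation
            gK = gKL_max * w
        else:
            gK = gKL_max * 0.3                        # equivalent static leak
        V += dt * (-gL * (V - EL) - gK * (V - EK) + I) / C
        if V >= Vth:
            spikes.append(round(ti, 2))
            V = EL
    return spikes

print(simulate([5.0, 5.1, 5.2]))   # coincident inputs: should spike
print(simulate([5.0, 7.0, 9.0]))   # late, spread inputs: suppressed
```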

  14. Robust Heterogeneous Anisotropic Elastic Network Model Precisely Reproduces the Experimental B-factors of Biomolecules.

    PubMed

    Xia, Fei; Tong, Dudu; Lu, Lanyuan

    2013-08-13

    A computational method called progressive fluctuation matching (PFM) is developed for constructing robust heterogeneous anisotropic network models (HANMs) for biomolecular systems. An HANM derived through the PFM approach consists of harmonic springs with realistic positive force constants, and yields calculated B-factors that are essentially identical to the experimental ones. For the four tested protein systems, including crambin, trypsin inhibitor, HIV-1 protease, and lysozyme, the root-mean-square deviations between the experimental and the computed B-factors are only 0.060, 0.095, 0.247, and 0.049 Å², respectively, and the correlation coefficients are 0.99 for all. By comparing the HANM/ANM normal modes to their counterparts derived from both an atomistic force field and an NMR structure ensemble, it is found that HANM may provide more accurate results on protein dynamics. PMID:26584122
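
    For readers unfamiliar with how an elastic network yields B-factors at all, the sketch below computes them from a plain Gaussian network model (isotropic, uniform spring constant); the paper's PFM/HANM procedure additionally fits heterogeneous spring constants until the computed B-factors match experiment. Coordinates and constants here are illustrative.

```python
import numpy as np

def gnm_bfactors(coords, cutoff=7.5, k=1.0, temperature=300.0):
    """Plain Gaussian-network-model B-factors (an illustration of the
    elastic-network idea, not the paper's PFM/HANM procedure): B_i is
    proportional to the i-th diagonal element of the pseudoinverse of
    the Kirchhoff (connectivity) matrix."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)      # -1 for contacting pairs
    np.fill_diagonal(kirchhoff, 0.0)
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # node degrees
    gamma_inv = np.linalg.pinv(k * kirchhoff)    # pinv drops the zero mode
    kB = 0.0019872                               # kcal/(mol K)
    return (8.0 * np.pi ** 2 * kB * temperature / 3.0) * np.diag(gamma_inv)

# Toy "structure": 50 pseudo-C-alpha coordinates on a noisy helix.
rng = np.random.default_rng(2)
s = np.arange(50) * 1.5
coords = np.c_[8 * np.cos(s / 3), 8 * np.sin(s / 3), s] + rng.normal(0, 0.3, (50, 3))
print(gnm_bfactors(coords)[:5])    # flexible chain ends get larger B
```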

  15. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy.

    PubMed

    Du, Weiliang; Yang, James

    2009-02-01

    Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error translates to 0.02 mm, which is much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation. PMID:19124954
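
    The flavour of the approach can be seen in a bare-bones circular Hough transform with centroid-based sub-pixel refinement, shown below on a synthetic circular field; this is an illustration of the idea, not the authors' algorithm, and all parameter choices are arbitrary.

```python
import numpy as np

def hough_circle_center(image, radius, threshold=0.5):
    """Minimal circular Hough transform with sub-pixel refinement.
    Each edge pixel votes for centres one radius away along its
    gradient; the accumulator peak is refined by a weighted centroid."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    edges = np.argwhere(mag > threshold * mag.max())
    acc = np.zeros_like(image, dtype=float)
    for y, x in edges:
        ny, nx = gy[y, x] / mag[y, x], gx[y, x] / mag[y, x]
        for sign in (+1, -1):                 # vote on both gradient sides
            cy = int(round(y + sign * radius * ny))
            cx = int(round(x + sign * radius * nx))
            if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                acc[cy, cx] += mag[y, x]
    py, px = np.unravel_index(acc.argmax(), acc.shape)
    w = acc[py - 2:py + 3, px - 2:px + 3]     # 5x5 neighbourhood centroid
    yy, xx = np.mgrid[py - 2:py + 3, px - 2:px + 3]
    return (yy * w).sum() / w.sum(), (xx * w).sum() / w.sum()

# Synthetic circular "radiation field" centred at (64.3, 61.7), radius 20.
yy, xx = np.mgrid[0:128, 0:128]
field = 1.0 / (1.0 + np.exp(np.hypot(yy - 64.3, xx - 61.7) - 20.0))
print(hough_circle_center(field, radius=20))  # close to (64.3, 61.7)
```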

  16. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket, and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement at levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing them with the height-adjustable, rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e., away from the dip direction) within the trace of the laser plane; and d) improved measurement of bedding dip and direction, necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify the parallelism of a geological surface with structural dip by creating a visual planar datum in the field, thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  17. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM-driven offline robotic programming environment) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot's performance, owing to variations in individual robot repeatability and thermal expansion of robot linkages. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. Current Internet-based manufacturing technology is limited to a web camera for live image transfer, which poses a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of a robot critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, remote users have to assume the robot performance envelope provided by the manufacturer, which poses a potentially serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with the specific aim of calibrating the accuracy of a micro precision robotic system through a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  18. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    PubMed Central

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods used in previous studies. In the OE and EM data collections, specific landmarks on the rigid bodies were determined using digitization. In the EM+CT and OE+CT data sets, the landmark locations were obtained from the CT images. 3D linear distances and 3D curvilinear path distances were calculated for the points. The accuracy and precision for all 4 methods were evaluated (EM, OE, EM+CT and OE+CT). In addition, results were compared with and without the CT imaging (EM vs. EM+CT, OE vs. OE+CT). All systems overestimated the actual 3D curvilinear path lengths. All systems also underestimated the actual rotation values. The accuracy of all methods was within 0.5 mm for 3D curvilinear path calculations, 0.05 mm for 3D linear distance calculations, and 0.2° for rotation calculations. In addition, Bland-Altman plots for each configuration of the systems suggest that measurements obtained from either system are repeatable and comparable. PMID:18617178
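
    The two distance measures used above are straightforward to compute from a tracked landmark trajectory, and the sketch below also illustrates why curvilinear path lengths tend to be overestimated: zero-mean tracking noise only ever adds positive increments to the summed path. Trajectory and noise values are invented.

```python
import numpy as np

def linear_distance(path):
    """3D straight-line distance between first and last landmark positions."""
    return np.linalg.norm(path[-1] - path[0])

def curvilinear_length(path):
    """3D path distance: summed lengths of successive displacements.
    Measurement noise only adds positive increments, one reason tracked
    curvilinear lengths tend to be overestimated."""
    return np.linalg.norm(np.diff(path, axis=0), axis=1).sum()

# Landmark trajectory as an (N, 3) array of positions in mm.
rng = np.random.default_rng(3)
t = np.linspace(0, np.pi, 200)
true_path = np.c_[10 * np.sin(t), 5 * (1 - np.cos(t)), np.zeros_like(t)]
noisy_path = true_path + rng.normal(0, 0.05, true_path.shape)

print(linear_distance(noisy_path), curvilinear_length(true_path),
      curvilinear_length(noisy_path))   # noise inflates the path length
```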

  19. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  1. Robust elastic network model: A general modeling for precise understanding of protein dynamics.

    PubMed

    Kim, Min Hyeok; Lee, Byung Ho; Kim, Moon Ki

    2015-06-01

    In the study of protein dynamics relevant to function, normal mode analysis based on elastic network models (ENMs) has become popular. These models are usually validated by comparing the calculated atomic fluctuations for a single protein in a vacuum to experimental temperature factors in the crystal packing state. In addition to neglecting the crystal packing effect, their arbitrary assignment of spring constants leads to inaccurate simulation results, yielding a low B-factor correlation. To overcome this limitation, we propose a robust elastic network model (RENM) that not only considers the crystalline effect by using symmetric constraint information but also uses lumped masses and specific spring constants based on the type of amino acids and chemical interactions, respectively. Simulation results with more than 500 protein structures verify qualitatively and quantitatively that a better B-factor correlation can be obtained by RENM without additional computational burden. Moreover, an optimal spring constant in physical units (dyne/cm) is quantitatively determined as a function of temperature at 100 and 290 K, which enables us to predict the atomic fluctuations and vibrational density of states (VDOS) without a fitting process. An additional investigation of 80 high-resolution crystal structures with anisotropic displacement parameters (ADPs) indicates that RENM could give a full description of the vibrational characteristics of individual residues in proteins. PMID:25891099

  2. The accuracy and precision of DXA for assessing body composition in team sport athletes.

    PubMed

    Bilsborough, Johann Christopher; Greenway, Kate; Opar, David; Livingstone, Steuart; Cordy, Justin; Coutts, Aaron James

    2014-01-01

    This study determined the precision of pencil and fan beam dual-energy X-ray absorptiometry (DXA) devices for assessing body composition in professional Australian Football players. Thirty-six professional Australian Football players, in two groups (fan DXA, N = 22; pencil DXA, N = 25), underwent two consecutive DXA scans. A whole body phantom with known values for fat mass, bone mineral content and fat-free soft tissue mass was also used to validate each DXA device. Additionally, the criterion phantom was scanned 20 times by each DXA device to assess reliability. Test-retest reliability of DXA anthropometric measures was derived from repeated fan and pencil DXA scans. Fat-free soft tissue mass and bone mineral content from both DXA units showed strong correlations with, and trivial differences from, the criterion phantom values. Fat mass from both DXA units showed moderate correlations with criterion measures (pencil: r = 0.64; fan: r = 0.67) and moderate differences from the criterion value. The limits of agreement were similar for both fan beam and pencil beam DXA (fan: fat-free soft tissue mass = -1650 ± 179 g, fat mass = -357 ± 316 g, bone mineral content = 289 ± 122 g; pencil: fat-free soft tissue mass = -1701 ± 257 g, fat mass = -359 ± 326 g, bone mineral content = 177 ± 117 g). DXA also showed excellent precision for bone mineral content (coefficient of variation (%CV) fan = 0.6%; pencil = 1.5%) and fat-free soft tissue mass (%CV fan = 0.3%; pencil = 0.5%) and acceptable reliability for fat measures (%CV fan: fat mass = 2.5%, percent body fat = 2.5%; pencil: fat mass = 5.9%, percent body fat = 5.7%). Both DXA devices provide precise measures of fat-free soft tissue mass and bone mineral content in lean Australian Football players. DXA-derived fat-free soft tissue mass and bone mineral content are suitable for assessing body composition in lean team sport athletes. PMID:24914773

  3. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC, inducing fission in the target. The 4π acceptance for fission fragments and complete charged-particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  4. The Precision and Accuracy of AIRS Level 1B Radiances for Climate Studies

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Gaiser, Steve; Pagano, Tom; Aumann, Hartmut

    2004-01-01

    We investigate uncertainties in the Atmospheric Infrared Sounder (AIRS) radiances based on in-flight and preflight calibration algorithms and observations. The global coverage and spectral resolution (λ/Δλ ≈ 1200) of AIRS enable it to produce a data set that can be used as a climate data record over the lifetime of the instrument. Therefore, we examine the effects of the uncertainties in the calibration and the detector stability on future climate studies. The uncertainties of the parameters that go into the AIRS radiometric calibration are propagated to estimate the accuracy of the radiances and any climate data record created from AIRS measurements. The calculated radiance uncertainties are consistent with observations. Algorithm enhancements may be able to reduce the radiance uncertainties by as much as 7%. We find that the orbital variation of the gain contributes a brightness temperature bias of < 0.01 K.

  5. Quantification and visualization of carotid segmentation accuracy and precision using a 2D standardized carotid map

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Ukwatta, Eranga; Shavakh, Shadi; Fenster, Aaron

    2013-06-01

    This paper describes a framework for vascular image segmentation evaluation. Since vessel wall size and plaque burden are defined by the lumen and wall boundaries in vascular segmentation, these two boundaries should be considered as a pair in the statistical evaluation of a segmentation algorithm. This work proposes statistical metrics to evaluate the difference in local vessel wall thickness (VWT) produced by manual and algorithm-based semi-automatic segmentation methods (ΔT), taking into account the local segmentation standard deviation of the wall and lumen boundaries. ΔT was further approximately decomposed into the local wall and lumen boundary differences (ΔW and ΔL, respectively) in order to indicate which of the wall and lumen segmentation errors contributes more to the VWT difference. In this study, the lumen and wall boundaries in 3D carotid ultrasound images acquired for 21 subjects were each segmented five times manually and by a level-set segmentation algorithm. The (absolute) difference measures (i.e., ΔT, ΔW, ΔL and their absolute values) and the pooled local standard deviation of manually and algorithmically segmented wall and lumen boundaries were computed for each subject and represented on a 2D standardized map. The local accuracy and variability of the segmentation algorithm at each point can be quantified by averaging these metrics over the whole group of subjects and visualized on the 2D standardized map. Based on the results shown on the 2D standardized map, a variety of strategies, such as adding anchor points and adjusting the weights of different forces in the algorithm, can be introduced to improve the accuracy and variability of the algorithm.

  6. Fragile associations coexist with robust memories for precise details in long-term memory.

    PubMed

    Lew, Timothy F; Pashler, Harold E; Vul, Edward

    2016-03-01

    What happens to memories as we forget? They might gradually lose fidelity, lose their associations (and thus be retrieved in response to the incorrect cues), or be completely lost. Typical long-term memory studies assess memory as a binary outcome (correct/incorrect), and cannot distinguish these different kinds of forgetting. Here we assess long-term memory for scalar information, thus allowing us to quantify how different sources of error diminish as we learn, and accumulate as we forget. We trained subjects on visual and verbal continuous quantities (the locations of objects and the distances between major cities, respectively), tested subjects after extended delays, and estimated whether recall errors arose due to imprecise estimates, misassociations, or complete forgetting. Although subjects quickly formed precise memories and retained them for a long time, they were slow to learn correct associations and quick to forget them. These results suggest that long-term recall is especially limited in its ability to form and retain associations. PMID:26371498

  7. Tissue Probability Map Constrained 4-D Clustering Algorithm for Increased Accuracy and Robustness in Serial MR Brain Image Segmentation

    PubMed Central

    Xue, Zhong; Shen, Dinggang; Li, Hai; Wong, Stephen

    2010-01-01

    The traditional fuzzy clustering algorithm and its extensions have been successfully applied in medical image segmentation. However, because of the variability of tissues and anatomical structures, the clustering results might be biased by tissue population and intensity differences. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation, i.e., a series of 3-D MR brain images of the same subject at different time points. Using the new serial image segmentation algorithm within the CLASSIC framework, which iteratively segments the images and estimates the longitudinal deformations, we improved both accuracy and robustness for serial image computing and at the same time produced longitudinally consistent segmentations and stable measures. In the algorithm, the tissue probability maps consist of both population-based and subject-specific segmentation priors. An experimental study using both simulated longitudinal MR brain data and Alzheimer's Disease Neuroimaging Initiative (ADNI) data confirmed that more accurate and robust segmentation results can be obtained by using both priors. The proposed algorithm can be applied in longitudinal follow-up studies of MR brain imaging with subtle morphological changes in neurological disorders. PMID:26566399
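
    The core idea of constraining fuzzy clustering with a tissue probability map can be reduced to a one-dimensional toy: multiply each voxel's fuzzy memberships by its tissue priors before normalising. The sketch below does only that; the paper's algorithm is a 4-D spatio-temporal extension with longitudinal consistency terms.

```python
import numpy as np

def prior_constrained_fcm(intensities, priors, n_iter=50, m=2.0):
    """Toy 1-D fuzzy c-means where each voxel's membership is multiplied
    by a tissue probability map before normalisation (illustrative only).
    intensities: (N,) voxel values; priors: (N, K) per-voxel tissue priors."""
    n, k = priors.shape
    centers = np.quantile(intensities, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        d = np.abs(intensities[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))          # standard FCM memberships
        u *= priors                          # constrain with the prior map
        u /= u.sum(axis=1, keepdims=True)
        w = u ** m
        centers = (w * intensities[:, None]).sum(axis=0) / w.sum(axis=0)
    return u, centers

# Synthetic three-tissue intensities with flat priors except a slight
# bias against the brightest class, mimicking a map that counteracts
# over-segmentation of white matter.
rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(mu, 5, 300) for mu in (40, 90, 140)])
priors = np.tile([1.0, 1.0, 0.8], (x.size, 1))
u, centers = prior_constrained_fcm(x, priors)
print(centers)            # approximately the three class means
```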

  8. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  9. Light-Directed Self-Assembly of Robust Alginate Gels at Precise Locations in Microfluidic Channels.

    PubMed

    Oh, Hyuntaek; Lu, Annie Xi; Javvaji, Vishal; DeVoe, Don L; Raghavan, Srinivasa R

    2016-07-13

    Recently there has been much interest in using light to activate self-assembly of molecules in a fluid, leading to gelation. The advantage of light over other stimuli lies in its spatial selectivity, i.e., its ability to be directed at a precise location, which could be particularly useful in microfluidic applications. However, existing light-responsive fluids are not suitable for these purposes since they do not convert into sufficiently strong gels that can withstand shear. Here, we address this deficiency by developing a new light-responsive system based on the well-known polysaccharide, alginate. The fluid is composed entirely of commercially available components: alginate, a photoacid generator (PAG), and a chelated complex of divalent strontium (Sr(2+)) cations. Upon exposure to ultraviolet (UV) light, the PAG dissociates to release H(+) ions, which in turn induce the release of free Sr(2+) from the chelate. The Sr(2+) ions self-assemble with the alginate chains to give a stiff gel with an elastic modulus ∼2000 Pa and a yield stress ∼400 Pa (this gel is strong enough to be picked up and held by one's fingers). The above fluid is sent through a network of microchannels and a short segment of a specific channel is exposed to UV light. At that point, the fluid is locally transformed into a strong gel in a few minutes, and the resulting gel blocks the flow through that channel while other channels remain open. When the UV light is removed, the gel is gradually diluted by the flow and the channel reopens. We have thus demonstrated a remote-controlled fluidic valve that can be closed by shining light and reopened when the light is removed. In addition, we also show that light-induced gelation of our alginate fluid can be used to deposit biocompatible payloads at specific addresses within a microchannel. PMID:27347595

  10. Precise and Continuous Time and Frequency Synchronisation at the 5×10−19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385

  11. Towards the next decades of precision and accuracy in a 87Sr optical lattice clock

    NASA Astrophysics Data System (ADS)

    Martin, Michael; Lin, Yige; Swallows, Matthew; Bishof, Michael; Blatt, Sebastian; Benko, Craig; Chen, Licheng; Hirokawa, Takako; Rey, Ana Maria; Ye, Jun

    2011-05-01

    Optical lattice clocks based on ensembles of neutral atoms have the potential to operate at the highest levels of stability due to the parallel interrogation of many atoms. However, the control of systematic shifts in these systems is correspondingly difficult due to potential collisional atomic interactions. By tightly confining samples of ultracold fermionic 87Sr atoms in a two-dimensional optical lattice, as opposed to the conventional one-dimensional geometry, we increase the collisional interaction energy to be the largest relevant energy scale, thus entering the strongly interacting regime of clock operation. We show both theoretically and experimentally that this increase in interaction energy results in a paradoxical decrease in the collisional shift, reducing this key systematic to the 10−17 level. We also present work towards next-generation ultrastable lasers to attain quantum-limited clock operation, potentially enhancing clock precision by an order of magnitude. This work was supported by a grant from the ARO with funding from the DARPA OLE program, NIST, NSF, and AFOSR.

  12. Tedlar bag sampling technique for vertical profiling of carbon dioxide through the atmospheric boundary layer with high precision and accuracy.

    PubMed

    Schulz, Kristen; Jensen, Michael L; Balsley, Ben B; Davis, Kenneth; Birks, John W

    2004-07-01

    Carbon dioxide is the most important greenhouse gas other than water vapor, and its modulation by the biosphere is of fundamental importance to our understanding of global climate change. We have developed a new technique for vertical profiling of CO2 and meteorological parameters through the atmospheric boundary layer and well into the free troposphere. Vertical profiling of CO2 mixing ratios allows estimates of landscape-scale fluxes characteristic of approximately 100 km² of an ecosystem. The method makes use of a powered parachute as a platform and a new Tedlar bag air sampling technique. Air samples are returned to the ground, where measurements of CO2 mixing ratios are made with high precision (≤0.1%) and accuracy (≤0.1%) using a conventional nondispersive infrared analyzer. Laboratory studies are described that characterize the accuracy and precision of the bag sampling technique and that measure the diffusion coefficient of CO2 through the Tedlar bag wall. The technique has been applied in field studies in the proximity of two AmeriFlux sites, and results are compared with tower measurements of CO2. PMID:15296321

  13. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  14. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating the pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts of up to several degrees of visual angle, because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (∼4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° differences in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924
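
    "Regressing out" pupil size amounts to removing the component of the gaze trace that is linearly predicted by the pupil trace, as in the sketch below; the numbers are invented, and the authors' actual regression setup may differ.

```python
import numpy as np

def regress_out_pupil(gaze, pupil):
    """Remove the component of gaze position that is linearly predicted
    by pupil size; returns residual gaze plus the original mean level."""
    A = np.c_[pupil, np.ones_like(pupil)]       # slope + intercept
    coef, *_ = np.linalg.lstsq(A, gaze, rcond=None)
    return gaze - A @ coef + gaze.mean()

# Synthetic fixation record: true gaze is stationary, but slow pupil
# constriction leaks into the estimated gaze trace (values illustrative).
rng = np.random.default_rng(5)
t = np.linspace(0, 60, 600)                                # s
pupil = 4.0 - 0.01 * t + 0.05 * rng.normal(size=t.size)    # mm
gaze = 0.3 * (pupil - pupil.mean()) + 0.02 * rng.normal(size=t.size)  # deg

print(gaze.std(), regress_out_pupil(gaze, pupil).std())    # residual shrinks
```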

  15. Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models

    NASA Astrophysics Data System (ADS)

    Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin

    2016-04-01

    This study investigates the accuracy and repeatability of the PPP technique at different latitudes using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1-hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients, in order to investigate the contribution of troposphere delay gradients to the positioning. Second, the Saastamoinen model was used to eliminate troposphere path delays, with standard atmosphere parameters extrapolated to all station levels. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. Results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean rms errors that are better by 0.09% and 65% than those of the Niell model without horizontal gradients and the RTCA-MOPS model, respectively. Mean rms errors for the Saastamoinen model were approximately 4 times larger than for the Niell model with horizontal gradients.
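
    Of the models compared above, the Saastamoinen zenith hydrostatic delay has a simple closed form, sketched below together with a standard-atmosphere pressure extrapolation of the kind the study used; the constants are the commonly quoted ones and the example values are arbitrary.

```python
import math

def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
    """Saastamoinen zenith hydrostatic delay in metres, from surface
    pressure, latitude and height."""
    return (0.0022768 * pressure_hpa
            / (1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m))

def standard_pressure(height_m, p0=1013.25):
    """Standard-atmosphere surface pressure (hPa) extrapolated to the
    station height, for use when no surface meteorology is available."""
    return p0 * (1.0 - 2.2557e-5 * height_m) ** 5.2568

h, lat = 150.0, math.radians(45.0)
print(saastamoinen_zhd(standard_pressure(h), lat, h))   # ~2.3 m
```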

  16. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1° and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  18. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity.

    PubMed

    Baird, Emily; Fernandez, Diana C; Wcislo, William T; Warrant, Eric J

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion, a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity, and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  20. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
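
    A minimal sketch of the approach, under assumed data: treat each accreditation test session as a group of reported/delivered dose ratios, run a Model II one-way ANOVA to separate between-session from within-session variance, and report bias plus an expanded (k = 2) uncertainty. This illustrates the idea only, not the paper's exact computation.

```python
import numpy as np

def dosimeter_uncertainty(results, k=2.0):
    """Model II one-way ANOVA on accreditation test data. `results` is a
    list of arrays of (reported dose / delivered dose) ratios, one array
    per test session. Returns (bias, expanded uncertainty)."""
    all_r = np.concatenate(results)
    n_groups = len(results)
    grand = all_r.mean()
    ssb = sum(len(r) * (r.mean() - grand) ** 2 for r in results)
    ssw = sum(((r - r.mean()) ** 2).sum() for r in results)
    msb = ssb / (n_groups - 1)                 # between-group mean square
    msw = ssw / (len(all_r) - n_groups)        # within-group mean square
    n0 = np.mean([len(r) for r in results])
    var_between = max((msb - msw) / n0, 0.0)   # Model II variance component
    u = np.sqrt(var_between + msw)             # combined standard uncertainty
    return grand - 1.0, k * u

rng = np.random.default_rng(6)
sessions = [1.0 + rng.normal(0.02, 0.05, 15) + rng.normal(0, 0.02)
            for _ in range(8)]                 # 8 sessions, 15 dosimeters each
bias, U = dosimeter_uncertainty(sessions)
print(f"bias = {bias:+.1%}, expanded uncertainty (k=2) = {U:.1%}")
```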

  2. Video image analysis in the Australian meat industry - precision and accuracy of predicting lean meat yield in lamb carcasses.

    PubMed

    Hopkins, D L; Safari, E; Thompson, J M; Smith, C R

    2004-06-01

    A wide selection of lamb types of mixed sex (ewes and wethers) was slaughtered at a commercial abattoir, during which images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib, 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down into a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R²=0.19, r.s.d.=2.80%), which improved markedly when PGR was replaced by NGR (R²=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures, greater prediction accuracy could be achieved (R²=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R²=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. The models based on PCs were superior to those based on regression. It is demonstrated that, with appropriate modeling, the VIAScan® system offers a workable method for predicting lean meat yield automatically. PMID:22061323
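
    Principal components regression, as used above, replaces correlated predictors with their leading principal components before regressing; a minimal sketch on invented data follows (the carcass measures and yields here are synthetic stand-ins, not VIAScan® data).

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Principal components regression: regress y on the first few PCs
    of the centred predictor matrix. More stable than ordinary least
    squares when predictors are highly correlated."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = vt[:n_components].T                     # component loadings
    scores = Xc @ pcs
    beta_pc, *_ = np.linalg.lstsq(scores, y - y_mean, rcond=None)
    beta = pcs @ beta_pc                          # back to original variables
    return beta, y_mean, x_mean

def pcr_predict(X, beta, y_mean, x_mean):
    return (X - x_mean) @ beta + y_mean

# Synthetic example: 8 correlated carcass measurements predicting yield.
rng = np.random.default_rng(7)
latent = rng.normal(size=(360, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.1 * rng.normal(size=(360, 8))
y = 55 + 3 * latent[:, 0] - 2 * latent[:, 1] + rng.normal(0, 1, 360)
beta, y_mean, x_mean = pcr_fit(X, y, n_components=2)
resid = y - pcr_predict(X, beta, y_mean, x_mean)
print(resid.std())        # residual SD, analogous to the r.s.d. above
```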

  3. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all four of these navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using the Multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks, including stations all over the world. Statistical analysis of the 6-h predicted orbits shows that the radial and cross root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions are about 0.10 ns for GPS satellites, and 0.13, 0.13 and 0.14 ns for BeiDou, Galileo and GLONASS, respectively. The addition of the BeiDou, Galileo and GLONASS systems to standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly decreased, and a few centimetres are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rates of the GPS-only solution drop significantly to only around 70 and 40%, respectively. However, multi-GNSS PPP can provide precise position estimates continuously (availability rate is more than 99

  4. In silico instrumental response correction improves precision of label-free proteomics and accuracy of proteomics-based predictive models.

    PubMed

    Lyutvinskiy, Yaroslav; Yang, Hongqian; Rutishauser, Dorothea; Zubarev, Roman A

    2013-08-01

    In the analysis of proteome changes arising during the early stages of a biological process (e.g., disease or drug treatment) or from the indirect influence of an important factor, the biological variations of interest are often small (∼10%). The corresponding requirements for the precision of proteomics analysis are high, and this often poses a challenge, especially when employing label-free quantification. One of the main contributors to the inaccuracy of label-free proteomics experiments is the variability of the instrumental response during LC-MS/MS runs. Such variability might include fluctuations in the electrospray current, transmission efficiency from the air-vacuum interface to the detector, and detection sensitivity. We have developed an in silico post-processing method for reducing these variations, and have thus significantly improved the precision of label-free proteomics analysis. For abundant blood plasma proteins, a coefficient of variation of approximately 1% was achieved, which allowed for sex differentiation in pooled samples and ≈90% accurate differentiation of individual samples by means of a single LC-MS/MS analysis. This method improves the precision of measurements and increases the accuracy of predictive models based on the measurements. The post-acquisition nature of the correction technique and its generality promise its widespread application in LC-MS/MS-based methods such as proteomics and metabolomics. PMID:23589346

  5. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variation of temporal gait events when pathology is present makes these events good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk- and distal-limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limbs and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebra and sacrum, as well as the left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty-three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392

  6. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments.

    PubMed

    Svec, David; Tichopad, Ales; Novosadova, Vendula; Pfaffl, Michael W; Kubista, Mikael

    2015-03-01

    We have examined the imprecision in the estimation of PCR efficiency by means of standard curves, based on a strategic experimental design with a large number of technical replicates. In particular, we examined how robust this estimation is with respect to commonly varying factors: the instrument used, the number of technical replicates performed, and the volume transferred throughout the dilution series. We used six different qPCR instruments, performed 1-16 qPCR replicates per concentration, and tested transfer volumes of 2-10 μl. We find that the estimated PCR efficiency varies significantly across different instruments. Using a Monte Carlo approach, we find the uncertainty in the PCR efficiency estimation may be as large as 42.5% (95% CI) if a standard curve with only one qPCR replicate is used in 16 different plates. Based on our investigation we propose recommendations for the precise estimation of PCR efficiency: (1) a robust standard curve with at least 3-4 qPCR replicates at each concentration should be generated, (2) the efficiency is instrument dependent, but reproducibly stable on one platform, and (3) using a larger volume when constructing the serial dilution series reduces sampling error and enables calibration across a wider dynamic range. PMID:27077029
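
    The efficiency estimate behind a standard curve comes from the fitted slope of Cq versus log10 concentration, E = 10^(-1/slope) - 1. A minimal sketch with synthetic Cq data, plus a resampling loop that loosely mimics (it is not the paper's exact Monte Carlo procedure) the single-replicate scenario examined above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ten-fold dilution series, 4 replicate Cq values per concentration
# (synthetic data; a perfect assay, E = 1.0, would have slope -3.32).
log10_conc = np.repeat(np.array([5.0, 4.0, 3.0, 2.0, 1.0]), 4)
cq = 38.0 - 3.4 * log10_conc + rng.normal(0, 0.15, log10_conc.size)

slope = np.polyfit(log10_conc, cq, 1)[0]
print(f"slope = {slope:.3f}, efficiency = {10 ** (-1.0 / slope) - 1.0:.3f}")

# Spread of the estimate when only one replicate per concentration is used:
effs = []
for _ in range(2000):
    idx = rng.integers(0, 4, size=5) + np.arange(0, 20, 4)
    s = np.polyfit(log10_conc[idx], cq[idx], 1)[0]
    effs.append(10 ** (-1.0 / s) - 1.0)
print("95% interval:", np.round(np.percentile(effs, [2.5, 97.5]), 3))
```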

  7. Quantifying precision and accuracy of measurements of dissolved inorganic carbon stable isotopic composition using continuous-flow isotope-ratio mass spectrometry

    PubMed Central

    Waldron, Susan; Marian Scott, E; Vihermaa, Leena E; Newton, Jason

    2014-01-01

    RATIONALE We describe an analytical procedure that allows sample collection and measurement of carbon isotopic composition (δ13CV-PDB value) and dissolved inorganic carbon concentration, [DIC], in aqueous samples without further manipulation after field collection. By comparing outputs from two different mass spectrometers, we quantify with statistical rigour the uncertainty associated with the estimation of an unknown measurement. This is rarely undertaken, but it is needed to understand the significance of field data and to interpret quality assurance exercises. METHODS Immediate acidification of field samples during collection in evacuated, pre-acidified vials removed the need for toxic chemicals to inhibit continued bacterial activity that might compromise isotopic and concentration measurements. Aqueous standards mimicked the sample matrix and avoided headspace fractionation corrections. Samples were analysed using continuous-flow isotope-ratio mass spectrometry, but for low DIC concentrations the mass spectrometer response could be non-linear, and this had to be corrected for. RESULTS Mass spectrometer non-linearity exists. Rather than estimating precision as the repeat analysis of an internal standard, we adopted inverse linear calibrations to quantify the precision and 95% confidence intervals (CI) of the δ13CDIC values. The response for [DIC] estimation was always linear. For 0.05–0.5 mM DIC internal standards, however, changes in mass spectrometer linearity resulted in estimates of the precision in the δ13CV-PDB value of an unknown ranging from ±0.44‰ to ±1.33‰ (mean values) and a mean 95% CI half-width of ±1.1–3.1‰. CONCLUSIONS Mass spectrometer non-linearity should be considered when estimating uncertainty in measurement. Similarly, statistically robust estimates of precision and accuracy should also be adopted. Such estimations do not inhibit research advances: our consideration of small-scale spatial variability at two points on a
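
    An inverse linear calibration of the general kind described can be sketched as follows; the standards, scatter and Monte Carlo confidence interval below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(7)

# Known delta13C of aqueous standards (per mil) and synthetic measured values.
true_d13c = np.array([-25.0, -15.0, -5.0, 0.0, 5.0])
measured = 1.02 * true_d13c - 0.4 + rng.normal(0, 0.3, 5)

# Fit measured = a*true + b, then invert the fit for an unknown sample.
a, b = np.polyfit(true_d13c, measured, 1)
unknown_measured = -10.2
estimate = (unknown_measured - b) / a

# Monte Carlo 95% CI: refit with the calibration scatter perturbed.
resid_sd = np.std(measured - (a * true_d13c + b), ddof=2)
draws = []
for _ in range(5000):
    ai, bi = np.polyfit(true_d13c, measured + rng.normal(0, resid_sd, 5), 1)
    draws.append((unknown_measured - bi) / ai)
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"delta13C = {estimate:.2f} per mil, 95% CI [{lo:.2f}, {hi:.2f}]")
```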

  8. Detailed data is welcome, but with a pinch of salt: Accuracy, precision, and uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, F.; Di Baldassarre, G.; Todini, E.

    2013-09-01

    New survey techniques provide a large amount of high-resolution data, which can be extremely valuable for flood inundation modeling. Such data availability raises the question of how to exploit the information content to effectively improve flood risk mapping and prediction. In this paper, we discuss a number of important issues which should be taken into account in flood modeling studies. These include the large number of uncertainty sources in model structure and available data; the difficulty of evaluating model results, given the scarcity of observed data; computational efficiency; and the false confidence that can be given by high-resolution outputs, since accuracy is not necessarily increased by higher precision. Finally, we briefly review and discuss a number of existing approaches, such as subgrid parameterization and roughness upscaling methods, which can be used to incorporate highly detailed data into flood inundation models, balancing efficiency and reliability.

  9. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per-mil level thanks to instrumentation, laboratory, and software advances. At the same time, new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and both have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions mimicking samples of three different ages, to be run as internal

  10. Assessment of accuracy and precision of 3D reconstruction of unicompartmental knee arthroplasty in upright position using biplanar radiography.

    PubMed

    Tsai, Tsung-Yuan; Dimitriou, Dimitris; Hosseini, Ali; Liow, Ming Han Lincoln; Torriani, Martin; Li, Guoan; Kwon, Young-Min

    2016-07-01

    This study aimed to evaluate the precision and accuracy of 3D reconstruction of UKA component position, contact location and lower limb alignment in the standing position using biplanar radiography. Two human specimens with 4 medial UKAs were implanted with beads for radiostereometric analysis (RSA). The specimens were frozen in a standing position and CT-scanned to obtain the relative positions of the beads, bones and UKA components. The specimens were then imaged using biplanar radiography (EOS). The positions of the femur, tibia, UKA components and UKA contact locations were obtained using RSA- and EOS-based techniques. The intraclass correlation coefficient (ICC) was calculated for inter-observer reliability of the EOS technique. The average (standard deviation) of the differences between the two techniques was less than 0.18 (0.29) mm in translation and 0.39° (0.66°) in rotation for the UKA components. The root-mean-square errors (RMSE) of contact location along the anterior/posterior and medial/lateral directions were 0.84 mm and 0.30 mm. The RMSEs of the knee rotations were less than 1.70°. The ICCs for the EOS-based segmental orientations between two raters were larger than 0.98. The results suggest the EOS-based 3D reconstruction technique can precisely determine component position, contact location and lower limb alignment for UKA patients in the weight-bearing standing position. PMID:27117422

  11. THE PRECISION AND ACCURACY OF EARLY EPOCH OF REIONIZATION FOREGROUND MODELS: COMPARING MWA AND PAPER 32-ANTENNA SOURCE CATALOGS

    SciTech Connect

    Jacobs, Daniel C.; Bowman, Judd; Aguirre, James E.

    2013-05-20

    As observations of the Epoch of Reionization (EoR) in redshifted 21 cm emission begin, we assess the accuracy of the early catalog results from the Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison Wide-field Array (MWA). The MWA EoR approach derives much of its sensitivity from subtracting foregrounds to <1% precision, while the PAPER approach relies on the stability and symmetry of the primary beam. Both require an accurate flux calibration to set the amplitude of the measured power spectrum. The two instruments are very similar in resolution, sensitivity, sky coverage, and spectral range and have produced catalogs from nearly contemporaneous data. We use a Bayesian Markov Chain Monte Carlo fitting method to estimate that the two instruments are on the same flux scale to within 20% and find that the images are mostly in good agreement. We then investigate the source of the errors by comparing two overlapping MWA facets where we find that the differences are primarily related to an inaccurate model of the primary beam but also correlated errors in bright sources due to CLEAN. We conclude with suggestions for mitigating and better characterizing these effects.

  12. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
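
    The common efficiency-corrected relative quantification model is R = (1 + E_t)^dCt_t / (1 + E_r)^dCt_r, with dCt the control-minus-sample Ct difference for the target (t) and reference (r) genes. A minimal Monte Carlo sketch of random-error propagation through that expression, with assumed (not the paper's) input uncertainties:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

E_t, E_r = 0.95, 0.90      # amplification efficiencies (assumed)
dct_t, dct_r = 2.50, 0.40  # Ct(control) - Ct(sample), target / reference
sd_ct, sd_e = 0.20, 0.03   # illustrative random errors

ratio = ((1 + E_t + rng.normal(0, sd_e, N)) ** (dct_t + rng.normal(0, sd_ct, N))
         / (1 + E_r + rng.normal(0, sd_e, N)) ** (dct_r + rng.normal(0, sd_ct, N)))
print(f"ratio = {ratio.mean():.2f}, CV = {100 * ratio.std() / ratio.mean():.1f}%")
```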

  13. Enhancement of the accuracy of the (P-ω) method through the implementation of a nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Kfoury, G. A.; Chalhoub, N. G.; Henein, N. A.; Bryzik, W.

    2006-04-01

    The (P-ω) method is a model-based approach developed for determining the instantaneous friction torque in internal combustion engines. This scheme requires measurements of the cylinder gas pressure, the engine load torque, the crankshaft angular displacement and its time derivatives. The effects of the higher-order dynamics of the crank-slider mechanism on the measured angular motion of the crankshaft have caused the (P-ω) method to yield erroneous results, especially at high engine speeds. To alleviate this problem, a nonlinear sliding mode observer has been developed herein to accurately estimate the rigid and flexible motions of the piston-assembly/connecting-rod/crankshaft mechanism of a single cylinder engine. The observer has been designed to yield a robust performance in the presence of disturbances and modeling imprecision. The digital simulation results, generated under transient conditions representing a decrease in the engine speed, have illustrated the rapid convergence of the estimated state variables to the actual ones in the presence of both structured and unstructured uncertainties. Moreover, this study has proven that the use of the estimated rather than the measured angular displacement of the crankshaft and its time derivatives can significantly improve the accuracy of the (P-ω) method in determining the instantaneous engine friction torque.

  14. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice around the world at both the commercial and academic research level. Over the last ten years, Continuously Operating Reference Station (CORS) networks have been established in order to extend precise positioning more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying existing CORS networks, especially for monitoring purposes. Two different types of CORS network are available in Italy today: so-called "regional networks" and the "national network", with mean inter-station distances of about 25/30 and 50/70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of the rover's positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers brings some limitations, especially the limited baseline length and the need to fix the phase ambiguities correctly both within the network and for the rover. These factors play a crucial role in reaching a positioning accuracy at the centimetre level or better in a short time and with high reliability. The goal of this work is to investigate the

  15. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  16. Accuracy and precision of MR blood oximetry based on the long paramagnetic cylinder approximation of large vessels.

    PubMed

    Langham, Michael C; Magland, Jeremy F; Epstein, Charles L; Floyd, Thomas F; Wehrli, Felix W

    2009-08-01

    An accurate noninvasive method to measure the hemoglobin oxygen saturation (%HbO(2)) of deep-lying vessels without catheterization would have many clinical applications. Quantitative MRI may be the only imaging modality that can address this difficult and important problem. MR susceptometry-based oximetry for measuring blood oxygen saturation in large vessels models the vessel as a long paramagnetic cylinder immersed in an external field. The intravascular magnetic susceptibility relative to surrounding muscle tissue is a function of oxygenated hemoglobin (HbO(2)) and can be quantified with a field-mapping pulse sequence. In this work, the method's accuracy and precision were investigated theoretically, on the basis of an analytical expression for the arbitrarily oriented cylinder, as well as experimentally in phantoms and in vivo in the femoral artery and vein at 3T field strength. Errors resulting from vessel tilt, noncircularity of the vessel cross-section, and induced magnetic field gradients were evaluated, and methods for correction were designed and implemented. Hemoglobin saturation was measured at successive vessel segments differing in geometry, such as eccentricity and vessel tilt, but with constant blood oxygen saturation levels, as a means to evaluate measurement consistency. The average standard error and coefficient of variation of measurements in phantoms were <2% with tilt correction alone, in agreement with theory, suggesting that high accuracy and reproducibility can be achieved while ignoring noncircularity for tilt angles up to about 30 degrees. In vivo, repeated measurements of %HbO(2) in the femoral vessels yielded a coefficient of variation of less than 5%. In conclusion, the data suggest that %HbO(2) can be measured reproducibly in vivo in large vessels of the peripheral circulation on the basis of the paramagnetic cylinder approximation of the incremental field. PMID:19526517
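
    A worked inversion of the long-cylinder model described above, assuming the commonly cited deoxyhemoglobin susceptibility value (about 4π·0.27 ppm, SI) and illustrative scan parameters rather than the paper's:

```python
import numpy as np

GAMMA = 2.675e8                 # proton gyromagnetic ratio (rad/s/T)
B0 = 3.0                        # field strength (T)
DCHI_DO = 4 * np.pi * 0.27e-6   # susceptibility of fully deoxygenated blood (SI)

def hbo2_from_phase(dphi, dte, hct, theta_deg):
    """Invert the tilted long-cylinder model: the incremental field is
    dB = (dchi/6)*(3*cos(theta)**2 - 1)*B0 with dchi = DCHI_DO*hct*(1 - Y),
    and dphi = GAMMA*dB*dte is the phase accrued between two echoes."""
    geom = (3 * np.cos(np.radians(theta_deg)) ** 2 - 1) / 6.0
    dchi = dphi / (GAMMA * dte * B0 * geom)
    return 100.0 * (1.0 - dchi / (DCHI_DO * hct))

# Illustrative numbers: 0.35 rad phase difference, 5 ms echo spacing,
# hematocrit 0.42, vessel tilted 15 degrees from B0.
print(f"%HbO2 = {hbo2_from_phase(0.35, 5e-3, 0.42, 15.0):.1f}")
```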

  17. Routine OGTT: A Robust Model Including Incretin Effect for Precise Identification of Insulin Sensitivity and Secretion in a Single Individual

    PubMed Central

    De Gaetano, Andrea; Panunzi, Simona; Matone, Alice; Samson, Adeline; Vrbikova, Jana; Bendlova, Bela; Pacini, Giovanni

    2013-01-01

    In order to provide a method for precise identification of insulin sensitivity from clinical Oral Glucose Tolerance Test (OGTT) observations, a relatively simple mathematical model (Simple Interdependent glucose/insulin MOdel, SIMO) for the OGTT, which coherently incorporates commonly accepted physiological assumptions (incretin effect and saturating glucose-driven insulin secretion), has been developed. OGTT data from 78 patients in five different glucose tolerance groups were analyzed: normal glucose tolerance (NGT), impaired glucose tolerance (IGT), impaired fasting glucose (IFG), IFG+IGT, and Type 2 Diabetes Mellitus (T2DM). A comparison with the 2011 Salinari (COntinuous GI tract MOdel, COMO) and the 2002 Dalla Man (Dalla Man MOdel, DMMO) models was made, with particular attention to the insulin sensitivity indices ISCOMO, ISDMMO and kxgi (the insulin sensitivity index for SIMO). ANOVA on kxgi values across groups was significant overall (P<0.001), and post-hoc comparisons highlighted the presence of three different groups: NGT (8.62×10−5±9.36×10−5 min−1pM−1), IFG (5.30×10−5±5.18×10−5) and combined IGT, IFG+IGT and T2DM (2.09×10−5±1.95×10−5, 2.38×10−5±2.28×10−5 and 2.38×10−5±2.09×10−5 respectively). No significance was obtained when comparing ISCOMO or ISDMMO across groups. Moreover, kxgi presented the lowest sample average coefficient of variation over the five groups (25.43%), with average CVs for ISCOMO and ISDMMO of 70.32% and 57.75% respectively; kxgi also presented the strongest correlations with all considered empirical measures of insulin sensitivity. While COMO and DMMO appear over-parameterized for fitting single-subject clinical OGTT data, SIMO provides a robust, precise, physiologically plausible estimate of insulin sensitivity, with which habitual empirical insulin sensitivity indices correlate well. The kxgi index, reflecting insulin secretion dependency on glycemia, also significantly differentiates clinically

  18. Routine OGTT: a robust model including incretin effect for precise identification of insulin sensitivity and secretion in a single individual.

    PubMed

    De Gaetano, Andrea; Panunzi, Simona; Matone, Alice; Samson, Adeline; Vrbikova, Jana; Bendlova, Bela; Pacini, Giovanni

    2013-01-01

    In order to provide a method for precise identification of insulin sensitivity from clinical Oral Glucose Tolerance Test (OGTT) observations, a relatively simple mathematical model (Simple Interdependent glucose/insulin MOdel, SIMO) for the OGTT, which coherently incorporates commonly accepted physiological assumptions (incretin effect and saturating glucose-driven insulin secretion), has been developed. OGTT data from 78 patients in five different glucose tolerance groups were analyzed: normal glucose tolerance (NGT), impaired glucose tolerance (IGT), impaired fasting glucose (IFG), IFG+IGT, and Type 2 Diabetes Mellitus (T2DM). A comparison with the 2011 Salinari (COntinuous GI tract MOdel, COMO) and the 2002 Dalla Man (Dalla Man MOdel, DMMO) models was made, with particular attention to the insulin sensitivity indices ISCOMO, ISDMMO and kxgi (the insulin sensitivity index for SIMO). ANOVA on kxgi values across groups was significant overall (P<0.001), and post-hoc comparisons highlighted the presence of three different groups: NGT (8.62×10(-5)±9.36×10(-5) min(-1)pM(-1)), IFG (5.30×10(-5)±5.18×10(-5)) and combined IGT, IFG+IGT and T2DM (2.09×10(-5)±1.95×10(-5), 2.38×10(-5)±2.28×10(-5) and 2.38×10(-5)±2.09×10(-5) respectively). No significance was obtained when comparing ISCOMO or ISDMMO across groups. Moreover, kxgi presented the lowest sample average coefficient of variation over the five groups (25.43%), with average CVs for ISCOMO and ISDMMO of 70.32% and 57.75% respectively; kxgi also presented the strongest correlations with all considered empirical measures of insulin sensitivity. While COMO and DMMO appear over-parameterized for fitting single-subject clinical OGTT data, SIMO provides a robust, precise, physiologically plausible estimate of insulin sensitivity, with which habitual empirical insulin sensitivity indices correlate well. The kxgi index, reflecting insulin secretion dependency on glycemia, also significantly differentiates clinically

  19. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine whether the Kinect™ sensor can be used as an easy, cheap and fast tool for morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric, marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated, enabling regression equations to be obtained to correct the MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying the regression equations. Kinect™ sensor-based systems therefore seem suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374
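
    A per-measure regression correction of the kind described takes only a few lines; the paired measurements below are invented for illustration.

```python
import numpy as np

# Paired body-segment lengths (cm): marker-based system (MBS, reference)
# vs. markerless Kinect-based capture (MMC); synthetic illustrative data.
mbs = np.array([30.1, 35.4, 40.2, 45.0, 50.3])
mmc = np.array([28.0, 33.8, 38.1, 43.2, 48.0])

# Fit MBS = a*MMC + b, then use the equation to correct new MMC readings.
a, b = np.polyfit(mmc, mbs, 1)
corrected = a * mmc + b
resid_sd = float(np.std(corrected - mbs, ddof=2))
print(f"correction: MBS = {a:.3f}*MMC + {b:.2f}, residual SD = {resid_sd:.2f} cm")
```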

  20. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span from zero to >100 ka. This recognition comes from the increased application of U-series geochronology to young volcanic rocks and the improvement of precision to better than 0.1% on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ash bed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can (1) identify cogenetic populations of minerals, and (2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely pursued with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from the ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is its relatively coarse spatial resolution compared to in situ techniques, which averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. The same grains are then removed from grain-mount, fragmented, and

  1. Engineered, Robust Polyelectrolyte Multilayers by Precise Control of Surface Potential for Designer Protein, Cell, and Bacteria Adsorption.

    PubMed

    Zhu, Xiaoying; Guo, Shifeng; He, Tao; Jiang, Shan; Jańczewski, Dominik; Vancso, G Julius

    2016-02-01

    Cross-linked layer-by-layer (LbL) assemblies with a precisely tuned surface ζ-potential were fabricated to control the adsorption of proteins, mammalian cells, and bacteria for different biomedical applications. Two weak polyions including a synthesized polyanion and polyethylenimine were assembled under controlled conditions and cross-linked to prepare three robust LbL films as model surfaces with similar roughness and water affinity but displaying negative, zero, and positive net charges at the physiological pH (7.4). These surfaces were tested for their abilities to adsorb proteins, including bovine serum albumin (BSA) and lysozyme (LYZ). In the adsorption tests, the LbL films bind more proteins with opposite charges but less of those with like charges, indicating that electrostatic interactions play a major role in protein adsorption. However, LYZ showed higher nonspecific adsorption than BSA, because of the specific behavior of LYZ molecules, such as stacked multilayer formation during adsorption. To exclude such stacking effects from experiments, protein molecules were covalently immobilized on AFM colloidal probes to measure the adhesion forces against the model surfaces utilizing direct protein molecule-surface contacts. The results confirmed the dominating role of electrostatic forces in protein adhesion. In fibroblast cell and bacteria adhesion tests, similar trends (high adhesion on positively charged surfaces, but much lower on neutral and negatively charged surfaces) were observed because the fibroblast cell and bacterial surfaces studied possess negative potentials. The cross-linked LbL films with improved stability and engineered surface charge described in this study provide an excellent platform to control the behavior of different charged objects and can be utilized in practical biomedical applications. PMID:26756285

  2. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy and high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro-windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative of all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  3. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy and high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro-windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias of up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative of all NDACC
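
    Given a retrieved CH4 column and surface pressure plus water vapor from the meteorological profiles, the column-averaged dry-air mole fraction is the ratio of the CH4 column to the dry-air column. A sketch with standard constants and illustrative (non-NDACC) column values:

```python
G = 9.80665        # gravitational acceleration, m/s^2
M_DRY = 28.964e-3  # molar mass of dry air, kg/mol
M_H2O = 18.015e-3  # molar mass of water, kg/mol
AVOGADRO = 6.02214e23

p_surf = 95300.0   # surface pressure, Pa (e.g. from NCEP)
n_h2o = 5.0e26     # water vapor column, molecules/m^2
n_ch4 = 3.8e23     # retrieved CH4 column, molecules/m^2

# Dry-air column: total air column implied by surface pressure, minus the
# water vapor contribution (scaled by the molar mass ratio).
n_total = p_surf / (G * M_DRY) * AVOGADRO
n_dry = n_total - n_h2o * (M_H2O / M_DRY)

print(f"XCH4 = {1e9 * n_ch4 / n_dry:.0f} ppb")
```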

  4. Airborne Laser CO2 Column Measurements: Evaluation of Precision and Accuracy Under a Wide Range of Surface and Atmospheric Conditions

    NASA Astrophysics Data System (ADS)

    Browell, E. V.; Dobler, J. T.; Kooi, S. A.; Fenn, M. A.; Choi, Y.; Vay, S. A.; Harrison, F. W.; Moore, B.

    2011-12-01

    This paper discusses the latest flight test results of a multi-frequency intensity-modulated (IM) continuous-wave (CW) laser absorption spectrometer (LAS) that operates near 1.57 μm for remote CO2 column measurements. This IM-LAS system is under development for a future space-based mission to determine the global distribution of regional-scale CO2 sources and sinks, which is the objective of the NASA Active Sensing of CO2 Emissions during Nights, Days, and Seasons (ASCENDS) mission. A prototype of the ASCENDS system, called the Multi-frequency Fiber Laser Lidar (MFLL), has been flight tested in eleven airborne campaigns since May 2005. This paper compares the most recent results obtained during the 2010 and 2011 UC-12 and DC-8 flight tests, where MFLL remote CO2 column measurements were evaluated against airborne in situ CO2 profile measurements traceable to World Meteorological Organization standards. The major change to the MFLL system in 2011 was the implementation of several different IM modes, which could be quickly changed in flight, to directly compare the precision and accuracy of MFLL CO2 measurements in each mode. The different IM modes that were evaluated included "fixed" IM frequencies near 50, 200, and 500 kHz; frequencies changed in short time steps (Stepped); continuously swept frequencies (Swept); and a pseudo noise (PN) code. The Stepped, Swept, and PN modes were generated to evaluate the ability of these IM modes to desensitize MFLL CO2 column measurements to intervening optically thin aerosols/clouds. MFLL was flown on the NASA Langley UC-12 aircraft in May 2011 to evaluate the newly implemented IM modes and their impact on CO2 measurement precision and accuracy, and to determine which IM mode provided the greatest thin cloud rejection (TCR) for the CO2 column measurements. Within the current hardware limitations of the MFLL system, the "fixed" 50 kHz results produced similar SNR values to those found previously. The SNR decreased as expected

  5. Evaluation of precision and accuracy of the Borgwaldt RM20S(®) smoking machine designed for in vitro exposure.

    PubMed

    Kaur, Navneet; Lacasse, Martine; Roy, Jean-Philippe; Cabral, Jean-Louis; Adamson, Jason; Errington, Graham; Waldron, Karen C; Gaça, Marianna; Morin, André

    2010-12-01

    The Borgwaldt RM20S(®) smoking machine enables the generation, dilution, and transfer of fresh cigarette smoke to cell exposure chambers, for in vitro analyses. We present a study confirming the precision (repeatability r, reproducibility R) and accuracy of smoke dose generated by the Borgwaldt RM20S(®) system and delivery to exposure chambers. Due to the aerosol nature of cigarette smoke, the repeatability of the dilution of the vapor phase in air was assessed by quantifying two reference standard gases: methane (CH(4), r between 29.0 and 37.0 and RSD between 2.2% and 4.5%) and carbon monoxide (CO, r between 166.8 and 235.8 and RSD between 0.7% and 3.7%). The accuracy of dilution (percent error) for CH(4) and CO was between 6.4% and 19.5% and between 5.8% and 6.4%, respectively, over a 10-1000-fold dilution range. To corroborate our findings, a small inter-laboratory study was carried out for CH(4) measurements. The combined dilution repeatability had an r between 21.3 and 46.4, R between 52.9 and 88.4, RSD between 6.3% and 17.3%, and error between 4.3% and 13.1%. Based on the particulate component of cigarette smoke (3R4F), the repeatability (RSD = 12%) of the undiluted smoke generated by the Borgwaldt RM20S(®) was assessed by quantifying solanesol using high-performance liquid chromatography with ultraviolet detection (HPLC/UV). Finally, the repeatability (r between 0.98 and 4.53 and RSD between 8.8% and 12%) of the dilution of generated smoke particulate phase was assessed by quantifying solanesol following various dilutions of cigarette smoke. The findings in this study suggest the Borgwaldt RM20S(®) smoking machine is a reliable tool to generate and deliver repeatable and reproducible doses of whole smoke to in vitro cultures. PMID:21126153
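
    Repeatability (r) and reproducibility (R) limits of the kind reported above are conventionally derived from within- and between-laboratory variance components, with r = 2.8·s_r and R = 2.8·s_R (ISO 5725). Whether the study used exactly this estimator is not stated, so the sketch below is generic, with invented replicate data:

```python
import numpy as np

# Replicate CH4 dilution measurements from three hypothetical laboratories.
labs = [np.array([49.1, 50.2, 48.7, 49.9]),
        np.array([51.0, 50.5, 51.8, 50.9]),
        np.array([48.5, 49.0, 48.1, 49.4])]
n = 4  # replicates per laboratory

s_r2 = np.mean([lab.var(ddof=1) for lab in labs])     # within-lab variance
lab_means = np.array([lab.mean() for lab in labs])
s_L2 = max(lab_means.var(ddof=1) - s_r2 / n, 0.0)     # between-lab variance
s_R2 = s_r2 + s_L2

print(f"r = {2.8 * np.sqrt(s_r2):.2f}, R = {2.8 * np.sqrt(s_R2):.2f}")
```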

  6. Effect of modulation frequency bandwidth on measurement accuracy and precision for digital diffuse optical spectroscopy (dDOS)

    NASA Astrophysics Data System (ADS)

    Jung, Justin; Istfan, Raeef; Roblyer, Darren

    2014-03-01

    Near-infrared (NIR) frequency-domain Diffuse Optical Spectroscopy (DOS) is an emerging technology with a growing number of potential clinical applications. In an effort to reduce DOS system complexity and improve portability, we recently demonstrated a direct digital sampling method that utilizes digital signal generation and detection as a replacement for more traditional analog methods. In our technique, a fast analog-to-digital converter (ADC) samples the detected time-domain radio frequency (RF) waveforms at each modulation frequency in a broad-bandwidth sweep (50-300 MHz). While we have shown this method provides comparable results to other DOS technologies, the process is data intensive, as digital samples must be stored and processed for each modulation frequency and wavelength. We explore here the effect of reducing the modulation frequency bandwidth on the accuracy and precision of extracted optical properties. To accomplish this, the performance of the digital DOS (dDOS) system was compared to a gold-standard network-analyzer-based DOS system. With a starting frequency of 50 MHz, the input signal of the dDOS system was swept to 100, 150, 250, or 300 MHz in 4 MHz increments, and results were compared to full 50-300 MHz network-analyzer DOS measurements. The average errors in extracted μa and μs' with dDOS were lowest for the full 50-300 MHz sweep (less than 3%) and were within 3.8% for frequency bandwidths as narrow as 50-150 MHz. The errors increased to as much as 9.0% when a bandwidth of 50-100 MHz was tested. These results demonstrate the possibility of reduced data collection with dDOS without critically compromising optical property extraction.

  7. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  8. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  9. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  10. Millimeter-accuracy GPS landslide monitoring using Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution: a case study in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.

    2013-03-01

    Continuous Global Positioning System (GPS) monitoring is essential for establishing the rate and pattern of superficial movements of landslides. This study demonstrates a technique which uses a stand-alone GPS station to conduct millimeter-accuracy landslide monitoring. The Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution implemented in the GIPSY/OASIS software package (V6.1.2) was applied in this study. Two years of continuous GPS data collected at a creeping landslide were used to evaluate the accuracy of the PPP-SRPA solutions. The criterion for accuracy was the root-mean-square (RMS) of residuals of the PPP-SRPA solutions with respect to "true" landslide displacements over the two-year period. RMS is often regarded as repeatability or precision in the GPS literature; however, when contrasted with a known "true" position or displacement it can be termed RMS accuracy, or simply accuracy. This study indicated that the PPP-SRPA resolution can provide an accuracy of 2 to 3 mm horizontally and 8 mm vertically for 24-hour sessions with few outliers (< 1%) in the Puerto Rico region. Horizontal accuracy below 5 mm can be stably achieved with 4-hour or longer sessions if data collection during extreme weather conditions is avoided. Vertical accuracy below 10 mm can be achieved with 8-hour or longer sessions. This study indicates that the PPP-SRPA resolution is competitive with the conventional carrier-phase double-difference network resolution for static (longer than 4 hours) landslide monitoring while retaining many advantages. It is evident that the PPP-SRPA method could become an attractive alternative to the conventional carrier-phase double-difference method for landslide monitoring, notably in remote areas or developing countries.

  11. Application of U-Pb ID-TIMS dating to the end-Triassic global crisis: testing the limits on precision and accuracy in a multidisciplinary whodunnit (Invited)

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Schaltegger, U.; Guex, J.; Bartolini, A.

    2010-12-01

    The ca. 201.4 Ma Triassic-Jurassic boundary is characterized by one of the most devastating mass-extinctions in Earth history, subsequent biologic radiation, rapid carbon cycle disturbances and enormous flood basalt volcanism (Central Atlantic Magmatic Province - CAMP). Considerable uncertainty remains regarding the temporal and causal relationship between these events though this link is important for understanding global environmental change under extreme stresses. We present ID-TIMS U-Pb zircon geochronology on volcanic ash beds from two marine sections that span the Triassic-Jurassic boundary and from the CAMP in North America. To compare the timing of the extinction with the onset of the CAMP, we assess the precision and accuracy of ID-TIMS U-Pb zircon geochronology by exploring random and systematic uncertainties, reproducibility, open-system behavior, and pre-eruptive crystallization of zircon. We find that U-Pb ID-TIMS dates on single zircons can be internally and externally reproducible at 0.05% of the age, consistent with recent experiments coordinated through the EARTHTIME network. Increased precision combined with methods alleviating Pb-loss in zircon reveals that these ash beds contain zircon that crystallized between 10^5 and 10^6 years prior to eruption. Mineral dates older than eruption ages are prone to affect all geochronologic methods and therefore new tools exploring this form of “geologic uncertainty” will lead to better time constraints for ash bed deposition. In an effort to understand zircon dates within the framework of a magmatic system, we analyzed zircon trace elements by solution ICPMS for the same volume of zircon dated by ID-TIMS. In one example we argue that zircon trace element patterns as a function of time result from a mix of xeno-, ante-, and autocrystic zircons in the ash bed, and approximate eruption age with the youngest zircon date. In a contrasting example from a suite of Cretaceous andesites, zircon trace elements

  12. High precision and high accuracy isotopic measurement of uranium using lead and thorium calibration solutions by inductively coupled plasma-multiple collector-mass spectrometry

    SciTech Connect

    Bowen, I.; Walder, A.J.; Hodgson, T.; Parrish, R.R. |

    1998-12-31

    A novel method for the high accuracy and high precision measurement of uranium isotopic composition by Inductively Coupled Plasma-Multiple Collector-Mass Spectrometry is discussed. Uranium isotopic samples are spiked with either thorium or lead for use as internal calibration reference materials. This method eliminates the necessity to periodically measure uranium standards to correct for changing mass bias when samples are measured over long time periods. This technique has generated among the highest levels of analytical precision on both the major and minor isotopes of uranium. Sample throughput has also been demonstrated to exceed Thermal Ionization Mass Spectrometry by a factor of four to five.
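
    The internal-calibration idea can be illustrated with the widely used exponential mass-bias law; the certified and measured ratios below are invented, and the paper's exact law and calibrant pair may differ.

```python
import numpy as np

def beta_from_calibrant(r_meas, r_true, m_num, m_den):
    """Exponential law: r_true = r_meas * (m_num / m_den)**beta."""
    return np.log(r_true / r_meas) / np.log(m_num / m_den)

# Mass-bias factor from a spiked Pb pair (illustrative certified/measured
# 208Pb/206Pb values), then applied to the measured 235U/238U ratio.
beta = beta_from_calibrant(r_meas=2.1743, r_true=2.1681,
                           m_num=207.9767, m_den=205.9745)

r_u_meas = 0.0072550
r_u_corr = r_u_meas * (235.0439 / 238.0508) ** beta
print(f"beta = {beta:.4f}, corrected 235U/238U = {r_u_corr:.7f}")
```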

  13. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The primary mission objective of the US Navy's GEOSAT Follow-On (GFO) spacecraft is to map the oceans using a radar altimeter. Satellite laser ranging (SLR) data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  14. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.
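
    As a concrete example of the simple non-iterative approach mentioned above, the sketch below integrates a photopeak as total counts in a window minus a linear baseline interpolated from flanking channels, on a synthetic spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic spectrum: Gaussian photopeak on a flat background, Poisson noise.
channels = np.arange(200)
expected = 50 + 400 * np.exp(-0.5 * ((channels - 100) / 3.0) ** 2)
spectrum = rng.poisson(expected)

lo, hi = 90, 111  # peak integration window (channels)
base_left = spectrum[lo - 5:lo].mean()
base_right = spectrum[hi:hi + 5].mean()
baseline = np.linspace(base_left, base_right, hi - lo)

net_area = spectrum[lo:hi].sum() - baseline.sum()
print(f"net photopeak area = {net_area:.0f} counts")
```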

  15. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  16. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  17. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169

  18. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    PubMed Central

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B.; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. Ongoing work on the design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, are studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169

  19. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals via street cameras connected to the Internet, and maintaining immigration control. Many technical subjects are still under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N (i.e. matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applied to subjects in natural posture, this algorithm scored a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registering newborns at hospitals, or handling very large image databases.

  20. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    SciTech Connect

    Kaminski, Radoslaw; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-06-24

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that, with careful tuning of the experimental conditions, the ratios of equivalent intensities obtained in different measurements can be reproduced to within 3-4%. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample.

  1. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that, with careful tuning of the experimental conditions, the ratios of equivalent intensities obtained in different measurements can be reproduced to within 3–4%. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  2. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated along the x-, y-, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
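
    Since the precision analysis above includes the Bland-Altman graphical method, a minimal sketch of the underlying computation (bias and 1.96 SD limits of agreement) may help; the measurement arrays below are synthetic:

    ```python
    import numpy as np

    def bland_altman(a, b):
        """Return bias and 95% limits of agreement between two methods."""
        diff = a - b
        bias = diff.mean()
        half_width = 1.96 * diff.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    rng = np.random.default_rng(2)
    plaster = rng.uniform(20.0, 50.0, 60)              # linear measurements, mm
    put = plaster + 0.2 + rng.normal(0.0, 0.15, 60)    # second method, slight bias
    bias, lower, upper = bland_altman(put, plaster)
    print(f"bias = {bias:.2f} mm, 95% LoA = [{lower:.2f}, {upper:.2f}] mm")
    ```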

  3. Accuracy and precision of end-expiratory lung-volume measurements by automated nitrogen washout/washin technique in patients with acute respiratory distress syndrome

    PubMed Central

    2011-01-01

    Introduction End-expiratory lung volume (EELV) is decreased in acute respiratory distress syndrome (ARDS), and bedside EELV measurement may help to set positive end-expiratory pressure (PEEP). Nitrogen washout/washin for EELV measurement is available at the bedside, but assessments of accuracy and precision in real-life conditions are scant. Our purpose was to (a) assess EELV measurement precision in ARDS patients at two PEEP levels (three pairs of measurements), and (b) compare the changes (Δ) induced by PEEP for total EELV with the PEEP-induced changes in lung volume above functional residual capacity measured with passive spirometry (ΔPEEP-volume). The minimal predicted increase in lung volume was calculated from compliance at low PEEP and ΔPEEP to ensure the validity of lung-volume changes. Methods Thirty-four patients with ARDS were prospectively included in five university-hospital intensive care units. ΔEELV and ΔPEEP-volume were compared between 6 and 15 cm H2O of PEEP. Results After exclusion of three patients, variability of the nitrogen technique was less than 4%, and the largest difference between measurements was 81 ± 64 ml. ΔEELV and ΔPEEP-volume were only weakly correlated (r2 = 0.47; 95% confidence interval limits, -414 to 608 ml). In four patients with the highest PEEP (≥ 16 cm H2O), ΔEELV was lower than the minimal predicted increase in lung volume, suggesting flawed measurements, possibly due to leaks. Excluding those from the analysis markedly strengthened the correlation between ΔEELV and ΔPEEP-volume (r2 = 0.80). Conclusions In most patients, the EELV technique has good reproducibility and accuracy, even at high PEEP. At high pressures, its accuracy may be limited in case of leaks. The minimal predicted increase in lung volume may help to check for accuracy. PMID:22166727
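
    The validity check described above (minimal predicted increase in lung volume, computed from compliance at low PEEP and ΔPEEP) is a one-line calculation; the values below are illustrative, not patient data:

    ```python
    compliance = 40.0                 # ml per cm H2O, respiratory-system compliance at low PEEP
    peep_low, peep_high = 6.0, 15.0   # cm H2O
    min_predicted_ml = compliance * (peep_high - peep_low)
    print(f"minimal predicted volume increase: {min_predicted_ml:.0f} ml")
    # A measured delta-EELV far below this value would suggest a flawed
    # measurement, for example due to leaks.
    ```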

  4. Technical note: precision and accuracy of in vitro digestion of neutral detergent fiber and predicted net energy of lactation content of fibrous feeds.

    PubMed

    Spanghero, M; Berzaghi, P; Fortina, R; Masoero, F; Rapetti, L; Zanfi, C; Tassone, S; Gallo, A; Colombini, S; Ferlito, J C

    2010-10-01

    The objective of this study was to test the precision and agreement with in situ data (accuracy) of neutral detergent fiber degradability (NDFD) obtained with the rotating jar in vitro system (Daisy(II) incubator, Ankom Technology, Fairport, NY). Moreover, the precision of the chemical assays requested by the National Research Council (2001) for feed energy calculations and the estimated net energy of lactation contents were evaluated. Precision was measured as standard deviation (SD) of reproducibility (S(R)) and repeatability (S(r)) (between- and within-laboratory variability, respectively), which were expressed as coefficients of variation (SD/mean × 100, S(R) and S(r), respectively). Ten fibrous feed samples (alfalfa dehydrated, alfalfa hay, corn cob, corn silage, distillers grains, meadow hay, ryegrass hay, soy hulls, wheat bran, and wheat straw) were analyzed by 5 laboratories. Analyses of dry matter (DM), ash, crude protein (CP), neutral detergent fiber (NDF), and acid detergent fiber (ADF) had satisfactory S(r), from 0.4 to 2.9%, and S(R), from 0.7 to 6.2%, with the exception of ether extract (EE) and CP bound to NDF or ADF. Extending the fermentation time from 30 to 48 h increased the NDFD values (from 42 to 54% on average across all tested feeds) and improved the NDFD precision, in terms of both S(r) (12 and 7% for 30 and 48 h, respectively) and S(R) (17 and 10% for 30 and 48 h, respectively). The net energy for lactation (NE(L)) predicted from 48-h incubation NDFD data approximated well the tabulated National Research Council (2001) values for several feeds, and the improvement in NDFD precision given by longer incubations (48 vs. 30 h) also improved precision of the NE(L) estimates from 11 to 8%. Data obtained from the rotating jar in vitro technique compared well with in situ data. In conclusion, the adoption of a 48-h period of incubation improves repeatability and reproducibility of NDFD and accuracy and reproducibility of the associated calculated
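
    As a hedged illustration of how repeatability (S(r)) and reproducibility (S(R)) can be derived from multi-laboratory data and expressed as coefficients of variation, here is a sketch following the usual ISO 5725-style variance decomposition; the simulated NDFD-like values are invented:

    ```python
    import numpy as np

    def precision_cv(data):
        """data: (labs, replicates) array of results for one feed sample."""
        grand_mean = data.mean()
        s_r2 = data.var(axis=1, ddof=1).mean()                     # within-lab variance
        n = data.shape[1]
        s_L2 = max(data.mean(axis=1).var(ddof=1) - s_r2 / n, 0.0)  # between-lab variance
        s_R2 = s_r2 + s_L2                                         # reproducibility variance
        return 100 * np.sqrt(s_r2) / grand_mean, 100 * np.sqrt(s_R2) / grand_mean

    rng = np.random.default_rng(3)
    # 5 laboratories x 4 replicates of a hypothetical 48-h NDFD measurement (%)
    data = rng.normal(54.0, 2.0, size=(5, 4)) + rng.normal(0.0, 3.0, size=(5, 1))
    cv_r, cv_R = precision_cv(data)
    print(f"S(r) = {cv_r:.1f}%, S(R) = {cv_R:.1f}%")
    ```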

  5. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of the different kind of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied on the residual values and evaluation of dependence of the residual values on the input parameters. These tests have been repeated on the real data supplemented with the categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is
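
    A minimal version of the residual-distribution check mentioned above, testing simulated plane-fit residuals against a normal distribution with the Kolmogorov-Smirnov test; because the residuals are generated with a known error magnitude, the test can use the exact reference distribution:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(9)
    sigma = 0.1                                    # m, simulated DTM error magnitude
    residuals = rng.normal(0.0, sigma, 5000)       # plane-fit residuals (synthetic)

    # Kolmogorov-Smirnov test against the known N(0, sigma) reference distribution
    stat, p = stats.kstest(residuals, "norm", args=(0.0, sigma))
    print(f"KS statistic = {stat:.4f}, p = {p:.3f}")  # large p: normality not rejected
    ```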

  6. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  7. Deployment of precise and robust sensors on board ISS-for scientific experiments and for operation of the station.

    PubMed

    Stenzel, Christian

    2016-09-01

    The International Space Station (ISS) is the largest technical vehicle ever built by mankind. It provides a living area for six astronauts and also represents a laboratory in which scientific experiments are conducted in an extraordinary environment. The deployed sensor technology contributes significantly to the operational and scientific success of the station. The sensors on board the ISS can be thereby classified into two categories which differ significantly in their key features: (1) sensors related to crew and station health, and (2) sensors to provide specific measurements in research facilities. The operation of the station requires robust, long-term stable and reliable sensors, since they assure the survival of the astronauts and the intactness of the station. Recently, a wireless sensor network for measuring environmental parameters like temperature, pressure, and humidity was established and its function could be successfully verified over several months. Such a network enhances the operational reliability and stability for monitoring these critical parameters compared to single sensors. The sensors which are implemented into the research facilities have to fulfil other objectives. The high performance of the scientific experiments that are conducted in different research facilities on-board demands the perfect embedding of the sensor in the respective instrumental setup which forms the complete measurement chain. It is shown that the performance of the single sensor alone does not determine the success of the measurement task; moreover, the synergy between different sensors and actuators as well as appropriate sample taking, followed by an appropriate sample preparation play an essential role. The application in a space environment adds additional challenges to the sensor technology, for example the necessity for miniaturisation, automation, reliability, and long-term operation. An alternative is the repetitive calibration of the sensors. This approach

  8. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  9. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  10. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES.

    PubMed

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-06-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  11. Ultra-Precision Measurement and Control of Angle Motion in Piezo-Based Platforms Using Strain Gauge Sensors and a Robust Composite Controller

    PubMed Central

    Liu, Lei; Bai, Yu-Guang; Zhang, Da-Li; Wu, Zhi-Gang

    2013-01-01

    The measurement and control strategy of a piezo-based platform using strain gauge sensors (SGS) and a robust composite controller is investigated in this paper. First, the experimental setup is constructed using a piezo-based platform, SGS sensors, an AD5435 platform and two voltage amplifiers. Then, the measurement strategy for measuring the tip/tilt angles accurately, on the order of sub-μrad, is presented. A comprehensive composite control strategy designed to enhance the tracking accuracy with a novel driving principle is also proposed. Finally, an experiment is presented to validate the measurement and control strategy. The experimental results demonstrate that the proposed measurement and control strategy provides accurate angle motion with a root mean square (RMS) error of 0.21 μrad, which is approximately equal to the noise level. PMID:23860316

  12. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: To quantify the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT alongside a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  13. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  14. Factors influencing accuracy and precision in the determination of the elemental composition of defense waste glass by ICP-emission spectrometry

    SciTech Connect

    Goode, S.R.

    1995-12-31

    The influence of instrumental factors on the accuracy and precision of the determination of the composition of glass and glass feedstock is presented. In addition, the effects of different methods of sampling, dissolution methods, and standardization procedures and their effect on the quality of the chemical analysis will also be presented. The target glass simulates the material that will be prepared by the vitrification of highly radioactive liquid defense waste. The glass and feedstock streams must be well characterized to ensure a durable glass; current models estimate a 100,000 year lifetime. The elemental composition will be determined by ICP-emission spectrometry with radiation exposure issues requiring a multielement analysis for all constituents, on a single analytical sample, using compromise conditions.

  15. Approaches for achieving long-term accuracy and precision of δ18O and δ2H for waters analyzed using laser absorption spectrometers.

    PubMed

    Wassenaar, Leonard I; Coplen, Tyler B; Aggarwal, Pradeep K

    2014-01-21

    The measurement of δ(2)H and δ(18)O in water samples by laser absorption spectroscopy (LAS) is being adopted increasingly in hydrologic and environmental studies. Although LAS instrumentation is easy to use, its incorporation into laboratory operations is not as easy, owing to the extensive offline data manipulation required for outlier detection, derivation and application of algorithms to correct for between-sample memory, correction of linear and nonlinear instrumental drift, VSMOW-SLAP scale normalization, and long-term QA/QC audits. Here we propose a series of standardized water-isotope LAS performance tests and routine sample analysis templates, recommended procedural guidelines, and new data processing software (LIMS for Lasers) that together enable new and current LAS users to achieve and sustain long-term δ(2)H and δ(18)O accuracy and precision for these important isotopic assays. PMID:24328223

  16. A Study of the Accuracy and Precision Among XRF, ICP-MS, and PIXE on Trace Element Analyses of Small Water Samples

    NASA Astrophysics Data System (ADS)

    Naik, Sahil; Patnaik, Ritish; Kummari, Venkata; Phinney, Lucas; Dhoubhadel, Mangal; Jesseph, Aaron; Hoffmann, William; Verbeck, Guido; Rout, Bibhudutta

    2010-10-01

    The study aimed to compare the viability, precision, and accuracy of three popular instruments - X-ray Fluorescence (XRF), Inductively Coupled Plasma Mass Spectrometer (ICP-MS), and Particle-Induced X-ray Emission (PIXE) - used to analyze the trace elemental composition of small water samples. Ten-milliliter water samples from public tap water sources in seven different localities in India (Bangalore, Kochi, Bhubaneswar, Cuttack, Puri, Hospet, and Pipili) were prepared through filtration and dilution for analysis. The project speculates that ICP-MS will give the most accurate and precise trace elemental analysis, followed by PIXE and XRF. XRF is expected to serve as a portable and affordable instrument that can analyze samples on-site, while ICP-MS is an extremely accurate but expensive option for off-site analyses. PIXE is expected to be too expensive and cumbersome for on-site analysis; however, laboratories with a PIXE accelerator can use the instrument to obtain accurate analyses.

  17. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope ratio drift encountered in short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday-to-Faraday and Faraday-to-ion-counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion for obtaining an accurate detector time lag. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer-sized uranium particles (1-3.5 μm). Linear regression slope (LRS; one isotope plotted against the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using the LRS calculation with time lag and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
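
    The time-lag correction described above can be pictured as a search for the sample shift that maximizes the coefficient of determination between two isotope traces; the sketch below uses synthetic signals and is only the idea, not the authors' VBA implementation:

    ```python
    import numpy as np

    def best_lag(sig_a, sig_b, max_lag=20):
        """Return the shift of sig_b (in samples) maximizing the r^2 of a
        straight-line fit between the two signals."""
        best = (0, -np.inf)
        for lag in range(-max_lag, max_lag + 1):
            r = np.corrcoef(sig_a, np.roll(sig_b, lag))[0, 1]
            if r ** 2 > best[1]:
                best = (lag, r ** 2)
        return best

    rng = np.random.default_rng(4)
    t = np.arange(300.0)
    pulse = np.exp(-0.5 * ((t - 150.0) / 12.0) ** 2)     # transient ablation signal
    u238 = pulse + rng.normal(0, 5e-3, t.size)
    u235 = 0.0073 * np.roll(pulse, 3) + rng.normal(0, 5e-5, t.size)  # 3-sample detector lag

    lag, r2 = best_lag(u238, u235)
    print(f"aligning shift = {lag} samples (r^2 = {r2:.4f})")  # expect -3
    ```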

  18. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effects of (i) instrument settings and acquisition parameters, (ii) the concentrations of the analyte element (Hg) and the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results were evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN nor with CVG was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation affecting the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
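
    A hedged sketch of the internal-standard (Russell-law) step of the double correction described above; the isotope masses are standard atomic data, but the measured ratios are invented, and the Baxter regression and SSB steps are omitted:

    ```python
    import math

    # Standard atomic masses (u)
    m_hg202, m_hg198 = 201.970617, 197.966743
    m_tl205, m_tl203 = 204.974401, 202.972329

    r_tl_true = 2.3871                 # accepted 205Tl/203Tl ratio
    r_tl_meas = 2.4105                 # hypothetical measured ratio (heavy-biased)
    beta = math.log(r_tl_true / r_tl_meas) / math.log(m_tl205 / m_tl203)

    r_hg_meas = 2.9850                 # hypothetical measured 202Hg/198Hg
    r_hg_corr = r_hg_meas * (m_hg202 / m_hg198) ** beta
    print(f"beta = {beta:.3f}, corrected 202Hg/198Hg = {r_hg_corr:.4f}")
    ```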

  19. Dual-energy X-ray absorptiometry for measuring total bone mineral content in the rat: study of accuracy and precision.

    PubMed

    Casez, J P; Muehlbauer, R C; Lippuner, K; Kelly, T; Fleisch, H; Jaeger, P

    1994-07-01

    Sequential studies of osteopenic bone disease in small animals require the availability of non-invasive, accurate and precise methods to assess bone mineral content (BMC) and bone mineral density (BMD). Dual-energy X-ray absorptiometry (DXA), which is currently used in humans for this purpose, can also be applied to small animals by means of adapted software. Precision and accuracy of DXA was evaluated in 10 rats weighing 50-265 g. The rats were anesthetized with a mixture of ketamine-xylazine administered intraperitoneally. Each rat was scanned six times consecutively in the antero-posterior incidence after repositioning using the rat whole-body software for determination of whole-body BMC and BMD (Hologic QDR 1000, software version 5.52). Scan duration was 10-20 min depending on rat size. After the last measurement, rats were sacrificed and soft tissues were removed by dermestid beetles. Skeletons were then scanned in vitro (ultra high resolution software, version 4.47). Bones were subsequently ashed and dissolved in hydrochloric acid and total body calcium directly assayed by atomic absorption spectrophotometry (TBCa[chem]). Total body calcium was also calculated from the DXA whole-body in vivo measurement (TBCa[DXA]) and from the ultra high resolution measurement (TBCa[UH]) under the assumption that calcium accounts for 40.5% of the BMC expressed as hydroxyapatite. Precision error for whole-body BMC and BMD (mean +/- S.D.) was 1.3% and 1.5%, respectively. Simple regression analysis between TBCa[DXA] or TBCa[UH] and TBCa[chem] revealed tight correlations (r = 0.991 and 0.996, respectively), with slopes and intercepts which were significantly different from 1 and 0, respectively.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7950505
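
    The calcium conversion stated above, with calcium assumed to account for 40.5% of BMC expressed as hydroxyapatite, is a direct scaling; a tiny sketch with an illustrative BMC value:

    ```python
    # Illustrative only: convert DXA whole-body BMC (as hydroxyapatite) to total
    # body calcium under the stated 40.5% calcium fraction assumption.
    bmc_g = 2.47                       # hypothetical whole-body BMC of a rat, g
    tb_ca_g = 0.405 * bmc_g            # TBCa[DXA]
    print(f"TBCa[DXA] = {tb_ca_g:.2f} g")
    ```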

  20. The accuracy and precision of two non-invasive, magnetic resonance-guided focused ultrasound-based thermal diffusivity estimation methods

    PubMed Central

    Dillon, Christopher R.; Payne, Allison; Christensen, Douglas A.; Roemer, Robert B.

    2016-01-01

    Purpose The use of correct tissue thermal diffusivity values is necessary for making accurate thermal modeling predictions during magnetic resonance-guided focused ultrasound (MRgFUS) treatment planning. This study evaluates the accuracy and precision of two non-invasive thermal diffusivity estimation methods, a Gaussian Temperature method published by Cheng and Plewes in 2002 and a Gaussian specific absorption rate (SAR) method published by Dillon et al. in 2012. Materials and Methods Both methods utilize MRgFUS temperature data obtained during cooling following a short (<25 s) heating pulse. The Gaussian SAR method can also use temperatures obtained during heating. Experiments were performed at low heating levels (ΔT ~10°C) in ex vivo pork muscle and in vivo rabbit back muscle. The non-invasive MRgFUS thermal diffusivity estimates were compared with measurements from two standard invasive methods. Results Both non-invasive methods accurately estimate thermal diffusivity when using MR-temperature cooling data (overall ex vivo error < 6%, in vivo < 12%). Including heating data in the Gaussian SAR method further reduces errors (ex vivo error < 2%, in vivo < 3%). The significantly lower standard deviation values (p < 0.03) of the Gaussian SAR method indicate that it has better precision than the Gaussian Temperature method. Conclusions With repeated sonications, either MR-based method could provide accurate thermal diffusivity values for MRgFUS therapies. Fitting to more data simultaneously likely makes the Gaussian SAR method less susceptible to noise, and using heating data helps it converge more consistently to the FUS fitting parameters and thermal diffusivity. These effects lead to the improved precision of the Gaussian SAR method. PMID:25198092
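
    For an initially Gaussian temperature profile cooling by conduction, the squared width grows linearly in time with slope 4α, which is the essence of fitting thermal diffusivity from MR cooling data; the sketch below assumes this model form and invented numbers, and is not the authors' exact implementation:

    ```python
    import numpy as np

    # Assumed model form: sigma^2(t) = sigma0^2 + 4 * alpha * t during cooling.
    alpha_true = 0.135e-6                     # m^2/s, typical soft tissue
    t = np.linspace(0.0, 20.0, 40)            # s of cooling after the heating pulse
    sigma_sq = (2.0e-3) ** 2 + 4.0 * alpha_true * t
    sigma_sq = sigma_sq + np.random.default_rng(5).normal(0.0, 2e-8, t.size)  # MR noise

    slope, _ = np.polyfit(t, sigma_sq, 1)     # linear fit of sigma^2 against time
    print(f"estimated alpha = {slope / 4.0:.3e} m^2/s")
    ```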

  1. A robust method for high-precision quantification of the complex three-dimensional vasculatures acquired by X-ray microtomography.

    PubMed

    Tan, Hai; Wang, Dadong; Li, Rongxin; Sun, Changming; Lagerstrom, Ryan; He, You; Xue, Yanling; Xiao, Tiqiao

    2016-09-01

    The quantification of micro-vasculatures is important for the analysis of angiogenesis on which the detection of tumor growth or hepatic fibrosis depends. Synchrotron-based X-ray computed micro-tomography (SR-µCT) allows rapid acquisition of micro-vasculature images at micrometer-scale spatial resolution. Through skeletonization, the statistical features of the micro-vasculature can be extracted from the skeleton of the micro-vasculatures. Thinning is a widely used algorithm to produce the vascular skeleton in medical research. Existing three-dimensional thinning methods normally emphasize the preservation of topological structure rather than geometrical features in generating the skeleton of a volumetric object. This results in three problems and limits the accuracy of the quantitative results related to the geometrical structure of the vasculature. The problems include the excessively shortened length of elongated objects, eliminated branches of blood vessel tree structure, and numerous noisy spurious branches. The inaccuracy of the skeleton directly introduces errors in the quantitative analysis, especially on the parameters concerning the vascular length and the counts of vessel segments and branching points. In this paper, a robust method using a consolidated end-point constraint for thinning, which generates geometry-preserving skeletons in addition to maintaining the topology of the vasculature, is presented. The improved skeleton can be used to produce more accurate quantitative results. Experimental results from high-resolution SR-µCT images show that the end-point constraint produced by the proposed method can significantly improve the accuracy of the skeleton obtained using the existing ITK three-dimensional thinning filter. The produced skeleton has laid the groundwork for accurate quantification of the angiogenesis. This is critical for the early detection of tumors and assessing anti-angiogenesis treatments. PMID:27577778

  2. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

    Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of the SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of the SRM sensor response as well as an understanding of the uncertainties associated with the SRM measurement system. In this paper, we use the SRM at Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with the SRM measurement due to track positioning errors and test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without placing a u-channel sample on the SRM tray. The acquired KCC SRM sensor response shows a significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without a u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm, possibly due to patterns or damage on the tray surface or the rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that although the SRM tracking system has recognizable positioning uncertainties, they do not significantly debilitate the use of deconvolution to accurately restore high
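
    To make the convolution/deconvolution idea concrete, here is a hedged one-dimensional sketch: a blocky magnetization smoothed by a Gaussian sensor response and restored by regularized (Wiener-style) inverse filtering; the signal shapes, response width, noise level, and regularization constant are all invented:

    ```python
    import numpy as np

    n = 256
    x = np.zeros(n)
    x[80:120], x[160:200] = 1.0, -0.5                  # "true" magnetization blocks

    k = np.arange(n)
    response = np.exp(-0.5 * ((k - n / 2) / 8.0) ** 2)
    response /= response.sum()                         # normalized sensor response

    H = np.fft.fft(np.fft.ifftshift(response))
    measured = np.real(np.fft.ifft(np.fft.fft(x) * H))  # smoothed SRM-like signal
    measured += np.random.default_rng(8).normal(0, 1e-3, n)

    wiener = np.conj(H) / (np.abs(H) ** 2 + 1e-4)      # regularized inverse filter
    restored = np.real(np.fft.ifft(np.fft.fft(measured) * wiener))

    rms = lambda e: np.sqrt(np.mean(e ** 2))
    print(f"RMS error: measured {rms(measured - x):.3f}, restored {rms(restored - x):.3f}")
    ```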

  3. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT), and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimations. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ
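
    A toy version of the misregistration experiment above: shift a synthetic activity volume by sub-voxel amounts and record the relative error in the activity summed over a fixed VOI; the phantom, VOI, and shifts are invented, and this is not the QSPECT/QPlanar code:

    ```python
    import numpy as np
    from scipy.ndimage import shift as nd_shift

    vol = np.zeros((64, 64, 64))
    vol[24:40, 24:40, 24:40] = 1.0                     # "organ" activity
    voi = vol > 0.5                                    # VOI matching the organ

    reference = vol[voi].sum()
    for dx in (0.1, 0.5, 1.0):                         # shifts in voxels
        shifted = nd_shift(vol, (dx, 0, 0), order=1)   # linear interpolation
        err = 100 * (shifted[voi].sum() - reference) / reference
        print(f"shift {dx:.1f} voxel: activity error = {err:+.2f}%")
    ```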

  4. Bias, precision and accuracy in the estimation of cuticular and respiratory water loss: a case study from a highly variable cockroach, Perisphaeria sp.

    PubMed

    Gray, Emilie M; Chown, Steven L

    2008-01-01

    We compared the precision, bias and accuracy of two techniques that were recently proposed to estimate the contributions of cuticular and respiratory water loss to total water loss in insects. We performed measurements of VCO2 and VH2O in normoxia, hyperoxia and anoxia using flow-through respirometry on single individuals of the highly variable cockroach Perisphaeria sp. to compare estimates of cuticular and respiratory water loss (CWL and RWL) obtained by the VH2O-VCO2 y-intercept method with those obtained by the hyperoxic switch method. Precision was determined by assessing the repeatability of values obtained, whereas bias was assessed by comparing the methods' results to each other and to values for other species found in the literature. We found that CWL was highly repeatable by both methods (R ≥ 0.88) and resulted in similar values to measures of CWL determined during the closed-phase of discontinuous gas exchange (DGE). Repeatability of RWL was much lower (R = 0.40) and significant only in the case of the hyperoxic method. RWL derived from the hyperoxic method is higher (by 0.044 micromol min(-1)) than that obtained from the method traditionally used for measuring water loss during the closed-phase of DGE, suggesting that in the past RWL may have been underestimated. The very low cuticular permeability of this species (3.88 microg cm(-2) h(-1) Torr(-1)) is reasonable given the seasonally hot and dry habitat where it lives. We also tested the hygric hypothesis proposed to account for the evolution of discontinuous gas exchange cycles and found no effect of respiratory pattern on RWL, although the ratio of mean VH2O to VCO2 was higher for continuous patterns compared with discontinuous ones. PMID:17949739
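
    A minimal sketch of the y-intercept idea named above: regress water loss on CO2 release across varying respiratory rates and read cuticular water loss off the intercept; the synthetic numbers are only loosely scaled to the magnitudes quoted in the abstract:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    vco2 = rng.uniform(0.02, 0.30, 25)                 # umol/min, CO2 release
    cwl_true, slope_true = 0.15, 0.9                   # invented "true" values
    vh2o = cwl_true + slope_true * vco2 + rng.normal(0, 0.01, 25)

    slope, intercept = np.polyfit(vco2, vh2o, 1)
    print(f"estimated CWL (intercept) = {intercept:.3f} umol/min")
    ```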

  5. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA were then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients for the new method, ranging from 0.743 to 0.945, and Levene's test, which showed significantly lower variance in the data generated by the new method, both verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416

  6. High-Precision Surface Inspection: Uncertainty Evaluation within an Accuracy Range of 15μm with Triangulation-based Laser Line Scanners

    NASA Astrophysics Data System (ADS)

    Dupuis, Jan; Kuhlmann, Heiner

    2014-06-01

    Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a great amount of 3D-points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainties during the measurement process regarding the scanner warm-up, the impact of laser power and exposure time as well as scanner’s reaction to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaptation of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.

  7. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
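
    Because CTDI100 is the integral of the single-rotation dose profile over the central 100 mm divided by the nominal total beam width, the reported dose-versus-distance output maps directly onto a numerical integration; the Gaussian profile below is made up, not measured data:

    ```python
    import numpy as np

    z = np.linspace(-50.0, 50.0, 201)                  # mm along the dosimeter
    beam_width = 28.8                                  # mm, nominal total beam width N*T
    profile = 10.0 * np.exp(-0.5 * (z / (beam_width / 2.355)) ** 2)  # dose, mGy

    ctdi100 = np.trapz(profile, z) / beam_width        # integrate and normalize
    print(f"CTDI100 = {ctdi100:.2f} mGy")
    ```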

  8. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  9. Accuracy and precision of porosity estimates based on velocity inversion of surface ground-penetrating radar data: A controlled experiment at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Bradford, J.; Clement, W.

    2006-12-01

    Although rarely acquired, ground penetrating radar (GPR) data acquired in continuous multi-offset geometries can substantially improve our understanding of the subsurface compared to conventional single-offset surveys. This improvement arises because multi-offset data enable full use of the information that the GPR signal can carry. The added information allows us to maximize the material property information extracted from a GPR survey. Of the array of potential multi-offset GPR measurements, traveltime-versus-offset information enables laterally and vertically continuous electromagnetic (EM) velocity measurements. In turn, the EM velocities provide estimates of water content via petrophysical relationships such as the CRIM or Topp's equations. In fully saturated media the water content is a direct measure of bulk porosity. The Boise Hydrogeophysical Research Site (BHRS) is an experimental wellfield located in a shallow alluvial aquifer near Boise, Idaho. In July 2006 we conducted a controlled 3D multi-offset GPR experiment at the BHRS designed to test the accuracy of state-of-the-art velocity analysis methodologies. We acquired continuous multi-offset GPR data over an approximately 20 m × 30 m 3D area. The GPR system was a Sensors and Software pulseEkko Pro multichannel system with 100 MHz antennas and was configured with 4 receivers and a single transmitter. Data were acquired in off-end geometry for a total of 16 offsets with a 1 m offset interval and 1 m near offset. The data were acquired on a 1 m × 1 m grid in four passes, each consisting of a 3 m range of equally spaced offsets. The survey encompassed 13 wells finished to the ~20 m depth of the unconfined aquifer. We established velocity control by acquiring vertical radar profiles (VRPs) in all 13 wells. Preliminary velocity measurements using an established method of reflection tomography were within about 1 percent of local 1D velocity distributions determined from the VRPs. Vertical velocity precision from the
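
    As the abstract notes, in fully saturated media the GPR-derived EM velocity converts to porosity through a petrophysical mixing law; a hedged sketch using the CRIM relation with assumed permittivities (site-specific values are not reproduced here):

    ```python
    c = 0.2998          # m/ns, speed of light in vacuum
    v = 0.085           # m/ns, hypothetical EM velocity from traveltime inversion
    k_bulk = (c / v) ** 2                 # bulk relative permittivity

    k_matrix, k_water = 5.0, 81.0         # assumed relative permittivities
    # CRIM for full saturation: sqrt(k_bulk) = (1 - phi)*sqrt(k_matrix) + phi*sqrt(k_water)
    porosity = (k_bulk ** 0.5 - k_matrix ** 0.5) / (k_water ** 0.5 - k_matrix ** 0.5)
    print(f"bulk permittivity = {k_bulk:.1f}, porosity = {porosity:.2f}")
    ```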

  9. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-04-01

    Background and purpose - We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods - Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results - Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation - CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843
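
    The transform algebra at the heart of the method (compare the stem and bone registrations, express the difference in a stem-fixed frame, and read off 3 translations and 3 rotations) can be sketched in a few lines. This is a schematic of the idea, not the authors' implementation; the coordinate conventions are placeholders.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def relative_migration(T_stem, T_bone):
    """Given 4x4 rigid transforms mapping stem and bone from the first
    CT to the second, express the bone motion in the stem's frame.  If
    the stem has not moved relative to the bone, this is the identity."""
    T_rel = np.linalg.inv(T_stem) @ T_bone
    translation = T_rel[:3, 3]                       # mm, 3 parameters
    angles = Rotation.from_matrix(T_rel[:3, :3]).as_euler(
        "xyz", degrees=True)                         # deg, 3 parameters
    return translation, angles

# Toy example: bone shifts 0.2 mm along x while the stem stays put.
T_s = np.eye(4)
T_b = np.eye(4); T_b[0, 3] = 0.2
t, r = relative_migration(T_s, T_b)
print("translation (mm):", t, "rotation (deg):", r)
```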

  11. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA)

    PubMed Central

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    Background and purpose — We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods — Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results — Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation — CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  12. The 1998-2000 SHADOZ (Southern Hemisphere ADditional OZonesondes) Tropical Ozone Climatology: Ozonesonde Precision, Accuracy and Station-to-Station Variability

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, Anne M.; McPeters, R. D.; Oltmans, S. J.; Schmidlin, F. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    As part of the SAFARI-2000 campaign, additional launches of ozonesondes were made at Irene, South Africa, and at Lusaka, Zambia. These represent campaign augmentations to the SHADOZ database described in this paper. This network of 10 southern hemisphere tropical and subtropical stations, designated the Southern Hemisphere ADditional OZonesondes (SHADOZ) project and established from operational sites, provided over 1000 profiles from ozonesondes and radiosondes during the period 1998-2000. (Since that time, two more stations, one in southern Africa, have joined SHADOZ.) Archived data are available at: http://code916.gsfc.nasa.gov/Data-services/shadoz. Uncertainties and accuracies within the SHADOZ ozone data set are evaluated by analyzing: (1) imprecisions in stratospheric ozone profiles and in methods of extrapolating ozone above balloon burst; (2) comparisons of column-integrated total ozone from sondes with total ozone from the Earth-Probe/TOMS (Total Ozone Mapping Spectrometer) satellite and ground-based instruments; (3) possible station-to-station biases due to variations in ozonesonde characteristics. The key results are: (1) ozonesonde precision is 5%; (2) integrated total ozone column amounts from the sondes are in good agreement (2-10%) with independent measurements from ground-based instruments at five SHADOZ sites and with overpass measurements from the TOMS satellite (version 7 data); (3) systematic variations in TOMS-sonde offsets and in ground-based-sonde offsets from station to station reflect biases in sonde technique as well as in satellite retrieval, and discrepancies are present in both stratospheric and tropospheric ozone; (4) there is evidence for a zonal wave-one pattern in total and tropospheric ozone, but not in stratospheric ozone.
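
    One of the evaluated quantities, column-integrated total ozone from a sonde profile, is a straightforward numeric integral of the ozone partial pressure. The sketch below shows that integration for a synthetic profile; the profile itself, and the omission of an above-burst extrapolation, are simplifications of this sketch rather than SHADOZ processing.

```python
import numpy as np

K_B = 1.380649e-23      # Boltzmann constant, J/K
DU = 2.687e20           # molecules per m^2 in one Dobson unit

def column_ozone_du(z_m, p_o3_pa, temp_k):
    """Integrate an ozonesonde profile to a total column in Dobson
    units: number density n = p_O3 / (k_B * T), trapezoid rule over
    altitude.  Extrapolation above balloon burst (a key uncertainty
    discussed above) is deliberately omitted here."""
    n = p_o3_pa / (K_B * temp_k)                     # molecules / m^3
    col = np.sum(0.5 * (n[1:] + n[:-1]) * np.diff(z_m))
    return col / DU

# Crude synthetic profile (0-35 km) just to exercise the function.
z = np.linspace(0.0, 35e3, 351)
p_o3 = 0.012 * np.exp(-0.5 * ((z - 25e3) / 5e3) ** 2)   # Pa, peak near 25 km
t = np.full_like(z, 220.0)                               # K
print(f"{column_ozone_du(z, p_o3, t):.0f} DU")
```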

  13. Analysis of the accuracy and precision of the McMaster method in detection of the eggs of Toxocara and Trichuris species (Nematoda) in dog faeces.

    PubMed

    Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew

    2013-07-01

    The aim of this study was to determine the accuracy and precision of the McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in faeces of dogs. Four variants of the McMaster method were used for counting: in one grid, two grids, the whole McMaster chamber, and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis in 1 g of faeces. To assess the influence of the kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched only with A. suum eggs in the same way. The highest limit of detection (the lowest level of eggs detected in at least 50% of repetitions) in all McMaster chamber variants was obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best limit of detection and sensitivity, and the lowest coefficients of variation, were obtained with the whole McMaster chamber variant. There was no significant impact of the properties of the faeces on the results. Multiplication factors for the whole chamber were calculated on the basis of the transformed equation of the regression line relating the number of detected eggs to the number of eggs added to the sample. Multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected under the McMaster method with Raynaud's modification. PMID:23951934
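
    The multiplication-factor calculation described above (inverting the regression of detected on added eggs) and the 50%-detection limit can be sketched as below. The counts and detection fractions are invented placeholders, not the study's data.

```python
import numpy as np

# Spiked levels (eggs per gram) and illustrative mean counts of eggs
# actually detected at each level; not the study's data.
added    = np.array([15, 25, 50, 100, 150, 200, 250, 300], dtype=float)
detected = np.array([ 2,  4,  9,  21,  32,  41,  55,  63], dtype=float)

# Least-squares line detected = a * added + b; the multiplication
# factor is the inverse slope, i.e. how many eggs per gram each
# detected egg represents.
a, b = np.polyfit(added, detected, 1)
print(f"slope = {a:.3f}, multiplication factor ~ {1 / a:.1f}")

# Limit of detection as defined above: the lowest spiked level at
# which at least 50% of the repetitions found one or more eggs.
frac_positive = np.array([0.30, 0.45, 0.65, 1.0, 1.0, 1.0, 1.0, 1.0])
lod = added[np.argmax(frac_positive >= 0.5)]
print(f"limit of detection ~ {lod:.0f} eggs/g")
```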

  14. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
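
    The blank correction implied above (subtracting fixed modern and fossil carbon contributions from each measurement) is a simple mass balance on fraction modern. The sketch below uses the blank masses quoted in the abstract but otherwise invents its inputs; the constant-blank assumption is the sketch's simplification.

```python
def blank_correct_fm(fm_meas, m_meas_ug, m_modern=0.8, fm_modern=1.0,
                     m_fossil=0.67, fm_fossil=0.0):
    """Mass-balance blank correction of a fraction-modern (Fm)
    measurement: the measured carbon is the true sample plus a fixed
    modern blank and a fixed fossil (14C-free) blank.  Default blank
    masses (ug C) are the setup constants reported above; everything
    else is illustrative."""
    m_true = m_meas_ug - m_modern - m_fossil
    return (fm_meas * m_meas_ug
            - fm_modern * m_modern
            - fm_fossil * m_fossil) / m_true

# A hypothetical 60 ug C EC fraction measured at Fm = 0.25:
print(f"corrected Fm = {blank_correct_fm(0.25, 60.0):.3f}")
```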

  15. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  16. The effects of temporal-precision and time-minimization constraints on the spatial and temporal accuracy of aimed hand movements.

    PubMed

    Carlton, L G

    1994-03-01

    Discrete aimed hand movements, made by subjects given temporal-accuracy or time-minimization task instructions, were compared. Movements in the temporal-accuracy task were made to a point target with a goal movement time of 400 ms. A circular target was then manufactured that incorporated the measured spatial errors from the temporal-accuracy task, and subjects attempted to contact the target with a minimum movement time and without missing the circular target (time-minimization task instructions). This procedure resulted in equal movement amplitude and approximately equal spatial accuracy for the two sets of task instructions. Movements under the time-minimization instructions were completed rapidly (M = 307 ms) without target misses and tended to be made up of two submovements. In contrast, movements under temporal-accuracy instructions were made more slowly (M = 397 ms), matching the goal movement time, and were typically characterized by a single submovement. These data support the hypothesis that movement times, at a fixed ratio of movement amplitude to target width, decrease as the number of submovements increases, and that movements produced under temporal-accuracy and time-minimization instructions have different control characteristics. These control differences are related to the linear and logarithmic speed-accuracy relations observed for temporal-accuracy and time-minimization tasks, respectively. PMID:15757833
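
    For readers unfamiliar with the two speed-accuracy relations named in the last sentence, the sketch below contrasts a logarithmic (Fitts-type) law for time-minimization aiming with a linear law for temporally constrained movements. All coefficients are illustrative and are not fitted to this study.

```python
import numpy as np

def fitts_mt(A, W, a=0.05, b=0.12):
    """Fitts' law for time-minimization aiming: movement time grows
    with the log of the amplitude-to-width ratio.  Coefficients a, b
    (seconds) are illustrative."""
    return a + b * np.log2(2 * A / W)

def linear_mt(A, We, k=0.04):
    """Linear speed-accuracy tradeoff for temporally constrained
    movements: effective target width We scales with average speed
    A/MT, so MT = k * A / We for an illustrative constant k."""
    return k * A / We

A, W = 0.20, 0.02   # 20 cm amplitude, 2 cm target
print(f"Fitts MT  ~ {fitts_mt(A, W) * 1000:.0f} ms")
print(f"linear MT ~ {linear_mt(A, W) * 1000:.0f} ms")
```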

  17. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications, including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high-field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small-bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes, replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates, and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many
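
    The reported statistics (mean ± SD of targeting error plus a per-axis normality check) are easy to reproduce on any set of measured offsets. The sketch below does so on simulated errors; the magnitudes are arbitrary stand-ins for real measurements.

```python
import numpy as np
from scipy import stats

# Illustrative targeting errors (mm) along the two orthogonal axes for
# 20 sonications; not the study's measurements.
rng = np.random.default_rng(0)
err = rng.normal(loc=[0.3, 0.1], scale=[0.5, 0.3], size=(20, 2))

radial = np.linalg.norm(err, axis=1)
print(f"accuracy = {radial.mean():.2f} +/- {radial.std(ddof=1):.2f} mm")

# Check each axis for normality and compare the spread of the
# left/right vs superior/inferior errors, as in the abstract.
for name, axis in zip(("left/right", "sup/inf"), err.T):
    w, p = stats.shapiro(axis)
    print(f"{name}: sd = {axis.std(ddof=1):.2f} mm, Shapiro-Wilk p = {p:.2f}")
```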

  18. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort/cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population, and of stratification methods, on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance rather than indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years at a power of 0.80 was not possible, regardless of technique, without high sampling effort (48% of the study site). Detecting a 25% decline was possible after 15 years, but still required high sampling effort. Our results suggest that detecting moderate changes in the abundance of freshwater salmonids requires considerable resource and temporal commitments, and they highlight the difficulties of using abundance measures for monitoring bull trout populations.
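
    A stripped-down version of the kind of Monte Carlo power analysis described above: simulate noisy annual counts under a known total decline, test for a negative log-linear trend, and report the fraction of significant outcomes. The observation-error model and parameter values are generic choices, not the authors'.

```python
import numpy as np
from scipy import stats

def power_to_detect_decline(n0=500, decline=0.25, years=5, cv=0.4,
                            alpha=0.05, n_sims=2000, seed=1):
    """Monte Carlo power of a log-linear trend test to detect a given
    total decline from annual count surveys with lognormal observation
    error (coefficient of variation cv).  A generic sketch, not the
    authors' simulation model."""
    rng = np.random.default_rng(seed)
    t = np.arange(years + 1)
    true_n = n0 * (1 - decline) ** (t / years)       # smooth decline
    sigma = np.sqrt(np.log(1 + cv ** 2))
    hits = 0
    for _ in range(n_sims):
        counts = true_n * rng.lognormal(-0.5 * sigma**2, sigma, t.size)
        slope, _, _, p, _ = stats.linregress(t, np.log(counts))
        hits += (p < alpha) and (slope < 0)
    return hits / n_sims

print(f"power ~ {power_to_detect_decline():.2f}")   # low, as found above
```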

  19. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  20. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom-graded thickness) on a flat or curved substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with a time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  1. Precision Fabrication of a Large-Area Sinusoidal Surface Using a Fast-Tool-Servo Technique: Improvement of Local Fabrication Accuracy

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Tano, Makoto; Araki, Takeshi; Kiyono, Satoshi

    This paper describes a diamond-turning fabrication system for a sinusoidal grid surface. The wavelength and amplitude of the sinusoidal wave in each direction are 100 µm and 100 nm, respectively. The fabrication system, which is based on a fast tool servo (FTS), has the ability to generate the angle grid surface over an area of φ150 mm. This paper focuses on the improvement of the local fabrication accuracy. The areas considered are each approximately 1 × 1 mm and can be imaged by an interference microscope. Specific fabrication errors of the manufacturing process, caused by the round-nose geometry of the diamond cutting tool and by data digitization, are successfully identified by discrete Fourier transform of the microscope images. Compensation processes are carried out to reduce the errors. As a result, the fabrication errors in local areas of the angle grid surface are reduced to about one-tenth of their original level.

  2. Preliminary assessment of the accuracy and precision of TOPEX/POSEIDON altimeter data with respect to the large-scale ocean circulation

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1994-01-01

    TOPEX/POSEIDON sea surface height measurements are examined for quantitative consistency with known elements of the oceanic general circulation and its variability. Project-provided corrections were accepted but are tested as part of the overall results. The ocean was treated as static over each 10-day repeat cycle, and maps of the absolute sea surface topography were constructed from simple averages in 2° × 2° bins. A hybrid geoid model formed from a combination of the recent Joint Gravity Model-2 and the project-provided Ohio State University geoid was used to estimate the absolute topography in each 10-day period. Results are examined in terms of the annual average, seasonal average, seasonal variations, and variations near the repeat period. Conclusions are as follows: the orbit error is now difficult to observe, having been reduced to a level at or below that of other error sources; the geoid dominates the error budget of the estimates of the absolute topography; the estimated seasonal cycle is consistent with prior estimates; shorter-period variability is dominated on the largest scales by an oscillation near 50 days in the spherical harmonics Y_1^m(θ, λ) with an amplitude near 10 cm, close to the simplest alias of the M2 tide. This spectral peak and others visible in the periodograms support the hypothesis that the largest remaining time-dependent errors lie in the tidal models. Though discrepancies attributed to the geoid are within the formal uncertainties of the geoid estimates, their removal is urgent for circulation studies. Current gross accuracy of the TOPEX/POSEIDON mission is in the range of 5-10 cm, distributed over a broad band of frequencies and wavenumbers. In finite bands, accuracies approach the 1-cm level, and expected improvements arising from extended mission duration should reduce these numbers by nearly an order of magnitude.

  3. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e., based on the contrast of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of the information gathered from leaf vein systems. PMID:25096977

  4. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  5. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature, and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing, and instrument type (manufacturer) are taken into account.

  6. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species' status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition in 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column-purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than the other treatments. Quantitative PCR results for column-purified samples were more precise than those for the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column-purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and, if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentration estimates while retaining the greatest assay sensitivity.

  7. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  8. Precision optical metrology without lasers

    NASA Astrophysics Data System (ADS)

    Bergmann, Ralf B.; Burke, Jan; Falldorf, Claas

    2015-07-01

    Optical metrology is a key technique when it comes to precise and fast measurement with a resolution down to the micrometer or even nanometer regime. The choice of a particular optical metrology technique and the quality of the results depend on sample parameters such as size, geometry, and surface roughness, as well as user requirements such as resolution, measurement time, and robustness. Interferometry-based techniques are well known for their low measurement uncertainty in the nm range, but usually require careful isolation against vibration and a laser source that often needs shielding for reasons of eye safety. In this paper, we concentrate on high-precision optical metrology without lasers by using the gradient-based measurement technique of deflectometry and the finite-difference-based technique of shear interferometry. Careful calibration of deflectometry systems allows one to investigate virtually all kinds of reflecting surfaces, including aspheres or free-form surfaces, with measurement uncertainties below the µm level. Computational Shear Interferometry (CoSI) allows us to combine interferometric accuracy with the possibility of using cheap and eye-safe low-brilliance light sources such as fiber-coupled LEDs or even liquid crystal displays. We use CoSI, e.g., for quantitative phase-contrast imaging in microscopy. We highlight the advantages of both methods, discuss their transfer functions, and present results on the precision of both techniques.

  9. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, an SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of over two years (30+ analyses by each technique). We show that both digestion
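
    The accuracy and external-reproducibility bookkeeping described above reduces to two statistics per element, sketched below. The measurements and the accepted value are placeholders for illustration, not the BHVO-2 results.

```python
import numpy as np

def srm_performance(measured, accepted):
    """Accuracy as mean percent deviation from the accepted value, and
    external reproducibility as percent relative standard deviation,
    for one element in an SRM run repeatedly as an unknown."""
    m = np.asarray(measured, dtype=float)
    accuracy = 100.0 * (m.mean() - accepted) / accepted
    rsd = 100.0 * m.std(ddof=1) / m.mean()
    return accuracy, rsd

# e.g. six hypothetical TiO2 determinations (wt%) vs an assumed
# accepted value of 2.73; both are illustrative numbers.
acc, rsd = srm_performance([2.70, 2.75, 2.74, 2.71, 2.76, 2.72], 2.73)
print(f"accuracy = {acc:+.1f}%, reproducibility = {rsd:.1f}% RSD")
```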

  10. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a frequently queried part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a suitable measure nor effective methods for such evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
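
    The precision and recall of a query result against a reference answer set, the core quantities named above, can be computed directly. The sketch below is that calculation in its simplest set-based form, which is far simpler than the paper's statistical framework.

```python
def relative_accuracy(returned, truth):
    """Precision and recall of a query result against a reference
    ('ground truth') answer set -- the spirit of relative accuracy,
    not the paper's full metric."""
    returned, truth = set(returned), set(truth)
    tp = len(returned & truth)                 # correctly returned rows
    precision = tp / len(returned) if returned else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

# Query returns 4 rows, 3 of which are correct; 5 rows are truly correct.
p, r = relative_accuracy({"r1", "r2", "r3", "r9"},
                         {"r1", "r2", "r3", "r4", "r5"})
print(f"precision = {p:.2f}, recall = {r:.2f}")
```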

  11. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a frequently queried part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a suitable measure nor effective methods for such evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  12. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy in electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  13. Precision electron polarimetry

    SciTech Connect

    Chudakov, E.

    2013-11-07

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy in electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  14. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilovoltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilovoltage x-ray projection and imaging devices installed on the floor and ceiling of the treatment room. The system determines the positioning error, in the form of three translations and three rotations, by registering two x-ray images acquired online with the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various "planned" setup errors. Both the IGPS-O system and a commercial On-Board Imager cone-beam computed tomography (OBI CBCT) system were used to obtain the setup errors of the phantom. Differences between the results of the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were -0.13 ± 0.09 mm, 0.03 ± 0.25 mm, and 0.04 ± 0.31 mm, respectively. The maximum difference was only 0.51 mm across all directions, and the angular discrepancy between the two systems was 0.3 ± 0.5°. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal, and the two positioning systems are in general agreement. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as that of CBCT and can be used in clinical IGRT applications.

  15. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
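
    The sampling design described above (30 random subsets of the 75 gages at each of six censoring levels) is easy to reproduce; the sketch below builds just the subsets, with AFINCH itself left out as the flow-estimation step.

```python
import numpy as np

def censored_subsets(gage_ids, censor_fracs=(0.1, 0.2, 0.3, 0.4, 0.5, 0.75),
                     n_subsets=30, seed=0):
    """For each censoring level, draw n_subsets random retentions of
    the gage network, mirroring the bootstrap design described above.
    A sketch of the sampling design only."""
    rng = np.random.default_rng(seed)
    gage_ids = np.asarray(gage_ids)
    sets = {}
    for f in censor_fracs:
        keep = round(len(gage_ids) * (1 - f))
        sets[f] = [rng.choice(gage_ids, size=keep, replace=False)
                   for _ in range(n_subsets)]
    return sets

subsets = censored_subsets(np.arange(75))
print(len(subsets[0.5]), "subsets retaining", len(subsets[0.5][0]), "gages")
```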

  16. Online image-guided intensity-modulated radiotherapy for prostate cancer: How much improvement can we expect? A theoretical assessment of clinical benefits and potential dose escalation by improving precision and accuracy of radiation delivery

    SciTech Connect

    Ghilezan, Michel; Yan Di . E-mail: dyan@beaumont.edu; Liang Jian; Jaffray, David; Wong, John; Martinez, Alvaro

    2004-12-01

    Purpose: To quantify the theoretical benefit, in terms of improvement in precision and accuracy of treatment delivery and in dose increase, of using online image-guided intensity-modulated radiotherapy (IG-IMRT) performed with onboard cone-beam computed tomography (CT), in an ideal setting of no intrafraction motion/deformation, in the treatment of prostate cancer. Methods and materials: Twenty-two prostate cancer patients treated with conventional radiotherapy underwent multiple serial CT scans (median 18 scans per patient) during their treatment. We assumed that these data sets were equivalent to image sets obtainable by an onboard cone-beam CT. Each patient treatment was simulated with conventional IMRT and online IG-IMRT separately. The conventional IMRT plan was generated on the basis of pretreatment CT, with a clinical target volume to planning target volume (CTV-to-PTV) margin of 1 cm, and the online IG-IMRT plan was created before each treatment fraction on the basis of the CT scan of the day, without CTV-to-PTV margin. The inverse planning process was similar for both conventional IMRT and online IG-IMRT. Treatment dose for each organ of interest was quantified, including patient daily setup error and internal organ motion/deformation. We used generalized equivalent uniform dose (EUD) to compare the two approaches. The generalized EUD (percentage) of each organ of interest was scaled relative to the prescription dose at treatment isocenter for evaluation and comparison. On the basis of bladder wall and rectal wall EUD, a dose-escalation coefficient was calculated, representing the potential increment of the treatment dose achievable with online IG-IMRT as compared with conventional IMRT. Results: With respect to radiosensitive tumor, the average EUD for the target (prostate plus seminal vesicles) was 96.8% for conventional IMRT and 98.9% for online IG-IMRT, with standard deviations (SDs) of 5.6% and 0.7%, respectively (p < 0.0001). The average EUDs of
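
    The generalized EUD used as the comparison metric above has a standard closed form: a volume-weighted power mean of the subvolume doses. The sketch below implements it; the toy dose-volume histogram and the choice a = -10 are illustrative.

```python
import numpy as np

def geud(doses, volumes, a):
    """Generalized equivalent uniform dose: the volume-weighted
    power-law mean of the subvolume doses.  a < 0 emphasizes cold
    spots (targets); large positive a emphasizes hot spots (serial
    organs at risk)."""
    d = np.asarray(doses, dtype=float)
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                          # normalize partial volumes
    return np.sum(v * d ** a) ** (1.0 / a)

# Toy DVH: 90% of the target at 78 Gy, 10% underdosed at 70 Gy.
print(f"gEUD(a=-10) = {geud([78, 70], [0.9, 0.1], a=-10):.1f} Gy")
```

    The cold spot drags the gEUD well below the 90th-percentile dose, which is exactly why the abstract's target comparison is sensitive to the smaller margins that image guidance permits.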

  17. Precision digital control systems

    NASA Astrophysics Data System (ADS)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  18. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers, which are common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances in rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740
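
    As a point of contrast with the paper's convex rank-minimization formulation, the sketch below implements a classical robust-regression baseline: iteratively reweighted least squares with Huber weights, which also resists gross outliers. It is not the paper's algorithm.

```python
import numpy as np

def huber_irls(x, y, delta=1.35, n_iter=30):
    """Robust line fit by iteratively reweighted least squares with
    Huber weights: residuals beyond delta robust standard deviations
    are progressively down-weighted."""
    X1 = np.column_stack([x, np.ones(len(x))])       # add intercept
    w = np.ones(len(y))
    for _ in range(n_iter):
        WX = X1 * w[:, None]
        beta = np.linalg.lstsq(WX.T @ X1, WX.T @ y, rcond=None)[0]
        r = y - X1 @ beta
        scale = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust sigma
        w = np.clip(delta / (np.abs(r / scale) + 1e-12), None, 1.0)
    return beta

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)
y[:10] += 30                                 # gross outliers
print("robust fit (slope, intercept):", huber_irls(x, y))
```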

  19. A precise spectrophotometric method for measuring sodium dodecyl sulfate concentration.

    PubMed

    Rupprecht, Kevin R; Lang, Ewa Z; Gregory, Svetoslava D; Bergsma, Janet M; Rae, Tracey D; Fishpaugh, Jeffrey R

    2015-10-01

    Sodium dodecyl sulfate (SDS) is used to denature and solubilize proteins, especially membrane and other hydrophobic proteins. A quantitative method to determine the concentration of SDS using the dye Stains-All is known. However, this method lacks the accuracy and reproducibility necessary for use with protein solutions where SDS concentration is a critical factor, so we modified it after examining multiple parameters (solvent, pH, buffers, and light exposure). The improved method is simple to implement, robust, accurate, and, most importantly, precise. PMID:26150094

  20. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (first-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  1. Precision Nova operations

    NASA Astrophysics Data System (ADS)

    Ehrlich, Robert B.; Miller, John L.; Saunders, Rodney L.; Thompson, Calvin E.; Weiland, Timothy L.; Laumann, Curt W.

    1995-12-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the 'foot' and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 micrometer rms. We have also added a 'fail-safe chirp' system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  2. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the 'foot' and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 µm rms. We have also added a 'fail-safe chirp' system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  3. An improved robust hand-eye calibration for endoscopy navigation system

    NASA Astrophysics Data System (ADS)

    He, Wei; Kang, Kumsok; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2016-03-01

    Endoscopy is widely used in clinical practice, and a surgical navigation system is an extremely important way to enhance the safety of endoscopy. The key to improving the accuracy of the navigation system is to determine the positional relationship between the camera and the tracking marker precisely. The problem can be solved by the hand-eye calibration method based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain incomplete motion samples, which make the algorithm unstable and inaccurate. An advanced selection rule for sample motions is proposed in this paper to improve the stability and accuracy of the methods based on dual quaternions. By setting a motion filter to reject the incomplete motion samples, a high-precision and robust result is finally achieved. The experimental results show that the accuracy and stability of camera registration are effectively improved by selecting sample motion data automatically.
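
    The selection rule is described only abstractly above, but a common screw-congruence check captures its spirit: for valid motion pairs in the hand-eye equation AX = XB, the rotation angles of A and B must agree, and near-zero rotations constrain the solution poorly. The sketch below filters on both; the thresholds are illustrative, not the paper's.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def filter_motion_pairs(As, Bs, angle_tol_deg=1.0, min_angle_deg=5.0):
    """Screen candidate 4x4 motion pairs (A_i, B_i) for AX = XB: keep
    only pairs whose rotation angles (a rigid-motion invariant) agree
    and are large enough to constrain the calibration."""
    keep = []
    for i, (A, B) in enumerate(zip(As, Bs)):
        ang_a = np.degrees(Rotation.from_matrix(A[:3, :3]).magnitude())
        ang_b = np.degrees(Rotation.from_matrix(B[:3, :3]).magnitude())
        if abs(ang_a - ang_b) < angle_tol_deg and ang_a > min_angle_deg:
            keep.append(i)
    return keep

# A zero-angle (degenerate) motion pair is rejected:
print(filter_motion_pairs([np.eye(4)], [np.eye(4)]))   # -> []
```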

  4. State of the Field: Extreme Precision Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fischer, Debra A.; Anglada-Escude, Guillem; Arriagada, Pamela; Baluev, Roman V.; Bean, Jacob L.; Bouchy, Francois; Buchhave, Lars A.; Carroll, Thorsten; Chakraborty, Abhijit; Crepp, Justin R.; Dawson, Rebekah I.; Diddams, Scott A.; Dumusque, Xavier; Eastman, Jason D.; Endl, Michael; Figueira, Pedro; Ford, Eric B.; Foreman-Mackey, Daniel; Fournier, Paul; Fűrész, Gabor; Gaudi, B. Scott; Gregory, Philip C.; Grundahl, Frank; Hatzes, Artie P.; Hébrard, Guillaume; Herrero, Enrique; Hogg, David W.; Howard, Andrew W.; Johnson, John A.; Jorden, Paul; Jurgenson, Colby A.; Latham, David W.; Laughlin, Greg; Loredo, Thomas J.; Lovis, Christophe; Mahadevan, Suvrath; McCracken, Tyler M.; Pepe, Francesco; Perez, Mario; Phillips, David F.; Plavchan, Peter P.; Prato, Lisa; Quirrenbach, Andreas; Reiners, Ansgar; Robertson, Paul; Santos, Nuno C.; Sawyer, David; Segransan, Damien; Sozzetti, Alessandro; Steinmetz, Tilo; Szentgyorgyi, Andrew; Udry, Stéphane; Valenti, Jeff A.; Wang, Sharon X.; Wittenmyer, Robert A.; Wright, Jason T.

    2016-06-01

    The Second Workshop on Extreme Precision Radial Velocities defined, circa 2015, the state of the art in Doppler precision and identified the critical-path challenges for reaching 10 cm s‑1 measurement precision. The presentations and discussion of key issues for instrumentation and data analysis, and the workshop recommendations for achieving this bold precision, are summarized here. Beginning with the High Accuracy Radial Velocity Planet Searcher spectrograph, technological advances for precision radial velocity (RV) measurements have focused on building extremely stable instruments. To reach still higher precision, future spectrometers will need to improve upon the state of the art, producing even higher fidelity spectra. This should be possible with improved environmental control, greater stability in the illumination of the spectrometer optics, better detectors, more precise wavelength calibration, and broader bandwidth spectra. Key data analysis challenges for the precision RV community include distinguishing center of mass (COM) Keplerian motion from photospheric velocities (time-correlated noise) and the proper treatment of telluric contamination. Success here is coupled to the instrument design, but also requires the implementation of robust statistical and modeling techniques. COM velocities produce Doppler shifts that affect every line identically, while photospheric velocities produce line profile asymmetries with wavelength and temporal dependencies that are different from Keplerian signals. Exoplanets are an important subfield of astronomy and there has been an impressive rate of discovery over the past two decades. However, higher precision RV measurements are required to serve as a discovery technique for potentially habitable worlds, to confirm and characterize detections from transit missions, and to provide mass measurements for other space-based missions. The future of exoplanet science has very different trajectories depending on the precision that
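
    To make the 10 cm/s goal concrete: the corresponding non-relativistic Doppler shift is delta_lambda = lambda * v / c, a tiny fraction of a spectral resolution element on any real spectrograph. A one-line check:

```python
C = 299_792_458.0   # speed of light, m/s

def doppler_shift_nm(wavelength_nm, rv_m_per_s):
    """Non-relativistic Doppler shift of a spectral line for a given
    radial velocity: delta_lambda = lambda * v / c."""
    return wavelength_nm * rv_m_per_s / C

# Shift of a 550 nm line at the 10 cm/s precision goal: ~1.8e-7 nm.
print(f"{doppler_shift_nm(550.0, 0.10):.2e} nm")
```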

  5. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land-coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives, so ALOS carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy using SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.
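
    A hedged sketch of the kind of cross-check described, with illustrative names and with light-time, tropospheric, and station-geometry corrections ignored: compare SLR-observed ranges against ranges implied by the GPS-determined orbit and summarize the residuals.

    ```python
    import numpy as np

    def slr_orbit_check(station_pos, sat_pos_gps, observed_ranges):
        """Residuals between SLR-observed ranges and ranges computed from the
        GPS-based orbit; their RMS is one measure of orbit accuracy.
        station_pos: (3,), sat_pos_gps: (N, 3), observed_ranges: (N,)."""
        computed = np.linalg.norm(sat_pos_gps - station_pos, axis=1)
        residuals = observed_ranges - computed
        return residuals, np.sqrt(np.mean(residuals ** 2))
    ```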

  6. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to extract metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR, and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby extracting the bearing bias as a function of frequency
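
    A greatly simplified sketch of one conventional ingredient (not the SEE method itself): estimating a backazimuth from the principal direction of horizontal particle motion in a chosen window, with the 180-degree ambiguity resolved by correlation with the vertical component. Function names and the sign convention are assumptions.

    ```python
    import numpy as np

    def bearing_estimate(z, n, e):
        """Backazimuth (deg from north) from 3-component P-wave particle
        motion in one time window; z, n, e are equal-length traces."""
        # Principal direction of horizontal motion (180-deg ambiguous).
        cov = np.cov(np.vstack([n, e]))
        w, v = np.linalg.eigh(cov)
        pn, pe = v[:, np.argmax(w)]            # dominant eigenvector
        az = np.degrees(np.arctan2(pe, pn))    # clockwise from north
        # Resolve the ambiguity via vertical/radial correlation; the sign
        # convention here is an assumption and depends on instrument polarity.
        radial = n * pn + e * pe
        if np.dot(radial, z) > 0:
            az += 180.0
        return az % 360.0
    ```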

  7. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration

    2016-03-01

    Detection of gravitational waves involves extracting extremely weak signals from noisy data, and their detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence are known to come from solving Einstein's equations numerically without any approximations. However, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms. However, the work of Nitz et al. (2013) demonstrated that there is disagreement between these models. We present a careful follow-up study of the accuracies of different waveform families for spinning black hole-neutron star binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter bias. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.
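
    Faithfulness comparisons of this kind are typically quantified by a noise-weighted overlap between waveforms. A flat-spectrum sketch (a real analysis weights by the detector power spectral density and handles phase maximization more carefully):

    ```python
    import numpy as np

    def match(h1, h2):
        """Time-shift-maximized normalized overlap between two real, equal-length
        waveforms under a flat noise spectrum; mismatch = 1 - match. Taking the
        absolute value also allows a pi phase flip."""
        H1, H2 = np.fft.fft(h1), np.fft.fft(h2)
        corr = np.fft.ifft(H1 * np.conj(H2)).real   # circular cross-correlation
        norm = np.sqrt(np.sum(h1 ** 2) * np.sum(h2 ** 2))
        return np.max(np.abs(corr)) / norm

    # mismatch = 1 - match(h_model, h_numerical_relativity)
    ```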

  8. Robust quantitative scratch assay

    PubMed Central

    Vargas, Andrea; Angeli, Marc; Pastrello, Chiara; McQuaid, Rosanne; Li, Han; Jurisicova, Andrea; Jurisica, Igor

    2016-01-01

    The wound healing assay (or scratch assay) is a technique frequently used to quantify the dependence of cell motility—a central process in tissue repair and evolution of disease—on various treatment conditions. However, processing the resulting data is a laborious task due to its high throughput and variability across images. The Robust Quantitative Scratch Assay (RQSA) algorithm introduced here provides statistical outputs in which migration rates are estimated, cellular behaviour is distinguished, and outliers are identified among groups of unique experimental conditions. Furthermore, the RQSA decreased measurement errors and increased accuracy of the wound boundary at processing times comparable to a previously developed method (TScratch). Availability and implementation: The RQSA is freely available at: http://ophid.utoronto.ca/RQSA/RQSA_Scripts.zip. The image sets used for training and validation and the results are available at: (http://ophid.utoronto.ca/RQSA/trainingSet.zip, http://ophid.utoronto.ca/RQSA/validationSet.zip, http://ophid.utoronto.ca/RQSA/ValidationSetResults.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975.zip, http://ophid.utoronto.ca/RQSA/ValidationSet_H1975Results.zip, http://ophid.utoronto.ca/RQSA/RobustnessSet.zip). Supplementary material is provided with a detailed description of the development of the RQSA. Contact: juris@ai.utoronto.ca Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26722119
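
    As a hedged illustration of "migration rates are estimated" (the RQSA estimator itself is more elaborate; names here are illustrative): the rate can be read off as the slope of wound area against time.

    ```python
    import numpy as np

    def migration_rate(times_h, wound_areas):
        """Area closed per hour, as the negative slope of a least-squares
        line; a robust variant would use a median-based fit instead."""
        slope, _ = np.polyfit(times_h, wound_areas, 1)
        return -slope
    ```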

  9. Making Precise Antenna Reflectors For Millimeter Wavelengths

    NASA Technical Reports Server (NTRS)

    Sharp, G. Richard; Wanhainen, Joyce S.; Ketelsen, Dean A.

    1994-01-01

    In an improved method of fabricating precise, lightweight antenna reflectors for millimeter wavelengths, the required precise contours of the reflecting surfaces are obtained by computer numerically controlled machining of surface layers bonded to lightweight, rigid structures. The achievable precision is greater than that of the older, more expensive fabrication method involving multiple steps of low- and high-temperature molding, in which some accuracy is lost at each step.

  10. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis's underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data- and expert-informed error estimation including uncertainties for both the solution itself and the order of convergence. Our method produces high-quality results for the well-behaved cases, relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
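
    A caricature of the idea in code, assuming a two-parameter error ansatz E(h) = C*h**p; the constraint bounds, the jitter scheme standing in for "varied assumptions", and all names are illustrative, not the paper's implementation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def order_fit(h, err, p_bounds=(0.5, 4.0)):
        """Fit err ~ C * h**p in log space, with bounds on p expressing
        expert judgment about admissible convergence rates."""
        def resid(x):
            logC, p = x
            return np.log(err) - (logC + p * np.log(h))
        sol = least_squares(resid, x0=[0.0, 2.0],
                            bounds=([-np.inf, p_bounds[0]],
                                    [np.inf, p_bounds[1]]))
        return sol.x[1]

    def robust_order(h, f, f_ref, n_variants=50, seed=0):
        """Median of order-of-convergence estimates over perturbed analysis
        assumptions (here: jitter of the reference solution)."""
        rng = np.random.default_rng(seed)
        orders = [order_fit(h, np.abs(f - f_ref * (1 + 1e-6 * rng.standard_normal())))
                  for _ in range(n_variants)]
        return np.median(orders)
    ```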

  11. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and keeps its adjustment even under rough handling.

  12. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and keeps its adjustment even under rough handling.

  13. Precision and accuracy in fluorescent short tandem repeat DNA typing: assessment of benefits imparted by the use of allelic ladders with the AmpF/STR Profiler Plus kit.

    PubMed

    Leclair, Benoît; Frégeau, Chantal J; Bowen, Kathy L; Fourney, Ron M

    2004-03-01

    Base-calling precision of short tandem repeat (STR) allelic bands on dynamic slab-gel electrophoresis systems was evaluated. Data were collected from over 6000 population-database allele peaks generated from 468 population database samples amplified with the AmpF/STR Profiler Plus (PP) kit and electrophoresed on ABD 377 DNA sequencers. Precision was measured by way of standard deviations and was shown to be essentially the same whether using fixed or floating bin genotyping. However, the allelic ladders have proven more sensitive to electrophoretic variations than database samples, which has caused some floating bins of D18S51 to shift on occasion. This observation prompted an investigation of polyacrylamide gel formulations in order to stabilize allelic ladder migration. The results demonstrate that, although alleles comprised in allelic ladders and questioned samples run on the same gel should migrate in an identical manner, this premise needs to be verified for any given electrophoresis platform and gel formulation. We show that the compilation of base-calling data is a very informative and useful tool for assessing the performance stability of dynamic gel electrophoresis systems, stability on which genotyping result quality depends. PMID:15004837

  14. Simulation of agronomic images for an automatic evaluation of crop/ weed discrimination algorithm accuracy

    NASA Astrophysics Data System (ADS)

    Jones, G.; Gée, Ch.; Truchetet, F.

    2007-01-01

    In the context of precision agriculture, we present a robust and automatic method, based on simulated images, for evaluating the efficiency of any crop/weed discrimination algorithm for an inter-row weed infestation rate. To simulate these images, two different steps are required: 1) modeling of a crop field from the spatial distribution of plants (crop and weed); 2) projection of the created field through an optical system to simulate photography. An application is then proposed, investigating the accuracy and robustness of a crop/weed discrimination algorithm combining line detection (Hough transform) and plant discrimination (crop and weeds). The accuracy of the weed infestation rate estimate for each image is calculated by direct comparison with the initial weed infestation rate of the simulated image. It reveals a performance better than 85%.
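
    The per-image accuracy figure is a direct comparison with the known simulation input; a one-line sketch (the function name is illustrative):

    ```python
    def infestation_accuracy(true_rate, estimated_rate):
        """1.0 is perfect; 0.85 corresponds to the 85% performance above."""
        return 1.0 - abs(estimated_rate - true_rate) / true_rate
    ```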

  15. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on the research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of, and future work needed regarding, orbit accuracy. There are several aspects which need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed.

  16. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    PubMed Central

    Prieto, José Carlos; Croux, Christophe; Jiménez, Antonio Ramón

    2009-01-01

    A well-known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems, since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range-based systems do not usually employ robust algorithms, and they fail when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: 1) a high-breakdown-point algorithm to obtain a clean sample, and 2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases. PMID:22408522
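
    A hedged sketch of the two-step flavour of such an estimator for range-based positioning, using SciPy's soft-L1 loss as a stand-in for a true high-breakdown MM first stage; the names, the fixed scale, and the 3-sigma inlier cut are assumptions, not the paper's algorithm.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def robust_position(beacons, ranges, x0, scale):
        """beacons: (m, 3) emitter positions; ranges: (m,) measured distances;
        x0: initial receiver position guess; scale: measurement scale."""
        def resid(x):
            return np.linalg.norm(beacons - x, axis=1) - ranges
        # Step 1: outlier-resistant fit (soft-L1 downweights large residuals).
        s1 = least_squares(resid, x0, loss='soft_l1', f_scale=scale)
        # Step 2: refine on the measurements consistent with step 1.
        keep = np.abs(resid(s1.x)) < 3 * scale
        s2 = least_squares(lambda x: resid(x)[keep], s1.x)
        return s2.x
    ```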

  17. Advances in precision mirror figure metrology (abstract)

    SciTech Connect

    Takacs, P.Z.; Furenlid, K. ); Church, E.L. )

    1992-01-01

    New developments in optical measurement techniques have made it possible to test the surface quality on grazing incidence optics with extreme precision and accuracy. An instrument developed at Brookhaven, the Long Trace Profiler (LTP), measures the figure of large (up to 1 m long) cylindrical aspheres with nanometer accuracy. The LTP optical system is based around a common-path interferometer design belonging to the class of slope measuring interferometers and, as such, it is very robust, stable, and vibration insensitive. A unique error correction technique removes the effect of tilt errors in the optical head as it traverses the air bearing, thus allowing one to accurately measure the absolute surface profile and radius of curvature. This is of critical importance to the manufacture of long-radius spherical optics used in high-resolution soft x-ray monochromators and in the testing of mirror bending systems. This talk will review the principle of operation of the LTP, probe the factors limiting the performance of the system, and will examine the current state of the art in synchrotron radiation mirror manufacturing quality (as viewed by our metrology techniques). This research was supported by the U.S. Department of Energy Contract No. DE-AC02-76CH00016.
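
    Since the LTP is a slope-measuring instrument, the figure (height) profile is the running integral of the measured slopes. A minimal sketch with the trapezoid rule, with only the piston term removed (tilt removal would subtract a best-fit line); names are illustrative.

    ```python
    import numpy as np

    def height_from_slopes(x, slopes):
        """Integrate measured surface slopes (rad) over positions x (m)
        to a height profile (m), cumulative trapezoid rule."""
        dx = np.diff(x)
        h = np.concatenate([[0.0],
                            np.cumsum(0.5 * (slopes[1:] + slopes[:-1]) * dx)])
        return h - h.mean()   # remove piston
    ```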

  18. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  19. FTRAC--A robust fluoroscope tracking fiducial

    SciTech Connect

    Jain, Ameet Kumar; Mustafa, Tabish; Zhou, Yu; Burdette, Clif; Chirikjian, Gregory S.; Fichtinger, Gabor

    2005-10-15

    C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct three-dimensional (3D) information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the x-ray image in 3D space. Optical/magnetic trackers tend to be prohibitively expensive, intrusive and cumbersome in many applications. We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of ellipses, lines, and points. This is an improvement over contemporary fiducials, which use only points. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A nonlinear optimizer can rapidly compute the pose of the fiducial from this image. The current embodiment has salient attributes: small dimensions (3x3x5 cm); it need not be close to the anatomy of interest; and it is accurately segmentable. We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery in phantom experiments had an accuracy of 0.56 mm in translation and 0.33 deg. in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. The method offers accuracies similar to commercial tracking systems, and appears to be sufficiently robust for intraoperative quantitative C-arm fluoroscopy. Simulation experiments indicate that the size can be further reduced to 1x1x2 cm, with only a marginal drop in accuracy.
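
    A sketch of the pose-recovery step in its simplest form, using only point features and an ideal pinhole camera (the actual FTRAC fiducial also constrains ellipses and lines); the Rodrigues parameterization and all names are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def recover_pose(fiducial_pts, image_pts, focal, x0):
        """6-DOF pose [rotvec(3), translation(3)] minimizing point
        reprojection error. fiducial_pts: (N, 3); image_pts: (N, 2)."""
        def rodrigues(r):
            th = np.linalg.norm(r)
            if th < 1e-12:
                return np.eye(3)
            k = r / th
            K = np.array([[0, -k[2], k[1]],
                          [k[2], 0, -k[0]],
                          [-k[1], k[0], 0]])
            return np.eye(3) + np.sin(th) * K + (1 - np.cos(th)) * (K @ K)

        def resid(p):
            R, t = rodrigues(p[:3]), p[3:]
            cam = fiducial_pts @ R.T + t          # points in camera frame
            proj = focal * cam[:, :2] / cam[:, 2:3]  # pinhole projection
            return (proj - image_pts).ravel()

        return least_squares(resid, x0).x
    ```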

  20. Precise Measurement for Manufacturing

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A metrology instrument known as PhaseCam supports a wide range of applications, from testing large optics to controlling factory production processes. This dynamic interferometer system enables precise measurement of three-dimensional surfaces in the manufacturing industry, delivering speed and high-resolution accuracy in even the most challenging environments. Compact and reliable, PhaseCam enables users to make interferometric measurements right on the factory floor. The system can be configured for many different applications, including mirror phasing, vacuum/cryogenic testing, motion/modal analysis, and flow visualization.

  1. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  2. Robust snapshot interferometric spectropolarimetry.

    PubMed

    Kim, Daesuk; Seo, Yoonho; Yoon, Yonghee; Dembele, Vamara; Yoon, Jae Woong; Lee, Kyu Jin; Magnusson, Robert

    2016-05-15

    This Letter describes a Stokes vector measurement method based on a snapshot interferometric common-path spectropolarimeter. The proposed scheme, which employs an interferometric polarization-modulation module, can extract the spectral polarimetric parameters Ψ(k) and Δ(k) of a transmissive anisotropic object, from which an accurate Stokes vector can be calculated in the spectral domain. It is inherently robust to variations in the object's 3D pose, since it is designed so that the measured object can be placed outside the interferometric module. Experiments are conducted to verify the feasibility of the proposed system. The proposed snapshot scheme enables us to extract the spectral Stokes vector of a transmissive anisotropic object within tens of milliseconds with high accuracy. PMID:27176992
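
    For reference, one common mapping from the polarimetric parameters to a normalized Stokes vector, hedged because sign conventions and the assumed input polarization state vary between references and may differ from this Letter's:

    ```python
    import numpy as np

    def stokes_from_psi_delta(psi, delta):
        """Normalized Stokes vector [S0, S1, S2, S3] from Psi, Delta (rad),
        under one common convention; signs are convention-dependent."""
        return np.array([1.0,
                         -np.cos(2 * psi),
                         np.sin(2 * psi) * np.cos(delta),
                         -np.sin(2 * psi) * np.sin(delta)])
    ```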

  3. Robust Vertex Classification.

    PubMed

    Chen, Li; Shen, Cencheng; Vogelstein, Joshua T; Priebe, Carey E

    2016-03-01

    For random graphs distributed according to stochastic blockmodels, a special case of latent position graphs, adjacency spectral embedding followed by appropriate vertex classification is asymptotically Bayes optimal; but this approach requires knowledge of and critically depends on the model dimension. In this paper, we propose a sparse representation vertex classifier which does not require information about the model dimension. This classifier represents a test vertex as a sparse combination of the vertices in the training set and uses the recovered coefficients to classify the test vertex. We prove consistency of our proposed classifier for stochastic blockmodels, and demonstrate that the sparse representation classifier can predict vertex labels with higher accuracy than adjacency spectral embedding approaches via both simulation studies and real data experiments. Our results demonstrate the robustness and effectiveness of our proposed vertex classifier when the model dimension is unknown. PMID:26340770
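
    A compact sketch of sparse representation classification in this spirit, with scikit-learn's Lasso standing in for whatever sparse solver the paper uses; the names and penalty value are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_predict(train_X, train_y, test_x, alpha=0.01):
        """Code the test vertex's embedding as a sparse combination of
        training embeddings (rows of train_X), then assign the class whose
        coefficients best reconstruct it."""
        lasso = Lasso(alpha=alpha, fit_intercept=False)
        lasso.fit(train_X.T, test_x)       # columns are training vertices
        coef = lasso.coef_
        best, best_err = None, np.inf
        for c in np.unique(train_y):
            mask = train_y == c
            recon = train_X[mask].T @ coef[mask]
            err = np.linalg.norm(test_x - recon)
            if err < best_err:
                best, best_err = c, err
        return best
    ```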

  4. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make driving decisions, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a lidar, because it provides highly dense point clouds with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the output will be utilized to generate a highly precise road map.
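
    A minimal sketch of the classification step with scikit-learn; the three-feature stand-in data replaces real normal-based descriptors, so everything here except the SVM-with-RBF-kernel choice is an assumption.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    # Stand-in per-point features, e.g. [normal_z, planarity, height]; real
    # values would come from local surface-normal estimation on the cloud.
    X = rng.normal(size=(1000, 3))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)   # stand-in class labels

    clf = SVC(kernel='rbf', C=10.0, gamma='scale')
    clf.fit(X[:800], y[:800])
    # Final step: compare predictions against held-out reference labels.
    print(accuracy_score(y[800:], clf.predict(X[800:])))
    ```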

  5. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial, unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412
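
    The error curves described here reduce, pointwise, to angular separations between measured and reference directions expressed in a common frame; a sketch (names illustrative):

    ```python
    import numpy as np

    def pointing_error_deg(v_measured, v_reference):
        """Angular separation (deg) between a measured direction vector and
        its catalog reference, both in the same coordinate frame."""
        cosang = np.dot(v_measured, v_reference) / (
            np.linalg.norm(v_measured) * np.linalg.norm(v_reference))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    ```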

  6. An accuracy measurement method for star trackers based on direct astronomic observation

    NASA Astrophysics Data System (ADS)

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial, unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers.

  7. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial, unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412

  8. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270
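
    For orientation, the textbook Cassie-Baxter relation for a composite solid-liquid-air interface (background only, not the paper's four design parameters): cos(theta*) = f_s (cos(theta) + 1) - 1, where f_s is the wetted solid fraction and theta the flat-surface contact angle.

    ```python
    import numpy as np

    def cassie_baxter_angle(theta_flat_deg, f_solid):
        """Apparent contact angle (deg) on a composite interface; f_solid in
        [0, 1]. Smaller f_solid (more trapped air) raises the angle."""
        th = np.radians(theta_flat_deg)
        return np.degrees(np.arccos(f_solid * (np.cos(th) + 1.0) - 1.0))
    ```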

  9. Robust efficient video fingerprinting

    NASA Astrophysics Data System (ADS)

    Puri, Manika; Lubin, Jeffrey

    2009-02-01

    We have developed a video fingerprinting system with robustness and efficiency as the primary and secondary design criteria. In extensive testing, the system has shown robustness to cropping, letter-boxing, sub-titling, blur, drastic compression, frame rate changes, size changes and color changes, as well as to the geometric distortions often associated with camcorder capture in cinema settings. Efficiency is afforded by a novel two-stage detection process in which a fast matching process first computes a number of likely candidates, which are then passed to a second slower process that computes the overall best match with minimal false alarm probability. One key component of the algorithm is a maximally stable volume computation - a three-dimensional generalization of maximally stable extremal regions - that provides a content-centric coordinate system for subsequent hash function computation, independent of any affine transformation or extensive cropping. Other key features include an efficient bin-based polling strategy for initial candidate selection, and a final SIFT feature-based computation for final verification. We describe the algorithm and its performance, and then discuss additional modifications that can provide further improvement to efficiency and accuracy.

  10. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The emphasis of this grant was focused on precision ephemerides for the Global Positioning System (GPS) satellites for geodynamics applications. During the period of this grant, major activities were in the areas of thermal force modeling, numerical integration accuracy improvement for eclipsing satellites, analysis of GIG '91 campaign data, and the Southwest Pacific campaign data analysis.

  11. Precision orbit computations for Starlette

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Williamson, R. G.

    1976-01-01

    The Starlette satellite, launched in February 1975 by the French Centre National d'Etudes Spatiales, was designed to minimize the effects of nongravitational forces and to obtain the highest possible accuracy for laser range measurements. Analyses of the first four months of global laser tracking data confirmed the stability of the orbit and the precision to which the satellite's position is established.

  12. Optimized robust plasma sampling for glomerular filtration rate studies.

    PubMed

    Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L

    2012-09-01

    In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement. PMID:22825040
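
    A sketch of the underlying computation, assuming a biexponential clearance model fitted to sample times in minutes and GFR taken as dose over the analytically integrated area under the curve; the starting values and names are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(t, a1, k1, a2, k2):
        return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

    def gfr_from_samples(t_min, conc, dose):
        """Fit the biexponential curve to plasma samples and return
        GFR = dose / AUC, with AUC integrated from 0 to infinity."""
        p, _ = curve_fit(biexp, t_min, conc,
                         p0=[conc[0], 0.1, conc[-1], 0.01],
                         bounds=(0, np.inf))
        a1, k1, a2, k2 = p
        auc = a1 / k1 + a2 / k2   # analytic integral of the fitted curve
        return dose / auc
    ```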

  13. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, that typically span O (10 ) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q ,χBH)=(7 ,±0.4 ),(7 ,±0.6 ) , and (5 ,-0.9 ). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH≥0.5 or χBH≤-0.3 , with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit

  14. Robust Optimization of Alginate-Carbopol 940 Bead Formulations

    PubMed Central

    López-Cacho, J. M.; González-R, Pedro L.; Talero, B.; Rabasco, A. M.; González-Rodríguez, M. L.

    2012-01-01

    The formulation process is a complex activity that sometimes involves making decisions about parameters or variables to obtain the best results in a context of high variability or uncertainty. Robust optimization tools can therefore be very useful for obtaining high-quality formulations. This paper proposes the optimization of different responses through the robust Taguchi method. Each response was evaluated as a noise variable, allowing the application of Taguchi techniques to obtain a response from the point of view of the signal-to-noise ratio. An L18 Taguchi orthogonal array design was employed to investigate the effect of eight independent variables involved in the formulation of alginate-Carbopol beads. The responses evaluated were related to the drug release profile from the beads (t50% and AUC), swelling performance, encapsulation efficiency, and shape and size parameters. Confirmation tests to verify the prediction model were carried out, and the results obtained were very similar to those predicted for every profile. The results reveal that robust optimization is a very useful approach that allows greater precision and accuracy around the desired value. PMID:22645438
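
    For the signal-to-noise ratios at the heart of the Taguchi analysis, a sketch of the nominal-the-best form; the paper does not specify which S/N variant applies to each response, so this choice is an assumption.

    ```python
    import numpy as np

    def sn_nominal_the_best(replicates):
        """Taguchi S/N ratio, nominal-the-best: 10*log10(mean^2 / variance),
        computed over the replicate responses of one orthogonal-array run."""
        y = np.asarray(replicates, dtype=float)
        return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))
    ```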

  15. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p < 0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors, was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
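
    The agreement statistics quoted above are standard; a sketch with hypothetical category judgments (an ordinal exposure-category scale is assumed):

    ```python
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical exposure-category judgments for the same set of tasks,
    # scored against the true category for each task.
    judged = [2, 1, 3, 0, 2, 4, 1, 2]
    truth  = [2, 2, 3, 1, 2, 4, 1, 3]

    print(cohen_kappa_score(judged, truth))                    # unweighted
    print(cohen_kappa_score(judged, truth, weights='linear'))  # weighted
    ```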

  16. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate quantum-electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser-spectroscopic measurements in atomic helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and then to determine the fine-structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, determination of the He isotopes' nuclear structure and, in particular, a measurement of the nuclear charge radius are performed by using hyperfine-structure and isotope-shift measurements.

  17. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K), an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
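
    A sketch of the fit described, assuming the two-parameter form ln P = A - B/T and ozone's molar mass of 48 g/mol; the latent heat then follows from the slope as L = B*R/M. Names are illustrative.

    ```python
    import numpy as np

    R = 8.314      # gas constant, J mol^-1 K^-1
    M_O3 = 48.0    # molar mass of ozone, g mol^-1

    def latent_heat_cal_per_g(T_K, P_torr):
        """Least-squares fit of ln P = A - B/T; returns L in cal/g."""
        slope, intercept = np.polyfit(1.0 / np.asarray(T_K),
                                      np.log(P_torr), 1)
        B = -slope                     # ln P = A - B/T, so the slope is -B
        return B * R / M_O3 / 4.184    # J/g -> cal/g
    ```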

  18. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
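
    The same mechanism in a smaller, verifiable setting: cheap corrections are computed in float32 while residuals are evaluated in float64, so the converged accuracy is set by the double-precision residual, not the reduced-precision work. This is classic mixed-precision iterative refinement for a linear system, offered as an analogy rather than the paper's SDC scheme.

    ```python
    import numpy as np

    def mixed_precision_solve(A, b, sweeps=5):
        """Iteratively refine a float32 solve using float64 residuals."""
        A32 = A.astype(np.float32)
        x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
        for _ in range(sweeps):
            r = b - A @ x                                    # full precision
            dx = np.linalg.solve(A32, r.astype(np.float32))  # cheap sweep
            x += dx.astype(np.float64)
        return x
    ```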

  19. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
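
    A cup anemometer's transfer function is the linear map from pulse frequency to wind speed, and percent differences like those quoted above come straight from applying two different calibrations to the same frequency. The slopes and offsets below are invented for illustration only, not the calibrations discussed in the paper.

    ```python
    def speed(freq_hz, slope, offset):
        """Wind speed (m/s) from pulse frequency via a linear calibration."""
        return slope * freq_hz + offset

    f = 40.0                        # example anemometer output, Hz
    v_a = speed(f, 0.765, 0.35)     # calibration A (illustrative numbers)
    v_b = speed(f, 0.800, 0.28)     # calibration B (illustrative numbers)
    print(100 * (v_b - v_a) / v_a)  # percent difference between outputs
    ```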

  20. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. ²³³U-²³⁶U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with the interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing ²³⁵U-²⁰⁷Pb with ²³⁸U-²⁰⁶Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the ²³⁵U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link between petrology, microtextures, microchemistry and the isotope record, but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times the self-assessed uncertainty. Systematic errors may have analytical and petrological causes. Metrological traceability is not yet implemented (substantial advances may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both the dynamic range and the collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. Finally, sample size reduction is often associated with reduced clean-up time to increase the sample/blank ratio; this may be self-defeating, as "dry blanks" [4] represent neither the isotopic composition nor the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains exacerbate background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never considered petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. The petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed, 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719

  1. Factors affecting accuracy and precision in PET volume imaging

    SciTech Connect

    Karp, J.S.; Daube-Witherspoon, M.E.; Muehllehner, G. )

    1991-03-01

    Volume imaging positron emission tomographic (PET) scanners with no septa and a large axial acceptance angle offer several advantages over multiring PET scanners. A volume imaging scanner combines high sensitivity with fine axial sampling and spatial resolution. The fine axial sampling minimizes the partial volume effect, which affects the measured concentration of an object. Even if the size of an object is large compared to the slice spacing in a multiring scanner, significant variation in the concentration is measured as a function of the axial position of the object. With a volume imaging scanner, it is necessary to use a three-dimensional reconstruction algorithm in order to avoid variations in the axial resolution as a function of the distance from the center of the scanner. In addition, good energy resolution is needed in order to use a high energy threshold to reduce the coincident scattered radiation.

  2. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  3. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase, or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and the underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its limited ability to image in detail the three-dimensional structure of the mantle or to determine with certainty the strengths of anomalies. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are speculative, cannot be verified, and have viable, less-astonishing alternatives. Our knowledge of the size, distribution, and location of protoliths, of the metasomatism of magma sources, of the nature of the partial-melting and melt-extraction process, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more in presenting interpretations so that the viability of the models can be assessed more reliably.

  4. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.

  5. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to the total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with the percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in the assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.

  6. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Az + K(y − Cz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
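
    The robustness asymmetry is visible directly in the error dynamics: for the closed-loop form the error e = x − z obeys ė = (A − KC)e, so its eigenvalues move smoothly under small gain perturbations, whereas the open-loop form has no such self-correction once M drifts from A − KC. A sketch with an arbitrary stable example system (all matrices are invented for illustration):

    ```python
    import numpy as np

    A = np.array([[0.0, 1.0], [-2.0, -0.4]])   # example plant dynamics
    C = np.array([[1.0, 0.0]])                 # measured output
    K = np.array([[1.5], [2.0]])               # observer gain

    eig_nominal = np.linalg.eigvals(A - K @ C)
    eig_perturbed = np.linalg.eigvals(A - (1.05 * K) @ C)  # 5% gain error
    print(eig_nominal, eig_perturbed)  # both sets remain in the left half-plane
    ```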

  7. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on the delivery accuracy at the meter fix. The 10-seconds-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high and low altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
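
    The three delay-display resolutions compared in the study differ only in how the raw TBFM delay is quantized; a small sketch (with a hypothetical delay value) makes the distinction concrete.

      def dct_10s_rounded(delay_s):
          """Delay countdown timer value rounded to the nearest 10 seconds."""
          return int(round(delay_s / 10.0)) * 10

      def dct_1min_rounded(delay_s):
          """Delay rounded to the nearest whole minute, in seconds."""
          return int(round(delay_s / 60.0)) * 60

      def dct_1min_truncated(delay_s):
          """Delay truncated (floored) to the whole minute, in seconds."""
          return int(delay_s // 60) * 60

      delay = 155.0  # hypothetical raw delay, seconds
      print(dct_10s_rounded(delay),     # 160
            dct_1min_rounded(delay),    # 180
            dct_1min_truncated(delay))  # 120: truncation can hide up to a minute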

  8. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and show that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrade (`resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these `flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the `landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  9. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit, using the LP150Q lunar gravity model.
    - Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit.
    - Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission.
    - Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy.
    - Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating analytic and predicted attitude modeling to improve full-Sun prediction accuracy.
    - Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  10. Precision Polarimetry for Cold Neutrons

    NASA Astrophysics Data System (ADS)

    Barron-Palos, Libertad; Bowman, J. David; Chupp, Timothy E.; Crawford, Christopher; Danagoulian, Areg; Gentile, Thomas R.; Jones, Gordon; Klein, Andreas; Penttila, Seppo I.; Salas-Bacci, Americo; Sharma, Monisha; Wilburn, W. Scott

    2007-10-01

    The abBA and PANDA experiments, currently under development, aim to measure the correlation coefficients in polarized free neutron beta decay at the FnPB at the SNS. The polarization of the neutron beam, polarized with a ^3He spin filter, has to be known with high precision in order to achieve the goal accuracy of these experiments. In the NPDGamma experiment, where a ^3He spin filter was used, it was observed that backgrounds play an important role in the precision to which the polarization can be determined. An experiment that focuses on the reduction of background sources, to establish techniques and find the upper limit for the polarization accuracy with these spin filters, is currently in progress at LANSCE. A description of the measurement and results will be presented.

  11. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, we also provide an alternate semi-automatic fail-safe method, which extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  12. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  13. Mechanisms for Robust Cognition.

    PubMed

    Walsh, Matthew M; Gluck, Kevin A

    2015-08-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within variable environments. This raises the question, how do cognitive systems achieve similarly high degrees of robustness? The aim of this study was to identify a set of mechanisms that enhance robustness in cognitive systems. We identify three mechanisms that enhance robustness in biological and engineered systems: system control, redundancy, and adaptability. After surveying the psychological literature for evidence of these mechanisms, we provide simulations illustrating how each contributes to robust cognition in a different psychological domain: psychomotor vigilance, semantic memory, and strategy selection. These simulations highlight features of a mathematical approach for quantifying robustness, and they provide concrete examples of mechanisms for robust cognition. PMID:25352094

  14. Central difference predictive filter for attitude determination with low precision sensors and model errors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Chen, Xiaoqian; Misra, Arun K.

    2014-12-01

    Attitude determination is one of the key technologies for the Attitude Determination and Control System (ADCS) of a satellite. However, serious model errors may exist which will affect the estimation accuracy of the ADCS, especially for a small satellite with low precision sensors. In this paper, a central difference predictive filter (CDPF) is proposed for attitude determination of small satellites with model errors and low precision sensors. The new filter is derived by introducing Stirling's polynomial interpolation formula to extend the traditional predictive filter (PF). It is shown that the proposed filter has higher accuracy for the estimation of system states than the traditional PF. It is known that the unscented Kalman filter (UKF) has also been used in the ADCS of small satellites with low precision sensors. In order to evaluate the performance of the proposed filter, the UKF is also employed to compare it with the CDPF. Numerical simulations show that the proposed CDPF is more effective and robust in dealing with model errors and low precision sensors compared with the UKF or traditional PF.
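
    The numerical idea behind the filter, replacing one-sided Taylor expansions with Stirling's interpolation, can be seen in miniature by comparing difference approximations to a derivative (a schematic illustration of the accuracy gain, not the filter itself).

      import numpy as np

      f, x, h = np.sin, 1.0, 1e-4
      forward = (f(x + h) - f(x)) / h              # one-sided: O(h) error
      central = (f(x + h) - f(x - h)) / (2.0 * h)  # Stirling/central: O(h^2) error
      exact = np.cos(x)
      print(f"forward error: {abs(forward - exact):.1e}")  # ~4e-05
      print(f"central error: {abs(central - exact):.1e}")  # ~9e-10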

  15. A robust motion estimation system for minimal invasive laparoscopy

    NASA Astrophysics Data System (ADS)

    Marcinczak, Jan Marek; von Öhsen, Udo; Grigat, Rolf-Rainer

    2012-02-01

    Laparoscopy is a reliable imaging method to examine the liver. However, due to the limited field of view, a lot of experience is required from the surgeon to interpret the observed anatomy. Reconstruction of organ surfaces provides valuable additional information to the surgeon for a reliable diagnosis. Without an additional external tracking system the structure can be recovered from feature correspondences between different frames. In laparoscopic images blurred frames, specular reflections and inhomogeneous illumination make feature tracking a challenging task. We propose an ego-motion estimation system for minimally invasive laparoscopy that can cope with specular reflection, inhomogeneous illumination and blurred frames. To obtain robust feature correspondences, the approach combines SIFT and specular reflection segmentation with a multi-frame tracking scheme. The calibrated five-point algorithm is used with the MSAC robust estimator to compute the motion of the endoscope from multi-frame correspondences. The algorithm is evaluated using endoscopic videos of a phantom. The small incisions and the rigid endoscope limit the motion in minimally invasive laparoscopy. These limitations are considered in our evaluation and are used to analyze the accuracy of pose estimation that can be achieved by our approach. The endoscope is moved by a robotic system and the ground truth motion is recorded. The evaluation on typical endoscopic motion gives precise results and demonstrates the practicability of the proposed pose estimation system.
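
    A minimal sketch of the motion-recovery core, assuming OpenCV and already-matched, reflection-free feature points; OpenCV's RANSAC-based essential matrix estimator stands in here for the paper's calibrated five-point algorithm with the MSAC estimator.

      import cv2
      import numpy as np

      def estimate_motion(pts1, pts2, K):
          """Relative endoscope rotation R and translation direction t from
          matched image points of two frames (Nx2 float arrays), given the
          camera calibration matrix K."""
          E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                            method=cv2.RANSAC,
                                            prob=0.999, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
          return R, t, inliers

      # Hypothetical calibration for a 640x480 endoscope image.
      K = np.array([[500.0, 0.0, 320.0],
                    [0.0, 500.0, 240.0],
                    [0.0, 0.0, 1.0]])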

  16. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  17. Electrosurgery with cellular precision.

    PubMed

    Palanker, Daniel V; Vankov, Alexander; Huie, Philip

    2008-02-01

    Electrosurgery, one of the most often used surgical tools, is a robust but somewhat crude technology that has changed surprisingly little since its invention almost a century ago. Continuous radiofrequency is still used for tissue cutting, with thermal damage extending to hundreds of micrometers. In contrast, lasers, developed 70 years later, have been constantly perfected, and laser-tissue interactions explored in great detail, which has allowed tissue ablation with cellular precision in many laser applications. We discuss mechanisms of tissue damage by electric field, and demonstrate that electrosurgery with properly optimized waveforms and microelectrodes can rival many advanced lasers. Pulsed electric waveforms with burst durations ranging from 10 to 100 µs applied via insulated planar electrodes with 12 µm wide exposed edges produced plasma-mediated dissection of tissues with the collateral damage zone ranging from 2 to 10 µm. The length of the electrodes can vary from micrometers to centimeters, and all types of soft tissues, from membranes to cartilage and skin, could be dissected in liquid medium and in a dry field. This technology may allow for major improvements in outcomes of current surgical procedures and the development of much more refined surgical techniques. PMID:18270030

  18. Turning process monitoring using a robust and miniaturized non-incremental interferometric distance sensor

    NASA Astrophysics Data System (ADS)

    Günther, P.; Dreier, F.; Pfister, T.; Czarske, J.

    2011-05-01

    In-process shape measurements of rotating objects such as turning parts at a metal working lathe are of great importance for monitoring production processes or to enable zero-error production. Therefore, contactless and compact sensors with high temporal resolution as well as high precision are necessary. Furthermore, robust sensors are required which withstand the rough ambient conditions of a production environment. Thus, we developed a miniaturized and robust non-incremental fiber-optic distance sensor with dimensions of only 30 × 40 × 90 mm³ which can be attached directly adjacent to the turning tool bit of a metal working lathe and allows precise in-process 3D shape measurements of turning parts. In this contribution we present the results of in-process shape measurements during the turning process at a metal working lathe using this miniaturized interferometric distance sensor. The absolute radius of the turning workpiece can be determined with micron precision. To verify the accuracy of the measurement results, comparative measurements with tactile sensors have to be performed.

  19. Simple, accurate, and precise measurements of thermal diffusivity in liquids using a thermal-wave cavity

    NASA Astrophysics Data System (ADS)

    Balderas-López, J. A.; Mandelis, A.

    2001-06-01

    A simple methodology for the direct measurement of the thermal wavelength using a thermal-wave cavity, and its application to the evaluation of the thermal diffusivity of liquids is described. The simplicity and robustness of this technique lie in its relative measurement features for both the thermal-wave phase and cavity length, thus eliminating the need for taking into account difficult-to-quantify and time-consuming instrumental phase shifts. Two liquid samples were used: distilled water and ethylene glycol. Excellent agreement was found with reported results in the literature. The accuracy of the thermal diffusivity measurements using the new methodology originates in the use of only difference measurements in the thermal-wave phase and cavity length. Measurement precision is directly related to the corresponding precision on the measurement of the thermal wavelength.
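
    The evaluation reduces to a straight-line fit: for modulation frequency f the thermal-wave phase varies with cavity length L as phi = phi0 - L/mu, with thermal diffusion length mu = sqrt(alpha/(pi f)), so the slope of a phase-versus-length scan gives the diffusivity alpha = pi f mu^2. A sketch with synthetic data (illustrative numbers, not the paper's measurements):

      import numpy as np

      f = 10.0              # modulation frequency, Hz
      alpha_true = 1.43e-7  # thermal diffusivity of water, m^2/s
      mu = np.sqrt(alpha_true / (np.pi * f))  # thermal diffusion length, m

      # Synthetic relative phase vs. relative cavity length; only differences
      # are used, so instrumental phase offsets cancel as in the paper.
      rng = np.random.default_rng(0)
      L = np.linspace(0.0, 3e-4, 20)                        # cavity steps, m
      phase = -L / mu + 0.02 * rng.standard_normal(L.size)  # rad, with noise

      slope, _ = np.polyfit(L, phase, 1)  # slope = -1/mu
      alpha_est = np.pi * f / slope**2    # alpha = pi * f * mu^2
      print(f"estimated alpha = {alpha_est:.3e} m^2/s")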

  20. Contributions of Satellite Laser Ranging to the Precise Orbit Determination of Low Earth Orbiters

    NASA Astrophysics Data System (ADS)

    Wirnsberger, H.; Krauss, S.; Baur, O.

    2014-11-01

    Space-based monitoring and modeling of the system Earth requires precise knowledge of the orbits of artificial satellites. In this framework, Satellite Laser Ranging (SLR) has for decades contributed high measurement accuracy and robust tracking data to precise orbit determination. One essential role of SLR tracking is the external validation of orbit solutions derived from Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS). This valuable task of external validation is performed by comparing computed ranges based on orbit solutions with unambiguous SLR tracking data (observed ranges). Apart from validation, extension of the existing SLR network by passive antennas in combination with multistatic observations provides improvements in orbit determination processes against the background of sparse tracking data. Conceptually, these multistatic observations refer to the tracking of a spacecraft from an active SLR station and the detection of the diffusely reflected photons from the spacecraft at one or more passive stations.

  1. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results: the discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency, not at its beginning. PMID:24706821
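
    A schematic of the core loop only, not the published algorithm: starting from random labels, a Monte Carlo search keeps any relabeling that raises cross-validated predictive accuracy, so the surviving label structure reflects the data's own topology (scikit-learn and a k-nearest-neighbor classifier are assumed here).

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.neighbors import KNeighborsClassifier

      def accuracy_maximization(X, n_classes=2, n_iter=200, seed=0):
          """Monte Carlo maximization of cross-validated predictive accuracy."""
          rng = np.random.default_rng(seed)
          labels = rng.integers(n_classes, size=len(X))
          clf = KNeighborsClassifier(n_neighbors=5)
          best = cross_val_score(clf, X, labels, cv=5).mean()
          for _ in range(n_iter):
              proposal = labels.copy()
              proposal[rng.integers(len(X))] = rng.integers(n_classes)
              if np.bincount(proposal, minlength=n_classes).min() < 5:
                  continue  # keep every class large enough for 5-fold CV
              score = cross_val_score(clf, X, proposal, cv=5).mean()
              if score >= best:  # keep accuracy-raising relabelings
                  labels, best = proposal, score
          return labels, best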

  2. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  3. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

    Several concepts and results in robust adaptive control are discussed; the presentation is organized in three parts. The first part surveys existing algorithms: different formulations of the problem and the theoretical solutions that have been suggested are reviewed. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part, promising new areas for future research, which combine different approaches currently known, are suggested.

  4. Construction concepts for precision segmented reflectors

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Withnell, Peter R.

    1993-01-01

    Three construction concepts for deployable precision segmented reflectors are presented. The designs produce reflectors with very high surface accuracies and diameters three to five times the width of the launch vehicle shroud. Of primary importance is the reliability of both the deployment process and the reflector operation. This paper is conceptual in nature, and uses these criteria to present beneficial design concepts for deployable precision segmented reflectors.

  5. High-precision arithmetic in mathematical physics

    DOE PAGESBeta

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
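
    A minimal illustration of the point using Python's standard decimal module: an expression whose true value is 1 collapses to 0 in IEEE 64-bit arithmetic, but is recovered once the working precision exceeds the cancellation.

      from decimal import Decimal, getcontext

      # IEEE 64-bit: the added 1.0 is rounded away, then cancelled entirely.
      print((1e16 + 1.0) - 1e16)  # -> 0.0

      # 50-digit working precision recovers the exact answer.
      getcontext().prec = 50
      print((Decimal(10)**16 + 1) - Decimal(10)**16)  # -> 1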

  6. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  7. The Magsat precision vector magnetometer

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1980-01-01

    This paper examines the Magsat precision vector magnetometer, which is designed to measure projections of the ambient field in three orthogonal directions. The system contains a highly stable and linear triaxial fluxgate magnetometer with a dynamic range of ±2000 nT (1 nT = 10⁻⁹ Wb/m²). The magnetometer electronics, analog-to-digital converter, and digitally controlled current sources are implemented with redundant designs to avoid a loss of data in case of failures. Measurements are carried out with an accuracy of ±1 part in 64,000 in magnitude and 5 arcsec in orientation (1 arcsec = 0.00028 deg).

  8. Precise Countersinking Tool

    NASA Technical Reports Server (NTRS)

    Jenkins, Eric S.; Smith, William N.

    1992-01-01

    Tool countersinks holes precisely with only portable drill; does not require costly machine tool. Replaceable pilot stub aligns axis of tool with centerline of hole. Ensures precise cut even with imprecise drill. Designed for relatively low cutting speeds.

  9. Accuracy Studies of a Magnetometer-Only Attitude-and-Rate-Determination System

    NASA Technical Reports Server (NTRS)

    Challa, M. (Editor); Wheeler, C. (Editor)

    1996-01-01

    A personal computer based system was recently prototyped that uses measurements from a three-axis magnetometer (TAM) to estimate the attitude and rates of a spacecraft using no a priori knowledge of the spacecraft's state. Past studies using in-flight data from the Solar, Anomalous, and Magnetospheric Particles Explorer focused on the robustness of the system and demonstrated that attitude and rate estimates could be obtained accurately to 1.5 degrees (deg) and 0.01 deg per second (deg/sec), respectively, despite limitations in the data and in the accuracies of the truth models. This paper studies the accuracy of the Kalman filter in the system using several orbits of in-flight Earth Radiation Budget Satellite (ERBS) data and attitude and rate truth models obtained from high precision sensors to demonstrate the practical capabilities. This paper shows the following: using telemetered TAM data, attitude accuracies of 0.2 to 0.4 deg and rate accuracies of 0.002 to 0.005 deg/sec (within ERBS attitude control requirements of 1 deg and 0.0005 deg/sec) can be obtained with minimal tuning of the filter; replacing the TAM data in the telemetry with simulated TAM data yields corresponding accuracies of 0.1 to 0.2 deg and 0.002 to 0.005 deg/sec, thus demonstrating that the filter's accuracy can be significantly enhanced by further calibrating the TAM. Factors affecting the filter's accuracy and techniques for tuning the system's Kalman filter are also presented.

  10. "Precision" drug development?

    PubMed

    Woodcock, J

    2016-02-01

    The concept of precision medicine has entered broad public consciousness, spurred by a string of targeted drug approvals, highlighted by the availability of personal gene sequences, and accompanied by some remarkable claims about the future of medicine. It is likely that precision medicines will require precision drug development programs. What might such programs look like? PMID:26331240

  11. Precision agricultural systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture is a new farming practice that has been developing since the late 1980s. It has been variously referred to as precision farming, prescription farming, and site-specific crop management, to name but a few. There are numerous definitions for precision agriculture, but the central concept...

  12. Robust fault detection and isolation in stochastic systems

    NASA Astrophysics Data System (ADS)

    George, Jemin

    2012-07-01

    This article outlines the formulation of a robust fault detection and isolation (FDI) scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves estimating sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the proposed robust FDI system.

  13. Ruggedness and robustness testing.

    PubMed

    Dejaegher, Bieke; Heyden, Yvan Vander

    2007-07-27

    Due to strict regulatory requirements, especially in pharmaceutical analysis, analysis results of acceptable quality must be reported. Thus, a proper validation of the measurement method is required. In this context, ruggedness and robustness testing is becoming increasingly important. In this review, the definitions of ruggedness and robustness are given, followed by a short explanation of the different approaches applied to examine the ruggedness or robustness of an analytical method. Then, case studies describing ruggedness or robustness tests of high-performance liquid chromatographic (HPLC), capillary electrophoretic (CE), gas chromatographic (GC), supercritical fluid chromatographic (SFC), and ultra-performance liquid chromatographic (UPLC) assay methods are critically reviewed and discussed. Mainly publications of the last 10 years are considered. PMID:17379230

  14. Approaches to robustness

    NASA Astrophysics Data System (ADS)

    Cox, Henry; Heaney, Kevin D.

    2003-04-01

    The term robustness in signal processing applications usually refers to approaches that are not degraded significantly when the assumptions that were invoked in defining the processing algorithm are no longer valid. Highly tuned algorithms that fall apart in real-world conditions are useless. The classic example is super-directive arrays of closely spaced elements: the very narrow beams and high directivity that could be predicted under ideal conditions could not be achieved under realistic conditions of amplitude, phase and position errors. The robust design tries to take into account the real environment as part of the optimization problem. This problem led to the introduction of the white noise gain constraint and diagonal loading in adaptive beamforming. Multiple linear constraints have been introduced in pursuit of robustness. Sonar systems such as towed arrays operate in less than ideal conditions, making robustness a concern. A special problem in sonar systems is failed array elements, which leads to severe degradation in beam patterns and bearing response patterns. Another robustness issue arises in matched field processing that uses an acoustic propagation model in the beamforming, where knowledge of the environmental parameters is usually limited. This paper reviews the various approaches to achieving robustness in sonar systems.
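
    The diagonal-loading idea mentioned above fits in a few lines: the loaded MVDR weights w = (R + eps*I)^-1 a / (a^H (R + eps*I)^-1 a) give up a little optimality for robustness to covariance and steering errors. A sketch with illustrative numbers:

      import numpy as np

      def mvdr_weights(R, a, loading=0.0):
          """MVDR beamformer weights with optional diagonal loading."""
          Rl = R + loading * np.eye(R.shape[0])
          w = np.linalg.solve(Rl, a)
          return w / (a.conj() @ w)   # distortionless constraint: a^H w = 1

      # Illustrative 8-element half-wavelength line array steered to 10 degrees.
      n = 8
      a = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(10.0)))

      # Sample covariance from few snapshots: poorly conditioned without loading.
      rng = np.random.default_rng(1)
      s = rng.standard_normal((n, 12)) + 1j * rng.standard_normal((n, 12))
      R = s @ s.conj().T / 12

      w0 = mvdr_weights(R, a)
      w1 = mvdr_weights(R, a, loading=0.1 * np.trace(R).real / n)
      # White noise gain ||w||^2: closer to 1/n means a more robust beamformer.
      print(np.linalg.norm(w0)**2, np.linalg.norm(w1)**2)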

  15. Robust visual tracking with contiguous occlusion constraint

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Qian, Weixian; Chen, Qian

    2016-02-01

    Visual tracking plays a fundamental role in video surveillance, robot vision and many other computer vision applications. In this paper, a robust visual tracking method motivated by the regularized ℓ1 tracker is proposed. We focus on investigating the case in which the target object is occluded. Generally, occlusion can be treated as a kind of contiguous outlier, with the target object as background. However, the penalty function of the ℓ1 tracker is not robust for relatively dense error distributed in contiguous regions. Thus, we exploit a nonconvex penalty function and Markov random fields (MRFs) for outlier modeling, which makes it more likely that the contiguous occluded regions are detected and the target appearance is recovered. For long-term tracking, a particle filter framework along with a dynamic model update mechanism is developed. Both qualitative and quantitative evaluations demonstrate a robust and precise performance.

  16. Nanotechnology Based Environmentally Robust Primers

    SciTech Connect

    Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L

    2003-03-18

    An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter deposition synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals along with a ceramic-based energetic sol-gel produced coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile, commercially viable processes that allow the "engineering" of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies, these materials can now be safely manufactured at application required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.

  17. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantages of discriminating against other optical sources and of simultaneously measuring range.

  18. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  19. Precise Positioning with Multi-GNSS and its Advantage for Seismic Parameters Inversion

    NASA Astrophysics Data System (ADS)

    Chen, K.; Li, X.; Babeyko, A. Y.; Ge, M.

    2015-12-01

    Together with the ongoing modernization of the U.S. GPS and the Russian GLONASS, the two new emerging global navigation satellite systems (BeiDou from China and Galileo from the European Union) are already operational, and the multi-GNSS era is coming. Compared with a single system, multi-GNSS can significantly improve satellite visibility, optimize the spatial geometry, and reduce the dilution of precision, and it will be of great benefit to both scientific applications and engineering services. In this contribution, we focus mainly on its potential advantages for earthquake parameter estimation and tsunami early warning. First, we assess the precise positioning performance of multi-GNSS by an outdoor experiment on a shaking table. Three positioning methods were used to retrieve the simulated seismic signal: precise point positioning (PPP), the variometric approach for displacements analysis stand-alone engine (VADASE) and temporal point positioning (TPP). In addition, with respect to VADASE and TPP, we extended the original dual-frequency models to single-frequency models and then tested the algorithms. Accuracy, reliability, and continuity were evaluated and analyzed in detail. Our results revealed that multi-GNSS offers more precise and robust positioning than GPS alone. Finally, as a case study, multi-GNSS data recorded during the 2014 Pisagua earthquake were re-processed. Using co-seismic displacements from GPS and multi-GNSS, the earthquake parameters and the resulting tsunami were inverted, respectively.
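
    The dilution-of-precision argument can be made concrete: each tracked satellite contributes a row [unit line-of-sight, 1] to the geometry matrix G, and GDOP = sqrt(trace((G^T G)^-1)) shrinks as rows are added. A toy sketch with made-up geometry:

      import numpy as np

      def gdop(unit_los):
          """GDOP from an array of receiver-to-satellite unit vectors (Nx3)."""
          G = np.hstack([unit_los, np.ones((len(unit_los), 1))])  # clock column
          return np.sqrt(np.trace(np.linalg.inv(G.T @ G)))

      rng = np.random.default_rng(2)

      def random_sky(n_sats):
          """Random unit vectors in the upper hemisphere (toy sky geometry)."""
          v = rng.standard_normal((n_sats, 3))
          v[:, 2] = np.abs(v[:, 2])   # keep satellites above the horizon
          return v / np.linalg.norm(v, axis=1, keepdims=True)

      sky = random_sky(24)
      print("GDOP,  6 satellites:", gdop(sky[:6]))
      print("GDOP, 24 satellites:", gdop(sky))  # multi-GNSS: markedly lower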

  20. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM) and particular attention has been paid to modeling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322

  1. Simultaneous HPLC determination of 22 components of essential oils; method robustness with experimental design.

    PubMed

    Porel, A; Sanyal, Y; Kundu, A

    2014-01-01

    The aim of the present study was the development and validation of a simple, precise and specific reversed phase HPLC method for the simultaneous determination of 22 components present in different essential oils, namely cinnamon bark oil, caraway oil and cardamom fruit oil. The chromatographic separation of all the components was achieved on a Wakosil-II C18 column with a mixture of 30 mM ammonium acetate buffer (pH 4.7), methanol and acetonitrile in different ratios as the mobile phase in a ternary linear gradient mode. The calibration graphs plotted with five different concentrations of each component were linear with a regression coefficient R² > 0.999. The limit of detection and limit of quantitation were estimated for all the components. The effect on analytical responses of small and deliberate variations of critical factors was examined by robustness testing with Design of Experiments employing a Central Composite Design, which established that the method was robust. The method was then validated for linearity, precision, accuracy and specificity, and demonstrated to be applicable to the determination of the ingredients in commercial samples of essential oil. PMID:24799735

  2. Simultaneous HPLC Determination of 22 Components of Essential Oils; Method Robustness with Experimental Design

    PubMed Central

    Porel, A.; Sanyal, Y.; Kundu, A.

    2014-01-01

    The aim of the present study was the development and validation of a simple, precise and specific reversed phase HPLC method for the simultaneous determination of 22 components present in different essential oils, namely cinnamon bark oil, caraway oil and cardamom fruit oil. The chromatographic separation of all the components was achieved on a Wakosil-II C18 column with a mixture of 30 mM ammonium acetate buffer (pH 4.7), methanol and acetonitrile in different ratios as the mobile phase in a ternary linear gradient mode. The calibration graphs plotted with five different concentrations of each component were linear with a regression coefficient R² > 0.999. The limit of detection and limit of quantitation were estimated for all the components. The effect on analytical responses of small and deliberate variations of critical factors was examined by robustness testing with Design of Experiments employing a Central Composite Design, which established that the method was robust. The method was then validated for linearity, precision, accuracy and specificity, and demonstrated to be applicable to the determination of the ingredients in commercial samples of essential oil. PMID:24799735

  3. The Seasat Precision Orbit Determination Experiment

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Born, G. H.

    1980-01-01

    The objectives and conclusions reached during the Seasat Precision Orbit Determination Experiment are discussed. It is noted that the activities of the experiment team included extensive software calibration and validation and an intense effort to validate and improve the dynamic models which describe the satellite's motion. Significant improvement in the gravitational model was obtained during the experiment, and it is pointed out that the current accuracy of the Seasat altitude ephemeris is 1.5 m rms. An altitude ephemeris for the Seasat spacecraft with an accuracy of 0.5 m rms is seen as possible with further improvements in the geopotential, atmospheric drag, and solar radiation pressure models. It is concluded that since altimetry missions with a 2-cm precision altimeter are contemplated, the precision orbit determination effort initiated under the Seasat Project must be continued and expanded.

  4. Assessing the accuracy of the van der Waals density functionals for rare-gas and small molecular systems

    NASA Astrophysics Data System (ADS)

    Callsen, Martin; Hamada, Ikutaro

    2015-05-01

    The precise description of chemical bonds with different natures is a prerequisite for an accurate electronic structure method. The van der Waals density functional is a promising approach that meets such a requirement. Nevertheless, the accuracy should be assessed for a variety of materials to test the robustness of the method. We present benchmark calculations for weakly interacting molecular complexes and rare-gas systems as well as covalently bound molecular systems, in order to assess the accuracy and applicability of rev-vdW-DF2, a recently proposed variant [I. Hamada, Phys. Rev. B 89, 121103 (2014), 10.1103/PhysRevB.89.121103] of the van der Waals density functional. It is shown that although the calculated atomization energies for small molecules are less accurate, rev-vdW-DF2 describes the interaction energy curves for the weakly interacting molecules and rare-gas complexes, as well as the bond lengths of diatomic molecules, reasonably well.

  5. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters to precise repeatability of performance. All lamps are not equal. There is a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of precision performance lamps includes (must include) the factors of a precision lamp; what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered: not individually, but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager and the system end-user. It can be shown that, for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal to noise ratios, (4) higher manufacturing yields, (5) lower system costs, (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids in system product design and manufacturing to build and market more market-acceptable products in the

  6. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D. ); Abdallah, C.T. )

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  7. Robust control of accelerators

    NASA Astrophysics Data System (ADS)

    Johnson, W. Joel D.; Abdallah, Chaouki T.

    1991-07-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modelling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and, therefore, shall not be pursued in this research. In contrast, the robust control method leads to simpler hardware. However, robust control requires a more accurate mathematical model of the physical process than is required by adaptive control. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this article, we report on our research progress. In section 1, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research is presented. In section 2, the results of our proof-of-principle experiments are presented. In section 3, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  8. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

    The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered as ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions which are designed for robust operations and worst-case situations such as day/night cameras or rain and snow solutions. This ideal model may be compared to various approaches that have been implemented on production vehicles and equipment using Ethernet, CAN Bus and JAUS architectures, and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility, but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  9. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site-specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for it is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variable ra...

  10. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for PI/SSI is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variabl...

  11. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  12. Backward smoothing for precise GNSS applications

    NASA Astrophysics Data System (ADS)

    Vaclavovic, Pavel; Dousa, Jan

    2015-10-01

    The Extended Kalman filter is widely used for its robustness and simple implementation. Parameters estimated in solving dynamical systems usually require a certain time to converge and need to be smoothed by dedicated algorithms. The purpose of our study was to implement smoothing algorithms for processing both code and carrier phase observations with the Precise Point Positioning method. We implemented and used the well known Rauch-Tung-Striebel smoother (RTS). It was found that the RTS suffers from significant numerical instability in the determination of the smoothed state covariance matrix. We improved the processing with algorithms based on Singular Value Decomposition, which are more robust. Observations from many permanent stations have been processed with final orbits and clocks provided by the International GNSS Service (IGS), and the smoothing improved stability and precision in every case. Moreover, (re)convergence of the parameters was always successfully eliminated.
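
    For reference, a compact sketch of the conventional RTS recursion for a linear model x_{k+1} = F x_k + w (an illustration, not the authors' PPP implementation). The backward gain requires inverting the predicted covariance, which is exactly where the numerical instability noted above arises and which the SVD-based variant avoids.

      import numpy as np

      def rts_smoother(xf, Pf, xp, Pp, F):
          """Rauch-Tung-Striebel pass over filtered (xf, Pf) and one-step
          predicted (xp, Pp) Kalman results; returns smoothed states/covs."""
          n = len(xf)
          xs, Ps = [None] * n, [None] * n
          xs[-1], Ps[-1] = xf[-1], Pf[-1]
          for k in range(n - 2, -1, -1):
              # Backward gain C = Pf_k F^T inv(Pp_{k+1}): the delicate step.
              C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])
              xs[k] = xf[k] + C @ (xs[k + 1] - xp[k + 1])
              Ps[k] = Pf[k] + C @ (Ps[k + 1] - Pp[k + 1]) @ C.T
          return xs, Ps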

  13. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26°, and a mean angular difference from specialist delineations (accuracy) of 1.64°.

  14. Robust Unit Commitment Considering Uncertain Demand Response

    DOE PAGESBeta

    Liu, Guodong; Tomsovic, Kevin

    2014-09-28

    Although price responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of the power system, the real response from the demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision taking into account uncertainty of the price elasticity of demand. By optimizing the worst case under a proper robustness level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.

  15. Robust Unit Commitment Considering Uncertain Demand Response

    SciTech Connect

    Liu, Guodong; Tomsovic, Kevin

    2014-09-28

    Although price responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of the power system, the real response from the demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision taking into account uncertainty of the price elasticity of demand. By optimizing the worst case under a proper robustness level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.

  16. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  17. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-01-01

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physically blocked environments, such as natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this work to enhance PNT robustness. "Clock coasting" of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, and the integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542

  19. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot deliver a low-latency high-rate output for the rover receiver because of the long data link transmission time delay (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from the two receivers. The asynchronous observation model (AOM) is developed from the undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. The ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the dominant error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, both of which grow linearly with the DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. Since the synchronous observation model (SOM) is only a special case of the AOM, the AOM can also handle synchronous observations and achieve a precise positioning solution with them. The proposed method not only reduces the cost of data collection and transmission, but also supports a mobile phone network data link for transferring the reference receiver's data. It avoids the data synchronization process, apart from the ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  20. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects; for example, primary diagnosis accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3%), P = 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  1. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
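
    The percolation model described above is easy to reproduce in outline: fail each link with a probability proportional to its length and track the surviving largest component. The sketch below is our own toy version, with the graph model and failure scaling chosen for illustration rather than taken from the paper.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(1)
G = nx.random_geometric_graph(200, 0.15, seed=1)   # spatial network in the unit square
pos = nx.get_node_attributes(G, "pos")

def length(u, v):
    (x1, y1), (x2, y2) = pos[u], pos[v]
    return np.hypot(x1 - x2, y1 - y2)

lengths = {e: length(*e) for e in G.edges()}
lmax = max(lengths.values())

for f in (0.2, 0.5, 0.8):                          # overall failure intensity
    H = G.copy()
    # each link fails with probability proportional to its spatial length
    H.remove_edges_from([e for e, L in lengths.items() if rng.random() < f * L / lmax])
    giant = max(nx.connected_components(H), key=len)
    print(f"f={f}: {len(giant) / G.number_of_nodes():.0%} of nodes remain connected")
```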

  3. Improving the precision matrix for precision cosmology

    NASA Astrophysics Data System (ADS)

    Paz, Dante J.; Sánchez, Ariel G.

    2015-12-01

    The estimation of cosmological constraints from observations of the large-scale structure of the Universe, such as the power spectrum or the correlation function, requires knowledge of the inverse of the associated covariance matrix, namely the precision matrix, Ψ. In most analyses, Ψ is estimated from a limited set of mock catalogues. Depending on how many mocks are used, this estimation has an associated error which must be propagated into the final cosmological constraints. For future surveys such as Euclid and the Dark Energy Spectroscopic Instrument, the control of this additional uncertainty requires a prohibitively large number of mock catalogues. In this work, we test a novel technique for the estimation of the precision matrix, the covariance tapering method, in the context of baryon acoustic oscillation measurements. Even though this technique was originally devised as a way to speed up maximum likelihood estimations, our results show that it also reduces the impact of noisy precision matrix estimates on the derived confidence intervals, without introducing biases on the target parameters. The application of this technique can help future surveys to reach their true constraining power using a significantly smaller number of mock catalogues.
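
    In outline, covariance tapering multiplies the noisy sample covariance element-wise by a compactly supported taper before inversion, suppressing poorly measured long-range entries. The following toy sketch illustrates this reading of the idea; the taper shape, dimensions and test covariance are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
p, nmocks = 40, 60
true_cov = np.fromfunction(lambda i, j: 0.5 ** np.abs(i - j), (p, p))
mocks = rng.multivariate_normal(np.zeros(p), true_cov, size=nmocks)
S = np.cov(mocks, rowvar=False)                    # noisy sample covariance

def taper(p, width):
    """Simple linear taper: zero weight beyond `width` off-diagonal bins."""
    d = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return np.clip(1.0 - d / width, 0.0, None)

truth = np.linalg.inv(true_cov)
for name, psi in [("plain", np.linalg.inv(S)),
                  ("tapered", np.linalg.inv(S * taper(p, width=10)))]:
    err = np.linalg.norm(psi - truth) / np.linalg.norm(truth)
    print(f"{name:>7s} precision matrix, relative error: {err:.2f}")
```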

  4. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  5. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affects precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km² and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km² study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information

  6. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  7. Precision Spectroscopy of Atomic Hydrogen

    NASA Astrophysics Data System (ADS)

    Beyer, A.; Parthey, Ch G.; Kolachevsky, N.; Alnis, J.; Khabarova, K.; Pohl, R.; Peters, E.; Yost, D. C.; Matveev, A.; Predehl, K.; Droste, S.; Wilken, T.; Holzwarth, R.; Hänsch, T. W.; Abgrall, M.; Rovera, D.; Salomon, Ch; Laurent, Ph; Udem, Th

    2013-12-01

    Precise determinations of transition frequencies of simple atomic systems are required for a number of fundamental applications such as tests of quantum electrodynamics (QED), the determination of fundamental constants and nuclear charge radii. The sharpest transition in atomic hydrogen occurs between the metastable 2S state and the 1S ground state. Its transition frequency has now been measured with almost 15 digits of accuracy using an optical frequency comb and a cesium atomic clock as a reference [1]. A recent measurement of the 2S - 2P3/2 transition frequency in muonic hydrogen is in significant contradiction to the hydrogen data if QED calculations are assumed to be correct [2, 3]. We hope to contribute to this so-called "proton size puzzle" by providing additional experimental input from hydrogen spectroscopy.

  8. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus for enabling accurate retaining of a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 µm.

  9. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, which is composed of an oxide layer of high electrical permittivity (a high-k dielectric) that separates the metal gate from the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  10. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609

  11. Robust Collaborative Recommendation

    NASA Astrophysics Data System (ADS)

    Burke, Robin; O'Mahony, Michael P.; Hurley, Neil J.

    Collaborative recommender systems are vulnerable to malicious users who seek to bias their output, causing them to recommend (or not recommend) particular items. This problem has been an active research topic since 2002. Researchers have found that the most widely-studied memory-based algorithms have significant vulnerabilities to attacks that can be fairly easily mounted. This chapter discusses these findings and the responses that have been investigated, especially detection of attack profiles and the implementation of robust recommendation algorithms.

  12. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and also performed the corresponding experiments to validate our predictions. We address cellular robustness from the "metabolite" framework by using the novel concept of the "flux-sum," which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the list of metabolites essential to cell survival, and then discovered "acclimator" metabolites that can control cell growth. Furthermore, this concept of "metabolite essentiality" should be useful in developing new metabolic engineering strategies for improved production of various bioproducts and in designing new drugs that can fight multi-antibiotic-resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on the dynamic properties of cellular metabolism.
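
    A small worked example of the flux-sum: at (pseudo-)steady state the incoming and outgoing fluxes around a metabolite balance, and both equal half the sum of |S_ij v_j| over reactions j. The toy network below is ours, purely for illustration.

```python
import numpy as np

# rows: metabolites A, B; columns: reactions r1: ->A, r2: A->B, r3: B->
S = np.array([[ 1, -1,  0],
              [ 0,  1, -1]], dtype=float)
v = np.array([2.0, 2.0, 2.0])                # steady-state flux vector (S @ v = 0)

assert np.allclose(S @ v, 0), "fluxes must balance at steady state"
flux_sum = 0.5 * np.abs(S * v).sum(axis=1)   # one flux-sum value per metabolite
for name, phi in zip("AB", flux_sum):
    print(f"flux-sum around {name}: {phi}")
```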

  13. Robust impedance shaping telemanipulation

    SciTech Connect

    Colgate, J.E.

    1993-08-01

    When a human operator performs a task via a bilateral manipulator, the feel of the task is embodied in the mechanical impedance of the manipulator. Traditionally, a bilateral manipulator is designed for transparency; i.e., so that the impedance reflected through the manipulator closely approximates that of the task. Impedance shaping bilateral control, introduced here, differs in that it treats the bilateral manipulator as a means of constructively altering the impedance of a task. This concept is particularly valuable if the characteristic dimensions (e.g., force, length, time) of the task impedance are very different from those of the human limb. It is shown that a general form of impedance shaping control consists of a conventional power-scaling bilateral controller augmented with a real-time interactive task simulation (i.e., a virtual environment). An approach to impedance shaping based on kinematic similarity between tasks of different scale is introduced and illustrated with an example. It is shown that an important consideration in impedance shaping controller design is robustness; i.e., guaranteeing the stability of the operator/manipulator/task system. A general condition for the robustness of a bilateral manipulator is derived. This condition is based on the structured singular value (μ). An example of robust impedance shaping bilateral control is presented and discussed.

  14. Robustness of Interdependent Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    2011-03-01

    In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.
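
    The cascade mechanism described above can be reproduced in a toy simulation: two random networks with one-to-one dependency links, where a node pair survives only while each node belongs to the giant component of its own network. The sketch below follows that spirit; the network sizes and parameters are illustrative only.

```python
import networkx as nx
import random

random.seed(3)
n = 1000
A = nx.gnp_random_graph(n, 4.0 / n, seed=3)   # network A, mean degree ~4
B = nx.gnp_random_graph(n, 4.0 / n, seed=4)   # network B; node i in A depends on i in B

def giant(G, alive):
    """Nodes of `alive` that lie in the giant component of G restricted to `alive`."""
    H = G.subgraph(alive)
    if H.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(H), key=len)

alive = set(random.sample(range(n), int(0.7 * n)))  # initial failure of 30% of A-nodes
while True:
    # a node pair survives only if it is in the giant component of BOTH networks
    alive_next = giant(A, alive) & giant(B, alive)
    if alive_next == alive:
        break
    alive = alive_next
print(f"{len(alive) / n:.0%} of node pairs survive the cascade")
```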

  15. Atomic interactions in precision interferometry using Bose-Einstein condensates

    SciTech Connect

    Jamison, Alan O.; Gupta, Subhadeep; Kutz, J. Nathan

    2011-10-15

    We present theoretical tools for predicting and reducing the effects of atomic interactions in Bose-Einstein condensate (BEC) interferometry experiments. To address mean-field shifts during free propagation, we derive a robust scaling solution that reduces the three-dimensional Gross-Pitaevskii equation to a set of three simple differential equations valid for any interaction strength. To model the other common components of a BEC interferometer--condensate splitting, manipulation, and recombination--we generalize the slowly varying envelope reduction, providing both analytic handles and dramatically improved simulations. Applying these tools to a BEC interferometer to measure the fine structure constant, α [S. Gupta, K. Dieckmann, Z. Hadzibabic, and D. E. Pritchard, Phys. Rev. Lett. 89, 140401 (2002)], we find agreement with the results of the original experiment and demonstrate that atomic interactions do not preclude measurement to better than part-per-billion accuracy, even for atomic species with relatively large scattering lengths. These tools help make BEC interferometry a viable choice for a broad class of precision measurements.

  16. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on, and become anxious about, the physical sensations associated with panic. Attention to internal somatic cues has been labeled interoception. The present study examined the effects of physiological arousal and subjective anxiety on interoceptive accuracy. Infrequent panickers and nonanxious participants took part in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety information or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded based on polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462
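
    Heartbeat tracking tasks of this kind are commonly scored with a simple accuracy index comparing counted and recorded beats (a Schandry-style score). The snippet below shows that calculation on made-up numbers; it is not the study's data or its exact coding scheme.

```python
def heartbeat_accuracy(recorded, counted):
    """1 = perfect interoceptive accuracy; values near 0 = poor accuracy."""
    return 1 - abs(recorded - counted) / recorded

intervals = [(45, 41), (62, 60), (38, 29)]  # (polygraph-recorded, self-counted) beats
score = sum(heartbeat_accuracy(r, c) for r, c in intervals) / len(intervals)
print(f"mean heartbeat perception score: {score:.2f}")
```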

  17. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with industry-standard high-pressure ionization chamber detector hardware. A new highly sensitive readout circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. The new signal processing technique and dedicated data acquisition were tested together with the new readout. The system enables data collection on a remote Linux workstation connected to the detectors through a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit, 192 kHz analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  18. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high precision results; however, real life situations do not always let us collect 24 h of data, so repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, over two years in succession, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking the seasonal effects into account, and the typical one-term accuracy formula is expanded to a two-term one.

  19. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm of which includes an air dielectric line. Changes in liquid level along one air dielectric line unbalance the bridge and create a voltage which is directly measurable across the bridge.

  20. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of welding currents up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. A highly accurate analogue integrator is therefore the key to restoring the signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-duration pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to build a precision analogue integrator system for MFDC RSW. A voltage-holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate the capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and its effect is robust across different output voltage levels. The total integration error is limited to within ±0.09 mV s⁻¹ (0.005% of full scale per second) at a 95% confidence level, which makes it possible to achieve precision measurement of the MFDC RSW welding current with Rogowski coils of 0.1% accuracy class.
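
    A simplified way to see the two error sources and the compensation at work is a discrete-time integrator model: leakage drains the accumulated output with some time constant, and a feedback adder restores the estimated loss each step. The Euler sketch below does exactly that; the time constants, the constant-input pulse, and the slightly-off leakage estimate (as would come from a voltage-holding test) are all assumptions for illustration, not the paper's circuit values.

```python
fs, T = 192_000, 0.5      # sampling rate (Hz) and integration time (s), assumed
tau_rc = 0.1              # integrator time constant R*C (s), assumed
tau_leak = 5.0            # true capacitive leakage time constant (s), assumed
tau_leak_est = 5.2        # leakage estimate from a voltage-holding test, assumed

dt = 1.0 / fs
v_in = 1.0                # constant input during the weld pulse (V), assumed
ideal = real = comp = 0.0
for _ in range(int(fs * T)):
    ideal += v_in / tau_rc * dt                             # perfect integrator
    real += v_in / tau_rc * dt - real / tau_leak * dt       # capacitive leakage
    comp += (v_in / tau_rc * dt - comp / tau_leak * dt
             + comp / tau_leak_est * dt)                    # feedback adder restores loss

print(f"ideal {ideal:.3f} V | leakage error {real - ideal:+.4f} V "
      f"| compensated error {comp - ideal:+.4f} V")
```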

  1. High-precision simulations of the weak lensing effect on cosmic microwave background polarization

    NASA Astrophysics Data System (ADS)

    Fabbian, Giulio; Stompor, Radek

    2013-08-01

    We studied the accuracy, robustness, and self-consistency of pixel-domain simulations of the gravitational lensing effect on the primordial cosmic microwave background (CMB) anisotropies due to the large-scale structure of the Universe. In particular, we investigated the dependence of the precision of the results on some crucial parameters of these techniques and propose a semi-analytic framework to determine their values so that the required precision is a priori assured and the numerical workload simultaneously optimized. Our focus was on the B-mode signal, but we also discuss other CMB observables, such as the total intensity, T, and E-mode polarization, emphasizing differences and similarities between all these cases. Our semi-analytic considerations are backed up by extensive numerical results, obtained using a code, nicknamed lenS2HAT - for lensing using scalable spherical harmonic transforms (S2HAT) - which we have developed in the course of this work. The code implements a version of the previously described pixel-domain approach and permits performing the simulations at very high resolutions and data volumes, thanks to the efficient parallelization provided by the S2HAT library - a parallel library for calculating spherical harmonic transforms. The code is made publicly available.

  2. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, allowing the insertion of noise characters into, and the dropping of characters from, the keyword provides robustness against character segmentation errors, while substituting each keyword character with its OCR recognition candidates, or with any other character, provides robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
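
    One way to realize this kind of matching is a Levenshtein-style dynamic program in which insertions and deletions model segmentation errors and substitutions are free whenever the OCR character is among the recognition candidates of the keyword character. The sketch below is our own illustration of that idea, not the authors' exact algorithm; the candidate table is hypothetical.

```python
def ocr_match_cost(keyword, ocr_text, candidates):
    """Edit-distance DP; substitution is free if the OCR character is a candidate."""
    m, n = len(keyword), len(ocr_text)
    dp = [[i + j if i * j == 0 else 0 for j in range(n + 1)] for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            same = ocr_text[j - 1] in candidates.get(keyword[i - 1], keyword[i - 1])
            dp[i][j] = min(dp[i - 1][j] + 1,           # dropped character
                           dp[i][j - 1] + 1,           # inserted noise character
                           dp[i - 1][j - 1] + (0 if same else 1))
    return dp[m][n]

# hypothetical OCR confusion candidates: 'l' may be read as '1' or 'I', etc.
cands = {"l": "l1I", "o": "o0", "s": "s5"}
print(ocr_match_cost("laser", "1a5er", cands))   # -> 0: matches despite OCR errors
```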

  3. Precision retrieval of non-isothermal exo-atmospheres

    NASA Astrophysics Data System (ADS)

    Waldmann, Ingo Peter; Rocchetto, Marco

    2015-12-01

    Spectroscopy of extrasolar planets is as fast moving as it is new. When trying to characterise the atmospheres of these foreign worlds, we are faced with three challenges: 1) the correct treatment of atmospheric opacities at high temperatures, 2) the low signal-to-noise ratio of the observed data, and 3) large, degenerate parameter spaces. To advance the interpretation of exoplanetary atmospheres, one must address these challenges in one coherent framework. This is particularly true for emission spectroscopy, where the need for non-isothermal temperature-pressure profiles significantly increases degeneracies in low signal-to-noise data. In the light of these challenges, we developed a novel Bayesian atmospheric retrieval suite, Tau-REx (Waldmann et al. 2015a,b). Tau-REx is a full line-by-line emission/transmission spectroscopy retrieval code based on the most complete hot line lists from the ExoMol project. For emission spectroscopy, the correct retrieval of the atmosphere's thermal gradient is extremely challenging with sparse and/or low-SNR data. Tau-REx implements a novel two-stage retrieval algorithm which allows the code to iteratively adapt its retrieval complexity to the likelihood surface of the observed data. In this way we achieve very high retrieval accuracy and robustness to low-SNR data. Using nested sampling in conjunction with large-scale cluster computing, Tau-REx integrates the full Bayesian evidence, which allows for precise model selection of the exoplanet's chemistry and thermal dynamics. Precision and statistical rigour are paramount in the measurement of quantities such as the carbon-to-oxygen ratio of planets, which allows insights into the formation history of these exotic worlds. In this conference I will discuss the intricacies of retrieving the thermal emission of non-isothermal atmospheres and what can be learned from data of current and future facilities.

  4. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described which enables real-time accountability of the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  5. High-precision hydraulic Stewart platform

    NASA Astrophysics Data System (ADS)

    van Silfhout, Roelof G.

    1999-08-01

    We present a novel design for a Stewart platform (or hexapod), an apparatus which performs positioning tasks with high accuracy. The platform, which is supported by six hydraulic telescopic struts, provides six degrees of freedom with 1 μm resolution. Rotations about user defined pivot points can be specified for any axis of rotation with microradian accuracy. Motion of the platform is performed by changing the strut lengths. Servo systems set and maintain the length of the struts to high precision using proportional hydraulic valves and incremental encoders. The combination of hydraulic actuators and a design which is optimized in terms of mechanical stiffness enables the platform to manipulate loads of up to 20 kN. Sophisticated software allows direct six-axis positioning including true path control. Our platform is an ideal support structure for a large variety of scientific instruments that require a stable alignment base with high-precision motion.

  6. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10⁻¹⁵ for periods of 30-100 days.
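
    The noise types listed above are conventionally quantified with the Allan deviation. As background to the discussion, here is a standard overlapping Allan deviation estimate applied to simulated white-FM data; it is generic textbook material, independent of the paper's specific class of calibration algorithms.

```python
import numpy as np

def overlapping_adev(y, m):
    """Overlapping Allan deviation at averaging time m (in units of the
    sample spacing) from fractional-frequency samples y."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")  # all m-sample averages
    d = ybar[m:] - ybar[:-m]                             # differences of adjacent averages
    return np.sqrt(0.5 * np.mean(d ** 2))

rng = np.random.default_rng(4)
y = rng.normal(0.0, 1e-13, 100_000)   # simulated white-FM noise at the 1e-13 level
for m in (1, 10, 100, 1000):
    # for white FM the deviation falls as 1/sqrt(tau)
    print(f"tau = {m:>4d} s : sigma_y = {overlapping_adev(y, m):.2e}")
```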

  7. Robust maximum a posteriori image super-resolution

    NASA Astrophysics Data System (ADS)

    Vrigkas, Michalis; Nikou, Christophoros; Kondi, Lisimachos P.

    2014-07-01

    A global robust M-estimation scheme for maximum a posteriori (MAP) image super-resolution which efficiently addresses the presence of outliers in the low-resolution images is proposed. In iterative MAP image super-resolution, the objective function to be minimized involves the highly resolved image, a parameter controlling the step size of the iterative algorithm, and a parameter weighing the data fidelity term with respect to the smoothness term. Apart from the robust estimation of the high-resolution image, the contribution of the proposed method is twofold: (1) the robust computation of the regularization parameters controlling the relative strength of the prior with respect to the data fidelity term and (2) the robust estimation of the optimal step size in the update of the high-resolution image. Experimental results demonstrate that integrating these estimations into a robust framework leads to significant improvement in the accuracy of the high-resolution image.

  8. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  9. Robust and efficient in situ quantum control

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Moussa, Osama

    2015-05-01

    Precision control of quantum systems is the driving force for both quantum technology and the probing of physics at the quantum and nanoscale levels. We propose an implementation-independent method for in situ quantum control that leverages recent advances in the direct estimation of quantum gate fidelity. Our algorithm takes account of the stochasticity of the problem, is suitable for closed-loop control, and requires only a constant number of fidelity-estimating experiments per iteration independent of the dimension of the control space. It is efficient and robust to both statistical and technical noise.

  10. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  11. Complexity and robustness

    PubMed Central

    Carlson, J. M.; Doyle, John

    2002-01-01

    Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims that these are the most important features of complexity, that they are not accidents of evolution or artifices of engineering design, and that they are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207

  12. Robust Systems Test Framework

    SciTech Connect

    Ballance, Robert A.

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  13. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied, with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)), which is exponentially better than quantum walk algorithms, for which the tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.

  16. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network and on real transformer load data. The test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.

  17. Robust control for uncertain structures

    NASA Technical Reports Server (NTRS)

    Douglas, Joel; Athans, Michael

    1991-01-01

    Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.

  18. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced in the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to converge more quickly to the asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
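
    For readers who want to experiment with sparsity-exploiting precision estimation, the snippet below contrasts the brute-force sample precision matrix with a sparse estimate. It uses scikit-learn's graphical lasso as a stand-in estimator; the paper's method differs, but the comparison conveys the same point about accuracy with few simulations.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(5)
p, nsim = 30, 120
truth_prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # sparse, tridiagonal
truth_cov = np.linalg.inv(truth_prec)
samples = rng.multivariate_normal(np.zeros(p), truth_cov, size=nsim)

sample_prec = np.linalg.inv(np.cov(samples, rowvar=False))  # brute-force estimate
sparse_prec = GraphicalLasso(alpha=0.05).fit(samples).precision_

for name, psi in [("sample", sample_prec), ("sparse", sparse_prec)]:
    err = np.linalg.norm(psi - truth_prec) / np.linalg.norm(truth_prec)
    print(f"{name} precision estimate, relative error: {err:.2f}")
```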

  19. Precision Higgs Physics

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja

    2015-04-01

    The future of the high energy physics program will increasingly rely upon precision studies looking for deviations from the Standard Model. Run I of the Large Hadron Collider (LHC) triumphantly discovered the long-awaited Higgs boson, and there is great hope in the particle physics community that this new state will open a portal onto a new theory of Nature at the smallest scales. A precision study of Higgs boson properties is needed in order to test whether this belief is true. New theoretical ideas and high-precision QCD tools are crucial to fulfill this goal. They become even more important as larger data sets from LHC Run II further reduce the experimental errors and theoretical uncertainties begin to dominate. In this talk, I will review recent progress in understanding Higgs properties, including the calculation of precision predictions needed to identify possible physics beyond the Standard Model in the Higgs sector. New ideas for measuring the Higgs couplings to light quarks as well as bounding the Higgs width in a model-independent way will be discussed. Precision predictions for Higgs production in association with jets and ongoing efforts to calculate the inclusive N3LO cross section will be reviewed.

  2. Precise Indoor Localization for Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kaijaluoto, R.; Hyyppä, A.

    2015-05-01

    Accurate 3D data is of high importance for indoor modeling in various applications in construction, engineering and cultural heritage documentation. Because the lack of GNSS signals hampers the use of kinematic platforms indoors, TLS is currently the most accurate and precise method for collecting such data. Due to its static, single-viewpoint data collection, excessive time and data redundancy are needed to ensure the integrity and coverage of the data. However, localization methods with affordable scanners can be used to solve the mobile platform pose problem. The aim of this study was to investigate what level of trajectory accuracy can be achieved with high quality sensors and freely available state-of-the-art planar SLAM algorithms, and how well this trajectory translates to a point cloud collected with a secondary scanner. In this study, high precision laser scanners were used with a novel way of combining the strengths of two SLAM algorithms into a functional method for precise localization. We collected five datasets using the Slammer platform with two laser scanners, and processed them with altogether 20 different parameter sets. The results were validated against a TLS reference. The results show that increasing the scan frequency improves the trajectory, reaching 20 mm RMSE levels for the best performing parameter sets. Further analysis of the 3D point cloud showed good agreement with the TLS reference, with 17 mm positional RMSE. With precision scanners, the obtained point cloud provides highly detailed data for indoor modeling, with accuracies at best close to TLS and with vastly improved data collection efficiency.

  3. Ultra-precision: enabling our future.

    PubMed

    Shore, Paul; Morantz, Paul

    2012-08-28

    This paper provides a perspective on the development of ultra-precision technologies: what drove their evolution and what they now promise for the future as we face the consequences of consumption of the Earth's finite resources. Improved application of measurement is introduced as a major enabler of mass production, and its resultant impact on wealth generation is considered. This paper identifies the ambitions of the defence, automotive and microelectronics sectors as important drivers of improved manufacturing accuracy capability and ever smaller feature creation. It then describes how science fields such as astronomy have presented significant precision engineering challenges, illustrating how these fields of science have achieved unprecedented levels of accuracy, sensitivity and sheer scale. Notwithstanding their importance to scientific understanding, many science-driven ultra-precision technologies became key enablers for wealth generation and other aspects of well-being. Specific ultra-precision machine tools important to major astronomy programmes are discussed, as well as the way in which machine tools subsequently developed at the beginning of the twenty-first century now provide much wider benefits. PMID:22802499

  4. Robustness and modeling error characterization

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.; Castanon, D. A.; Sandell, N. R., Jr.; Levy, B. C.; Athans, M.; Stein, G.

    1984-01-01

    The results on robustness theory presented here are extensions of those given in Lehtomaki et al. (1981). The basic innovation in these new results is that they utilize minimal additional information about the structure of the modeling error, as well as its magnitude, to assess the robustness of feedback systems for which robustness tests based on the magnitude of modeling error alone are inconclusive.

  5. Robustness in multicellular systems

    NASA Astrophysics Data System (ADS)

    Xavier, Joao

    2011-03-01

    Cells and organisms cope with the task of maintaining their phenotypes in the face of numerous challenges. Much attention has recently been paid to questions of how cells control molecular processes to ensure robustness. However, many biological functions are multicellular and depend on interactions, both physical and chemical, between cells. We use a combination of mathematical modeling and molecular biology experiments to investigate the features that convey robustness to multicellular systems. Cell populations must react to external perturbations by sensing environmental cues and acting coordinately in response. At the same time, they face a major challenge: the emergence of conflict from within. Multicellular traits are vulnerable to exploitation by cells whose phenotypes do not contribute to shared resources yet benefit from them. This is true in populations of single-cell organisms that have social lifestyles, where conflict can lead to the emergence of social ``cheaters,'' as well as in multicellular organisms, where conflict can lead to the evolution of cancer. I will describe features that diverse multicellular systems can have to eliminate potential conflicts as well as to withstand external perturbations.

  6. Fooled by local robustness.

    PubMed

    Sniedovich, Moshe

    2012-10-01

    One would have expected the considerable public debate created by Nassim Taleb's two best-selling books on uncertainty, Fooled by Randomness and The Black Swan, to inspire greater caution in the face of the fundamental difficulties posed by severe uncertainty. Yet, methodologies exhibiting an incautious approach to uncertainty have been proposed recently in a range of publications. So, the objective of this short note is to call attention to a prime example of an incautious approach to severe uncertainty that is manifested in the proposition to use the concept radius of stability as a measure of robustness against severe uncertainty. The central proposition of this approach, which is exemplified in info-gap decision theory, is this: use a simple radius of stability model to analyze and manage a severe uncertainty that is characterized by a vast uncertainty space, a poor point estimate, and a likelihood-free quantification of uncertainty. This short discussion therefore serves as a reminder that the generic radius of stability model is a model of local robustness. It is thus utterly unsuitable for the treatment of severe uncertainty when the latter is characterized by a poor estimate of the parameter of interest, a vast uncertainty space, and a likelihood-free quantification of uncertainty. PMID:22384828
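
    For reference, the generic radius-of-stability model under criticism can be written as follows (a standard formulation; the notation is assumed here):

        \[ \hat{\rho}(q,\tilde{u}) \;=\; \max\big\{ \rho \ge 0 \;:\; \text{the system performs acceptably for all } u \in B(\rho,\tilde{u}) \big\}, \]

    where $q$ is the decision, $\tilde{u}$ the point estimate of the uncertain parameter, and $B(\rho,\tilde{u})$ a ball of radius $\rho$ centered at $\tilde{u}$. The measure explores only a neighbourhood of $\tilde{u}$, which is exactly why it is local, and why it cannot speak for a vast uncertainty space surrounding a poor estimate.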

  7. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  8. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base unit, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  9. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular, ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements, fundamental physical interactions, in particular quantum electrodynamics, can also be tested at the highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  10. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE = (0.8 ± 0.3) mm and NCC = 0.99 in the cadaveric head compared to TRE = (2.6 ± 1.0) mm and NCC = 0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE = (1.6 ± 0.9) mm compared to rigid registration TRE = (3.6 ± 1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 × 1 × 2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in
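
    The convergence rule described (advance to the next pyramid level once the deformation field changes little between iterations) can be sketched as below. This is an illustrative reconstruction, not the authors' code: `demons_step` and `upsample` are placeholders for a real Demons update and field resampler.

        import numpy as np

        def multiscale_demons(pyramid, demons_step, upsample, tol=1e-3, max_iters=200):
            # `pyramid` is a coarse-to-fine list of (fixed, moving) image pairs.
            fixed0, _ = pyramid[0]
            field = np.zeros(fixed0.shape + (fixed0.ndim,))  # zero initial deformation
            for level, (fixed, moving) in enumerate(pyramid):
                if level > 0:
                    field = upsample(field, fixed.shape)
                for _ in range(max_iters):
                    new_field = demons_step(fixed, moving, field)
                    # Convergence criterion: mean voxelwise change in the field.
                    change = np.mean(np.linalg.norm(new_field - field, axis=-1))
                    field = new_field
                    if change < tol:
                        break  # advance to the next (finer) pyramid level
            return field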

  11. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provides benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  12. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of the disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  13. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  14. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  15. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  16. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth mass can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  17. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. An automatic capability for designing GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although it is generalized for evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
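
    Read this way, the Monte Carlo robustness of a topology is the fraction of random kinetic parameterizations that still produce the target behavior. A toy sketch under that reading (the predicate and the `n_parameters` attribute are placeholders for a real simulator):

        import numpy as np

        def topological_robustness(topology, behaves_correctly, n_samples=1000, rng=None):
            # `behaves_correctly(topology, params)` should simulate the GRN
            # with the sampled kinetic parameters and test for the target
            # behavior (e.g. oscillation or bistability).
            rng = rng or np.random.default_rng()
            hits = 0
            for _ in range(n_samples):
                params = rng.uniform(0.1, 10.0, size=topology.n_parameters)
                if behaves_correctly(topology, params):
                    hits += 1
            return hits / n_samples  # fraction of samples preserving behavior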

  18. Precision pointing and control of flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Bantell, M. H., Jr.

    1987-01-01

    The problem and long-term objectives for the precision pointing and control of flexible spacecraft are given. The four basic objectives are stated in terms of two principal tasks. Under Task 1, robust low-order controllers, improved structural modeling methods for control applications, and identification methods for structural dynamics are being developed. Under Task 2, a lab test experiment for verification of control laws and system identification algorithms is being developed. For Task 1, work has focused on robust low-order controller design and some initial considerations for structural modeling in control applications. For Task 2, work has focused on experiment design and fabrication, along with sensor selection and initial digital controller implementation. Conclusions are given.

  19. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  20. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology, whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  1. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in production die engineering. In order to successfully compensate a tool, accurate springback results are needed, as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is the full process simulation, meaning that not only the drawing operation is simulated but also all secondary operations such as trimming and flanging. The second is verification that the process is robust, meaning that it obtains repeatable results. For effective compensation, a minimum clamping concept is then defined. Once these preconditions are fulfilled, the tools can be compensated effectively.

  2. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  3. Multi-oriented windowed harmonic phase reconstruction for robust cardiac strain imaging.

    PubMed

    Cordero-Grande, Lucilio; Royuela-del-Val, Javier; Sanz-Estébanez, Santiago; Martín-Fernández, Marcos; Alberola-López, Carlos

    2016-04-01

    The purpose of this paper is to develop a method for direct estimation of the cardiac strain tensor by extending the harmonic phase reconstruction on tagged magnetic resonance images to obtain more precise and robust measurements. The extension relies on the reconstruction of the local phase of the image by means of the windowed Fourier transform and the acquisition of an overdetermined set of stripe orientations in order to avoid the phase interferences from structures outside the myocardium and the instabilities arising from the application of a gradient operator. Results have shown that increasing the number of acquired orientations provides a significant improvement in the reproducibility of the strain measurements and that the acquisition of an extended set of orientations also improves the reproducibility when compared with acquiring repeated samples from a smaller set of orientations. Additionally, biases in local phase estimation when using the original harmonic phase formulation are greatly diminished by the one here proposed. The ideas here presented allow the design of new methods for motion sensitive magnetic resonance imaging, which could simultaneously improve the resolution, robustness and accuracy of motion estimates. PMID:26745763
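
    In harmonic-phase-style reconstruction, each tag orientation contributes one local phase map, and an overdetermined set of orientations lets the deformation be solved in a least-squares sense. Schematically, using a standard HARP-type relation (notation assumed here, not taken from the paper):

        \[ \nabla \phi_k(\mathbf{x}) \;=\; \mathbf{F}^{-T}(\mathbf{x})\, \mathbf{k}, \qquad k = 1, \dots, K, \]

    where $\phi_k$ is the local phase for tag wave vector $\mathbf{k}$ and $\mathbf{F}$ is the deformation gradient. With $K$ greater than the spatial dimension, $\mathbf{F}$, and hence the strain tensor $\mathbf{E} = \tfrac{1}{2}(\mathbf{F}^T \mathbf{F} - \mathbf{I})$, follows from a least-squares fit over all orientations, which averages out phase interference that would corrupt any single orientation.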

  4. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    Prototype precision bolometer calibration bridge is manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  5. Precision metal molding

    NASA Technical Reports Server (NTRS)

    Townhill, A.

    1967-01-01

    Method provides precise alignment for metal-forming dies while permitting minimal thermal expansion without die warpage or cavity space restriction. The interfacing dowel bars and die side facings are arranged so that the dies are restrained along one orthogonal direction and permitted to expand thermally along the other.

  6. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  7. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  8. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has been given multiple definitions, often resulting in students learning principles that are actually false, is presented, using the term stereogenic atom introduced by Mislow and Siegel as an example. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  9. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  10. Robust stability of second-order systems

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1995-01-01

    It has been shown recently how virtual passive controllers can be designed for second-order dynamic systems to achieve robust stability. The virtual controllers were visualized as systems made up of spring, mass and damping elements. In this paper, a new approach to the same second-order dynamic systems, emphasizing the notion of positive realness, is used. Necessary and sufficient conditions for positive realness are presented for scalar spring-mass-dashpot systems. For multi-input multi-output systems, we show how a mass-spring-dashpot system can be made positive real by properly choosing its output variables. In particular, sufficient conditions are shown for systems without velocity output. Furthermore, if velocity cannot be measured, then the system parameters must be precise to keep the system positive real. In practice, system parameters are not always constant and cannot be measured precisely. Therefore, in order to be useful, positive real systems must be robust to some degree. This can be achieved with the design presented in this paper.
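
    For reference, the standard positive-realness conditions invoked here (textbook material, not a result specific to this paper): a square transfer matrix $G(s)$ is positive real if it is analytic in $\operatorname{Re}(s) > 0$ and

        \[ G(j\omega) + G^{*}(j\omega) \succeq 0 \quad \text{for all } \omega \text{ at which } j\omega \text{ is not a pole.} \]

    For a scalar spring-mass-dashpot plant with force input and velocity output, $G(s) = s/(m s^2 + c s + k)$ and

        \[ \operatorname{Re}\, G(j\omega) \;=\; \frac{c\,\omega^2}{(k - m\omega^2)^2 + c^2 \omega^2} \;\ge\; 0, \]

    which illustrates why collocated velocity feedback yields a positive real system, and why losing the velocity measurement makes positive realness hinge on precise knowledge of $m$, $c$ and $k$.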

  11. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem and had performance that exceeded the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of nonlinearity is less than 1000 times that of the nominal linear response, applying successive linearization will result in errors in the response of less than 10% compared to the full nonlinear response.

  12. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is on the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in the azimuth and range dimensions and elongated in the cross-range dimension with a precision on the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and to agree with the ground truth position at a 99% confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers with real-world objects.
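
    The error ellipsoid follows from an eigendecomposition of the 3D variance-covariance matrix. A minimal sketch of how its semi-axes at a given confidence level might be computed (illustrative, not the authors' code):

        import numpy as np
        from scipy.stats import chi2

        def error_ellipsoid(covariance, confidence=0.99):
            # `covariance` is the 3x3 variance-covariance matrix of the
            # estimated position. Axis directions are the eigenvectors;
            # semi-axis lengths scale the square-root eigenvalues by the
            # chi-square quantile for 3 degrees of freedom.
            eigvals, eigvecs = np.linalg.eigh(covariance)
            scale = np.sqrt(chi2.ppf(confidence, df=3))
            return scale * np.sqrt(eigvals), eigvecs

    A cigar-shaped ellipsoid such as the one described (centimetres in azimuth and range, metres in cross-range) shows up as one dominant semi-axis.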

  13. Making Activity Recognition Robust against Deceptive Behavior

    PubMed Central

    Saeb, Sohrab; Körding, Konrad; Mohr, David C.

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually perform. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118
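
    The retraining loop described can be sketched as follows, assuming a generic scikit-learn-style classifier; the data-collection callback and all names are illustrative, and the stopping rule mirrors the "subjects can no longer cheat" condition.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def harden_against_deception(X, y, collect_deceptive_round, max_rounds=10):
            # `collect_deceptive_round(clf)` is a placeholder returning motion
            # features and true labels from attempts to trick the current
            # classifier, or empty arrays once no one manages to cheat.
            clf = RandomForestClassifier().fit(X, y)
            for _ in range(max_rounds):
                X_new, y_new = collect_deceptive_round(clf)
                if len(X_new) == 0:
                    break  # nobody fooled the classifier this round
                X = np.vstack([X, X_new])         # add deceptive samples
                y = np.concatenate([y, y_new])    # with true activity labels
                clf = RandomForestClassifier().fit(X, y)  # retrain
            return clf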

  14. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images, over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining the accuracy of face recognition.
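
    The abstract does not fully specify the witness rule. Under one plausible reading, the LSB plane is left untouched and the second LSB simply records where the cover's LSB already equals the message bit; a sketch of that reading (names and details are assumptions):

        import numpy as np

        def embed_lsb_witness(cover, message_bits):
            # `cover` is a flat uint8 array at least as long as the 0/1
            # array `message_bits`. LSBs stay untouched; the 2nd LSB marks
            # where the cover LSB already equals the message bit.
            stego = cover.copy()
            n = len(message_bits)
            lsb = stego[:n] & 1
            witness = (lsb == message_bits).astype(np.uint8)
            stego[:n] = (stego[:n] & np.uint8(0xFD)) | (witness << 1)
            return stego

        def extract_lsb_witness(stego, n):
            # Where the witness bit is set, the LSB is the message bit;
            # otherwise it is the complement.
            lsb = stego[:n] & 1
            witness = (stego[:n] >> 1) & 1
            return np.where(witness == 1, lsb, 1 - lsb)

    Because the LSB plane itself is statistically unchanged, pairs-of-values and RS statistics computed on it see a clean image, which is consistent with the reported robustness against PoV and RS attacks.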

  15. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually perform. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  16. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  17. Atomically Precise Surface Engineering for Producing Imagers

    NASA Technical Reports Server (NTRS)

    Greer, Frank (Inventor); Jones, Todd J. (Inventor); Nikzad, Shouleh (Inventor); Hoenk, Michael E. (Inventor)

    2015-01-01

    High-quality surface coatings, and techniques combining the atomic precision of molecular beam epitaxy and atomic layer deposition, to fabricate such high-quality surface coatings are provided. The coatings made in accordance with the techniques set forth by the invention are shown to be capable of forming silicon CCD detectors that demonstrate world record detector quantum efficiency (>50%) in the near and far ultraviolet (155 nm-300 nm). The surface engineering approaches used demonstrate the robustness of detector performance that is obtained by achieving atomic level precision at all steps in the coating fabrication process. As proof of concept, the characterization, materials, and exemplary devices produced are presented along with a comparison to other approaches.

  18. Improving the accuracy of phase-shifting techniques

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Redondo-Galvan, Arturo

    2015-05-01

    The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and computation of the absolute phase map. Recently, a method was proposed that used phase interpolation for corner detection at subpixel accuracy in the projector image, improving the camera-projector calibration. We propose a general strategy to improve the accuracy of the correspondence search that can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can outperform the precision of the phase-shifting method.
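
    For reference, in N-step phase-shifting the wrapped phase at each pixel is recovered from the N shifted intensity patterns by the standard relation (up to sign convention; not specific to this paper):

        \[ \phi(x,y) \;=\; \arctan\!\left( \frac{\sum_{n=0}^{N-1} I_n(x,y)\, \sin(2\pi n/N)}{\sum_{n=0}^{N-1} I_n(x,y)\, \cos(2\pi n/N)} \right), \]

    where $I_n$ is the intensity under the $n$-th pattern; unwrapping then yields the absolute phase map on which the camera-projector correspondence, and hence the 3D reconstruction, rests.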

  19. Robust Automatic Breast Cancer Staging Using A Combination of Functional Genomics and Image-Omics

    PubMed Central

    Su, Hai; Shen, Yong; Xing, Fuyong; Qi, Xin; Hirshfield, Kim M.; Yang, Lin; Foran, David J.

    2016-01-01

    Breast cancer is one of the leading cancers worldwide. Precision medicine is a new trend that systematically examines molecular and functional genomic information within each patient's cancer to identify the patterns that may affect treatment decisions and potential outcomes. As a part of precision medicine, computer-aided diagnosis enables joint analysis of functional genomic information and image features from pathological images. In this paper we propose an integrated framework for breast cancer staging using image-omics and functional genomic information. The entire biomedical imaging informatics framework consists of image-omics extraction, feature combination, and classification. First, a robust automatic nuclei detection and segmentation method is presented to identify tumor regions, delineate nuclei boundaries and calculate a set of image-based morphological features; next, the low-dimensional image-omics representation is obtained through principal component analysis and is concatenated with the functional genomic features identified by a linear model. A support vector machine for differentiating stage I breast cancer from other stages is learned. We experimentally demonstrate that compared with a single type of representation (image-omics), the combination of image-omics and functional genomic features can improve the classification accuracy by 3%. PMID:26737959
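
    The classification stage described (PCA-reduced image-omics concatenated with genomic features, fed to an SVM) might look like the following scikit-learn sketch; feature extraction is out of scope here, and the arrays and names are placeholders.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def stage_classifier(image_omics, genomic, labels, n_components=20):
            # Reduce the morphological (image-omics) features, concatenate
            # with functional genomic features, and train a binary SVM
            # separating stage I from other stages.
            pca = PCA(n_components=n_components)
            img_low = pca.fit_transform(StandardScaler().fit_transform(image_omics))
            X = np.hstack([img_low, genomic])
            clf = SVC(kernel="rbf").fit(X, labels)
            return clf, pca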

  20. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
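
    In info-gap terms, the robustness of a model $q$ is the largest uncertainty horizon at which its prediction error still meets the requirement (standard formulation, with notation assumed here):

        \[ \hat{\alpha}(q, r_c) \;=\; \max\Big\{ \alpha \ge 0 \;:\; \max_{u \in \mathcal{U}(\alpha, \tilde{u})} R(q, u) \le r_c \Big\}, \]

    where $\mathcal{U}(\alpha, \tilde{u})$ is a nested family of uncertainty sets around the nominal $\tilde{u}$, $R$ the prediction-error measure, and $r_c$ the acceptable error. Plotting $\hat{\alpha}$ against $r_c$ makes the accuracy-robustness trade-off explicit; crossing curves are exactly the situation in which the nominally most accurate model is not the most robust.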

  1. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  2. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial
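
    The multi-start optimization strategy can be sketched with the pycma library as a stand-in (the paper's implementation is GPU-parallelized and uses its own normalized gradient information metric; the cost callback and search ranges below are illustrative):

        import numpy as np
        import cma  # pycma: pip install cma

        def multistart_register(ngi_cost, n_starts=10, sigma0=2.0, rng=None):
            # `ngi_cost(pose)` is a placeholder: render a DRR at the 6-DOF
            # `pose` (rotations + translations) and return the negated
            # normalized gradient information similarity (lower is better).
            rng = rng or np.random.default_rng()
            best_x, best_f = None, np.inf
            for _ in range(n_starts):
                x0 = rng.uniform(-10.0, 10.0, size=6)  # randomized restart
                res = cma.fmin(ngi_cost, x0, sigma0)
                if res[1] < best_f:  # keep the best local optimum found
                    best_x, best_f = res[0], res[1]
            return best_x, best_f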

  3. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Stayman, J Webster; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2016-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with `success' defined as PDE <5 mm) using 1,718,664 ± 96,582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the

  4. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  5. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  6. Precision Polarization of Neutrons

    NASA Astrophysics Data System (ADS)

    Martin, Elise; Barron-Palos, Libertad; Couture, Aaron; Crawford, Christopher; Chupp, Tim; Danagoulian, Areg; Estes, Mary; Hona, Binita; Jones, Gordon; Klein, Andi; Penttila, Seppo; Sharma, Monisha; Wilburn, Scott

    2009-05-01

    Determining polarization of a cold neutron beam to high precision is required for the next generation neutron decay correlation experiments at the SNS, such as the proposed abBA and PANDA experiments. Precision polarimetry measurements were conducted at Los Alamos National Laboratory with the goal of determining the beam polarization to the level of 10^-3 or better. The cold neutrons from FP12 were polarized using optically polarized ^3He gas as a spin filter, which has a highly spin-dependent absorption cross section. A second ^3He spin filter was used to analyze the neutron polarization after passing through a resonant RF spin rotator. A discussion of the experiment and results will be given.

  7. A passion for precision

    ScienceCinema

    None

    2011-10-06

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  8. Robust, accurate and fast automatic segmentation of the spinal cord.

    PubMed

    De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien

    2014-09-01

    Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automatizing this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field-of-view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: firstly, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Secondly, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Thirdly, a refinement process and a global deformation are applied on the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and on multi-echo T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord. PMID:24780696
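
    The Dice coefficients quoted compare the automatic and manual masks in the usual way:

        \[ D(A, B) \;=\; \frac{2\,|A \cap B|}{|A| + |B|}, \]

    so $D = 0.9$ means the overlap region is 90% of the mean mask size; values of 0.86-0.9 across T1-, T2- and T2*-weighted sequences indicate close to voxel-level agreement with the manual segmentation.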

  9. Precision disablement aiming system

    DOEpatents

    Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott

    2016-02-16

    A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.

  10. Precise linear sun sensor

    NASA Technical Reports Server (NTRS)

    Johnston, D. D.

    1972-01-01

    An evaluation of the precise linear sun sensor relating to future mission applications was performed. The test procedures, data, and results of the dual-axis, solid-state system are included. Brief descriptions of the sensing head and of the system's operational characteristics are presented. A unique feature of the system is that multiple sensor heads with various fields of view may be used with the same electronics.

  11. Precision laser aiming system

    SciTech Connect

    Ahrens, Brandon R.; Todd, Steven N.

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  12. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  13. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  14. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images, in copyright transactions. Many digital watermarking algorithms exist; however, existing algorithms are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks and signal processing operations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so this scheme is secure and robust.
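
    The abstract does not spell out the embedding rule, but the core idea, using a chaotic sequence as a secret key to pick which DCT coefficients carry watermark bits, can be sketched roughly as follows. The logistic-map parameters, the mid-frequency band, and the quantization step are illustrative assumptions, not the authors' algorithm.

```python
# Hypothetical sketch of chaos-keyed watermarking in the DCT domain.
import numpy as np
from scipy.fft import dctn, idctn

def logistic_positions(key, n_bits, shape):
    """Iterate a logistic map seeded by `key` to pick secret DCT positions."""
    lo, hi = 10, min(shape) // 4          # mid-frequency band (illustrative)
    x, taken, pos = key, set(), []
    while len(pos) < n_bits:
        x = 3.99 * x * (1.0 - x)          # chaotic iteration
        i = lo + int(x * (hi - lo))
        x = 3.99 * x * (1.0 - x)
        j = lo + int(x * (hi - lo))
        if (i, j) not in taken:
            taken.add((i, j))
            pos.append((i, j))
    return pos

def embed(img, bits, key, step=24.0):
    """Force the parity of quantized, chaos-selected coefficients to the bits."""
    C = dctn(img.astype(float), norm='ortho')
    for (i, j), b in zip(logistic_positions(key, len(bits), img.shape), bits):
        q = int(np.round(C[i, j] / step))
        if q % 2 != b:
            q += 1
        C[i, j] = q * step
    return idctn(C, norm='ortho')

def extract(img, n_bits, key, step=24.0):
    """Recover bits blindly as the parity of the same secret coefficients."""
    C = dctn(img.astype(float), norm='ortho')
    return [int(np.round(C[i, j] / step)) % 2
            for (i, j) in logistic_positions(key, n_bits, img.shape)]
```

    Because the positions are regenerated from the key alone, extraction needs neither the original image nor a position map, matching the one-to-one extraction property claimed in the abstract.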

  15. A robust, inexpensive wavelength meter using a commercial color sensor

    NASA Astrophysics Data System (ADS)

    Jones, Tyler; Otterstrom, Nils; Jackson, Jarom; Archibald, James; Durfee, Dallin

    2015-05-01

    Commercial color sensor chips are used in a variety of consumer electronics. Many are built to specifications far above those needed for their typical uses, some having temperature coefficients of only a few parts per million and using precision 16-bit analog-to-digital converters. Using such a device, we were able to measure the wavelength of a laser with a precision of 0.01 nm and a calibration drift of similar magnitude over several days. Factors that influence the precision and accuracy, such as etalon effects in the sensor, temperature dependence, intensity variations, and timing, will be discussed. Funding by Brigham Young University and the National Science Foundation.
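
    As an illustration of the measurement principle (not the authors' calibration procedure): over a narrow band, the ratio of two color channels varies monotonically with wavelength, so a calibration table can be inverted by interpolation. The calibration numbers below are invented.

```python
# Illustrative ratiometric wavelength readout from a two-channel color sensor.
import numpy as np

# Hypothetical calibration data: known laser wavelengths (nm) and the
# measured red/green channel ratio at each one.
cal_wavelength = np.array([630.0, 635.0, 640.0, 645.0, 650.0])
cal_ratio      = np.array([1.10,  1.24,  1.39,  1.55,  1.72])

def wavelength_from_counts(red_counts, green_counts):
    """Invert the monotonic ratio-vs-wavelength curve by interpolation."""
    r = red_counts / green_counts      # overall intensity cancels in the ratio
    return np.interp(r, cal_ratio, cal_wavelength)

print(wavelength_from_counts(5200, 3900))   # ratio 1.33 -> roughly 638 nm
```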

  16. Precise Point Positioning in the Airborne Mode

    NASA Astrophysics Data System (ADS)

    El-Mowafy, Ahmed

    2011-01-01

    The Global Positioning System (GPS) is widely used for positioning in the airborne mode, such as in navigation as a supplementary system and for geo-referencing of cameras in mapping and surveillance by aircraft and Unmanned Aerial Vehicles (UAVs). The Precise Point Positioning (PPP) approach is an attractive positioning approach based on processing un-differenced observations from a single GPS receiver. It employs precise satellite orbits and satellite clock corrections. These data can be obtained via the internet from several sources, e.g. the International GNSS Service (IGS). The data can also be broadcast from satellites, such as via the LEX signal of the new Japanese satellite system QZSS. PPP can achieve positioning precision and accuracy at the sub-decimetre level. In this paper, the functional and stochastic mathematical modelling used in PPP is discussed. Results of applying the PPP method in an airborne test using a small fixed-wing aircraft are presented. To evaluate the performance of the PPP approach, a reference trajectory was established by differential positioning of the same GPS observations with data from a ground reference station. The coordinate results from the two approaches, PPP and differential positioning, were compared and statistically evaluated. For the test at hand, positioning accuracy at the cm-to-decimetre level was achieved for latitude and longitude coordinates, and about double that value for height estimation.
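
    For reference, the PPP functional model is typically built on the ionosphere-free combination of dual-frequency code and phase observations; the textbook form is shown below, where rho is the geometric range, dt_r and dt^s the receiver and satellite clock errors, T the tropospheric delay, and N_IF the carrier-phase ambiguity. The paper's exact parameterization may differ in detail.

```latex
% Standard ionosphere-free PPP observation equations (textbook form).
\begin{align}
P_{IF} &= \frac{f_1^2 P_1 - f_2^2 P_2}{f_1^2 - f_2^2}
        = \rho + c\,(dt_r - dt^s) + T + \varepsilon_P \\
\Phi_{IF} &= \frac{f_1^2 \Phi_1 - f_2^2 \Phi_2}{f_1^2 - f_2^2}
        = \rho + c\,(dt_r - dt^s) + T + \lambda_{IF} N_{IF} + \varepsilon_\Phi
\end{align}
```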

  17. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
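
    The tolerance of endpoint singularities and infinite derivatives described here is characteristic of tanh-sinh (double-exponential) quadrature. A minimal single-CPU illustration of the same idea, using the mpmath library as a stand-in for the authors' parallel implementation:

```python
# High-precision tanh-sinh quadrature of an integrand with a singularity
# at x = 0; the exact value of the integral over [0, 1] is -4.
from mpmath import mp, quad, log, sqrt

mp.dps = 100   # work with 100 significant digits

val = quad(lambda x: log(x) / sqrt(x), [0, 1], method='tanh-sinh')

print(val)       # -4.000000...
print(val + 4)   # residual on the order of 1e-100
```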

  18. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability of implant restorations. The aim of this study is to compare, by three-dimensional analysis, the accuracy of gypsum models acquired from conventional implant impressions to digitally milled models created by direct digitalization. Thirty gypsum models and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported into an inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm and 10 specified contact locations of interest were measured as mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, and horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model, with mean volumetric deviations indicating accuracy and standard deviations indicating precision. Milled models from digital impressions had accuracy comparable to gypsum models from conventional impressions. However, differences in fossae and in vertical displacement of the implant position between the gypsum and digitally milled models, compared to the reference model, were statistically significant (p<0.001 and p=0.020, respectively). PMID:24720423

  19. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
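
    The regression refinement described is presumably of the familiar two-phase sampling form below, where x is the photointerpreted error rate (phase I, 160 plots) and y the ground-checked error rate (phase II, 80 plots); the exact estimator used by the authors is not given in the abstract.

```latex
% Generic regression estimator for two-phase (double) sampling.
\hat{\bar{Y}}_{\mathrm{reg}} = \bar{y}_{2} + b\,(\bar{x}_{1} - \bar{x}_{2}),
\qquad
b = \frac{\sum_{i \in \mathrm{II}} (x_i - \bar{x}_2)(y_i - \bar{y}_2)}
         {\sum_{i \in \mathrm{II}} (x_i - \bar{x}_2)^2}
```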

  20. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693

  1. Robust image segmentation using local robust statistics and correntropy-based K-means clustering

    NASA Astrophysics Data System (ADS)

    Huang, Chencheng; Zeng, Li

    2015-03-01

    Segmenting real-world images with intensity inhomogeneity, such as magnetic resonance (MR) and computed tomography (CT) images, is an important task. In practice, such images are often corrupted by noise, which makes them difficult to segment with traditional level-set-based segmentation models. In this paper, we propose a robust level set image segmentation model combining local and global fitting energies to segment noisy images. In the proposed model, the local fitting energy is based on the local robust statistics (LRS) of the input image, which can efficiently reduce the effects of noise, and the global fitting energy utilizes the correntropy-based K-means (CK) method, which adaptively emphasizes samples that are close to their corresponding cluster centers. By integrating the advantages of global information and local robust statistics, the proposed model can efficiently segment images with intensity inhomogeneity and noise. A level set regularization term is used to avoid re-initialization procedures during curve evolution, and a Gaussian filter is utilized to keep the level set smooth during evolution. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. Experimental results on synthetic and real images show the advantages of our model in terms of accuracy and robustness to noise.
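
    The correntropy-based K-means ingredient can be sketched in isolation: each sample is weighted by a Gaussian kernel of its distance to its cluster center, so noisy outliers barely move the centers. A minimal sketch on 1-D intensities, with illustrative kernel width and cluster count (the full model couples this with the level-set evolution, which is omitted here):

```python
# Correntropy-weighted K-means on scalar intensities (illustrative only).
import numpy as np

def ck_means(x, k=2, sigma=0.1, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    c = rng.choice(x, size=k, replace=False)        # initial centers
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        d2 = (x[:, None] - c[None, :]) ** 2         # squared distances
        labels = d2.argmin(axis=1)
        for j in range(k):
            xj = x[labels == j]
            if xj.size == 0:
                continue
            # Correntropy weight: samples near the center count more, so
            # outliers pull the center far less than in ordinary K-means.
            w = np.exp(-(xj - c[j]) ** 2 / (2 * sigma ** 2))
            c[j] = (w * xj).sum() / w.sum()
    return c, labels

rng = np.random.default_rng(1)
intens = np.concatenate([rng.normal(0.3, 0.05, 500),
                         rng.normal(0.7, 0.05, 500)])
centers, lab = ck_means(intens)
print(np.sort(centers))   # approximately [0.3, 0.7]
```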

  2. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    PubMed

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG renders an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing. PMID:26529374

  3. High-precision displacement measurement method for three-degrees-of-freedom compliant mechanisms based on computer micro-vision.

    PubMed

    Wu, Heng; Zhang, Xianmin; Gan, Jinqiang; Li, Hai; He, Zhenya

    2016-04-01

    A practical method for the high-precision displacement measurement of three-degrees-of-freedom compliant mechanisms based on computer micro-vision is proposed. The method consists of two steps. In the first step, candidate pixels are selected using a ring projection transform matching approach. In the second step, the exact location is determined by an improved pseudo-Zernike moment method. The setup of the micro-vision system is also introduced. A series of simulations shows that the proposed algorithm achieves very high precision and robustness in the presence of image translations and rotations. Finally, a micro-vision system and a laser interferometer measurement (LIM) system are built to validate and compare the actual performance of the proposed method. The experimental results demonstrate that the proposed approach obtains high accuracy and shows higher operability and stability than the LIM system. Moreover, the measuring accuracy can reach the pixel level. PMID:27139661

  4. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  5. Precision Pointing Control System (PPCS) star tracker test

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Tests performed on the TRW precision star tracker are described. The unit tested was a two-axis gimballed star tracker designed to provide star LOS data to an accuracy of 1 to 2 sec. The tracker features a unique bearing system and utilizes thermal and mechanical symmetry techniques to achieve high precision which can be demonstrated in a one g environment. The test program included a laboratory evaluation of tracker functional operation, sensitivity, repeatability, and thermal stability.

  6. Precise image-guided irradiation of small animals: a flexible non-profit platform.

    PubMed

    Tillner, Falk; Thute, Prasad; Löck, Steffen; Dietrich, Antje; Fursov, Andriy; Haase, Robert; Lukas, Mathias; Rimarzig, Bernd; Sobiella, Manfred; Krause, Mechthild; Baumann, Michael; Bütof, Rebecca; Enghardt, Wolfgang

    2016-04-21

    Preclinical in vivo studies using small animals are essential to develop new therapeutic options in radiation oncology. Of particular interest are orthotopic tumour models, which better reflect the clinical situation in terms of growth patterns and microenvironmental parameters of the tumour as well as the interplay of tumours with the surrounding normal tissues. Such orthotopic models increase the technical demands and the complexity of preclinical studies as local irradiation with therapeutically relevant doses requires image-guided target localisation and accurate beam application. Moreover, advanced imaging techniques are needed for monitoring treatment outcome. We present a novel small animal image-guided radiation therapy (SAIGRT) system, which allows for precise and accurate, conformal irradiation and x-ray imaging of small animals. High accuracy is achieved by its robust construction, the precise movement of its components and a fast high-resolution flat-panel detector. Field forming and x-ray imaging is accomplished close to the animal resulting in a small penumbra and a high image quality. Feasibility for irradiating orthotopic models has been proven using lung tumour and glioblastoma models in mice. The SAIGRT system provides a flexible, non-profit academic research platform which can be adapted to specific experimental needs and therefore enables systematic preclinical trials in multicentre research networks. PMID:27008208

  7. Precise image-guided irradiation of small animals: a flexible non-profit platform

    NASA Astrophysics Data System (ADS)

    Tillner, Falk; Thute, Prasad; Löck, Steffen; Dietrich, Antje; Fursov, Andriy; Haase, Robert; Lukas, Mathias; Rimarzig, Bernd; Sobiella, Manfred; Krause, Mechthild; Baumann, Michael; Bütof, Rebecca; Enghardt, Wolfgang

    2016-04-01

    Preclinical in vivo studies using small animals are essential to develop new therapeutic options in radiation oncology. Of particular interest are orthotopic tumour models, which better reflect the clinical situation in terms of growth patterns and microenvironmental parameters of the tumour as well as the interplay of tumours with the surrounding normal tissues. Such orthotopic models increase the technical demands and the complexity of preclinical studies as local irradiation with therapeutically relevant doses requires image-guided target localisation and accurate beam application. Moreover, advanced imaging techniques are needed for monitoring treatment outcome. We present a novel small animal image-guided radiation therapy (SAIGRT) system, which allows for precise and accurate, conformal irradiation and x-ray imaging of small animals. High accuracy is achieved by its robust construction, the precise movement of its components and a fast high-resolution flat-panel detector. Field forming and x-ray imaging is accomplished close to the animal resulting in a small penumbra and a high image quality. Feasibility for irradiating orthotopic models has been proven using lung tumour and glioblastoma models in mice. The SAIGRT system provides a flexible, non-profit academic research platform which can be adapted to specific experimental needs and therefore enables systematic preclinical trials in multicentre research networks.

  8. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  9. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2010-09-01

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  10. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  11. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  12. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e- and p-pbar colliders are presented. Some emphasis is placed on recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  13. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  14. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  15. Density Variations Observable by Precision Satellite Orbits

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T.; Hiatt, A.

    2008-12-01

    This research uses precision satellite orbits from the Challenging Minisatellite Payload (CHAMP) satellite to produce a new data source for studying density changes that occur on time scales of less than a day. Precision orbit derived density is compared to accelerometer derived density. In addition, the precision orbit derived densities are used to examine density variations that have been observed with accelerometer data to see if they are observable. In particular, the research will examine the observability of geomagnetic storm time changes and polar cusp features that have been observed in accelerometer data. Currently, highly accurate density data are available from three satellites with accelerometers, and much lower accuracy data are available from hundreds of satellites for which two-line element sets are available from the Air Force. This paper explores a new data source that is more accurate and has better temporal resolution than the two-line element sets, and provides better spatial coverage than satellites with accelerometers. This data source will be valuable for studying atmospheric phenomena over short periods, for long term studies of the atmosphere, and for validating and improving complex coupled models that include neutral density. The precision orbit derived densities are very similar to the accelerometer derived densities, but the accelerometer can observe features with shorter temporal variations. This research will quantify the time scales observable by precision orbit derived density. The technique for estimating density is optimal orbit determination. The estimates are optimal in the least squares or minimum variance sense. Precision orbit data from CHAMP is used as measurements in a sequential measurement processing and filtering scheme. The atmospheric density is estimated as a correction to an atmospheric model.

  16. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS, has previously been used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ with sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With this extended coverage, we have improved our previous linelist by measuring additional transitions. References: O. L. Polyansky, et al., Phil. Trans. R. Soc. A (2012), 370, 5014-5027; J. Komasa, et al., J. Chem. Theor. Comp. (2011), 7, 3105-3115; C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66-83; J. N. Hodges, et al., J. Chem. Phys. (2013), 139, 164201.

  17. High precision kinematic surveying with laser scanners

    NASA Astrophysics Data System (ADS)

    Gräfe, Gunnar

    2007-12-01

    The kinematic survey of roads and railways is becoming a much more common data acquisition method. The development of the Mobile Road Mapping System (MoSES) has reached a level that allows the use of kinematic survey technology for high precision applications. The system is equipped with cameras and laser scanners. For high accuracy requirements, the scanners become the main sensor group because of their geometric precision and reliability. To guarantee reliable survey results, specific calibration procedures have to be applied, which can be divided into scanner sensor calibration as step 1 and estimation of the geometric transformation parameters with respect to the vehicle coordinate system as step 2. Both calibration steps include new methods for sensor behavior modeling and multisensor system integration. To verify the laser scanner quality of the MoSES system, the results are regularly checked along different test routes. It can be shown that a standard deviation of 0.004 m in height is obtained for the scanner points if the specific calibrations and data processing methods are applied. This level of accuracy opens new possibilities for serving engineering survey applications with kinematic measurement techniques. The key feature of scanner technology is the full digital coverage of the road area. Three application examples illustrate the capabilities. Digital road surface models generated from MoSES data are used especially for road surface reconstruction tasks along highways. Compared to static surveys, the method offers comparable accuracy at higher speed, lower costs, much higher grid resolution and with greater safety. The system's capability of gaining 360° profiles leads to other complex applications such as kinematic tunnel surveys or the precise analysis of bridge clearances.

  18. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    containing fossil biofilm, including the 3.5-b.y.-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymers and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.

  19. Precision estimates for tomographic nondestructive assay

    SciTech Connect

    Prettyman, T.H.

    1995-12-31

    One technique being applied to improve the accuracy of assays of waste in large containers is computerized tomography (CT). Research on the application of CT to improve both neutron and gamma-ray assays of waste is being carried out at LANL. For example, tomographic gamma scanning (TGS) is a single-photon emission CT technique that corrects for the attenuation of gamma rays emitted from the sample using attenuation images from transmission CT. By accounting for the distribution of emitting material and correcting for the attenuation of the emitted gamma rays, TGS is able to achieve highly accurate assays of radionuclides in medium-density wastes. It is important to develop methods to estimate the precision of such assays, and this paper explores this problem by examining precision estimators for TGS.

  20. Robust adaptive dynamic programming with an application to power systems.

    PubMed

    Jiang, Yu; Jiang, Zhong-Ping

    2013-07-01

    This brief presents a novel framework of robust adaptive dynamic programming (robust-ADP) aimed at computing globally stabilizing and suboptimal control policies in the presence of dynamic uncertainties. A key strategy is to integrate ADP theory with techniques from modern nonlinear control, with the objective of filling a gap in the earlier ADP literature, which did not take dynamic uncertainties into account. Neither the system dynamics nor the system order is required to be precisely known. As an illustrative example, the computational algorithm is applied to the controller design of a two-machine power system. PMID:24808528

  1. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e+e- linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  2. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  3. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μ_μ/μ_p, lepton mass ratio m_μ/m_e, and proton charge radius r_p. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  4. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  5. Progressive Precision Surface Design

    SciTech Connect

    Duchaineau, M; Joy, KJ

    2002-01-11

    We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.

  6. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  7. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, the present study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes using the centroid method, the proposed microscope incorporates a high-speed rotating optical diffuser, by which the variation of the image centroid position is reduced and consequently the focusing response is improved. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents another solution for both existing and emerging industrial applications of automated vision inspection.

  8. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  9. Visual inspection reliability for precision manufactured parts

    DOE PAGESBeta

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  10. Digital image centering. I. [for precision astrometry

    NASA Technical Reports Server (NTRS)

    Van Altena, W. F.; Auer, L. H.

    1975-01-01

    A series of parallax plates have been measured on a PDS microdensitometer to assess the possibility of using the PDS for precision relative astrometry and to investigate centering algorithms that might be used to analyze digital images obtained with the Large Space Telescope. The basic repeatability of the PDS is found to be plus or minus 0.6 micron, with the potential for reaching plus or minus 0.2 micron. A very efficient centering algorithm has been developed which fits the marginal density distributions of the image with a Gaussian profile and a sloping background. The accuracy is comparable with the best results obtained with a photoelectric image bisector.
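
    A minimal sketch of the centering idea, fitting a Gaussian plus a sloping background to a 1-D marginal profile with scipy; the profile, noise level, and starting values are invented for illustration.

```python
# Fit a Gaussian on a sloping background to a synthetic marginal profile.
import numpy as np
from scipy.optimize import curve_fit

def model(x, amp, x0, sigma, slope, offset):
    """Gaussian image profile on top of a linear (sloping) background."""
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + slope * x + offset

x = np.arange(64, dtype=float)
true = model(x, 120.0, 31.7, 2.4, 0.05, 10.0)
data = true + np.random.default_rng(0).normal(0.0, 1.0, x.size)

# For a 2-D image the marginals would be image.sum(axis=0) and
# image.sum(axis=1); here we fit a 1-D profile directly.
p0 = [data.max() - data.min(), x[data.argmax()], 3.0, 0.0, data.min()]
popt, _ = curve_fit(model, x, data, p0=p0)
print(f"estimated centre: {popt[1]:.3f} px (true 31.700)")
```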

  11. Precise and automated microfluidic sample preparation.

    SciTech Connect

    Crocker, Robert W.; Patel, Kamlesh D.; Mosier, Bruce P.; Harnett, Cindy K.

    2004-07-01

    Autonomous bio-chemical agent detectors require sample preparation involving multiplex fluid control. We have developed a portable microfluidic pump array for metering sub-microliter volumes at flow rates of 1-100 µL/min. Each pump is composed of an electrokinetic (EK) pump and high-voltage power supply with 15-Hz feedback from flow sensors. The combination of high pump fluid impedance and active control results in precise fluid metering with nanoliter accuracy. Automated sample preparation will be demonstrated by labeling proteins with fluorescamine and subsequent injection to a capillary gel electrophoresis (CGE) chip.
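
    As a rough illustration of the feedback scheme described (the gains, plant response, and units below are invented; the abstract does not detail the controller), a discrete PI loop running at the quoted 15 Hz might look like:

```python
# Toy discrete PI control of an EK pump's flow at a 15 Hz sample rate.
dt, kp, ki = 1.0 / 15.0, 40.0, 60.0   # sample period and PI gains (invented)
setpoint = 10.0                        # target flow, uL/min
flow, integ = 0.0, 0.0

for step in range(150):                # 10 s of simulated operation
    err = setpoint - flow
    integ += err * dt
    volts = kp * err + ki * integ      # HV supply command (arbitrary units)
    # First-order stand-in for the pump's flow response to voltage:
    flow += dt * (0.05 * volts - 0.5 * flow)

print(f"flow after 10 s: {flow:.2f} uL/min")   # settles near the setpoint
```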

  12. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100 m diameter advanced single-dish radio telescope designed for a wide range of astronomical projects, with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50 GHz. Our ultimate goal is to extend the observing frequency limit up to 115 GHz; this will require a two-dimensional tracking error better than 1.3" and an rms surface accuracy better than 210 μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus-tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus-tracking errors. We are now targeting the effects of wind as the next, currently uncompensated, source of error.
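
    Solving simultaneously for gravitational, thermal and wind terms amounts to a linear least-squares fit of pointing residuals against the corresponding basis functions. A toy sketch with synthetic data and only a gravity term and a thermal term (real pointing models contain many more):

```python
# Joint least-squares fit of gravity and thermal pointing terms.
import numpy as np

rng = np.random.default_rng(3)
el = rng.uniform(10, 85, 200)            # elevation, degrees
dT = rng.normal(0.0, 1.5, 200)           # structural temperature gradient, K

# Synthetic elevation pointing error (arcsec): gravity + thermal + noise.
err = 8.0 * np.cos(np.radians(el)) + 3.2 * dT + rng.normal(0, 0.3, 200)

A = np.column_stack([np.cos(np.radians(el)),   # gravity basis terms
                     np.sin(np.radians(el)),
                     dT])                      # thermal basis term
coef, *_ = np.linalg.lstsq(A, err, rcond=None)
print(coef)   # recovers approximately [8.0, 0.0, 3.2]
```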

  13. The Effect of Strength Training on Fractionalized Accuracy.

    ERIC Educational Resources Information Center

    Gronbech, C. Eric

    The role of the strength factor in the accomplishment of precision tasks was investigated. Forty adult males weight trained to develop physical strength in several muscle groups, particularly in the elbow flexor area. Results indicate a decrease in accuracy concurrent with an increase in muscle strength. This suggests that in order to…

  14. Precise Orbit Determination for Altimeter Satellites

    NASA Astrophysics Data System (ADS)

    Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Lemoine, F. G.; Beckley, B. B.; Wang, Y.; Chinn, D. S.

    2002-05-01

    Orbit error remains a critical component in the error budget for all radar altimeter missions. This paper describes the ongoing work at GSFC to improve orbits for three radar altimeter satellites: TOPEX/POSEIDON (T/P), Jason, and Geosat Follow-On (GFO). T/P has demonstrated that the time variation of ocean topography can be determined with an accuracy of a few centimeters, thanks to the availability of highly accurate orbits (2-3 cm radially) produced at GSFC. Jason, the T/P follow-on, is intended to continue measurement of the ocean surface with the same, if not better, accuracy. Reaching the Jason centimeter orbit accuracy goal would greatly benefit the knowledge of ocean circulation. Several new POD strategies which promise significant improvement to the current T/P orbit are evaluated over one year of data. Also, preliminary but very promising Jason POD results are presented. Orbit improvement for GFO has been dramatic, and has allowed this mission to provide a POSEIDON-class altimeter product. The GFO Precise Orbit Ephemeris (POE) orbits are based on satellite laser ranging (SLR) tracking supplemented with GFO/GFO altimeter crossover data. The accuracy of these orbits was evaluated using several tests, including independent TOPEX/GFO altimeter crossover data. The orbit improvements are shown over the years 2000 and 2001, for which the POEs have been completed.

  15. Robust efficient estimation of heart rate pulse from video

    PubMed Central

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change of blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal for computing the pulse heart rate. Various experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed under normal illumination show the algorithm is comparable with pulse oximeter devices both in accuracy and sensitivity. PMID:24761294
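
    A minimal sketch of the log-space pixel-quotient idea on a synthetic signal (the paper's skin-interaction model and processing are more elaborate; the frame rate, modulation depth, and band limits here are assumptions):

```python
# Pulse rate from the frame-to-frame log quotient of skin-pixel intensity.
import numpy as np

fps, seconds = 30.0, 20
t = np.arange(int(fps * seconds)) / fps

# Synthetic mean skin-pixel intensity: slow illumination drift times a
# small pulsatile modulation at 1.2 Hz (72 bpm).
illum = 100.0 * (1.0 + 0.05 * np.sin(2 * np.pi * 0.05 * t))
pixels = illum * (1.0 + 0.01 * np.sin(2 * np.pi * 1.2 * t))

q = np.diff(np.log(pixels))              # log-space quotient ~ pulse signal
spec = np.abs(np.fft.rfft(q * np.hanning(q.size)))
freqs = np.fft.rfftfreq(q.size, d=1.0 / fps)
band = (freqs > 0.7) & (freqs < 4.0)     # plausible heart-rate band
hr = freqs[band][spec[band].argmax()] * 60.0
print(f"estimated pulse: {hr:.0f} bpm")  # ~72
```

    Taking differences of log intensities makes the multiplicative illumination term additive and slowly varying, so the pulsatile component dominates the spectrum in the heart-rate band.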

  16. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower precision remote controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the differences between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  17. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    The paper shall review the history and development of real-time DGPS services before describing the design of a high-accuracy GPS commercial augmentation system and service currently delivering over a wide area to users of precise positioning products. The infrastructure and system shall be explained in relation to the need for high accuracy and high integrity of positioning for users. A comparison of the different techniques for the delivery of data shall be provided to outline the technical approach taken. Examples of the performance of the real-time system shall be shown in various regions and modes to outline the currently achievable accuracies. Having described and established the current GPS-based situation, a review of the potential of the Galileo system shall be presented. Following brief contextual information relating to the Galileo project, core system and services, the paper will identify possible key applications and the main user communities for sub-decimetre-level precise positioning. The paper will address the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future and will discuss the implications for precise positioning performance. An outline of the proposed architecture shall be described and associated with pointers towards a successful implementation. Central to this discussion will be an assessment of the likely evolution of system infrastructure and user equipment implementation, prospects for new applications and their effect upon the business case for precise positioning services.

  18. RSRE: RNA structural robustness evaluator.

    PubMed

    Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi

    2007-07-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/. PMID:17567615
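
    As an illustration of a neutrality-style robustness score (one of several definitions a tool like RSRE can support), the sketch below assumes the ViennaRNA Python bindings are installed: fold every one-point mutant and average its base-pair distance to the wild-type structure. The normalization is one simple choice, not necessarily the server's.

```python
# Neutrality of an RNA secondary structure under one-point mutations,
# assuming the ViennaRNA Python bindings (`import RNA`) are available.
import RNA

def neutrality(seq):
    ref_struct, _ = RNA.fold(seq)           # wild-type MFE structure
    bases, dists = "ACGU", []
    for i, orig in enumerate(seq):
        for b in bases:
            if b == orig:
                continue
            mut = seq[:i] + b + seq[i + 1:]
            s, _ = RNA.fold(mut)            # refold the mutant
            # base-pair distance between mutant and wild-type structures
            dists.append(RNA.bp_distance(ref_struct, s))
    # Higher score = one-mutant structures stay closer to the wild type.
    return 1.0 - sum(dists) / (len(dists) * len(seq))

print(neutrality("GGGCUAUUAGCUCAGUUGGUUAGAGCGCACCCCUGAUAAGGGUG"))
```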

  19. Pixel-level robust digital image correlation.

    PubMed

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2013-12-01

    Digital Image Correlation (DIC) is a well-established non-contact optical metrology method. It employs digital image analysis to extract the full-field displacements and strains that occur in objects subjected to external stresses. Despite recent DIC progress, many problematic areas which greatly affect accuracy and can seldom be avoided have received very little attention. Problems posed by the presence of sharp displacement discontinuities, reflections, object borders or edges can be linked to the analysed object's properties and deformation. Other problematic areas, such as image noise, localized reflections or shadows, are related more to the image acquisition process. This paper proposes a new subset-based pixel-level robust DIC method for in-plane displacement measurement which addresses all of these problems in a straightforward and unified approach, significantly improving DIC measurement accuracy compared to classic approaches. The proposed approach minimizes a robust energy functional which adaptively weighs pixel differences in the motion estimation process. The aim is to limit the negative influence of pixels that present erroneous or inconsistent motions by enforcing local motion consistency. The proposed method is compared to the classic Newton-Raphson DIC method in terms of displacement accuracy in three experiments. The first experiment is numerical and presents three combined problems: sharp displacement discontinuities, missing image information and image noise. The second experiment is a real experiment in which a plastic specimen is developing a lateral crack due to the application of uniaxial stress. The region around the crack presents both reflections that saturate the image intensity levels, leading to missing image information, as well as sharp motion discontinuities due to the plastic film rupturing. The third experiment compares the proposed and classic DIC approaches with generic computer vision optical flow methods using images from
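
    The adaptive weighting of pixel differences can be illustrated with a generic robust M-estimator; the sketch below uses Huber weights with a MAD scale estimate, a stand-in for (not a reproduction of) the paper's energy functional.

```python
# Robustly weighted match score between a reference and a deformed subset.
import numpy as np

def robust_score(ref, cur, k=1.345):
    """Huber-weighted mean squared difference between two image subsets."""
    r = (cur - ref).ravel().astype(float)
    # Robust scale estimate from the median absolute deviation (MAD).
    scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
    u = np.abs(r) / scale
    w = np.where(u <= k, 1.0, k / u)   # outlier residuals are down-weighted
    return (w * r ** 2).sum() / w.sum()

# A displacement search would evaluate robust_score for candidate shifts
# of `cur` and keep the minimum, instead of a plain sum of squares, so
# saturated reflections and discontinuities contribute far less.
```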

  20. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advanced joining processes for the NWC and private industry.

  1. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.
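
    A minimal sketch of this kind of topological robustness comparison, assuming the networkx library: remove the most-connected airports one by one and track the size of the largest connected component. The two toy graphs merely stand in for FSC-like (hub-and-spoke) and LCC-like (more decentralized) route networks; the paper's actual data and metrics are not reproduced.

    ```python
    # Targeted-attack robustness: giant component size under hub removal.
    import networkx as nx

    def robustness_curve(G, fraction=0.2):
        """Largest-component size after removing top-degree nodes one by one."""
        G = G.copy()
        sizes = []
        for _ in range(int(fraction * G.number_of_nodes())):
            hub = max(G.degree, key=lambda kv: kv[1])[0]   # most-connected node
            G.remove_node(hub)
            sizes.append(len(max(nx.connected_components(G), key=len)))
        return sizes

    # Toy stand-ins: hub-and-spoke (FSC-like) vs. denser point-to-point (LCC-like).
    fsc_like = nx.star_graph(50)
    lcc_like = nx.erdos_renyi_graph(51, 0.08, seed=1)
    print(robustness_curve(fsc_like)[:5])   # collapses after the hub is removed
    print(robustness_curve(lcc_like)[:5])   # degrades gracefully
    ```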

  2. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  3. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  4. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively. PMID:21033925
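
    The sketch below illustrates the calibration idea common to both records: fit the parameters of a sine-drive grating model to many reference lines simultaneously with a least-squares solver, so that the residual scatter indicates the achieved accuracy. The two-parameter model, the synthetic encoder counts, and the starting values are invented for illustration; the real instrument model includes additional terms (offsets, encoder readings, air-index corrections).

    ```python
    # Global fit of an idealized sine-drive model to several calibration lines.
    import numpy as np
    from scipy.optimize import least_squares

    # "True" drive parameters, used here only to synthesize encoder counts.
    true_K, true_theta0, scale = 9000.0, 0.05, 1e-7
    lam_ref = np.array([4046.56, 4358.33, 5460.74, 5790.66, 6438.47])  # Hg/Cd lines (Angstrom)
    counts = (np.arcsin(lam_ref / true_K) - true_theta0) / scale       # synthetic motor counts

    def model(params, c):
        """Wavelength (Angstrom) vs. motor counts for an idealized sine drive."""
        K, theta0 = params
        return K * np.sin(theta0 + scale * c)

    # One simultaneous fit to all calibration lines, instead of per-setting recalibration.
    fit = least_squares(lambda p: model(p, counts) - lam_ref, x0=[8000.0, 0.0])
    print("fitted (K, theta0):", fit.x)
    print("per-line residuals (Angstrom):", model(fit.x, counts) - lam_ref)
    ```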

  5. Precision Neutron Polarimetry

    NASA Astrophysics Data System (ADS)

    Sharma, Monisha; Barron-Palos, L.; Bowman, J. D.; Chupp, T. E.; Crawford, C.; Danagoulian, A.; Klein, A.; Penttila, S. I.; Salas-Bacci, A. F.; Wilburn, W. S.

    2008-04-01

    The proposed PANDA and abBA experiments aim to measure the correlation coefficients in polarized neutron beta decay at the SNS. The goal of these experiments is a 0.1% measurement, which will require neutron polarimetry at the 0.1% level. The FnPB neutron beam will be polarized using either a ^3He spin filter or a supermirror polarizer, and the neutron polarization will be measured using a ^3He spin filter. An experiment to establish the accuracy to which neutron polarization can be determined using ^3He spin filters was performed at Los Alamos National Laboratory in Summer 2007, and the analysis is in progress. The details of the experiment and the results will be presented.

  6. Impact of orbit, clock and EOP errors in GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Hackman, C.

    2012-12-01

    Precise point positioning (PPP; [1]) has gained ever-increasing usage in GNSS carrier-phase positioning, navigation and timing (PNT) since its inception in the late 1990s. In this technique, high-precision satellite clocks, satellite ephemerides and earth-orientation parameters (EOPs) are applied as fixed input by the user in order to estimate receiver/location-specific quantities such as antenna coordinates, troposphere delay and receiver-clock corrections. This is in contrast to "network" solutions, in which (typically) less-precise satellite clocks, satellite ephemerides and EOPs are used as input, and in which these parameters are estimated simultaneously with the receiver/location-specific parameters. The primary reason for increased PPP application is that it offers most of the benefits of a network solution at a smaller computing cost. In addition, the software required for PPP positioning can be simpler than that required for network solutions. Finally, PPP permits high-precision positioning of single or sparsely spaced receivers that may have few or no GNSS satellites in common view. A drawback of PPP is that the accuracy of the results depends directly on the accuracy of the supplied orbits, clocks and EOPs, since these parameters are not adjusted during the processing. In this study, we will examine the impact of orbit, EOP and satellite clock estimates on PPP solutions. Our primary focus will be the impact of these errors on station coordinates; however, the study may be extended to error propagation into receiver-clock corrections and/or troposphere estimates if time permits. Study motivation: the United States Naval Observatory (USNO) began testing PPP processing using its own predicted orbits, clocks and EOPs in Summer 2012 [2]. The results of such processing could be useful for real- or near-real-time applications should they meet accuracy/precision requirements. Understanding how errors in satellite clocks, satellite orbits and EOPs propagate
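
    A back-of-the-envelope sketch of the error propagation in question: because orbits are held fixed in PPP, each satellite's orbit error biases the modeled range by its projection onto the receiver-satellite line of sight, and the least-squares position estimate absorbs part of that bias. The geometry, the error magnitudes, and the omission of the receiver-clock parameter below are simplifying assumptions for illustration.

    ```python
    # How fixed (unadjusted) satellite orbit errors map into a PPP position.
    import numpy as np

    rng = np.random.default_rng(0)
    n_sat = 8
    # Unit line-of-sight vectors from receiver to satellites (upper hemisphere).
    los = rng.normal(size=(n_sat, 3))
    los[:, 2] = np.abs(los[:, 2])                 # keep satellites above the horizon
    los /= np.linalg.norm(los, axis=1, keepdims=True)

    orbit_err = rng.normal(scale=0.025, size=(n_sat, 3))   # ~2.5 cm orbit errors
    range_bias = np.einsum('ij,ij->i', los, orbit_err)     # projected range errors

    # Design matrix for position only (clock omitted for brevity): d(range)/d(pos) = -los.
    A = -los
    dx, *_ = np.linalg.lstsq(A, range_bias, rcond=None)
    print("induced station-coordinate error (m):", dx)
    ```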

  7. Robust scanner identification based on noise features

    NASA Astrophysics Data System (ADS)

    Gou, Hongmei; Swaminathan, Ashwin; Wu, Min

    2007-02-01

    A large portion of digital image data available today is acquired using digital cameras or scanners. While cameras allow digital reproduction of natural scenes, scanners are often used to capture hardcopy art in more controlled scenarios. This paper proposes a new technique for non-intrusive scanner model identification, which can be further extended to perform tampering detection on scanned images. Using only scanned image samples that contain arbitrary content, we construct a robust scanner identifier to determine the brand/model of the scanner used to capture each scanned image. The proposed scanner identifier is based on statistical features of scanning noise. We first analyze scanning noise from several angles, including through image de-noising, wavelet analysis, and neighborhood prediction, and then obtain statistical features from each characterization. Experimental results demonstrate that the proposed method can effectively identify the correct scanner brands/models with high accuracy.
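
    A hedged sketch of the feature-extraction idea: denoise the scanned image, take the residual as an estimate of the scanning noise, and summarize it statistically. Only the simplest characterization is shown (moments plus a row-to-row correlation, since a scanner's linear sensor tends to leave row-wise structure); the paper combines several characterizations and feeds them to a trained classifier.

    ```python
    # Simple scanning-noise statistics from a denoising residual.
    import numpy as np
    from scipy.ndimage import median_filter

    def noise_features(img):
        """Mean/std/skew/kurtosis of the denoising residual, plus row correlation."""
        img = img.astype(float)
        residual = img - median_filter(img, size=3)    # crude noise estimate
        r = residual.ravel()
        m, s = r.mean(), r.std() + 1e-12
        skew = ((r - m) ** 3).mean() / s ** 3
        kurt = ((r - m) ** 4).mean() / s ** 4
        # Correlation between vertically adjacent residual pixels: scanners with
        # 1D sensor arrays tend to leave row-wise noise structure.
        row_corr = np.corrcoef(residual[:-1].ravel(), residual[1:].ravel())[0, 1]
        return np.array([m, s, skew, kurt, row_corr])

    # Features from many labeled scans would then train a classifier (e.g. an SVM).
    img = np.random.default_rng(0).integers(0, 256, size=(128, 128))
    print(noise_features(img))
    ```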

  8. HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.

    PubMed

    Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R

    2007-08-01

    We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C), is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted plane residual dipolar coupling (RDC) data are collected adaptively in order to offer an intelligent trade-off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determinations of couplings in shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection. These factors include the size of the protein, the experimental set up, and the coupling being measured, among others. To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic
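
    Point (2) above can be made concrete in a few lines: couplings measured independently in several tilted planes are pooled, and their scatter yields a per-coupling reliability estimate. The numbers below are invented for illustration.

    ```python
    # Pooling independent-plane measurements of the same coupling.
    import numpy as np

    def combine_planes(measurements):
        """Pooled coupling (Hz) and its standard error from independent planes."""
        m = np.asarray(measurements, dtype=float)
        return m.mean(), m.std(ddof=1) / np.sqrt(m.size)

    j, err = combine_planes([1.42, 1.38, 1.45])   # same RDC measured in 3 planes
    print(f"J = {j:.2f} +/- {err:.2f} Hz")
    ```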

  9. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec(2), the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10(-10) W/m(2); the ultimate nighttime sensitivity is approximately 3 x 10(-14) W/m(2). Experimental tests were performed to evaluate both the dynamic characteristics of the system and its sensitivity. Dynamic performance was evaluated using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft; this aircraft was tracked out to a maximum range of 15 km, confirming the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite. In passive tracking of a +7.5 magnitude star, the signal-to-noise ratio indicates that it should be possible to track a +12.5 magnitude star. PMID:20048888

  10. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  11. High precision anatomy for MEG

    PubMed Central

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  12. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with

  13. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, the lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. Analysis of the time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions either in the horizontal plane or along the vertical axis. A second stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
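
    A sketch of the end-of-pipeline comparison described here: difference the GPS-derived motion against the commanded table motion ("ground truth") and estimate the error spectrum, e.g. with Welch's method. The synthetic series below stand in for real table profiles and 5 Hz GPS solutions.

    ```python
    # Estimating a GPS error spectrum against simulated ground-truth motion.
    import numpy as np
    from scipy.signal import welch

    fs = 5.0                                       # 5 Hz GPS solution rate
    t = np.arange(0, 600, 1 / fs)
    truth = 0.01 * np.sin(2 * np.pi * 0.5 * t)     # simulated table motion (m)
    gps = truth + np.random.default_rng(1).normal(scale=0.003, size=t.size)

    f, pxx = welch(gps - truth, fs=fs, nperseg=1024)   # error power spectral density
    print(f[:5], pxx[:5])                              # PSD in m^2/Hz
    ```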

  14. Robust atomistic calculation of dislocation line tension

    NASA Astrophysics Data System (ADS)

    Szajewski, B. A.; Pavia, F.; Curtin, W. A.

    2015-12-01

    The line tension Γ of a dislocation is an important and fundamental property ubiquitous to continuum scale models of metal plasticity. However, the precise value of Γ in a given material has proven difficult to assess, with literature values encompassing a wide range. Here results from a multiscale simulation and robust analysis of the dislocation line tension, for dislocation bow-out between pinning points, are presented for two widely-used interatomic potentials for Al. A central part of the analysis involves an effective Peierls stress applicable to curved dislocation structures that markedly differs from that of perfectly straight dislocations but is required to describe the bow-out both in loading and unloading. The line tensions for the two interatomic potentials are similar and provide robust numerical values for Al. Most importantly, the atomic results show notable differences with singular anisotropic elastic dislocation theory in that (i) the coefficient of the ln(L) scaling with dislocation length L differs and (ii) the ratio of screw to edge line tension is smaller than predicted by anisotropic elasticity. These differences are attributed to local dislocation core interactions that remain beyond the scope of elasticity theory. The many differing literature values for Γ are attributed to various approximations and inaccuracies in previous approaches. The results here indicate that continuum line dislocation models, based on elasticity theory and various core-cut-off assumptions, may be fundamentally unable to reproduce full atomistic results, thus hampering the detailed predictive ability of such continuum models.

  15. Robust atomic force microscopy using multiple sensors.

    PubMed

    Baranwal, Mayank; Gorugantu, Ram S; Salapaka, Srinivasa M

    2016-08-01

    Atomic force microscopy typically relies on control based on high-resolution, high-bandwidth cantilever deflection measurements for imaging and for estimating sample topography and properties. More precisely, in amplitude-modulation atomic force microscopy (AM-AFM), the control effort that regulates the deflection amplitude is used as an estimate of sample topography; similarly, contact-mode AFM uses regulation of the deflection signal to generate sample topography. In this article, a control design scheme based on an additional feedback mechanism that uses a vertical z-piezo motion sensor, which augments the deflection-based control scheme, is proposed and evaluated. The proposed scheme exploits the fact that the piezo motion sensor, though inferior to the cantilever deflection signal in terms of resolution and bandwidth, provides information on piezo actuator dynamics that is not easily retrievable from the deflection signal. The augmented design results in significant improvements in imaging bandwidth and robustness, especially in AM-AFM, where the complicated underlying nonlinear dynamics inhibits estimating piezo motions from deflection signals. In AM-AFM experiments, the two-sensor based design demonstrates a substantial improvement in robustness to modeling uncertainties by practically eliminating the peak in the sensitivity plot without affecting the closed-loop bandwidth when compared to a design that does not use the piezo-position sensor based feedback. The contact-mode imaging results, which use proportional-integral controllers for cantilever-deflection regulation, demonstrate improvements in bandwidth and robustness to modeling uncertainties of over 30% and 20%, respectively. The piezo-sensor based feedback is developed using the H∞ control framework. PMID:27587128

  16. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
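
    A minimal sketch of the risk-averse criterion named above, assuming numpy/scipy: conditional value-at-risk (CVaR) averages the worst tail of sampled costs, and a protocol parameter is chosen to minimize it under sampled experimental variation. The single "dose" parameter, the toy yield curve, and the penalty are stand-ins, not the authors' PCR cost model.

    ```python
    # Choosing a protocol setting by minimizing CVaR of a sampled cost.
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    noise = rng.normal(scale=0.1, size=2000)   # common random draws of experimental variation

    def cvar(samples, alpha=0.9):
        """Mean of the worst (1 - alpha) fraction of sampled costs."""
        tail = np.sort(samples)[int(alpha * samples.size):]
        return tail.mean()

    def protocol_cost(dose):
        """Toy cost: reagent use plus a penalty when the noisy yield misses target."""
        yield_ = 1.0 - np.exp(-dose * (1.0 + noise))          # made-up response curve
        return dose + 10.0 * np.maximum(0.0, 0.95 - yield_)   # cost + shortfall penalty

    res = minimize_scalar(lambda d: cvar(protocol_cost(d)), bounds=(0.1, 10.0),
                          method='bounded')
    print("risk-averse (CVaR-optimal) setting:", res.x)
    ```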

  17. Dosimetry robustness with stochastic optimization

    NASA Astrophysics Data System (ADS)

    Nohadani, Omid; Seco, Joao; Martin, Benjamin C.; Bortfeld, Thomas

    2009-06-01

    All radiation therapy treatment planning relies on accurate dose calculation. Uncertainties in dosimetric prediction can significantly degrade an otherwise optimal plan. In this work, we introduce a robust optimization method which handles dosimetric errors and guarantees high-quality IMRT plans. Unlike other dose error estimations, we do not rely on detailed knowledge about the sources of the uncertainty and use a generic error model based on random perturbation. This generality is sought in order to cope with a large variety of error sources. We demonstrate the method on a clinical case of lung cancer and show that our method provides plans that are more robust against dosimetric errors and are clinically acceptable. In fact, the robust plan exhibits a two-fold improved equivalent uniform dose compared to the non-robust but optimized plan. The achieved speedup will allow computationally extensive multi-criteria or beam-angle optimization approaches to deliver dosimetrically relevant plans.
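
    The generic-perturbation idea can be sketched as follows: rather than optimizing beamlet weights against one dose-influence matrix, optimize against the worst case over randomly perturbed copies of it. The dimensions, the quadratic objective, and the derivative-free solver below are toy choices, not the clinical formulation.

    ```python
    # Worst-case (robust) beamlet-weight optimization under random dose perturbations.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_vox, n_beam = 40, 8
    D = rng.uniform(0.0, 1.0, size=(n_vox, n_beam))   # nominal dose-influence matrix
    target = np.ones(n_vox)                           # prescribed voxel doses
    # Randomly perturbed copies of D stand in for a generic dosimetric error model.
    scenarios = [D * (1.0 + rng.normal(scale=0.03, size=D.shape)) for _ in range(25)]

    def worst_case(w):
        """Worst squared dose deviation over all perturbed scenarios."""
        w = np.maximum(w, 0.0)                        # keep beamlet weights nonnegative
        return max(np.sum((Ds @ w - target) ** 2) for Ds in scenarios)

    res = minimize(worst_case, x0=np.full(n_beam, 0.2), method='Nelder-Mead')
    print("robust worst-case objective:", res.fun)
    ```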

  18. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  19. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid, undermining the performance of the algorithms. In this study, we introduce a robust optimization (RO) approach, the Iterative Linear Decision Rule (ILDR), to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies, including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows, and it outperforms the SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in the historical record. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.
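
    A sketch of the linear-decision-rule idea underlying ILDR: each period's release is an affine function of the observed inflow, and the rule's coefficients are chosen against sampled inflow scenarios. The storage dynamics, the concave hydropower proxy, and the solver below are simplified stand-ins for the paper's formulation.

    ```python
    # Fitting a linear decision rule for reservoir releases over inflow scenarios.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    T, n_scen = 12, 100
    inflows = rng.lognormal(mean=0.0, sigma=0.3, size=(n_scen, T))   # sampled scenarios
    s0, s_max = 5.0, 10.0                                            # initial / max storage

    def neg_mean_benefit(params):
        """Average (negated) benefit of the rule release_t = a_t + b_t * inflow_t."""
        a, b = params[:T], params[T:]
        total = 0.0
        for q in inflows:
            s = s0
            for t in range(T):
                r = np.clip(a[t] + b[t] * q[t], 0.0, s + q[t])   # feasible release
                s = min(s + q[t] - r, s_max)                     # water balance with spill
                total += np.log1p(r)                             # concave hydropower proxy
        return -total / n_scen

    x0 = np.concatenate([np.full(T, 0.5), np.full(T, 0.5)])
    res = minimize(neg_mean_benefit, x0, method='Powell')
    print("mean benefit of fitted rule:", -res.fun)
    ```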

  20. Precision Astronomy with Imperfect Deep Depletion CCDs

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher; LSST Sensor Team; PanSTARRS Team

    2014-01-01

    While thick CCDs do provide definite advantages in terms of increased quantum efficiency at wavelengths 700 nm < λ < 1.1 μm and reduced fringing from atmospheric emission lines, these devices also exhibit undesirable features that pose a challenge to precision determination of the positions, fluxes, and shapes of astronomical objects, and to the precision extraction of features in astronomical spectra. For example, the assumptions of a perfectly rectilinear pixel grid and of an intensity-independent point spread function become increasingly invalid as we push to higher precision measurements. Many of the effects seen in these devices arise from lateral electric fields within the detector that produce charge-transport anomalies, which have previously been misinterpreted as quantum efficiency variations. Performing simplistic flat-fielding therefore introduces systematic errors into the image processing pipeline. One measurement challenge we face is devising a combination of calibration methods and algorithms that can distinguish genuine quantum efficiency variations from charge-transport effects. These device imperfections also affect spectroscopic applications, such as line centroid determination for precision radial velocity studies. Given the scientific benefits of improving both the precision and accuracy of astronomical measurements, we need to identify, characterize, and overcome these various detector artifacts. In retrospect, many of the detector features first identified in thick CCDs also afflict measurements made with more traditional CCD detectors, albeit often at a reduced level, since the photocharge is subject to the perturbing influence of lateral electric fields for a shorter time interval. I provide a qualitative overview of the physical effects we think are responsible for the observed device properties, and provide some perspective for the work that lies ahead.