Science.gov

Sample records for accuracy precision linearity

  1. Precise linear sun sensor

    NASA Technical Reports Server (NTRS)

    Johnston, D. D.

    1972-01-01

    An evaluation of the precise linear sun sensor relating to future mission applications was performed. The test procedures, data, and results of the dual-axis, solid-state system are included. Brief descriptions of the sensing head and of the system's operational characteristics are presented. A unique feature of the system is that multiple sensor heads with various fields of view may be used with the same electronics.

  2. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J.; Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ± 3% in dose over the range of sample points. For some points in high-dose gradients

  3. Accuracy and precision of an IGRT solution.

    PubMed

    Webster, Gareth J; Rowbottom, Carl G; Mackay, Ranald I

    2009-01-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within +/- 3% in dose over the range of sample points. For some points in high-dose gradients
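
    As a small illustrative sketch of the comparison statistic behind the figures above (the detector readings below are invented, not from the study): per-detector percent differences between measured and TPS-recalculated dose, the mean systematic offset, and the count of points beyond ±3% and ±5%.

        def dose_comparison(measured_cgy, predicted_cgy):
            # Per-point % error, mean systematic error, counts beyond 3% / 5%
            errs = [100.0 * (m - p) / p
                    for m, p in zip(measured_cgy, predicted_cgy)]
            mean_err = sum(errs) / len(errs)
            n3 = sum(abs(e) > 3.0 for e in errs)
            n5 = sum(abs(e) > 5.0 for e in errs)
            return mean_err, n3, n5

        # Hypothetical microMOSFET readings vs. TPS recalculation (cGy)
        measured  = [183.1, 96.0, 210.4, 150.2, 88.9]
        predicted = [186.0, 97.5, 212.0, 158.0, 90.1]
        print(dose_comparison(measured, predicted))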

  4. Precision linear ramp function generator

    DOEpatents

    Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.

    1986-01-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
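
    As a rough illustration of the behaviour described (component values are invented, not from the patent): an ideal inverting integrator driven by a constant input voltage produces a linear ramp whose rate is set by that voltage and the integrator time constant.

        def ramp_voltage(t_s, v_in, r_ohms, c_farads, v_baseline=0.0):
            # Ideal inverting-integrator ramp: Vout(t) = Vbase - Vin * t / (R * C)
            return v_baseline - v_in * t_s / (r_ohms * c_farads)

        # Hypothetical values: -1 V input, R = 100 kOhm, C = 1 uF
        # => ramp rate of +10 V/s from a 0 V baseline
        for t in (0.0, 0.1, 0.2, 0.5):
            print(t, ramp_voltage(t, -1.0, 100e3, 1e-6))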

  5. Precision linear ramp function generator

    DOEpatents

    Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.

    1984-08-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  6. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval for the specific measurement. PMID:27044032
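
    A minimal sketch of how the ellipse method is commonly formulated (the defect dimensions below are hypothetical, not from the paper): the angle of incidence is estimated from the width-to-length ratio of the elliptical bullet defect, so small errors in those dimensions propagate directly into the reconstructed trajectory.

        import math

        def ellipse_method_angle(defect_width_mm, defect_length_mm):
            # Standard ellipse-method relation: sin(angle) = width / length,
            # with the angle measured from the target surface
            ratio = defect_width_mm / defect_length_mm
            return math.degrees(math.asin(min(ratio, 1.0)))

        # Hypothetical defect: 9.0 mm wide, 14.5 mm long
        print(round(ellipse_method_angle(9.0, 14.5), 1))  # ~38.4 degrees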

  7. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single station guidance program since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg. through the use of a straightforward lookup table concept. This corresponds to a cross range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.

  8. Precision magnetic suspension linear bearing

    NASA Technical Reports Server (NTRS)

    Trumper, David L.; Queen, Michael A.

    1992-01-01

    We have presented the design and analyzed the electromechanics of a linear motor suitable for independently controlling two suspension degrees of freedom. This motor, at least on paper, meets the requirements for driving an X-Y stage of 10 kg mass with about 4 m/s² acceleration, with travel of several hundred millimeters in X and Y, and with reasonable power dissipation. A conceptual design for such a stage is presented. The theoretical feasibility of linear and planar bearings using single or multiple magnetic suspension linear motors is demonstrated.

  9. Precision measurements of the SLC (Stanford Linear Collider) beam energy

    SciTech Connect

    Kent, J.; King, M.; Von Zanthier, C.; Watson, S.; Levi, M.; Rouse, F.; Bambade, P.; Erickson, R.; Jung, C.K.; Nash, J.

    1989-03-01

    A method of precisely determining the beam energy in high energy linear colliders has been developed using dipole spectrometers and synchrotron radiation detectors. Beam lines implementing this method have been installed on the Stanford Linear Collider. An absolute energy measurement with an accuracy of better than ΔE/E = 5 × 10⁻⁴ can be achieved on a pulse-to-pulse basis. The operation of this system will be described. 4 refs., 3 figs., 1 tab.
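
    A hedged illustration of the underlying spectrometer relation (numbers invented, not taken from the report): for an ultra-relativistic beam bent through a small angle θ by a dipole of known integrated field, the energy follows from E[GeV] ≈ 0.2998 · ∫B·dl [T·m] / θ [rad], so the relative energy error tracks the relative error on the measured bend angle.

        def beam_energy_gev(field_integral_tm, bend_angle_rad):
            # Dipole-spectrometer estimate for an ultra-relativistic beam:
            # E ~= 0.2998 * Int(B dl) / theta
            return 0.2998 * field_integral_tm / bend_angle_rad

        # Hypothetical numbers: 3.05 T*m field integral, 18.286 mrad bend angle
        e = beam_energy_gev(3.05, 18.286e-3)
        print(f"E = {e:.2f} GeV")
        print(f"dE/E for a 9 urad angle error = {9e-6 / 18.286e-3:.1e}")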

  10. Novel linear piezoelectric motor for precision position stage

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Shi, Yunlai; Zhang, Jun; Wang, Junshan

    2016-03-01

    Conventional servomotors and stepping motors face challenges in nanometer positioning stages due to their complex structure, motion transformation mechanisms, and slow dynamic response, compared with stages driven directly by a linear motor. A new butterfly-shaped linear piezoelectric motor for linear motion is presented. A two-degree-of-freedom precision position stage driven by the proposed linear ultrasonic motor possesses a simple and compact configuration, which gives the system a shorter drive chain. Firstly, the working principle of the linear ultrasonic motor is analyzed. The oscillation orbits of the two driving feet on the stator are produced successively by using the anti-symmetric and symmetric vibration modes of the piezoelectric composite structure, and the slider pressed on the driving feet can be propelled twice in only one vibration cycle. Then, with the derivation of the dynamic equation of the piezoelectric actuator and a transient response model, the start-up and settling-state characteristics of the proposed linear actuator are investigated theoretically and experimentally and used to evaluate the step resolution of the precision platform driven by the actuator. Moreover, the structure of the two-degree-of-freedom position stage system is described and a special precision displacement measurement system is built. Finally, the characteristics of the two-degree-of-freedom position stage are studied. In the closed-loop condition, a positioning accuracy of ±0.5 μm is experimentally obtained for the stage propelled by the piezoelectric motor. A precision position stage based on the proposed butterfly-shaped linear piezoelectric motor is thus theoretically and experimentally investigated.

  11. Precision and Accuracy Studies with Kajaani Fiber Length Analyzers

    NASA Astrophysics Data System (ADS)

    Copur, Yalcin; Makkonen, Hannu

    The aim of this study was to test the measurement precision and accuracy of the Kajaani FS-100, giving attention to possible machine error in the measurements. Fiber lengths of pine pulps produced using the polysulfide, kraft, biokraft and soda methods were determined using both the FS-100 and FiberLab automated fiber length analyzers, and the measured length values were compared. The measurement precision and accuracy were tested by replicated measurements using rayon staple fibers. Measurements performed on pulp samples showed typical length distributions for both analyzers. Results obtained from the Kajaani FS-100 and FiberLab showed a significant correlation. The shorter length measurement with the FiberLab was found to be mainly due to the instrument calibration. The measurement repeatability tested for the Kajaani FS-100 indicated that the measurements are precise.

  12. Precision envelope detector and linear rectifier circuitry

    DOEpatents

    Davis, Thomas J.

    1980-01-01

    Disclosed is a method and apparatus for the precise linear rectification and envelope detection of oscillatory signals. The signal is applied to a voltage-to-current converter which supplies current to a constant current sink. The connection between the converter and the sink is also applied through a diode and an output load resistor to a ground connection. The connection is also connected to ground through a second diode of opposite polarity from the diode in series with the load resistor. Very small amplitude voltage signals applied to the converter will cause a small change in the output current of the converter, and the difference between the output current and the constant current sink will be applied either directly to ground through the single diode, or across the output load resistor, dependent upon the polarity. Disclosed also is a full-wave rectifier utilizing constant current sinks and voltage-to-current converters. Additionally, disclosed is a combination of the voltage-to-current converters with differential integrated circuit preamplifiers to boost the initial signal amplitude, and with low pass filtering applied so as to obtain a video or signal envelope output.

  13. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacement in the range of human chest wall displacement during quiet breathing. OEP performance was investigated using a fully programmable chest wall simulator (CWS). The CWS was programmed to move its eight shafts 10 times over the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed with the aim to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, and (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m³ calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between the measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504
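
    A rough sketch of how the reported quantities can be computed (the displacement values are invented): accuracy as the mean discrepancy between commanded and OEP-measured displacement, and precision error as the ratio of the measurement uncertainty over repeats to the recorded displacement.

        import statistics

        def accuracy_and_precision_error(commanded_mm, measured_mm):
            # Mean discrepancy (accuracy) and uncertainty-to-displacement
            # ratio (precision error, %) over repeats of one setting
            discrepancies = [m - c for c, m in zip(commanded_mm, measured_mm)]
            accuracy = statistics.mean(discrepancies)
            uncertainty = statistics.stdev(measured_mm)
            precision_error = 100.0 * uncertainty / statistics.mean(measured_mm)
            return accuracy, precision_error

        # Hypothetical 5 repeats of a commanded 4.000 mm shaft displacement
        cmd = [4.000] * 5
        meas = [4.012, 4.004, 3.998, 4.009, 4.002]
        print(accuracy_and_precision_error(cmd, meas))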

  14. Precision and Accuracy Parameters in Structured Light 3-D Scanning

    NASA Astrophysics Data System (ADS)

    Eiríksson, E. R.; Wilm, J.; Pedersen, D. B.; Aanæs, H.

    2016-04-01

    Structured light systems are popular in part because they can be constructed from off-the-shelf low cost components. In this paper we quantitatively show how common design parameters affect precision and accuracy in such systems, supplying a much needed guide for practitioners. Our quantitative measure is the established VDI/VDE 2634 (Part 2) guideline using precision made calibration artifacts. Experiments are performed on our own structured light setup, consisting of two cameras and a projector. We place our focus on the influence of calibration design parameters, the calibration procedure and encoding strategy and present our findings. Finally, we compare our setup to a state of the art metrology grade commercial scanner. Our results show that comparable, and in some cases better, results can be obtained using the parameter settings determined in this study.

  15. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  16. Precision and accuracy in the reproduction of simple tone sequences.

    PubMed

    Vos, P G; Ellermann, H H

    1989-02-01

    In four experiments we investigated the precision and accuracy with which amateur musicians are able to reproduce sequences of tones varied only temporally, so as to have tone and rest durations constant over sequences, with the tempo varied over the musically meaningful range of 5-0.5 tones per second. Experiments 1 and 2 supported the hypothesis of an attentional bias toward having the attack moments, rather than the departure moments, precisely timed. Experiment 3 corroborated the hypothesis that inaccurate timing of short interattack intervals is manifested in a lengthening of rests, rather than tones, as a result of larger motor activity during the reproduction of rests. Experiment 4 gave some support to the hypothesis that the shortening of long interattack intervals is due to mnemonic constraints affecting the rests rather than the tones. Both theoretical and practical consequences of the various findings, particularly with respect to timing in musical performance, are discussed. PMID:2522528

  17. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  18. Accuracy, Precision, and Resolution in Strain Measurements on Diffraction Instruments

    NASA Astrophysics Data System (ADS)

    Polvino, Sean M.

    Diffraction stress analysis is a commonly used technique to evaluate the properties and performance of different classes of materials, from engineering materials such as steels and alloys to electronic materials like silicon chips. To better understand the performance of these materials under operating conditions, they are also commonly subjected to elevated temperatures and different loading conditions. The validity of any measurement under these conditions is only as good as the control of the conditions and the accuracy and precision of the instrument being used to measure the properties. What is the accuracy and precision of a typical diffraction system and what is the best way to evaluate these quantities? Is there a way to remove systematic and random errors in the data that are due to problems with the control system used? With the advent of device engineering employing internal stress as a method for increasing performance, the measurement of stress in microelectronic structures has become increasingly important. X-ray diffraction provides an ideal method for measuring these small areas without the need to modify the sample and possibly change the strain state. Micro- and nano-diffraction experiments on silicon-on-insulator samples revealed changes to the material under investigation and raised significant concerns about the usefulness of these techniques. This damage process and the application of micro- and nano-diffraction are discussed.

  19. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
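
    A simplified sketch of step 2 of the error-budget outline above (the per-source range errors and DOP value are assumptions, not the authors' numbers): individual range-error contributions are root-sum-squared into a user-equivalent range error and scaled by the dilution of precision to give a position-domain error.

        import math

        def position_error(range_errors_m, dop):
            # Root-sum-square the per-source ranging errors into a user
            # equivalent range error (UERE), then scale by DOP
            uere = math.sqrt(sum(e * e for e in range_errors_m))
            return dop * uere

        # Hypothetical residual PPP range errors (m): orbit, clock,
        # troposphere, multipath/noise
        errors = [0.025, 0.020, 0.04, 0.05]
        print(f"{position_error(errors, dop=1.8):.3f} m")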

  20. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

    Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development that promise to provide satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth-order reflected light from a grating of lines. Normal-incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectra matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports upon work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  1. Precision Motion Control of Linear DC Solenoid Motor

    NASA Astrophysics Data System (ADS)

    Kato, Atsushi; Kubo, Takeharu; Ohnishi, Kouhei

    High-speed and high-precision control is required in a variety of applications. Hence, a new linear actuator based on a Linear DC Solenoid Motor (LDSM) is developed for that purpose. In addition, we propose a precision motion control for the LDSM. The LDSM is composed of a solenoid stator and a moving permanent magnet. It has a simple and light structure. Moreover, the solenoid form provides small leakage and generates more power than a non-linear motor. Nevertheless, nonlinear disturbance forces such as friction prevent the LDSM from being controlled precisely. In this paper, a high-gain disturbance observer is applied to the LDSM to suppress these forces. The observer is able to estimate and compensate the nonlinear disturbance force. Experiments confirm that the proposed precision motion control provides the LDSM with precise, observer-based position and force control.

  2. T1-mapping in the heart: accuracy and precision

    PubMed Central

    2014-01-01

    The longitudinal relaxation time constant (T1) of the myocardium is altered in various disease states due to increased water content or other changes to the local molecular environment. Changes in both native T1 and T1 following administration of gadolinium (Gd) based contrast agents are considered important biomarkers and multiple methods have been suggested for quantifying myocardial T1 in vivo. Characterization of the native T1 of myocardial tissue may be used to detect and assess various cardiomyopathies while measurement of T1 with extracellular Gd based contrast agents provides additional information about the extracellular volume (ECV) fraction. The latter is particularly valuable for more diffuse diseases that are more challenging to detect using conventional late gadolinium enhancement (LGE). Both T1 and ECV measures have been shown to have important prognostic significance. T1-mapping has the potential to detect and quantify diffuse fibrosis at an early stage provided that the measurements have adequate reproducibility. Inversion recovery methods such as MOLLI have excellent precision and are highly reproducible when using tightly controlled protocols. The MOLLI method is widely available and is relatively mature. The accuracy of inversion recovery techniques is affected significantly by magnetization transfer (MT). Despite this, the estimate of apparent T1 using inversion recovery is a sensitive measure, which has been demonstrated to be a useful tool in characterizing tissue and discriminating disease. Saturation recovery methods have the potential to provide a more accurate measurement of T1 that is less sensitive to MT as well as other factors. Saturation recovery techniques are, however, noisier and somewhat more artifact prone and have not demonstrated the same level of reproducibility at this point in time. This review article focuses on the technical aspects of key T1-mapping methods and imaging protocols and describes their limitations including

  3. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS--1988

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies (SLAMS) during 1988 are analyzed. Pooled site variances and average biases, which are quantities relevant to both precision and accuracy determinations, are statistically compared within and between states to assess ...

  4. Accuracy and precision of alternative estimators of ectoparasiticide efficacy.

    PubMed

    Schall, Robert; Burger, Divan A; Luus, Herman G

    2016-06-15

    While there is consensus that the efficacy of parasiticides is properly assessed using the Abbott formula, there is as yet no general consensus on the use of arithmetic versus geometric mean numbers of surviving parasites in the formula. The purpose of this paper is to investigate the accuracy and precision of various efficacy estimators based on the Abbott formula which alternatively use arithmetic mean, geometric mean and median numbers of surviving parasites; we also consider a maximum likelihood estimator. Our study shows that the best estimators using geometric means are competitive, with respect to root mean squared error, with the conventional Abbott estimator using arithmetic means, as they have lower average and lower median root mean square error over the parameter scenarios which we investigated. However, our study confirms that Abbott estimators using geometric means are potentially biased upwards, and this upward bias is substantial in particular when the test product has substandard efficacy (90% and below). For this reason, we recommend that the Abbott estimator be calculated using arithmetic means. PMID:27198777
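
    For reference, a small sketch (with made-up parasite counts) of the Abbott estimator computed with arithmetic versus geometric mean counts, the two variants compared in the paper; the geometric mean is taken here with the usual +1 offset to accommodate zero counts, which is one common convention rather than a detail stated in the abstract.

        import math

        def abbott_efficacy(control_counts, treated_counts, mean="arithmetic"):
            # Abbott efficacy (%) = 100 * (1 - mean_treated / mean_control)
            def m(xs):
                if mean == "arithmetic":
                    return sum(xs) / len(xs)
                # geometric mean with a +1 offset to accommodate zero counts
                return math.exp(sum(math.log(x + 1) for x in xs) / len(xs)) - 1
            return 100.0 * (1.0 - m(treated_counts) / m(control_counts))

        control = [42, 55, 38, 61, 47]   # surviving parasites, untreated group
        treated = [3, 0, 6, 1, 2]        # surviving parasites, treated group
        print(abbott_efficacy(control, treated, "arithmetic"))
        print(abbott_efficacy(control, treated, "geometric"))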

  5. Accuracy and precision of quantitative 31P-MRS measurements of human skeletal muscle mitochondrial function.

    PubMed

    Layec, Gwenael; Gifford, Jayson R; Trinity, Joel D; Hart, Corey R; Garten, Ryan S; Park, Song Y; Le Fur, Yann; Jeong, Eun-Kee; Richardson, Russell S

    2016-08-01

    Although theoretically sound, the accuracy and precision of (31)P-magnetic resonance spectroscopy ((31)P-MRS) approaches to quantitatively estimate mitochondrial capacity are not well documented. Therefore, employing four differing models of respiratory control [linear, kinetic, and multipoint adenosine diphosphate (ADP), and phosphorylation potential], this study sought to determine the accuracy and precision of (31)P-MRS assessments of peak mitochondrial adenosine triphosphate (ATP) synthesis rate utilizing directly measured peak respiration (State 3) in permeabilized skeletal muscle fibers. In 23 subjects of different fitness levels, (31)P-MRS during a 24-s maximal isometric knee extension and high-resolution respirometry in muscle fibers from the vastus lateralis were performed. Although significantly correlated with State 3 respiration (r = 0.72), both the linear (45 ± 13 mM/min) and phosphorylation potential (47 ± 16 mM/min) models grossly overestimated the calculated in vitro peak ATP synthesis rate (P < 0.05). Of the ADP models, the kinetic model was well correlated with State 3 respiration (r = 0.72, P < 0.05) but moderately overestimated the ATP synthesis rate (P < 0.05), while the multipoint model, although somewhat less well correlated with State 3 respiration (r = 0.55, P < 0.05), most accurately reflected the peak ATP synthesis rate. Of note, the PCr recovery time constant (τ), a qualitative index of mitochondrial capacity, exhibited the strongest correlation with State 3 respiration (r = 0.80, P < 0.05). Therefore, this study reveals that each of the (31)P-MRS data analyses, including PCr τ, exhibits precision in terms of mitochondrial capacity. As only the multipoint ADP model did not overestimate the peak skeletal muscle mitochondrial ATP synthesis rate, the multipoint ADP model is the only quantitative approach to exhibit both accuracy and precision. PMID:27302751
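
    A heavily hedged sketch of the kind of calculation behind one of the models named above, the kinetic ADP model of respiratory control; the measured rate, the [ADP] value and the Km are assumptions for illustration, not values from the study.

        def vmax_kinetic_adp(pcr_resynthesis_rate_mm_min, adp_um, km_um=30.0):
            # Hyperbolic (kinetic ADP) model: V = Vmax / (1 + Km/[ADP]),
            # so Vmax = V * (1 + Km/[ADP])
            return pcr_resynthesis_rate_mm_min * (1.0 + km_um / adp_um)

        # Hypothetical post-exercise values: initial PCr resynthesis rate of
        # 18 mM/min at an end-exercise free [ADP] of 40 uM, assumed Km ~ 30 uM
        print(f"{vmax_kinetic_adp(18.0, 40.0):.1f} mM/min")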

  6. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  7. Precision measurements of linear scattering density using muon tomography

    NASA Astrophysics Data System (ADS)

    Åström, E.; Bonomi, G.; Calliari, I.; Calvini, P.; Checchia, P.; Donzella, A.; Faraci, E.; Forsberg, F.; Gonella, F.; Hu, X.; Klinger, J.; Sundqvist Ökvist, L.; Pagano, D.; Rigoni, A.; Ramous, E.; Urbani, M.; Vanini, S.; Zenoni, A.; Zumerle, G.

    2016-07-01

    We demonstrate that muon tomography can be used to precisely measure the properties of various materials. The materials which have been considered have been extracted from an experimental blast furnace, including carbon (coke) and iron oxides, for which measurements of the linear scattering density relative to the mass density have been performed with an absolute precision of 10%. We report the procedures that are used in order to obtain such precision, and a discussion is presented to address the expected performance of the technique when applied to heavier materials. The results we obtain do not depend on the specific type of material considered and therefore they can be extended to any application.

  4. Increasing the precision and accuracy of top-loading balances: application of experimental design.

    PubMed

    Bzik, T J; Henderson, P B; Hobbs, J P

    1998-01-01

    The traditional method of estimating the weight of multiple objects is to obtain the weight of each object individually. We demonstrate that the precision and accuracy of these estimates can be improved by using a weighing scheme in which multiple objects are simultaneously on the balance. The resulting system of linear equations is solved to yield the weight estimates for the objects. Precision and accuracy improvements can be made by using a weighing scheme without requiring any more weighings than the number of objects when a total of at least six objects are to be weighed. It is also necessary that multiple objects can be weighed with about the same precision as that obtained with a single object, and the scale bias remains relatively constant over the set of weighings. Simulated and empirical examples are given for a system of eight objects in which up to five objects can be weighed simultaneously. A modified Plackett-Burman weighing scheme yields a 25% improvement in precision over the traditional method and implicitly removes the scale bias from seven of the eight objects. Applications of this novel use of experimental design techniques are shown to have potential commercial importance for quality control methods that rely on the mass change rate of an object. PMID:21644600
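
    A toy sketch of the idea, not the authors' actual design: each row of the design matrix records which objects are on the balance together, and the weights are recovered by least squares, which averages random balance error across the combined weighings.

        import numpy as np

        # Hypothetical design: 4 objects, 4 weighings, three objects per weighing.
        # Row i of A marks which objects are on the balance in weighing i.
        A = np.array([[1, 1, 1, 0],
                      [1, 1, 0, 1],
                      [1, 0, 1, 1],
                      [0, 1, 1, 1]], dtype=float)

        true_w = np.array([10.2, 7.5, 3.3, 5.9])            # grams (made up)
        rng = np.random.default_rng(0)
        readings = A @ true_w + rng.normal(0.0, 0.01, 4)    # simulated balance noise

        est, *_ = np.linalg.lstsq(A, readings, rcond=None)
        print(np.round(est, 3))                             # estimated weights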

  9. Precision Linear Actuator for Space Interferometry Mission (SIM) Siderostat Pointing

    NASA Technical Reports Server (NTRS)

    Cook, Brant; Braun, David; Hankins, Steve; Koenig, John; Moore, Don

    2008-01-01

    'SIM PlanetQuest will exploit the classical measuring tool of astrometry (interferometry) with unprecedented precision to make dramatic advances in many areas of astronomy and astrophysics'(1). In order to obtain interferometric data, two large steerable mirrors, or siderostats, are used to direct starlight into the interferometer. A gimbaled mechanism actuated by linear actuators is chosen to meet the unprecedented pointing and angle-tracking requirements of SIM. A group of JPL engineers designed, built, and tested a linear ballscrew actuator capable of performing submicron incremental steps for 10 years of continuous operation. Precise, zero-backlash, closed-loop pointing control requirements led the team to implement a ballscrew actuator with a direct-drive DC motor and a precision piezo brake. Motor control commutation using feedback from a precision linear encoder on the ballscrew output produced an unexpected incremental step size of 20 nm over a range of 120 mm, yielding a dynamic range of 6,000,000:1. The results prove that linear nanometer positioning requires no gears, levers, or hydraulic converters. Along the way many lessons have been learned and will subsequently be shared.

  10. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  11. Numerical planetary and lunar ephemerides - Present status, precision and accuracies

    NASA Technical Reports Server (NTRS)

    Standish, E. Myles, Jr.

    1986-01-01

    Features of the ephemeris creation process are described, with attention given to the equations of motion, the numerical integration, and the least-squares fitting process. Observational data are presented and ephemeris accuracies are estimated. It is believed that radio measurements, VLBI, occultations, and the Space Telescope and Hipparcos will improve ephemerides in the near future. Limitations to accuracy are considered as well as relativity features. The export procedure, by which an outside user may obtain and use the JPL ephemerides, is discussed.

  12. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  13. Precision linear shaped charge analyses for severance of metals

    SciTech Connect

    Vigil, M.G.

    1996-08-01

    The Precision Linear Shaped Charge (PLSC) design concept involves the independent fabrication and assembly of the liner (wedge of PLSC), the tamper/confinement, and explosive. The liner is the most important part of a linear shaped charge (LSC) and should be fabricated by a more quality controlled, precise process than the tamper material. Also, this concept allows the liner material to be different from the tamper material. The explosive can be loaded between the liner and tamper as the last step in the assembly process rather than the first step as in conventional LSC designs. PLSC designs have been shown to produce increased jet penetrations in given targets, more reproducible jet penetration, and more efficient explosive cross-section geometries using a minimum amount of explosive. The Linear Explosive Shaped Charge Analysis (LESCA) code developed at Sandia National Laboratories has been used to assist in the design of PLSCs. LESCA predictions for PLSC jet tip velocities, jet-target impact angles, and jet penetration in aluminum and steel targets are compared to measured data. The advantages of PLSC over conventional LSC are presented. As an example problem, the LESCA code was used to analytically develop a conceptual design for a PLSC component to sever a three-inch thick 1018 steel plate at a water depth of 500 feet (15 atmospheres).

  14. The precision and accuracy of a portable heart rate monitor.

    PubMed

    Seaward, B L; Sleamaker, R H; McAuliffe, T; Clapp, J F

    1990-01-01

    A device that would comfortably and accurately measure exercise heart rate during field performance could be valuable for athletes, fitness participants, and investigators in the field of exercise physiology. Such a device, a portable telemeterized microprocessor, was compared with direct EKG measurements in a laboratory setting under several conditions to assess its accuracy. Twenty-four subjects were studied at rest and during light-, moderate-, high-, and maximal-intensity endurance activities (walking, running, aerobic dancing, and Nordic Track simulated cross-country skiing). Differences between values obtained by the two measuring devices were not statistically significant, with correlation coefficient (r) values ranging from 0.998 to 0.999. The two methods proved equally reliable for measuring heart rate in a host of varied aerobic activities at varying intensities. PMID:2306564

  15. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect.

    PubMed

    Ren, Dongxu; Zhao, Huiying; Zhang, Chupeng; Yuan, Daocheng; Xi, Jianpu; Zhu, Xueliang; Ban, Xinxing; Dong, Longchao; Gu, Yawen; Jiang, Chunye

    2016-01-01

    A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m. PMID:27089348

  16. Multi-Repeated Projection Lithography for High-Precision Linear Scale Based on Average Homogenization Effect

    PubMed Central

    Ren, Dongxu; Zhao, Huiying; Zhang, Chupeng; Yuan, Daocheng; Xi, Jianpu; Zhu, Xueliang; Ban, Xinxing; Dong, Longchao; Gu, Yawen; Jiang, Chunye

    2016-01-01

    A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity of different locations of pitches in the mask to make a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method’s theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the repeat exposure number of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the effectiveness of the multi-repeated photolithography method is confirmed to easily realize a pitch accuracy of 43 nm in any 10 locations of 1 m, and the whole length accuracy of the linear scale is less than 1 µm/m. PMID:27089348

  17. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link petrology-microtextures-microchemistry-isotope record but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times self-assessed uncertainty. Systematic errors may have analytical and petrological reasons. Metrological traceability is not yet implemented (substantial advance may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of the atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. Finally, sample

  18. Milling precision and fitting accuracy of Cerec Scan milled restorations.

    PubMed

    Arnetzl, G; Pongratz, D

    2005-10-01

    The milling accuracy of the Cerec Scan system was examined under standard practice conditions. For this purpose, one and the same 3D design, similar to an inlay, was milled 30 times from Vita Mark II ceramic blocks. Cylindrical diamond burs with 1.2 or 1.6 mm diameter were used. Each individual milled body was measured to 0.1 µm at five defined sections with a coordinate measuring instrument from the Zeiss company. In the statistical evaluation, both the different diamond bur diameters and the extent of material removal from the ceramic blank were taken into consideration; sections with large substance removal and sections with low substance removal were defined. The standard deviation for the 1.6-mm burs was clearly greater than that for the 1.2-mm burs at the sections with large substance removal. This difference was significant according to the Levene test for variance equality. In sections with low substance removal, no difference between the 1.6-mm and 1.2-mm burs was shown. The measuring results ranged between 0.053 and 0.14 mm. The distances at sections with large substance removal were larger than those at sections with low substance removal. The T-test for paired random samples showed that the distance with large substance removal when using the 1.6-mm bur was significantly larger than the distance with low substance removal. The difference was not significant for the small burs. It was shown several times statistically that the use of the cylindrical diamond bur with 1.6-mm diameter led to greater inaccuracies than the use of the 1.2-mm cylindrical diamond bur, especially at sites with large material removal. PMID:16689028

  19. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to the total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L⁻¹ SO₂ or 0.3 µL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with the percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
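
    A small illustrative sketch (the paired scores are invented) of the bias correction described at the end of the abstract: a reader's visual scores are regressed against the unbiased grid scores and the fitted line is then inverted to map that reader's assessments back onto the grid scale.

        import numpy as np

        # Hypothetical paired assessments for one reader (% leaf area injured)
        grid = np.array([5, 12, 20, 33, 41, 55, 62, 78])     # unbiased grid values
        visual = np.array([8, 16, 22, 38, 47, 60, 70, 85])   # reader's estimates

        # Fit visual = intercept + slope * grid, then invert for correction
        slope, intercept = np.polyfit(grid, visual, 1)

        def corrected(visual_score):
            # Map a reader's visual score back onto the grid scale
            return (visual_score - intercept) / slope

        print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
        print(round(corrected(50.0), 1))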

  20. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  1. An analytically linearized helicopter model with improved modeling accuracy

    NASA Technical Reports Server (NTRS)

    Jensen, Patrick T.; Curtiss, H. C., Jr.; Mckillip, Robert M., Jr.

    1991-01-01

    An analytically linearized model for helicopter flight response including rotor blade dynamics and dynamic inflow, that was recently developed, was studied with the objective of increasing the understanding, the ease of use, and the accuracy of the model. The mathematical model is described along with a description of the UH-60A Black Hawk helicopter and flight test used to validate the model. To aid in utilization of the model for sensitivity analysis, a new, faster, and more efficient implementation of the model was developed. It is shown that several errors in the mathematical modeling of the system caused a reduction in accuracy. These errors in rotor force resolution, trim force and moment calculation, and rotor inertia terms were corrected along with improvements to the programming style and documentation. Use of a trim input file to drive the model is examined. Trim file errors in blade twist, control input phase angle, coning and lag angles, main and tail rotor pitch, and uniform induced velocity, were corrected. Finally, through direct comparison of the original and corrected model responses to flight test data, the effect of the corrections on overall model output is shown.

  2. Expansion and dissemination of a standardized accuracy and precision assessment technique

    NASA Astrophysics Data System (ADS)

    Kwartowitz, David M.; Riti, Rachel E.; Holmes, David R., III

    2011-03-01

    The advent and development of new imaging techniques and image-guidance have had a major impact on surgical practice. These techniques attempt to allow the clinician to visualize not only what is currently visible, but also what lies beneath the surface, or function. Such systems are often based on tracking systems coupled with registration and visualization technologies. The accuracy and precision of the tracking system is thus critical to the overall accuracy and precision of the image-guidance system. In this work the accuracy and precision of an Aurora tracking system are assessed, using the technique specified in "A novel technique for analysis of accuracy of magnetic tracking systems used in image guided surgery." This analysis demonstrated that accuracy is dependent on distance from the tracker's field generator, with an RMS value of 1.48 mm. The error has similar characteristics and values to those in the previous work, thus validating this method for tracker analysis.
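
    A minimal sketch of the summary statistic reported above (the reference and tracked points are invented): the RMS Euclidean error between tracked positions and known reference positions, which could also be binned by distance from the field generator to expose the distance dependence.

        import math

        def rms_error(reference_mm, measured_mm):
            # Root-mean-square Euclidean error between matched 3-D points
            sq = [sum((r - m) ** 2 for r, m in zip(ref, meas))
                  for ref, meas in zip(reference_mm, measured_mm)]
            return math.sqrt(sum(sq) / len(sq))

        # Hypothetical phantom grid points (mm) and tracked readings
        ref  = [(0, 0, 100), (50, 0, 200), (0, 50, 300)]
        meas = [(0.4, -0.3, 100.6), (50.9, 0.7, 201.1), (1.2, 51.0, 301.5)]
        print(f"RMS error = {rms_error(ref, meas):.2f} mm")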

  3. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  4. S193 radiometer brightness temperature precision/accuracy for SL2 and SL3

    NASA Technical Reports Server (NTRS)

    Pounds, D. J.; Krishen, K.

    1975-01-01

    The precision and accuracy with which the S193 radiometer measured the brightness temperature of ground scenes are investigated. Estimates were derived from data collected during Skylab missions. Homogeneous ground sites were selected and the S193 radiometer brightness temperature data were analyzed. The precision was expressed as the standard deviation of the radiometer-acquired brightness temperature. Precision was determined to be 2.40 K or better, depending on mode and target temperature.

  5. Precision linear shaped charge severance of graphite-epoxy materials

    NASA Technical Reports Server (NTRS)

    Vigil, Manuel G.

    1993-01-01

    This paper presents Precision Linear Shaped Charge (PLSC) components designed to sever a variety of target materials. Recent data for the severance of graphite-epoxy panels or targets with PLSCs are presented, along with a brief history of the requirements that motivated the development of PLSCs for weapon components at Sandia National Laboratories. The Department of Energy's (DOE) nuclear weapon systems have continually decreased in size. Today's relatively small weapons require the design of much more efficient, lighter, and smaller explosive components because fragments, air shocks, and pyro-shocks associated with the function of these components can damage electrical and other sensitive components located nearby. The DOE requirements for PLSCs are listed. Linear shaped charge (LSC) components for weapon systems can therefore no longer be designed empirically or experimentally for a given application. Many of today's designs require severing concentric cylinders, for example, where the LSC jet is designed to sever only one of the two cylinders, as was the case for the B90/Nuclear Depth Strike Bomb. Therefore, code modeling and simulation technology must be utilized to obtain a better understanding of the LSC jet hydrodynamic penetration, fracture, shear, and spall mechanisms associated with the severance of metallic as well as composite targets.

  6. A linear actuator for precision positioning of dual objects

    NASA Astrophysics Data System (ADS)

    Peng, Yuxin; Cao, Jie; Guo, Zhao; Yu, Haoyong

    2015-12-01

    In this paper, a linear actuator for precision positioning of dual objects is proposed based on a double friction drive principle using a single piezoelectric element (PZT). The linear actuator consists of an electromagnet and a permanent magnet, which are connected by the PZT. The electromagnet serves as object 1, and another object (object 2) is attached to the permanent magnet by magnetic force. To position the two objects independently, two different friction drive modes are alternated by on-off control of the electromagnet. When the electromagnet is released from the guideway, it can be driven by the impact friction force generated by the PZT. When the electromagnet clamps onto the guideway and remains stationary, object 2 can be driven based on the principle of smooth impact friction drive. A prototype was designed and constructed, and experiments were carried out to test the basic performance of the actuator. It has been verified that, with a compact size of 31 mm (L) × 12 mm (W) × 8 mm (H), the two objects can achieve long strokes on the order of several millimeters and high resolutions of several tens of nanometers. Since the proposed actuator allows independent movement of two objects by a single PZT, it has the potential to be constructed compactly.

  7. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, background interferences, if present, can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity because the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), which allows up to 24 off-peak background measurements to be acquired from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. Overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
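
    The gain from the multi-point approach can be sketched as follows: rather than interpolating linearly between two off-peak points, a curved (here exponential) background is regressed through many off-peak measurements, and statistically inconsistent points are excluded before the background under the peak is evaluated. This is a hedged illustration of the general idea, not the instrument's actual implementation; the spectrometer positions and count rates below are invented.

```python
import numpy as np

# Hypothetical off-peak background measurements: spectrometer positions
# (arbitrary units relative to the peak) and count rates, including one
# interfered (outlier) point at +6.0.
pos = np.array([-8.0, -6.0, -4.5, -3.0, 3.0, 4.5, 6.0, 8.0])
cts = np.array([12.1, 11.2, 10.6, 10.1, 9.3, 9.0, 14.5, 8.4])

def fit_exponential(x, y):
    # Background modelled as y = a * exp(b * x); linear least squares on log(y).
    b, log_a = np.polyfit(x, np.log(y), 1)
    return np.exp(log_a), b

def multipoint_background(x, y, n_sigma=2.0):
    a, b = fit_exponential(x, y)
    resid = y - a * np.exp(b * x)
    keep = np.abs(resid - resid.mean()) < n_sigma * resid.std(ddof=1)
    a, b = fit_exponential(x[keep], y[keep])   # refit without outliers
    return a, b, keep

a, b, keep = multipoint_background(pos, cts)
bkg_at_peak = a * np.exp(b * 0.0)              # background under the peak (x = 0)
print(f"kept {keep.sum()}/{len(pos)} points, background at peak = {bkg_at_peak:.2f} cts/s")
```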

  8. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations. PMID:15832575

  9. Active transport improves the precision of linear long distance molecular signalling

    NASA Astrophysics Data System (ADS)

    Godec, Aljaž; Metzler, Ralf

    2016-09-01

    Molecular signalling in living cells occurs at low copy numbers and is thereby inherently limited by the noise imposed by thermal diffusion. The precision at which biochemical receptors can count signalling molecules is intimately related to the noise correlation time. In addition to passive thermal diffusion, messenger RNA and vesicle-engulfed signalling molecules can transiently bind to molecular motors and are actively transported across biological cells. Active transport is most beneficial when trafficking occurs over large distances, for instance up to the order of 1 metre in neurons. Here we explain how intermittent active transport allows for faster equilibration upon a change in concentration triggered by biochemical stimuli. Moreover, we show how intermittent active excursions induce qualitative changes in the noise in effectively one-dimensional systems such as dendrites. Thereby they allow for significantly improved signalling precision in the sense of a smaller relative deviation in the concentration read-out by the receptor. On the basis of linear response theory we derive the exact mean field precision limit for counting actively transported molecules. We explain how intermittent active excursions disrupt the recurrence in the molecular motion, thereby facilitating improved signalling accuracy. Our results provide a deeper understanding of how recurrence affects molecular signalling precision in biological cells and novel medical-diagnostic devices.

  10. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  11. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1984

    EPA Science Inventory

    Precision and accuracy data obtained from state and local agencies during 1984 are summarized and compared to data reported earlier for the period 1981-1983. A continual improvement in the completeness of the data is evident. Improvement is also evident in the size of the precisi...

  12. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1983

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1983 are summarized and evaluated. Some comparisons are made with the results previously reported for 1981 and 1982 to determine the indication of any trends. Some trends indicated improvement in the comple...

  13. PRECISION AND ACCURACY ASSESSMENTS FOR STATE AND LOCAL AIR MONITORING NETWORKS, 1985

    EPA Science Inventory

    Precision and accuracy data obtained from State and local agencies during 1985 are summarized and evaluated. Some comparisons are made with the results reported for prior years to determine any trends. Some trends indicated continued improvement in the completeness of reporting o...

  14. ASSESSMENT OF THE PRECISION AND ACCURACY OF SAM AND MFC MICROCOSMS EXPOSED TO TOXICANTS

    EPA Science Inventory

    The results of 30 mixed flask culture (MFC) and four standardized aquatic microcosm (SAM) microcosm experiments were used to describe the precision and accuracy of these two protocols. Coefficients of variation (CV) for chemical measurements (DO, pH) were generally less than 7%, f...

  15. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures can serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be better than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The largest contribution to mechanical variations is due to table rotation, so it is important to correct these variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows statistical errors to be considered in the treatment planning volume margins.

  16. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System

    PubMed Central

    Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.

    2015-01-01

    Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has been generally overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a set of methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence

  17. Accuracy, precision and economics: The cost of cutting-edge chemical analyses

    NASA Astrophysics Data System (ADS)

    Hamilton, B.; Hannigan, R.; Jones, C.; Chen, Z.

    2002-12-01

    Otolith (fish ear bone) chemistry has proven to be an exceptional tool for the identification of essential fish habitats in marine and freshwater environments. These measurements, which explore the variations in trace element content of otoliths relative to calcium (e.g., Ba/Ca, Mg/Ca), provide data to resolve the differences in habitat water chemistry on the watershed to catchment scale. The vast majority of these analyses are performed by laser ablation ICP-MS using a high-resolution instrument. However, few laboratories are equipped with this configuration and many researchers measure the trace element chemistry of otoliths by whole digestion ICP-MS using lower resolution quadrupole instruments. This study examines the differences in accuracy and precision between three elemental analysis methods using whole otolith digestion on a low resolution ICP-MS (ELAN 9000). The first, and most commonly used, technique is external calibration with internal standardization. This technique is the most cost-effective but also has limitations in terms of method detection. The second, standard addition, is more costly in terms of time and use of standard materials but offers gains in precision and accuracy. The third, isotope dilution, is the least cost-effective but the most accurate of the elemental analysis techniques. Based on the results of this study, which seeks to identify the technique that is the easiest to implement yet has the precision and accuracy necessary to resolve spatial variations in habitats, we conclude that external calibration with internal standardization can be sufficient to resolve spatial and temporal variations in marine and estuarine environments (+/- 6-8% accuracy). Standard addition increases the accuracy of measurements to 2-5% and is ideal for freshwater studies. While there is a gain in accuracy and precision with isotope dilution, the spatial and temporal resolution is no greater with this technique than with the others.
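
    The first technique mentioned, external calibration with internal standardization, amounts to ratioing each analyte signal to an internal-standard signal measured in the same solution and reading unknowns off a calibration line built from standards. The sketch below is illustrative only; the element pair, concentrations, and intensity ratios are assumed, not taken from the study.

```python
import numpy as np

# Hypothetical calibration standards: Ba concentration (ug/L) and the measured
# intensity ratio Ba/In, where In is the internal standard spiked into every
# solution to correct for instrument drift and matrix effects.
conc_std = np.array([0.0, 1.0, 5.0, 10.0, 20.0])
ratio_std = np.array([0.002, 0.051, 0.248, 0.495, 1.010])

# External calibration line fitted to the standards (ratio vs concentration).
slope, intercept = np.polyfit(conc_std, ratio_std, 1)

def quantify(ratio_sample):
    """Concentration of an unknown sample from its Ba/In intensity ratio."""
    return (ratio_sample - intercept) / slope

print(f"sample at ratio 0.32 -> {quantify(0.32):.2f} ug/L Ba")
```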

  18. Accuracy in Dental Medicine, A New Way to Measure Trueness and Precision

    PubMed Central

    Ender, Andreas; Mehl, Albert

    2014-01-01

    Reference scanners are used in dental medicine to verify a wide range of procedures. The main interest is in verifying impression methods, as they serve as the basis for dental restorations. The current limitation of many reference scanners is the lack of accuracy when scanning large objects like full dental arches, or the limited possibility to assess detailed tooth surfaces. A new reference scanner, based on the focus variation scanning technique, was evaluated with regard to highest local and general accuracy. A specific scanning protocol was tested to scan original tooth surfaces from dental impressions. Different model materials were also verified. The results showed a high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision in the case of full arch scans. Current dental impression methods showed much higher deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects like single tooth surfaces can be scanned with an even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many areas of dental research. The different magnification levels, combined with a high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arch changes. PMID:24836007

  19. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    PubMed Central

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods used in previous studies. In the OE and EM data collections, specific landmarks on the rigid bodies were determined using digitization. In the EM+CT and OE+CT data sets, the landmark locations were obtained from the CT images. 3D linear distances and 3D curvilinear path distances were calculated for the points. The accuracy and precision for all 4 methods were evaluated (EM, OE, EM+CT and OE+CT). In addition, results were compared with and without the CT imaging (EM vs. EM+CT, OE vs. OE+CT). All systems overestimated the actual 3D curvilinear path lengths. All systems also underestimated the actual rotation values. The accuracy of all methods was within 0.5 mm for 3D curvilinear path calculations, 0.05 mm for 3D linear distance calculations, and 0.2° for rotation calculations. In addition, Bland-Altman plots for each configuration of the systems suggest that measurements obtained from either system are repeatable and comparable. PMID:18617178

  20. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with results from Australis photogrammetry software, which simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  1. ASSESSING THE ACCURACY OF THE LINEARIZED LANGMUIR MODEL

    Technology Transfer Automated Retrieval System (TEKTRAN)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the nonlinear Langmuir model. To avoid the difficulties in fitting the nonlinear Langmuir equation to sorption data, linearized versions are commonly used. Although concerns have been raised in the past regarding...
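
    The nonlinear-versus-linearized contrast in this record can be made explicit: the Langmuir isotherm S = Smax*K*C/(1 + K*C) can be fit directly by nonlinear least squares, or rearranged into the linear form C/S = C/Smax + 1/(K*Smax) and fit by ordinary regression, which distorts the error structure and is the usual source of concern. A minimal sketch with invented sorption data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical P sorption data: equilibrium concentration C (mg/L) and
# sorbed amount S (mg/kg).
C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
S = np.array([55.0, 95.0, 150.0, 230.0, 290.0, 330.0])

def langmuir(C, S_max, K):
    return S_max * K * C / (1.0 + K * C)

# Nonlinear fit of the Langmuir model directly.
popt, _ = curve_fit(langmuir, C, S, p0=[300.0, 0.5])
S_max_nl, K_nl = popt

# Common linearization: C/S = C/S_max + 1/(K*S_max), fit by linear regression.
slope, intercept = np.polyfit(C, C / S, 1)
S_max_lin = 1.0 / slope
K_lin = slope / intercept

print(f"nonlinear:  S_max={S_max_nl:.1f}, K={K_nl:.3f}")
print(f"linearized: S_max={S_max_lin:.1f}, K={K_lin:.3f}")
```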

  2. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324

  3. Measuring the accuracy and precision of quantitative coronary angiography using a digitally simulated test phantom

    NASA Astrophysics Data System (ADS)

    Morioka, Craig A.; Whiting, James S.; LeFree, Michelle T.

    1998-06-01

    Quantitative coronary angiography (QCA) diameter measurements have been used as an endpoint measurement in clinical studies involving therapies to reduce coronary atherosclerosis. The accuracy and precision of the QCA measure can affect the sample size and conclusions of a clinical study. Measurements made on x-ray test phantoms may not reflect the precision and accuracy achievable for actual arteries in clinical digital angiograms because phantoms do not contain complex patient structures. Determining the clinical performance of QCA algorithms under clinical conditions is difficult because: (1) no gold standard test object exists in clinical images, and (2) phantom images do not have any structured background noise. We propose the use of computer simulated arteries as a replacement for traditional angiographic test phantoms to evaluate QCA algorithm performance.

  4. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
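
    As a toy illustration of the reduced-precision idea (not the three-tier model introduced in the abstract), the single-tier Lorenz '96 system can be integrated twice from the same initial state, once in double and once in half precision, and the forecasts compared. The integration scheme, forcing, and step count below are assumptions chosen only to keep the sketch short.

```python
import numpy as np

def lorenz96_rhs(x, forcing=8.0):
    # Standard single-tier Lorenz '96 tendencies.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, dtype, steps=2000, dt=0.01):
    x = x0.astype(dtype)
    for _ in range(steps):
        # Simple midpoint (RK2) step; all arithmetic stays in `dtype`.
        k1 = lorenz96_rhs(x)
        x = (x + dt * lorenz96_rhs(x + dtype(0.5 * dt) * k1)).astype(dtype)
    return x

x0 = 8.0 + 0.01 * np.random.default_rng(0).standard_normal(40)
x_double = integrate(x0, np.float64)
x_half = integrate(x0, np.float16)   # "reduced precision" run

print("RMS forecast difference (double vs half precision):",
      float(np.sqrt(np.mean((x_double - x_half.astype(np.float64)) ** 2))))
```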

  5. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.
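
    The abstract does not reproduce the combinatorial rule itself; a common convention for such error budgets is to combine independent random error sources in quadrature (root-sum-square) and to add systematic terms linearly, as in the hedged sketch below. The error sources and magnitudes are invented for illustration and are not the authors' budget.

```python
import math

# Hypothetical error budget for a measuring machine (all values in micrometres).
random_errors = {            # assumed independent, combined in quadrature
    "scale_interpolation": 0.05,
    "thermal_drift": 0.08,
    "servo_jitter": 0.03,
    "probe_repeatability": 0.06,
}
systematic_errors = {        # assumed correlated / sign-definite, summed linearly
    "squareness_residual": 0.04,
    "abbe_offset": 0.07,
}

random_total = math.sqrt(sum(v ** 2 for v in random_errors.values()))
systematic_total = sum(systematic_errors.values())
predicted_accuracy = random_total + systematic_total

print(f"random (RSS): {random_total:.3f} um, systematic: {systematic_total:.3f} um, "
      f"predicted overall: {predicted_accuracy:.3f} um")
```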

  6. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the {sup 75}Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  7. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  8. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive the alignment capability. Depending on the optical system, focal plane flatness of less than 25 μm (0.001") is required over the transition from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of detector integration into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing the alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing the measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced in the detector. When required, the co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability, and precision of the measurements.

  9. Precision measurement of a particle mass at the linear collider

    SciTech Connect

    Milstene, C.; Freitas, A.; Schmitt, M.; Sopczak, A.; /Lancaster U.

    2007-06-01

    A precision measurement of the stop mass at the ILC is performed using a method based on cross-section measurements at two different center-of-mass energies, which allows both the statistical and systematic errors to be minimized. In the framework of the MSSM, a light stop, compatible with electroweak baryogenesis, is studied in its decay into a charm jet and a neutralino, the Lightest Supersymmetric Particle (LSP) and a dark matter candidate. This decay takes place for a small stop-neutralino mass difference.

  10. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand King et al (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to +/- 3m using a 3 MHz radar system (which has a quarter wavelength of 14m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
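
    The two criteria in this record can be checked with a short calculation: the quarter-wavelength rule limits the separation of two comparable reflectors, while the range to a single strong reflector is limited by the timing resolution of the system. The radio-wave speed in ice and the digitisation interval below are assumed, typical values, not figures from the cited surveys.

```python
# Speed of radio waves in ice (m/s); commonly taken as ~1.68e8.
v_ice = 1.68e8

f_centre = 3.0e6                         # 3 MHz radar, as in the record
wavelength = v_ice / f_centre
quarter_wave = wavelength / 4.0          # two-target resolution criterion

dt_sample = 10e-9                        # assumed 10 ns digitisation interval
range_per_sample = v_ice * dt_sample / 2.0   # two-way travel: divide by 2

print(f"quarter wavelength in ice: {quarter_wave:.1f} m")            # ~14 m
print(f"range increment per time sample: {range_per_sample:.2f} m")  # ~0.8 m
```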

  11. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter for assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when it is properly applied, but a common problem is the orientation of the camera lens when the picture is taken. The camera lens axis should be perpendicular to the wound plane, and if it is not, the measured area differs from the true area. Results The current study shows that calibration using 2 rulers placed in parallel below and above the wound increases the precision of area measurement on average 3.8-fold compared with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with two-ruler digital planimetry than with the Visitrak device, the Silhouette Mobile device, or the AreaMe software-based method. Conclusion Calibration with 2 rulers in digital planimetry markedly increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747

  12. Accuracy of 3D white light scanning of abutment teeth impressions: evaluation of trueness and precision

    PubMed Central

    Jeon, Jin-Hun; Kim, Hae-Young; Kim, Ji-Hwan

    2014-01-01

    PURPOSE This study aimed to evaluate the accuracy of digitizing dental impressions of abutment teeth using a white light scanner and to compare the findings among tooth types. MATERIALS AND METHODS To assess precision, impressions of a canine, premolar, and molar prepared to receive all-ceramic crowns were repeatedly scanned to obtain five sets of 3-D data (STL files). Point clouds were compared and error sizes were measured (n=10 per type). Next, to evaluate trueness, impressions of the teeth were rotated by 10°-20° and scanned. The obtained data were compared with the first data set from the precision assessment, and the error sizes were measured (n=5 per type). The Kruskal-Wallis test was performed to evaluate precision and trueness among the three tooth types, and post-hoc comparisons were performed using the Mann-Whitney U test with Bonferroni correction (α=.05). RESULTS Precision discrepancies for the canine, premolar, and molar were 3.7 µm, 3.2 µm, and 7.3 µm, respectively, indicating the poorest precision for the molar (P<.001). Trueness discrepancies for the tooth types were 6.2 µm, 11.2 µm, and 21.8 µm, respectively, indicating the poorest trueness for the molar (P=.007). CONCLUSION With respect to accuracy, the molar showed the largest discrepancies compared with the canine and premolar. Digitizing dental impressions of abutment teeth using a white light scanner was assessed to be a highly accurate method and provided discrepancy values in a clinically acceptable range. Further study is needed to improve the digitizing performance of white light scanning on axial walls. PMID:25551007

  13. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  14. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases into sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than those derived from scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequences, such consideration is often an afterthought that occurs during the data analysis process.

  15. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  16. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results There was a statistically significant correlation between each of the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient’s body temperature in the intensive care unit because of its high accuracy and acceptable precision. PMID:27621673

  17. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although assessing the precision and accuracy of ecorestoration field data is conceptually the same as for laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  18. Super-linear Precision in Simple Neural Population Codes

    NASA Astrophysics Data System (ADS)

    Schwab, David; Fiete, Ila

    2015-03-01

    A widely used tool for quantifying the precision with which a population of noisy sensory neurons encodes the value of an external stimulus is the Fisher Information (FI). Maximizing the FI is also a commonly used objective for constructing optimal neural codes. The primary utility and importance of the FI arises because it gives, through the Cramer-Rao bound, the smallest mean-squared error achievable by any unbiased stimulus estimator. However, it is well-known that when neural firing is sparse, optimizing the FI can result in codes that perform very poorly when considering the resulting mean-squared error, a measure with direct biological relevance. Here we construct optimal population codes by minimizing mean-squared error directly and study the scaling properties of the resulting network, focusing on the optimal tuning curve width. We then extend our results to continuous attractor networks that maintain short-term memory of external stimuli in their dynamics. Here we find similar scaling properties in the structure of the interactions that minimize diffusive information loss.
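
    To make the Fisher-information argument concrete, consider a population of independent Poisson neurons with Gaussian tuning curves: the FI of the population is the sum over neurons of f'(s)^2/f(s), and the Cramér-Rao bound 1/FI lower-bounds the mean-squared error of any unbiased estimator. The sketch below uses illustrative parameters and is not the mean-squared-error optimisation carried out in the work.

```python
import numpy as np

def fisher_information(stimulus, centers, width, r_max, dt=1.0):
    """FI for independent Poisson neurons with Gaussian tuning curves.

    For Poisson spiking with rate f_i(s), FI(s) = sum_i f_i'(s)^2 / f_i(s) * dt.
    """
    f = r_max * np.exp(-0.5 * ((stimulus - centers) / width) ** 2)
    df = -f * (stimulus - centers) / width ** 2
    return np.sum(df ** 2 / f) * dt

centers = np.linspace(-10.0, 10.0, 50)   # 50 neurons tiling the stimulus axis
for width in (0.5, 1.0, 2.0, 4.0):       # tuning-curve widths (illustrative)
    fi = fisher_information(0.3, centers, width, r_max=20.0)
    print(f"width {width:>4}: FI = {fi:8.1f}, Cramer-Rao bound on MSE = {1.0 / fi:.2e}")
```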

  19. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  20. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised an interest in precise and accurate pH measurement in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. However, until now the method had only been tested in oceanic systems with a relatively stable and high salinity and a small pH range. This paper describes the first application of such a pH measurement system under the conditions of the Baltic Sea, which is characterized by a wide salinity and pH range. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 was achieved within the observed pH and salinity ranges in the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes in the Baltic Sea. Because of the encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, it is recommended to use the spectrophotometric method also for pH monitoring and trend detection in the Baltic Sea.

  1. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. Accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future desig
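
    For context, the standard form of the Ergun equation relates the pressure gradient across a packed bed to the superficial gas velocity, particle size, and void fraction, and can be inverted for the void fraction given a measured pressure drop. The sketch below shows that standard form, not the novel rearrangement the authors used, and all property values are illustrative.

```python
from scipy.optimize import brentq

def ergun_dp_per_L(U, eps, d_p, rho_g=1.2, mu_g=1.8e-5, phi=1.0):
    """Ergun equation: pressure gradient (Pa/m) through a packed bed.

    U     superficial gas velocity (m/s)
    eps   bed void fraction (-)
    d_p   particle diameter (m); phi is the sphericity.
    """
    viscous = 150.0 * mu_g * (1.0 - eps) ** 2 * U / (eps ** 3 * (phi * d_p) ** 2)
    inertial = 1.75 * rho_g * (1.0 - eps) * U ** 2 / (eps ** 3 * phi * d_p)
    return viscous + inertial

# Invert for the void fraction from a measured pressure gradient
# (illustrative numbers: 0.05 m/s, 200 um particles, 2500 Pa/m).
U, d_p, dp_meas = 0.05, 200e-6, 2500.0
eps = brentq(lambda e: ergun_dp_per_L(U, e, d_p) - dp_meas, 0.3, 0.7)
print(f"inferred bed void fraction: {eps:.3f}")
```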

  2. Learning Linear Spatial-Numeric Associations Improves Accuracy of Memory for Numbers

    PubMed Central

    Thompson, Clarissa A.; Opfer, John E.

    2016-01-01

    Memory for numbers improves with age and experience. One potential source of improvement is a logarithmic-to-linear shift in children’s representations of magnitude. To test this, Kindergartners and second graders estimated the location of numbers on number lines and recalled numbers presented in vignettes (Study 1). Accuracy at number-line estimation predicted memory accuracy on a numerical recall task after controlling for the effect of age and ability to approximately order magnitudes (mapper status). To test more directly whether linear numeric magnitude representations caused improvements in memory, half of children were given feedback on their number-line estimates (Study 2). As expected, learning linear representations was again linked to memory for numerical information even after controlling for age and mapper status. These results suggest that linear representations of numerical magnitude may be a causal factor in development of numeric recall accuracy. PMID:26834688

  3. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min-1 (100 g)-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min-1 (100 g)-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min-1 (100 g)-1 and R2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min-1 (100 g)-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.

  4. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  5. Effects of shortened acquisition time on accuracy and precision of quantitative estimates of organ activity1

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Purpose: Quantitative estimation of in vivo organ uptake is an essential part of treatment planning for targeted radionuclide therapy. This usually involves the use of planar or SPECT scans with acquisition times chosen based more on image quality considerations than on the minimum needed for precise quantification. In previous simulation studies at clinical count levels (185 MBq 111In), the authors observed larger variations in accuracy of organ activity estimates resulting from anatomical and uptake differences than from statistical noise. This suggests that it is possible to reduce the acquisition time without substantially increasing the variation in accuracy. Methods: To test this hypothesis, the authors compared the accuracy and variation in accuracy of organ activity estimates obtained from planar and SPECT scans at various count levels. A simulated phantom population with realistic variations in anatomy and biodistribution was used to model variability in a patient population. Planar and SPECT projections were simulated using previously validated Monte Carlo simulation tools. The authors simulated the projections at count levels approximately corresponding to 1.5–30 min of total acquisition time. The projections were processed using previously described quantitative SPECT (QSPECT) and planar (QPlanar) methods. The QSPECT method was based on the OS-EM algorithm with compensations for attenuation, scatter, and collimator-detector response. The QPlanar method is based on the ML-EM algorithm using the same model-based compensation for all the image degrading effects as the QSPECT method. The volumes of interest (VOIs) were defined based on the true organ configuration in the phantoms. The errors in organ activity estimates from different count levels and processing methods were compared in terms of mean and standard deviation over the simulated phantom population. Results: There was little degradation in quantitative reliability when the acquisition time was reduced.
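
    For orientation, the core ML-EM update that both the QSPECT (OS-EM) and QPlanar (ML-EM) methods build on is a short multiplicative iteration. The sketch below is a bare-bones version that omits the attenuation, scatter, and collimator-detector response compensation described above; the generic system matrix A and count vector are assumed inputs for illustration.

      import numpy as np

      def ml_em(A, counts, n_iter=50):
          # A: system matrix of shape (n_bins, n_voxels); counts: measured projections (n_bins,)
          A = np.asarray(A, dtype=float)
          y = np.asarray(counts, dtype=float)
          x = np.ones(A.shape[1])              # uniform initial activity estimate
          sensitivity = A.sum(axis=0)          # back-projection of ones
          for _ in range(n_iter):
              expected = A @ x                 # forward projection of current estimate
              ratio = np.where(expected > 0, y / expected, 0.0)
              x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)   # multiplicative EM update
          return x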

  6. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  7. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.

  8. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  9. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGESBeta

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  10. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ± 1 year of actual age, but underestimated ages of wolves ≥ 9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or severely broken and missing teeth.

  11. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

    This study aimed to create an evidence base for detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) of five published algorithms for the detection of hoof-on and hoof-off using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration, with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot, we derived a combination of method A for hoof-on and method E for hoof-off, resulting in a bias of -6.2% of stance phase duration in the front limbs, and method B for the hind limbs, with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground, and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot. PMID:24529754
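
    As a rough illustration of the velocity-based event detection underlying method A, the sketch below flags stance frames where the horizontal hoof speed drops below a threshold; the 0.1 m/s threshold and the simple gradient differentiation are assumptions made for the example, not values from the study.

      import numpy as np

      def detect_gait_events(hoof_x, dt, speed_threshold=0.1):
          # hoof_x: horizontal hoof position (m) per frame; dt: frame interval (s)
          v = np.gradient(np.asarray(hoof_x, dtype=float), dt)   # horizontal hoof velocity
          stance = np.abs(v) < speed_threshold                   # True while the hoof is (nearly) stationary
          edges = np.diff(stance.astype(int))
          hoof_on = np.flatnonzero(edges == 1) + 1               # transitions into stance
          hoof_off = np.flatnonzero(edges == -1) + 1             # transitions out of stance
          return hoof_on, hoof_off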

  12. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point

  13. Calibration of non-contact incremental linear encoders using a macro-micro dual-drive high-precision comparator

    NASA Astrophysics Data System (ADS)

    Yu, Haoyu; Liu, Hongzhong; Li, Xuan; Ye, Guoyong; Shi, Yongsheng; Yin, Lei; Jiang, Weitao; Chen, Bangdao; Liu, Xiaokang

    2015-09-01

    The accuracy of a linear encoder is determined by encoder-specific errors, which consist of both long-range and cyclic errors. Generally, it is difficult to measure the two errors of a non-contact incremental linear encoder with a large measuring range and small signal period in one measurement because of the contradiction between long travel range and high resolution. To resolve this issue, a prototype high-precision interferometric comparator with a macro-micro dual-drive system is presented. The measurement and motion resolution of the comparator are 1 nm and 3 nm, respectively. A measuring range of 320 mm is realized and the theoretical maximum range of the comparator is 2 m. The comparator mainly includes a high-accuracy aerostatic linear-motion stage, a constant displacement ratio piezoelectric-driven stage, two laser interferometers, a 6-DOF grating-pair position adjustment device and a PC-based data processor. The linear movement for the long-range error and cyclic error measurements is provided by the long-stroke stage and the piezoelectric-driven stage, respectively. The movement can be measured by the encoder and then calibrated against the corresponding laser interferometer. In the experiment, the accuracy of a non-contact incremental linear encoder with a 20 μm signal period and 320 mm measuring range proposed by our team was calibrated after proper mounting. The long-range error is measured to be 3.123 μm, and the cyclic error is within ±0.159 μm, which matches well with the theoretical estimation of ±0.145 μm. The measurement uncertainties are estimated and the results confirm the effectiveness and feasibility of the proposed scheme and instruments.

  14. Precision, accuracy, and application of diver-towed underwater GPS receivers.

    PubMed

    Schories, Dirk; Niedzwiedz, Gerd

    2012-04-01

    Diver-towed global positioning system (GPS) handhelds have been used for a few years in underwater monitoring studies. We modeled the accuracy of this method using the software KABKURR, originally developed by the University of Rostock for fishing and marine engineering. Additionally, three field experiments were conducted to estimate the precision of the method and apply it in the field: (1) an experiment of underwater transects from 5 to 35 m in the Southern Chile fjord region, (2) a transect from 5 to 30 m under extreme climatic conditions in the Antarctic, and (3) an underwater tracking experiment at Lake Ranco, Southern Chile. The coiled cable length in relation to water depth is the main error source besides the signal quality of the GPS under calm weather conditions. The forces used in the model resulted in displacements of 2.3, 3.2, 4.6, 5.5, and 6.8 m at depths of 5, 10, 20, 30, and 40 m, respectively, when the cable was only 0.5 m longer than the water depth. The GPS buoy requires good buoyancy in order to keep its position at the water surface when the diver is trying to minimize any additional cable extension error. The diver has to apply a tensile force for shortening the cable length at the lower cable end. Repeated diving along transect lines from 5 to 35 m resulted only in small deviations independent of water depth, indicating the precision of the method for monitoring studies. Routing of given reference points with a Garmin 76CSx handheld placed in an underwater housing resulted in mean deviations of less than 6 m at a water depth of 10 m. Thus, we can confirm that diver-towed GPS handhelds give promising results when used for underwater research in shallow water and open a wide field of applicability, but no submeter accuracy is possible due to the different error sources. PMID:21614620

  15. Welcome detailed data, but with a grain of salt: accuracy, precision, uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, Francesco; Di Baldassarre, Giuliano; Todini, Ezio

    2013-04-01

    New survey techniques are providing a huge amount of high-detailed and accurate data which can be extremely valuable for flood inundation modeling. Such data availability raises the issue of how to exploit their information content to provide reliable flood risk mapping and predictions. We think that these data should form the basis of hydraulic modelling anytime they are available. However, high expectations regarding these datasets should be tempered as some important issues should be considered. These include: the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, due to the scarcity of observed data; the computational efficiency; the false confidence that can be given by high-resolution results, as accuracy of results is not necessarily increased by higher precision. We briefly discuss these issues and existing approaches which can be used to manage high detailed data. In our opinion, methods based on sub-grid and roughness upscaling treatments would be in many instances an appropriate solution to maintain consistence with the uncertainty related to model structure and data available for model building and evaluation.

  16. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E.

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.

  17. Modeling precision and accuracy of a LWIR microgrid array imaging polarimeter

    NASA Astrophysics Data System (ADS)

    Boger, James K.; Tyo, J. Scott; Ratliff, Bradley M.; Fetrow, Matthew P.; Black, Wiley T.; Kumar, Rakesh

    2005-08-01

    Long-wave infrared (LWIR) imaging is a prominent and useful technique for remote sensing applications. Moreover, polarization imaging has been shown to provide additional information about the imaged scene. However, polarization estimation requires that multiple measurements be made of each observed scene point under optically different conditions. This challenging measurement strategy makes the polarization estimates prone to error. The sources of this error differ depending upon the type of measurement scheme used. In this paper, we examine one particular measurement scheme, namely, a simultaneous multiple-measurement imaging polarimeter (SIP) using a microgrid polarizer array. The imager is composed of a microgrid polarizer masking a LWIR HgCdTe focal plane array (operating at 8.3-9.3 μm), and is able to make simultaneous modulated scene measurements. In this paper we present an analytical model that is used to predict the performance of the system in order to help interpret real results. This model is radiometrically accurate and accounts for the temperature of the camera system optics, spatial nonuniformity and drift, optical resolution and other sources of noise. The model is then validated in simulation against laboratory measurements. The precision and accuracy of the SIP instrument are then studied.

  18. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized. PMID:25868451

  19. Evaluation of Precise Point Positioning accuracy under large total electron content variations in equatorial latitudes

    NASA Astrophysics Data System (ADS)

    Rodríguez-Bilbao, I.; Moreno Monge, B.; Rodríguez-Caderot, G.; Herraiz, M.; Radicella, S. M.

    2015-01-01

    The ionosphere is one of the largest contributors to errors in GNSS positioning. Although in Precise Point Positioning (PPP) the ionospheric delay is corrected to a first order through the 'iono-free combination', significant errors may still be observed when large electron density gradients are present. To confirm this phenomenon, the temporal behavior of intense fluctuations of total electron content (TEC) and PPP altitude accuracy at equatorial latitudes are analyzed during four years of different solar activity. For this purpose, equatorial plasma irregularities are identified with periods of high rate of change of TEC (ROT). The largest ROT values are observed from 19:00 to 01:00 LT, especially around magnetic equinoxes, although some differences exist between the stations depending on their location. Highest ROT values are observed in the American and African regions. In general, large ROT events are accompanied by frequent satellite signal losses and an increase in the PPP altitude error during years 2001, 2004 and 2011. A significant increase in the PPP altitude error RMS is observed in epochs of high ROT with respect to epochs of low ROT in years 2001, 2004 and 2011, reaching up to 0.26 m in the 19:00-01:00 LT period.
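
    The rate of change of TEC (ROT) used above as an irregularity proxy is simply the first difference of the TEC time series per unit time. A minimal sketch follows, assuming TEC is sampled at a fixed interval and expressed in TEC units; the sample values are illustrative only.

      import numpy as np

      def rate_of_tec_change(tec, interval_minutes=1.0):
          # ROT in TECU/min from a TEC series sampled every interval_minutes
          return np.diff(np.asarray(tec, dtype=float)) / interval_minutes

      print(rate_of_tec_change([25.0, 25.4, 26.9, 26.1], interval_minutes=0.5))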

  20. David Weston--Ocean science of invariant principles, total accuracy, and appropriate precision

    NASA Astrophysics Data System (ADS)

    Roebuck, Ian

    2002-11-01

    David Weston's entire professional career was as a member of the Royal Navy Scientific Service, working in the field of ocean acoustics and its applications to maritime operations. The breadth of his interests has often been remarked upon, but because of the sensitive nature of his work at the time, it was indeed much more diverse than his published papers showed. This presentation, from the successors to the laboratories he illuminated for many years, is an attempt to fill in at least some of the gaps. The presentation also focuses on the underlying scientific philosophy of David's work, rooted in the British tradition of applicable mathematics and physics. A deep appreciation of the role of invariants and dimensional methods, and awareness of the sensitivity of any models to changes to the input assumptions, was at the heart of his approach. The needs of the Navy kept him rigorous in requiring accuracy, and clear about the distinction between it and precision. Examples of these principles are included, still as relevant today as they were when he insisted on applying them 30 years ago.

  1. Sub-nm accuracy metrology for ultra-precise reflective X-ray optics

    NASA Astrophysics Data System (ADS)

    Siewert, F.; Buchheim, J.; Zeschke, T.; Brenner, G.; Kapitzki, S.; Tiedtke, K.

    2011-04-01

    The transport and monochromatization of synchrotron light from a high brilliant laser-like source to the experimental station without significant loss of brilliance and coherence is a challenging task in X-ray optics and requires optical elements of utmost accuracy. These are wave-front preserving plane mirrors with lengths of up to 1 m characterized by residual slope errors in the range of 0.05 μrad (rms) and values of 0.1 nm (rms) for micro-roughness. In the case of focusing optical elements like elliptical cylinders the required residual slope error is in the range of 0.25 μrad rms and better. In addition the alignment of optical elements is a critical and beamline performance limiting topic. Thus the characterization of ultra-precise reflective optical elements for FEL-beamline application in the free and mounted states is of significant importance. We will discuss recent results in the field of metrology achieved at the BESSY-II Optics Laboratory (BOL) of the Helmholtz Zentrum Berlin (HZB) by use of the Nanometer Optical Component Measuring Machine (NOM). Different types of mirror have been inspected by line-scan and slope mapping in the free and mounted states. Based on these results the mirror clamping of a combined mirror/grating set-up for the BL-beamlines at FLASH was improved.

  2. Obtaining identical results with double precision global accuracy on different numbers of processors in parallel particle Monte Carlo simulations

    SciTech Connect

    Cleveland, Mathew A. Brunner, Thomas A.; Gentile, Nicholas A.; Keasler, Jeffrey A.

    2013-10-15

    We describe and compare different approaches for achieving numerical reproducibility in photon Monte Carlo simulations. Reproducibility is desirable for code verification, testing, and debugging. Parallelism creates a unique problem for achieving reproducibility in Monte Carlo simulations because it changes the order in which values are summed. This is a numerical problem because double precision arithmetic is not associative. Parallel Monte Carlo, both domain replicated and decomposed simulations, will run their particles in a different order during different runs of the same simulation because of the non-reproducibility of communication between processors. In addition, runs of the same simulation using different domain decompositions will also result in particles being simulated in a different order. In [1], a way of eliminating non-associative accumulations using integer tallies was described. This approach successfully achieves reproducibility at the cost of lost accuracy by rounding double precision numbers to fewer significant digits. This integer approach, and other extended and reduced precision reproducibility techniques, are described and compared in this work. Increased precision alone is not enough to ensure reproducibility of photon Monte Carlo simulations. Non-arbitrary precision approaches require a varying degree of rounding to achieve reproducibility. For the problems investigated in this work, double precision global accuracy was achievable by using 100 bits of precision or greater on all unordered sums, which were subsequently rounded to double precision at the end of every time-step.
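
    The integer-tally idea referenced above can be illustrated in a few lines: each double-precision contribution is rounded to a fixed number of fractional bits before accumulation, so the total no longer depends on summation order. The scale factor below is an arbitrary choice for the example; a production code would pick it from the problem's dynamic range and perform the integer reduction across processors.

      def fixed_point_sum(values, scale=2 ** 40):
          # Round each contribution to an integer multiple of 1/scale; integer addition is
          # associative, so any summation order gives a bit-identical result (at the cost of
          # the rounding described in the abstract above).
          total = 0
          for v in values:
              total += int(round(v * scale))
          return total / scale

      vals = [1e-9, 1.0, -1.0, 3.3e-7, 2.2e-8]
      assert fixed_point_sum(vals) == fixed_point_sum(list(reversed(vals)))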

  3. High Precision Piezoelectric Linear Motors for Operations at Cryogenic Temperatures and Vacuum

    NASA Technical Reports Server (NTRS)

    Wong, D.; Carman, G.; Stam, M.; Bar-Cohen, Y.; Sen, A.; Henry, P.; Bearman, G.; Moacanin, J.

    1995-01-01

    The use of an electromechanical device for optically positioning a mirror system during the pre-project phase of the Pluto Fast Flyby mission was evaluated at JPL. The device under consideration was a piezoelectric driven linear motor functionally dependent upon a time varying electric field which induces displacements ranging from submicrons to millimeters with positioning accuracy within nanometers.

  4. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits with 1.5 cm RMS radial accuracy represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  5. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  6. Measurement Precision and Accuracy of the Centre Location of AN Ellipse by Weighted Centroid Method

    NASA Astrophysics Data System (ADS)

    Matsuoka, R.

    2015-03-01

    Circular targets are often utilized in photogrammetry, and a circle on a plane is projected as an ellipse onto an oblique image. This paper reports an analysis conducted in order to investigate the measurement precision and accuracy of the centre location of an ellipse on a digital image by an intensity-weighted centroid method. An ellipse with a semi-major axis a, a semi-minor axis b, and a rotation angle θ of the major axis is investigated. In the study an equivalent radius r = (a²cos²θ + b²sin²θ)^(1/2) is adopted as a measure of the dimension of an ellipse. First an analytical expression representing a measurement error (ϵx, ϵy) is obtained. Then variances Vx of ϵx are obtained at 1/256 pixel intervals from 0.5 to 100 pixels in r by numerical integration, because a formula representing Vx cannot be obtained analytically when r > 0.5. The results of the numerical integration indicate that Vx would oscillate in a 0.5 pixel cycle in r and that Vx excluding the oscillation component would be inversely proportional to the cube of r. Finally an effective approximate formula of Vx from 0.5 to 100 pixels in r is obtained by least squares adjustment. The obtained formula is a fractional expression of which the numerator is a fifth-degree polynomial of {r-0.5×int(2r)} expressing the oscillation component and the denominator is the cube of r. Here int(x) is the function to return the integer part of the value x. Coefficients of the fifth-degree polynomial of the numerator can be expressed by a quadratic polynomial of {0.5×int(2r)+0.25}.
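
    For reference, the intensity-weighted centroid studied above is computed directly from the grey levels of the target window. A minimal sketch follows, with the equivalent radius of an ellipse included as a helper; the window extraction and any background subtraction are assumed to have been done beforehand.

      import numpy as np

      def weighted_centroid(window):
          # Intensity-weighted centroid (x, y) of a grey-level target window
          img = np.asarray(window, dtype=float)
          ys, xs = np.indices(img.shape)
          total = img.sum()
          return (xs * img).sum() / total, (ys * img).sum() / total

      def equivalent_radius(a, b, theta):
          # r = (a^2 cos^2(theta) + b^2 sin^2(theta))^(1/2), the ellipse size measure used above
          return np.sqrt(a**2 * np.cos(theta)**2 + b**2 * np.sin(theta)**2)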

  7. Accuracy, precision and response time of consumer bimetal and digital thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Three models each of consumer instant-read bimetal and digital thermometers were tested for accuracy, precision and response time compared to a calibrated thermocouple in cooked 80 percent and 90 percent lean ground beef patties and boneless and bone-in split chicken breasts. At the recommended inse...

  8. The Diagnostic Accuracy of Linear Endoscopic Ultrasound for Evaluating Symptoms Suggestive of Common Bile Duct Stones.

    PubMed

    Wang, Min; He, Xu; Tian, Chuan; Li, Jian; Min, Feng; Li, Hong-Yan

    2016-01-01

    Background. In order to assess the diagnostic accuracy of linear EUS for evaluating clinically suggestive CBD stones in high-risk groups. Methods. 202 patients with clinically suggestive CBD stones in high-risk groups who underwent linear EUS examination between January 2012 and January 2015 were retrospectively reviewed. Endoscopic retrograde cholangiopancreatography (ERCP) with stone extraction or surgical choledochoscopy was only performed when a CBD stone was detected by linear EUS. Cases that were negative for CBD stones were followed up for at least 6 months. Results. Of 202 enrolled patients, 126 were positive for CBD stones according to linear EUS findings. 124 patients successfully underwent ERCP, and ERCP failed in 2 who were later successfully treated by surgical intervention. There were 2 false-positive cases with positive findings for CBD stones on ERCP. Among 76 patients without CBD stones, no false-negative cases were identified during the mean 6-month follow-up. Linear EUS had sensitivity, specificity, and positive and negative predictive values for the detection of CBD stones of 100%, 92.88%, 98.21%, and 100%, respectively. Conclusions. Linear EUS is a safe and efficacious diagnostic tool for evaluating clinically suggestive CBD stones with high risk of choledocholithiasis. Performing linear EUS prior to ERCP in patients with symptoms suggestive of CBD stones can reduce unnecessary ERCP procedures. PMID:27610131
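
    The four summary indices reported above follow directly from a 2 x 2 table of test results against the reference standard. The sketch below uses hypothetical counts for illustration rather than the study's exact tabulation.

      def diagnostic_metrics(tp, fp, tn, fn):
          # Sensitivity, specificity, positive and negative predictive values from a 2x2 table
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          ppv = tp / (tp + fp)
          npv = tn / (tn + fn)
          return sensitivity, specificity, ppv, npv

      # Hypothetical counts of true/false positives and negatives
      print(diagnostic_metrics(tp=110, fp=2, tn=76, fn=0))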

  9. The Diagnostic Accuracy of Linear Endoscopic Ultrasound for Evaluating Symptoms Suggestive of Common Bile Duct Stones

    PubMed Central

    He, Xu; Li, Jian; Min, Feng; Li, Hong-yan

    2016-01-01

    Background. In order to assess the diagnostic accuracy of linear EUS for evaluating clinically suggestive CBD stones in high-risk groups. Methods. 202 patients with clinically suggestive CBD stones in high-risk groups who underwent linear EUS examination between January 2012 and January 2015 were retrospectively reviewed. Endoscopic retrograde cholangiopancreatography (ERCP) with stone extraction or surgical choledochoscopy was only performed when a CBD stone was detected by linear EUS. Cases that were negative for CBD stones were followed up for at least 6 months. Results. Of 202 enrolled patients, 126 were positive for CBD stones according to linear EUS findings. 124 patients successfully underwent ERCP, and ERCP failed in 2 who were later successfully treated by surgical intervention. There were 2 false-positive cases with positive findings for CBD stones on ERCP. Among 76 patients without CBD stones, no false-negative cases were identified during the mean 6-month follow-up. Linear EUS had sensitivity, specificity, and positive and negative predictive values for the detection of CBD stones of 100%, 92.88%, 98.21%, and 100%, respectively. Conclusions. Linear EUS is a safe and efficacious diagnostic tool for evaluating clinically suggestive CBD stones with high risk of choledocholithiasis. Performing linear EUS prior to ERCP in patients with symptoms suggestive of CBD stones can reduce unnecessary ERCP procedures. PMID:27610131

  10. Design of a high linearity and high gain accuracy analog baseband circuit for DAB receiver

    NASA Astrophysics Data System (ADS)

    Li, Ma; Zhigong, Wang; Jian, Xu; Yiqiang, Wu; Junliang, Wang; Mi, Tian; Jianping, Chen

    2015-02-01

    An analog baseband circuit of high linearity and high gain accuracy for a digital audio broadcasting receiver is implemented in a 0.18-μm RFCMOS process. The circuit comprises a 3rd-order active-RC complex filter (CF) and a programmable gain amplifier (PGA). An automatic tuning circuit is also designed to tune the CF's pass band. Instead of the class-A fully differential operational amplifier (FDOPA) adopted in conventional CF and PGA designs, a class-AB FDOPA is employed in this circuit to achieve higher linearity and gain accuracy, owing to its large current-swing capability and lower static current consumption. In the PGA circuit, a novel DC offset cancellation technique based on the MOS resistor is introduced to reduce the settling time significantly. A reformative switching network is proposed, which eliminates the switch resistance's influence on the gain accuracy of the PGA. The measurement results show that the gain range of the circuit is 10-50 dB with a 1-dB step size, and the gain accuracy is within ±0.3 dB. The OIP3 is 23.3 dBm at a gain of 10 dB. Simulation results show that the settling time is reduced from 100 ms to 1 ms. The image band rejection is about 40 dB. The circuit draws only 4.5 mA from a 1.8 V supply.

  11. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.

  12. Using statistics and software to maximize precision and accuracy in U-Pb geochronological measurements

    NASA Astrophysics Data System (ADS)

    McLean, N.; Bowring, J. F.; Bowring, S. A.

    2009-12-01

    Uncertainty in U-Pb geochronology results from a wide variety of factors, including isotope ratio determinations, common Pb corrections, initial daughter product disequilibria, instrumental mass fractionation, isotopic tracer calibration, and U decay constants and isotopic composition. The relative contribution of each depends on the proportion of radiogenic to common Pb, the measurement technique, and the quality of systematic error determinations. Random and systematic uncertainty contributions may be propagated into individual analyses or for an entire population, and must be propagated correctly to accurately interpret data. Tripoli and U-Pb_Redux comprise a new data reduction and error propagation software package that combines robust cycle measurement statistics with rigorous multivariate data analysis and presents the results graphically and interactively. Maximizing the precision and accuracy of a measurement begins with correct appraisal and codification of the systematic and random errors for each analysis. For instance, a large dataset of total procedural Pb blank analyses defines a multivariate normal distribution, describing the mean of and variation in isotopic composition (IC) that must be subtracted from each analysis. Uncertainty in the size and IC of each Pb blank is related to the (random) uncertainty in ratio measurements and the (systematic) uncertainty involved in tracer subtraction. Other sample and measurement parameters can be quantified in the same way, represented as statistical distributions that describe their uncertainty or variation, and are input into U-Pb_Redux as such before the raw sample isotope ratios are measured. During sample measurement, U-Pb_Redux and Tripoli can relay cycle data in real time, calculating a date and uncertainty for each new cycle or block. The results are presented in U-Pb_Redux as an interactive user interface with multiple visualization tools. One- and two-dimensional plots of each calculated date and

  13. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting sub-Millimeter Limb Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As the first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. However, O3 chemistry can be studied in relative isolation in the mesosphere, without the complications of heterogeneous conditions and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between the predictions of existing models and observations, and these need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japanese Aerospace eXploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and is mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES has successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range of 38S to 65N from October 2009 to April 2010. A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  14. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823

  15. Annular precision linear shaped charge flight termination system for the ODES program

    SciTech Connect

    Vigil, M.G.; Marchi, D.L.

    1994-06-01

    The work for the development of an Annular Precision Linear Shaped Charge (APLSC) Flight Termination System (FTS) for the Operation and Deployment Experiment Simulator (ODES) program is discussed and presented in this report. The Precision Linear Shaped Charge (PLSC) concept was recently developed at Sandia. The APLSC component is designed to produce a copper jet to cut four inch diameter holes in each of two spherical tanks, one containing fuel and the other an oxidizer that are hypergolic when mixed, to terminate the ODES vehicle flight if necessary. The FTS includes two detonators, six Mild Detonating Fuse (MDF) transfer lines, a detonator block, a detonation transfer manifold, and the APLSC component. PLSCs have previously been designed as ring components where the jet penetration axis is directed either away from or toward the center of the ring assembly. Typically, these PLSC components are designed to cut metal cylinders from the outside inward or from the inside outward. The ODES program requires an annular linear shaped charge. The LESCA (Linear Shaped Charge Analysis) code was used to design this 65 grain/foot APLSC, and comparisons of analytical predictions with experimental data are presented. Jet penetration data are presented to assess the maximum depth and reproducibility of the penetration. Data are presented for full-scale tests, which included all FTS components and were conducted with nominal 19 inch diameter spherical tanks.

  16. Accuracy of linear measurement in the Galileos cone beam computed tomography under simulated clinical conditions

    PubMed Central

    Ganguly, R; Ruprecht, A; Vincent, S; Hellstein, J; Timmons, S; Qian, F

    2011-01-01

    Objectives The aim of this study was to determine the geometric accuracy of cone beam CT (CBCT)-based linear measurements of bone height obtained with the Galileos CBCT (Sirona Dental Systems Inc., Bensheim, Hessen, Germany) in the presence of soft tissues. Methods Six embalmed cadaver heads were imaged with the Galileos CBCT unit subsequent to placement of radiopaque fiduciary markers over the buccal and lingual cortical plates. Electronic linear measurements of bone height were obtained using the Sirona software. Physical measurements were obtained with digital calipers at the same location. This distance was compared on all six specimens bilaterally to determine accuracy of the image measurements. Results The findings showed no statistically significant difference between the imaging and physical measurements (P > 0.05) as determined by a paired sample t-test. The intraclass correlation was used to measure the intrarater reliability of repeated measures and there was no statistically significant difference between measurements performed at the same location (P > 0.05). Conclusions The Galileos CBCT image-based linear measurement between anatomical structures within the mandible in the presence of soft tissues is sufficiently accurate for clinical use. PMID:21697155

  17. Multi-mode sliding mode control for precision linear stage based on fixed or floating stator

    NASA Astrophysics Data System (ADS)

    Fang, Jiwen; Long, Zhili; Wang, Michael Yu; Zhang, Lufan; Dai, Xufei

    2016-02-01

    This paper presents the control performance of a linear motion stage driven by a Voice Coil Motor (VCM). Unlike a conventional VCM, the stator of this VCM is adjustable and can be configured as either a floating stator or a fixed stator. A Multi-Mode Sliding Mode Control (MMSMC), comprising a conventional Sliding Mode Control (SMC) and an Integral Sliding Mode Control (ISMC), is designed to control the linear motion stage. The control is switched between SMC and ISMC based on an error threshold. To eliminate chattering, a smooth function is adopted instead of a signum function. The experimental results with the floating stator show that the positioning accuracy and tracking performance of the linear motion stage are improved with the MMSMC approach.
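
    A compact way to see the chattering fix mentioned above is the boundary-layer form of the sliding mode law, where a saturation function replaces the signum. The sketch below is only illustrative: the gains, the boundary-layer width, and the error threshold used to switch in the integral term are assumed values, not the paper's tuned parameters, and the direction of the mode switch is an assumption.

      import numpy as np

      def mmsmc_control(e, e_dot, e_int, switch_threshold=1e-4,
                        lam=10.0, ki=50.0, k=5.0, phi=0.01):
          # Sliding surface: conventional SMC (s = e_dot + lam*e) for large errors,
          # integral SMC (adds ki*e_int) once the error falls below the switch threshold.
          s = e_dot + lam * e
          if abs(e) < switch_threshold:
              s += ki * e_int
          sat = np.clip(s / phi, -1.0, 1.0)   # smooth saturation instead of sign(s) to suppress chattering
          return -k * sat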

  18. Quantifying precision and accuracy of measurements of dissolved inorganic carbon stable isotopic composition using continuous-flow isotope-ratio mass spectrometry

    PubMed Central

    Waldron, Susan; Marian Scott, E; Vihermaa, Leena E; Newton, Jason

    2014-01-01

    RATIONALE We describe an analytical procedure that allows sample collection and measurement of carbon isotopic composition (δ13CV-PDB value) and dissolved inorganic carbon concentration, [DIC], in aqueous samples without further manipulation post field collection. By comparing outputs from two different mass spectrometers, we quantify with statistical rigour the uncertainty associated with the estimation of an unknown measurement. This is rarely undertaken, but it is needed to understand the significance of field data and to interpret quality assurance exercises. METHODS Immediate acidification of field samples during collection in evacuated, pre-acidified vials removed the need for toxic chemicals to inhibit continued bacterial activity that might compromise isotopic and concentration measurements. Aqueous standards mimicked the sample matrix and avoided headspace fractionation corrections. Samples were analysed using continuous-flow isotope-ratio mass spectrometry, but for low DIC concentrations the mass spectrometer response could be non-linear. This had to be corrected for. RESULTS Mass spectrometer non-linearity exists. Rather than estimating precision from repeat analysis of an internal standard, we have adopted inverse linear calibrations to quantify the precision and 95% confidence intervals (CI) of the δ13CDIC values. The response for [DIC] estimation was always linear. For 0.05–0.5 mM DIC internal standards, however, changes in mass spectrometer linearity resulted in estimates of the precision in the δ13CV-PDB value of an unknown ranging from ±0.44‰ to ±1.33‰ (mean values) and a mean 95% CI half-width of ±1.1–3.1‰. CONCLUSIONS Mass spectrometer non-linearity should be considered in estimating uncertainty in measurement. Similarly, statistically robust estimates of precision and accuracy should also be adopted. Such estimations do not inhibit research advances: our consideration of small-scale spatial variability at two points on a
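
    One common way to quantify the precision of a value read back through a linear calibration, in the spirit of the inverse-calibration approach described above, is the standard formula for the standard error of a quantity predicted from a fitted line. The sketch below is a generic, single-replicate version under that assumption, not the authors' exact procedure.

      import numpy as np

      def inverse_linear_calibration(x_std, y_std, y_unknown):
          # Fit y = a + b*x to the standards, invert for the unknown, and return an
          # approximate standard error of the predicted x (single-replicate case).
          x = np.asarray(x_std, dtype=float)
          y = np.asarray(y_std, dtype=float)
          n = len(x)
          b, a = np.polyfit(x, y, 1)                              # slope, intercept
          residual_sd = np.sqrt(((y - (a + b * x)) ** 2).sum() / (n - 2))
          sxx = ((x - x.mean()) ** 2).sum()
          x_hat = (y_unknown - a) / b
          se = (residual_sd / abs(b)) * np.sqrt(1.0 + 1.0 / n
                                                + (y_unknown - y.mean()) ** 2 / (b ** 2 * sxx))
          return x_hat, se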

  19. Numerical accuracy of linear triangular finite elements in modeling multi-holed structures

    SciTech Connect

    Sullivan, R.M.; Griffen, J.E.

    1980-06-01

    A study has been performed to quantify the accuracy of linear triangular finite elements for modeling temperature and stress fields in structures with multiple holes. The purpose of the study was to evaluate the use of these elements for the analysis of HTGR fuel blocks, which may contain up to 325 holes. Since an accurate full scale analysis was not feasible with existing methods, a representative small scale benchmark problem containing only seven holes was selected. The finite element codes used in this study were TEPC-2D for thermal analysis and SAFIRE for stress analysis. It was concluded that linear triangular finite elements are too inefficient for this application. An accurate analysis of stresses in HTGR fuel blocks will require the use of higher order elements, such as the 8-node quadrilaterals in the new TWOD code.

  20. Linear combinations of biomarkers to improve diagnostic accuracy with three ordinal diagnostic categories

    PubMed Central

    Kang, Le; Xiong, Chengjie; Crane, Paul; Tian, Lili

    2015-01-01

    Many researchers have addressed the problem of finding the optimal linear combination of biomarkers to maximize the area under receiver operating characteristic (ROC) curves for scenarios with binary disease status. In practice, many disease processes, such as Alzheimer's disease, can be naturally classified into three diagnostic categories, such as normal, mild cognitive impairment, and Alzheimer's disease (AD), and for such diseases the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. In this article, we propose a few parametric and nonparametric approaches to address the problem of finding the optimal linear combination to maximize the VUS. We carried out simulation studies to investigate the performance of the proposed methods. We apply all of the investigated approaches to a real data set from a cohort study in early-stage AD. PMID:22865796
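
    For three ordered diagnostic groups, the VUS of a marker can be estimated nonparametrically as the proportion of triples, one value from each group, whose scores fall in the correct order. The sketch below evaluates that estimate for a fixed linear combination of two hypothetical biomarkers; the data, weights, and group labels are illustrative, not taken from the study.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical biomarkers for three ordered groups: normal < MCI < AD
        normal = rng.normal([0.0, 0.0], 1.0, size=(50, 2))
        mci    = rng.normal([0.8, 0.5], 1.0, size=(50, 2))
        ad     = rng.normal([1.6, 1.2], 1.0, size=(50, 2))

        def vus(w, g1, g2, g3):
            """Fraction of (g1, g2, g3) triples whose combined scores are correctly ordered."""
            s1, s2, s3 = g1 @ w, g2 @ w, g3 @ w
            correct = (s1[:, None, None] < s2[None, :, None]) & \
                      (s2[None, :, None] < s3[None, None, :])
            return correct.mean()

        w = np.array([1.0, 0.7])          # an illustrative linear combination
        print(f"VUS for w={w}: {vus(w, normal, mci, ad):.3f}")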

  1. Use of single-representative reverse-engineered surface-models for RSA does not affect measurement accuracy and precision.

    PubMed

    Seehaus, Frank; Schwarze, Michael; Flörkemeier, Thilo; von Lewinski, Gabriela; Kaptein, Bart L; Jakubowitz, Eike; Hurschler, Christof

    2016-05-01

    Implant migration can be accurately quantified by model-based Roentgen stereophotogrammetric analysis (RSA), using an implant surface model to locate the implant relative to the bone. In a clinical situation, a single reverse engineering (RE) model for each implant type and size is used. It is unclear to what extent the accuracy and precision of migration measurement is affected by implant manufacturing variability unaccounted for by a single representative model. Individual RE models were generated for five short-stem hip implants of the same type and size. Two phantom analyses and one clinical analysis were performed: "Accuracy-matched models": one stem was assessed, and the results from the original RE model were compared with randomly selected models. "Accuracy-random model": each of the five stems was assessed and analyzed using one randomly selected RE model. "Precision-clinical setting": implant migration was calculated for eight patients, and all five available RE models were applied to each case. For the two phantom experiments, the 95%CI of the bias ranged from -0.28 mm to 0.30 mm for translation and -2.3° to 2.5° for rotation. In the clinical setting, precision is less than 0.5 mm and 1.2° for translation and rotation, respectively, except for rotations about the proximodistal axis (<4.1°). High accuracy and precision of model-based RSA can be achieved and are not biased by using a single representative RE model. At least for implants similar in shape to the investigated short-stem, individual models are not necessary. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 34:903-910, 2016. PMID:26553748

  2. Accuracy and Precision Analysis of Chamber-Based Nitrous Oxide Gas Flux Estimates

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Chamber-based estimates of soil-to-atmosphere nitrous oxide (N2O) gas flux tend to underestimate actual emission rates due to inherently non-linear time series data. In theory, this limitation can be minimized by adjusting measurement conditions to reduce non-linearity and/or by using flux-calculati...

  3. Accuracy and Linearity of Positive Airway Pressure Devices: A Technical Bench Testing Study

    PubMed Central

    Torre-Bouscoulet, Luis; López-Escárcega, Elodia; Carrillo-Alduenda, José Luis; Arredondo-del-Bosque, Fernando; Reyes-Zúñiga, Margarita; Castorena-Maldonado, Armando

    2010-01-01

    Study Objectives: To analyze the accuracy and linearity of different CPAP devices outside of the manufacturers' own quality control environment. Methods: Accuracy (how well readings agree with the gold standard) and linearity were evaluated by comparing programmed pressure to measured CPAP pressure using an instrument established as the gold standard. Comparisons were made centimeter-by-centimeter (linearity) throughout the entire programming spectrum of each device (from 4 to 20 cm H2O). Results: A total of 108 CPAP devices were tested (1836 measurements); mean use of the devices was 956 hours. Twenty-two of them were new. The intra-class correlation coefficient (ICC) decreased from 0.97 at pressures programmed between 4 and 10 cm H2O, to 0.84 at pressures of 16 to 20 cm H2O. Despite this high ICC, the 95% agreement limit oscillated between −1 and 1 cm H2O. This same behavior was observed in relation to hours of use: the ICC for readings taken on devices with < 2,000 hours of use was 0.99, while that of the 50 measurements made on devices with > 6,000 hours was 0.97 (the agreement limit oscillated between −1.3 and 2.5 cm H2O). “Adequate adjustments” were documented in 97% of measurements when the definition was ± 1 cm H2O of the programmed pressure, but this index of adequate adjustment readings decreased to 85% when the ± 0.5 cm H2O criterion was applied. Conclusions: In general, the CPAP devices were accurate and linear throughout the spectrum of programmable pressures; however, strategies to assure short- and long-term equipment reliability are required in conditions of routine use. Citation: Torre-Bouscoulet L; López-Escárcega E; Carrillo-Alduenda JL; Arredondo-del-Bosque F; Reyes-Zúñiga M; Castorena-Maldonado A. Accuracy and linearity of positive airway pressure devices: a technical bench testing study. J Clin Sleep Med 2010;6(4):369-373. PMID:20726286
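
    The "95% agreement limits" between programmed and measured pressures reported here are commonly computed as Bland-Altman limits (mean difference ± 1.96 SD of the differences). A minimal sketch with made-up pressure pairs, including the ±1 cm H2O "adequate adjustment" criterion used in the study:

        import numpy as np

        # Hypothetical programmed vs measured pressures (cm H2O); values are illustrative
        programmed = np.array([4, 6, 8, 10, 12, 14, 16, 18, 20], dtype=float)
        measured   = np.array([4.2, 5.9, 8.3, 9.8, 12.4, 13.7, 16.5, 18.6, 19.4])

        diff = measured - programmed
        bias = diff.mean()
        loa_low = bias - 1.96 * diff.std(ddof=1)
        loa_high = bias + 1.96 * diff.std(ddof=1)
        within_1cm = np.mean(np.abs(diff) <= 1.0)     # "adequate adjustment" within +/-1 cm H2O

        print(f"bias = {bias:+.2f} cm H2O, 95% limits of agreement = [{loa_low:+.2f}, {loa_high:+.2f}]")
        print(f"fraction within +/-1 cm H2O: {within_1cm:.0%}")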

  4. Dichotomy in perceptual learning of interval timing: calibration of mean accuracy and precision differ in specificity and time course.

    PubMed

    Sohn, Hansem; Lee, Sang-Hun

    2013-01-01

    Our brain is inexorably confronted with a dynamic environment in which it has to fine-tune spatiotemporal representations of incoming sensory stimuli and commit to a decision accordingly. Among those representations needing constant calibration is interval timing, which plays a pivotal role in various cognitive and motor tasks. To investigate how perceived time interval is adjusted by experience, we conducted a human psychophysical experiment using an implicit interval-timing task in which observers responded to an invisible bar drifting at a constant speed. We tracked daily changes in distributions of response times for a range of physical time intervals over multiple days of training with two major types of timing performance, mean accuracy and precision. We found a decoupled dynamics of mean accuracy and precision in terms of their time course and specificity of perceptual learning. Mean accuracy showed feedback-driven instantaneous calibration evidenced by a partial transfer around the time interval trained with feedback, while timing precision exhibited a long-term slow improvement with no evident specificity. We found that a Bayesian observer model, in which a subjective time interval is determined jointly by a prior and likelihood function for timing, captures the dissociative temporal dynamics of the two types of timing measures simultaneously. Finally, the model suggested that the width of the prior, not the likelihoods, gradually shrinks over sessions, substantiating the important role of prior knowledge in perceptual learning of interval timing. PMID:23076112
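
    The Bayesian observer model described here combines a prior over intervals with a noisy internal measurement; under Gaussian assumptions the estimate is a precision-weighted average, and the reported learning effect corresponds to the prior narrowing over sessions. A minimal Gaussian sketch with illustrative parameter values (not the fitted model of the study):

        import numpy as np

        def bayes_interval_estimate(t_measured, mu_prior, sigma_prior, sigma_likelihood):
            """Posterior mean for a Gaussian prior x Gaussian likelihood model of interval timing."""
            w = sigma_likelihood**2 / (sigma_likelihood**2 + sigma_prior**2)
            return w * mu_prior + (1 - w) * t_measured

        mu_prior = 0.8              # seconds; centre of the assumed prior over intervals
        t_measured = 0.6            # noisy internal measurement of the current interval

        for sigma_prior in (0.30, 0.15, 0.05):   # prior narrows over training sessions
            est = bayes_interval_estimate(t_measured, mu_prior, sigma_prior, sigma_likelihood=0.10)
            print(f"prior sd = {sigma_prior:.2f} s -> estimate = {est:.3f} s")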

  5. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and Normalized Difference Vegetation Index (NDVI) applied to Landsat TM data, are assessed quantitatively using high-precision in situ data. In Owens Valley, California, we have 6 years of continuous field data (1991-1996) for 33 sites acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0%LC and estimates of change in live cover have a precision of ±3.8%LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
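
    The two techniques compared in this record are both simple to state: SMA models a pixel spectrum as a linear combination of endmember spectra and solves for fractional abundances, while NDVI is a band ratio. The sketch below illustrates both on made-up reflectance values; the endmembers and band set are assumptions, not the Landsat TM configuration used in the paper.

        import numpy as np

        # Hypothetical endmember reflectances in 4 bands (columns: green vegetation, soil, shade)
        E = np.array([[0.04, 0.18, 0.01],
                      [0.06, 0.24, 0.01],    # red band
                      [0.45, 0.30, 0.02],    # near-infrared band
                      [0.22, 0.35, 0.01]])

        pixel = np.array([0.12, 0.15, 0.32, 0.26])   # observed pixel spectrum (illustrative)

        # Unconstrained least-squares unmixing; fractions are then clipped and re-normalised
        fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        fractions = np.clip(fractions, 0, None)
        fractions /= fractions.sum()

        red, nir = pixel[1], pixel[2]
        ndvi = (nir - red) / (nir + red)

        print("endmember fractions (veg, soil, shade):", np.round(fractions, 3))
        print(f"NDVI = {ndvi:.3f}")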

  6. Accuracy and precisions of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

    Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion / optimisation approach a forward model is used to simulate the water reflectance spectra from a set of parameters and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper the Particle Swarm Optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depends on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band specific empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality for freshwater impoundments.
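
    Particle swarm optimisation searches the space of hydrosol concentrations by moving a swarm of candidate solutions toward the best spectral match found so far. The sketch below minimises a root-mean-square spectral difference against a toy two-parameter forward model; the model, bounds, and PSO constants are placeholders, not the Hydrolight-based setup of the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        wavelengths = np.linspace(400, 750, 36)

        def forward_model(chl, tsm):
            """Toy reflectance model standing in for a radiative-transfer forward model."""
            return 0.002 + 0.0004 * tsm + 0.0006 * chl * np.exp(-((wavelengths - 560) / 60) ** 2)

        target = forward_model(3.0, 12.0)                 # "measured" spectrum to invert

        def cost(p):
            return np.sqrt(np.mean((forward_model(*p) - target) ** 2))

        lo, hi = np.array([0.0, 0.0]), np.array([20.0, 50.0])
        n_particles, n_iter, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5

        pos = rng.uniform(lo, hi, size=(n_particles, 2))
        vel = np.zeros_like(pos)
        pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
        gbest = pbest[pbest_cost.argmin()].copy()

        for _ in range(n_iter):
            r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            costs = np.array([cost(p) for p in pos])
            improved = costs < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
            gbest = pbest[pbest_cost.argmin()].copy()

        print("retrieved (chl, TSM):", np.round(gbest, 2))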

  7. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), which combines the advantages of the PPLTP and the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, along with our optimized noise-reduction analytic method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.

  8. Accuracy of linear measurement using cone-beam computed tomography at different reconstruction angles

    PubMed Central

    Nikneshan, Sima; Aval, Shadi Hamidi; Bakhshalian, Neema; Shahab, Shahriyar; Mohammadpour, Mahdis

    2014-01-01

    Purpose This study was performed to evaluate the effect of changing the orientation of a reconstructed image on the accuracy of linear measurements using cone-beam computed tomography (CBCT). Materials and Methods Forty-two titanium pins were inserted in seven dry sheep mandibles. The length of these pins was measured using a digital caliper with readability of 0.01 mm. Mandibles were radiographed using a CBCT device. When the CBCT images were reconstructed, the orientation of slices was adjusted to parallel (i.e., 0°), +10°, +12°, -12°, and -10° with respect to the occlusal plane. The length of the pins was measured by three radiologists, and the accuracy of these measurements was reported using descriptive statistics and one-way analysis of variance (ANOVA); p<0.05 was considered statistically significant. Results The differences in radiographic measurements ranged from -0.64 to +0.06 at the orientation of -12°, -0.66 to -0.11 at -10°, -0.51 to +0.19 at 0°, -0.64 to +0.08 at +10°, and -0.64 to +0.1 at +12°. The mean absolute values of the errors were greater at negative orientations than at the parallel position or at positive orientations. The observers underestimated most of the variables by 0.5-0.1 mm (83.6%). In the second set of observations, the reproducibility at all orientations was greater than 0.9. Conclusion Changing the slice orientation in the range of -12° to +12° reduced the accuracy of linear measurements obtained using CBCT. However, the error value was smaller than 0.5 mm and was, therefore, clinically acceptable. PMID:25473632

  9. High Precision Piezoelectric Linear Motors for Operations at Cryogenic Temperatures and Vacuum

    NASA Technical Reports Server (NTRS)

    Wong, D.; Carman, G.; Stam, M.; Bar-Cohen, Y.; Sen, A.; Henry, P.; Bearman, G.; Moacanin, J.

    1995-01-01

    The Jet Propulsion Laboratory evaluated the use of an electromechanical device for optically positioning a mirror system during the pre-project phase of the Pluto-Fast-Flyby (PFF) mission. The device under consideration was a piezoelectrically driven linear motor, functionally dependent upon a time-varying electric field that induces displacements ranging from submicrons to millimeters with positioning accuracy within nanometers. Using a control package, the mirror system provides image motion compensation and mosaicking capabilities. While this device offers unique advantages, there were concerns pertaining to its operational capabilities for the PFF mission. The issues include irradiation effects and thermal concerns. A literature study indicated that irradiation effects will not significantly impact the linear motor's operational characteristics. On the other hand, thermal concerns necessitated an in-depth study.

  10. Evaluation of the accuracy of linear and angular measurements on panoramic radiographs taken at different positions

    PubMed Central

    Nikneshan, Sima; Sharafi, Mohamad

    2013-01-01

    Purpose This study assessed the accuracy of linear and angular measurements on panoramic radiographs taken at different positions in vitro. Materials and Methods Two acrylic models were fabricated from a cast with normal occlusion. Straight and 75° mesially and lingually angulated pins were placed, and standardized panoramic radiographs were taken at standard position, at an 8° downward tilt of the occlusal plane compared to the standard position, at an 8° upward tilt of the anterior occlusal plane, and at a 10° downward tilt of the right and left sides of the model. On the radiographs, the length of the pins above (crown) and below (root) the occlusal plane, total pin length, crown-to-root ratio, and angulation of pins relative to the occlusal plane were calculated. The data were subjected to repeated measures ANOVA and LSD multiple comparisons tests. Results Significant differences were noted between the radiographic measurements and the true values at different positions on both models, the one with straight pins (P<0.001) and the one with angulated pins (P<0.005). No statistically significant differences were observed between the angular measurements and baselines of the natural head posture at different positions for the straight and angulated pins. Conclusion Angular measurements on panoramic radiographs were sufficiently accurate, and changes in the position of the occlusal plane equal to or less than 10° had no significant effect on them. Some variation in pin positioning (head positioning) could exist and was tolerable when taking panoramic radiographs. Linear measurements showed the least errors in the standard position and at the 8° upward tilt of the anterior part of the occlusal plane compared to other positions. PMID:24083213

  11. Linear FMCW Laser Radar for Precision Range and Vector Velocity Measurements

    NASA Technical Reports Server (NTRS)

    Pierrottet, Diego; Amzajerdian, Farzin; Petway, Larry; Barnes, Bruce; Lockhard, George; Rubio, Manuel

    2008-01-01

    An all-fiber linear frequency modulated continuous wave (FMCW) coherent laser radar system is under development with a goal of aiding NASA's Space Exploration initiative for manned and robotic missions to the Moon and Mars. By employing a combination of optical heterodyne and linear frequency modulation techniques and utilizing state-of-the-art fiber optic technologies, a highly efficient, compact, and reliable laser radar suitable for operation in a space environment is being developed. Linear FMCW lidar has the capability of high-resolution range measurements, and when configured into a multi-channel receiver system it has the capability of obtaining high precision horizontal and vertical velocity measurements. Precision range and vector velocity data are beneficial to navigating planetary landing pods to the preselected site and achieving autonomous, safe soft-landing. The all-fiber coherent laser radar has several important advantages over more conventional pulsed laser altimeters or range finders. One of the advantages of the coherent laser radar is its ability to measure directly the platform velocity by extracting the Doppler shift generated from the motion, as opposed to time-of-flight range finders where terrain features such as hills, cliffs, or slopes add error to the velocity measurement. Doppler measurements are about two orders of magnitude more accurate than the velocity estimates obtained by pulsed laser altimeters. In addition, most of the components of the device are efficient and reliable commercial off-the-shelf fiber optic telecommunication components. This paper discusses the design and performance of a second-generation brassboard system under development at NASA Langley Research Center as part of the Autonomous Landing and Hazard Avoidance (ALHAT) project.
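
    In a linear FMCW lidar, range follows from the beat frequency of the returned chirp and line-of-sight velocity from the Doppler shift. A minimal numerical sketch of those two textbook relations; the chirp bandwidth, sweep time, and wavelength are illustrative numbers, not the ALHAT design values.

        C = 299_792_458.0          # speed of light, m/s

        def fmcw_range(f_beat_hz, sweep_bandwidth_hz, sweep_time_s):
            """Range from the beat frequency of a linear frequency chirp."""
            return C * f_beat_hz * sweep_time_s / (2.0 * sweep_bandwidth_hz)

        def doppler_velocity(f_doppler_hz, wavelength_m):
            """Line-of-sight velocity from the Doppler shift (positive = closing)."""
            return wavelength_m * f_doppler_hz / 2.0

        # Illustrative numbers: 1 GHz chirp over 1 ms, 1.55 um laser
        print(f"range    = {fmcw_range(2.0e6, 1.0e9, 1.0e-3):.1f} m")
        print(f"velocity = {doppler_velocity(3.0e6, 1.55e-6):.2f} m/s")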

  12. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  13. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion to obtain accurate detector time lag. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer sized uranium particles (1-3.5 μm). Linear regression slope (LRS) (one isotope plotted over the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using LRS calculation, time lag, and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
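
    The linear regression slope (LRS) approach mentioned here estimates an isotope ratio from a transient signal as the slope of one isotope's intensity plotted against the other's, rather than averaging point-by-point ratios. A minimal sketch on a synthetic transient; the signal shapes and noise levels are fabricated for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0, 2, 400)

        # Synthetic transient signals for two isotopes with a true ratio of 0.0072
        true_ratio = 0.0072
        peak = np.exp(-((t - 1.0) / 0.2) ** 2)
        i238 = 1e6 * peak + rng.normal(0, 500, t.size)
        i235 = true_ratio * 1e6 * peak + rng.normal(0, 50, t.size)

        # Point-by-point mean ratio (restricted to the peak to avoid dividing noise by noise)
        mask = i238 > 0.1 * i238.max()
        ratio_pbp = np.mean(i235[mask] / i238[mask])

        # Linear regression slope (LRS): slope of i235 vs i238 over the same window
        ratio_lrs = np.polyfit(i238[mask], i235[mask], 1)[0]

        print(f"point-by-point ratio: {ratio_pbp:.5f}")
        print(f"LRS ratio:            {ratio_lrs:.5f}")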

  14. Accuracy evaluation of the optical surface monitoring system on EDGE linear accelerator in a phantom study.

    PubMed

    Mancosu, Pietro; Fogliata, Antonella; Stravato, Antonella; Tomatis, Stefano; Cozzi, Luca; Scorsetti, Marta

    2016-01-01

    Frameless stereotactic radiosurgery (SRS) requires dedicated systems to monitor the patient position during the treatment to avoid target underdosage due to involuntary shift. The optical surface monitoring system (OSMS) is here evaluated in a phantom-based study. The new EDGE linear accelerator from Varian (Varian, Palo Alto, CA) integrates, for cranial lesions, the common cone beam computed tomography (CBCT) and kV-MV portal images with the optical surface monitoring system (OSMS), a device able to detect the patient's face movements in real time in all 6 couch axes (vertical, longitudinal, lateral, rotation along the vertical axis, pitch, and roll). We have evaluated the OSMS imaging capability in checking the phantom's position and monitoring its motion. With this aim, a home-made cranial phantom was developed to evaluate the OSMS accuracy in 4 different experiments: (1) comparison with CBCT in isocenter location, (2) capability to recognize predefined shifts up to 2° or 3 cm, (3) evaluation at different couch angles, (4) ability to properly reconstruct the surface when the linac gantry visually blocks one of the cameras. The OSMS system was shown, with a phantom, to be accurate for positioning with respect to the CBCT imaging system, with differences of 0.6 ± 0.3 mm for linear vector displacement and a maximum rotational inaccuracy of 0.3°. OSMS presented an accuracy of 0.3 mm for displacements up to 1 cm and 1°, and 0.5 mm for larger displacements. Different couch angles (45° and 90°) induced a mean vector uncertainty < 0.4 mm. Coverage of 1 camera produced an uncertainty < 0.5 mm. Translations and rotations of a phantom can be accurately detected with the optical surface detector system. PMID:26994827

  15. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM driven offline robotic programming) and production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot performance due to lack of consideration of variations in individual robot repeatability and thermal expansion of robot linkages. This is especially important when robots are controlled and programmed remotely (e.g., through the Internet or Ethernet), since remote users have no physical contact with the robotic systems. The current technology in Internet-based manufacturing, limited to a web camera for live image transfer, has posed a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly have to be performed on-site, and the verification of robot positioning accuracy cannot be ascertained remotely. In the worst case, the remote users have to assume the robot performance envelope provided by the manufacturers, which may cause a potentially serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating the robot performance. The objective of this research is, therefore, to advance the current state-of-the-art in Internet-based control and monitoring technology, with a specific aim in the accuracy calibration of a micro precision robotic system, through the development of a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  16. Influence of a high vacuum on the precise positioning using an ultrasonic linear motor

    NASA Astrophysics Data System (ADS)

    Kim, Wan-Soo; Lee, Dong-Jin; Lee, Sun-Kyu

    2011-01-01

    This paper presents an investigation of an ultrasonic linear motor stage for use in a high vacuum environment. The slider table is driven by a hybrid bolt-clamped Langevin-type ultrasonic linear motor, which is excited at its different natural-frequency modes in both the lateral and longitudinal directions. In general, the friction behavior in a vacuum environment differs from that at atmospheric pressure, and this difference significantly affects the performance of the ultrasonic linear motor. In this paper, to consistently provide stable, high output power in a high vacuum, frequency matching was conducted. Moreover, to achieve fine control performance in the vacuum environment, a modified nominal characteristic trajectory following control method was adopted. Finally, the stage was operated under high vacuum conditions, and its operating performance was compared with that of a conventional PI compensator. As a result, robust positioning with nanometer-level accuracy was achieved in a high vacuum condition.

  17. A Method for the Precision Mass Measurement of the Stop Quark at the International Linear Collider

    SciTech Connect

    Freitas, Ayres; Milstene, Caroline; Schmitt, Michael; Sopczak, Andre; /Lancaster U.

    2007-12-01

    Many supersymmetric models predict new particles within the reach of the next generation of colliders. For an understanding of the model structure and the mechanism(s) of symmetry breaking, it is important to know the masses of the new particles precisely. In this article the measurement of the mass of the scalar partner of the top quark (stop) at an e+e- collider is studied. A relatively light stop is motivated by attempts to explain electroweak baryogenesis and can play an important role in dark matter relic density. A method is presented which makes use of cross-section measurements near the pair-production threshold as well as at higher center-of-mass energies. It is shown that this method not only increases the statistical precision, but also greatly reduces the systematic uncertainties, which can be important. Numerical results are presented, based on a realistic event simulation, for two signal selection strategies: using conventional selection cuts, and using an Iterative Discriminant Analysis (IDA). The studies indicate that a precision of Δm(t̃1) = 0.42 GeV can be achieved, representing a major improvement over previous studies. While the analysis of stops is particularly challenging due to the possibility of stop hadronization, the general procedure could be applied to the mass measurement of other particles as well. They also comment on the potential of the IDA to discover a stop quark in this scenario, and they revisit the accuracy of the theoretical predictions for the neutralino relic density.

  18. A Method for the Precision Mass Measurement of the Stop Quark at the International Linear Collider

    SciTech Connect

    Freitas, Ayres; Milstene, Caroline; Schmitt, Michael; Sopczak, Andre; /Lancaster U.

    2008-06-01

    Many supersymmetric models predict new particles within the reach of the next generation of colliders. For an understanding of the model structure and the mechanism(s) of symmetry breaking, it is important to know the masses of the new particles precisely. In this article the measurement of the mass of the scalar partner of the top quark (stop) at an e+e- collider is studied. A relatively light stop is motivated by attempts to explain electroweak baryogenesis and can play an important role in dark matter relic density. A method is presented which makes use of cross-section measurements near the pair-production threshold as well as at higher center-of-mass energies. It is shown that this method not only increases the statistical precision, but also greatly reduces the systematic uncertainties, which can be important. Numerical results are presented, based on a realistic event simulation, for two signal selection strategies: using conventional selection cuts, and using an Iterative Discriminant Analysis (IDA). Our studies indicate that a precision of Δm(t̃1) = 0.42 GeV can be achieved, representing a major improvement over previous studies. While the analysis of stops is particularly challenging due to the possibility of stop hadronization, the general procedure could be applied to the mass measurement of other particles as well. We also comment on the potential of the IDA to discover a stop quark in this scenario, and we revisit the accuracy of the theoretical predictions for the neutralino relic density.

  19. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  20. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

    Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  1. The accuracy and precision of DXA for assessing body composition in team sport athletes.

    PubMed

    Bilsborough, Johann Christopher; Greenway, Kate; Opar, David; Livingstone, Steuart; Cordy, Justin; Coutts, Aaron James

    2014-01-01

    This study determined the precision of pencil and fan beam dual-energy X-ray absorptiometry (DXA) devices for assessing body composition in professional Australian Football players. Thirty-six professional Australian Football players, in two groups (fan DXA, N = 22; pencil DXA, N = 25), underwent two consecutive DXA scans. A whole body phantom with known values for fat mass, bone mineral content and fat-free soft tissue mass was also used to validate each DXA device. Additionally, the criterion phantom was scanned 20 times by each DXA to assess reliability. Test-retest reliability of DXA anthropometric measures were derived from repeated fan and pencil DXA scans. Fat-free soft tissue mass and bone mineral content from both DXA units showed strong correlations with, and trivial differences to, the criterion phantom values. Fat mass from both DXA showed moderate correlations with criterion measures (pencil: r = 0.64; fan: r = 0.67) and moderate differences with the criterion value. The limits of agreement were similar for both fan beam DXA and pencil beam DXA (fan: fat-free soft tissue mass = -1650 ± 179 g, fat mass = -357 ± 316 g, bone mineral content = 289 ± 122 g; pencil: fat-free soft tissue mass = -1701 ± 257 g, fat mass = -359 ± 326 g, bone mineral content = 177 ± 117 g). DXA also showed excellent precision for bone mineral content (coefficient of variation (%CV) fan = 0.6%; pencil = 1.5%) and fat-free soft tissue mass (%CV fan = 0.3%; pencil = 0.5%) and acceptable reliability for fat measures (%CV fan: fat mass = 2.5%, percent body fat = 2.5%; pencil: fat mass = 5.9%, percent body fat = 5.7%). Both DXA provide precise measures of fat-free soft tissue mass and bone mineral content in lean Australian Football players. DXA-derived fat-free soft tissue mass and bone mineral content are suitable for assessing body composition in lean team sport athletes. PMID:24914773

  2. Approaches for achieving long-term accuracy and precision of δ18O and δ2H for waters analyzed using laser absorption spectrometers.

    PubMed

    Wassenaar, Leonard I; Coplen, Tyler B; Aggarwal, Pradeep K

    2014-01-21

    The measurement of δ(2)H and δ(18)O in water samples by laser absorption spectroscopy (LAS) is increasingly adopted in hydrologic and environmental studies. Although LAS instrumentation is easy to use, its incorporation into laboratory operations is not as easy, owing to the extensive offline data manipulation required for outlier detection, derivation and application of algorithms to correct for between-sample memory, correction for linear and nonlinear instrumental drift, VSMOW-SLAP scale normalization, and maintenance of long-term QA/QC audits. Here we propose a series of standardized water-isotope LAS performance tests and routine sample analysis templates, recommended procedural guidelines, and new data processing software (LIMS for Lasers) that together enable new and current LAS users to achieve and sustain long-term δ(2)H and δ(18)O accuracy and precision for these important isotopic assays. PMID:24328223
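
    VSMOW-SLAP scale normalisation is usually a two-point linear stretch that maps the measured delta values of the two scale-defining reference waters onto their accepted values and applies the same line to samples. A minimal sketch of that idea; the measured reference values here are invented for illustration, and this is not the LIMS for Lasers implementation.

        import numpy as np

        # Accepted delta2H values of the two scale-defining references (permil)
        VSMOW_TRUE, SLAP_TRUE = 0.0, -427.5

        # Hypothetical measured values of those references in one analytical run
        vsmow_meas, slap_meas = 1.8, -421.3

        # Two-point linear normalisation: delta_norm = a * delta_meas + b
        a = (VSMOW_TRUE - SLAP_TRUE) / (vsmow_meas - slap_meas)
        b = VSMOW_TRUE - a * vsmow_meas

        samples_meas = np.array([-62.4, -118.9, -250.2])
        samples_norm = a * samples_meas + b
        print("normalised delta2H (permil):", np.round(samples_norm, 1))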

  3. Evaluation of accuracy of linear regression models in predicting urban stormwater discharge characteristics.

    PubMed

    Madarang, Krish J; Kang, Joo-Hyon

    2014-06-01

    Stormwater runoff has been identified as a source of pollution for the environment, especially for receiving waters. In order to quantify and manage the impacts of stormwater runoff on the environment, predictive models and mathematical models have been developed. Predictive tools such as regression models have been widely used to predict stormwater discharge characteristics. Storm event characteristics, such as antecedent dry days (ADD), have been related to response variables, such as pollutant loads and concentrations. However, whether ADD is an important variable for predicting stormwater discharge characteristics has been controversial among studies. In this study, we examined the accuracy of general linear regression models in predicting discharge characteristics of roadway runoff. A total of 17 storm events were monitored at two highway segments located in Gwangju, Korea. Data from the monitoring were used to calibrate the United States Environmental Protection Agency's Storm Water Management Model (SWMM). The calibrated SWMM was simulated for 55 storm events, and the results of total suspended solid (TSS) discharge loads and event mean concentrations (EMC) were extracted. From these data, linear regression models were developed. R(2) and p-values of the regression of ADD for both TSS loads and EMCs were investigated. Results showed that pollutant loads were better predicted than pollutant EMCs in the multiple regression models. Regression may not provide the true effect of site-specific characteristics, due to uncertainty in the data. PMID:25079842
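
    The regression screening described, inspecting R² and p-values of ADD as a predictor of TSS loads and EMCs, amounts to fitting simple linear models and reading off their fit statistics. A minimal sketch with fabricated monitoring data (not the Gwangju data set):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)

        # Fabricated storm events: antecedent dry days (ADD) and TSS loads (kg)
        add = rng.uniform(1, 30, 55)
        tss_load = 5 + 1.8 * add + rng.normal(0, 12, add.size)

        res = stats.linregress(add, tss_load)
        print(f"slope = {res.slope:.2f} kg/day, R^2 = {res.rvalue**2:.2f}, p = {res.pvalue:.3g}")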

  4. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  5. The Precision and Accuracy of AIRS Level 1B Radiances for Climate Studies

    NASA Technical Reports Server (NTRS)

    Hearty, Thomas J.; Gaiser, Steve; Pagano, Tom; Aumann, Hartmut

    2004-01-01

    We investigate uncertainties in the Atmospheric Infrared Sounder (AIRS) radiances based on in-flight and preflight calibration algorithms and observations. The global coverage and spectral resolution (λ/Δλ ≈ 1200) of AIRS enable it to produce a data set that can be used as a climate data record over the lifetime of the instrument. Therefore, we examine the effects of the uncertainties in the calibration and the detector stability on future climate studies. The uncertainties of the parameters that go into the AIRS radiometric calibration are propagated to estimate the accuracy of the radiances and any climate data record created from AIRS measurements. The calculated radiance uncertainties are consistent with observations. Algorithm enhancements may be able to reduce the radiance uncertainties by as much as 7%. We find that the orbital variation of the gain contributes a brightness temperature bias of < 0.01 K.

  6. Quantification and visualization of carotid segmentation accuracy and precision using a 2D standardized carotid map

    NASA Astrophysics Data System (ADS)

    Chiu, Bernard; Ukwatta, Eranga; Shavakh, Shadi; Fenster, Aaron

    2013-06-01

    This paper describes a framework for vascular image segmentation evaluation. Since the size of the vessel wall and plaque burden is defined by the lumen and wall boundaries in vascular segmentation, these two boundaries should be considered as a pair in statistical evaluation of a segmentation algorithm. This work proposes statistical metrics to evaluate the difference in local vessel wall thickness (VWT) produced by manual and algorithm-based semi-automatic segmentation methods (ΔT), with the local segmentation standard deviation of the wall and lumen boundaries considered. ΔT was further approximately decomposed into the local wall and lumen boundary differences (ΔW and ΔL respectively) in order to provide information regarding which of the wall and lumen segmentation errors contribute more to the VWT difference. In this study, the lumen and wall boundaries in 3D carotid ultrasound images acquired for 21 subjects were each segmented five times manually and by a level-set segmentation algorithm. The (absolute) difference measures (i.e., ΔT, ΔW, ΔL and their absolute values) and the pooled local standard deviation of manually and algorithmically segmented wall and lumen boundaries were computed for each subject and represented in a 2D standardized map. The local accuracy and variability of the segmentation algorithm at each point can be quantified by the average of these metrics for the whole group of subjects and visualized on the 2D standardized map. Based on the results shown on the 2D standardized map, a variety of strategies, such as adding anchor points and adjusting weights of different forces in the algorithm, can be introduced to improve the accuracy and variability of the algorithm.

  7. A Comparative Evaluation of the Linear Dimensional Accuracy of Four Impression Techniques using Polyether Impression Material.

    PubMed

    Manoj, Smita Sara; Cherian, K P; Chitre, Vidya; Aras, Meena

    2013-12-01

    There is much discussion in the dental literature regarding the superiority of one impression technique over the other using addition silicone impression material. However, there is inadequate information available on the accuracy of different impression techniques using polyether. The purpose of this study was to assess the linear dimensional accuracy of four impression techniques using polyether on a laboratory model that simulates clinical practice. The impression material used was Impregum Soft™ (3M ESPE), and the four impression techniques used were (1) Monophase impression technique using medium body impression material. (2) One step double mix impression technique using heavy body and light body impression materials simultaneously. (3) Two step double mix impression technique using a cellophane spacer (heavy body material used as a preliminary impression to create a wash space with a cellophane spacer, followed by the use of light body material). (4) Matrix impression using a matrix of polyether occlusal registration material. The matrix is loaded with heavy body material followed by a pick-up impression in medium body material. For each technique, thirty impressions were made of a stainless steel master model that contained three complete crown abutment preparations, which were used as the positive control. Accuracy was assessed by measuring eight dimensions (mesiodistal, faciolingual and inter-abutment) on stone dies poured from impressions of the master model. A two-tailed t test was carried out to test the significance of the differences in the distances between the master model and the stone models. One-way analysis of variance (ANOVA) was used for multiple group comparison, followed by Bonferroni's test for pairwise comparison. The accuracy was tested at α = 0.05. In general, polyether impression material produced stone dies that were smaller except for the dies produced from the one step double mix impression technique. The ANOVA revealed a highly

  8. A low noise and high precision linear power supply with thermal foldback protection

    NASA Astrophysics Data System (ADS)

    Carniti, P.; Cassina, L.; Gotti, C.; Maino, M.; Pessina, G.

    2016-05-01

    A low noise and high precision linear power supply was designed for use in rare event search experiments with macrobolometers. The circuit accepts at the input a "noisy" dual supply voltage up to ±15 V and gives at the output precise, low noise, and stable voltages that can be set between ±3.75 V and ±12.5 V in eight 1.25 V steps. Particular care in circuit design, component selection, and proper filtering results in a noise spectral density of 50 nV/√Hz at 1 Hz and 20 nV/√Hz white when the output is set to ±5 V. This corresponds to 125 nV RMS (0.8 μV peak to peak) between 0.1 Hz and 10 Hz, and 240 nV RMS (1.6 μV peak to peak) between 0.1 Hz and 100 Hz. The power supply rejection ratio (PSRR) of the circuit is 100 dB at low frequency, and larger than 40 dB up to high frequency, thanks to a proper compensation design. Calibration allows a precision in the absolute value of the output voltage of ±70 ppm, or ±350 μV at ±5 V, to be reached and reduces thermal drifts below ±1 ppm/°C in the expected operating range. The maximum peak output current is about 6 A from each output. An original foldback protection scheme was developed that dynamically limits the maximum output current to keep the temperature of the output transistors within their safe operating range. An add-on card based on an ARM Cortex-M3 microcontroller is devoted to the monitoring and control of all circuit functionalities and provides remote communication via CAN bus.

  9. A low noise and high precision linear power supply with thermal foldback protection.

    PubMed

    Carniti, P; Cassina, L; Gotti, C; Maino, M; Pessina, G

    2016-05-01

    A low noise and high precision linear power supply was designed for use in rare event search experiments with macrobolometers. The circuit accepts at the input a "noisy" dual supply voltage up to ±15 V and gives at the output precise, low noise, and stable voltages that can be set between ±3.75 V and ±12.5 V in eight 1.25 V steps. Particular care in circuit design, component selection, and proper filtering results in a noise spectral density of 50 nV/√Hz at 1 Hz and 20 nV/√Hz white when the output is set to ±5 V. This corresponds to 125 nV RMS (0.8 μV peak to peak) between 0.1 Hz and 10 Hz, and 240 nV RMS (1.6 μV peak to peak) between 0.1 Hz and 100 Hz. The power supply rejection ratio (PSRR) of the circuit is 100 dB at low frequency, and larger than 40 dB up to high frequency, thanks to a proper compensation design. Calibration allows a precision in the absolute value of the output voltage of ±70 ppm, or ±350 μV at ±5 V, to be reached and reduces thermal drifts below ±1 ppm/°C in the expected operating range. The maximum peak output current is about 6 A from each output. An original foldback protection scheme was developed that dynamically limits the maximum output current to keep the temperature of the output transistors within their safe operating range. An add-on card based on an ARM Cortex-M3 microcontroller is devoted to the monitoring and control of all circuit functionalities and provides remote communication via CAN bus. PMID:27250450

  10. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  11. Precise and Continuous Time and Frequency Synchronisation at the 5×10-19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385

  12. Towards the next decades of precision and accuracy in a 87Sr optical lattice clock

    NASA Astrophysics Data System (ADS)

    Martin, Michael; Lin, Yige; Swallows, Matthew; Bishof, Michael; Blatt, Sebastian; Benko, Craig; Chen, Licheng; Hirokawa, Takako; Rey, Ana Maria; Ye, Jun

    2011-05-01

    Optical lattice clocks based on ensembles of neutral atoms have the potential to operate at the highest levels of stability due to the parallel interrogation of many atoms. However, the control of systematic shifts in these systems is correspondingly difficult due to potential collisional atomic interactions. By tightly confining samples of ultracold fermionic 87Sr atoms in a two-dimensional optical lattice, as opposed to the conventional one-dimensional geometry, we increase the collisional interaction energy to be the largest relevant energy scale, thus entering the strongly interacting regime of clock operation. We show both theoretically and experimentally that this increase in interaction energy results in a paradoxical decrease in the collisional shift, reducing this key systematic to the 10−17 level. We also present work towards next-generation ultrastable lasers to attain quantum-limited clock operation, potentially enhancing clock precision by an order of magnitude. This work was supported by a grant from the ARO with funding from the DARPA OLE program, NIST, NSF, and AFOSR.

  13. Tedlar bag sampling technique for vertical profiling of carbon dioxide through the atmospheric boundary layer with high precision and accuracy.

    PubMed

    Schulz, Kristen; Jensen, Michael L; Balsley, Ben B; Davis, Kenneth; Birks, John W

    2004-07-01

    Carbon dioxide is the most important greenhouse gas other than water vapor, and its modulation by the biosphere is of fundamental importance to our understanding of global climate change. We have developed a new technique for vertical profiling of CO2 and meteorological parameters through the atmospheric boundary layer and well into the free troposphere. Vertical profiling of CO2 mixing ratios allows estimates of landscape-scale fluxes characteristic of approximately 100 km2 of an ecosystem. The method makes use of a powered parachute as a platform and a new Tedlar bag air sampling technique. Air samples are returned to the ground where measurements of CO2 mixing ratios are made with high precision (≤0.1%) and accuracy (≤0.1%) using a conventional nondispersive infrared analyzer. Laboratory studies are described that characterize the accuracy and precision of the bag sampling technique and that measure the diffusion coefficient of CO2 through the Tedlar bag wall. The technique has been applied in field studies in the proximity of two AmeriFlux sites, and results are compared with tower measurements of CO2. PMID:15296321

  14. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  15. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating the pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (∼4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° differences in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924
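
    The correction described above (regressing pupil size out of the gaze trace) can be sketched as a per-axis ordinary least-squares fit; the helper below is a minimal illustration, not the authors' implementation, and the variable names are assumptions.

    ```python
    import numpy as np

    def regress_out_pupil(gaze, pupil):
        """Remove the component of a gaze-position trace (one axis, degrees) that is
        linearly predictable from concurrent pupil-size samples, then add the mean
        gaze position back so the units stay interpretable."""
        gaze = np.asarray(gaze, dtype=float)
        pupil = np.asarray(pupil, dtype=float)
        X = np.column_stack([np.ones_like(pupil), pupil])   # intercept + pupil size
        beta, *_ = np.linalg.lstsq(X, gaze, rcond=None)      # ordinary least squares
        return gaze - X @ beta + gaze.mean()

    # Toy example: a spurious 0.02 deg shift per unit pupil change is largely removed.
    rng = np.random.default_rng(0)
    pupil = 30 + rng.normal(0, 2, 1000)
    gaze = 0.02 * pupil + rng.normal(0, 0.05, 1000)
    print(gaze.std(), regress_out_pupil(gaze, pupil).std())
    ```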

  16. Accuracy Assessment of the Precise Point Positioning for Different Troposphere Models

    NASA Astrophysics Data System (ADS)

    Oguz Selbesoglu, Mahmut; Gurturk, Mert; Soycan, Metin

    2016-04-01

    This study investigates the accuracy and repeatability of the PPP technique at different latitudes using different troposphere delay models. Nine IGS stations were selected between 0° and 80° latitude in the northern and southern hemispheres. Coordinates were obtained for 7 days at 1-hour intervals in summer and winter. First, the coordinates were estimated using the Niell troposphere delay model with and without north and east gradients in order to investigate the contribution of troposphere delay gradients to the positioning. Secondly, the Saastamoinen model was used to eliminate troposphere path delays, using standard atmosphere parameters extrapolated to each station level. Finally, coordinates were estimated using the RTCA-MOPS empirical troposphere delay model. Results demonstrate that the Niell troposphere delay model with horizontal gradients yields mean rms errors that are 0.09% and 65% better than those of the Niell model without horizontal gradients and the RTCA-MOPS model, respectively. The mean rms errors of the Saastamoinen model were approximately 4 times larger than those of the Niell troposphere delay model with horizontal gradients.
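
    For orientation, the zenith hydrostatic part of the Saastamoinen model referred to above is a closed-form function of surface pressure, latitude and station height. The sketch below uses the commonly quoted coefficients and is an illustration, not code from the study.

    ```python
    import math

    def saastamoinen_zhd(pressure_hpa, lat_rad, height_m):
        """Zenith hydrostatic delay (metres) from the Saastamoinen model, in the
        commonly quoted form
            ZHD = 0.0022768 * P / (1 - 0.00266*cos(2*lat) - 0.28e-6 * H),
        with P the surface pressure in hPa and H the station height in metres."""
        return 0.0022768 * pressure_hpa / (
            1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 0.28e-6 * height_m
        )

    # Standard-atmosphere pressure at sea level, 45 deg latitude -> about 2.3 m of delay
    print(saastamoinen_zhd(1013.25, math.radians(45.0), 0.0))
    ```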

  17. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.
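
    The comparison against fiducial-marker registration rests on recovering a rigid rotation and translation from corresponding marker coordinates; a standard least-squares (SVD/Kabsch) solution is sketched below as an illustration, not as the paper's algorithm.

    ```python
    import numpy as np

    def rigid_fit(src, dst):
        """Least-squares rigid transform (R, t) mapping marker coordinates `src`
        onto corresponding markers `dst` (both (n, 3) arrays), via the SVD-based
        Kabsch solution; the diagonal correction guards against reflections."""
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = dst_c - R @ src_c
        return R, t

    def fiducial_registration_error(src, dst, R, t):
        """RMS residual distance between the transformed source markers and the targets."""
        res = (R @ src.T).T + t - dst
        return np.sqrt((res ** 2).sum(axis=1).mean())
    ```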

  18. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open-access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  19. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.

  20. A method of determining accuracy and precision for dosimeter systems using accreditation data.

    PubMed

    Cummings, Frederick; Flood, John R

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively. PMID:21068596
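
    A minimal sketch of the Model II one-way ANOVA idea described above: the within- and between-irradiation mean squares give the variance components, the mean of the response ratios gives the bias (accuracy), and an expanded uncertainty follows from combining the components. This is a simplified illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def dosimeter_uncertainty(groups):
        """Bias and expanded uncertainty (k = 2) from accreditation data via a
        Model II (random-effects) one-way ANOVA.

        `groups` is a list of 1-D arrays of reported/delivered dose ratios,
        one array per test irradiation (assumed roughly balanced)."""
        n = np.mean([len(g) for g in groups])            # average replicates per irradiation
        grand = np.concatenate(groups)
        means = np.array([g.mean() for g in groups])
        msw = np.mean([g.var(ddof=1) for g in groups])   # within-group mean square
        msb = n * means.var(ddof=1)                      # between-group mean square
        s_between2 = max((msb - msw) / n, 0.0)           # between-group variance component
        bias = grand.mean() - 1.0                        # systematic error (accuracy)
        u_combined = np.sqrt(msw + s_between2)           # combined standard uncertainty
        return bias, 2.0 * u_combined
    ```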

  1. Video image analysis in the Australian meat industry - precision and accuracy of predicting lean meat yield in lamb carcasses.

    PubMed

    Hopkins, D L; Safari, E; Thompson, J M; Smith, C R

    2004-06-01

    A wide selection of lamb types of mixed sex (ewes and wethers) were slaughtered at a commercial abattoir and during this process images of 360 carcasses were obtained online using the VIAScan® system developed by Meat and Livestock Australia. Soft tissue depth at the GR site (thickness of tissue over the 12th rib 110 mm from the midline) was measured by an abattoir employee using the AUS-MEAT sheep probe (PGR). Another measure of this thickness was taken in the chiller using a GR knife (NGR). Each carcass was subsequently broken down to a range of trimmed boneless retail cuts and the lean meat yield determined. The current industry model for predicting meat yield uses hot carcass weight (HCW) and tissue depth at the GR site. A low level of accuracy and precision was found when HCW and PGR were used to predict lean meat yield (R(2)=0.19, r.s.d.=2.80%), which could be improved markedly when PGR was replaced by NGR (R(2)=0.41, r.s.d.=2.39%). If the GR measures were replaced by 8 VIAScan® measures then greater prediction accuracy could be achieved (R(2)=0.52, r.s.d.=2.17%). A similar result was achieved when the model was based on principal components (PCs) computed from the 8 VIAScan® measures (R(2)=0.52, r.s.d.=2.17%). The use of PCs also improved the stability of the model compared to a regression model based on HCW and NGR. The transportability of the models was tested by randomly dividing the data set and comparing coefficients and the level of accuracy and precision. Those models based on PCs were superior to those based on regression. It is demonstrated that with the appropriate modeling the VIAScan® system offers a workable method for predicting lean meat yield automatically. PMID:22061323

  2. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all these four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated by using the multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks including stations all over the world. The statistical analysis of the 6-h predicted orbits shows that the radial and cross root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions for GPS satellites are about 0.10 ns, while the RMS values for BeiDou, Galileo and GLONASS are 0.13, 0.13 and 0.14 ns, respectively. The addition of the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly degraded, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rates of the GPS-only solution drop significantly to only around 70 and 40%, respectively. However, multi-GNSS PPP can provide precise position estimates continuously (availability rate is more than 99

  3. In silico instrumental response correction improves precision of label-free proteomics and accuracy of proteomics-based predictive models.

    PubMed

    Lyutvinskiy, Yaroslav; Yang, Hongqian; Rutishauser, Dorothea; Zubarev, Roman A

    2013-08-01

    In the analysis of proteome changes arising during the early stages of a biological process (e.g. disease or drug treatment) or from the indirect influence of an important factor, the biological variations of interest are often small (∼10%). The corresponding requirements for the precision of proteomics analysis are high, and this often poses a challenge, especially when employing label-free quantification. One of the main contributors to the inaccuracy of label-free proteomics experiments is the variability of the instrumental response during LC-MS/MS runs. Such variability might include fluctuations in the electrospray current, transmission efficiency from the air-vacuum interface to the detector, and detection sensitivity. We have developed an in silico post-processing method of reducing these variations, and have thus significantly improved the precision of label-free proteomics analysis. For abundant blood plasma proteins, a coefficient of variation of approximately 1% was achieved, which allowed for sex differentiation in pooled samples and ≈90% accurate differentiation of individual samples by means of a single LC-MS/MS analysis. This method improves the precision of measurements and increases the accuracy of predictive models based on the measurements. The post-acquisition nature of the correction technique and its generality promise its widespread application in LC-MS/MS-based methods such as proteomics and metabolomics. PMID:23589346

  4. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebrae and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty-three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392
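
    Accuracy (bias) and precision (SD of bias) as used above are simply the mean and standard deviation of the paired timing differences between IMU-derived and force-plate-derived events; a minimal helper is sketched below.

    ```python
    import numpy as np

    def timing_agreement(imu_events_s, forceplate_events_s):
        """Bias (mean difference) and precision (SD of the differences), in ms,
        between paired gait-event times from the IMUs and the force plates."""
        diff_ms = 1000.0 * (np.asarray(imu_events_s) - np.asarray(forceplate_events_s))
        return diff_ms.mean(), diff_ms.std(ddof=1)
    ```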

  5. A material sensitivity study on the accuracy of deformable organ registration using linear biomechanical models

    SciTech Connect

    Chi, Y.; Liang, J.; Yan, D.

    2006-02-15

    Model-based deformable organ registration techniques using the finite element method (FEM) have recently been investigated intensively and applied to image-guided adaptive radiotherapy (IGART). These techniques assume that human organs are linearly elastic materials, and their mechanical properties are predetermined. Unfortunately, the accurate measurement of the tissue material properties is challenging and the properties usually vary between patients. A common issue is therefore the achievable accuracy of the calculation due to the limited access to tissue elastic material constants. In this study, we performed a systematic investigation on this subject based on tissue biomechanics and computer simulations to establish the relationships between achievable registration accuracy and tissue mechanical and organ geometrical properties. Primarily we focused on image registration for three organs: rectal wall, bladder wall, and prostate. The tissue anisotropy due to orientation preference in tissue fiber alignment is captured by using an orthotropic or a transversely isotropic elastic model. First we developed biomechanical models for the rectal wall, bladder wall, and prostate using simplified geometries and investigated the effect of varying material parameters on the resulting organ deformation. Then computer models based on patient image data were constructed, and image registrations were performed. The sensitivity of registration errors was studied by perturbing the tissue material properties from their mean values while fixing the boundary conditions. The simulation results demonstrated that registration error for a subvolume increases as its distance from the boundary increases. Also, a variable associated with material stability was found to be a dominant factor in registration accuracy in the context of material uncertainty. For hollow thin organs such as rectal walls and bladder walls, the registration errors are limited. Given 30% in material uncertainty

  6. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and open to challenge. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research on assessing VGI quality either (a) compares the VGI data with accurate official data, or (b), in cases where there is no access to reference data, looks for alternative ways to determine the quality of the VGI data. In this paper we attempt to develop a useful method to reach this goal. In this process, the positional accuracy of linear features in the OSM data for Tehran, Iran, has been analyzed.
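
    One common way to quantify the positional accuracy of volunteered linear features against an authoritative reference is the buffer-overlay approach: the share of the volunteered line that falls within a given tolerance of the reference line. The sketch below is illustrative only (the record above does not state which method was used); the shapely-based helper and the tolerance value are assumptions.

    ```python
    from shapely.geometry import LineString

    def buffer_overlap_fraction(vgi_line, reference_line, tolerance_m):
        """Fraction of the VGI line's length lying within `tolerance_m` of the
        reference line. Coordinates are assumed to be in a projected CRS in metres."""
        buffer_zone = reference_line.buffer(tolerance_m)
        inside = vgi_line.intersection(buffer_zone)
        return inside.length / vgi_line.length

    ref = LineString([(0, 0), (100, 0)])
    vgi = LineString([(0, 2), (50, 6), (100, 2)])
    print(buffer_overlap_fraction(vgi, ref, 5.0))  # share of the line within 5 m
    ```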

  7. Pseudo-inverse linear discriminants for the improvement of overall classification accuracies.

    PubMed

    Daqi, Gao; Ahmed, Dastagir; Lili, Guo; Zejian, Wang; Zhe, Wang

    2016-09-01

    This paper studies the learning and generalization performance of pseudo-inverse linear discriminants (PILDs) based on the minimum sum-of-squared-error (MS²E) processing criterion and the targeted overall classification accuracy (OCA) criterion. There is little practical significance in proving the equivalence between a PILD whose desired outputs are in inverse proportion to the number of class samples and an FLD with the totally projected mean thresholds. When the desired outputs of each class are assigned a fixed value, a PILD is partly equal to an FLD. With the customary desired outputs {1, -1}, a practicable threshold is acquired, which is related only to the sample sizes. If the desired outputs of each sample are changeable, a PILD has nothing in common with an FLD. The optimal threshold may thus be singled out from multiple empirical ones related to sizes and distributed regions. Based on the MS²E processing criteria and the actual algebraic distances, an iterative learning strategy for the PILD is proposed, whose outstanding advantages are a limited number of epochs, no learning rate, and no risk of divergence. Extensive experimental results on benchmark datasets have verified that the iterative PILDs with optimal thresholds have good learning and generalization performance, and even reach the top OCAs on some datasets among the existing classifiers. PMID:27351107
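
    The core of a pseudo-inverse linear discriminant is a minimum sum-of-squared-error fit of the bias-augmented inputs to the desired class outputs (here the customary {1, -1} targets); the sketch below illustrates that baseline, not the paper's iterative learning strategy.

    ```python
    import numpy as np

    def pild_train(X, y):
        """Two-class pseudo-inverse linear discriminant: the MS^2E solution
        w = pinv(Xa) @ t, with Xa the bias-augmented data matrix and the labels
        in {+1, -1} used directly as desired outputs."""
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias column
        return np.linalg.pinv(Xa) @ np.asarray(y, dtype=float)

    def pild_predict(X, w, threshold=0.0):
        """Classify by thresholding the linear response (the threshold may be tuned)."""
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])
        return np.where(Xa @ w >= threshold, 1, -1)
    ```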

  8. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. Given the

  9. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration

  10. Detailed data is welcome, but with a pinch of salt: Accuracy, precision, and uncertainty in flood inundation modeling

    NASA Astrophysics Data System (ADS)

    Dottori, F.; Di Baldassarre, G.; Todini, E.

    2013-09-01

    New survey techniques provide a large amount of high-resolution data, which can be extremely valuable for flood inundation modeling. Such data availability raises the issue as to how to exploit their information content to effectively improve flood risk mapping and predictions. In this paper, we will discuss a number of important issues which should be taken into account in work related to flood modeling. These include the large number of uncertainty sources in model structure and available data; the difficult evaluation of model results, due to the scarcity of observed data; computational efficiency; false confidence that can be given by high-resolution outputs, as accuracy is not necessarily increased by higher precision. Finally, we briefly review and discuss a number of existing approaches, such as subgrid parameterization and roughness upscaling methods, which can be used to incorporate highly detailed data into flood inundation models, balancing efficiency and reliability.

  11. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and though several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to mimic samples of three different ages, to be run as internal

  12. Assessment of accuracy and precision of 3D reconstruction of unicompartmental knee arthroplasty in upright position using biplanar radiography.

    PubMed

    Tsai, Tsung-Yuan; Dimitriou, Dimitris; Hosseini, Ali; Liow, Ming Han Lincoln; Torriani, Martin; Li, Guoan; Kwon, Young-Min

    2016-07-01

    This study aimed to evaluate the precision and accuracy of 3D reconstruction of UKA component position, contact location and lower limb alignment in standing position using biplanar radiography. Two human specimens with 4 medial UKAs were implanted with beads for radiostereometric analysis (RSA). The specimens were frozen in standing position and CT-scanned to obtain relative positions between the beads, bones and UKA components. The specimens were then imaged using biplanar radiography (EOS). The positions of the femur, tibia, UKA components and UKA contact locations were obtained using RSA- and EOS-based techniques. Intraclass correlation coefficient (ICC) was calculated for inter-observer reliability of the EOS technique. The average (standard deviation) of the differences between the two techniques in translations and rotations was less than 0.18 (0.29) mm and 0.39° (0.66°) for UKA components. The root-mean-square-errors (RMSE) of contact location along the anterior/posterior and medial/lateral directions were 0.84 mm and 0.30 mm. The RMSEs of the knee rotations were less than 1.70°. The ICCs for the EOS-based segmental orientations between two raters were larger than 0.98. The results suggest the EOS-based 3D reconstruction technique can precisely determine component position, contact location and lower limb alignment for UKA patients in weight-bearing standing position. PMID:27117422

  13. THE PRECISION AND ACCURACY OF EARLY EPOCH OF REIONIZATION FOREGROUND MODELS: COMPARING MWA AND PAPER 32-ANTENNA SOURCE CATALOGS

    SciTech Connect

    Jacobs, Daniel C.; Bowman, Judd; Aguirre, James E.

    2013-05-20

    As observations of the Epoch of Reionization (EoR) in redshifted 21 cm emission begin, we assess the accuracy of the early catalog results from the Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison Wide-field Array (MWA). The MWA EoR approach derives much of its sensitivity from subtracting foregrounds to <1% precision, while the PAPER approach relies on the stability and symmetry of the primary beam. Both require an accurate flux calibration to set the amplitude of the measured power spectrum. The two instruments are very similar in resolution, sensitivity, sky coverage, and spectral range and have produced catalogs from nearly contemporaneous data. We use a Bayesian Markov Chain Monte Carlo fitting method to estimate that the two instruments are on the same flux scale to within 20% and find that the images are mostly in good agreement. We then investigate the source of the errors by comparing two overlapping MWA facets where we find that the differences are primarily related to an inaccurate model of the primary beam but also correlated errors in bright sources due to CLEAN. We conclude with suggestions for mitigating and better characterizing these effects.

  14. Error propagation in relative real-time reverse transcription polymerase chain reaction quantification models: the balance between accuracy and precision.

    PubMed

    Nordgård, Oddmund; Kvaløy, Jan Terje; Farmen, Ragne Kristin; Heikkilä, Reino

    2006-09-15

    Real-time reverse transcription polymerase chain reaction (RT-PCR) has gained wide popularity as a sensitive and reliable technique for mRNA quantification. The development of new mathematical models for such quantifications has generally paid little attention to the aspect of error propagation. In this study we evaluate, both theoretically and experimentally, several recent models for relative real-time RT-PCR quantification of mRNA with respect to random error accumulation. We present error propagation expressions for the most common quantification models and discuss the influence of the various components on the total random error. Normalization against a calibrator sample to improve comparability between different runs is shown to increase the overall random error in our system. On the other hand, normalization against multiple reference genes, introduced to improve accuracy, does not increase error propagation compared to normalization against a single reference gene. Finally, we present evidence that sample-specific amplification efficiencies determined from individual amplification curves primarily increase the random error of real-time RT-PCR quantifications and should be avoided. Our data emphasize that the gain of accuracy associated with new quantification models should be validated against the corresponding loss of precision. PMID:16899212
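
    For orientation, an efficiency-corrected relative quantification model and a first-order (delta-method) propagation of the ΔCt uncertainties can be sketched as below; this is a generic illustration in the spirit of the abstract, not the authors' exact expressions.

    ```python
    import numpy as np

    def relative_expression(e_t, dct_t, sd_dct_t, e_r, dct_r, sd_dct_r):
        """Efficiency-corrected relative expression ratio
            ratio = E_t**dCt_t / E_r**dCt_r,  dCt = Ct(control) - Ct(sample),
        with the random errors of the two dCt values propagated to the ratio by
        first-order propagation on the log scale. Returns (ratio, approx. SD)."""
        ratio = e_t ** dct_t / e_r ** dct_r
        var_ln = (np.log(e_t) * sd_dct_t) ** 2 + (np.log(e_r) * sd_dct_r) ** 2
        return ratio, ratio * np.sqrt(var_ln)

    # Target amplifies with efficiency 1.95, reference gene with 2.0 (illustrative numbers)
    print(relative_expression(1.95, 3.2, 0.15, 2.0, 0.4, 0.12))
    ```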

  15. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide at both the commercial and academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on the research described above, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORSs, especially for monitoring purposes. Two different types of CORS network are available today in Italy: the so-called "regional network" and the "national network", with mean inter-station distances of about 25/30 and 50/70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of the rover's positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers imposes some limits, due especially to the limited baseline length and to the need to fix the phase ambiguities correctly both for the network and for the rover. These factors play a crucial role in reaching a position of good accuracy (centimetric or better) in a short time and with high reliability. The goal of this work is to investigate the

  16. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking has not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10^8 CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  17. Accuracy and precision of MR blood oximetry based on the long paramagnetic cylinder approximation of large vessels.

    PubMed

    Langham, Michael C; Magland, Jeremy F; Epstein, Charles L; Floyd, Thomas F; Wehrli, Felix W

    2009-08-01

    An accurate noninvasive method to measure the hemoglobin oxygen saturation (%HbO2) of deep-lying vessels without catheterization would have many clinical applications. Quantitative MRI may be the only imaging modality that can address this difficult and important problem. MR susceptometry-based oximetry for measuring blood oxygen saturation in large vessels models the vessel as a long paramagnetic cylinder immersed in an external field. The intravascular magnetic susceptibility relative to surrounding muscle tissue is a function of oxygenated hemoglobin (HbO2) and can be quantified with a field-mapping pulse sequence. In this work, the method's accuracy and precision were investigated theoretically on the basis of an analytical expression for the arbitrarily oriented cylinder, as well as experimentally in phantoms and in vivo in the femoral artery and vein at 3T field strength. Errors resulting from vessel tilt, noncircularity of vessel cross-section, and induced magnetic field gradients were evaluated and methods for correction were designed and implemented. Hemoglobin saturation was measured at successive vessel segments, differing in geometry, such as eccentricity and vessel tilt, but constant blood oxygen saturation levels, as a means to evaluate measurement consistency. The average standard error and coefficient of variation of measurements in phantoms were <2% with tilt correction alone, in agreement with theory, suggesting that high accuracy and reproducibility can be achieved while ignoring noncircularity for tilt angles up to about 30 degrees. In vivo, repeated measurements of %HbO2 in the femoral vessels yielded a coefficient of variation of less than 5%. In conclusion, the data suggest that %HbO2 can be measured reproducibly in vivo in large vessels of the peripheral circulation on the basis of the paramagnetic cylinder approximation of the incremental field. PMID:19526517
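
    The long-cylinder model referred to above gives oxygen saturation directly from the measured intravascular/extravascular phase difference; the sketch below uses representative literature constants (proton gyromagnetic ratio and the susceptibility difference of fully deoxygenated blood), which are assumptions rather than values taken from the paper.

    ```python
    import math

    GAMMA = 2.675e8                    # proton gyromagnetic ratio, rad s^-1 T^-1
    DCHI_DO = 4 * math.pi * 0.27e-6    # susceptibility of fully deoxygenated vs. oxygenated
                                       # blood per unit hematocrit (SI); representative value

    def hbo2_from_phase(delta_phi_rad, delta_te_s, b0_t, hct, theta_rad):
        """%HbO2 from the infinite-cylinder approximation
            d_phi = gamma * dTE * (dchi / 6) * (3*cos^2(theta) - 1) * B0,
            dchi  = DCHI_DO * Hct * (1 - Y),
        where theta is the vessel tilt with respect to B0."""
        geom = (3.0 * math.cos(theta_rad) ** 2 - 1.0) / 6.0
        dchi = delta_phi_rad / (GAMMA * delta_te_s * geom * b0_t)
        return 1.0 - dchi / (DCHI_DO * hct)

    # Illustrative femoral-vein numbers: 3 T, 5 deg tilt, Hct 0.42, dTE 2.5 ms, 0.35 rad
    print(hbo2_from_phase(0.35, 2.5e-3, 3.0, 0.42, math.radians(5.0)))
    ```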

  18. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensor-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374

  19. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 is using HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  20. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 is using HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC
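
    The first-order Tikhonov constraint mentioned above penalizes the vertical roughness of the retrieved profile; a generic regularized least-squares sketch is given below, with the forward matrix K, measurement vector y and regularization strength alpha as placeholders (this is not the NDACC retrieval code).

    ```python
    import numpy as np

    def tikhonov_retrieval(K, y, alpha):
        """First-order Tikhonov-regularised least squares,
            x_hat = argmin ||K x - y||^2 + alpha * ||L1 x||^2,
        where L1 is the first-difference operator acting on the state vector
        (e.g. a CH4 profile in percent of an a priori VMR); alpha trades off
        fit quality against vertical smoothness."""
        n = K.shape[1]
        L1 = np.diff(np.eye(n), axis=0)          # (n-1, n) first-difference operator
        A = K.T @ K + alpha * (L1.T @ L1)
        return np.linalg.solve(A, K.T @ y)
    ```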

  1. Airborne Laser CO2 Column Measurements: Evaluation of Precision and Accuracy Under a Wide Range of Surface and Atmospheric Conditions

    NASA Astrophysics Data System (ADS)

    Browell, E. V.; Dobler, J. T.; Kooi, S. A.; Fenn, M. A.; Choi, Y.; Vay, S. A.; Harrison, F. W.; Moore, B.

    2011-12-01

    This paper discusses the latest flight test results of a multi-frequency intensity-modulated (IM) continuous-wave (CW) laser absorption spectrometer (LAS) that operates near 1.57 μm for remote CO2 column measurements. This IM-LAS system is under development for a future space-based mission to determine the global distribution of regional-scale CO2 sources and sinks, which is the objective of the NASA Active Sensing of CO2 Emissions during Nights, Days, and Seasons (ASCENDS) mission. A prototype of the ASCENDS system, called the Multi-frequency Fiber Laser Lidar (MFLL), has been flight tested in eleven airborne campaigns since May 2005. This paper compares the most recent results obtained during the 2010 and 2011 UC-12 and DC-8 flight tests, where MFLL remote CO2 column measurements were evaluated against airborne in situ CO2 profile measurements traceable to World Meteorological Organization standards. The major change to the MFLL system in 2011 was the implementation of several different IM modes, which could be quickly changed in flight, to directly compare the precision and accuracy of MFLL CO2 measurements in each mode. The different IM modes that were evaluated included "fixed" IM frequencies near 50, 200, and 500 kHz; frequencies changed in short time steps (Stepped); continuously swept frequencies (Swept); and a pseudo noise (PN) code. The Stepped, Swept, and PN modes were generated to evaluate the ability of these IM modes to desensitize MFLL CO2 column measurements to intervening optically thin aerosols/clouds. MFLL was flown on the NASA Langley UC-12 aircraft in May 2011 to evaluate the newly implemented IM modes and their impact on CO2 measurement precision and accuracy, and to determine which IM mode provided the greatest thin cloud rejection (TCR) for the CO2 column measurements. Within the current hardware limitations of the MFLL system, the "fixed" 50 kHz results produced similar SNR values to those found previously. The SNR decreased as expected

  2. A strategy for multivariate calibration based on modified single-index signal regression: Capturing explicit non-linearity and improving prediction accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Li, Qingbo; Zhang, Guangjun

    2013-11-01

    In this paper, a modified single-index signal regression (mSISR) method is proposed to construct a nonlinear and practical model with high accuracy. The mSISR method takes the optimal penalty tuning parameter of P-spline signal regression (PSR) as its initial tuning parameter and chooses the number of cycles by minimizing the root mean squared error of cross-validation (RMSECV). mSISR is superior to single-index signal regression (SISR) in terms of accuracy, computation time and convergence, and it can characterize the non-linearity between spectra and responses more precisely than SISR. Two spectral data sets from basic research experiments, nondestructive measurement of plant chlorophyll and noninvasive measurement of human blood glucose, are employed to illustrate the advantages of mSISR. The results indicate that the mSISR method (i) obtains a smooth and informative regression coefficient vector, (ii) explicitly exhibits the type and amount of the non-linearity, (iii) can take advantage of nonlinear features of the signals to improve prediction performance and (iv) adapts well to complex spectral models in comparison with other calibration methods. It is validated that mSISR is a promising nonlinear modeling strategy for multivariate calibration.

  3. Design of a platinum resistance thermometer temperature measuring transducer and improved accuracy of linearizing the output voltage

    SciTech Connect

    Malygin, V.M.

    1995-06-01

    An improved method is presented for designing a temperature measuring transducer, the electrical circuit of which comprises an unbalanced bridge, in one arm of which is a platinum resistance thermometer, and containing a differential amplifier with feedback. Values are given for the coefficients, the minimum linearization error is determined, and an example is also given of the practical design of the transducer, using the given coefficients. A determination is made of the limiting achievable accuracy in linearizing the output voltage of the measuring transducer, as a function of the range of measured temperature.
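
    For context, the nonlinearity that the bridge-and-amplifier circuit has to compensate stems from the Callendar-Van Dusen characteristic of the platinum element; the sketch below evaluates a Pt100 with the standard IEC 60751 coefficients and reports its deviation from a straight line over 0-400 °C, as a rough illustration of the error budget involved (not the transducer design itself).

    ```python
    import numpy as np

    A, B = 3.9083e-3, -5.775e-7   # IEC 60751 Callendar-Van Dusen coefficients (T >= 0 degC)

    def pt100_resistance(t_c, r0=100.0):
        """R(T) = R0 * (1 + A*T + B*T^2) for a platinum RTD above 0 degC."""
        return r0 * (1.0 + A * t_c + B * t_c ** 2)

    # Deviation of the sensor characteristic from the straight line through its
    # end points over 0..400 degC, expressed as an equivalent temperature error.
    t = np.linspace(0.0, 400.0, 401)
    r = pt100_resistance(t)
    chord = r[0] + (r[-1] - r[0]) * (t - t[0]) / (t[-1] - t[0])
    sensitivity = np.gradient(r, t)                 # ohm per degC along the curve
    print(np.max(np.abs(r - chord) / sensitivity))  # worst-case error in degC
    ```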

  4. Precision Interval Estimation of the Response Surface by Means of an Integrated Algorithm of Neural Network and Linear Regression

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1999-01-01

    The integration of Radial Basis Function Networks and Back Propagation Neural Networks with Multiple Linear Regression has been accomplished to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of its precision intervals.
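
    The linear-regression half of the integrated method yields confidence and prediction intervals in closed form; a minimal sketch follows, where the design matrix and the new point are assumed to already contain whatever basis terms (intercept, polynomial, interaction) the response surface uses.

    ```python
    import numpy as np
    from scipy import stats

    def regression_intervals(X, y, x0, level=0.95):
        """Half-widths of the confidence interval (mean response) and prediction
        interval (new observation) at a point x0 for ordinary least squares."""
        n, p = X.shape
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n - p)                 # residual variance estimate
        leverage = x0 @ np.linalg.inv(X.T @ X) @ x0
        t = stats.t.ppf(0.5 + level / 2.0, n - p)
        y0 = x0 @ beta
        ci = t * np.sqrt(s2 * leverage)              # confidence half-width
        pi = t * np.sqrt(s2 * (1.0 + leverage))      # prediction half-width
        return y0, ci, pi
    ```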

  5. The measurement of linear and angular displacements in prototype aircraft - Instrumentation, calibration and operational accuracy

    NASA Astrophysics Data System (ADS)

    Storm van Leeuwen, Sam

    The design and development of angular displacement transducers for flight test instrumentation systems are considered. Calibration tools, developed to meet the accuracy requirements, allowed in situ calibration with short turnaround times. The design of the control surface deflection measurement channels for the Fokker 100 prototype aircraft is discussed in detail. It is demonstrated that a bellows coupling provides accurate results, and that the levers and push-pull rod drive mechanisms perform well. The results suggest that a complex mechanical drive mechanism reduces the system accuracy.

  6. Evaluation of precision and accuracy of the Borgwaldt RM20S® smoking machine designed for in vitro exposure.

    PubMed

    Kaur, Navneet; Lacasse, Martine; Roy, Jean-Philippe; Cabral, Jean-Louis; Adamson, Jason; Errington, Graham; Waldron, Karen C; Gaça, Marianna; Morin, André

    2010-12-01

    The Borgwaldt RM20S® smoking machine enables the generation, dilution, and transfer of fresh cigarette smoke to cell exposure chambers, for in vitro analyses. We present a study confirming the precision (repeatability r, reproducibility R) and accuracy of smoke dose generated by the Borgwaldt RM20S® system and delivery to exposure chambers. Due to the aerosol nature of cigarette smoke, the repeatability of the dilution of the vapor phase in air was assessed by quantifying two reference standard gases: methane (CH4, r between 29.0 and 37.0 and RSD between 2.2% and 4.5%) and carbon monoxide (CO, r between 166.8 and 235.8 and RSD between 0.7% and 3.7%). The accuracy of dilution (percent error) for CH4 and CO was between 6.4% and 19.5% and between 5.8% and 6.4%, respectively, over a 10-1000-fold dilution range. To corroborate our findings, a small inter-laboratory study was carried out for CH4 measurements. The combined dilution repeatability had an r between 21.3 and 46.4, R between 52.9 and 88.4, RSD between 6.3% and 17.3%, and error between 4.3% and 13.1%. Based on the particulate component of cigarette smoke (3R4F), the repeatability (RSD = 12%) of the undiluted smoke generated by the Borgwaldt RM20S® was assessed by quantifying solanesol using high-performance liquid chromatography with ultraviolet detection (HPLC/UV). Finally, the repeatability (r between 0.98 and 4.53 and RSD between 8.8% and 12%) of the dilution of generated smoke particulate phase was assessed by quantifying solanesol following various dilutions of cigarette smoke. The findings in this study suggest the Borgwaldt RM20S® smoking machine is a reliable tool to generate and deliver repeatable and reproducible doses of whole smoke to in vitro cultures. PMID:21126153
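
    The repeatability r and reproducibility R quoted above follow the usual convention of 2.8 times the corresponding standard deviations from a one-way (ISO 5725-style) layout; the helper below is a generic sketch under that assumption, not the study's own analysis.

    ```python
    import numpy as np

    def repeatability_reproducibility(groups):
        """Repeatability r and reproducibility R from replicate measurements grouped
        by session/laboratory:
            s_r^2 = pooled within-group variance,
            s_R^2 = s_r^2 + between-group variance component,
            r = 2.8 * s_r,   R = 2.8 * s_R.
        `groups` is a list of 1-D arrays of replicates (assumed roughly balanced)."""
        n = np.mean([len(g) for g in groups])
        msw = np.mean([g.var(ddof=1) for g in groups])
        msb = n * np.var([g.mean() for g in groups], ddof=1)
        s_r2 = msw
        s_R2 = s_r2 + max((msb - msw) / n, 0.0)
        return 2.8 * np.sqrt(s_r2), 2.8 * np.sqrt(s_R2)
    ```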

  7. Effect of modulation frequency bandwidth on measurement accuracy and precision for digital diffuse optical spectroscopy (dDOS)

    NASA Astrophysics Data System (ADS)

    Jung, Justin; Istfan, Raeef; Roblyer, Darren

    2014-03-01

    Near-infrared (NIR) frequency-domain Diffuse Optical Spectroscopy (DOS) is an emerging technology with a growing number of potential clinical applications. In an effort to reduce DOS system complexity and improve portability, we recently demonstrated a direct digital sampling method that utilizes digital signal generation and detection as a replacement for more traditional analog methods. In our technique, a fast analog-to-digital converter (ADC) samples the detected time-domain radio frequency (RF) waveforms at each modulation frequency in a broad-bandwidth sweep (50-300 MHz). While we have shown this method provides comparable results to other DOS technologies, the process is data intensive, as digital samples must be stored and processed for each modulation frequency and wavelength. We explore here the effect of reducing the modulation frequency bandwidth on the accuracy and precision of extracted optical properties. To accomplish this, the performance of the digital DOS (dDOS) system was compared to a gold standard network-analyzer-based DOS system. With a starting frequency of 50 MHz, the input signal of the dDOS system was swept to 100, 150, 250, or 300 MHz in 4 MHz increments, and results were compared to full 50-300 MHz network analyzer DOS measurements. The average errors in extracted μa and μs' with dDOS were lowest for the full 50-300 MHz sweep (less than 3%) and were within 3.8% for frequency bandwidths as narrow as 50-150 MHz. The errors increased to as much as 9.0% when a bandwidth of 50-100 MHz was tested. These results demonstrate the possibility of reduced data collection with dDOS without critically compromising optical property extraction.

  8. The Impact of 3D Volume-of-Interest Definition on Accuracy and Precision of Activity Estimation in Quantitative SPECT and Planar Processing Methods

    PubMed Central

    He, Bin; Frey, Eric C.

    2010-01-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise, and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important effect impacting the accuracy and precision of organ activity estimates is the accuracy of and variability in the definition of organ regions of interest (ROIs) or volumes of interest (VOIs). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in the same transaxial plane in three ways: with no, inward, or outward directional bias, resulting in random perturbation, erosion, or dilation of the VOIs, respectively. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g., in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from −1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ

  9. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle take a finite amount of time, and longer time delays result in larger values of residual wavefront error variance because the atmosphere continues to change during that time. Because an optical processor can perform the required linear algebra operations very quickly, it may be well suited for this task. This paper presents a study of the accuracy requirements for a general optical processor that would make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.

  10. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  11. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  12. The JPL Hg+ Extended Linear Ion Trap Frequency Standard: Status, Stability, and Accuracy Prospects

    NASA Technical Reports Server (NTRS)

    Tjoelker, R. L.; Prestage, J. D.; Maleki, L.

    1996-01-01

    Microwave frequency standards based on room-temperature 199Hg+ ions in a Linear Ion Trap (LITS) presently achieve a short-term frequency stability inferred from the signal-to-noise ratio and line Q. Long-term stability has been measured for averaging intervals up to 5 months, with apparent sensitivity to variations in ion number/temperature limiting the flicker floor.

  13. Soil conductivity and multiple linear regression for precision monitoring of beef feedlot manure and runoff

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Open-lot cattle feeding operations face challenges in control of nutrient runoff, leaching, and gaseous emissions. This report investigates the use of precision management of saline soils as found on 1) feedlot surfaces and 2) a vegetative treatment area (VTA) utilized to control feedlot runoff. A...

  14. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  15. Design, Simulation and Testing of a Precision Alignment Frame for the Next Linear Collider

    SciTech Connect

    Fitsos, P

    2004-06-18

    An alignment frame is developed to support three Beam Position Monitors (BPMs) for detecting and ultimately aligning the electron beam from a linear accelerator. This report discusses the design details and preliminary modal analysis of the alignment frame, as well as the addition of a metrology frame in the final phase of development.

  16. Linear diffraction grating interferometer with high alignment tolerance and high accuracy

    SciTech Connect

    Cheng Fang; Fan, Kuang-Chao

    2011-08-01

    We present an innovative structure of a linear diffraction grating interferometer as a long-stroke, nanometer-resolution displacement sensor for any linear stage. The principle of this diffractive interferometer is based on the phase information encoded by the ±1st order beams diffracted by a holographic grating. Properly interfering these two beams leads to a modulation similar to a Doppler frequency shift that can be translated to displacement measurements via phase decoding. A self-compensation structure is developed to improve the alignment tolerance. LightTool analysis shows that this new structure is completely immune to alignment errors of offset, standoff, yaw, and roll. The tolerance of the pitch is also acceptable for most installation conditions. To make the structure compact and improve the signal quality, a new optical bonding technique using a mechanical fixture is presented so that the miniature optics can be permanently bonded together without an air gap in between. For the output waveform signals, a software module is developed for fast real-time pulse counting and phase subdivision. A laser interferometer HP5529A is employed to test the repeatability of the whole system. Experimental data show that within a 15 mm travel length, the repeatability is within 15 nm.
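
    As a rough illustration of the phase-decoding step mentioned above, the sketch below converts a pair of quadrature waveform signals into displacement. The quadrature assumption, the noise-free synthesized signals, and the scale factor of half the grating period per 2π of phase are illustrative assumptions; the actual conversion factor depends on the interferometer's optical configuration and is not given in the abstract.

        import numpy as np

        # Hypothetical quadrature signals (sine/cosine pair), synthesized here
        # from a known displacement purely for demonstration.
        grating_period_um = 1.0                       # assumed grating period
        displacement_um = np.linspace(0.0, 5.0, 500)  # "true" stage motion
        phase = 2 * np.pi * displacement_um / (grating_period_um / 2)  # assumed scale factor
        i_sig, q_sig = np.cos(phase), np.sin(phase)

        # Phase decoding: arctangent, unwrapping, then scaling back to displacement.
        decoded_phase = np.unwrap(np.arctan2(q_sig, i_sig))
        decoded_um = decoded_phase / (2 * np.pi) * (grating_period_um / 2)
        print("max decoding error (um):", np.max(np.abs(decoded_um - displacement_um)))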

  17. Millimeter-accuracy GPS landslide monitoring using Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution: a case study in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Wang, G. Q.

    2013-03-01

    Continuous Global Positioning System (GPS) monitoring is essential for establishing the rate and pattern of superficial movements of landslides. This study demonstrates a technique which uses a stand-alone GPS station to conduct millimeter-accuracy landslide monitoring. The Precise Point Positioning with Single Receiver Phase Ambiguity (PPP-SRPA) resolution employed by the GIPSY/OASIS software package (V6.1.2) was applied in this study. Two years of continuous GPS data collected at a creeping landslide were used to evaluate the accuracy of the PPP-SRPA solutions. The criterion for accuracy was the root-mean-square (RMS) of residuals of the PPP-SRPA solutions with respect to "true" landslide displacements over the two-year period. RMS is often regarded as repeatability or precision in the GPS literature; however, when contrasted with a known "true" position or displacement it can be termed RMS accuracy or simply accuracy. This study indicated that the PPP-SRPA resolution can provide an accuracy of 2 to 3 mm horizontally and 8 mm vertically for 24-hour sessions with few outliers (< 1%) in the Puerto Rico region. Horizontal accuracy below 5 mm can be stably achieved with 4-hour or longer sessions provided that data collection during extreme weather conditions is avoided. Vertical accuracy below 10 mm can be achieved with 8-hour or longer sessions. This study indicates that the PPP-SRPA resolution is competitive with the conventional carrier-phase double-difference network resolution for static (longer than 4 hours) landslide monitoring while maintaining many advantages. It is evident that the PPP-SRPA method would become an attractive alternative to the conventional carrier-phase double-difference method for landslide monitoring, notably in remote areas or developing countries.

  18. The accuracy of linear indices of ventricular volume in pediatric hydrocephalus: technical note.

    PubMed

    Ragan, Dustin K; Cerqua, Jonathon; Nash, Tiffany; McKinstry, Robert C; Shimony, Joshua S; Jones, Blaise V; Mangano, Francesco T; Holland, Scott K; Yuan, Weihong; Limbrick, David D

    2015-06-01

    Assessment of ventricular size is essential in clinical management of hydrocephalus and other neurological disorders. At present, ventricular size is assessed using indices derived from the dimensions of the ventricles rather than the actual volumes. In a population of 22 children with congenital hydrocephalus and 22 controls, the authors evaluated the relationship between ventricular volume and linear indices in common use, such as the frontooccipital horn ratio, Evans' index, and the bicaudate index. Ventricular volume was measured on high-resolution anatomical MR images. The frontooccipital horn ratio was found to have a stronger correlation with both absolute and relative ventricular volume than other indices. Further analysis of the brain volumes found that congenital hydrocephalus produced a negligible decrease in the volume of the brain parenchyma. PMID:25745953
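
    The abstract above does not spell out the index formulas. The small sketch below uses the definitions commonly found in the literature, which should be treated as assumptions rather than the study's exact measurement protocol; the measurements are made-up values in millimeters.

        # Common linear ventricular indices (definitions assumed, measurements hypothetical).
        def evans_index(frontal_horn_width, max_internal_skull_diameter):
            return frontal_horn_width / max_internal_skull_diameter

        def frontooccipital_horn_ratio(frontal_horn_width, occipital_horn_width,
                                       biparietal_diameter):
            return (frontal_horn_width + occipital_horn_width) / (2.0 * biparietal_diameter)

        def bicaudate_index(intercaudate_distance, brain_width_at_same_level):
            return intercaudate_distance / brain_width_at_same_level

        print(evans_index(38.0, 130.0))                       # ~0.29
        print(frontooccipital_horn_ratio(38.0, 42.0, 125.0))  # ~0.32
        print(bicaudate_index(18.0, 120.0))                   # ~0.15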

  19. The accuracy of linear indices of ventricular volume in pediatric hydrocephalus: technical note

    PubMed Central

    Ragan, Dustin K.; Cerqua, Jonathon; Nash, Tiffany; McKinstry, Robert C.; Shimony, Joshua S.; Jones, Blaise V.; Mangano, Francesco T.; Holland, Scott K.; Yuan, Weihong; Limbrick, David D.

    2015-01-01

    Assessment of ventricular size is essential in clinical management of hydrocephalus and other neurological disorders. At present, ventricular size is assessed using indices derived from the dimensions of the ventricles rather than the actual volumes. In a population of 22 children with congenital hydrocephalus and 22 controls, the authors evaluated the relationship between ventricular volume and linear indices in common use, such as the frontooccipital horn ratio, Evans’ index, and the bicaudate index. Ventricular volume was measured on high-resolution anatomical MR images. The frontooccipital horn ratio was found to have a stronger correlation with both absolute and relative ventricular volume than other indices. Further analysis of the brain volumes found that congenital hydrocephalus produced a negligible decrease in the volume of the brain parenchyma. PMID:25745953

  20. On loss of accuracy and non-uniqueness of solutions generated by equivalent linearization and cumulant-neglect methods

    NASA Astrophysics Data System (ADS)

    Fan, F.-G.; Ahmadi, G.

    1990-03-01

    The equivalent linearization, the Gaussian closure and the non-Gaussian cumulant-neglect closure schemes are used to analyze responses of a non-linear system with multiple potential wells under random external excitations. The resulting response statistics are compared with those obtained from Monte-Carlo simulations and exact stationary solutions to the corresponding Fokker-Planck-Kolmogorov equation. The question of uniqueness of mean-square responses for different approximation methods is also examined and discussed. The results presented show that accuracies of these approximation techniques vary depending on the nature and strength of non-linearity of the system and the intensity of excitation. For certain conditions, the exact mean-square responses are underestimated by a factor of ten or more. The Gaussian closure technique and the equivalent linearization method lead to identical results which are somewhat less accurate than those obtained by the non-Gaussian cumulant-neglect closure scheme. It is also shown that the solution generated by these techniques may not be unique.

  1. A Method for Self-Calibration in Satellite with High Precision of Space Linear Array Camera

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Qian, Fangming; Miao, Yuzhe; Wang, Rongjian

    2016-06-01

    At present, the on-orbit calibration of the geometric parameters of a space surveying camera is usually processed using data from a ground calibration field after capturing the images. The entire process is complicated and lengthy and cannot monitor and calibrate the geometric parameters in real time. On the basis of a large number of on-orbit calibrations, we found that owing to the influence of many factors, e.g., weather, it is often difficult to capture images of the ground calibration field. Thus, regular calibration using field data cannot be ensured. This article proposes a real-time self-calibration method for a space linear array camera on a satellite using the optical autocollimation principle. A collimating light source and small matrix-array CCD devices are installed inside the load system of the satellite; these use the same light path as the linear array camera. The location changes of the cross marks on the matrix-array CCD are extracted to determine the real-time variations in the focal length and angle parameters of the linear array camera. The on-orbit status of the camera is rapidly obtained using this method. On one hand, the variation pattern of the camera can be tracked accurately and the camera's attitude adjusted in a timely manner to ensure optimal photography; on the other hand, self-calibration of the camera aboard the satellite can be performed quickly, which improves the efficiency and reliability of photogrammetric processing.
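
    A minimal sketch of the generic autocollimation relation behind converting a cross-mark image shift into an angular change follows. The focal length, pixel size, and the small-angle relation Δx ≈ 2fΔθ are illustrative assumptions, not the parameters or calibration model of the satellite camera described above.

        import numpy as np

        # Assumed instrument parameters (hypothetical).
        focal_length_mm = 500.0      # collimating optics focal length
        pixel_size_um = 5.5          # matrix-array CCD pixel pitch

        def cross_mark_shift_to_angle(shift_pixels):
            """Small-angle autocollimation: the reflected beam tilts by twice the
            axis tilt, so the image shift is approximately 2*f*angle."""
            shift_mm = shift_pixels * pixel_size_um * 1e-3
            return shift_mm / (2.0 * focal_length_mm)   # radians

        angle_rad = cross_mark_shift_to_angle(0.8)       # 0.8-pixel shift of the cross mark
        print(f"estimated angular change: {np.degrees(angle_rad) * 3600:.2f} arcsec")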

  2. Accuracy and reliability of linear measurements using tangential projection and cone beam computed tomography

    PubMed Central

    Sheikhi, Mahnaz; Dakhil-Alian, Mansour; Bahreinian, Zahra

    2015-01-01

    Background: Providing a cross-sectional image is essential for preimplant assessment. Computed tomography (CT) and cone beam CT (CBCT) images are expensive and deliver a high radiation dose. Tangential projection is a very simple, available, and low-dose technique that can be used in the anterior portion of the mandible. The purpose of this study was to evaluate the accuracy of tangential projection in preimplant measurements in comparison to CBCT. Materials and Methods: Three dry edentulous human mandibles were examined at five points in the intercanine region using tangential projection and CBCT. The height and width of the ridge were measured twice by two observers. The mandibles were then cut, and real measurements were obtained. The agreement between the real measures and the measurements obtained by either technique, and the inter- and intra-observer reliability, were tested. Results: The measurement error was less than 0.12 for the tangential technique and 0.06 for CBCT. The agreement between the real measures and the measurements from radiographs was higher than 0.87. Tangential projection slightly overestimated the distances, while there was a slight underestimation in the CBCT results. Conclusion: Considering the low cost, low radiation dose, simplicity, and availability, tangential projection would be adequate for preimplant assessment in edentulous patients when a limited number of implants is required in the anterior mandible. PMID:26005469

  3. High precision and high accuracy isotopic measurement of uranium using lead and thorium calibration solutions by inductively coupled plasma-multiple collector-mass spectrometry

    SciTech Connect

    Bowen, I.; Walder, A.J.; Hodgson, T.; Parrish, R.R.

    1998-12-31

    A novel method for the high accuracy and high precision measurement of uranium isotopic composition by Inductively Coupled Plasma-Multiple Collector-Mass Spectrometry is discussed. Uranium isotopic samples are spiked with either thorium or lead for use as internal calibration reference materials. This method eliminates the necessity to periodically measure uranium standards to correct for changing mass bias when samples are measured over long time periods. This technique has generated among the highest levels of analytical precision on both the major and minor isotopes of uranium. Sample throughput has also been demonstrated to exceed Thermal Ionization Mass Spectrometry by a factor of four to five.

  4. The effects of temporal-precision and time-minimization constraints on the spatial and temporal accuracy of aimed hand movements.

    PubMed

    Carlton, L G

    1994-03-01

    Discrete aimed hand movements, made by subjects given temporal-accuracy and time-minimization task instructions, were compared. Movements in the temporal-accuracy task were made to a point target with a goal movement time of 400 ms. A circular target then was manufactured that incorporated the measured spatial errors from the temporal-accuracy task, and subjects attempted to contact the target with a minimum movement time and without missing the circular target (time-minimization task instructions). This procedure resulted in equal movement amplitude and approximately equal spatial accuracy for the two task instructions. Movements under the time-minimization instructions were completed rapidly (M = 307 ms) without target misses, and tended to be made up of two submovements. In contrast, movements under temporal-accuracy instructions were made more slowly (M = 397 ms), matching the goal movement time, and were typically characterized by a single submovement. These data support the hypothesis that movement times, at a fixed movement amplitude versus target width ratio, decrease as the number of submovements increases, and that movements produced under temporal-accuracy and time-minimization have different control characteristics. These control differences are related to the linear and logarithmic speed-accuracy relations observed for temporal-accuracy and time-minimization tasks, respectively. PMID:15757833

  5. The accuracy of single emulsion radiographic film in linear measurement of spiral tomography

    PubMed Central

    Dabbaghi, Arash; Shokraneh, Ali; Farhadi, Nastaran

    2013-01-01

    Background: Conventional tomography used for evaluation of small areas of the jaws provides acceptable information. It has the advantages of availability, lower radiation dose, and lower cost in comparison to computed tomography (CT) and cone beam CT. Double emulsion film, usually used for taking tomograms, requires less exposure than single emulsion film; on the other hand, the latter provides more sharpness and spatial resolution. The aim of this study was to compare the diagnostic accuracy of these two kinds of films in spiral tomography. Materials and Methods: In an experimental study, 20 lines (10 lines anterior and 10 lines posterior to the mental foramen) were selected on two dry human mandibles, and tomographic images were taken of each line with and without a metal marker using single and double emulsion films. For quantitative assessment, the mandibular width and height were identified and measured on the 80 obtained tomograms. Afterwards, the mandibles were sectioned at each line and their actual width and height were measured. For each line, the tomogram data were subtracted from the gold standard to give the measurement error. These errors were divided into three groups: greater than +1 mm, between +1 mm and −1 mm, and less than −1 mm. The obtained data were analyzed by the Pearson Chi-square test (α = 0.05). Results: There was no significant difference between the single and double emulsion films, with and without markers, in the measurement of both height and width of the mandible (P > 0.05). Conclusion: The single emulsion film is not recommended for taking spiral tomograms. PMID:23946736

  6. A simple algorithm improves mass accuracy to 50-100 ppm for delayed extraction linear MALDI-TOF mass spectrometry

    SciTech Connect

    Hack, Christopher A.; Benner, W. Henry

    2001-10-31

    A simple mathematical technique for improving the mass calibration accuracy of linear delayed extraction matrix-assisted laser desorption ionization time-of-flight mass spectrometry (DE MALDI-TOF MS) spectra is presented. The method involves fitting a parabola to a plot of Δm vs. mass, where Δm is the difference between the theoretical mass of the calibrants and the mass obtained from a linear relationship between the square root of m/z and ion time of flight. The quadratic equation that describes the parabola is then used to correct the mass of unknowns by subtracting the deviation predicted by the quadratic equation from the measured data. By subtracting the value of the parabola at each mass from the calibrated data, the accuracy of mass data points can be improved by factors of 10 or more. This method produces highly similar results whether or not initial ion velocity is accounted for in the calibration equation; consequently, there is no need to depend on that uncertain parameter when using the quadratic correction. This method can be used to correct the internally calibrated masses of protein digest peaks. The effect of nitrocellulose as a matrix additive is also briefly discussed, and it is shown that using nitrocellulose as an additive to a CHCA matrix does not significantly change the initial ion velocity but does change the average position of ions relative to the sample electrode at the instant the extraction voltage is applied.
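
    A minimal sketch of the quadratic correction described above, using a handful of made-up calibrant masses. The sign handling (Δm taken as theoretical minus measured, so the predicted deviation is added back to an unknown's measured mass) is an assumption consistent with that definition; the paper's exact convention is not spelled out in the abstract.

        import numpy as np

        # Hypothetical calibrant data: theoretical masses and masses from the
        # linear sqrt(m/z) vs. time-of-flight calibration (values are made up).
        theoretical = np.array([1046.5, 1296.7, 1672.9, 2093.1, 2465.2])
        measured    = np.array([1046.8, 1296.6, 1672.5, 2092.9, 2465.6])

        delta_m = theoretical - measured           # deviation for each calibrant
        coeffs = np.polyfit(measured, delta_m, 2)  # parabola through the delta_m vs. mass plot

        def correct(mass):
            """Remove the deviation predicted by the parabola from a measured mass."""
            return mass + np.polyval(coeffs, mass)

        print(correct(1500.0))   # corrected mass of an unknown peak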

  7. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On spacecraft (GFO) primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  8. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.

  9. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  10. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
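
    For readers unfamiliar with how accuracy and precision are quantified in these two records, a minimal sketch of the computation is given below; the marker positions are made-up numbers, not data from the study.

        import numpy as np

        true_pos = np.array([10.0, 10.0, 10.0, 10.0])    # known target position (mm), repeated
        measured = np.array([10.12, 9.95, 10.20, 9.88])  # tracked positions (mm)

        error = measured - true_pos
        rmse = np.sqrt(np.mean(error ** 2))   # accuracy (root-mean-square error)
        precision = np.std(error, ddof=1)     # precision (SD of the error)
        print(f"RMSE = {rmse:.3f} mm, SD = {precision:.3f} mm")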

  11. Precision Linear Actuators for the Spherical Primary Optical Telescope Demonstration Mirror

    NASA Technical Reports Server (NTRS)

    Budinoff, Jason; Pfenning, David

    2006-01-01

    The Spherical Primary Optical Telescope (SPOT) is an ongoing research effort at Goddard Space Flight Center developing wavefront sensing and control architectures for future space telescopes. The 3.5-m SPOT telescope primary mirror is comprised of six 0.86-m hexagonal mirror segments arranged in a single ring, with the central segment missing. The mirror segments are designed for laboratory use and are not lightweighted, to reduce cost. Each primary mirror segment is actuated and has tip, tilt, and piston rigid-body motions. Additionally, the radius of curvature of each mirror segment may be varied mechanically. To provide these degrees of freedom, the SPOT mirror segment assembly requires linear actuators capable of

  12. Linear and Logarithmic Speed-Accuracy Trade-Offs in Reciprocal Aiming Result from Task-Specific Parameterization of an Invariant Underlying Dynamics

    ERIC Educational Resources Information Center

    Bongers, Raoul M.; Fernandez, Laure; Bootsma, Reinoud J.

    2009-01-01

    The authors examined the origins of linear and logarithmic speed-accuracy trade-offs from a dynamic systems perspective on motor control. In each experiment, participants performed 2 reciprocal aiming tasks: (a) a velocity-constrained task in which movement time was imposed and accuracy had to be maximized, and (b) a distance-constrained task in…

  13. [The linear hyperspectral camera rotating scan imaging geometric correction based on the precise spectral sampling].

    PubMed

    Wang, Shu-min; Zhang, Ai-wu; Hu, Shao-xing; Wang, Jing-meng; Meng, Xian-gang; Duan, Yi-hao; Sun, Wei-dong

    2015-02-01

    During image collection, the rotation speed of the ground-based hyperspectral imaging system can exceed its speed limit, so data are missed in the rectified image and appear as black lines. At the same time, there is serious distortion in the collected raw images, which affects feature information classification and identification. To solve these problems, this paper first introduces each component of the ground-based hyperspectral imaging system and gives the general process of data collection. The rotation speed is controlled during data collection according to the image coverage area of each frame and the image collection speed of the ground-based hyperspectral imaging system. The spatial orientation model is then derived in detail, combining the start scanning angle, the stop scanning angle, the minimum distance between the sensor and the scanned object, etc. The oriented image is divided into grids and resampled with new spectra. The general flow of correcting the distorted image is presented in this paper. Since the image spatial resolution differs between adjacent frames, and in order to keep the highest image resolution in the corrected image, the minimum ground sampling distance is employed as the grid unit to divide the geo-referenced image. Taking into account the spectral distortion caused by the direct sampling method when the new uniform grids and the old uneven grids are superimposed to take the pixel value, a precise spectral sampling method based on the position distribution is proposed. A distorted image collected at the Lao Si Cheng ruin in Zhangjiajie, Hunan province, was corrected with the proposed algorithm. The features keep their original geometric characteristics, which verifies the validity of the algorithm. We also extracted the spectra of different features to compute the correlation coefficient. The results show that the improved spectral sampling method is

  14. On detector linearity and precision of beam shift detection for quantitative differential phase contrast applications.

    PubMed

    Zweck, Josef; Schwarzhuber, Felix; Wild, Johannes; Galioit, Vincent

    2016-09-01

    Differential phase contrast is a STEM imaging mode where minute sideways deflections of the electron probe are monitored, usually by using a position sensitive device (Chapman, 1984 [1]; Lohr et al., 2012 [2]) or, alternatively in some cases, a fast camera (Müller et al., 2012 [3,4]; Yang et al., 2015 [5]; Pennycook et al., 2015 [6]) as a pixelated detector. While traditionally differential phase contrast electron microscopy was mainly focused on investigations of micro-magnetic domain structures and their specific features, such as domain wall widths, etc. (Chapman, 1984 [1]; Chapman et al., 1978, 1981, 1985 [7-9]; Sannomiya et al., 2004 [10]), its usage has recently been extended to mesoscopic (Lohr et al., 2012, 2016 [2,12]; Bauer et al., 2014 [11]; Shibata et al., 2015 [13]) and nano-scale electric fields (Shibata et al., 2012 [14]; Mueller et al., 2014 [15]). In this paper, the various interactions which can cause a beam deflection are reviewed and expanded by two so far undiscussed mechanisms which may be important for biological applications. As differential phase contrast microscopy strongly depends on the ability to detect minute beam deflections we first treat the linearity problem for an annular four quadrant detector and then determine the factors which limit the minimum measurable deflection angle, such as S/N ratio, current density, dwell time and detector geometry. Knowing these factors enables the experimenter to optimize the set-up for optimum performance of the microscope and to get a clear figure for the achievable field resolution error margins. PMID:27376783
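
    The sketch below shows the standard normalized-difference estimate of a beam shift from a four-quadrant detector, the kind of minute-deflection measurement discussed above. The quadrant layout, the example intensities, and the linear small-shift approximation are illustrative assumptions rather than the specific detector geometry and linearity analysis of the paper.

        import numpy as np

        def beam_shift_signals(a, b, c, d):
            """Normalized difference signals from four quadrant intensities.

            Assumed (hypothetical) layout: a = top-left, b = top-right,
            c = bottom-left, d = bottom-right. For small shifts of a round
            beam the normalized differences are approximately proportional
            to the beam displacement on the detector.
            """
            total = a + b + c + d
            dx = ((b + d) - (a + c)) / total   # left/right imbalance
            dy = ((a + b) - (c + d)) / total   # top/bottom imbalance
            return dx, dy

        print(beam_shift_signals(1.00, 1.02, 0.99, 1.01))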

  15. Improvements in dose accuracy delivered with static-MLC IMRT on an integrated linear accelerator control system

    SciTech Connect

    Li Ji; Wiersma, Rodney D.; Stepaniak, Christopher J.; Farrey, Karl J.; Al-Hallaq, Hania A.

    2012-05-15

    Purpose: Dose accuracy has been shown to vary with dose per segment and dose rate when delivered with static multileaf collimator (SMLC) intensity modulated radiation therapy (IMRT) by Varian C-series MLC controllers. The authors investigated the impact of monitor units (MUs) per segment and dose rate on the dose delivery accuracy of SMLC-IMRT fields on a Varian TrueBeam linear accelerator (LINAC), which delivers dose and manages motion of all components using a single integrated controller. Methods: An SMLC sequence was created consisting of ten identical 10 × 10 cm² segments with identical MUs. Beam holding between segments was achieved by moving one out-of-field MLC leaf pair. Measurements were repeated for various combinations of MU/segment ranging from 1 to 40 and dose rates of 100-600 MU/min for a 6 MV photon beam (6X) and dose rates of 800-2400 MU/min for a 10 MV flattening-filter free photon (10XFFF) beam. All measurements were made with a Farmer (0.6 cm³) ionization chamber placed at the isocenter in a solid-water phantom at 10 cm depth. The measurements were performed on two Varian LINACs: C-series Trilogy and TrueBeam. Each sequence was delivered three times and the dose readings for the corresponding segments were averaged. The effects of MU/segment, dose rate, and LINAC type on the relative dose variation (Δi) were compared using F-tests (α = 0.05). Results: On the Trilogy, large Δi was observed in small MU segments: at 1 MU/segment, the maximum Δi was 10.1% and 57.9% at 100 MU/min and 600 MU/min, respectively. Also, the first segment of each sequence consistently overshot (Δi > 0), while the last segment consistently undershot (Δi < 0). On the TrueBeam, at 1 MU/segment, Δi ranged from 3.0% to 4.5% at 100 and 600 MU/min; no obvious overshoot/undershoot trend was observed. F-tests showed a statistically significant difference [(1 − β) = 1.0000] between the

  16. Towards obtaining spatiotemporally precise responses to continuous sensory stimuli in humans: a general linear modeling approach to EEG.

    PubMed

    Gonçalves, Nuno R; Whelan, Robert; Foxe, John J; Lalor, Edmund C

    2014-08-15

    Noninvasive investigation of human sensory processing with high temporal resolution typically involves repeatedly presenting discrete stimuli and extracting an average event-related response from scalp recorded neuroelectric or neuromagnetic signals. While this approach is and has been extremely useful, it suffers from two drawbacks: a lack of naturalness in terms of the stimulus and a lack of precision in terms of the cortical response generators. Here we show that a linear modeling approach that exploits functional specialization in sensory systems can be used to rapidly obtain spatiotemporally precise responses to complex sensory stimuli using electroencephalography (EEG). We demonstrate the method by example through the controlled modulation of the contrast and coherent motion of visual stimuli. Regressing the data against these modulation signals produces spatially focal, highly temporally resolved response measures that are suggestive of specific activation of visual areas V1 and V6, respectively, based on their onset latency, their topographic distribution and the estimated location of their sources. We discuss our approach by comparing it with fMRI/MRI informed source analysis methods and, in doing so, we provide novel information on the timing of coherent motion processing in human V6. Generalizing such an approach has the potential to facilitate the rapid, inexpensive spatiotemporal localization of higher perceptual functions in behaving humans. PMID:24736185
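
    A minimal sketch of the linear modeling idea described above: a continuous stimulus modulation signal is lagged into a design matrix and regressed against each EEG channel to estimate a temporal response function. The synthetic data, sampling rate, lag range, and the simple ordinary-least-squares estimator are assumptions for illustration only, not the study's actual analysis pipeline.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 100                                   # assumed sampling rate (Hz)
        n_samples, n_channels, n_lags = 5000, 8, 30

        stimulus = rng.normal(size=n_samples)      # e.g. a contrast-modulation signal
        eeg = rng.normal(size=(n_samples, n_channels)) * 0.5
        eeg[:, 0] += np.convolve(stimulus, np.hanning(n_lags), mode="same")  # one "responsive" channel

        # Design matrix of lagged copies of the stimulus (lags 0 .. n_lags-1 samples).
        X = np.column_stack([np.roll(stimulus, lag) for lag in range(n_lags)])
        X[:n_lags, :] = 0.0                        # discard wrapped-around samples

        # Least-squares estimate of the response function for every channel at once.
        trf, *_ = np.linalg.lstsq(X, eeg, rcond=None)   # shape: (n_lags, n_channels)
        print("peak response lag (ms), channel 0:", np.argmax(np.abs(trf[:, 0])) / fs * 1000)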

  17. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    SciTech Connect

    Kaminski, Radoslaw; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-06-24

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3-4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample.

  18. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3–4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  19. Accuracy and precision of end-expiratory lung-volume measurements by automated nitrogen washout/washin technique in patients with acute respiratory distress syndrome

    PubMed Central

    2011-01-01

    Introduction End-expiratory lung volume (EELV) is decreased in acute respiratory distress syndrome (ARDS), and bedside EELV measurement may help to set positive end-expiratory pressure (PEEP). Nitrogen washout/washin for EELV measurement is available at the bedside, but assessments of accuracy and precision in real-life conditions are scant. Our purpose was to (a) assess EELV measurement precision in ARDS patients at two PEEP levels (three pairs of measurements), and (b) compare the changes (Δ) induced by PEEP for total EELV with the PEEP-induced changes in lung volume above functional residual capacity measured with passive spirometry (ΔPEEP-volume). The minimal predicted increase in lung volume was calculated from compliance at low PEEP and ΔPEEP to ensure the validity of lung-volume changes. Methods Thirty-four patients with ARDS were prospectively included in five university-hospital intensive care units. ΔEELV and ΔPEEP volumes were compared between 6 and 15 cm H2O of PEEP. Results After exclusion of three patients, variability of the nitrogen technique was less than 4%, and the largest difference between measurements was 81 ± 64 ml. ΔEELV and ΔPEEP-volume were only weakly correlated (r2 = 0.47; 95% confidence interval limits, -414 to 608 ml). In four patients with the highest PEEP (≥ 16 cm H2O), ΔEELV was lower than the minimal predicted increase in lung volume, suggesting flawed measurements, possibly due to leaks. Excluding those from the analysis markedly strengthened the correlation between ΔEELV and ΔPEEP volume (r2 = 0.80). Conclusions In most patients, the EELV technique has good reproducibility and accuracy, even at high PEEP. At high pressures, its accuracy may be limited in case of leaks. The minimal predicted increase in lung volume may help to check for accuracy. PMID:22166727

  20. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision to better than 0.1% on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ashbed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is the relatively coarse spatial resolution compared to in situ techniques, and thus averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. The same grains are then removed from grain-mount, fragmented, and

  1. Effect of the impression margin thickness on the linear accuracy of impression and stone dies: an in vitro study.

    PubMed

    Naveen, Y G; Patil, Raghunath

    2013-03-01

    The space available for impression material in the gingival sulcus immediately after removal of the retraction cord has been found to be 0.3-0.4 mm. However, after 40 s only 0.2 mm of the retracted space is available. This is of concern when an impression of multiple abutments is to be made. Hence a study was planned to determine the minimum width of the retracted sulcus necessary to obtain a good impression. Five metal dies were machined to accurately fit a stainless steel block with a square cavity in the center, with spaces 1 mm deep and of varying widths (0.11-0.3 mm) away from the block. Polyvinyl siloxane impressions were made and poured using a high-strength stone. Using a traveling microscope, the lengths and widths of the abutment, impression, and die were measured and compared for linear accuracy and completeness of the impression. Results showed 1.5-3 times greater mean distortion and a larger coefficient of variance in the 0.11 mm group than in the wider sulcular groups. The ANOVA test for distortion also showed statistically significant differences (P < 0.05). 75% of impressions in the 0.11 mm group were defective, compared to less than 25% of impressions in the other width groups. It is not always possible to predictably obtain accurate impressions at a sulcus width of 0.11 mm or less. Dimensionally accurate and defect-free impressions were obtained at sulcus widths of 0.15 mm and wider. Hence clinicians must choose retraction methods to obtain a width greater than 0.35 mm. Further, immediate loading of the impression material after cord removal may improve accuracy. PMID:24431701

  2. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli.

    PubMed

    Mandelkow, Hendrik; de Zwart, Jacco A; Duyn, Jeff H

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these

  3. Linear Discriminant Analysis Achieves High Classification Accuracy for the BOLD fMRI Response to Naturalistic Movie Stimuli

    PubMed Central

    Mandelkow, Hendrik; de Zwart, Jacco A.; Duyn, Jeff H.

    2016-01-01

    Naturalistic stimuli like movies evoke complex perceptual processes, which are of great interest in the study of human cognition by functional MRI (fMRI). However, conventional fMRI analysis based on statistical parametric mapping (SPM) and the general linear model (GLM) is hampered by a lack of accurate parametric models of the BOLD response to complex stimuli. In this situation, statistical machine-learning methods, a.k.a. multivariate pattern analysis (MVPA), have received growing attention for their ability to generate stimulus response models in a data-driven fashion. However, machine-learning methods typically require large amounts of training data as well as computational resources. In the past, this has largely limited their application to fMRI experiments involving small sets of stimulus categories and small regions of interest in the brain. By contrast, the present study compares several classification algorithms known as Nearest Neighbor (NN), Gaussian Naïve Bayes (GNB), and (regularized) Linear Discriminant Analysis (LDA) in terms of their classification accuracy in discriminating the global fMRI response patterns evoked by a large number of naturalistic visual stimuli presented as a movie. Results show that LDA regularized by principal component analysis (PCA) achieved high classification accuracies, above 90% on average for single fMRI volumes acquired 2 s apart during a 300 s movie (chance level 0.7% = 2 s/300 s). The largest source of classification errors were autocorrelations in the BOLD signal compounded by the similarity of consecutive stimuli. All classifiers performed best when given input features from a large region of interest comprising around 25% of the voxels that responded significantly to the visual stimulus. Consistent with this, the most informative principal components represented widespread distributions of co-activated brain regions that were similar between subjects and may represent functional networks. In light of these
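
    A minimal sketch of the PCA-regularized LDA classification described above, using scikit-learn with random stand-in data; the data dimensions, number of stimulus classes, and component count are assumptions, not the study's fMRI dataset or preprocessing.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n_volumes, n_voxels, n_classes = 200, 2000, 10
        X = rng.normal(size=(n_volumes, n_voxels))      # one row per fMRI volume (features = voxels)
        y = rng.integers(0, n_classes, size=n_volumes)  # stimulus label for each volume

        # PCA reduces the voxel space before LDA, which acts as the regularization.
        clf = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
        scores = cross_val_score(clf, X, y, cv=5)
        print("mean cross-validated accuracy:", scores.mean())  # ~chance for random data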

  4. Technical note: precision and accuracy of in vitro digestion of neutral detergent fiber and predicted net energy of lactation content of fibrous feeds.

    PubMed

    Spanghero, M; Berzaghi, P; Fortina, R; Masoero, F; Rapetti, L; Zanfi, C; Tassone, S; Gallo, A; Colombini, S; Ferlito, J C

    2010-10-01

    The objective of this study was to test the precision and agreement with in situ data (accuracy) of neutral detergent fiber degradability (NDFD) obtained with the rotating jar in vitro system (Daisy(II) incubator, Ankom Technology, Fairport, NY). Moreover, the precision of the chemical assays requested by the National Research Council (2001) for feed energy calculations and the estimated net energy of lactation contents were evaluated. Precision was measured as standard deviation (SD) of reproducibility (S(R)) and repeatability (S(r)) (between- and within-laboratory variability, respectively), which were expressed as coefficients of variation (SD/mean × 100, S(R) and S(r), respectively). Ten fibrous feed samples (alfalfa dehydrated, alfalfa hay, corn cob, corn silage, distillers grains, meadow hay, ryegrass hay, soy hulls, wheat bran, and wheat straw) were analyzed by 5 laboratories. Analyses of dry matter (DM), ash, crude protein (CP), neutral detergent fiber (NDF), and acid detergent fiber (ADF) had satisfactory S(r), from 0.4 to 2.9%, and S(R), from 0.7 to 6.2%, with the exception of ether extract (EE) and CP bound to NDF or ADF. Extending the fermentation time from 30 to 48 h increased the NDFD values (from 42 to 54% on average across all tested feeds) and improved the NDFD precision, in terms of both S(r) (12 and 7% for 30 and 48 h, respectively) and S(R) (17 and 10% for 30 and 48 h, respectively). The net energy for lactation (NE(L)) predicted from 48-h incubation NDFD data approximated well the tabulated National Research Council (2001) values for several feeds, and the improvement in NDFD precision given by longer incubations (48 vs. 30 h) also improved precision of the NE(L) estimates from 11 to 8%. Data obtained from the rotating jar in vitro technique compared well with in situ data. In conclusion, the adoption of a 48-h period of incubation improves repeatability and reproducibility of NDFD and accuracy and reproducibility of the associated calculated
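
    The repeatability and reproducibility figures quoted above are coefficients of variation. A minimal sketch of how within- and between-laboratory CVs might be computed from replicate measurements follows, using made-up numbers rather than the study's data; the simple one-way variance decomposition used here is an assumption, since the study may have followed a formal ISO-style analysis.

        import numpy as np

        # Hypothetical replicate NDFD results (%) for one feed: 5 laboratories x 3 replicates.
        data = np.array([
            [52.1, 53.4, 51.8],
            [55.0, 54.2, 55.9],
            [50.3, 51.1, 49.8],
            [53.7, 52.9, 54.4],
            [56.2, 55.5, 57.0],
        ])
        n_labs, n_rep = data.shape

        grand_mean = data.mean()
        within_var = data.var(axis=1, ddof=1).mean()                 # repeatability variance
        between_var = max(data.mean(axis=1).var(ddof=1) - within_var / n_rep, 0.0)
        repro_var = within_var + between_var                         # reproducibility variance

        s_r = 100 * np.sqrt(within_var) / grand_mean   # repeatability CV (%)
        s_R = 100 * np.sqrt(repro_var) / grand_mean    # reproducibility CV (%)
        print(f"S(r) = {s_r:.1f}%, S(R) = {s_R:.1f}%")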

  5. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977
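
    A rough sketch of computing vein length per unit area (VLA) from a binary vein mask by skeletonizing it and counting skeleton pixels; this is a generic approximation, not the LEAFGUI or ImageJ tracing workflow evaluated in the study, and the pixel size and mask below are hypothetical.

        import numpy as np
        from skimage.morphology import skeletonize

        vein_mask = np.zeros((200, 200), dtype=bool)   # stand-in for a segmented vein image
        vein_mask[100, 20:180] = True                  # a single horizontal "vein"

        pixel_size_mm = 0.01                           # mm per pixel (depends on magnification)
        skeleton = skeletonize(vein_mask)

        vein_length_mm = skeleton.sum() * pixel_size_mm        # crude length estimate
        leaf_area_mm2 = vein_mask.size * pixel_size_mm ** 2    # whole image treated as leaf area
        print("VLA (mm per mm^2):", vein_length_mm / leaf_area_mm2)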

  6. Evaluation of accuracy of non-linear finite element computations for surgical simulation: study using brain phantom.

    PubMed

    Ma, J; Wittek, A; Singh, S; Joldes, G; Washio, T; Chinzei, K; Miller, K

    2010-12-01

    In this paper, the accuracy of non-linear finite element computations in application to surgical simulation was evaluated by comparing the experiment and modelling of indentation of the human brain phantom. The evaluation was realised by comparing forces acting on the indenter and the deformation of the brain phantom. The deformation of the brain phantom was measured by tracking 3D motions of X-ray opaque markers, placed within the brain phantom using a custom-built bi-plane X-ray image intensifier system. The model was implemented using the ABAQUS(TM) finite element solver. Realistic geometry obtained from magnetic resonance images and specific constitutive properties determined through compression tests were used in the model. The model accurately predicted the indentation force-displacement relations and marker displacements. Good agreement between modelling and experimental results verifies the reliability of the finite element modelling techniques used in this study and confirms the predictive power of these techniques in surgical simulation. PMID:21153973

  7. Automatic optimal filament segmentation with sub-pixel accuracy using generalized linear models and B-spline level-sets.

    PubMed

    Xiao, Xun; Geyer, Veikko F; Bowne-Anderson, Hugo; Howard, Jonathon; Sbalzarini, Ivo F

    2016-08-01

    Biological filaments, such as actin filaments, microtubules, and cilia, are often imaged using different light-microscopy techniques. Reconstructing the filament curve from the acquired images constitutes the filament segmentation problem. Since filaments have lower dimensionality than the image itself, there is an inherent trade-off between tracing the filament with sub-pixel accuracy and avoiding noise artifacts. Here, we present a globally optimal filament segmentation method based on B-spline vector level-sets and a generalized linear model for the pixel intensity statistics. We show that the resulting optimization problem is convex and can hence be solved with global optimality. We introduce a simple and efficient algorithm to compute such optimal filament segmentations, and provide an open-source implementation as an ImageJ/Fiji plugin. We further derive an information-theoretic lower bound on the filament segmentation error, quantifying how well an algorithm could possibly do given the information in the image. We show that our algorithm asymptotically reaches this bound in the spline coefficients. We validate our method in comprehensive benchmarks, compare with other methods, and show applications from fluorescence, phase-contrast, and dark-field microscopy. PMID:27104582

  8. Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences

    SciTech Connect

    Jan Hesthaven

    2012-02-06

    Final report for DOE Contract DE-FG02-98ER25346, entitled Parallel High Order Accuracy Methods Applied to Non-Linear Hyperbolic Equations and to Problems in Materials Sciences. Principal Investigator: Jan S. Hesthaven, Division of Applied Mathematics, Brown University, Box F, Providence, RI 02912, Jan.Hesthaven@Brown.edu. February 6, 2012. Note: This grant was originally awarded to Professor David Gottlieb, and the majority of the work envisioned reflects his original ideas. However, when Prof. Gottlieb passed away in December 2008, Professor Hesthaven took over as PI to ensure proper mentoring of students and postdoctoral researchers already involved in the project. This unusual circumstance has naturally impacted the project and its timeline. However, as the report reflects, the planned work has been accomplished and some activities beyond the original scope have been pursued with success. Project overview and main results: The effort in this project focuses on the development of high order accurate computational methods for the solution of hyperbolic equations with application to problems with strong shocks. While the methods are general, emphasis is on applications to gas dynamics with strong shocks.

  9. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecisions and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  10. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  11. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES.

    PubMed

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-06-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  12. Development of a compact, fiber-coupled, six degree-of-freedom measurement system for precision linear stage metrology

    NASA Astrophysics Data System (ADS)

    Yu, Xiangzhi; Gillmer, Steven R.; Woody, Shane C.; Ellis, Jonathan D.

    2016-06-01

    A compact, fiber-coupled, six degree-of-freedom measurement system which enables fast, accurate calibration and error mapping of precision linear stages is presented. The novel design has the advantages of simplicity, compactness, and relatively low cost. This proposed sensor can simultaneously measure displacement, two straightness errors, and changes in pitch, yaw, and roll using a single optical beam traveling between the measurement system and a small target. The optical configuration of the system and the working principle for all degrees-of-freedom are presented along with the influence and compensation of crosstalk motions in roll and straightness measurements. Several comparison experiments are conducted to investigate the feasibility and performance of the proposed system in each degree-of-freedom independently. Comparison experiments to a commercial interferometer demonstrate error standard deviations of 0.33 μm in straightness, 0.14 μrad in pitch, 0.44 μrad in yaw, and 45.8 μrad in roll.

  13. A precise measurement of the left-right asymmetry of Z Boson production at the SLAC linear collider

    SciTech Connect

    1994-09-01

    We present a precise measurement of the left-right cross section asymmetry of Z boson production (A{sub LR}) observed in 1993 data at the SLAC linear collider. The A{sub LR} experiment provides a direct measure of the effective weak mixing angle through the initial state couplings of the electron to the Z. During the 1993 run of the SLC, the SLD detector recorded 49,392 Z events produced by the collision of longitudinally polarized electrons on unpolarized positrons at a center-of-mass energy of 91.26 GeV. A Compton polarimeter measured the luminosity-weighted electron polarization to be (63.4{+-}1.3)%. ALR was measured to be 0.1617{+-}0.0071(stat.){+-}0.0033(syst.), which determines the effective weak mixing angle to be sin {sup 2}{theta}{sub W}{sup eff} = 0.2292{+-}0.0009(stat.){+-}0.0004(syst.). This measurement of A{sub LR} is incompatible at the level of two standard deviations with the value predicted by a fit of several other electroweak measurements to the Standard Model.
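
    As context for how A(LR) constrains the weak mixing angle, the following is a minimal sketch (not from the report) that inverts the tree-level relation A(LR) = 2x/(1 + x^2) with x = 1 - 4 sin^2(theta) and propagates the quoted uncertainties numerically. The published result uses the effective angle with full electroweak corrections, so this only approximately reproduces the quoted value.

```python
import numpy as np

def sin2theta_from_alr(alr):
    """Invert A_LR = 2x / (1 + x^2) with x = 1 - 4*sin^2(theta)."""
    x = (1.0 - np.sqrt(1.0 - alr**2)) / alr   # physical root with |x| < 1
    return (1.0 - x) / 4.0

alr, stat, syst = 0.1617, 0.0071, 0.0033       # values quoted in the record
central = sin2theta_from_alr(alr)

# Crude numerical error propagation: shift A_LR by one sigma at a time
d_stat = abs(sin2theta_from_alr(alr + stat) - central)
d_syst = abs(sin2theta_from_alr(alr + syst) - central)
print(f"sin^2(theta_eff) ~ {central:.4f} "
      f"+/- {d_stat:.4f}(stat) +/- {d_syst:.4f}(syst)")
# -> roughly 0.2297 +/- 0.0009 +/- 0.0004, close to the quoted 0.2292
```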

  14. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  15. Accuracy of pregnancy diagnosis and prediction of foetal numbers in sheep with linear-array real-time ultrasound scanning.

    PubMed

    Taverne, M A; Lavoir, M C; van Oord, R; van der Weyden, G C

    1985-10-01

    Pregnancy diagnosis was carried out in sheep by means of transabdominal linear-array real-time ultrasound scanning. Animals were restrained standing, and the transducer was placed on the hairless area of the ventral abdominal wall just in front of the udder. Of a total of 818 tests, 724 were performed between days 29 and 89 of pregnancy, 598 animals subsequently lambed and 126 were non-lambing animals. Only 8 of these tests were wrong: 3 false positive and 5 false negative diagnoses. Sensitivity, specificity, and positive and negative predictive values for these tests were 99.2%, 97.6%, 99.5%, and 96%, respectively. There was evidence to indicate that the three false positive tests were caused by foetal mortality or unobserved abortions that took place after testing. Only 2 of the 5 false negative tests were carried out after day 39 of gestation. Counting of foetal numbers (1, 2 or 3) was performed in only some animals (n = 210) between days 45 and 77 of gestation. Three groups of animals (A: 89 ewes; B: 27 PMSG-treated ewes; C: 94 ewes) were analyzed separately. Overall accuracy of all predictions was 83.1%, 37.0% and 78.7% for the 3 groups, respectively. Animals in group B produced only 3 or more lambs. Sensitivity of the counts of singles, twins and triplets or more was 90.4%, 90.4% and 50%, respectively, for the animals from group A and 91.9%, 86% and 21.4% for the animals from group C. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:3907116
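
    A minimal sketch of how the reported diagnostic figures follow from the raw counts in the abstract. The 2x2 table is reconstructed on the assumption that 593 of the 598 lambing ewes tested positive (5 false negatives) and 123 of the 126 non-lambing ewes tested negative (3 false positives).

```python
# Reconstructed 2x2 table (assumed from the abstract's counts)
tp = 598 - 5   # lambing ewes with a positive test (5 false negatives)
fn = 5
tn = 126 - 3   # non-lambing ewes with a negative test (3 false positives)
fp = 3

sensitivity = tp / (tp + fn)          # 593/598
specificity = tn / (tn + fp)          # 123/126
ppv = tp / (tp + fp)                  # 593/596 (positive predictive value)
npv = tn / (tn + fn)                  # 123/128 (negative predictive value)

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
# -> approximately 99.2%, 97.6%, 99.5%, 96.1%, matching the record
```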

  16. Factors influencing accuracy and precision in the determination of the elemental composition of defense waste glass by ICP-emission spectrometry

    SciTech Connect

    Goode, S.R.

    1995-12-31

    The influence of instrumental factors on the accuracy and precision of the determination of the composition of glass and glass feedstock is presented. In addition, the effects of different methods of sampling, dissolution methods, and standardization procedures and their effect on the quality of the chemical analysis will also be presented. The target glass simulates the material that will be prepared by the vitrification of highly radioactive liquid defense waste. The glass and feedstock streams must be well characterized to ensure a durable glass; current models estimate a 100,000 year lifetime. The elemental composition will be determined by ICP-emission spectrometry with radiation exposure issues requiring a multielement analysis for all constituents, on a single analytical sample, using compromise conditions.

  17. Precision digital control systems

    NASA Astrophysics Data System (ADS)

    Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.

    This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.

  18. A Study of the Accuracy and Precision Among XRF, ICP-MS, and PIXE on Trace Element Analyses of Small Water Samples

    NASA Astrophysics Data System (ADS)

    Naik, Sahil; Patnaik, Ritish; Kummari, Venkata; Phinney, Lucas; Dhoubhadel, Mangal; Jesseph, Aaron; Hoffmann, William; Verbeck, Guido; Rout, Bibhudutta

    2010-10-01

    The study aimed to compare the viability, precision, and accuracy of three popular instruments - X-ray Fluorescence (XRF), Inductively Coupled Plasma Mass Spectrometry (ICP-MS), and Particle-Induced X-ray Emission (PIXE) - used to analyze the trace elemental composition of small water samples. Ten-milliliter water samples from public tap water sources in seven different localities in India (Bangalore, Kochi, Bhubaneswar, Cuttack, Puri, Hospet, and Pipili) were prepared through filtration and dilution for proper analysis. The project hypothesizes that ICP-MS will give the most accurate and precise trace elemental analysis, followed by PIXE and XRF. XRF is expected to serve as a portable and affordable instrument that can analyze samples on-site, while ICP-MS is an extremely accurate but expensive option for off-site analyses. PIXE is expected to be too expensive and cumbersome for on-site analysis; however, laboratories with a PIXE accelerator can use the instrument to obtain accurate analyses.

  19. Simultaneous determination of triazine herbicides in rice by high-performance liquid chromatography coupled with high resolution and high mass accuracy hybrid linear ion trap-orbitrap mass spectrometry.

    PubMed

    Mou, Ren-Xiang; Chen, Ming-Xue; Cao, Zhao-Yun; Zhu, Zhi-Wei

    2011-11-01

    A method was developed for the simultaneous determination of 10 triazine herbicides (cyanazine, simazine, simetryn, metribuzin, atrazine, ametryn, terbuthylazine, prometryn, terbutryn, and dimethametryn) in rice samples using a high resolution and high mass accuracy hybrid linear ion trap-Orbitrap mass spectrometer. After extraction with acetonitrile and evaporation, the herbicides were redissolved in n-hexane and purified on a Florisil solid-phase extraction column. All compounds were separated within 12 min, producing more than 11 data points for each herbicide and high mass accuracy quantifier ions for which the absolute mass errors were less than 1.9 ppm in pure solution and 2.1 ppm in the matrix-matched standard solutions. The method was validated in terms of the limits of detection and the limits of quantification. The linearity was satisfactory, with a correlation coefficient of >0.9975. Precision and recovery studies were evaluated at three concentration levels for Japonica, Indica, and Glutinous rice matrices. The mean recoveries obtained for all analytes in spiked Xiushui 03, Liangyoupeijiu, and Taihunuo rice samples were 83.3-99.0%, 82.0-99.7%, and 84.2-99.4%, respectively, with relative standard deviations in the ranges 1.7-10.6%, 1.2-10.7%, and 1.9-11.6%, respectively. The intra-day precision (n=5) for the 10 herbicides in rice samples spiked at an intermediate level was between 2.8% and 7.9%, and the inter-day precision over 10 days (n=10) was between 5.5% and 15.9%. PMID:21995922
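
    For the mass-accuracy figures quoted above (mass errors below about 2 ppm), the following is a minimal sketch of the standard ppm mass-error calculation. The theoretical m/z is the commonly tabulated value for protonated atrazine; the "measured" value is invented for illustration.

```python
def mass_error_ppm(measured_mz: float, theoretical_mz: float) -> float:
    """Relative mass error in parts per million."""
    return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

# Example: atrazine [M+H]+ (theoretical monoisotopic m/z ~ 216.1010);
# the measured value below is invented for illustration.
theoretical = 216.1010
measured = 216.1014
print(f"{mass_error_ppm(measured, theoretical):+.2f} ppm")  # ~ +1.9 ppm
```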

  20. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) the concentrations of the analyte element (Hg) and of the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. With neither PN nor CVG was any evidence of mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation that affected the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
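
    A minimal sketch (not the authors' code) of the external sample-standard bracketing (SSB) correction mentioned in the record, assuming one measured Hg isotope ratio bracketed by two runs of a standard of known ratio. In the double-correction variant, the Tl-based Russell/Baxter correction would be applied to the raw ratios before this step. All numerical values are invented for illustration.

```python
def ssb_correct(r_sample, r_std_before, r_std_after, r_std_true):
    """Correct a measured isotope ratio by sample-standard bracketing."""
    r_std_meas = 0.5 * (r_std_before + r_std_after)   # interpolate instrumental drift
    return r_sample * r_std_true / r_std_meas

def delta_permil(r_corrected, r_std_true):
    """Delta value (per mil) relative to the bracketing standard."""
    return (r_corrected / r_std_true - 1.0) * 1000.0

# Invented 202Hg/198Hg ratios for illustration
r_true = 2.9630          # assumed reference ratio of the bracketing standard
corrected = ssb_correct(r_sample=2.9581,
                        r_std_before=2.9512,
                        r_std_after=2.9520,
                        r_std_true=r_true)
print(f"delta202Hg = {delta_permil(corrected, r_true):+.2f} per mil")
```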

  1. Dual-energy X-ray absorptiometry for measuring total bone mineral content in the rat: study of accuracy and precision.

    PubMed

    Casez, J P; Muehlbauer, R C; Lippuner, K; Kelly, T; Fleisch, H; Jaeger, P

    1994-07-01

    Sequential studies of osteopenic bone disease in small animals require the availability of non-invasive, accurate and precise methods to assess bone mineral content (BMC) and bone mineral density (BMD). Dual-energy X-ray absorptiometry (DXA), which is currently used in humans for this purpose, can also be applied to small animals by means of adapted software. Precision and accuracy of DXA was evaluated in 10 rats weighing 50-265 g. The rats were anesthetized with a mixture of ketamine-xylazine administered intraperitoneally. Each rat was scanned six times consecutively in the antero-posterior incidence after repositioning using the rat whole-body software for determination of whole-body BMC and BMD (Hologic QDR 1000, software version 5.52). Scan duration was 10-20 min depending on rat size. After the last measurement, rats were sacrificed and soft tissues were removed by dermestid beetles. Skeletons were then scanned in vitro (ultra high resolution software, version 4.47). Bones were subsequently ashed and dissolved in hydrochloric acid and total body calcium directly assayed by atomic absorption spectrophotometry (TBCa[chem]). Total body calcium was also calculated from the DXA whole-body in vivo measurement (TBCa[DXA]) and from the ultra high resolution measurement (TBCa[UH]) under the assumption that calcium accounts for 40.5% of the BMC expressed as hydroxyapatite. Precision error for whole-body BMC and BMD (mean +/- S.D.) was 1.3% and 1.5%, respectively. Simple regression analysis between TBCa[DXA] or TBCa[UH] and TBCa[chem] revealed tight correlations (r = 0.991 and 0.996, respectively), with slopes and intercepts which were significantly different from 1 and 0, respectively.(ABSTRACT TRUNCATED AT 250 WORDS) PMID:7950505

  2. The accuracy and precision of two non-invasive, magnetic resonance-guided focused ultrasound-based thermal diffusivity estimation methods

    PubMed Central

    Dillon, Christopher R.; Payne, Allison; Christensen, Douglas A.; Roemer, Robert B.

    2016-01-01

    Purpose: The use of correct tissue thermal diffusivity values is necessary for making accurate thermal modeling predictions during magnetic resonance-guided focused ultrasound (MRgFUS) treatment planning. This study evaluates the accuracy and precision of two non-invasive thermal diffusivity estimation methods, a Gaussian Temperature method published by Cheng and Plewes in 2002 and a Gaussian specific absorption rate (SAR) method published by Dillon et al. in 2012. Materials and Methods: Both methods utilize MRgFUS temperature data obtained during cooling following a short (<25 s) heating pulse. The Gaussian SAR method can also use temperatures obtained during heating. Experiments were performed at low heating levels (ΔT~10°C) in ex vivo pork muscle and in vivo rabbit back muscle. The non-invasive MRgFUS thermal diffusivity estimates were compared with measurements from two standard invasive methods. Results: Both non-invasive methods accurately estimate thermal diffusivity when using MR-temperature cooling data (overall ex vivo error < 6%, in vivo < 12%). Including heating data in the Gaussian SAR method further reduces errors (ex vivo error < 2%, in vivo < 3%). The significantly lower standard deviation values (p<0.03) of the Gaussian SAR method indicate that it has better precision than the Gaussian Temperature method. Conclusions: With repeated sonications, either MR-based method could provide accurate thermal diffusivity values for MRgFUS therapies. Fitting to more data simultaneously likely makes the Gaussian SAR method less susceptible to noise, and using heating data helps it converge more consistently to the FUS fitting parameters and thermal diffusivity. These effects lead to the improved precision of the Gaussian SAR method. PMID:25198092

  3. Bias, precision and accuracy in the estimation of cuticular and respiratory water loss: a case study from a highly variable cockroach, Perisphaeria sp.

    PubMed

    Gray, Emilie M; Chown, Steven L

    2008-01-01

    We compared the precision, bias and accuracy of two techniques that were recently proposed to estimate the contributions of cuticular and respiratory water loss to total water loss in insects. We performed measurements of VCO2 and VH2O in normoxia, hyperoxia and anoxia using flow through respirometry on single individuals of the highly variable cockroach Perisphaeria sp. to compare estimates of cuticular and respiratory water loss (CWL and RWL) obtained by the VH2O-VCO2 y-intercept method with those obtained by the hyperoxic switch method. Precision was determined by assessing the repeatability of values obtained whereas bias was assessed by comparing the methods' results to each other and to values for other species found in the literature. We found that CWL was highly repeatable by both methods (R0.88) and resulted in similar values to measures of CWL determined during the closed-phase of discontinuous gas exchange (DGE). Repeatability of RWL was much lower (R=0.40) and significant only in the case of the hyperoxic method. RWL derived from the hyperoxic method is higher (by 0.044 micromol min(-1)) than that obtained from the method traditionally used for measuring water loss during the closed-phase of DGE, suggesting that in the past RWL may have been underestimated. The very low cuticular permeability of this species (3.88 microg cm(-2) h(-1) Torr(-1)) is reasonable given the seasonally hot and dry habitat where it lives. We also tested the hygric hypothesis proposed to account for the evolution of discontinuous gas exchange cycles and found no effect of respiratory pattern on RWL, although the ratio of mean VH2O to VCO2 was higher for continuous patterns compared with discontinuous ones. PMID:17949739
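
    A minimal sketch of the VH2O-VCO2 y-intercept method described in the record: total water loss is regressed against CO2 release across gas-exchange states, and the intercept at VCO2 = 0 estimates cuticular water loss, the remainder being respiratory. The paired readings below are invented for illustration.

```python
import numpy as np

# Paired flow-through respirometry readings from one individual across
# gas-exchange states (micromol per minute; numbers invented for illustration)
vco2 = np.array([0.05, 0.12, 0.20, 0.31, 0.44])
vh2o = np.array([0.62, 0.70, 0.78, 0.90, 1.03])

# Linear regression of water loss on CO2 release
slope, intercept = np.polyfit(vco2, vh2o, 1)

cwl = intercept                    # cuticular water loss (extrapolated to VCO2 -> 0)
rwl = vh2o.mean() - cwl            # mean respiratory water loss

print(f"CWL ~ {cwl:.2f} umol/min, mean RWL ~ {rwl:.2f} umol/min")
```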

  4. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA were then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients of the new method ranging from 0.743 to 0.945, together with Levene's test showing significantly lower variances for data generated by the new method, verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416
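
    A minimal sketch of the Bland-Altman agreement analysis used in the record to compare DXA BMC with micro-CT BMC: the bias is the mean of the paired differences and the limits of agreement are bias ± 1.96 SD. The paired BMC values below are invented for illustration.

```python
import numpy as np

# Paired BMC measurements (mg) for the same skeletal sites; values invented
bmc_dxa = np.array([18.2, 20.1, 17.5, 21.3, 19.0, 22.4])
bmc_uct = np.array([18.0, 19.7, 17.9, 21.0, 18.6, 22.1])

diff = bmc_dxa - bmc_uct
bias = diff.mean()                       # mean paired difference
loa = 1.96 * diff.std(ddof=1)            # half-width of the limits of agreement
r = np.corrcoef(bmc_dxa, bmc_uct)[0, 1]  # Pearson correlation between methods

print(f"bias = {bias:+.2f} mg, limits of agreement = "
      f"[{bias - loa:+.2f}, {bias + loa:+.2f}] mg, r = {r:.3f}")
```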

  5. Five degree-of-freedom control of an ultra-precision magnetically-suspended linear bearing. Ph.D. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Trumper, David L.; Slocum, A. H.

    1991-01-01

    The authors constructed a high precision linear bearing. A 10.7 kg platen measuring 125 mm by 125 mm by 350 mm is suspended and controlled in five degrees of freedom by seven electromagnets. The position of the platen is measured by five capacitive probes which have nanometer resolution. The suspension acts as a linear bearing, allowing linear travel of 50 mm in the sixth degree of freedom. In the laboratory, this bearing system has demonstrated position stability of 5 nm peak-to-peak. This is believed to be the highest position stability yet demonstrated in a magnetic suspension system. Performance at this level confirms that magnetic suspensions can address motion control requirements at the nanometer level. The experimental effort associated with this linear bearing system is described. Major topics are the development of models for the suspension, implementation of control algorithms, and measurement of the actual bearing performance. Suggestions for the future improvement of the bearing system are given.

  6. Accuracy of Linear Depolarisation Ratios in Clean Air Ranges Measured with POLIS-6 at 355 and 532 NM

    NASA Astrophysics Data System (ADS)

    Freudenthaler, Volker; Seefeldner, Meinhard; Groß, Silke; Wandinger, Ulla

    2016-06-01

    Linear depolarization ratios in clean air ranges were measured with POLIS-6 at 355 and 532 nm. The mean deviation from the theoretical values, including the rotational Raman lines within the filter bandwidths, amounts to 0.0005 at 355 nm and to 0.0012 at 532 nm. The mean uncertainty of the measured linear depolarization ratio of clean air is about 0.0005 at 355 nm and about 0.0006 at 532 nm.

  7. High-Precision Surface Inspection: Uncertainty Evaluation within an Accuracy Range of 15μm with Triangulation-based Laser Line Scanners

    NASA Astrophysics Data System (ADS)

    Dupuis, Jan; Kuhlmann, Heiner

    2014-06-01

    Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a great amount of 3D-points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainties during the measurement process regarding the scanner warm-up, the impact of laser power and exposure time as well as scanner’s reaction to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaption of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.

  8. Radar probing of ionospheric plasmas precisely confirms linear kinetic plasma theory (Hannes Alfvén Medal Lecture)

    NASA Astrophysics Data System (ADS)

    Farley, Donald

    2010-05-01

    In 1958 W. E. Gordon first suggested that huge radars could probe the ionosphere via scattering from independent electrons, even though the radar cross section of a single electron is only 10^-28 m^2. This suggestion quickly led to the construction of two enormous radars in the early 1960s, one near Lima, Peru, and one near Arecibo, Puerto Rico. It soon became apparent that the theory of this scatter was more complicated than originally envisaged by Gordon. Although the new theory was more complicated, it was much richer: by measuring the detailed shape of the Doppler frequency spectrum (or alternatively the signal autocorrelation function, the ACF), a radar researcher could determine many, if not most, of the parameters of interest of the plasma. There is now a substantial network of major radar facilities scattered from the magnetic equator (Peru) to the high arctic latitudes (Svalbard and Resolute Bay), all doing important ionospheric research. The history of what is now called Incoherent Scatter (even though it is not truly incoherent) is fascinating, and I will touch on a few highlights. The sophisticated radar and data processing techniques that have been developed are also impressive. In this talk, however, I want to focus mainly on the details of the theory and on how the radar observations have confirmed the predictions of classical linear plasma kinetic theory to an amazingly high degree of precision, far higher than has any other technique that I am aware of. The theory can be, and has been, developed from two very different points of view. One starts with 'dressed particles,' or Coulomb 'clouds' around ions and electrons moving with a Maxwellian velocity distribution; the second starts by considering all the charged particles to be made up of a spectrum of density plane waves and then invokes a generalized version of the Nyquist Noise Theorem to calculate the thermal amplitudes of the waves. Both approaches give exactly the same results, results that

  9. Rapid screening of drugs of abuse in human urine by high-performance liquid chromatography coupled with high resolution and high mass accuracy hybrid linear ion trap-Orbitrap mass spectrometry.

    PubMed

    Li, Xiaowen; Shen, Baohua; Jiang, Zheng; Huang, Yi; Zhuo, Xianyi

    2013-08-01

    A novel analytical toxicology method has been developed for the analysis of drugs of abuse in human urine by using a high resolution and high mass accuracy hybrid linear ion trap-Orbitrap mass spectrometer (LTQ-Orbitrap-MS). This method allows for the detection of different drugs of abuse, including amphetamines, cocaine, opiate alkaloids, cannabinoids, hallucinogens and their metabolites. After solid-phase extraction with Oasis HLB cartridges, spiked urine samples were analysed by HPLC/LTQ-Orbitrap-MS using an electrospray interface in positive ionisation mode, with resolving power of 30,000 full width at half maximum (FWHM). Gradient elution from a Hypersil Gold PFP column (50 mm × 2.1 mm) resolved 65 target compounds and 3 internal standards in a total chromatographic run time of 20 min. Validation of this method consisted of confirmation of identity, selectivity, linearity, limit of detection (LOD), lowest limits of quantification (LLOQ), accuracy, precision, extraction recovery and matrix effect. The regression coefficients (r(2)) for the calibration curves (LLOQ to 100 ng/mL) in the study were ≥0.99. The LODs for the 65 validated compounds were better than 5 ng/mL except for 4 compounds. The relative standard deviation (RSD), which was used to estimate repeatability at three concentrations, was always less than 15%. The extraction recoveries and matrix effects were above 50% and 70%, respectively. Mass accuracy was always better than 2 ppm, corresponding to a maximum mass error of 0.8 millimass units (mmu). The accurate masses of characteristic fragments were obtained by collisional experiments for a more reliable identification of the analytes. Automated data analysis and reporting were performed using ToxID software with an exact mass database. This procedure was then successfully applied to analyse drugs of abuse in a real urine sample from a subject who was assumed to be a drug addict. PMID:23838299

  10. Application of a volume holographic grating in a CaF2 crystal for measuring linear displacements with nanoscale accuracy

    NASA Astrophysics Data System (ADS)

    Shcheulin, A. S.; Angervaks, A. E.; Kupchikov, A. K.; Verkhovskii, E. B.; Ryskin, A. I.

    2014-12-01

    A holographic method for measuring linear displacements based on the use of a highly stable volume scale hologram recorded in an additively colored calcium fluoride crystal with photochromic color centers is proposed and experimentally verified. The essence of this method lies in measuring and analyzing harmonic signals formed during linear displacement of the crystal with a volume hologram in an external interference field. A physical model of the formation of harmonic signals in photodetectors when measuring displacements is considered, and a mathematical method for calculating linear displacements by plotting a Lissajous figure is substantiated. A laboratory breadboard of a device for measuring linear displacements in a range of 10 mm, limited by the aperture of the crystal with a recorded 8.7-mm-thick hologram, is designed. When using a scale hologram with a period of 2.18 μm and a 632.8-nm He-Ne laser for reading this hologram, the error in measuring displacements by this method is 9 nm at a resolution of 3 nm.
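
    The record describes recovering displacement from harmonic detector signals via a Lissajous construction. The following is a minimal sketch under the common assumption of two signals in quadrature with period equal to the 2.18 μm scale pitch: the phase angle traced on the Lissajous figure is unwrapped and scaled to displacement. This is a generic fringe-interpolation scheme, not necessarily the authors' exact algorithm, and the test data are synthetic.

```python
import numpy as np

PERIOD_UM = 2.18   # scale-hologram period quoted in the record

def displacement_um(sig_i, sig_q):
    """Displacement from two quadrature signals via the Lissajous phase angle."""
    phase = np.unwrap(np.arctan2(sig_q, sig_i))     # angle traced on the Lissajous figure
    return phase / (2.0 * np.pi) * PERIOD_UM

# Synthetic test: a 5 um travel sampled at 1000 points (invented data)
true_x = np.linspace(0.0, 5.0, 1000)
i_sig = np.cos(2 * np.pi * true_x / PERIOD_UM)
q_sig = np.sin(2 * np.pi * true_x / PERIOD_UM)

x_est = displacement_um(i_sig, q_sig)
print(f"max reconstruction error: {np.abs(x_est - true_x).max():.2e} um")
```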

  11. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
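
    A minimal sketch of the root-mean-square percent difference statistic the record uses to compare OSL and ion chamber CTDI100 values; the paired dose values below are invented for illustration, with the ion chamber taken as the reference.

```python
import numpy as np

# Paired CTDI100 values in mGy (invented for illustration);
# the ion chamber is taken as the reference measurement.
ctdi_osl = np.array([12.1, 23.8, 7.9, 15.2, 30.5])
ctdi_ion = np.array([13.4, 25.1, 8.6, 16.0, 33.0])

pct_diff = 100.0 * (ctdi_osl - ctdi_ion) / ctdi_ion   # signed percent differences
rms_pct = np.sqrt(np.mean(pct_diff ** 2))             # root-mean-square percent difference
print(f"RMS percent difference = {rms_pct:.1f}%")
```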

  12. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI{sub 100} measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI{sub 100}. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4%{+-} 0.6%, range = 0.6%-2.7% for OSL and 0.08%{+-} 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI{sub 100} values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI{sub 100} relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI{sub 100} with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI{sub 100} values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  13. Accuracy and precision of porosity estimates based on velocity inversion of surface ground-penetrating radar data: A controlled experiment at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Bradford, J.; Clement, W.

    2006-12-01

    Although rarely collected, ground penetrating radar (GPR) data acquired in continuous multi-offset geometries can substantially improve our understanding of the subsurface compared to conventional single offset surveys. This improvement arises because multi-offset data enable full use of the information that the GPR signal can carry. The added information allows us to maximize the material property information extracted from a GPR survey. Of the array of potential multi-offset GPR measurements, traveltime vs offset information enables laterally and vertically continuous electromagnetic (EM) velocity measurements. In turn, the EM velocities provide estimates of water content via petrophysical relationships such as the CRIM or Topp's equations. In fully saturated media the water content is a direct measure of bulk porosity. The Boise Hydrogeophysical Research Site (BHRS) is an experimental wellfield located in a shallow alluvial aquifer near Boise, Idaho. In July 2006 we conducted a controlled 3D multi-offset GPR experiment at the BHRS designed to test the accuracy of state-of-the-art velocity analysis methodologies. We acquired continuous multi-offset GPR data over an approximately 20 x 30 m 3D area. The GPR system was a Sensors and Software pulseEkko Pro multichannel system with 100 MHz antennas and was configured with 4 receivers and a single transmitter. Data were acquired in off-end geometry for a total of 16 offsets with a 1 m offset interval and 1 m near offset. The data were acquired on a 1 m x 1 m grid in four passes, each consisting of a 3 m range of equally spaced offsets. The survey encompassed 13 wells finished to the ~20 m depth of the unconfined aquifer. We established velocity control by acquiring vertical radar profiles (VRPs) in all 13 wells. Preliminary velocity measurements using an established method of reflection tomography were within about 1 percent of local 1D velocity distributions determined from the VRPs. Vertical velocity precision from the
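
    The record notes that EM velocities are converted to water content, and hence to porosity in saturated media, through petrophysical relations such as CRIM. A minimal sketch of that conversion step follows, assuming full saturation and illustrative dielectric constants for water and the mineral grains (these values, and the example velocity, are assumptions, not taken from the record).

```python
import numpy as np

C = 0.2998          # speed of light in m/ns

def porosity_crim(v_m_per_ns, kappa_water=80.0, kappa_grain=5.0):
    """Porosity from EM velocity via the CRIM equation, assuming a fully
    saturated two-phase (water + mineral grain) medium:
    sqrt(kappa_bulk) = phi*sqrt(kappa_water) + (1 - phi)*sqrt(kappa_grain)."""
    kappa_bulk = (C / v_m_per_ns) ** 2
    return ((np.sqrt(kappa_bulk) - np.sqrt(kappa_grain)) /
            (np.sqrt(kappa_water) - np.sqrt(kappa_grain)))

# Example: a GPR velocity typical of saturated sand and gravel (assumed value)
print(f"porosity ~ {porosity_crim(0.08):.2f}")
```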

  14. Linear signal noise summer accurately determines and controls S/N ratio

    NASA Technical Reports Server (NTRS)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
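
    The summer described here mixes signal and noise at accurately known power ratios. A minimal software analogue (not the hardware described in the record) that scales a noise record so that the linear sum has a prescribed S/N ratio in dB:

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that signal power / noise power equals `snr_db`,
    then return the linear sum of signal and scaled noise."""
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_p_noise / p_noise)
    return signal + scaled_noise

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 10_000)
sig = np.sin(2 * np.pi * 50 * t)
mixed = mix_at_snr(sig, rng.standard_normal(t.size), snr_db=10.0)

# Verify the achieved ratio from the residual noise component
achieved = 10 * np.log10(np.mean(sig**2) / np.mean((mixed - sig)**2))
print(f"achieved S/N ~ {achieved:.2f} dB")
```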

  15. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA).

    PubMed

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-04-01

    Background and purpose - We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods - Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results - Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation - CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  16. Development and validation of an automated and marker-free CT-based spatial analysis method (CTSA) for assessment of femoral hip implant migration: In vitro accuracy and precision comparable to that of radiostereometric analysis (RSA)

    PubMed Central

    Scheerlinck, Thierry; Polfliet, Mathias; Deklerck, Rudi; Van Gompel, Gert; Buls, Nico; Vandemeulebroucke, Jef

    2016-01-01

    Background and purpose — We developed a marker-free automated CT-based spatial analysis (CTSA) method to detect stem-bone migration in consecutive CT datasets and assessed the accuracy and precision in vitro. Our aim was to demonstrate that in vitro accuracy and precision of CTSA is comparable to that of radiostereometric analysis (RSA). Material and methods — Stem and bone were segmented in 2 CT datasets and both were registered pairwise. The resulting rigid transformations were compared and transferred to an anatomically sound coordinate system, taking the stem as reference. This resulted in 3 translation parameters and 3 rotation parameters describing the relative amount of stem-bone displacement, and it allowed calculation of the point of maximal stem migration. Accuracy was evaluated in 39 comparisons by imposing known stem migration on a stem-bone model. Precision was estimated in 20 comparisons based on a zero-migration model, and in 5 patients without stem loosening. Results — Limits of the 95% tolerance intervals (TIs) for accuracy did not exceed 0.28 mm for translations and 0.20° for rotations (largest standard deviation of the signed error (SDSE): 0.081 mm and 0.057°). In vitro, limits of the 95% TI for precision in a clinically relevant setting (8 comparisons) were below 0.09 mm and 0.14° (largest SDSE: 0.012 mm and 0.020°). In patients, the precision was lower, but acceptable, and dependent on CT scan resolution. Interpretation — CTSA allows detection of stem-bone migration with an accuracy and precision comparable to that of RSA. It could be valuable for evaluation of subtle stem loosening in clinical practice. PMID:26634843

  17. The 1998-2000 SHADOZ (Southern Hemisphere ADditional OZonesondes) Tropical Ozone Climatology: Ozonesonde Precision, Accuracy and Station-to-Station Variability

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, Anne M.; McPeters, R. D.; Oltmans, S. J.; Schmidlin, F. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    As part of the SAFARI-2000 campaign, additional launches of ozonesondes were made at Irene, South Africa and at Lusaka, Zambia. These represent campaign augmentations to the SHADOZ database described in this paper. This network of 10 southern hemisphere tropical and subtropical stations, designated the Southern Hemisphere ADditional OZonesondes (SHADOZ) project and established from operational sites, provided over 1000 profiles from ozonesondes and radiosondes during the period 1998-2000. (Since that time, two more stations, one in southern Africa, have joined SHADOZ). Archived data are available at <http://code916.gsfc.nasa.gov/Data-services/shadoz>. Uncertainties and accuracies within the SHADOZ ozone data set are evaluated by analyzing: (1) imprecisions in stratospheric ozone profiles and in methods of extrapolating ozone above balloon burst; (2) comparisons of column-integrated total ozone from sondes with total ozone from the Earth-Probe/TOMS (Total Ozone Mapping Spectrometer) satellite and ground-based instruments; (3) possible biases from station to station due to variations in ozonesonde characteristics. The key results are: (1) Ozonesonde precision is 5%; (2) Integrated total ozone column amounts from the sondes are in good agreement (2-10%) with independent measurements from ground-based instruments at five SHADOZ sites and with overpass measurements from the TOMS satellite (version 7 data). (3) Systematic variations in TOMS-sonde offsets and in ground-based-sonde offsets from station to station reflect biases in sonde technique as well as in satellite retrieval. Discrepancies are present in both stratospheric and tropospheric ozone. (4) There is evidence for a zonal wave-one pattern in total and tropospheric ozone, but not in stratospheric ozone.

  18. Analysis of the accuracy and precision of the McMaster method in detection of the eggs of Toxocara and Trichuris species (Nematoda) in dog faeces.

    PubMed

    Kochanowski, Maciej; Dabrowska, Joanna; Karamon, Jacek; Cencek, Tomasz; Osiński, Zbigniew

    2013-07-01

    The aim of this study was to determine the accuracy and precision of the McMaster method with Raynaud's modification in the detection of the eggs of the nematodes Toxocara canis (Werner, 1782) and Trichuris ovis (Abildgaard, 1795) in faeces of dogs. Four variants of the McMaster method were used for counting: in one grid, two grids, the whole McMaster chamber and flotation in the tube. One hundred sixty samples were prepared from dog faeces (20 repetitions for each egg quantity) containing 15, 25, 50, 100, 150, 200, 250 and 300 eggs of T. canis and T. ovis in 1 g of faeces. To compare the influence of the kind of faeces on the results, samples of dog faeces were enriched at the same levels with the eggs of another nematode, Ascaris suum Goeze, 1782. In addition, 160 samples of pig faeces were prepared and enriched only with A. suum eggs in the same way. The highest limit of detection (the lowest level of eggs that were detected in at least 50% of repetitions) in all McMaster chamber variants was obtained for T. canis eggs (25-250 eggs/g faeces). In the variant with flotation in the tube, the highest limit of detection was obtained for T. ovis eggs (100 eggs/g). The best results for the limit of detection and sensitivity, and the lowest coefficients of variation, were obtained with the use of the whole McMaster chamber variant. There was no significant impact of properties of faeces on the obtained results. Multiplication factors for the whole chamber were calculated on the basis of the transformed equation of the regression line, illustrating the relationship between the number of detected eggs and that of the eggs added to the sample. Multiplication factors calculated for T. canis and T. ovis eggs were higher than those expected using the McMaster method with Raynaud's modification. PMID:23951934
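
    A minimal sketch of how a multiplication factor can be derived from the regression of detected eggs on added eggs, as described in the record: the regression line is fitted, and its transformed (inverted) equation converts a chamber count into an eggs-per-gram estimate. The counts below are invented for illustration and are not the study's data.

```python
import numpy as np

# Eggs per gram added to spiked samples vs. mean eggs detected per whole
# chamber (numbers invented for illustration)
added = np.array([15, 25, 50, 100, 150, 200, 250, 300], dtype=float)
detected = np.array([1.2, 2.4, 5.1, 10.3, 15.8, 20.9, 26.0, 31.5])

slope, intercept = np.polyfit(added, detected, 1)
multiplication_factor = 1.0 / slope     # eggs/g per egg counted in the chamber

def eggs_per_gram(chamber_count):
    """Estimate eggs per gram of faeces from a whole-chamber count
    using the inverted regression equation."""
    return (chamber_count - intercept) / slope

print(f"multiplication factor ~ {multiplication_factor:.1f}")
print(f"count of 12 eggs -> ~{eggs_per_gram(12):.0f} eggs/g")
```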

  19. Application of U-Pb ID-TIMS dating to the end-Triassic global crisis: testing the limits on precision and accuracy in a multidisciplinary whodunnit (Invited)

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Schaltegger, U.; Guex, J.; Bartolini, A.

    2010-12-01

    The ca. 201.4 Ma Triassic-Jurassic boundary is characterized by one of the most devastating mass-extinctions in Earth history, subsequent biologic radiation, rapid carbon cycle disturbances and enormous flood basalt volcanism (Central Atlantic Magmatic Province - CAMP). Considerable uncertainty remains regarding the temporal and causal relationship between these events though this link is important for understanding global environmental change under extreme stresses. We present ID-TIMS U-Pb zircon geochronology on volcanic ash beds from two marine sections that span the Triassic-Jurassic boundary and from the CAMP in North America. To compare the timing of the extinction with the onset of the CAMP, we assess the precision and accuracy of ID-TIMS U-Pb zircon geochronology by exploring random and systematic uncertainties, reproducibility, open-system behavior, and pre-eruptive crystallization of zircon. We find that U-Pb ID-TIMS dates on single zircons can be internally and externally reproducible at 0.05% of the age, consistent with recent experiments coordinated through the EARTHTIME network. Increased precision combined with methods alleviating Pb-loss in zircon reveals that these ash beds contain zircon that crystallized between 10^5 and 10^6 years prior to eruption. Mineral dates older than eruption ages are prone to affect all geochronologic methods and therefore new tools exploring this form of “geologic uncertainty” will lead to better time constraints for ash bed deposition. In an effort to understand zircon dates within the framework of a magmatic system, we analyzed zircon trace elements by solution ICPMS for the same volume of zircon dated by ID-TIMS. In one example we argue that zircon trace element patterns as a function of time result from a mix of xeno-, ante-, and autocrystic zircons in the ash bed, and approximate eruption age with the youngest zircon date. In a contrasting example from a suite of Cretaceous andesites, zircon trace elements

  20. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
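    The blank assessment described here lends itself to a simple constant-contamination mass balance. The sketch below is a simplified illustration, not the authors' full uncertainty propagation: it assumes the modern blank carries a fraction modern of 1.0 and the fossil blank 0.0, and combines the quoted blank masses with a hypothetical sample mass and measured fraction modern.

    ```python
    # Simplified constant-contamination correction for a 14C measurement.
    # Assumes the modern blank has fraction modern F = 1.0 and the fossil blank F = 0.0;
    # the sample mass and measured fraction modern below are hypothetical.
    m_modern_blank = 0.8    # ug C, modern extraneous carbon (value quoted in the abstract)
    m_fossil_blank = 0.67   # ug C, fossil extraneous carbon (value quoted in the abstract)

    m_measured = 50.0       # ug C actually graphitized (hypothetical)
    F_measured = 0.62       # fraction modern measured by AMS (hypothetical)

    m_sample = m_measured - m_modern_blank - m_fossil_blank
    F_sample = (F_measured * m_measured - 1.0 * m_modern_blank) / m_sample
    print(f"blank-corrected fraction modern = {F_sample:.3f}")
    ```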

  1. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  2. Improving power and accuracy of genome-wide association studies via a multi-locus mixed linear model methodology.

    PubMed

    Wang, Shi-Bo; Feng, Jian-Ying; Ren, Wen-Long; Huang, Bo; Zhou, Ling; Wen, Yang-Jun; Zhang, Jin; Dunwell, Jim M; Xu, Shizhong; Zhang, Yuan-Ming

    2016-01-01

    Genome-wide association studies (GWAS) have been widely used in genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which often is too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP-effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering time related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS. PMID:26787347
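    To see why a standard Bonferroni correction becomes conservative as the number of markers grows (the motivation for the RMLM/MRMLM approach), a toy single-marker scan under the null can be simulated. The sketch below is a generic illustration and does not implement the mixed-model methods of the paper; the marker count, sample size and allele frequency are arbitrary.

    ```python
    # Toy illustration of the multiple-testing burden in single-marker GWAS.
    # This is NOT the RMLM/MRMLM method of the paper, just a null simulation.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_individuals, n_snps = 200, 5000
    genotypes = rng.binomial(2, 0.3, size=(n_individuals, n_snps)).astype(float)
    phenotype = rng.normal(size=n_individuals)          # no true associations

    # Per-SNP p-values from simple marker-phenotype correlation tests.
    pvals = np.array([stats.pearsonr(genotypes[:, j], phenotype)[1]
                      for j in range(n_snps)])

    alpha = 0.05
    bonferroni_threshold = alpha / n_snps
    print(f"Bonferroni threshold: {bonferroni_threshold:.2e}")
    print(f"hits at nominal 0.05: {np.sum(pvals < alpha)}")
    print(f"hits after Bonferroni: {np.sum(pvals < bonferroni_threshold)}")
    ```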

  3. Improving power and accuracy of genome-wide association studies via a multi-locus mixed linear model methodology

    PubMed Central

    Wang, Shi-Bo; Feng, Jian-Ying; Ren, Wen-Long; Huang, Bo; Zhou, Ling; Wen, Yang-Jun; Zhang, Jin; Dunwell, Jim M.; Xu, Shizhong; Zhang, Yuan-Ming

    2016-01-01

    Genome-wide association studies (GWAS) have been widely used in genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which often is too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP-effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering time related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS. PMID:26787347

  4. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  5. Experimental investigation on focusing characteristics of a He-Ne laser using circular Fresnel zone plate for high-precision alignment of linear accelerators

    SciTech Connect

    Suwada, Tsuyoshi; Satoh, Masanori; Telada, Souichi; Minoshima, Kaoru

    2012-05-15

    We experimentally investigate the focusing characteristics of a He-Ne laser at the focal region for the high-precision alignment of long-distance linear accelerators using a circular Fresnel zone plate. The laser wave passing through the Fresnel zone plate having a focal length of 66.7 m propagates for a 268-m-long distance at atmospheric pressure. A new laser-based alignment system using Fresnel zone plates as the alignment targets is discussed. The transverse displacement of the focused spot of the laser is measured as a function of the displacement of the target by a detector installed at the focal point. Systematic studies on the focusing characteristics and alignment precision have been successfully conducted in this experiment. The experimental results are in good agreement with theoretical calculations, and the alignment precision of the target is determined to be less than ±30 μm. In this study, we perform a detailed experimental investigation on the laser propagation and focusing characteristics using the circular Fresnel zone plate at the focal region along with theoretical calculations.
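    For readers unfamiliar with zone-plate geometry, the zone radii follow the standard textbook relation r_n^2 = n·λ·f + n²·λ²/4. The sketch below evaluates that generic formula for the He-Ne wavelength and the 66.7 m focal length quoted above; the resulting millimetre-scale radii are a consistency check, not values taken from the paper.

    ```python
    # Zone radii of a circular Fresnel zone plate (standard textbook relation),
    # evaluated for the He-Ne wavelength and the 66.7 m focal length quoted above.
    import math

    wavelength = 632.8e-9   # m, He-Ne laser
    focal_length = 66.7     # m

    for n in range(1, 6):
        r_n = math.sqrt(n * wavelength * focal_length + (n * wavelength / 2.0) ** 2)
        print(f"zone {n}: r = {r_n * 1e3:.2f} mm")
    ```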

  6. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practicallyrealizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 +/− 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 +/− 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 +/− 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many

  7. Diagnostic Accuracy of Ultrasound B scan using 10 MHz linear probe in ocular trauma; results from a high burden country

    PubMed Central

    Shazlee, Muhammad Kashif; Ali, Muhammad; SaadAhmed, Muhammad; Hussain, Ammad; Hameed, Kamran; Lutfi, Irfan Amjad; Khan, Muhammad Tahir

    2016-01-01

    Objective: To study the diagnostic accuracy of Ultrasound B scan using a 10 MHz linear probe in ocular trauma. Methods: A total of 61 patients with 63 ocular injuries were assessed from July 2013 to January 2014. All patients were referred to the department of Radiology from the Emergency Room since adequate clinical assessment of the fundus was impossible because of the presence of opaque ocular media. Based on radiological diagnosis, the patients were provided treatment (surgical or medical). Clinical diagnosis was confirmed during surgical procedures or clinical follow-up. Results: A total of 63 ocular injuries were examined in 61 patients. The overall sensitivity was 91.5%, specificity was 98.87%, positive predictive value was 87.62%, and negative predictive value was 99%. Conclusion: Ultrasound B-scan is a sensitive, non-invasive and rapid way of assessing intraocular damage caused by blunt or penetrating eye injuries. PMID:27182245
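    The accuracy figures quoted above are the standard 2×2 contingency-table quantities. The sketch below recomputes sensitivity, specificity, PPV and NPV from a hypothetical confusion matrix; the study's raw true/false counts are not given in the abstract, so the numbers here are illustrative only.

    ```python
    # Standard diagnostic-accuracy metrics from a 2x2 confusion matrix.
    # Counts are hypothetical; the abstract reports only the derived percentages.
    tp, fn = 90, 8     # true positives, false negatives
    tn, fp = 45, 2     # true negatives, false positives

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)

    print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
    print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
    ```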

  8. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
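    The power analysis described above can be mimicked in miniature with a Monte Carlo loop: simulate noisy abundance indices with an imposed decline, test each replicate for a trend, and report the fraction of replicates in which the decline is detected. The sketch below is a generic illustration with invented noise levels and a simple log-linear trend test, not a reconstruction of the bull trout sampling design.

    ```python
    # Generic Monte Carlo power calculation for detecting a 25% decline in an
    # abundance index over a fixed monitoring period (parameters are invented).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_years, total_decline, cv, n_sims, alpha = 15, 0.25, 0.30, 2000, 0.05

    years = np.arange(n_years)
    trend = (1.0 - total_decline) ** (years / (n_years - 1))   # geometric decline to 75%

    detections = 0
    for _ in range(n_sims):
        index = 100.0 * trend * rng.lognormal(mean=0.0, sigma=cv, size=n_years)
        slope, _, _, p_value, _ = stats.linregress(years, np.log(index))
        if slope < 0 and p_value < alpha:
            detections += 1

    print(f"estimated power = {detections / n_sims:.2f}")
    ```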

  9. Propagation and stability characteristics of a 500-m-long laser-based fiducial line for high-precision alignment of long-distance linear accelerators

    SciTech Connect

    Suwada, Tsuyoshi; Satoh, Masanori; Telada, Souichi; Minoshima, Kaoru

    2013-09-15

    A laser-based alignment system with a He-Ne laser has been newly developed in order to precisely align accelerator units at the KEKB injector linac. The laser beam was first implemented as a 500-m-long fiducial straight line for alignment measurements. We experimentally investigated the propagation and stability characteristics of the laser beam passing through laser pipes in vacuum. The pointing stability at the last fiducial point was successfully maintained at the level of ±40 μm (one standard deviation) in transverse displacement by applying feedback control. This pointing stability corresponds to an angle of ±0.08 μrad. This report contains a detailed description of the experimental investigation of the propagation and stability characteristics of the laser beam in the laser-based alignment system for long-distance linear accelerators.
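    The quoted angular stability follows directly from the transverse displacement over the 500 m baseline; a quick check of that arithmetic:

    ```latex
    \theta \simeq \frac{\Delta x}{L} = \frac{\pm 40\ \mu\mathrm{m}}{500\ \mathrm{m}}
           = \pm 8\times10^{-8}\ \mathrm{rad} = \pm 0.08\ \mu\mathrm{rad}
    ```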

  10. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    SciTech Connect

    Zuehlsdorff, T. J.; Payne, M. C.; Hine, N. D. M.; Haynes, P. D.

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
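    The abstract emphasizes a memory-efficient preconditioned conjugate-gradient (PCG) solver and the benefit of the preconditioner for convergence. The sketch below is only a generic illustration of PCG on a sparse symmetric positive-definite system with a Jacobi (diagonal) preconditioner, using SciPy; it is not the linear-scaling TDDFT eigensolver described in the paper, and the test matrix is invented.

    ```python
    # Generic preconditioned conjugate-gradient example (Jacobi preconditioner);
    # illustrative only -- not the linear-scaling TDDFT eigensolver of the paper.
    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    n = 2000
    # Sparse SPD test matrix: tridiagonal with a widely varying diagonal,
    # so that a Jacobi (diagonal) preconditioner actually helps.
    diag = np.logspace(0.5, 4.0, n)
    A = sp.diags([-np.ones(n - 1), diag, -np.ones(n - 1)],
                 offsets=[-1, 0, 1], format="csr")
    b = np.ones(n)

    # Jacobi preconditioner: apply the inverse of diag(A).
    M = spla.LinearOperator((n, n), matvec=lambda x: x / diag)

    iters = {"plain": 0, "jacobi": 0}
    def counter(key):
        def cb(_):
            iters[key] += 1
        return cb

    x0, info0 = spla.cg(A, b, callback=counter("plain"))
    x1, info1 = spla.cg(A, b, M=M, callback=counter("jacobi"))

    print("CG iterations without / with preconditioner:",
          iters["plain"], "/", iters["jacobi"])
    ```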

  11. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals

    NASA Astrophysics Data System (ADS)

    Zuehlsdorff, T. J.; Hine, N. D. M.; Payne, M. C.; Haynes, P. D.

    2015-11-01

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.

  12. Linear-scaling time-dependent density-functional theory beyond the Tamm-Dancoff approximation: Obtaining efficiency and accuracy with in situ optimised local orbitals.

    PubMed

    Zuehlsdorff, T J; Hine, N D M; Payne, M C; Haynes, P D

    2015-11-28

    We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment. PMID:26627950

  13. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  14. Precision Fabrication of a Large-Area Sinusoidal Surface Using a Fast-Tool-Servo Technique ─Improvement of Local Fabrication Accuracy

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Tano, Makoto; Araki, Takeshi; Kiyono, Satoshi

    This paper describes a diamond turning fabrication system for a sinusoidal grid surface. The wavelength and amplitude of the sinusoidal wave in each direction are 100µm and 100nm, respectively. The fabrication system, which is based on a fast-tool-servo (FTS), has the ability to generate the angle grid surface over an area of φ 150mm. This paper focuses on the improvement of the local fabrication accuracy. The areas considered are each approximately 1 × 1mm, and can be imaged by an interference microscope. Specific fabrication errors of the manufacturing process, caused by the round nose geometry of the diamond cutting tool and the data digitization, are successfully identified by Discrete Fourier Transform of the microscope images. Compensation processes are carried out to reduce the errors. As a result, the fabrication errors in local areas of the angle grid surface are reduced by 1/10.
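    The use of a discrete Fourier transform to pick out fabrication errors, as described above, can be illustrated on a synthetic patch of the nominal surface. The sketch below generates an ideal 1 mm profile of a sinusoidal grid with 100 µm wavelength and 100 nm amplitude and confirms that its Fourier transform peaks at the expected spatial frequency of 0.01 cycles/µm; it is a toy model, not the authors' compensation procedure, and the sampling interval is arbitrary.

    ```python
    # Synthetic 1 mm profile of a sinusoidal grid surface (wavelength 100 um,
    # amplitude 100 nm) and its dominant spatial frequency from an FFT.
    import numpy as np

    pixel = 2.0                       # um per sample
    n = 500                           # 500 * 2 um = 1 mm profile length
    x = np.arange(n) * pixel

    wavelength, amplitude = 100.0, 0.1          # um, um (0.1 um = 100 nm)
    z = amplitude * np.sin(2 * np.pi * x / wavelength)

    spectrum = np.abs(np.fft.rfft(z))
    freqs = np.fft.rfftfreq(n, d=pixel)          # cycles per um
    peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC term
    print(f"dominant spatial frequency = {peak:.4f} cycles/um "
          f"(expected {1 / wavelength:.4f})")
    ```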

  15. Preliminary assessment of the accuracy and precision of TOPEX/POSEIDON altimeter data with respect to the large-scale ocean circulation

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1994-01-01

    TOPEX/POSEIDON sea surface height measurements are examined for quantitative consistency with known elements of the oceanic general circulation and its variability. Project-provided corrections were accepted but are tested as part of the overall results. The ocean was treated as static over each 10-day repeat cycle and maps constructed of the absolute sea surface topography from simple averages in 2 deg x 2 deg bins. A hybrid geoid model formed from a combination of the recent Joint Gravity Model-2 and the project-provided Ohio State University geoid was used to estimate the absolute topography in each 10-day period. Results are examined in terms of the annual average, seasonal average, seasonal variations, and variations near the repeat period. Conclusions are as follows: the orbit error is now difficult to observe, having been reduced to a level at or below the level of other error sources; the geoid dominates the error budget of the estimates of the absolute topography; the estimated seasonal cycle is consistent with prior estimates; shorter-period variability is dominated on the largest scales by an oscillation near 50 days in spherical harmonics Y_1^m(theta, lambda) with an amplitude near 10 cm, close to the simplest alias of the M_2 tide. This spectral peak and others visible in the periodograms support the hypothesis that the largest remaining time-dependent errors lie in the tidal models. Though discrepancies attributed to the geoid are within the formal uncertainties of the geoid estimates, their removal is urgent for circulation studies. Current gross accuracy of the TOPEX/POSEIDON mission is in the range of 5-10 cm, distributed over a broad band of frequencies and wavenumbers. In finite bands, accuracies approach the 1-cm level, and expected improvements arising from extended mission duration should reduce these numbers by nearly an order of magnitude.

  16. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  17. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  18. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station to station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced by 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  19. The Magsat precision vector magnetometer

    NASA Technical Reports Server (NTRS)

    Acuna, M. H.

    1980-01-01

    This paper examines the Magsat precision vector magnetometer which is designed to measure projections of the ambient field in three orthogonal directions. The system contains a highly stable and linear triaxial fluxgate magnetometer with a dynamic range of + or - 2000 nT (1 nT = 10 to the -9 weber per sq m). The magnetometer electronics, analog-to-digital converter, and digitally controlled current sources are implemented with redundant designs to avoid a loss of data in case of failures. Measurements are carried out with an accuracy of + or - 1 part in 64,000 in magnitude and 5 arcsec in orientation (1 arcsec = 0.00028 deg).
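    The quoted magnitude accuracy can be put in field units under one reading of the figure: taking "±1 part in 64,000" against the full ±2000 nT span (a 4000 nT range, consistent with roughly 16-bit digitization; the abstract does not state the reference span explicitly) gives

    ```latex
    \frac{4000\ \mathrm{nT}}{64\,000} \approx 0.0625\ \mathrm{nT}
    ```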

  20. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentrations estimates while retaining the greatest assay sensitivity.

  1. High-precision broad-band linear polarimetry of early-type binaries. I. Discovery of variable, phase-locked polarization in HD 48099

    NASA Astrophysics Data System (ADS)

    Berdyugin, A.; Piirola, V.; Sadegi, S.; Tsygankov, S.; Sakanoi, T.; Kagitani, M.; Yoneda, M.; Okano, S.; Poutanen, J.

    2016-06-01

    Aims: We investigate the structure of the O-type binary system HD 48099 by measuring linear polarization that arises due to the light scattering process. High-precision polarimetry provides independent estimates of the orbital parameters and gives important information on the properties of the system. Methods: Linear polarization measurements of HD 48099 in the B, V and R passbands with the high-precision Dipol-2 polarimeter have been carried out. The data have been obtained with the 60 cm KVA (Observatory Roque de los Muchachos, La Palma, Spain) and T60 (Haleakala, Hawaii, USA) remotely controlled telescopes during 31 observing nights. Polarimetry in the optical wavelengths has been complemented by observations in the X-rays with the Swift space observatory. Results: Optical polarimetry revealed small intrinsic polarization in HD 48099 with ~0.1% peak-to-peak variation over the orbital period of 3.08 d. The variability pattern is typical for binary systems, showing a strong second harmonic of the orbital period. We apply our model code for the electron scattering in the circumstellar matter to put constraints on the system geometry. A good model fit is obtained for scattering of light on a cloud produced by the colliding stellar winds. The geometry of the cloud, with a broad distribution of scattering particles away from the orbital plane, helps in constraining the (low) orbital inclination. We derive from the polarization data the inclination i = 17° ± 2° and the longitude of the ascending node Ω = 82° ± 1° of the binary orbit. The available X-ray data provide additional evidence for the existence of the colliding stellar winds in the system. Another possible source of the polarized light could be scattering from the stellar photospheres. The models with circumstellar envelopes, or matter confined to the orbital plane, do not provide good constraints on the low inclination, better than i ≤ 27°, as is already suggested by the absence of eclipses. The

  2. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  3. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures of Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning methods like Neural Networks, Support Vector Machines and Random Forests can improve accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptrons Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, Area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (Median (Me) = 0.76) and area under the ROC (Me = 0.90). However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forest ranked second in overall accuracy (Me = 0.73) with high area under the ROC (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed
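    A stripped-down version of the comparison described above, k-fold cross-validation of a traditional classifier against a data-mining classifier scored on accuracy, sensitivity and specificity, can be sketched with scikit-learn. The data below are synthetic; the study's neuropsychological test scores are not reproduced, and the chosen hyperparameters are arbitrary.

    ```python
    # Minimal 5-fold cross-validated comparison of LDA vs. SVM on synthetic data,
    # scored on overall accuracy, sensitivity and specificity (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import confusion_matrix

    X, y = make_classification(n_samples=400, n_features=10, n_informative=6,
                               random_state=0)

    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        y_pred = cross_val_predict(clf, X, y, cv=5)
        tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"{name}: accuracy={accuracy:.2f}, "
              f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
    ```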

  4. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect in data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of the whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of the query results, called relative accuracy. However, as far as we know, neither suitable measures nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric to measure the accuracy using statistics. We apply the method to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  5. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect in data quality. Thus, one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of the whole data set may be low while that of a useful part is high, it is also necessary to evaluate the accuracy of the query results, called relative accuracy. However, as far as we know, neither suitable measures nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric to measure the accuracy using statistics. We apply the method to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752

  6. The accuracy of linear measurements of maxillary and mandibular edentulous sites in cone-beam computed tomography images with different fields of view and voxel sizes under simulated clinical conditions

    PubMed Central

    Ramesh, Aruna; Pagni, Sarah

    2016-01-01

    Purpose The objective of this study was to investigate the effect of varying resolutions of cone-beam computed tomography images on the accuracy of linear measurements of edentulous areas in human cadaver heads. Intact cadaver heads were used to simulate a clinical situation. Materials and Methods Fiduciary markers were placed in the edentulous areas of 4 intact embalmed cadaver heads. The heads were scanned with two different CBCT units using a large field of view (13 cm×16 cm) and small field of view (5 cm×8 cm) at varying voxel sizes (0.3 mm, 0.2 mm, and 0.16 mm). The ground truth was established with digital caliper measurements. The imaging measurements were then compared with caliper measurements to determine accuracy. Results The Wilcoxon signed rank test revealed no statistically significant difference between the medians of the physical measurements obtained with calipers and the medians of the CBCT measurements. A comparison of accuracy among the different imaging protocols revealed no significant differences as determined by the Friedman test. The intraclass correlation coefficient was 0.961, indicating excellent reproducibility. Inter-observer variability was determined graphically with a Bland-Altman plot and by calculating the intraclass correlation coefficient. The Bland-Altman plot indicated very good reproducibility for smaller measurements but larger discrepancies with larger measurements. Conclusion The CBCT-based linear measurements in the edentulous sites using different voxel sizes and FOVs are accurate compared with the direct caliper measurements of these sites. Higher resolution CBCT images with smaller voxel size did not result in greater accuracy of the linear measurements. PMID:27358816

  7. Practical parameter estimation through space harmonic method and experiment of permanent magnet linear synchronous motor for high accuracy field orient control

    NASA Astrophysics Data System (ADS)

    Jang, Seok-Myeong; You, Dae-Joon; Jang, Won-Bum; Park, Ji-Hoon

    2005-05-01

    This paper presents the practical parameter estimation for a slotless air-cored permanent magnet linear synchronous motor (PMLSM) using an analytical method and experiment. In the analytical method, the linkage flux is calculated through the generalized magnetic vector potential obtained by the space harmonics and transfer relation with each region of permanent magnet (PM) mover, air gap, and winding stator. This linkage flux is used to estimate the dynamic parameters such as magnetization inductance, back-EMF, and thrust constant. Also, the resistance and self-inductance of one phase are obtained by experiment. Therefore, dynamic simulation of the linear synchronous motor composed of the dynamic parameters is performed by the nonrotating (d-q) voltage equation. In good agreement with the estimated parameter values, the experimental results confirm the validity of the analysis method and simulation.
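    For reference, the (d-q) voltage equations used in such field-oriented simulations are written below in their standard permanent-magnet synchronous machine form; this is the generic textbook expression, and the paper's linear-motor formulation may differ in notation (for a linear machine the electrical speed is tied to mover velocity and pole pitch).

    ```latex
    v_d = R_s i_d + L_d \frac{\mathrm{d}i_d}{\mathrm{d}t} - \omega_e L_q i_q,
    \qquad
    v_q = R_s i_q + L_q \frac{\mathrm{d}i_q}{\mathrm{d}t} + \omega_e\left(L_d i_d + \psi_{PM}\right)
    ```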

  8. Precise sequence control in linear and cyclic copolymers of 2,5-bis(2-thienyl)pyrrole and aniline by DNA-programmed assembly.

    PubMed

    Chen, Wen; Schuster, Gary B

    2013-03-20

    A series of linear and cyclic, sequence controlled, DNA-conjoined copolymers of aniline (ANi) and 2,5-bis(2-thienyl)pyrrole (SNS) were synthesized. In one approach, linear copolymers were prepared from complementary DNA oligomers containing covalently attached SNS and ANi monomers. Hybridization of the oligomers aligns the monomers in the major groove of the DNA. Treatment of the SNS- and ANi-containing duplexes with horseradish peroxidase (HRP) and H2O2 causes rapid and efficient polymerization. In this way, linear copolymers (SNS)4(ANi)6 and (ANi)2(SNS)2(ANi)2(SNS)2(ANi)2 were prepared and analyzed. A second approach to the preparation of linear and cyclic copolymers of ANi and SNS employed a DNA encoded module strategy. In this approach, single-stranded DNA oligomers composed of a central region containing (SNS)6 or (ANi)5 covalently attached monomer blocks and flanking 5'- and 3'-single-strand DNA recognition sequences were combined in buffer solution. Self-assembly of these oligomers by Watson-Crick base pairing of the recognition sequences creates linear or cyclic arrays of SNS and ANi monomer blocks. Treatment of these arrays with HRP/H2O2 causes rapid and efficient polymerization to form copolymers having patterns such as cyclic BBA and linear ABA, where B stands for an (SNS)6 block and A stands for an (ANi)5 block. These DNA-conjoined copolymers were characterized by melting temperature analysis, circular dichroism spectroscopy, native and denaturing polyacrylamide gel electrophoresis, and UV-visible-near-IR optical spectroscopy. The optical spectra of these copolymers are typical of those of conducting polymers and are uniquely dependent on the specific order of monomers in the copolymer. PMID:23448549

  9. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  10. Precision electron polarimetry

    SciTech Connect

    Chudakov, E.

    2013-11-07

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  11. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, which is a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices which were installed on the ground and ceiling of the treatment room. This system can determine the positioning error in the form of three translations and three rotations according to the registration of two X-ray images acquired online and the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. The phantom was set up on the treatment table in the correct position and with various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences between the results of the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O x-ray image-guided positioning system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, superior-inferior directions were −0.13±0.09mm, 0.03±0.25mm, 0.04±0.31mm, respectively. The maximum difference was only 0.51mm in all the directions and the angular discrepancy was 0.3±0.5° between the two systems. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal. There is a general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.

  12. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size

  13. Protoporphyrin IX fluorescence contrast in invasive glioblastomas is linearly correlated with Gd enhanced magnetic resonance image contrast but has higher diagnostic accuracy

    PubMed Central

    Samkoe, Kimberley S.; Gibbs-Strauss, Summer L.; Yang, Harold H.; Khan Hekmatyar, S.; Jack Hoopes, P.; O’Hara, Julia A.; Kauppinen, Risto A.; Pogue, Brian W.

    2011-01-01

    The sensitivity and specificity of in vivo magnetic resonance (MR) imaging is compared with production of protoporphyrin IX (PpIX), determined ex vivo, in a diffusely infiltrating glioma. A human glioma transfected with green fluorescent protein, displaying diffuse, infiltrative growth, was implanted intracranially in athymic nude mice. Image contrast from corresponding regions of interest (ROIs) in in vivo MR and ex vivo fluorescence images was quantified. It was found that all tumor groups had statistically significant PpIX fluorescence contrast and that PpIX contrast demonstrated the best predictive power for tumor presence. Contrast from gadolinium enhanced T1-weighted (T1W+Gd) and absolute T2 images positively predicted the presence of a tumor, confirmed by the GFP positive (GFP+) and hematoxylin and eosin positive (H&E+) ROIs. However, only the absolute T2 images had predictive power from controls in ROIs that were GFP+ but H&E negative. Additionally, PpIX fluorescence and T1W+Gd image contrast were linearly correlated in both the GFP+ (r = 0.79, p<1×10−8) and H&E+ (r = 0.74, p<0.003) ROIs. The trace diffusion images did not have predictive power or significance from controls. This study indicates that gadolinium contrast enhanced MR images can predict the presence of diffuse tumors, but PpIX fluorescence is a better predictor regardless of tumor vascularity. PMID:21950922

  14. Online image-guided intensity-modulated radiotherapy for prostate cancer: How much improvement can we expect? A theoretical assessment of clinical benefits and potential dose escalation by improving precision and accuracy of radiation delivery

    SciTech Connect

    Ghilezan, Michel; Yan Di . E-mail: dyan@beaumont.edu; Liang Jian; Jaffray, David; Wong, John; Martinez, Alvaro

    2004-12-01

    Purpose: To quantify the theoretical benefit, in terms of improvement in precision and accuracy of treatment delivery and in dose increase, of using online image-guided intensity-modulated radiotherapy (IG-IMRT) performed with onboard cone-beam computed tomography (CT), in an ideal setting of no intrafraction motion/deformation, in the treatment of prostate cancer. Methods and materials: Twenty-two prostate cancer patients treated with conventional radiotherapy underwent multiple serial CT scans (median 18 scans per patient) during their treatment. We assumed that these data sets were equivalent to image sets obtainable by an onboard cone-beam CT. Each patient treatment was simulated with conventional IMRT and online IG-IMRT separately. The conventional IMRT plan was generated on the basis of pretreatment CT, with a clinical target volume to planning target volume (CTV-to-PTV) margin of 1 cm, and the online IG-IMRT plan was created before each treatment fraction on the basis of the CT scan of the day, without CTV-to-PTV margin. The inverse planning process was similar for both conventional IMRT and online IG-IMRT. Treatment dose for each organ of interest was quantified, including patient daily setup error and internal organ motion/deformation. We used generalized equivalent uniform dose (EUD) to compare the two approaches. The generalized EUD (percentage) of each organ of interest was scaled relative to the prescription dose at treatment isocenter for evaluation and comparison. On the basis of bladder wall and rectal wall EUD, a dose-escalation coefficient was calculated, representing the potential increment of the treatment dose achievable with online IG-IMRT as compared with conventional IMRT. Results: With respect to radiosensitive tumor, the average EUD for the target (prostate plus seminal vesicles) was 96.8% for conventional IMRT and 98.9% for online IG-IMRT, with standard deviations (SDs) of 5.6% and 0.7%, respectively (p < 0.0001). The average EUDs of
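
    The generalized EUD referred to above is conventionally computed as a power mean over the dose-volume histogram, EUD = (sum_i v_i d_i^a)^(1/a). The sketch below shows that formula with an illustrative DVH and an assumed target parameter a; the values are not taken from the study.

```python
import numpy as np

def generalized_eud(doses_gy, volumes, a):
    """Generalized equivalent uniform dose from a differential DVH.
    a < 0 suits targets, a > 0 suits serial organs at risk; volumes are
    normalized to fractions internally."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()
    d = np.asarray(doses_gy, dtype=float)
    return np.sum(v * d ** a) ** (1.0 / a)

# Illustrative target DVH and prescription dose (hypothetical values).
dvh_doses = np.array([70.0, 75.0, 78.0, 79.2, 80.0])
dvh_volumes = np.array([0.05, 0.10, 0.25, 0.40, 0.20])
eud_pct = 100.0 * generalized_eud(dvh_doses, dvh_volumes, a=-10) / 79.2
print(f"target gEUD = {eud_pct:.1f}% of the prescription dose")
```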

  15. Study on control structure analysis and optimization of high-precision measurement platform for optical aspheric surface

    NASA Astrophysics Data System (ADS)

    Ke, Xiaolong; Guo, Yinbiao; Wang, Zhengzhong; Liu, Jianchun

    2009-05-01

    Taking high generality and efficiency into account, this paper presents a measurement and control scheme based on a high-precision measurement platform including high-precision linear motors, contact and non-contact measurement sensors of 0.1 µm resolution, and newly developed measuring software. This platform aims to achieve high-precision measurement of all kinds of optical aspheric workpieces, with a detection accuracy of 2 µm over 200 mm × 200 mm. In this paper, a measurement platform consisting of a granite gantry framework, 3-axis linear motors, a circle grating rotary encoder, grating linear scales, a 4-axis motion control card, linear motion ball guides, and contacting and non-contacting measurement sensors is designed and implemented. Finite element stress analysis shows that the framework fulfills the accuracy demand. The performance comparison between linear motors and piezoelectric ceramic motors is then discussed. Further, the coordinated motion of "circle grating rotary encoder + 2-axis linear motors" is compared with the coordinated motion of "3-axis linear motors" to find the difference in measurement accuracy from experimental data. A better scheme for kinematic locus planning is proposed to ensure that all axes have good dynamic characteristics. Aiming at the various characteristics of optical workpieces, different measurement paths are also provided. Finally, experiments are carried out to validate the measurement platform accuracy.

  16. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  17. Precision Nova operations

    NASA Astrophysics Data System (ADS)

    Ehrlich, Robert B.; Miller, John L.; Saunders, Rodney L.; Thompson, Calvin E.; Weiland, Timothy L.; Laumann, Curt W.

    1995-12-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the 'foot' and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 micrometer rms. We have also added a 'fail-safe chirp' system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  18. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the "foot" and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 µm rms. We have also added a "fail-safe chirp" system to avoid stimulated Brillouin scattering (SBS) in optical components during high energy shots.

  19. Fitting magnetic field gradient with Heisenberg-scaling accuracy

    PubMed Central

    Zhang, Yong-Liang; Wang, Huan; Jing, Li; Mu, Liang-Zhu; Fan, Heng

    2014-01-01

    The linear function is possibly the simplest and the most used relation appearing in various areas of our world. A linear relation can be generally determined by the least square linear fitting (LSLF) method using several measured quantities depending on variables. This happens for such as detecting the gradient of a magnetic field. Here, we propose a quantum fitting scheme to estimate the magnetic field gradient with N-atom spins preparing in W state. Our scheme combines the quantum multi-parameter estimation and the least square linear fitting method to achieve the quantum Cramér-Rao bound (QCRB). We show that the estimated quantity achieves the Heisenberg-scaling accuracy. Our scheme of quantum metrology combined with data fitting provides a new method in fast high precision measurements. PMID:25487218
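
    For reference, the classical least square linear fitting (LSLF) baseline mentioned above amounts to a straight-line fit of field readings versus position, with the slope estimating the gradient. The sketch below uses synthetic, illustrative numbers and says nothing about the quantum W-state scheme itself.

```python
import numpy as np

# Classical LSLF baseline: measure the field at several positions and take the
# slope of a straight-line fit as the gradient estimate. Positions, true field,
# and noise level are illustrative assumptions.
rng = np.random.default_rng(1)
positions = np.linspace(0.0, 1.0e-3, 10)               # metres
true_b0, true_grad = 1.0e-6, 2.0e-3                    # tesla, tesla per metre
readings = true_b0 + true_grad * positions + rng.normal(0.0, 1.0e-9, positions.size)

grad_est, b0_est = np.polyfit(positions, readings, 1)  # slope, intercept
print(f"estimated gradient = {grad_est:.3e} T/m (true value {true_grad:.3e})")
```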

  20. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. So ALOS mounts a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock of GPS signals more frequently than expected, GPS-based orbit is consistent with SLR-based orbit. And considering the 1 sigma error, orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  1. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
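
    A minimal sketch of the two-phase (double) sampling idea described above: error estimates from the phase I photointerpreted plots are adjusted with a linear regression fitted at the phase II ground-visited subset. All numbers below are synthetic placeholders, not AVRI data.

```python
import numpy as np

rng = np.random.default_rng(7)
phase1_photo = rng.uniform(0.1, 0.8, 160)               # photo-based error, 160 clusters
ground_idx = rng.choice(160, size=80, replace=False)    # 80 clusters visited on the ground
phase2_ground = 0.9 * phase1_photo[ground_idx] + 0.05 + rng.normal(0.0, 0.03, 80)

# Regression of ground-based on photo-based error at the double-sampled subset,
# then applied to every phase I cluster to refine the error estimate.
slope, intercept = np.polyfit(phase1_photo[ground_idx], phase2_ground, 1)
refined = intercept + slope * phase1_photo
print(f"regression-adjusted overall error estimate: {refined.mean():.2f}")
```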

  2. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Bhagwat, Swetha; Kumar, Prayush; Barkett, Kevin; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilagyi, Bela; LIGO Collaboration

    2016-03-01

    Detection of gravitational waves involves extracting extremely weak signals from noisy data, and detection depends crucially on the accuracy of the signal models. The most accurate models of compact binary coalescence are known to come from solving Einstein's equations numerically without any approximations. However, this is computationally formidable. As a more practical alternative, several analytic or semi-analytic approximations have been developed to model these waveforms. However, the work of Nitz et al. (2013) demonstrated that there is disagreement between these models. We present a careful follow-up study on the accuracies of different waveform families for spinning black hole-neutron star binaries, in the context of both detection and parameter estimation, and find SEOBNRv2 to be the most faithful model. Post-Newtonian models can be used for detection, but we find that they could lead to large parameter bias. Supported by National Science Foundation (NSF) Awards No. PHY-1404395 and No. AST-1333142.

  3. Making Precise Antenna Reflectors For Millimeter Wavelengths

    NASA Technical Reports Server (NTRS)

    Sharp, G. Richard; Wanhainen, Joyce S.; Ketelsen, Dean A.

    1994-01-01

    In improved method of fabrication of precise, lightweight antenna reflectors for millimeter wavelengths, required precise contours of reflecting surfaces obtained by computer numerically controlled machining of surface layers bonded to lightweight, rigid structures. Achievable precision greater than that of older, more-expensive fabrication method involving multiple steps of low- and high-temperature molding, in which some accuracy lost at each step.

  4. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts with the capability to keep adjustment even under conditions of rough handling.

  5. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts with the capability to keep adjustment even under conditions of rough handling.

  6. Precision and accuracy in fluorescent short tandem repeat DNA typing: assessment of benefits imparted by the use of allelic ladders with the AmpF/STR Profiler Plus kit.

    PubMed

    Leclair, Benoît; Frégeau, Chantal J; Bowen, Kathy L; Fourney, Ron M

    2004-03-01

    Base-calling precision of short tandem repeat (STR) allelic bands on dynamic slab-gel electrophoresis systems was evaluated. Data was collected from over 6000 population database allele peaks generated from 468 population database samples amplified with the AmpF/STR Profiler Plus (PP) kit and electrophoresed on ABD 377 DNA sequencers. Precision was measured by way of standard deviations and was shown to be essentially the same, whether using fixed or floating bin genotyping. However, the allelic ladders have proven more sensitive to electrophoretic variations than database samples, which have caused some floating bins of D18S51 to shift on occasion. This observation prompted the investigation of polyacrylamide gel formulations in order to stabilize allelic ladder migration. The results demonstrate that, although alleles comprised in allelic ladders and questioned samples run on the same gel should migrate in an identical manner, this premise needs to be verified for any given electrophoresis platform and gel formulation. We show that the compilation of base-calling data is a very informative and useful tool for assessing the performance stability of dynamic gel electrophoresis systems, stability on which depends genotyping result quality. PMID:15004837

  7. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on the research, the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status and future work needed regarding orbit accuracy. There are several aspects which need to be addressed in discussing determination of precise orbits, such as force models, kinematic models, measurement models, data reduction/estimation methods, etc. Although each one of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed.

  8. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, a SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of over two years (30+ analyses by each technique). We show that both digestion

  9. Precise Measurement for Manufacturing

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A metrology instrument known as PhaseCam supports a wide range of applications, from testing large optics to controlling factory production processes. This dynamic interferometer system enables precise measurement of three-dimensional surfaces in the manufacturing industry, delivering speed and high-resolution accuracy in even the most challenging environments. Compact and reliable, PhaseCam enables users to make interferometric measurements right on the factory floor. The system can be configured for many different applications, including mirror phasing, vacuum/cryogenic testing, motion/modal analysis, and flow visualization.

  10. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  11. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The emphasis of this grant was focused on precision ephemerides for the Global Positioning System (GPS) satellites for geodynamics applications. During the period of this grant, major activities were in the areas of thermal force modeling, numerical integration accuracy improvement for eclipsing satellites, analysis of GIG '91 campaign data, and the Southwest Pacific campaign data analysis.

  12. Precision orbit computations for Starlette

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Williamson, R. G.

    1976-01-01

    The Starlette satellite, launched in February 1975 by the French Centre National d'Etudes Spatiales, was designed to minimize the effects of nongravitational forces and to obtain the highest possible accuracy for laser range measurements. Analyses of the first four months of global laser tracking data confirmed the stability of the orbit and the precision to which the satellite's position is established.

  13. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, that typically span O (10 ) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q ,χBH)=(7 ,±0.4 ),(7 ,±0.6 ) , and (5 ,-0.9 ). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH≥0.5 or χBH≤-0.3 , with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit

  14. Simplest Molecules as Candidates for Precise Optical Clocks

    NASA Astrophysics Data System (ADS)

    Schiller, S.; Bakalov, D.; Korobov, V. I.

    2014-07-01

    The precise measurement of transition frequencies in cold, trapped molecules has applications in fundamental physics, and extremely high accuracies are desirable. We determine suitable candidates by considering the simplest molecules with a single electron, for which the external-field shift corrections can be calculated theoretically with high precision. Our calculations show that H2+ exhibits particular transitions whose fractional systematic uncertainties may be reduced to 5×10-17 at room temperature. We also generalize the method of composite frequencies, introducing tailored linear combinations of individual transition frequencies that are free of the major systematic shifts, independent of the strength of the external perturbing fields. By applying this technique, the uncertainty of the composite frequency is reduced compared to what is achievable with a single transition, e.g., to the 10-18 range for HD+. Thus, these molecules are of metrological relevance for future studies.

  15. Superconducting Tunnel Junctions for High-Precision EUV Spectroscopy

    NASA Astrophysics Data System (ADS)

    Ponce, F.; Carpenter, M. H.; Cantor, R.; Friedrich, S.

    2016-08-01

    We have characterized the photon response of superconducting tunnel junctions in the extreme ultraviolet energy range below 100 eV with a pulsed 355 nm laser. The detectors are operated at rates up to 5000 counts/s, are very linear in energy and have an energy resolution between 0.9 and 2 eV. We observe multiple peaks that correspond to an integer number of photons with a Poissonian probability distribution and that can be used for high-accuracy energy calibration. The uncertainty of the centroid depends on the detector resolution and the counting statistics and can be as low as 1 meV for well-separated peaks with >10^5 counts. We discuss the precision of the peak centroid as a function of detector resolution and total number of counts and the accuracy of the energy calibration.
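
    The quoted centroid precision follows from counting statistics for a roughly Gaussian peak: sigma_centroid ≈ (FWHM/2.355)/sqrt(N). A minimal sketch, using only the resolution and count figures quoted above:

```python
import math

def centroid_uncertainty_ev(fwhm_ev, counts):
    """Statistical centroid uncertainty of a Gaussian peak: sigma / sqrt(N)."""
    return (fwhm_ev / 2.355) / math.sqrt(counts)

# 0.9 eV FWHM and 1e5 counts give roughly 1 meV, matching the figure quoted above.
print(f"{1e3 * centroid_uncertainty_ev(0.9, 1e5):.2f} meV")
```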

  16. Design of a dual-axis optoelectronic level for precision angle measurements

    NASA Astrophysics Data System (ADS)

    Fan, Kuang-Chao; Wang, Tsung-Han; Lin, Sheng-Yi; Liu, Yen-Chih

    2011-05-01

    The accuracy of machine tools is mainly determined by angular errors during linear motion according to the well-known Abbe principle. Precision angle measurement is important to precision machines. This paper presents the theory and experiments of a new dual-axis optoelectronic level with low cost and high precision. The system adopts a commercial DVD pickup head as the angle sensor in association with the double-layer pendulum mechanism for two-axis swings, respectively. In data processing with a microprocessor, the measured angles of both axes can be displayed on an LCD or exported to an external PC. Calibrated by a triple-beam laser angular interferometer, the error of the dual-axis optoelectronic level is better than ±0.7 arcsec in the measuring range of ±30 arcsec, and the settling time is within 0.5 s. Experiments show the applicability to the inspection of precision machines.

  17. Accelerating scientific computations with mixed precision algorithms

    NASA Astrophysics Data System (ADS)

    Baboulin, Marc; Buttari, Alfredo; Dongarra, Jack; Kurzak, Jakub; Langou, Julie; Langou, Julien; Luszczek, Piotr; Tomov, Stanimire

    2009-12-01

    On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented. Program summaryProgram title: ITER-REF Catalogue identifier: AECO_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 7211 No. of bytes in distributed program, including test data, etc.: 41 862 Distribution format: tar.gz Programming language: FORTRAN 77 Computer: desktop, server Operating system: Unix/Linux RAM: 512 Mbytes Classification: 4.8 External routines: BLAS (optional) Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU
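
    A minimal numpy sketch of the mixed-precision idea described above (solve in 32-bit, refine the solution with 64-bit residuals). A production code such as ITER-REF factors the matrix once in low precision and reuses the factors; the repeated solve calls below are only for brevity.

```python
import numpy as np

def mixed_precision_solve(a, b, sweeps=5):
    """Solve Ax = b with a float32 solver, refining x using float64 residuals."""
    a32, b32 = a.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(a32, b32).astype(np.float64)        # cheap low-precision solve
    for _ in range(sweeps):
        r = b - a @ x                                       # residual in double precision
        x += np.linalg.solve(a32, r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
a = rng.standard_normal((200, 200))
b = rng.standard_normal(200)
x = mixed_precision_solve(a, b)
print(np.linalg.norm(a @ x - b))    # residual close to double-precision round-off
```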

  18. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross- validity approach to select sample sizes…

  19. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate Quantum-Electrodynamics (QED) tests of the simplest bound three body atomic system are performed by precise laser spectroscopic measurements in atomic Helium. In this paper, we present a review of measurements between triplet states at 1083 nm (23S-23P) and at 389 nm (23S-33P). In 4He, such data have been used to measure the fine structure of the triplet P levels and, then, to determine the fine structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, determination of the He isotopes nuclear structure and, in particular, a measurement of the nuclear charge radius, are performed by using hyperfine structure and isotope-shift measurements.

  20. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with a high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K) an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of + or - 0.7 percent. A least square fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies such as measurements of cross sections and reaction rates.
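
    A minimal sketch of the Clausius-Clapeyron analysis described above: fit ln p against 1/T and convert the slope to a latent heat per gram. The data points are synthetic, anchored only to the quoted 0.0403 Torr at 87.3 K and 82.7 cal/g; they are not the measured values.

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
M_O3 = 48.0        # g mol^-1
slope_true = -82.7 * 4.184 * M_O3 / R           # K, derived from the quoted latent heat

temps = np.linspace(85.0, 95.0, 11)             # K
ln_p = np.log(0.0403) + slope_true * (1.0 / temps - 1.0 / 87.3)  # synthetic ln(p/Torr)

# Straight-line fit of ln p versus 1/T; the slope gives the latent heat back.
slope, intercept = np.polyfit(1.0 / temps, ln_p, 1)
latent_heat = -slope * R / M_O3 / 4.184         # cal per gram
print(f"latent heat ≈ {latent_heat:.1f} cal/g")
```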

  1. EDITORIAL: Precision proteins Precision proteins

    NASA Astrophysics Data System (ADS)

    Demming, Anna

    2010-06-01

    large molecular weight, net negative charge and hydrophilicity of synthetic small interfering RNAs makes it hard for the molecules to cross the plasma membrane and enter the cell cytoplasm. Immune responses can also diminish the effectiveness of this approach. In this issue, Shiri Weinstein and Dan Peer from Tel Aviv University provide an overview of the challenges and recent progress in the use of nanocarriers for delivering RNAi effector molecules into target tissues and cells more effectively [5]. Also in this issue, researchers in Korea report new results that demonstrate the potential of nanostructures in neural network engineering [6]. Min Jee Jang et al report directional growth of neurites along linear carbon nanotube patterns, demonstrating great progress in neural engineering and the scope for using nanotechnology to treat neural diseases. Modern medicine cannot claim to have abolished the pain and suffering that accompany disease. But a comparison between the ghastly and often ineffective iron implements of early medicine and the smart gadgets and treatments used in hospitals today speaks volumes for the extraordinary progress that has been made, and the motivation behind this research. References [1] Wallis F 2000 Signs and senses: diagnosis and prognosis in early medieval pulse and urine texts Soc. Hist. Med. 13 265-78 [2] Arntz Y, Seelig J D, Lang H P, Zhang J, Hunziker P, Ramseyer J P, Meyer E, Hegner M and Gerber Ch 2003 Label-free protein assay based on a nanomechanical cantiliever array Nanotechnology 14 86-90 [3] Gowtham S, Scheicher R H, Pandey R, Karna S P and Ahuja R 2008 First-principles study of physisorption of nucleic acid bases on small-diameter carbon nanotubes Nanotechnology 19 125701 [4] Wang H-N and Vo-Dinh T 2009 Multiplex detection of breast cancer biomarkers using plasmonic molecular sentinel nanoprobes Nanotechnology 20 065101 [5] Weinstein S and Peer D 2010 RNAi nanomedicines: challenges and opportunities within the immune system

  3. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.

  4. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  5. Factors affecting accuracy and precision in PET volume imaging

    SciTech Connect

    Karp, J.S.; Daube-Witherspoon, M.E.; Muehllehner, G. )

    1991-03-01

    Volume imaging positron emission tomographic (PET) scanners with no septa and a large axial acceptance angle offer several advantages over multiring PET scanners. A volume imaging scanner combines high sensitivity with fine axial sampling and spatial resolution. The fine axial sampling minimizes the partial volume effect, which affects the measured concentration of an object. Even if the size of an object is large compared to the slice spacing in a multiring scanner, significant variation in the concentration is measured as a function of the axial position of the object. With a volume imaging scanner, it is necessary to use a three-dimensional reconstruction algorithm in order to avoid variations in the axial resolution as a function of the distance from the center of the scanner. In addition, good energy resolution is needed in order to use a high energy threshold to reduce the coincident scattered radiation.

  6. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429
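
    For reference, the standard uptake value mentioned above is commonly computed as the measured activity concentration divided by the injected activity per unit body mass. A minimal sketch with illustrative numbers:

```python
def standard_uptake_value(conc_kbq_per_ml, injected_kbq, body_mass_g):
    """Body-weight SUV: tissue concentration over injected activity per gram
    (assumes 1 mL of tissue weighs about 1 g)."""
    return conc_kbq_per_ml / (injected_kbq / body_mass_g)

# Illustrative: 5 kBq/mL in a tumour ROI, 370 MBq injected, 70 kg patient.
print(round(standard_uptake_value(5.0, 370e3, 70e3), 2))
```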

  7. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its ability to image in detail the three-dimensional structure of the mantle or to determine with certainty the strengths of anomalies. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups, in areas of particular interest e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge is poor of the size, distribution and location of protoliths, and of metasomatism of magma sources, the nature of the partial-melting and melt-extraction process, the mixing of disparate melts, and the re-assimilation of crust and mantle lithosphere by rising melt. Interpretations of seismic tomography, petrologic and geochemical observations, and all three together, are ambiguous, and this needs to be emphasized more in presenting interpretations so that the viability of the models can be assessed more reliably.

  8. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.

  9. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong, G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers display has a significant impact on the delivery accuracy at the meter fix. Using the 10 seconds rounded and 1 minute rounded DCT resolutions resulted in more accurate delivery than 1 minute truncated and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high and low altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time based metering.
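
    The three delay countdown timer (DCT) resolutions compared above differ only in how the delay is quantized for display. A minimal sketch of how such rounding might be implemented (the actual TBFM display logic is not specified in the abstract):

```python
def dct_display_seconds(delay_s, mode):
    """Delay countdown timer value at the three resolutions compared in AMPS."""
    if mode == "10s_rounded":
        return round(delay_s / 10.0) * 10
    if mode == "1min_rounded":
        return round(delay_s / 60.0) * 60
    if mode == "1min_truncated":
        return int(delay_s // 60) * 60
    raise ValueError(mode)

# A 95-second delay displays as 100 s, 120 s, or 60 s under the three modes.
print([dct_display_seconds(95, m) for m in ("10s_rounded", "1min_rounded", "1min_truncated")])
```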

  10. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  11. Precision Polarimetry for Cold Neutrons

    NASA Astrophysics Data System (ADS)

    Barron-Palos, Libertad; Bowman, J. David; Chupp, Timothy E.; Crawford, Christopher; Danagoulian, Areg; Gentile, Thomas R.; Jones, Gordon; Klein, Andreas; Penttila, Seppo I.; Salas-Bacci, Americo; Sharma, Monisha; Wilburn, W. Scott

    2007-10-01

    The abBA and PANDA experiments, currently under development, aim to measure the correlation coefficients in polarized free neutron beta decay at the FnPB at the SNS. The polarization of the neutron beam, polarized with a ^3He spin filter, has to be known with high precision in order to achieve the goal accuracy of these experiments. In the NPDGamma experiment, where a ^3He spin filter was used, it was observed that backgrounds play an important role in the precision to which the polarization can be determined. An experiment that focuses on the reduction of background sources, to establish techniques and find the upper limit for the polarization accuracy with these spin filters, is currently in progress at LANSCE. A description of the measurement and results will be presented.

  12. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of the most active research fields nowadays. In order to improve the measurement precision, the internal parameters of the camera should be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed based on the images of planar targets or tridimensional targets. By using the algorithm, the internal parameters of the camera are calibrated based on the existing planar target in the vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is obviously improved compared with the conventional linear algorithm, the Tsai general algorithm, and the Zhang Zhengyou calibration algorithm. The algorithm proposed in this article can satisfy the need of computer vision and provide a reference for precise measurement of relative position and attitude.
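
    For orientation, planar-target calibration of the kind discussed above is commonly done with OpenCV's Zhang-style routine; the sketch below illustrates the inputs and outputs involved and is not the improved algorithm proposed in the paper. The chessboard size and image paths are hypothetical.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                     # inner corners per row and column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib_images/*.png"):         # hypothetical image set
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

if size is None:
    raise SystemExit("no usable calibration images found")

rms, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print("reprojection RMS (pixels):", rms)
print("intrinsic matrix:\n", camera_matrix)
```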

  13. High precision during food recruitment of experienced (reactivated) foragers in the stingless bee Scaptotrigona mexicana (Apidae, Meliponini)

    NASA Astrophysics Data System (ADS)

    Sánchez, Daniel; Nieh, James C.; Hénaut, Yann; Cruz, Leopoldo; Vandame, Rémy

    Several studies have examined the existence of recruitment communication mechanisms in stingless bees. However, the spatial accuracy of location-specific recruitment has not been examined. Moreover, the location-specific recruitment of reactivated foragers, i.e., foragers that have previously experienced the same food source at a different location and time, has not been explicitly examined. However, such foragers may also play a significant role in colony foraging, particularly in small colonies. Here we report that reactivated Scaptotrigona mexicana foragers can recruit with high precision to a specific food location. The recruitment precision of reactivated foragers was evaluated by placing control feeders to the left and the right of the training feeder (direction-precision tests) and between the nest and the training feeder and beyond it (distance-precision tests). Reactivated foragers arrived at the correct location with high precision: 98.44% arrived at the training feeder in the direction trials (five-feeder fan-shaped array, accuracy of at least +/-6° of azimuth at 50 m from the nest), and 88.62% arrived at the training feeder in the distance trials (five-feeder linear array, accuracy of at least +/-5 m or +/-10% at 50 m from the nest). Thus, S. mexicana reactivated foragers can find the indicated food source at a specific distance and direction with high precision, higher than that shown by honeybees, Apis mellifera, which do not communicate food location at such close distances to the nest.

  14. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  15. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrate its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e{sup +}e{sup {minus}} linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  16. Automated generation of incremental linear network masks using the photocomposition method with multiple microphotographical reductions using laser microsystems

    NASA Astrophysics Data System (ADS)

    Gheorghe, Gheorghe I.; Dontu, Octavian

    2008-03-01

    The paper treats high-precision micro technologies for the automated generation of linear incremental network masks using the photocomposition method with multiple microphotographic reductions and high-sensitivity laser microsystems, for the manufacture of micro-sensors and micro-transducers for micro-displacements in industrial and metrological laboratories. These laser micro technologies allow the automated generation of incremental network masks with an incremental step of 0.1 µm, ensuring the accuracy required by European and international standards, as well as the realization of linear incremental photoelectric rules, divisor and vernier, as ultra-precise components of micro-sensors and micro-transducers for micro-displacements.

  17. Precision evaluation of calibration factor of a superconducting gravimeter using an absolute gravimeter

    NASA Astrophysics Data System (ADS)

    Feng, Jin-yang; Wu, Shu-qing; Li, Chun-jian; Su, Duo-wu; Xu, Jin-yi; Yu, Mei

    2016-01-01

    The precision of the calibration factor of a superconducting gravimeter (SG) determined with an absolute gravimeter (AG) is analyzed based on linear least-squares fitting and error propagation theory, and the factors affecting the accuracy are discussed. The accuracy can be improved by choosing observation periods in which the solid-Earth tide changes significantly, or by increasing the calibration time. A simulation was carried out based on synthetic gravity tides calculated with T-soft at the observation site from Aug. 14th to Sept. 2nd, 2014. The results indicate that the best precision achievable with half a day's observation data is below 0.28%, and that the precision increases exponentially with increasing peak-to-peak gravity change. A comparison of results obtained over the same observation time indicates that properly selected observation data are more beneficial to the improvement of precision. Finally, the calibration experiment of the SG iGrav-012 is introduced and its calibration factor is determined for the first time using the AG FG5X-249. With 2.5 days of data properly selected from a solid-tide period with large tidal amplitude, the determined calibration factor of iGrav-012 is (-92.54423 ± 0.13616) μGal/V (1 μGal = 10^-8 m/s^2), with a relative accuracy of about 0.15%.
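
    As a minimal illustration of the linear least-squares calibration described above (all numbers below are synthetic, not the iGrav-012/FG5X-249 data), the SG output voltage can be regressed against simultaneous AG gravity values; the slope is the calibration factor and its standard error gives the relative precision:

        import numpy as np

        # Hypothetical, synthetic example: simultaneous AG gravity values (uGal,
        # tide-driven) and SG feedback voltages (V); not actual instrument data.
        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 2.5, 600)                       # 2.5 days of observation
        g_ag = 120.0 * np.sin(2 * np.pi * t / 0.517)         # synthetic tidal signal, uGal
        true_factor = -92.5                                  # assumed uGal per volt
        v_sg = g_ag / true_factor + rng.normal(0.0, 0.01, t.size)  # SG output plus noise

        # Linear least-squares fit g = a*v + b; the slope a is the calibration factor.
        A = np.column_stack([v_sg, np.ones_like(v_sg)])
        coef, residuals, *_ = np.linalg.lstsq(A, g_ag, rcond=None)
        a, b = coef
        sigma2 = residuals[0] / (t.size - 2)                 # residual variance
        a_err = np.sqrt(sigma2 * np.linalg.inv(A.T @ A)[0, 0])

        print(f"calibration factor: {a:.3f} +/- {a_err:.3f} uGal/V "
              f"(relative precision {abs(a_err / a) * 100:.2f}%)")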

  18. Linear thermal expansion data for tuffs from the unsaturated zone at Yucca Mountain, Nevada; Yucca Mountain Site Characterization Project

    SciTech Connect

    Schwartz, B.M.; Chocas, C.S.

    1992-07-01

    Experiment results are presented for linear thermal expansion measurements on tuffaceous rocks from the unsaturated zone at Yucca Mountain. The accuracy of the unconfined data collected between 50 and 250{degrees}C is better than 1.8 percent, with a precision better than 4.5 percent. The accuracy of the unconfined data collected between ambient temperature and 50{degrees}C is approximately 11 percent deviation from the true value, with a precision of 12 percent of the mean value. Because of the experiment design and the lack of information on related calibrations, the accuracy and precision of the confined thermal expansion measurements could not be determined.

  19. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  20. A novel precision face grinder for advanced optic manufacture

    NASA Astrophysics Data System (ADS)

    Guo, Y.; Peng, Y.; Wang, Z.; Yang, W.; Bi, G.; Ke, X.; Lin, X.

    2010-10-01

    In this paper, a large-scale NC precision face grinding machine is developed. This grinding machine can be used for the precision machining of brittle materials. The base and the machine body are independent, and the whole structure is configured as a "T" type. The vertical column is seated onto the machine body at the middle center part through a pair of precision guide rails. The grinding wheel is driven by a hybrid hydrodynamic and hydrostatic spindle. The worktable is supported by novel split thin-film throttle hydrostatic guide rails. Each motion axis of the grinding machine is equipped with a Heidenhain absolute linear encoder, forming a closed feedback control loop with the adopted Fanuc 0i-MD NC system. The machine is capable of machining extremely flat surfaces on workpieces up to 800 mm x 600 mm. The maximum load bearing of the worktable is 620 kg. Furthermore, the roughness of the machined surfaces should be smooth (Ra < 50-100 nm), and the form accuracy less than 2 μm (±1 μm) over 200 x 200 mm. After assembly and debugging of the surface grinding machine, the worktable surface was self-ground with a 60# grinding wheel and the resulting form accuracy is 3 μm over 600 mm x 800 mm. A grinding experiment was then conducted on a BK7 flat optical glass element (400 mm x 250 mm) and a ceramic disc (Φ100 mm) with a 60# grinding wheel; the measured surface roughness and form accuracy of the optical glass element are 0.07 μm and 1.56 μm over 200 x 200 mm, and those of the ceramic disc are 0.52 μm and 1.28 μm, respectively.

  1. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  2. Linear Classification Functions.

    ERIC Educational Resources Information Center

    Huberty, Carl J.; Smith, Jerry D.

    Linear classification functions (LCFs) arise in a predictive discriminant analysis for the purpose of classifying experimental units into criterion groups. The relative contribution of the response variables to classification accuracy may be based on LCF-variable correlations for each group. It is proved that, if the raw response measures are…

  3. High precision triangular waveform generator

    DOEpatents

    Mueller, Theodore R.

    1983-01-01

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  4. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  5. Multiple linear regression for isotopic measurements

    NASA Astrophysics Data System (ADS)

    Garcia Alonso, J. I.

    2012-04-01

    There are two typical applications of isotopic measurements: the detection of natural variations in isotopic systems and the detection of man-made variations using enriched isotopes as indicators. For both types of measurements, accurate and precise isotope ratio measurements are required. For the so-called non-traditional stable isotopes, multicollector ICP-MS instruments are usually applied. In many cases, chemical separation procedures are required before accurate isotope measurements can be performed. The off-line separation of Rb and Sr or Nd and Sm is the classical procedure employed to eliminate isobaric interferences before multicollector ICP-MS measurement of Sr and Nd isotope ratios. This procedure also provides the matrix separation needed to obtain precise and accurate Sr and Nd isotope ratios. In our laboratory we have evaluated the separation of Rb-Sr and Nd-Sm isobars by liquid chromatography with on-line multicollector ICP-MS detection. The combination of this chromatographic procedure with multiple linear regression of the raw chromatographic data resulted in Sr and Nd isotope ratios with precisions and accuracies typical of off-line sample preparation procedures. On the other hand, methods for labelling individual organisms (such as a given plant, fish or animal) are required for population studies. We have developed a dual isotope labelling procedure which can be unique for a given individual, can be inherited in living organisms and is stable. The detection of the isotopic signature is also based on multiple linear regression. The labelling of fish and its detection in otoliths by Laser Ablation ICP-MS will be discussed using trout and salmon as examples. In conclusion, isotope measurement procedures based on multiple linear regression can be a viable alternative in multicollector ICP-MS measurements.
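
    A minimal sketch of the multiple-linear-regression idea discussed above, using made-up intensities: the signal observed across several masses is modelled as a linear combination of the analyte and interferent isotope patterns, and the fitted coefficients yield an interference-corrected isotope ratio (the abundance patterns below are approximate and purely illustrative):

        import numpy as np

        # Hypothetical isotope abundance patterns over the same set of masses;
        # values are illustrative only, not a calibrated spectral library.
        masses = np.array([84, 86, 87, 88])
        pattern_sr = np.array([0.0056, 0.0986, 0.0700, 0.8258])  # natural Sr (approx.)
        pattern_rb = np.array([0.0000, 0.0000, 0.2783, 0.0000])  # 87Rb leaking onto mass 87

        rng = np.random.default_rng(1)
        true_sr, true_rb = 3.2e6, 4.0e5                          # assumed total intensities
        observed = true_sr * pattern_sr + true_rb * pattern_rb
        observed = observed + rng.normal(0.0, 500.0, observed.size)  # detector noise

        # Multiple linear regression: observed = X @ [sr_total, rb_total]
        X = np.column_stack([pattern_sr, pattern_rb])
        (sr_total, rb_total), *_ = np.linalg.lstsq(X, observed, rcond=None)

        raw_87_86 = observed[2] / observed[1]                    # biased upward by 87Rb
        corr_87_86 = (sr_total * pattern_sr[2]) / observed[1]    # 87Rb removed by the fit
        print(f"Sr = {sr_total:.3e}, Rb = {rb_total:.3e}, "
              f"87/86 raw = {raw_87_86:.4f}, corrected = {corr_87_86:.4f}")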

  6. Optimal diving maneuver strategy considering guidance accuracy for hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Zhu, Jianwen; Liu, Luhua; Tang, Guojian; Bao, Weimin

    2014-11-01

    An optimal maneuver strategy considering terminal guidance accuracy for a hypersonic vehicle in the dive phase is investigated in this paper. First, the complete three-dimensional nonlinear coupled motion equations are derived directly from the diving relative motion relationship without any approximation, and are converted into linear decoupled state-space equations with the same relative degree by feedback linearization. Second, the diving guidance law is designed based on the decoupled equations to meet the terminal impact-point and falling-angle constraints. To further improve the interception capability, a maneuver control model is constructed by adding a maneuver control term to the guidance law. Then, an integrated performance index consisting of maximum line-of-sight angle rate and minimum energy consumption is designed, and optimal control is employed to obtain the optimal maneuver strategy when the encounter time is determined and undetermined, respectively. Furthermore, the performance index and a suboptimal strategy are reconstructed to deal with the control capability constraint and the serious influence of maneuvering flight on terminal guidance accuracy. Finally, the approach is tested using the Common Aero Vehicle-H model. Simulation results demonstrate that the proposed strategy achieves high-precision guidance and effective maneuvering at the same time, and the indices are also optimized.

  7. Construction concepts for precision segmented reflectors

    NASA Technical Reports Server (NTRS)

    Mikulas, Martin M., Jr.; Withnell, Peter R.

    1993-01-01

    Three construction concepts for deployable precision segmented reflectors are presented. The designs produce reflectors with very high surface accuracies and diameters three to five times the width of the launch vehicle shroud. Of primary importance is the reliability of both the deployment process and the reflector operation. This paper is conceptual in nature, and uses these criteria to present beneficial design concepts for deployable precision segmented reflectors.

  8. High-precision arithmetic in mathematical physics

    DOE PAGESBeta

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights the facilities required to support future computation, in light of emerging developments in computer architecture.
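
    A small illustration of the precision problem raised above, using Python's standard-library decimal module (one of several arbitrary-precision options) on an expression that loses all significant digits in IEEE 64-bit arithmetic:

        from decimal import Decimal, getcontext

        # Catastrophic cancellation: in IEEE 64-bit arithmetic the small term is lost,
        # because the spacing between representable numbers near 1e16 is 2.
        x = (1e16 + 1.0) - 1e16
        print(x)                      # 0.0 -- the '+1' has vanished

        # The same expression at 50 significant digits with the stdlib decimal module.
        getcontext().prec = 50
        y = (Decimal(10) ** 16 + Decimal(1)) - Decimal(10) ** 16
        print(y)                      # 1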

  9. Matter power spectrum and the challenge of percent accuracy

    NASA Astrophysics Data System (ADS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k <= 1 h Mpc^-1 and to within three percent at k <= 10 h Mpc^-1. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k <= 2 h Mpc^-1. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^-1 Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of Mp = 10^9 h^-1 Msolar is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
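
    For orientation, a minimal sketch of the final step compared among the codes above, measuring a power spectrum from a periodic density-contrast grid; the grid values, box size and binning below are invented, and a real analysis would first deposit particles on the mesh and correct for the assignment window:

        import numpy as np

        boxsize = 500.0                            # Mpc/h (assumed)
        n = 128
        rng = np.random.default_rng(2)
        delta = rng.normal(0.0, 1.0, (n, n, n))    # stand-in for an overdensity field

        delta_k = np.fft.rfftn(delta) / n**3       # discrete -> continuous FT convention
        power = np.abs(delta_k) ** 2 * boxsize**3  # P(k) estimator, (Mpc/h)^3

        # Spherically average in |k| bins.
        kfreq = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
        kx, ky = np.meshgrid(kfreq, kfreq, indexing="ij")
        kz = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
        kmag = np.sqrt(kx[..., None] ** 2 + ky[..., None] ** 2 + kz[None, None, :] ** 2)

        bins = np.linspace(kmag[kmag > 0].min(), kmag.max(), 25)
        which = np.digitize(kmag.ravel(), bins)
        counts = np.bincount(which, minlength=bins.size + 1)
        sums = np.bincount(which, weights=power.ravel(), minlength=bins.size + 1)
        pk = sums[1:-1] / np.maximum(counts[1:-1], 1)
        k_centres = 0.5 * (bins[1:] + bins[:-1])
        print(np.column_stack([k_centres[:5], pk[:5]]))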

  10. Key techniques of ultra-precision aerostatic system

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Li, Jiafu; Cui, Ting; Hu, Jiacheng; Cheng, Yang; Wang, Meibao

    2013-10-01

    In the process of ultra-precision machining and measuring, nanoscale rotary and linear motion can be realized by aerostatic systems. Aerostatic restrictors are one of the core components of an aerostatic system. An aerostatic restrictor with multiple micro-channels was designed and developed, combining the orifice and torus throttling methods. The restrictor consists of two individual parts assembled by an interference fit, which alleviates the contradiction between its stiffness and stability. Its maximum bearing capacity was 708.4 N at a supply gas pressure of 0.5 MPa. Numerical simulation and experimental investigation indicate that the pressure in the gas film of this restrictor gradually decreases to atmospheric pressure from the center to the surrounding region. The temperature decreases from the outlet to the edge and the maximum temperature difference is more than 5 °C, which verifies the Joule-Thomson effect in the throttling process. In order to reduce the influence of gas-source fluctuation on parameters such as gas film thickness, pressure and temperature, a high-accuracy stable pressure source was developed using two-stage series closed-loop feedback control, which keeps the outlet pressure error below 1%. Because of the influence of ambient noise on the ultra-precision aerostatic system, a high-precision vibration-isolation platform based on air-spring vibration-isolation technology was also developed, whose natural frequency can be as low as 1.22 Hz.

  11. Application of GPS in a high precision engineering survey network

    SciTech Connect

    Ruland, R.; Leick, A.

    1985-04-01

    A GPS satellite survey was carried out with the Macrometer to support construction at the Stanford Linear Accelerator Center (SLAC). The network consists of 16 stations, of which 9 stations were part of the Macrometer network. The horizontal and vertical accuracy of the GPS survey is estimated to be 1 to 2 mm and 2 to 3 mm respectively. The horizontal accuracy of the terrestrial survey, consisting of angles and distances, equals that of the GPS survey only in the ''loop'' portion of the network. All stations are part of a precise level network. The ellipsoidal heights obtained from the GPS survey and the orthometric heights of the level network are used to compute geoid undulations. A geoid profile along the linac was computed by the National Geodetic Survey in 1963. This profile agreed with the observed geoid within the standard deviation of the GPS survey. Angles and distances were adjusted together (TERRA), and all terrestrial observations were combined with the GPS vector observations in a combination adjustment (COMB). A comparison of COMB and TERRA revealed systematic errors in the terrestrial solution. A scale factor of 1.5 ppm ± 0.8 ppm was estimated. This value is of the same magnitude as the overall horizontal accuracy of both networks. 10 refs., 3 figs., 5 tabs.

  12. Geometric accuracy improvement and verification of remote sensing image product for the ZY-3 surveying and mapping satellite

    NASA Astrophysics Data System (ADS)

    Wang, Xia; Zhou, Ping; Guo, Li

    2015-12-01

    Based on the geometric characteristics of the ZY-3 surveying and mapping satellite, this paper analyses the main error sources affecting the geometric accuracy of ZY-3 satellite image products, and proposes a key technique to improve the geometric positioning accuracy of ZY-3 satellite image products without ground control points (GCPs). Firstly, 556 ZY-3 satellite images distributed over central and western China, with an area of 350 million km2, were used for the planar positioning accuracy verification. The results show that the planar accuracy of ZY-3 imagery without GCPs is about 10.8 meters (1σ), and more than 96.9% of the test images without GCPs have a planar accuracy better than 25 meters. Subsequently, the Digital Surface Model (DSM) produced from ZY-3 three-line-array imagery of Shanxi without GCPs was compared with a high-precision Lidar DEM. The comparison shows that the overall vertical accuracy of the DSM is better than 6 meters (1σ), and better than 5.5 and 6.4 meters (1σ) in plain and mountainous areas respectively. The validation thus confirms the overall accuracy of ZY-3 satellite imagery, indicating that the ZY-3 satellite can achieve high geometric accuracy.

  13. Application of GPS in a high precision engineering survey network

    NASA Astrophysics Data System (ADS)

    Ruland, R.; Leick, A.

    A global positioning system (GPS) satellite survey was conducted with the Macrometer to support construction at the Stanford Linear Accelerator Center (SLAC). The network consists of 16 stations of which 9 stations were part of the Macrometer network. The horizontal accuracy of the terrestrial survey, consisting of angles and distances, equals that of the GPS survey only in the loop portion of the network. All stations are part of the precise level network. The ellipsoidal heights obtained from the GPS survey and the orthometric heights of the level network are used to compute geoid undulations. The profile agreed with the observed geoid within the standard deviation of the GPS survey. Angles and distances were adjusted together (TERRA), and all terrestrial observations were combined with the GPS vector observations in a combination adjustment (COMB). A comparison of COMB and TERRA revealed systematic errors in the terrestrial solution.

  14. Linear Accelerators

    NASA Astrophysics Data System (ADS)

    Sidorin, Anatoly

    2010-01-01

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio-frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  15. Linear Accelerators

    SciTech Connect

    Sidorin, Anatoly

    2010-01-05

    In linear accelerators the particles are accelerated by either electrostatic fields or oscillating radio-frequency (RF) fields. Accordingly, linear accelerators are divided into three large groups: electrostatic, induction and RF accelerators. An overview of the different types of accelerators is given. The stability of longitudinal and transverse motion in RF linear accelerators is briefly discussed. The methods of beam focusing in linacs are described.

  16. Design and Analysis of a Differential Waveguide Structure to Improve Magnetostrictive Linear Position Sensors

    PubMed Central

    Zhang, Yongjie; Liu, Weiwen; Zhang, Haibo; Yang, Jinfeng; Zhao, Hui

    2011-01-01

    Magnetostrictive linear position sensors (MLPS) are high-precision sensors used in industry; they determine position by measuring the propagation time of ultrasonic signals in a waveguide. To date, MLPS have attracted widespread attention for their accuracy, reliability, and cost-efficiency in performing non-contact, multiple measurements. However, the sensor with its traditional structure is susceptible to electromagnetic interference, which affects accuracy. In the present study, we propose a novel MLPS structure that relies on two differential waveguides to improve the signal-to-noise ratio, common-mode rejection ratio, and accuracy of MLPS. The proposed sensor model describes the sensor performance and the relationships among the sensor parameters. Experimental results with the new sensor indicate that the new structure improves accuracy to ±0.1 mm, compared with ±0.2 mm for the traditional structure. In addition, the proposed sensor shows a considerable improvement in temperature characteristics. PMID:22163911

  17. Accuracy assessment of single and double difference models for the single epoch GPS compass

    NASA Astrophysics Data System (ADS)

    Chen, Wantong; Qin, Honglei; Zhang, Yanzhong; Jin, Tian

    2012-02-01

    The single epoch GPS compass is an important field of study, since it is a valuable technique for the orientation estimation of vehicles and it can guarantee total independence from carrier phase slips in practical applications. To achieve highly accurate angular estimates, the unknown integer ambiguities of the carrier phase observables need to be resolved. Past research has focused on ambiguity resolution for a single epoch; however, accuracy is another significant problem for many challenging applications. In this contribution, the accuracy is evaluated for the non-common clock scheme of the receivers and the common clock scheme of the receivers, respectively. We focus on three scenarios for either scheme: single difference model vs. double difference model, single frequency model vs. multiple frequency model, and optimal linear combinations vs. traditional triple-frequency least squares. We deduce the short baseline precision for a number of different available models and analyze the difference in accuracy among those models. Compared with the single or double difference model of the non-common clock scheme, the single difference model of the common clock scheme can greatly reduce the vertical component error of the baseline vector, which results in higher elevation accuracy. The least squares estimator can also reduce the error of the fixed baseline vector with the aid of multi-frequency observations, thereby improving the attitude accuracy. In essence, the "accuracy improvement" is attributed to the difference in accuracy between different models, not a real improvement for any specific model. If all noise levels of GPS triple frequency carrier phase are assumed the same in units of cycles, it can be proved that the optimal linear combination approach is equivalent to the traditional triple-frequency least squares, no matter which scheme is utilized. Both simulations and actual experiments have been performed to verify the correctness of the theoretical analysis.

  18. Precise Countersinking Tool

    NASA Technical Reports Server (NTRS)

    Jenkins, Eric S.; Smith, William N.

    1992-01-01

    Tool countersinks holes precisely with only portable drill; does not require costly machine tool. Replaceable pilot stub aligns axis of tool with centerline of hole. Ensures precise cut even with imprecise drill. Designed for relatively low cutting speeds.

  19. Surgical accuracy of three-dimensional virtual planning: a pilot study of bimaxillary orthognathic procedures including maxillary segmentation.

    PubMed

    Stokbro, K; Aagaard, E; Torkov, P; Bell, R B; Thygesen, T

    2016-01-01

    This retrospective study evaluated the precision and positional accuracy of different orthognathic procedures following virtual surgical planning in 30 patients. To date, no studies of three-dimensional virtual surgical planning have evaluated the influence of segmentation on positional accuracy and transverse expansion. Furthermore, only a few have evaluated the precision and accuracy of genioplasty in placement of the chin segment. The virtual surgical plan was compared with the postsurgical outcome using three linear and three rotational measurements. The influence of maxillary segmentation was analyzed in both superior and inferior maxillary repositioning. In addition, the transverse surgical expansion was compared with the postsurgical expansion obtained. Overall, a high degree of linear accuracy between planned and postsurgical outcomes was found, but with a large standard deviation. The rotational difference showed an increase in pitch, mainly affecting the maxilla. Segmentation had no significant influence on maxillary placement. However, a posterior movement was observed in inferior maxillary repositioning. A lack of transverse expansion was observed in the segmented maxilla independent of the degree of expansion. PMID:26250603

  20. Based on linear spectral mixture model (LSMM) unmixing remote sensing image

    NASA Astrophysics Data System (ADS)

    Liu, Jiaodi; Cao, Weibin

    2011-06-01

    Mixed pixels are common in remote sensing images, and their decomposition (unmixing) is a difficulty in remote sensing image classification. In this work, a linear spectral mixture model was built for the study area and unconstrained mixed-pixel decomposition was carried out with four endmembers: cotton, corn, tomato and soil. Four endmember abundance images and an RMS error image were obtained, the cotton planting area was measured from the decomposed pixel blocks, and the unmixing accuracy was assessed. Experimental results show that the linear mixture model is simple to build, greatly reduces computation, and offers high precision and strong adaptability.
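
    A minimal sketch of the linear spectral mixture model with unconstrained least-squares unmixing, as used above; the endmember spectra and mixed pixel below are made up and only stand in for a real cotton/corn/tomato/soil spectral library:

        import numpy as np

        # Hypothetical endmember spectra (rows: bands, columns: endmembers).
        E = np.array([[0.12, 0.30, 0.25, 0.40],
                      [0.18, 0.35, 0.28, 0.42],
                      [0.45, 0.50, 0.48, 0.44],
                      [0.60, 0.40, 0.55, 0.46],
                      [0.55, 0.38, 0.52, 0.47]])          # 5 bands x 4 endmembers

        rng = np.random.default_rng(3)
        true_abund = np.array([0.6, 0.1, 0.1, 0.2])       # assumed mixture for one pixel
        pixel = E @ true_abund + rng.normal(0.0, 0.005, E.shape[0])

        # Unconstrained linear unmixing: least-squares abundances per pixel.
        abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
        rms = np.sqrt(np.mean((E @ abund - pixel) ** 2))  # value of the RMS error image
        print("abundances:", np.round(abund, 3), " RMS error:", round(float(rms), 4))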

  1. "Precision" drug development?

    PubMed

    Woodcock, J

    2016-02-01

    The concept of precision medicine has entered broad public consciousness, spurred by a string of targeted drug approvals, highlighted by the availability of personal gene sequences, and accompanied by some remarkable claims about the future of medicine. It is likely that precision medicines will require precision drug development programs. What might such programs look like? PMID:26331240

  2. Precision agricultural systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture is a new farming practice that has been developing since late 1980s. It has been variously referred to as precision farming, prescription farming, site-specific crop management, to name but a few. There are numerous definitions for precision agriculture, but the central concept...

  3. Geometric accuracy of Landsat-4 and Landsat-5 Thematic Mapper images.

    USGS Publications Warehouse

    Borgeson, W.T.; Batson, R.M.; Kieffer, H.H.

    1985-01-01

    The geometric accuracy of the Landsat Thematic Mappers was assessed by a linear least-square comparison of the positions of conspicuous ground features in digital images with their geographic locations as determined from 1:24 000-scale maps. For a Landsat-5 image, the single-dimension standard deviations of the standard digital product, and of this image with additional linear corrections, are 11.2 and 10.3 m, respectively (0.4 pixel). An F-test showed that skew and affine distortion corrections are not significant. At this level of accuracy, the granularity of the digital image and the probable inaccuracy of the 1:24 000 maps began to affect the precision of the comparison. The tested image, even with a moderate accuracy loss in the digital-to-graphic conversion, meets National Horizontal Map Accuracy standards for scales of 1:100 000 and smaller. Two Landsat-4 images, obtained with the Multispectral Scanner on and off, and processed by an interim software system, contain significant skew and affine distortions. -Authors
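
    A minimal sketch of the comparison described above (with invented control points): an affine transform is fitted by least squares between image pixel coordinates of conspicuous features and their map coordinates, and the single-dimension standard deviation of the residuals gives the geometric accuracy:

        import numpy as np

        rng = np.random.default_rng(4)
        img = rng.uniform(0, 6000, (20, 2))                   # (col, row) of 20 features
        true_affine = np.array([[28.5, 0.2], [-0.3, -28.5]])  # ~28.5 m pixels, slight skew
        offset = np.array([350000.0, 4.42e6])
        map_xy = img @ true_affine.T + offset + rng.normal(0, 11.0, (20, 2))  # ~11 m scatter

        # Fit x = a*col + b*row + c and y = d*col + e*row + f by least squares.
        A = np.column_stack([img, np.ones(len(img))])
        coef, *_ = np.linalg.lstsq(A, map_xy, rcond=None)

        residuals = map_xy - A @ coef
        sd = residuals.std(axis=0, ddof=3)                    # 3 parameters per coordinate
        print("per-axis standard deviation (m):", np.round(sd, 2))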

  4. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.

  5. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  6. Accuracy of Intraoral Digital Impressions for Whole Upper Jaws, Including Full Dentitions and Palatal Soft Tissues

    PubMed Central

    Gan, Ning; Xiong, Yaoyang; Jiao, Ting

    2016-01-01

    Intraoral digital impressions have been stated to meet the clinical requirements for some tooth-supported restorations, though less evidence has been reported for larger scanning ranges. The aim of this study was to compare the accuracy (trueness and precision) of intraoral digital impressions for whole upper jaws, including the full dentitions and palatal soft tissues, as well as to determine the effect of different palatal vault heights or arch widths on the accuracy of intraoral digital impressions. Thirty-two volunteers were divided into three groups according to palatal vault height or arch width. Each volunteer received three scans with the TRIOS intraoral scanner and one conventional impression of the whole upper jaw. Three-dimensional (3D) images digitized from the conventional gypsum casts by a laboratory scanner were chosen as the reference models. All datasets were imported into a specific software program for 3D analysis by a "best fit alignment" and "3D compare" process. Color-coded deviation maps provided qualitative visualization of the deviations. For the digital impressions of the palatal soft tissues, trueness was (130.54±33.95) μm and precision was (55.26±11.21) μm. For the digital impressions of the upper full dentitions, trueness was (80.01±17.78) μm and precision was (59.52±11.29) μm. Larger deviations were found between intraoral digital impressions and conventional impressions in the areas of palatal soft tissues than in the areas of full dentitions (p<0.001). Precision of digital impressions for palatal soft tissues was slightly better than that for full dentitions (p = 0.049). There was no significant effect of palatal vault height on the accuracy of digital impressions for palatal soft tissues (p>0.05), but arch width was found to have a significant effect on the precision of intraoral digital impressions for full dentitions (p = 0.016). A linear correlation was found between arch width and precision of digital impressions for whole upper jaws (r = 0.326, p = 0
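
    The "best fit alignment" step above can be illustrated with a standard rigid least-squares (Kabsch) alignment followed by per-point deviations, which is what a colour-coded deviation map summarizes; the point clouds below are synthetic, and this sketch does not claim to reproduce the specific software used in the study:

        import numpy as np

        def best_fit_align(moving, fixed):
            """Rigid (rotation + translation) least-squares alignment, Kabsch algorithm."""
            mc, fc = moving.mean(axis=0), fixed.mean(axis=0)
            H = (moving - mc).T @ (fixed - fc)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = fc - R @ mc
            return moving @ R.T + t

        # Hypothetical corresponding surface points (mm) from a reference cast scan and a
        # digital impression; in practice these come from dense mesh sampling.
        rng = np.random.default_rng(8)
        reference = rng.uniform(-30, 30, (500, 3))
        angle = np.deg2rad(5.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0,            0.0,           1.0]])
        impression = reference @ R_true.T + np.array([2.0, -1.0, 0.5])
        impression += rng.normal(0.0, 0.06, impression.shape)      # ~60 um scan noise

        aligned = best_fit_align(impression, reference)
        deviations = np.linalg.norm(aligned - reference, axis=1)    # basis of a deviation map
        print(f"mean deviation {deviations.mean()*1000:.1f} um, "
              f"95th percentile {np.percentile(deviations, 95)*1000:.1f} um")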

  7. Design for H type co-planar precision stage based on closed air bearing guideway with vacuum attraction force

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Shi, Zhaoyao; Lin, Jiachun; Zhang, Hua

    2011-12-01

    The accuracy of a traditional two-dimensional precision stage is limited not only by the accuracy of each guideway but also by the configuration of the stage. It is not easy to calculate and compensate the total accuracy of the stage because of the complicated influence of the different positions of the slides. An air-bearing guideway with vacuum attraction force was designed with a closed slide structure to enhance the stiffness and avoid the deformation caused by the weight of the slide and workpieces. An H-type two-dimensional ultra-precision stage with a co-planar structure was developed based on the air-bearing guideways to avoid the mutual influence between the axes. Driven by linear motors, the position of the workpiece is encoded by length scales with a resolution of 50 nm and a thermal expansion of 0.6 μm/m/°C (0 °C to 30 °C). The travel span of the stage is 320 x 320 mm, over which each axis has a positioning accuracy of ±1 μm, a repeatability of ±0.3 μm and a straightness of ±0.5 μm. The stage can be applied in precision manufacturing and measurement.

  8. Optimizing the geometrical accuracy of curvilinear meshes

    NASA Astrophysics Data System (ADS)

    Toulorge, Thomas; Lambrechts, Jonathan; Remacle, Jean-François

    2016-04-01

    This paper presents a method to generate valid high order meshes with optimized geometrical accuracy. The high order meshing procedure starts with a linear mesh, that is subsequently curved without taking care of the validity of the high order elements. An optimization procedure is then used to both untangle invalid elements and optimize the geometrical accuracy of the mesh. Standard measures of the distance between curves are considered to evaluate the geometrical accuracy in planar two-dimensional meshes, but they prove computationally too costly for optimization purposes. A fast estimate of the geometrical accuracy, based on Taylor expansions of the curves, is introduced. An unconstrained optimization procedure based on this estimate is shown to yield significant improvements in the geometrical accuracy of high order meshes, as measured by the standard Hausdorff distance between the geometrical model and the mesh. Several examples illustrate the beneficial impact of this method on CFD solutions, with a particular role of the enhanced mesh boundary smoothness.
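
    A minimal sketch of the discrete, symmetric Hausdorff distance used above as the reference measure of geometrical accuracy, evaluated between a densely sampled model curve and a coarsely faceted approximation (both curves are made up):

        import numpy as np

        def hausdorff(P, Q):
            """Symmetric discrete Hausdorff distance between two point sets of shape (n, 2)."""
            d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # pairwise distances
            return max(d.min(axis=1).max(), d.min(axis=0).max())

        # Geometric model: a quarter circle; "mesh boundary": a coarse polygonal approximation.
        t = np.linspace(0.0, np.pi / 2, 500)
        model = np.column_stack([np.cos(t), np.sin(t)])

        s = np.linspace(0.0, np.pi / 2, 7)               # few vertices -> visible facet error
        mesh = np.column_stack([np.cos(s), np.sin(s)])
        # densify the mesh edges so the discrete distance approximates the continuous one
        dense = np.concatenate([np.linspace(mesh[i], mesh[i + 1], 50)
                                for i in range(len(mesh) - 1)])

        print(f"Hausdorff distance, model vs. mesh boundary: {hausdorff(model, dense):.4e}")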

  9. The Seasat Precision Orbit Determination Experiment

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Born, G. H.

    1980-01-01

    The objectives and conclusions reached during the Seasat Precision Orbit Determination Experiment are discussed. It is noted that the activities of the experiment team included extensive software calibration and validation and an intense effort to validate and improve the dynamic models which describe the satellite's motion. Significant improvement in the gravitational model was obtained during the experiment, and it is pointed out that the current accuracy of the Seasat altitude ephemeris is 1.5 m rms. An altitude ephemeris for the Seasat spacecraft with an accuracy of 0.5 m rms is seen as possible with further improvements in the geopotential, atmospheric drag, and solar radiation pressure models. It is concluded that since altimetry missions with a 2-cm precision altimeter are contemplated, the precision orbit determination effort initiated under the Seasat Project must be continued and expanded.

  10. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with a designated spectrum, intensity, and/or geometric radiation pattern. The function of a precision performance lamp is to go beyond these parameters to precise repeatability of performance. All lamps are not equal: there is a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was given in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of a precision performance lamp includes (must include) the factors of a precision lamp, but what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt) and life are considered, not individually but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids in system product design and manufacturing to build and market more market-acceptable products.

  11. Precision optical metrology without lasers

    NASA Astrophysics Data System (ADS)

    Bergmann, Ralf B.; Burke, Jan; Falldorf, Claas

    2015-07-01

    Optical metrology is a key technique when it comes to precise and fast measurement with a resolution down to the micrometer or even nanometer regime. The choice of a particular optical metrology technique and the quality of the results depend on sample parameters such as size, geometry and surface roughness, as well as user requirements such as resolution, measurement time and robustness. Interferometry-based techniques are well known for their low measurement uncertainty in the nm range, but usually require careful isolation against vibration and a laser source that often needs shielding for reasons of eye safety. In this paper, we concentrate on high-precision optical metrology without lasers by using the gradient-based measurement technique of deflectometry and the finite-difference-based technique of shear interferometry. Careful calibration of deflectometry systems allows one to investigate virtually all kinds of reflecting surfaces, including aspheres or free-form surfaces, with measurement uncertainties below the μm level. Computational Shear Interferometry (CoSI) allows us to combine interferometric accuracy with the possibility of using cheap and eye-safe low-brilliance light sources such as fiber-coupled LEDs or even liquid crystal displays. We use CoSI e.g. for quantitative phase contrast imaging in microscopy. We highlight the advantages of both methods, discuss their transfer functions and present results on the precision of both techniques.

  12. Linear Collisions

    ERIC Educational Resources Information Center

    Walkiewicz, T. A.; Newby, N. D., Jr.

    1972-01-01

    A discussion of linear collisions between two or three objects is related to a junior-level course in analytical mechanics. The theoretical discussion uses a geometrical approach that treats elastic and inelastic collisions from a unified point of view. Experiments with a linear air track are described. (Author/TS)

  13. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site-specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for it is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variable ra...

  14. Advanced irrigation engineering: Precision and Precise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Irrigation advances in precision irrigation (PI) or site specific irrigation (SSI) have been considerable in research; however commercialization lags. A primary necessity for PI/SSI is variability in soil texture that affects soil water holding capacity and crop yield. Basically, SSI/PI uses variabl...

  15. Precision robotic control of agricultural vehicles on realistic farm trajectories

    NASA Astrophysics Data System (ADS)

    Bell, Thomas

    High-precision "autofarming", or precise agricultural vehicle guidance, is rapidly becoming a reality thanks to increasing computing power and carrier-phase differential GPS ("CPDGPS") position and attitude sensors. Realistic farm trajectories will include not only rows but also arcs created by smoothly joining rows or path-planning algorithms, spirals for farming center-pivot irrigated fields, and curved trajectories dictated by nonlinear field boundaries. In addition, fields are often sloped, and accurate control may be required either on linear trajectories or on curved contours. A three-dimensional vehicle model which adapts to changing vehicle and ground conditions was created, and a low-order model for controller synthesis was extracted based on nominal conditions. The model was extended to include a towed implement. Experimentation showed that an extended Kalman filter could identify the vehicle's state in real-time. An approximation was derived for the additional positional uncertainty introduced by the noisy "lever-arm correction" necessary to translate the GPS position measurement at the roof antenna to the vehicle's control point on the ground; this approximation was then used to support the assertion that attitude measurement accuracy was as important to control point position measurement as the original position measurement accuracy at the GPS antenna. The low-order vehicle control model was transformed to polar coordinates for control on arcs and spirals. Experimental data showed that the tractor's control, point tracked an arc to within a -0.3 cm mean and a 3.4 cm standard deviation and a spiral to within a -0.2 cm mean and a 5.3 cm standard deviation. Cubic splines were used to describe curve trajectories, and a general expression for the time-rate-of-change of curve-related parameters was derived. Four vehicle control algorithms were derived for curve tracking: linear local-error control based on linearizing the vehicle about the curve's radius of

  16. The stability of mechanical calibration for a kV cone beam computed tomography system integrated with linear accelerator

    SciTech Connect

    Sharpe, Michael B.; Moseley, Douglas J.; Purdie, Thomas G.

    2006-01-15

    The geometric accuracy and precision of an image-guided treatment system were assessed. Image guidance is performed using an x-ray volume imaging (XVI) system integrated with a linear accelerator and treatment planning system. Using an amorphous silicon detector and x-ray tube, volumetric computed tomography images are reconstructed from kilovoltage radiographs by filtered backprojection. Image fusion and assessment of geometric targeting are supported by the treatment planning system. To assess the limiting accuracy and precision of image-guided treatment delivery, a rigid spherical target embedded in an opaque phantom was subjected to 21 treatment sessions over a three-month period. For each session, a volumetric data set was acquired and loaded directly into an active treatment planning session. Image fusion was used to ascertain the couch correction required to position the target at the prescribed iso-center. Corrections were validated independently using megavoltage electronic portal imaging to record the target position with respect to symmetric treatment beam apertures. An initial calibration cycle followed by repeated image-guidance sessions demonstrated the XVI system could be used to relocate an unambiguous object to within less than 1 mm of the prescribed location. Treatment could then proceed within the mechanical accuracy and precision of the delivery system. The calibration procedure maintained excellent spatial resolution and delivery precision over the duration of this study, while the linear accelerator was in routine clinical use. Based on these results, the mechanical accuracy and precision of the system are ideal for supporting high-precision localization and treatment of soft-tissue targets.

  17. The stability of mechanical calibration for a kV cone beam computed tomography system integrated with linear accelerator.

    PubMed

    Sharpe, Michael B; Moseley, Douglas J; Purdie, Thomas G; Islam, Mohammad; Siewerdsen, Jeffrey H; Jaffray, David A

    2006-01-01

    The geometric accuracy and precision of an image-guided treatment system were assessed. Image guidance is performed using an x-ray volume imaging (XVI) system integrated with a linear accelerator and treatment planning system. Using an amorphous silicon detector and x-ray tube, volumetric computed tomography images are reconstructed from kilovoltage radiographs by filtered backprojection. Image fusion and assessment of geometric targeting are supported by the treatment planning system. To assess the limiting accuracy and precision of image-guided treatment delivery, a rigid spherical target embedded in an opaque phantom was subjected to 21 treatment sessions over a three-month period. For each session, a volumetric data set was acquired and loaded directly into an active treatment planning session. Image fusion was used to ascertain the couch correction required to position the target at the prescribed iso-center. Corrections were validated independently using megavoltage electronic portal imaging to record the target position with respect to symmetric treatment beam apertures. An initial calibration cycle followed by repeated image-guidance sessions demonstrated the XVI system could be used to relocate an unambiguous object to within less than 1 mm of the prescribed location. Treatment could then proceed within the mechanical accuracy and precision of the delivery system. The calibration procedure maintained excellent spatial resolution and delivery precision over the duration of this study, while the linear accelerator was in routine clinical use. Based on these results, the mechanical accuracy and precision of the system are ideal for supporting high-precision localization and treatment of soft-tissue targets. PMID:16485420

  18. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  19. Single-frequency precise point positioning: an analytical approach

    NASA Astrophysics Data System (ADS)

    Sterle, Oskar; Stopar, Bojan; Pavlovčič Prešeren, Polona

    2015-08-01

    An analytical approach to single-frequency precise point positioning (PPP) is discussed in this paper. To obtain the highest-precision results, all biases must be eliminated or modelled to the centimetre level. The use of the GRAPHIC ionosphere-free linear combination, which is based on single-frequency phase and code observations, eliminates the ionosphere bias; however, a rank-deficient Gauss-Markov model is obtained. We explicitly determine the rank deficiency of the Gauss-Markov model as the number of ambiguity clusters, each of them defined as a set of ambiguities overlapping in time. On the basis of the S-transformation we prove that single-frequency PPP is an unbiased estimator for station coordinates and troposphere parameters, while it is a biased estimator for ambiguities and receiver-clock error parameters. Additionally, we describe the estimable parameters in each ambiguity cluster as the differences between the ambiguity parameters and the sum of the receiver-clock parameters with one of the ambiguities. We also show that any other particular solution obtainable through the S-transformation is reached only when the common least-squares estimation is applied in a single step. The recursive least-squares estimation with parameter pre-elimination determines only the vector of unknowns in a form that can be transformed through the S-transformation, whereas the same does not hold for the cofactor matrix of unknowns. For a case study, we apply our method to GPS data from 19 permanent stations (14 IGS and 5 EPN) in Europe, for 89 consecutive days at the beginning of 2013. The static case study revealed the precision of daily coordinates to be 7.6, 11.7 and 19.6 mm for the three coordinate components, respectively. The accuracies of the same components were determined as 6.9, 13.5 and 31.4 mm, respectively, and were calculated using the Helmert transformation of weighted-mean daily single-frequency PPP and IGb08 coordinates. The estimated convergence times were relatively diverse, ranging upwards from 1.75 h (CAGL
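
    A minimal sketch of the GRAPHIC combination underlying the above (synthetic observables, simplified to range, ionosphere and ambiguity terms only): averaging single-frequency code and carrier phase cancels the first-order ionospheric delay, which enters the two observables with opposite signs, at the cost of retaining half the ambiguity:

        import numpy as np

        # Synthetic single-frequency observables for one satellite arc (metres).
        rng = np.random.default_rng(5)
        n = 120
        rho = 2.2e7 + 50.0 * np.arange(n)          # geometric range + clocks + troposphere
        iono = 4.0 + 0.01 * np.arange(n)           # slant ionospheric delay
        lam, N = 0.1903, 1.234e6                   # L1 wavelength (m), float ambiguity (cycles)

        code = rho + iono + rng.normal(0, 0.3, n)                # pseudorange: +I, noisy
        phase = rho - iono + lam * N + rng.normal(0, 0.003, n)   # carrier phase: -I, precise

        # GRAPHIC: 0.5*(code + phase) removes the ionosphere, keeps half the ambiguity,
        # and carries roughly half the code noise.
        graphic = 0.5 * (code + phase)
        print("residual vs (rho + lam*N/2), std in metres:",
              np.std(graphic - (rho + lam * N / 2)).round(3))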

  20. Multivariate standardisation for non-linear calibration range in the chemiluminescence determination of chromium.

    PubMed

    Tortajada-Genaro, L A; Campíns-Falcó, P

    2007-05-15

    Multivariate standardisation is proposed for the successful chemiluminescence determination of chromium based on the luminol-hydrogen peroxide reaction. Over an extended concentration range, a non-linear calibration model is needed. The instrumental situations studied were different detection cells, instruments, assemblies, times and their possible combinations. Chemiluminescence kinetic registers were transferred using the piecewise direct standardisation (PDS) method. The transfer parameters were optimised on the basis of the prediction residual error. Non-linear principal component regression (NL-PCR) and non-linear partial least squares regression (NL-PLS) were chosen for modelling the signal-concentration relationship of the transferred registers. Good accuracy and precision were obtained for water samples. The chromium concentrations were statistically in agreement with reference method values and with recovery studies. It is therefore possible to transfer chemiluminescence curves without losing predictive ability, even in the presence of non-linear behaviour. PMID:19071716
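
    A minimal sketch of the piecewise direct standardisation (PDS) transfer used above, with synthetic kinetic registers: each channel of the master-situation register is regressed on a small window of channels of the second situation, yielding a banded transfer matrix that maps new registers into the master domain (all shapes, concentrations and the distortion between situations are invented):

        import numpy as np

        def pds_transfer_matrix(master, slave, window=3):
            """Banded transfer matrix F such that slave @ F approximates master (PDS idea)."""
            n_chan = master.shape[1]
            F = np.zeros((n_chan, n_chan))
            for i in range(n_chan):
                lo, hi = max(0, i - window), min(n_chan, i + window + 1)
                b, *_ = np.linalg.lstsq(slave[:, lo:hi], master[:, i], rcond=None)
                F[lo:hi, i] = b
            return F

        # Synthetic kinetic registers of transfer standards measured under both
        # instrumental situations (rows: standards, columns: time channels).
        rng = np.random.default_rng(9)
        t = np.linspace(0.0, 10.0, 60)
        conc = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 8.0])
        shape_a = np.exp(-((t - 4.0) / 1.5) ** 2)            # "master" situation
        shape_b = 0.8 * np.exp(-((t - 4.4) / 1.6) ** 2)      # shifted, attenuated situation
        master = conc[:, None] * shape_a + rng.normal(0, 0.01, (conc.size, t.size))
        slave = conc[:, None] * shape_b + rng.normal(0, 0.01, (conc.size, t.size))

        F = pds_transfer_matrix(master, slave)
        new_register = 3.0 * shape_b + rng.normal(0, 0.01, t.size)  # sample from situation B
        transferred = new_register @ F                              # mapped to master domain
        print("max abs deviation from master-domain register:",
              float(np.abs(transferred - 3.0 * shape_a).max()))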

  1. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  2. Precision Hyperfine Structure of the 2 ^3P State of ^3He with External Magnetic Fields

    NASA Astrophysics Data System (ADS)

    Wu, Qixue; Drake, G. W. F.

    2007-06-01

    The theory of the Zeeman effect can be used to extrapolate precise measurements of the fine structure or the hyperfine structure to zero field strength. In the present work, the hyperfine structure of the 2 ^3P state of ^3He in external magnetic fields is precisely calculated. The field values for 32 crossings and five anticrossings of the magnetic sublevels are theoretically predicted for magnetic field strengths up to 1 Tesla. The results are compared with experimental work. We include the linear terms, the diamagnetic terms, and the relativistic correction terms in the Zeeman Hamiltonian. All related matrix elements are calculated with high accuracy by the use of double-basis-set Hylleraas-type variational wave functions [1,2]. [1] Z.-C. Yan and G.W.F. Drake, Phys. Rev. A 50, R1980 (1994). [2] Q. Wu and G.W.F. Drake, J. Phys. B 40, 393 (2007).

  3. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...

  4. LINEAR ACCELERATOR

    DOEpatents

    Christofilos, N.C.; Polk, I.J.

    1959-02-17

    Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.

  5. Improving the precision matrix for precision cosmology

    NASA Astrophysics Data System (ADS)

    Paz, Dante J.; Sánchez, Ariel G.

    2015-12-01

    The estimation of cosmological constraints from observations of the large-scale structure of the Universe, such as the power spectrum or the correlation function, requires the knowledge of the inverse of the associated covariance matrix, namely the precision matrix, Ψ . In most analyses, Ψ is estimated from a limited set of mock catalogues. Depending on how many mocks are used, this estimation has an associated error which must be propagated into the final cosmological constraints. For future surveys such as Euclid and Dark Energy Spectroscopic Instrument, the control of this additional uncertainty requires a prohibitively large number of mock catalogues. In this work, we test a novel technique for the estimation of the precision matrix, the covariance tapering method, in the context of baryon acoustic oscillation measurements. Even though this technique was originally devised as a way to speed up maximum likelihood estimations, our results show that it also reduces the impact of noisy precision matrix estimates on the derived confidence intervals, without introducing biases on the target parameters. The application of this technique can help future surveys to reach their true constraining power using a significantly smaller number of mock catalogues.
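
    A minimal sketch, with synthetic Gaussian mocks, of the two estimates discussed above: the precision matrix obtained by inverting the sample covariance from a limited number of mocks, and a tapered estimate in which the sample covariance is multiplied element-wise by a compactly supported taper before inversion (the simple triangular taper and all numbers are assumptions for illustration):

        import numpy as np

        rng = np.random.default_rng(6)
        nbins, nmocks = 30, 100

        # "True" covariance: short-range correlations between neighbouring data bins.
        idx = np.arange(nbins)
        true_cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 2.0)

        # Draw mock data vectors and form the standard sample covariance.
        mocks = rng.multivariate_normal(np.zeros(nbins), true_cov, size=nmocks)
        sample_cov = np.cov(mocks, rowvar=False)

        # Covariance tapering: damp entries beyond a chosen separation before inverting.
        taper_scale = 6.0
        taper = np.clip(1.0 - np.abs(idx[:, None] - idx[None, :]) / taper_scale, 0.0, 1.0)
        tapered_cov = sample_cov * taper

        true_prec = np.linalg.inv(true_cov)
        for name, cov in [("sample", sample_cov), ("tapered", tapered_cov)]:
            err = np.abs(np.linalg.inv(cov) - true_prec).max()
            print(f"{name:8s} precision matrix, max abs error: {err:.3f}")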

  6. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  7. High precision measurement system based on coplanar XY-stage

    NASA Astrophysics Data System (ADS)

    Fan, Kuang-Chao; Miao, Jin-Wei; Gong, Wei; Zhang, You-Liang; Cheng, Fang

    2011-12-01

    A coplanar XY-stage, together with a high-precision measurement system, is presented in this paper. The proposed coplanar XY-stage fully conforms to the Abbe principle. A symmetric structural design is used to eliminate structural deformation due to force and temperature changes. To build the high-precision measurement system, a linear diffraction grating interferometer (LDGI) is employed as the position feedback sensor, with a resolution of 1 nm after waveform interpolation, and an ultrasonic motor HR4 is used to generate both the long-stroke motion and the nano-positioning on the same stage. Three modes of the HR4 are used for positioning control: the AC mode in continuous motion control for the long stroke; the gate mode to drive the motor at low velocity for the short stroke; and the DC mode, in which the motor works as a piezo actuator, enabling accurate positioning within a few nanometers. The stage calibration was carried out by comparing the readings of the LDGI with a Renishaw laser interferometer and repeated 5 times. Experimental results show that the XY-stage achieves a positioning accuracy of better than 20 nm after compensation of systematic errors, with a standard deviation within 20 nm for travels up to 20 mm.

  8. Electrostatic microactuators for precise positioning of neural microelectrodes.

    PubMed

    Muthuswamy, Jit; Okandan, Murat; Jain, Tilak; Gilletti, Aaron

    2005-10-01

    Microelectrode arrays used for monitoring single and multineuronal action potentials often fail to record from the same population of neurons over a period of time likely due to micromotion of neurons away from the microelectrode, gliosis around the recording site and also brain movement due to behavior. We report here novel electrostatic microactuated microelectrodes that will enable precise repositioning of the microelectrodes within the brain tissue. Electrostatic comb-drive microactuators and associated microelectrodes are fabricated using the SUMMiT V (Sandia's Ultraplanar Multilevel MEMS Technology) process, a five-layer polysilicon micromachining technology of the Sandia National labs, NM. The microfabricated microactuators enable precise bidirectional positioning of the microelectrodes in the brain with accuracy in the order of 1 microm. The microactuators allow for a linear translation of the microelectrodes of up to 5 mm in either direction making it suitable for positioning microelectrodes in deep structures of a rodent brain. The overall translation was reduced to approximately 2 mm after insulation of the microelectrodes with epoxy for monitoring multiunit activity. The microactuators are capable of driving the microelectrodes in the brain tissue with forces in the order of several micro-Newtons. Single unit recordings were obtained from the somatosensory cortex of adult rats in acute experiments demonstrating the feasibility of this technology. Further optimization of the insulation, packaging and interconnect issues will be necessary before this technology can be validated in long-term experiments. PMID:16235660

  9. Electrostatic Microactuators for Precise Positioning of Neural Microelectrodes

    PubMed Central

    Muthuswamy, Jit; Okandan, Murat; Jain, Tilak; Gilletti, Aaron

    2006-01-01

    Microelectrode arrays used for monitoring single and multineuronal action potentials often fail to record from the same population of neurons over a period of time likely due to micromotion of neurons away from the microelectrode, gliosis around the recording site and also brain movement due to behavior. We report here novel electrostatic microactuated microelectrodes that will enable precise repositioning of the microelectrodes within the brain tissue. Electrostatic comb-drive microactuators and associated microelectrodes are fabricated using the SUMMiT V™ (Sandia's Ultraplanar Multilevel MEMS Technology) process, a five-layer polysilicon micromachining technology of the Sandia National labs, NM. The microfabricated microactuators enable precise bidirectional positioning of the microelectrodes in the brain with accuracy in the order of 1 μm. The microactuators allow for a linear translation of the microelectrodes of up to 5 mm in either direction making it suitable for positioning microelectrodes in deep structures of a rodent brain. The overall translation was reduced to approximately 2 mm after insulation of the microelectrodes with epoxy for monitoring multiunit activity. The microactuators are capable of driving the microelectrodes in the brain tissue with forces in the order of several micro-Newtons. Single unit recordings were obtained from the somatosensory cortex of adult rats in acute experiments demonstrating the feasibility of this technology. Further optimization of the insulation, packaging and interconnect issues will be necessary before this technology can be validated in long-term experiments. PMID:16235660

  10. Prints for precision engineering research lathe (Engineering Materials)

    SciTech Connect

    Not Available

    1982-12-01

    The precision engineering research lathe (PERL) is a small two-axis, ultra-high-precision turning machine used for turning very small contoured parts. Housed in a laminar-flow enclosure for temperature control, called a clean air envelope, PERL is maintained at a constant 68 degrees F (plus or minus 1 degree). The size of the lathe is minimized to reduce sensitivity to temperature variations. This, combined with internal water cooling of the spindle motor, the only major heat source on the machine, permits the use of air-shower temperature control. (This approach is a departure from previous designs for larger machines where liquid shower systems are used.) Major design features include the use of a T-configuration, hydrostatic oil slides, capstan slide drives, air-bearing spindles, and laser interferometer position feedback. The following features are particularly noteworthy: (1) to obtain the required accuracy and friction characteristics, the two linear slides are supported by 10-cm-travel hydrostatic bearings developed at LLNL; (2) to minimize backlash and friction, capstan drives are used to provide the slide motions; and (3) to obtain the best surface finish possible, asynchronous (nonrepeatable) spindle motion is minimized by driving the spindle directly with a brushless dc torque motor. PERL operates in single-axis mode. Using facing cuts on copper with a diamond tool, surface finishes of 7.5 nm peak-to-valley (1.5 nm rms) have been achieved.

  11. All-order approach to high-precision atomic calculation

    NASA Astrophysics Data System (ADS)

    Iskrenova-Tchoukova, Eugeniya

    High-precision atomic calculations combined with experiments of matching accuracy provide an excellent opportunity to test our understanding of atomic structure and properties as well as the many-body atomic theories. The relativistic all-order method, which is a linearized version of the coupled-cluster singles-doubles method, has proven to yield high precision results for a variety of atomic properties. In this thesis, we study the atomic properties of neutral atoms and ions by means of the relativistic all-order method. The lifetimes and ground state static polarizabilities of a singly ionized barium atom are studied in comparison with the isoelectronic neutral cesium atom and with a singly ionized calcium atom. The lifetimes of a number of excited states in atomic potassium, rubidium, and francium are theoretically calculated and compared with the available experimental data. The magnetic dipole hyperfine constant of the 9S1/2 state in 210Fr is calculated and the result is combined with the experimental one to extract the value of the 210Fr nuclear magnetic moment. Another part of the thesis work focuses on the development and implementation of an extension of the currently used all-order singles-doubles (SD) method to include all valence triple excitations in an iterative way, all-order SD+vT approximation. Some of the ideas and results presented in Chapters 4, 5, and 6 have been published and are subject to copyright laws. These publications are cited accordingly.

  12. Expressing precision and bias in calorimetry

    SciTech Connect

    Hauck, Danielle K; Croft, Stephen; Bracken, David S

    2010-01-01

    The calibration and calibration verification of a nuclear calorimeter represents a substantial investment of time, in part because a single calorimeter measurement takes of the order of 2 to 24 h to complete. The time to complete a measurement generally increases with the size of the calorimeter measurement well. It is therefore important to plan the sequence of measurements rather carefully so as to cover the dynamic range and achieve the required accuracy within a reasonable time frame. This work discusses how calibrations and their verification have been done in the past and what we consider to be good general practice in this regard. A proposed approach to calibration and calibration verification is presented which, in the final analysis, makes use of all the available data - both calibration and verification collectively - in order to obtain the best (in a best-fit sense) possible calibration. The combination of sample variance and percent recovery is traditionally taken as sufficient to capture the random (precision) and systematic (bias) contributions to the uncertainty in a calorimetric assay. These terms have been defined as well as formulated for a basic calibration. It has been traditional to assume that sensitivity is a linear function of power. However, the available computer power and statistical packages should be utilized to fit the response function as accurately as possible using whatever functions are deemed most suitable. Allowing for more flexibility in the response function fit will enable the calibration to be updated according to the results from regular validation measurements throughout the year. In a companion paper to be published elsewhere we plan to discuss alternative fitting functions.
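
    A minimal sketch of the kind of flexible response-function fit advocated above: calibration and verification points are pooled in a weighted least-squares fit of a non-linear (here quadratic) response curve, and a percent recovery is evaluated against the fitted curve. The power and response values, uncertainties, and polynomial degree are illustrative assumptions, not data or choices from the paper.

    ```python
    import numpy as np

    # Illustrative pooled calibration + verification items (power in W, made-up responses)
    power    = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 3.0, 6.0, 12.0])
    response = np.array([0.52, 1.03, 2.01, 4.05, 8.20, 16.60, 3.05, 6.10, 12.30])
    sigma    = np.full_like(response, 0.05)          # assumed per-point uncertainties

    # Weighted least-squares quadratic fit instead of a purely linear sensitivity;
    # np.polyfit takes weights as 1/sigma and can return the coefficient covariance
    coeffs, cov = np.polyfit(power, response, deg=2, w=1.0 / sigma, cov=True)
    fit = np.poly1d(coeffs)

    # Percent recovery of a verification item against the fitted response function
    measured, true_power = 6.10, 6.0
    recovery = 100.0 * measured / fit(true_power)
    ```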

  13. Developing and implementing a high precision setup system

    NASA Astrophysics Data System (ADS)

    Peng, Lee-Cheng

    High-precision radiotherapy (HPRT) was first implemented in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed alongside a growing interest in sophisticated treatment with a tight margin and high-dose gradient. This dissertation establishes the complete management for HPRT in the process of frameless SRT, including image-guided localization, immobilization, and dose evaluation. The ideal precise positioning system should allow for ease of relocation, real-time patient movement assessment, high accuracy, and no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the AlignRT3C 3D surface camera system (ART, VisionRT), which combines 3D surface images and uses a real-time tracking technique, was developed to ensure accurate positioning in the first place. The uncertainties of the current optical tracking system, which causes patient discomfort due to the additional bite plates used in the dental impression technique and external markers, are identified. The accuracy and feasibility of ART are validated by comparisons with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and multiple IGSPSs is the most important factor in ensuring system performance in daily use. Currently, systematic errors arising from the variety of phantoms and long measurement times caused by switching phantoms were discovered. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness. A reasonable action level has been established by considering dosimetric relevance and clinic flow. As for intricate treatments, the effect of dose deviations caused by setup errors on tumor coverage and toxicity to OARs remains uncertain. The lack of adequate dosimetric simulations based on the true treatment coordinates from

  14. A New Linearization Method of Unbalanced Electrical Distribution Networks

    SciTech Connect

    Liu, Guodong; Xu, Yan; Ceylan, Oguzhan; Tomsovic, Kevin

    2014-01-01

    With increasing penetration of distributed generation in the distribution networks (DN), the secure and optimal operation of DN has become an important concern. As DN control and operation strategies are mostly based on the linearized sensitivity coefficients between controlled variables (e.g., node voltages, line currents, power loss) and control variables (e.g., power injections, transformer tap positions), efficient and precise calculation of these sensitivity coefficients, i.e. linearization of DN, is of fundamental importance. In this paper, the node voltages and power loss are derived as functions of the nodal power injections and transformer tap-changer positions, and the resulting equations are solved by a Gauss-Seidel method. Compared to other approaches presented in the literature, the proposed method takes into account different load characteristics (e.g., constant PQ, constant impedance, constant current and any combination of the above) of a generic multi-phase unbalanced DN and improves the accuracy of linearization. Numerical simulations on both IEEE 13 and 34 nodes test feeders show the efficiency and accuracy of the proposed method.

  15. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  16. Superconducting linear actuator

    NASA Technical Reports Server (NTRS)

    Johnson, Bruce; Hockney, Richard

    1993-01-01

    Special actuators are needed to control the orientation of large structures in space-based precision pointing systems. Electromagnetic actuators that presently exist are too large in size and their bandwidth is too low. Hydraulic fluid actuation also presents problems for many space-based applications. Hydraulic oil can escape in space and contaminate the environment around the spacecraft. A research study was performed that selected an electrically-powered linear actuator that can be used to control the orientation of a large pointed structure. This research surveyed available products, analyzed the capabilities of conventional linear actuators, and designed a first-cut candidate superconducting linear actuator. The study first examined theoretical capabilities of electrical actuators and determined their problems with respect to the application and then determined if any presently available actuators or any modifications to available actuator designs would meet the required performance. The best actuator was then selected based on available design, modified design, or new design for this application. The last task was to proceed with a conceptual design. No commercially-available linear actuator or modification capable of meeting the specifications was found. A conventional moving-coil dc linear actuator would meet the specification, but the back-iron for this actuator would weigh approximately 12,000 lbs. A superconducting field coil, however, eliminates the need for back iron, resulting in an actuator weight of approximately 1000 lbs.

  17. The Mira-Titan Universe: Precision Predictions for Dark Energy Surveys

    NASA Astrophysics Data System (ADS)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl; Bergner, Steven; Habib, Salman; Higdon, David; Pope, Adrian; Biswas, Rahul; Finkel, Hal; Frontiere, Nicholas; Bhattacharya, Suman

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
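
    The emulation step itself can be illustrated with a toy Gaussian-process surrogate trained on a small design of runs. The two-parameter "simulator", the parameter ranges, and the kernel below are placeholders standing in for the N-body pipeline and are not the Mira-Titan emulator design.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(2)

    # 26 design points in an assumed 2D parameter space, e.g. (Omega_m, sigma_8)
    X = rng.uniform([0.25, 0.7], [0.40, 0.9], size=(26, 2))
    # Toy observable standing in for a simulated summary statistic
    y = np.log(X[:, 0]) + 2.0 * X[:, 1] + 0.01 * rng.standard_normal(26)

    # Train the surrogate (emulator) and predict at a new cosmology with an error estimate
    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.1, 0.1]),
                                  normalize_y=True).fit(X, y)
    pred, std = gp.predict(np.array([[0.31, 0.82]]), return_std=True)
    ```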

  18. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
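
    A minimal sketch of the damped least-squares step described above, assuming a generic kinematic Jacobian and pose-error vector supplied by the caller; the weight matrix corresponds to the emphasis on position versus orientation accuracy mentioned in the abstract. This is a one-step illustration under those assumptions, not the authors' full compensator.

    ```python
    import numpy as np

    def dls_joint_correction(jacobian, pose_error, damping=1e-3, weight=None):
        """One damped least-squares (DLS) step for robot accuracy compensation.

        jacobian   : (m, n) kinematic Jacobian at the nominal joint command
        pose_error : (m,) end-effector pose error between actual and nominal kinematic models
        damping    : damping factor keeping the correction bounded near singular configurations
        weight     : optional (m, m) weight matrix, e.g. to emphasise position over orientation
        """
        J = np.asarray(jacobian, dtype=float)
        e = np.asarray(pose_error, dtype=float)
        W = np.eye(J.shape[0]) if weight is None else np.asarray(weight, dtype=float)
        # Minimise ||J dq - e||_W^2 + damping^2 ||dq||^2  ->  (J^T W J + damping^2 I) dq = J^T W e
        A = J.T @ W @ J + damping**2 * np.eye(J.shape[1])
        return np.linalg.solve(A, J.T @ W @ e)

    # Toy usage: correct a 6-joint command for a small pose error (iterate until converged)
    rng = np.random.default_rng(0)
    J = rng.standard_normal((6, 6))
    dq = dls_joint_correction(J, np.array([1e-3, 0, 0, 0, 0, 0]))
    ```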

  19. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  20. Reliability of linear distance measurement for dental implant length with standardized periapical radiographs.

    PubMed

    Wakoh, Mamoru; Harada, Takuya; Otonari, Takamichi; Otonari-Yamamoto, Mika; Ohkubo, Mai; Kousuge, Yuji; Kobayashi, Norio; Mizuta, Shigeru; Kitagawa, Hiromi; Sano, Tsukasa

    2006-08-01

    The purpose of this study was to investigate the accuracy of distance measurements of implant length based on periapical radiographs compared with that of other modalities. We carried out an experimental trial to compare precision in distance measurement. Dental implant fixtures were buried in the canine and first molar regions. These were then subjected to periapical (PE) radiography, panoramic (PA) radiography, and conventional (CV) and medical computed (CT) tomography. The length of the implant fixture on each film was measured by nine observers, and the degree of precision was statistically analyzed. The precision of PE radiographs and CT tomograms was similar and the highest among the modalities. Standardized PE radiography, in particular, was superior to CT tomography in the first molar region. This suggests that standardized PE radiographs should be utilized as a reliable modality for longitudinal and linear distance measurement, depending on the implant length at the local implantation site. PMID:17344618

  1. Precision Spectroscopy of Atomic Hydrogen

    NASA Astrophysics Data System (ADS)

    Beyer, A.; Parthey, Ch G.; Kolachevsky, N.; Alnis, J.; Khabarova, K.; Pohl, R.; Peters, E.; Yost, D. C.; Matveev, A.; Predehl, K.; Droste, S.; Wilken, T.; Holzwarth, R.; Hänsch, T. W.; Abgrall, M.; Rovera, D.; Salomon, Ch; Laurent, Ph; Udem, Th

    2013-12-01

    Precise determinations of transition frequencies of simple atomic systems are required for a number of fundamental applications such as tests of quantum electrodynamics (QED), the determination of fundamental constants and nuclear charge radii. The sharpest transition in atomic hydrogen occurs between the metastable 2S state and the 1S ground state. Its transition frequency has now been measured with an accuracy of almost 15 digits using an optical frequency comb and a cesium atomic clock as a reference [1]. A recent measurement of the 2S - 2P3/2 transition frequency in muonic hydrogen is in significant contradiction with the hydrogen data if QED calculations are assumed to be correct [2, 3]. We hope to contribute to this so-called "proton size puzzle" by providing additional experimental input from hydrogen spectroscopy.

  2. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus enabling accurate retention of a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 µm.

  3. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, which is composed of an oxide layer of high electrical permittivity (high-k dielectric constant) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret the inaccuracy in the depth reconstruction of these devices in APT.

  4. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention to internal somatic cues has been labeled interoception. The present study examined the role of physiological arousal and subjective anxiety in interoceptive accuracy. Infrequent panickers and nonanxious participants took part in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded based on polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462

  5. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with the industry-standard High Pressure Ionization Chamber detector hardware. A new highly sensitive readout electronic circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. A new signal processing technique and dedicated data acquisition were tested together with the new readout. The designed system enabled data collection on a remote Linux-operated computer workstation, which was connected to the detectors using a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating mode operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  6. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are required for high-precision results; however, real-life situations do not always allow us to collect 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers therefore pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently, some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions; up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data; here, data from each month of a year, over two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking the seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.

  7. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  8. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described which enables real-time accountability of the displacement feedback system applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  9. High-precision hydraulic Stewart platform

    NASA Astrophysics Data System (ADS)

    van Silfhout, Roelof G.

    1999-08-01

    We present a novel design for a Stewart platform (or hexapod), an apparatus which performs positioning tasks with high accuracy. The platform, which is supported by six hydraulic telescopic struts, provides six degrees of freedom with 1 μm resolution. Rotations about user defined pivot points can be specified for any axis of rotation with microradian accuracy. Motion of the platform is performed by changing the strut lengths. Servo systems set and maintain the length of the struts to high precision using proportional hydraulic valves and incremental encoders. The combination of hydraulic actuators and a design which is optimized in terms of mechanical stiffness enables the platform to manipulate loads of up to 20 kN. Sophisticated software allows direct six-axis positioning including true path control. Our platform is an ideal support structure for a large variety of scientific instruments that require a stable alignment base with high-precision motion.

  10. Accuracy Validation of Large-scale Block Adjustment without Control of ZY3 Images over China

    NASA Astrophysics Data System (ADS)

    Yang, Bo

    2016-06-01

    Mapping from optical satellite images without ground control is one of the goals of photogrammetry. Using 8802 three-line-array stereo scenes (a total of 26406 images) of ZY3 over China, we propose a large-scale block adjustment method for optical satellite images without ground control, based on the RPC model, in which a single image is regarded as the adjustment unit to be organized. To overcome the block distortion caused by unstable adjustment without ground control and the excessive accumulation of errors, we use virtual control points created from the initial RPC model of the images as weighted observations and add them into the adjustment model to refine the adjustment. We use 8000 uniformly distributed high-precision check points to evaluate the geometric accuracy of the DOM (Digital Ortho Model) and DSM (Digital Surface Model) products, for which the standard deviations in plane and elevation are 3.6 m and 4.2 m respectively. The geometric accuracy is consistent across the whole block, and the mosaic accuracy of neighboring DOMs is within a pixel, so a seamless mosaic can be produced. This method achieves the goal of mapping without ground control at an accuracy better than 5 m for the whole of China from ZY3 satellite images.

  11. The International Linear Collider

    NASA Astrophysics Data System (ADS)

    List, Benno

    2014-04-01

    The International Linear Collider (ILC) is a proposed e+e- linear collider with a centre-of-mass energy of 200-500 GeV, based on superconducting RF cavities. The ILC would be an ideal machine for precision studies of a light Higgs boson and the top quark, and would have a discovery potential for new particles that is complementary to that of LHC. The clean experimental conditions would allow the operation of detectors with extremely good performance; two such detectors, ILD and SiD, are currently being designed. Both make use of novel concepts for tracking and calorimetry. The Japanese High Energy Physics community has recently recommended to build the ILC in Japan.

  12. General linear chirplet transform

    NASA Astrophysics Data System (ADS)

    Yu, Gang; Zhou, Yiqi

    2016-03-01

    Time-frequency (TF) analysis (TFA) is an effective tool for characterizing the time-varying features of a signal and has drawn much attention over a long period. With the development of TFA, many advanced methods have been proposed that provide more precise TF results; however, some restrictions are inevitably introduced. In this paper, we introduce a novel TFA method, termed the general linear chirplet transform (GLCT), which can overcome some limitations of current TFA methods. In numerical and experimental validations, comparison with current TFA methods demonstrates several advantages of the GLCT: it characterizes multi-component signals with distinct non-linear features well, it is independent of the mathematical model and the initial TFA method, it allows reconstruction of the component of interest, and it is insensitive to noise.

  13. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  14. Design and experiment on a multi-functioned and programmable piezoelectric ceramic power supply with high precision for speckle interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Ye, Yan; Wang, Yong-hong; Yang, En-zhen

    2016-01-01

    Speckle interferometry is a method for measuring a structure's tiny deformations and requires accurate phase information from the interference fringes. The phase information is acquired through micro-displacements produced by a piezoelectric ceramic (PZT). In order to drive the PZT micro-displacement actuator, a multi-functioned and programmable high-precision PZT power supply is designed. A calibration experiment was performed on the PZT micro-actuator in speckle interferometry, and further experiments were carried out to test the relevant characteristics of the power supply. The experimental results show that it has high linearity, repeatability, and stability with low ripple, and that it meets the reliability and displacement accuracy requirements of speckle interferometry.

  15. Precision calibration and systematic error reduction in the long trace profiler

    SciTech Connect

    Qian, Shinan; Sostero, Giovanni; Takacs, Peter Z.

    2000-01-01

    The long trace profiler (LTP) has become the instrument of choice for surface figure testing and slope error measurement of mirrors used for synchrotron radiation and x-ray astronomy optics. In order to achieve highly accurate measurements with the LTP, systematic errors need to be reduced by precise angle calibration and accurate focal plane position adjustment. A self-scanning method is presented to adjust the focal plane position of the detector with high precision by use of a pentaprism scanning technique. The focal plane position can be set to better than 0.25 mm for a 1250-mm-focal-length Fourier-transform lens using this technique. The use of a 0.03-arcsec-resolution theodolite combined with the sensitivity of the LTP detector system can be used to calibrate the angular linearity error very precisely. Some suggestions are introduced for reducing the system error. With these precision calibration techniques, accuracy in the measurement of figure and slope error on meter-long mirrors is now at a level of about 1 µrad rms over the whole testing range of the LTP. (c) 2000 Society of Photo-Optical Instrumentation Engineers.

  16. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
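
    The sparsity idea can be illustrated with a generic penalised estimator. The graphical lasso used below is a stand-in from scikit-learn, not the estimator of the abstract, and the toy tridiagonal precision matrix, sample size, and penalty value are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(1)

    # Toy sparse truth: tridiagonal precision matrix in 30 dimensions
    true_prec = np.eye(30) + 0.3 * np.diag(np.ones(29), k=1) + 0.3 * np.diag(np.ones(29), k=-1)
    true_cov = np.linalg.inv(true_prec)

    # Relatively few samples compared with the dimension
    samples = rng.multivariate_normal(np.zeros(30), true_cov, size=100)

    # l1-penalised maximum-likelihood estimate exploits the sparsity of the precision matrix
    model = GraphicalLasso(alpha=0.05).fit(samples)       # alpha controls the sparsity penalty
    psi_sparse = model.precision_                          # sparse precision estimate
    psi_sample = np.linalg.inv(np.cov(samples, rowvar=False))   # naive sample estimate, for comparison
    ```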

  17. Precision Higgs Physics

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja

    2015-04-01

    The future of the high energy physics program will increasingly rely upon precision studies looking for deviations from the Standard Model. Run I of the Large Hadron Collider (LHC) triumphantly discovered the long-awaited Higgs boson, and there is great hope in the particle physics community that this new state will open a portal onto a new theory of Nature at the smallest scales. A precision study of Higgs boson properties is needed in order to test whether this belief is true. New theoretical ideas and high-precision QCD tools are crucial to fulfill this goal. They become even more important as larger data sets from LHC Run II further reduce the experimental errors and theoretical uncertainties begin to dominate. In this talk, I will review recent progress in understanding Higgs properties, including the calculation of precision predictions needed to identify possible physics beyond the Standard Model in the Higgs sector. New ideas for measuring the Higgs couplings to light quarks as well as bounding the Higgs width in a model-independent way will be discussed. Precision predictions for Higgs production in association with jets and ongoing efforts to calculate the inclusive N3LO cross section will be reviewed.

  18. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-05-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  19. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√{N_sim} rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√{N_sim} limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.

  20. Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking

    ERIC Educational Resources Information Center

    Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas

    2012-01-01

    This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…

  1. Precise Indoor Localization for Mobile Laser Scanner

    NASA Astrophysics Data System (ADS)

    Kaijaluoto, R.; Hyyppä, A.

    2015-05-01

    Accurate 3D data are of high importance for indoor modeling in various applications in construction, engineering and cultural heritage documentation. Because the lack of GNSS signals hampers the use of kinematic platforms indoors, TLS is currently the most accurate and precise method for collecting such data. Due to its static, single-viewpoint data collection, however, excessive time and data redundancy are needed for integrity and coverage. Localization methods with affordable scanners are therefore used to solve the mobile platform pose problem. The aim of this study was to investigate what level of trajectory accuracy can be achieved with high-quality sensors and freely available state-of-the-art planar SLAM algorithms, and how well this trajectory translates to a point cloud collected with a secondary scanner. High-precision laser scanners were used with a novel way of combining the strengths of two SLAM algorithms into a functional method for precise localization. We collected five datasets using the Slammer platform with two laser scanners and processed them with altogether 20 different parameter sets. The results were validated against a TLS reference and show that increasing the scan frequency improves the trajectory, reaching 20 mm RMSE levels for the best-performing parameter sets. Further analysis of the 3D point cloud showed good agreement with the TLS reference, with 17 mm positional RMSE. With precision scanners, the obtained point cloud provides a high level of detail for indoor modeling, with accuracies close to TLS at best and vastly improved data collection efficiency.

  2. Ultra-precision: enabling our future.

    PubMed

    Shore, Paul; Morantz, Paul

    2012-08-28

    This paper provides a perspective on the development of ultra-precision technologies: What drove their evolution and what do they now promise for the future as we face the consequences of consumption of the Earth's finite resources? Improved application of measurement is introduced as a major enabler of mass production, and its resultant impact on wealth generation is considered. This paper identifies the ambitions of the defence, automotive and microelectronics sectors as important drivers of improved manufacturing accuracy capability and ever smaller feature creation. It then describes how science fields such as astronomy have presented significant precision engineering challenges, illustrating how these fields of science have achieved unprecedented levels of accuracy, sensitivity and sheer scale. Notwithstanding their importance to science understanding, many science-driven ultra-precision technologies became key enablers for wealth generation and other well-being issues. Specific ultra-precision machine tools important to major astronomy programmes are discussed, as well as the way in which subsequently evolved machine tools made at the beginning of the twenty-first century, now provide much wider benefits. PMID:22802499

  3. Precision Measurements in 37K

    NASA Astrophysics Data System (ADS)

    Anholm, Melissa; Ashery, Daniel; Behling, Spencer; Fenker, Benjamin; Melconian, Dan; Mehlman, Michael; Behr, John; Gorelov, Alexandre; Olchanski, Konstantin; Preston, Claire; Warner, Claire; Gwinner, Gerald

    2015-10-01

    We have performed precision measurements of the kinematics of the daughter particles in the decay of 37K. This isotope decays by β+ emission in a mixed Fermi/Gamow-Teller transition to its isobaric analog, 37Ar. Because the higher-order standard model corrections to this decay process are well understood, it is an ideal candidate for improving constraints on interactions beyond the standard model. Our setup utilizes a magneto-optical trap to confine and cool samples of 37K, which are then spin-polarized by optical pumping. This allows us to perform measurements on both polarized and unpolarized nuclei, which is valuable for a complete understanding of systematic effects. Precision measurements of this decay are expected to be sensitive to the presence of right-handed vector currents, as well as a linear combination of scalar and tensor currents. Progress towards a final result is presented here. Support provided by: NSERC, NRC through TRIUMF, DOE ER40773, Early Career ER41747, Israel Science Foundation.

  4. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) can be applied in high-speed scenarios such as aircraft automatic landing, precise agriculture and intelligent vehicle. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, is not able to obtain a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier phase observation equations of the two receivers at different epochs with short baseline. The ephemeris error and atmosphere delay are the possible main error sources on positioning accuracy in this model, and they are analyzed theoretically. In a short DLTTD and during a period of quiet ionosphere activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of satellite velocity error which increase linearly along with DLTTD. The cycle slip of asynchronous double-differencing carrier phase is detected by TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can deal with synchronous observation model (SOM) and achieve precise positioning solution with synchronous observations as well, since the SOM is only a specific case of AOM. The proposed method not only can reduce the cost of data collection and transmission, but can also support the mobile phone network data link transfer mode for the data of the reference receiver. This method can avoid data synchronizing process besides ambiguity initialization step, which is very convenient for real-time navigation of vehicles. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  5. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  6. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base unit, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  7. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  8. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  9. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  10. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  11. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  12. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  13. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2015-09-01

    The Earth's rotation reflects the coupling processes among the solid Earth, atmosphere, oceans, mantle, and core of the Earth on multiple spatial and temporal scales. The Earth's rotation can be described by the Earth orientation parameters, abbreviated as EOP (mainly including two polar motion components PM_X and PM_Y, and variation in the length of day ΔLOD). The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, precise satellite orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic technologies generally lag by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis is composed of the following three aspects, for the purpose of improving the EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy for the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required time span of EOP prediction: for short-term prediction, the basic data series should be shorter, while for long-term prediction the series should be longer. The analysis also showed that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients
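
    A minimal sketch of the LS+AR idea for a daily-sampled EOP series: a least-squares fit of trend plus harmonic terms is extrapolated, and an AR model fitted to the residuals extrapolates the remainder. The harmonic periods (annual and a roughly Chandler-like term), the AR order, and the Yule-Walker fitting below are illustrative assumptions, not the exact models of the thesis.

    ```python
    import numpy as np

    def ls_ar_predict(series, days_ahead, ar_order=20):
        """LS+AR sketch for a daily EOP series (e.g. PM_X)."""
        x = np.asarray(series, dtype=float)
        t = np.arange(len(x), dtype=float)

        def design(tt):
            cols = [np.ones_like(tt), tt]
            for p in (365.25, 435.0):              # annual and Chandler-like periods (assumed)
                cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
            return np.column_stack(cols)

        A = design(t)
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # least-squares part
        resid = x - A @ coef

        # Yule-Walker estimate of AR coefficients on the LS residuals
        n = len(resid)
        r = np.array([resid[:n - k] @ resid[k:] / n for k in range(ar_order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(ar_order)] for i in range(ar_order)])
        phi = np.linalg.solve(R, r[1:])

        # Recursive AR extrapolation of the residuals
        hist = list(resid[-ar_order:])
        ar_fore = []
        for _ in range(days_ahead):
            nxt = float(np.dot(phi, hist[::-1]))    # phi_1 * newest + ... + phi_p * oldest
            ar_fore.append(nxt)
            hist = hist[1:] + [nxt]

        t_future = np.arange(len(x), len(x) + days_ahead, dtype=float)
        return design(t_future) @ coef + np.array(ar_fore)
    ```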

  14. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but the full orbital parameters can also be determined, allowing study of system stability in multiple-planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the Galactic potential, the history of the formation of the Galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  15. LINEAR ACCELERATOR

    DOEpatents

    Colgate, S.A.

    1958-05-27

    An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles provides a stable and focused particle beam.

  16. Flexible analysis of digital PCR experiments using generalized linear mixed models.

    PubMed

    Vynck, Matthijs; Vandesompele, Jo; Nijs, Nele; Menten, Björn; De Ganck, Ariane; Thas, Olivier

    2016-09-01

    The use of digital PCR for quantification of nucleic acids is rapidly growing. A major drawback remains the lack of flexible data analysis tools. Published analysis approaches are either tailored to specific problem settings or fail to take into account sources of variability. We propose the generalized linear mixed models framework as a flexible tool for analyzing a wide range of experiments. We also introduce a method for estimating reference gene stability to improve accuracy and precision of copy number and relative expression estimates. We demonstrate the usefulness of the methodology on a complex experimental setup. PMID:27551671
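
    For context, the basic Poisson quantification that such mixed-model analyses generalize can be sketched as below; the partition counts and partition volume are invented numbers, and the generalized linear mixed model itself (random effects for replicates, reference-gene stability estimation) is not reproduced here.

    ```python
    # Minimal sketch of basic digital PCR quantification (illustrative values).
    import numpy as np

    def dpcr_copies_per_ul(n_positive, n_total, partition_volume_nl=0.85):
        """Poisson estimate of target concentration from partition counts."""
        p_negative = (n_total - n_positive) / n_total
        lam = -np.log(p_negative)                   # mean target copies per partition
        return lam / (partition_volume_nl * 1e-3)   # copies per microlitre

    # Replicate-to-replicate variability (a 'source of variability' in the sense
    # above) would enter a GLMM as random effects; here replicates are pooled naively.
    print(dpcr_copies_per_ul(n_positive=4300, n_total=20000))
    ```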

  17. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    The prototype precision bolometer calibration bridge is a manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin-film bolometers.

  18. Precision metal molding

    NASA Technical Reports Server (NTRS)

    Townhill, A.

    1967-01-01

    The method provides precise alignment for metal-forming dies while permitting minimal thermal expansion without die warpage or cavity space restriction. The interfacing dowel bars and die side facings are arranged so that the dies are restrained along one orthogonal direction and permitted to expand thermally along the other.

  19. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  20. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  1. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has been given multiple definitions, often resulting in students learning principles that are actually false, is presented, using as an example the new term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  2. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  3. Linear Clouds

    NASA Technical Reports Server (NTRS)

    2006-01-01

    Context image for PIA03667, Linear Clouds [figure removed for brevity; see original site]

    These clouds are located near the edge of the south polar region. The cloud tops are the puffy white features in the bottom half of the image.

    Image information: VIS instrument. Latitude -80.1N, Longitude 52.1E. 17 meter/pixel resolution.

    Note: this THEMIS visual image has not been radiometrically or geometrically calibrated for this preliminary release. An empirical correction has been performed to remove instrumental effects. A linear shift has been applied in the cross-track and down-track direction to approximate spacecraft and planetary motion. Fully calibrated and geometrically projected images will be released through the Planetary Data System in accordance with Project policies at a later time.

    NASA's Jet Propulsion Laboratory manages the 2001 Mars Odyssey mission for NASA's Office of Space Science, Washington, D.C. The Thermal Emission Imaging System (THEMIS) was developed by Arizona State University, Tempe, in collaboration with Raytheon Santa Barbara Remote Sensing. The THEMIS investigation is led by Dr. Philip Christensen at Arizona State University. Lockheed Martin Astronautics, Denver, is the prime contractor for the Odyssey project, and developed and built the orbiter. Mission operations are conducted jointly from Lockheed Martin and from JPL, a division of the California Institute of Technology in Pasadena.

  4. Visual Tracking via Sparse and Local Linear Coding.

    PubMed

    Wang, Guofeng; Qin, Xueying; Zhong, Fan; Liu, Yue; Li, Hongbo; Peng, Qunsheng; Yang, Ming-Hsuan

    2015-11-01

    The state search is an important component of any object tracking algorithm. Numerous algorithms have been proposed, but stochastic sampling methods (e.g., particle filters) are arguably among the most effective approaches. However, the discretization of the state space complicates the search for the precise object location. In this paper, we propose a novel tracking algorithm that extends the state space of particle observations from discrete to continuous. The solution is determined accurately via iterative linear coding between two convex hulls. The algorithm is formulated as an optimization problem, which can be efficiently solved by either convex sparse coding or locality-constrained linear coding. The algorithm is also very flexible and can be combined with many generic object representations. Thus, we first use sparse representation to achieve an efficient search mechanism for the algorithm and demonstrate its accuracy. Next, two other object representation models, i.e., least soft-threshold squares and adaptive structural local sparse appearance, are implemented with improved accuracy to demonstrate the flexibility of our algorithm. Qualitative and quantitative experimental results demonstrate that the proposed tracking algorithm performs favorably against the state-of-the-art methods in dynamic scenes. PMID:26353352
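
    As a loose illustration of the sparse-representation ingredient mentioned above (not the paper's convex-hull formulation or tracking pipeline), a candidate patch can be coded over a small template dictionary with an l1-regularized fit; the patches below are synthetic.

    ```python
    # Sparse coding of a candidate patch over template vectors (synthetic data).
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    templates = rng.random((256, 10))           # 10 vectorized 16x16 target templates
    candidate = templates @ rng.random(10) + 0.01 * rng.standard_normal(256)

    lasso = Lasso(alpha=0.01, positive=True, max_iter=5000)
    lasso.fit(templates, candidate)             # candidate ~ templates @ sparse coefficients
    coeffs = lasso.coef_
    error = np.linalg.norm(candidate - templates @ coeffs)
    print(np.round(coeffs, 3), error)           # small error => candidate well explained
    ```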

  5. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is on the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in the azimuth and range dimensions and elongated in the cross-range dimension with a precision on the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and to agree with the ground truth position at a 99% confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
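
    A minimal sketch of the error-ellipsoid construction mentioned above, assuming a 3x3 position variance-covariance matrix is available for a scatterer; the matrix values and the 99% confidence scaling are illustrative.

    ```python
    # Error ellipsoid (semi-axes and directions) from a 3x3 position covariance.
    import numpy as np
    from scipy.stats import chi2

    def error_ellipsoid(Q, confidence=0.99):
        eigval, eigvec = np.linalg.eigh(Q)          # axis variances and directions
        k = np.sqrt(chi2.ppf(confidence, df=3))     # scale for the confidence level
        return k * np.sqrt(eigval), eigvec          # semi-axis lengths (m), columns = axes

    # Elongated ("cigar-shaped") example: cm-level in two axes, metre-level in the third.
    Q = np.diag([0.02**2, 0.03**2, 4.0**2])
    semi_axes, axes = error_ellipsoid(Q)
    print(semi_axes)
    ```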

  6. Precision bridge circuit using a temperature sensor

    NASA Technical Reports Server (NTRS)

    Mount, Bruce E. (Inventor)

    1992-01-01

    A precision bridge measurement circuit connected to a current source providing a linear output voltage versus resistance change of a variable resistance (resistance temperature transducer) including a voltage follower in one branch of the bridge so that the zero setting of the transducer resistance does not depend upon the current source or upon an excitation voltage. The zero setting depends only on the precision and stability of the three resistances. By connecting the output of an instrumentation amplifier to a feedback resistor and then to the output of the voltage follower, minor nonlinearities in the resistance-vs-temperature output of a resistance-temperature transducer, such as a platinum temperature sensor, may be corrected. Sensors which have nonlinearity opposite in polarity to platinum, such as nickel-iron sensors, may be linearized by inserting an inverting amplifier into the feedback loop.

  7. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field-collected timing data to produce an averaged time line composed of straight-line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  8. Portable Linear Sled (PLS) for biomedical research

    NASA Technical Reports Server (NTRS)

    Vallotton, Will; Matsuhiro, Dennis; Wynn, Tom; Temple, John

    1993-01-01

    The PLS is a portable linear motion generating device conceived by researchers at Ames Research Center's Vestibular Research Facility and designed by engineers at Ames for the study of motion sickness in space. It is an extremely smooth apparatus, powered by linear motors and suspended on air bearings which ride on precision ground ceramic ways.

  9. Iterative Precise Conductivity Measurement with IDEs

    PubMed Central

    Hubálek, Jaromír

    2015-01-01

    The paper presents a new approach in the field of precise electrolytic conductivity measurements with planar thin- and thick-film electrodes. This novel measuring method was developed for measurement with comb-like electrodes called interdigitated electrodes (IDEs). Correction characteristics over a wide range of specific conductivities were determined from an interface impedance characterization of the thick-film IDEs. The local maximum of the capacitive part of the interface impedance is used for corrections to obtain linear responses. The measuring frequency was determined over a wide range of measured conductivities. An iterative measurement mode was proposed to measure the conductivity precisely at the correct frequency and thus achieve a highly accurate response. The method provides precise conductivity measurements over concentration ranges from 10⁻⁶ to 1 M without replacing the electrode cell. PMID:26007745

  10. Improving the accuracy of phase-shifting techniques

    NASA Astrophysics Data System (ADS)

    Cruz-Santos, William; López-García, Lourdes; Redondo-Galvan, Arturo

    2015-05-01

    The traditional phase-shifting profilometry technique is based on the projection of digital interference patterns and computation of the absolute phase map. Recently, a method was proposed that used phase interpolation for corner detection at subpixel accuracy in the projector image to improve camera-projector calibration. We propose a general strategy to improve the accuracy of the correspondence search that can be used to obtain high-precision three-dimensional reconstruction. Experimental results show that our strategy can improve on the precision of the standard phase-shifting method.
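
    For reference, the standard N-step phase-shifting computation that such methods start from can be sketched as follows (the sub-pixel correspondence refinement proposed above is not reproduced); the 2*pi*k/N shift schedule and array shapes are assumptions of this sketch.

    ```python
    # Wrapped phase map from N phase-shifted fringe images of shape (N, H, W).
    import numpy as np

    def wrapped_phase(images):
        n = images.shape[0]
        k = np.arange(n).reshape(-1, 1, 1)
        num = np.sum(images * np.sin(2 * np.pi * k / n), axis=0)
        den = np.sum(images * np.cos(2 * np.pi * k / n), axis=0)
        return np.arctan2(-num, den)   # wrapped to (-pi, pi]; unwrapping is a separate step

    # quick self-check with synthetic fringes
    H, W, N = 4, 4, 4
    true_phi = np.linspace(0, np.pi, H * W).reshape(H, W)
    imgs = np.stack([1 + 0.5 * np.cos(true_phi + 2 * np.pi * k / N) for k in range(N)])
    print(np.allclose(wrapped_phase(imgs), true_phi))
    ```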

  11. Using Genetic Distance to Infer the Accuracy of Genomic Prediction.

    PubMed

    Scutari, Marco; Mackay, Ian; Balding, David

    2016-09-01

    The prediction of phenotypic traits using high-density genomic data has many applications, such as the selection of plants and animals of commercial interest, and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either FST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics. PMID:27589268
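
    A hedged sketch of how the reported linear decay could be quantified in practice is given below; the F_ST values and accuracies are synthetic, not the paper's data.

    ```python
    # Fit of a linear decay of predictive accuracy versus genetic distance (synthetic numbers).
    import numpy as np

    fst = np.array([0.00, 0.02, 0.05, 0.08, 0.12, 0.18])        # training-target F_ST
    accuracy = np.array([0.62, 0.58, 0.52, 0.47, 0.40, 0.31])   # cor(true, predicted)

    slope, intercept = np.polyfit(fst, accuracy, deg=1)
    print(f"accuracy ~ {intercept:.2f} + ({slope:.2f}) * F_ST")
    # The F_ST at which the fitted accuracy falls below a chosen minimum then suggests
    # how far the target population can drift before the model needs recalibration.
    ```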

  12. High-precision triangular-waveform generator

    DOEpatents

    Mueller, T.R.

    1981-11-14

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  13. Beam Instrumentation Challenges at the International Linear Collider

    SciTech Connect

    Tenenbaum, Peter; /SLAC

    2006-05-16

    The International Linear Collider (ILC) is a proposed facility for the study of high energy physics through electron-positron collisions at center-of-mass energies up to 500 GeV and luminosities up to 2 × 10³⁴ cm⁻² s⁻¹. Meeting the ILC's goals will require an extremely sophisticated suite of beam instruments for the preservation of beam emittance, the diagnosis of optical errors and mismatches, the determination of beam properties required for particle physics purposes, and machine protection. The instrumentation foreseen for the ILC is qualitatively similar to equipment in use at other accelerator facilities in the world, but in many cases the precision, accuracy, stability, or dynamic range required by the ILC exceed what is typically available in today's accelerators. In this paper we survey the beam instrumentation requirements of the ILC and describe the system components which are expected to meet those requirements.

  14. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  15. LINEAR SOLAR MODELS

    SciTech Connect

    Villante, F. L.; Ricci, B.

    2010-05-01

    We present a new approach to studying the properties of the Sun. We consider small variations of the physical and chemical properties of the Sun with respect to standard solar model predictions and we linearize the structure equations to relate them to the properties of the solar plasma. By assuming that the (variation of) present solar composition can be estimated from the (variation of) nuclear reaction rates and elemental diffusion efficiency in the present Sun, we obtain a linear system of ordinary differential equations which can be used to calculate the response of the Sun to an arbitrary modification of the input parameters (opacity, cross sections, etc.). This new approach is intended to be a complement to the traditional methods for solar model (SM) calculation and allows us to investigate in a more efficient and transparent way the role of parameters and assumptions in SM construction. We verify that these linear solar models recover the predictions of the traditional SMs with a high level of accuracy.
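
    Schematically, and with illustrative notation rather than the authors' own, the linear-response system described above can be written as follows.

    ```latex
    % Small fractional perturbations \delta y_i of the solar structure variables,
    % driven by changes \delta p_k of the input parameters (opacity, cross sections, ...):
    \begin{equation}
      \frac{\mathrm{d}\,\delta y_i(m)}{\mathrm{d}m}
        = \sum_j A_{ij}(m)\,\delta y_j(m) + \sum_k B_{ik}(m)\,\delta p_k ,
    \end{equation}
    % with the coefficient matrices A and B evaluated on the standard solar model, so the
    % response to an arbitrary input modification follows by integrating this linear ODE system.
    ```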

  16. High Accuracy Passive Magnetic Field-Based Localization for Feedback Control Using Principal Component Analysis.

    PubMed

    Foong, Shaohui; Sun, Zhenglong

    2016-01-01

    In this paper, a novel magnetic field-based sensing system employing statistically optimized concurrent multiple sensor outputs for precise field-position association and localization is presented. This method capitalizes on the independence between simultaneous spatial field measurements at multiple locations to induce unique correspondences between field and position. This single-source-multi-sensor configuration is able to achieve accurate and precise localization and tracking of translational motion without contact over large travel distances for feedback control. Principal component analysis (PCA) is used as a pseudo-linear filter to optimally reduce the dimensions of the multi-sensor output space for computationally efficient field-position mapping with artificial neural networks (ANNs). Numerical simulations are employed to investigate the effects of geometric parameters and Gaussian noise corruption on PCA assisted ANN mapping performance. Using a 9-sensor network, the sensing accuracy and closed-loop tracking performance of the proposed optimal field-based sensing system is experimentally evaluated on a linear actuator with a significantly more expensive optical encoder as a comparison. PMID:27529253
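
    A minimal sketch of the PCA-plus-ANN pipeline described above, using scikit-learn; the 9-sensor field model, data sizes, and network size are invented for illustration and do not reproduce the paper's sensor geometry or training setup.

    ```python
    # PCA compresses concurrent 9-sensor field readings; a small ANN maps them to position.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 50.0, 2000)                             # true actuator position (mm)
    sensor_pos = np.linspace(0.0, 50.0, 9)
    B = 1.0 / (1.0 + (x[:, None] - sensor_pos[None, :]) ** 2)    # toy field-vs-position law
    B += 0.01 * rng.standard_normal(B.shape)                     # measurement noise

    model = make_pipeline(PCA(n_components=4),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                       random_state=0))
    model.fit(B[:1500], x[:1500])
    rmse = np.sqrt(np.mean((model.predict(B[1500:]) - x[1500:]) ** 2))
    print(f"hold-out RMSE: {rmse:.2f} mm")
    ```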

  17. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  18. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  19. Precision Polarization of Neutrons

    NASA Astrophysics Data System (ADS)

    Martin, Elise; Barron-Palos, Libertad; Couture, Aaron; Crawford, Christopher; Chupp, Tim; Danagoulian, Areg; Estes, Mary; Hona, Binita; Jones, Gordon; Klein, Andi; Penttila, Seppo; Sharma, Monisha; Wilburn, Scott

    2009-05-01

    Determining polarization of a cold neutron beam to high precision is required for the next generation neutron decay correlation experiments at the SNS, such as the proposed abBA and PANDA experiments. Precision polarimetry measurements were conducted at Los Alamos National Laboratory with the goal of determining the beam polarization to the level of 10⁻³ or better. The cold neutrons from FP12 were polarized using optically polarized ^3He gas as a spin filter, which has a highly spin-dependent absorption cross section. A second ^3He spin filter was used to analyze the neutron polarization after passing through a resonant RF spin rotator. A discussion of the experiment and results will be given.

  20. A passion for precision

    ScienceCinema

    None

    2011-10-06

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  1. Precision contour gage

    DOEpatents

    Bieg, Lothar F.

    1990-12-11

    An apparatus for gaging the contour of a machined part includes a rotary slide assembly, a kinematic mount to move the apparatus into and out of position for measuring the part while the part is still on the machining apparatus, a linear probe assembly with a suspension arm and a probe assembly including a probe tip for providing a measure of linear displacement of the tip on the surface of the part, a means for changing relative positions between the part and the probe tip, and a means for recording data points representing linear positions of the probe tip at prescribed rotation intervals in the position changes between the part and the probe tip.

  2. Precision contour gage

    DOEpatents

    Bieg, L.F.

    1990-12-11

    An apparatus for gaging the contour of a machined part includes a rotary slide assembly, a kinematic mount to move the apparatus into and out of position for measuring the part while the part is still on the machining apparatus, a linear probe assembly with a suspension arm and a probe assembly including a probe tip for providing a measure of linear displacement of the tip on the surface of the part, a means for changing relative positions between the part and the probe tip, and a means for recording data points representing linear positions of the probe tip at prescribed rotation intervals in the position changes between the part and the probe tip. 5 figs.

  3. Analysis of precision in tumor tracking based on optical positioning system during radiotherapy.

    PubMed

    Zhou, Han; Shen, Junshu; Li, Bing; Chen, Junting; Zhu, Xixu; Ge, Yun; Wang, Yongjian

    2016-03-19

    Tumor tracking is performed during patient set-up and monitoring of respiratory motion in radiotherapy. In the clinical setting, several types of equipment are used for this set-up, such as the electronic portal imaging device (EPID) and cone beam CT (CBCT). Technically, an optical positioning system tracks the offset between infrared marker balls reflected from the body and the machine isocenter. Our objective is to compare the clinical positioning error of patient setup between cone beam CT (CBCT) and the optical positioning system (OPS), and to evaluate the traditional positioning systems and the OPS based on our proposed approach to patient positioning. In our experiments, a phantom was used and its setup errors were measured in three directions. Specifically, the deviations in the left-to-right (LR), anterior-to-posterior (AP), and inferior-to-superior (IS) directions were measured by vernier caliper on graph paper using a Varian linear accelerator. The accuracy of the OPS was then verified on the basis of this experimental study. To verify the accuracy of the phantom experiment, 40 patients were selected for our radiotherapy study. To illustrate the precision of the optical positioning system, we designed clinical trials using the EPID. From our radiotherapy procedure, we conclude that the OPS has higher precision than conventional positioning methods and is a comparatively fast and efficient positioning method with respect to the CBCT guidance system. PMID:27257880

  4. Improved 6-Plex Tandem Mass Tags Quantification Throughput Using a Linear Ion Trap-High-Energy Collision Induced Dissociation MS(3) Scan.

    PubMed

    Liu, Jane M; Sweredoski, Michael J; Hess, Sonja

    2016-08-01

    The use of tandem mass tags (TMT) as an isobaric labeling strategy is a powerful method for quantitative proteomics, yet its accuracy has traditionally suffered from interference. This interference can be largely overcome by selecting MS(2) fragment precursor ions for high-energy collision induced dissociation (HCD) MS(3) analysis in an Orbitrap scan. While this approach minimizes the interference effect, sensitivity suffers due to the high AGC targets and long acquisition times associated with MS(3) Orbitrap detection. We investigated whether acquiring the MS(3) scan in a linear ion trap with its lower AGC target would increase overall quantification levels with a minimal effect on precision and accuracy. Trypsin-digested proteins from Saccharomyces cerevisiae were tagged with 6-plex TMT reagents. The sample was subjected to replicate analyses using either the Orbitrap or the linear ion trap for the HCD MS(3) scan. HCD MS(3) detection in the linear ion trap vs Orbitrap increased protein identification by 66% with minor loss in precision and accuracy. Thus, the use of a linear ion trap-HCD MS(3) scan during a 6-plex TMT experiment can improve overall identification levels while maintaining the power of multiplexed quantitative analysis. PMID:27377715

  5. Precision disablement aiming system

    DOEpatents

    Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott

    2016-02-16

    A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.

  6. Precision laser aiming system

    SciTech Connect

    Ahrens, Brandon R.; Todd, Steven N.

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  7. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  8. High Accuracy Decoding of Dynamical Motion from a Large Retinal Population.

    PubMed

    Marre, Olivier; Botella-Soler, Vicente; Simmons, Kristina D; Mora, Thierry; Tkačik, Gašper; Berry, Michael J

    2015-07-01

    Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar's position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina's population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar's position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits. PMID:26132103

  9. High Accuracy Decoding of Dynamical Motion from a Large Retinal Population

    PubMed Central

    Marre, Olivier; Botella-Soler, Vicente; Simmons, Kristina D.; Mora, Thierry; Tkačik, Gašper; Berry, Michael J.

    2015-01-01

    Motion tracking is a challenge the visual system has to solve by reading out the retinal population. It is still unclear how the information from different neurons can be combined together to estimate the position of an object. Here we recorded a large population of ganglion cells in a dense patch of salamander and guinea pig retinas while displaying a bar moving diffusively. We show that the bar’s position can be reconstructed from retinal activity with a precision in the hyperacuity regime using a linear decoder acting on 100+ cells. We then took advantage of this unprecedented precision to explore the spatial structure of the retina’s population code. The classical view would have suggested that the firing rates of the cells form a moving hill of activity tracking the bar’s position. Instead, we found that most ganglion cells in the salamander fired sparsely and idiosyncratically, so that their neural image did not track the bar. Furthermore, ganglion cell activity spanned an area much larger than predicted by their receptive fields, with cells coding for motion far in their surround. As a result, population redundancy was high, and we could find multiple, disjoint subsets of neurons that encoded the trajectory with high precision. This organization allows for diverse collections of ganglion cells to represent high-accuracy motion information in a form easily read out by downstream neural circuits. PMID:26132103
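
    A minimal sketch of a linear position decoder of the kind referred to above, here a ridge regression on binned spike counts; the spike trains and tuning model are synthetic stand-ins for the recordings.

    ```python
    # Linear read-out of a diffusing bar position from a population of spiking cells.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(2)
    n_time, n_cells = 5000, 120
    position = np.cumsum(rng.standard_normal(n_time)) * 0.05       # diffusive trajectory
    tuning = 0.5 * rng.standard_normal(n_cells)                    # per-cell position tuning
    rates = np.exp(np.clip(position[:, None] * tuning[None, :], -5, 5))
    spikes = rng.poisson(rates)                                    # binned spike counts

    decoder = Ridge(alpha=1.0).fit(spikes[:4000], position[:4000])
    corr = np.corrcoef(decoder.predict(spikes[4000:]), position[4000:])[0, 1]
    print(f"decoding correlation on held-out bins: {corr:.2f}")
    ```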

  10. Precise Point Positioning in the Airborne Mode

    NASA Astrophysics Data System (ADS)

    El-Mowafy, Ahmed

    2011-01-01

    The Global Positioning System (GPS) is widely used for positioning in the airborne mode, such as in navigation as a supplementary system and for geo-referencing of cameras in mapping and surveillance by aircraft and Unmanned Aerial Vehicles (UAV). The Precise Point Positioning (PPP) approach is an attractive positioning approach based on processing of un-differenced observations from a single GPS receiver. It employs precise satellite orbits and satellite clock corrections. These data can be obtained via the internet from several sources, e.g. the International GNSS Service (IGS). The data can also be broadcast from satellites, such as via the LEX signal of the new Japanese satellite system QZSS. PPP can achieve positioning precision and accuracy at the sub-decimetre level. In this paper, the functional and stochastic mathematical modelling used in PPP is discussed. Results of applying the PPP method in an airborne test using a small fixed-wing aircraft are presented. To evaluate the performance of the PPP approach, a reference trajectory was established by differential positioning of the same GPS observations with data from a ground reference station. The coordinate results from the two approaches, PPP and differential positioning, were compared and statistically evaluated. For the test at hand, positioning accuracy at the cm-to-decimetre level was achieved for the latitude and longitude coordinates, and about double that value for height estimation.

  11. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
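
    As a small illustration of high-precision quadrature, using the mpmath library rather than the parallel scheme of the paper, an integrand with an infinite derivative at an endpoint can still be evaluated to many digits.

    ```python
    # Evaluate a definite integral to ~100 significant digits with mpmath.
    from mpmath import mp, mpf, quad, sin

    mp.dps = 100                                   # working precision: 100 digits
    # sin(x)/sqrt(x) has an infinite derivative at x = 0, the kind of endpoint
    # behaviour that high-precision quadrature schemes are designed to handle.
    value = quad(lambda x: sin(x) / x**mpf("0.5"), [0, 1])
    print(value)
    ```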

  12. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  13. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693

  14. Analytic streamline calculations on linear tetrahedra

    SciTech Connect

    Diachin, D.P.; Herzog, J.A.

    1997-06-01

    Analytic solutions for streamlines within tetrahedra are used to define operators that accurately and efficiently compute streamlines. The method presented here is based on linear interpolation, and therefore produces exact results for linear velocity fields. In addition, the method requires less computation than the forward Euler numerical method. Results are presented that compare accuracy measurements of the method with forward Euler and fourth order Runge-Kutta applied to both a linear and a nonlinear velocity field.
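
    A minimal sketch contrasting forward Euler with the exact solution for a linear velocity field is given below; the 2-D field, step count, and end time are invented, and the tetrahedral interpolation machinery of the analytic method itself is not reproduced.

    ```python
    # Forward Euler vs. exact streamline endpoint for a linear velocity field v(x) = A x.
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[0.0, -1.0],
                  [1.0, -0.2]])                    # spiral-sink velocity field
    x0 = np.array([1.0, 0.0])
    T, n_steps = 5.0, 50
    dt = T / n_steps

    x_euler = x0.copy()
    for _ in range(n_steps):
        x_euler = x_euler + dt * (A @ x_euler)     # forward Euler step

    x_exact = expm(A * T) @ x0                     # exact solution of the linear field
    print("Euler:", x_euler, "exact:", x_exact,
          "error:", np.linalg.norm(x_euler - x_exact))
    ```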

  15. [Linear accelerator radiosurgery].

    PubMed

    Brandt, R A; Salvajoli, J V; Oliveira, V C; Carmignani, M; da Cruz, J C; Leal, H D; Ferraz, L

    1995-03-01

    Radiosurgery is the precise irradiation of a known intracranial target with a high dose of energy, sparing the adjacent nervous tissue. Technological advances in the construction of linear accelerators, in stereotactic instruments, and in computer science have made this technique easier to perform and more affordable. The main indications for radiosurgery are inoperable cerebral vascular malformations, vestibular and other cranial schwannomas, skull base meningiomas, deep-seated gliomas, and cerebral metastases. More recently, the development of fractionated stereotactic radiotherapy has broadened the spectrum of indications to larger lesions and to those adjacent to critical nervous structures. We present our initial experience in the treatment of 31 patients. Adequate control of the neoplastic lesions was obtained, while adequate observation time is still needed to evaluate the results in arteriovenous malformations. PMID:7575207

  16. An improved methodology for precise geoid/quasigeoid modelling

    NASA Astrophysics Data System (ADS)

    Nesvadba, Otakar; Holota, Petr

    2016-04-01

    The paper describes recent development of a computational procedure useful for precise local quasigeoid modelling. The overall methodology is primarily based on a solution of the so-called gravimetric boundary value problem for an ellipsoidal domain (exterior to an oblate spheroid), which means that gravity disturbances on the ellipsoid are used as input data. The difference between the Earth's topography and the chosen ellipsoidal surface is handled iteratively, by analytical continuation of the gravity disturbances to the computational ellipsoid. The methodology covers an interpolation technique for the discrete gravity data which, given an a priori adopted covariance function, provides the best linear unbiased estimate of the respective quantity; a numerical integration technique developed on the surface of the ellipsoid in the spectral domain; an iterative procedure of analytical continuation in ellipsoidal coordinates; the remove-restore treatment of the atmospheric masses; an estimate of the far-zone contribution (in the case of regional data coverage); and the restore step from the obtained disturbing gravity potential to the target height anomaly. All computational steps of the procedure are modest in their consumption of computing resources, so the methodology can be used on a common personal computer without any accuracy or resolution penalty. Finally, the performance of the developed methodology is demonstrated on real-case examples related to the territories of France (Auvergne regional quasigeoid) and the Czech Republic.

  17. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  18. Precision Pointing Control System (PPCS) star tracker test

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Tests performed on the TRW precision star tracker are described. The unit tested was a two-axis gimballed star tracker designed to provide star LOS data to an accuracy of 1 to 2 sec. The tracker features a unique bearing system and utilizes thermal and mechanical symmetry techniques to achieve high precision, which can be demonstrated in a one-g environment. The test program included a laboratory evaluation of tracker functional operation, sensitivity, repeatability, and thermal stability.

  19. Linear parameter varying battery model identification using subspace methods

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Yurkovich, S.

    2011-03-01

    The advent of hybrid and plug-in hybrid electric vehicles has created a demand for more precise battery pack management systems (BMS). Among methods used to design various components of a BMS, such as state-of-charge (SoC) estimators, model based approaches offer a good balance between accuracy, calibration effort and implementability. Because models used for these approaches are typically low in order and complexity, the traditional approach is to identify linear (or slightly nonlinear) models that are scheduled based on operating conditions. These models, formally known as linear parameter varying (LPV) models, tend to be difficult to identify because they contain a large number of coefficients that require calibration. Consequently, the model identification process can be very laborious and time-intensive. This paper describes a comprehensive identification algorithm that uses linear-algebra-based subspace methods to identify a parameter varying state variable model that can describe the input-to-output dynamics of a battery under various operating conditions. Compared with previous methods, this approach is much faster and provides the user with information on the order of the system without placing an a priori structure on the system matrices. The entire process and various nuances are demonstrated using data collected from a lithium ion battery, and the focus is on applications for energy storage in automotive applications.
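
    As a loose illustration of the scheduled-linear-model idea discussed above (not the subspace algorithm of the paper), one can fit a small ARX model of voltage response to current at each state-of-charge operating point and interpolate the coefficients; the battery dynamics below are a synthetic toy.

    ```python
    # Least-squares ARX fits per SoC operating point; coefficients are then scheduled over SoC.
    import numpy as np

    def fit_arx(u, y, na=2, nb=2):
        """Fit y[k] = a1*y[k-1] + ... + b1*u[k-1] + ... by least squares."""
        m = max(na, nb)
        Phi = np.column_stack([np.roll(y, i)[m:] for i in range(1, na + 1)] +
                              [np.roll(u, j)[m:] for j in range(1, nb + 1)])
        theta, *_ = np.linalg.lstsq(Phi, y[m:], rcond=None)
        return theta

    rng = np.random.default_rng(3)
    models = {}
    for soc in (0.2, 0.5, 0.8):
        u = rng.standard_normal(2000)                        # excitation current
        a, b = 0.90 + 0.05 * soc, 0.02 + 0.01 * soc          # toy SoC-dependent dynamics
        y = np.zeros_like(u)
        for k in range(1, len(u)):
            y[k] = a * y[k - 1] + b * u[k - 1] + 1e-4 * rng.standard_normal()
        models[soc] = fit_arx(u, y)

    # Interpolating the fitted coefficients over SoC yields the parameter-varying
    # model to be evaluated online at the measured SoC.
    print({soc: np.round(th, 3) for soc, th in models.items()})
    ```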

  20. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  1. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2010-09-01

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  2. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  3. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  4. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e⁺e⁻ and pp̄ colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  5. Design and experimental performances of a piezoelectric linear actuator by means of lateral motion

    NASA Astrophysics Data System (ADS)

    Li, Jianping; Zhou, Xiaoqin; Zhao, Hongwei; Shao, Mingkun; Hou, Pengliang; Xu, Xiuquan

    2015-06-01

    A piezoelectric-driven actuator based on the lateral motion principle is proposed in this paper; it can achieve large-stroke linear motion with high resolution. One parallelogram-type flexure hinge mechanism and one piezoelectric stack are used to generate the lateral motion. The mechanical structure and working principle are discussed. A prototype was fabricated and a series of experiments was carried out to investigate its working performance. The results indicate that the maximum moving speed is about 14.25 mm s⁻¹, the maximum output force is 3.43 N, and the minimum stepping displacement is about 0.04 μm. The experiments confirm that lateral motion can be used to design piezoelectric actuators with a large moving stroke, high accuracy, and a compact size. This actuator can be used in fast tool servo systems for ultra-precision machining, precision motors for aerospace, focusing systems for optics, and so on.

  6. Density Variations Observable by Precision Satellite Orbits

    NASA Astrophysics Data System (ADS)

    McLaughlin, C. A.; Lechtenberg, T.; Hiatt, A.

    2008-12-01

    This research uses precision satellite orbits from the Challenging Minisatellite Payload (CHAMP) satellite to produce a new data source for studying density changes that occur on time scales less than a day. Precision orbit derived density is compared to accelerometer derived density. In addition, the precision orbit derived densities are used to examine density variations that have been observed with accelerometer data to see if they are observable. In particular, the research will examine the observability of geomagnetic storm time changes and polar cusp features that have been observed in accelerometer data. Currently highly accurate density data is available from three satellites with accelerometers and much lower accuracy data is available from hundreds of satellites for which two-line element sets are available from the Air Force. This paper explores a new data source that is more accurate and has better temporal resolution than the two-line element sets, and provides better spatial coverage than satellites with accelerometers. This data source will be valuable for studying atmospheric phenomena over short periods, for long term studies of the atmosphere, and for validating and improving complex coupled models that include neutral density. The precision orbit derived densities are very similar to the accelerometer derived densities, but the accelerometer can observe features with shorter temporal variations. This research will quantify the time scales observable by precision orbit derived density. The technique for estimating density is optimal orbit determination. The estimates are optimal in the least squares or minimum variance sense. Precision orbit data from CHAMP is used as measurements in a sequential measurement processing and filtering scheme. The atmospheric density is estimated as a correction to an atmospheric model.

  7. Accuracy test procedure for image evaluation techniques.

    PubMed

    Jones, R A

    1968-01-01

    A procedure has been developed to determine the accuracy of image evaluation techniques. In the procedure, a target having orthogonal test arrays is photographed with a high quality optical system. During the exposure, the target is subjected to horizontal linear image motion. The modulation transfer functions of the images in the horizontal and vertical directions are obtained using the evaluation technique. Since all other degradations are symmetrical, the quotient of the two modulation transfer functions represents the modulation transfer function of the experimentally induced linear image motion. In an accurate experiment, any discrepancy between the experimental determination and the true value is due to inaccuracy in the image evaluation technique. The procedure was used to test the Perkin-Elmer automated edge gradient analysis technique over the spatial frequency range of 0-200 c/m. This experiment demonstrated that the edge gradient technique is accurate over this region and that the testing procedure can be controlled with the desired accuracy. Similarly, the test procedure can be used to determine the accuracy of other image evaluation techniques. PMID:20062421

  8. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measured rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS, has previously been used to measure transitions of H_3^+, CH_5^+, and HCO^+ precisely and accurately, to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With this extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.
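    The idea of predicting rotational frequencies from rovibrational measurements can be sketched with combination differences: two transitions that share the same upper level differ by the spacing of their lower levels. The frequencies below are hypothetical illustrations, not measured H_3^+ values.

```python
# Hypothetical illustration of combination differences (not measured H3+ data):
# two rovibrational transitions that reach the same upper level differ by the
# energy spacing of their lower rotational levels, which is the quantity needed
# to predict a pure rotational transition.
nu_a = 2725.898  # transition from lower level A to the shared upper level (cm^-1), hypothetical
nu_b = 2726.219  # transition from lower level B to the same upper level (cm^-1), hypothetical

rotational_spacing = abs(nu_a - nu_b)  # E(B) - E(A), in cm^-1
print(f"predicted lower-state rotational spacing: {rotational_spacing:.3f} cm^-1")
```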

  9. High precision kinematic surveying with laser scanners

    NASA Astrophysics Data System (ADS)

    Gräfe, Gunnar

    2007-12-01

    The kinematic survey of roads and railways is becoming a much more common data acquisition method. The development of the Mobile Road Mapping System (MoSES) has reached a level that allows the use of kinematic survey technology for high precision applications. The system is equipped with cameras and laser scanners. For high accuracy requirements, the scanners become the main sensor group because of their geometric precision and reliability. To guarantee reliable survey results, specific calibration procedures have to be applied, which can be divided into the scanner sensor calibration as step 1 and the estimation of the geometric transformation parameters with respect to the vehicle coordinate system as step 2. Both calibration steps include new methods for sensor behavior modeling and multisensor system integration. To verify the laser scanner quality of the MoSES system, the results are regularly checked along different test routes. It can be shown that a standard deviation of 0.004 m in the height of the scanner points is obtained if the specific calibration and data processing methods are applied. This level of accuracy opens new possibilities for serving engineering survey applications with kinematic measurement techniques. The key feature of scanner technology is the full digital coverage of the road area. Three application examples illustrate the capabilities. Digital road surface models generated from MoSES data are used especially for road surface reconstruction tasks along highways. Compared to static surveys, the method offers comparable accuracy at higher speed, lower cost, much higher grid resolution, and greater safety. The system's capability of acquiring 360° profiles leads to other complex applications such as kinematic tunnel surveys or the precise analysis of bridge clearances.
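    Step 2 of the calibration, once the boresight angles and lever arm have been estimated, amounts to a rigid-body transformation of scanner points into the vehicle coordinate system. The sketch below illustrates that application only; the rotation convention and all numerical parameters are assumptions, not MoSES calibration values.

```python
# Minimal sketch of applying the step-2 calibration result: transform scanner
# points into the vehicle coordinate system using an estimated rotation (roll,
# pitch, yaw) and lever arm. The numeric parameters are hypothetical, not the
# MoSES calibration values.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Rotation from scanner frame to vehicle frame (Z-Y-X convention, assumed)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

# Hypothetical boresight angles (rad) and lever arm (m) from the calibration
R = rotation_matrix(0.002, -0.001, 1.571)
t = np.array([1.20, 0.35, 2.05])

scanner_points = np.array([[5.0, 0.1, -1.8],   # example scanner measurements (m)
                           [4.8, -0.2, -1.7]])
vehicle_points = scanner_points @ R.T + t       # rotate, then add the lever arm
print(vehicle_points)
```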

  10. High-precision early mission narrow angle science with the Space Interferometry Mission

    NASA Technical Reports Server (NTRS)

    Shaklan, S.; Milman, M. H.; Pan, X.

    2002-01-01

    We have developed a technique that allows SIM to measure relative stellar positions with an accuracy of 1 micro-arcsecond at any time during its 5-yr mission. Unlike SIM's standard narrow-angle approach, Gridless Narrow Angle Astrometry (GNAA) does not rely on the global reference frame of grid stars, which reaches full accuracy only after 5 years. GNAA is simply the application of traditional single-telescope narrow-angle techniques to SIM's narrow-angle optical path delay measurements. In GNAA, a set of reference stars and a target star are observed at several baseline orientations. A linearized model uses the delay measurements to solve for star positions and baseline orientations, and a conformal transformation maps observations at different epochs to a common reference frame. The technique works on short-period signals (P = days to months), allowing it to be applied to many of the known extra-solar planets, intriguing radio/X-ray binaries, and other periodic sources. The technique's accuracy is limited in the long term by false acceleration due to a combination of reference star and target star proper motion. This science capability, 1 micro-arcsecond astrometric precision, is unique to SIM.
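    A much simplified, one-dimensional version of the linearized solve can be sketched as follows. It is not the SIM data reduction: it assumes a known baseline length, models each delay as B cos(theta_i - phi_j), and solves jointly for small corrections to the star angles and baseline orientations by linear least squares, with one reference star held fixed to remove the common rotation.

```python
# Simplified 1-D sketch of a GNAA-style linearized solve (illustrative only):
# delays d_ij = B*cos(theta_i - phi_j) are measured for stars i at baseline
# orientations j; small corrections to the star angles and to the baseline
# orientations are solved simultaneously by linear least squares.
import numpy as np

rng = np.random.default_rng(1)
B = 10.0                                                         # baseline length (m), arbitrary

theta_true = np.array([0.000000, 0.300000, 0.600002, 0.899998])  # true star angles (rad)
theta_nom = np.array([0.0, 0.3, 0.6, 0.9])                       # nominal (catalog) angles
phi_nom = np.array([0.10, 0.70, 1.30, 1.90, 2.50])               # nominal orientations
phi_true = phi_nom + 1e-6 * rng.standard_normal(5)               # small orientation errors

# Simulated delay measurements with small noise
d = B * np.cos(theta_true[:, None] - phi_true[None, :])
d += 1e-9 * rng.standard_normal(d.shape)

# Linearization: d_ij - B*cos(t_i - p_j) ~ -B*sin(t_i - p_j) * (dtheta_i - dphi_j)
n_star, n_orient = len(theta_nom), len(phi_nom)
rows, rhs = [], []
for i in range(n_star):
    for j in range(n_orient):
        s = -B * np.sin(theta_nom[i] - phi_nom[j])
        row = np.zeros(n_star + n_orient)
        row[i] = s                 # coefficient of dtheta_i
        row[n_star + j] = -s       # coefficient of dphi_j
        rows.append(row)
        rhs.append(d[i, j] - B * np.cos(theta_nom[i] - phi_nom[j]))

A, b = np.array(rows), np.array(rhs)
A = A[:, 1:]                       # fix dtheta_0 = 0 (star 0 is the reference)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
dtheta = np.concatenate([[0.0], x[:n_star - 1]])
print("recovered star-angle corrections (rad):", dtheta)
print("true corrections (rad):                ", theta_true - theta_nom)
```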

  11. Precision estimates for tomographic nondestructive assay

    SciTech Connect

    Prettyman, T.H.

    1995-12-31

    One technique being applied to improve the accuracy of assays of waste in large containers is computerized tomography (CT). Research on the application of CT to improve both neutron and gamma-ray assays of waste is being carried out at LANL. For example, tomographic gamma scanning (TGS) is a single-photon emission CT technique that corrects for the attenuation of gamma rays emitted from the sample using attenuation images from transmission CT. By accounting for the distribution of emitting material and correcting for the attenuation of the emitted gamma rays, TGS is able to achieve highly accurate assays of radionuclides in medium-density wastes. It is important to develop methods to estimate the precision of such assays, and this paper explores the problem by examining precision estimators for TGS.
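    As a generic illustration of how a precision estimate for such an assay can be formed (this is not the TGS estimator developed in the paper), suppose the assay is a weighted sum of attenuation-corrected voxel counts and the raw counts are Poisson distributed; the assay variance then follows from standard error propagation. All numbers below are placeholders.

```python
# Generic illustration (not the paper's TGS estimator): if an assay is a
# weighted sum of attenuation-corrected counts, A = sum_i w_i * c_i, and the
# raw counts c_i are Poisson distributed, the variance propagates as
# var(A) = sum_i w_i^2 * c_i. All numbers are placeholders.
import numpy as np

counts = np.array([120.0, 340.0, 95.0, 210.0])       # raw counts per voxel (Poisson)
atten_corr = np.array([1.8, 2.4, 1.3, 3.1])          # attenuation correction factors
calib = 0.05                                          # counts-to-activity calibration

weights = calib * atten_corr                          # per-voxel weights
assay = np.sum(weights * counts)
assay_sigma = np.sqrt(np.sum(weights**2 * counts))    # Poisson: var(c_i) = c_i

print(f"assay = {assay:.1f} +/- {assay_sigma:.1f} (relative precision {assay_sigma/assay:.1%})")
```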

  12. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel, preventing it from predetonating the HE pellet before the flyer arrives. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  13. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ/μp, the lepton mass ratio mμ/me, and the proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  14. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10-12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  15. Progressive Precision Surface Design

    SciTech Connect

    Duchaineau, M; Joy, KJ

    2002-01-11

    We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
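    The coarse-to-fine idea can be sketched in one dimension: refine an interval only where the current approximation misses the queried function by more than a tolerance, so the work done tracks the size of the compressed output. The code below is a generic illustration under that assumption, not the paper's dyadic-spline algorithm.

```python
# Generic coarse-to-fine refinement sketch (not the dyadic-spline algorithm):
# a 1-D function is approximated by piecewise-linear segments, and an interval
# is subdivided only where the midpoint residual exceeds a tolerance, so the
# work tracks the size of the compressed output rather than a fixed resolution.
import math

def target(x):
    # Placeholder "query" standing in for the underlying surface/function
    return math.sin(6.0 * x) + 0.3 * math.sin(25.0 * x)

def refine(a, b, tol, depth=0, max_depth=12):
    fa, fb = target(a), target(b)
    mid = 0.5 * (a + b)
    residual = abs(target(mid) - 0.5 * (fa + fb))   # deviation from the linear interpolant
    if residual <= tol or depth >= max_depth:
        return [(a, b)]                             # interval is accurate enough
    return refine(a, mid, tol, depth + 1, max_depth) + refine(mid, b, tol, depth + 1, max_depth)

segments = refine(0.0, 1.0, tol=1e-3)
print(f"{len(segments)} segments needed at tol = 1e-3")
```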

  16. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or ``flyer`` is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel, preventing it from predetonating the HE pellet before the flyer arrives. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  17. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, this study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes using the centroid method, the proposed microscope incorporates a high-speed rotating optical diffuser, which reduces the variation of the image centroid position and consequently improves the focusing response. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents an alternative solution for both existing and emerging industrial applications of automated vision inspection.
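    For context, the conventional centroid method that the study contrasts against can be sketched as follows: the intensity centroid of the detected spot is computed from the sensor image, and its displacement from the in-focus reference position serves as the focus error signal; averaging several frames stands in for the smoothing effect of the rotating diffuser. This is an illustrative sketch, not the authors' implementation, and all names and numbers are hypothetical.

```python
# Sketch of a centroid-based focus-error measurement (illustrative only): the
# spot's intensity centroid is computed from the sensor image, and its shift
# from the in-focus reference position is used as the focus error signal.
# Averaging several frames mimics the smoothing effect of the rotating diffuser.
import numpy as np

def spot_image(cx, cy, size=64, sigma=4.0, noise=0.02, rng=None):
    """Synthetic Gaussian spot with additive noise (placeholder for a camera frame)."""
    rng = rng or np.random.default_rng()
    y, x = np.mgrid[0:size, 0:size]
    img = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma**2))
    return img + noise * rng.standard_normal(img.shape)

def centroid(img):
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    total = img.sum()
    return (x * img).sum() / total, (y * img).sum() / total

rng = np.random.default_rng(2)
reference = (32.0, 32.0)                 # centroid position when in focus (assumed)
frames = [spot_image(34.5, 32.0, rng=rng) for _ in range(8)]

# Averaging frames reduces the centroid jitter caused by noise
cx, cy = centroid(np.mean(frames, axis=0))
focus_error = cx - reference[0]          # defocus maps (approximately) to a lateral shift
print(f"measured focus error signal: {focus_error:+.2f} px")
```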

  18. Electron Bunch Timing with Femtosecond Precision in a Superconducting Free-Electron Laser

    SciTech Connect

    Loehl, F.; Arsov, V.; Felber, M.; Hacker, K.; Lorbeer, B.; Ludwig, F.; Matthiesen, K.-H.; Schlarb, H.; Schmidt, B.; Winter, A.; Jalmuzna, W.; Schmueser, P.; Schulz, S.; Zemella, J.; Szewinski, J.

    2010-04-09

    High-gain free-electron lasers (FELs) are capable of generating femtosecond x-ray pulses with peak brilliances many orders of magnitude higher than at other existing x-ray sources. In order to fully exploit the opportunities offered by these femtosecond light pulses in time-resolved experiments, an unprecedented synchronization accuracy is required. In this Letter, we distributed the pulse train of a mode-locked fiber laser with femtosecond stability to different locations in the linear accelerator of the soft x-ray FEL FLASH. A novel electro-optic detection scheme was applied to measure the electron bunch arrival time with an as yet unrivaled precision of 6 fs (rms). With two beam-based feedback systems we succeeded in stabilizing both the arrival time and the electron bunch compression process within two magnetic chicanes, yielding a significant reduction of the FEL pulse energy jitter.
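    The effect of a beam-based feedback on arrival-time stability can be sketched with a simple integral controller acting on a jittering arrival time: slow drifts are suppressed while fast shot-to-shot jitter passes through largely unchanged. The sketch below is a generic illustration with hypothetical numbers, not the FLASH feedback system.

```python
# Generic sketch of a beam-based arrival-time feedback loop (illustrative only,
# not the FLASH system): the measured bunch arrival time is fed back through a
# simple integral controller acting on a phase-like actuator, which suppresses
# slow drifts while the fast shot-to-shot jitter remains. Numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(3)
n_shots = 2000
drift = 80e-15 * np.sin(np.linspace(0, 6 * np.pi, n_shots))  # slow drift (s)
jitter = 6e-15 * rng.standard_normal(n_shots)                # fast jitter (s)

gain = 0.1            # integral gain per shot
actuator = 0.0        # accumulated correction, expressed in seconds
measured = np.empty(n_shots)

for k in range(n_shots):
    arrival = drift[k] + jitter[k] - actuator   # arrival time seen by the monitor
    measured[k] = arrival
    actuator += gain * arrival                  # integral feedback update

print(f"rms without feedback: {np.std(drift + jitter) * 1e15:6.1f} fs")
print(f"rms with feedback:    {np.std(measured) * 1e15:6.1f} fs")
```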

  19. Electron bunch timing with femtosecond precision in a superconducting free-electron laser.

    PubMed

    Löhl, F; Arsov, V; Felber, M; Hacker, K; Jalmuzna, W; Lorbeer, B; Ludwig, F; Matthiesen, K-H; Schlarb, H; Schmidt, B; Schmüser, P; Schulz, S; Szewinski, J; Winter, A; Zemella, J

    2010-04-01

    High-gain free-electron lasers (FELs) are capable of generating femtosecond x-ray pulses with peak brilliances many orders of magnitude higher than at other existing x-ray sources. In order to fully exploit the opportunities offered by these femtosecond light pulses in time-resolved experiments, an unprecedented synchronization accuracy is required. In this Letter, we distributed the pulse train of a mode-locked fiber laser with femtosecond stability to different locations in the linear accelerator of the soft x-ray FEL FLASH. A novel electro-optic detection scheme was applied to measure the electron bunch arrival time with an as yet unrivaled precision of 6 fs (rms). With two beam-based feedback systems we succeeded in stabilizing both the arrival time and the electron bunch compression process within two magnetic chicanes, yielding a significant reduction of the FEL pulse energy jitter. PMID:20481941

  20. Electron Bunch Timing with Femtosecond Precision in a Superconducting Free-Electron Laser

    NASA Astrophysics Data System (ADS)

    Löhl, F.; Arsov, V.; Felber, M.; Hacker, K.; Jalmuzna, W.; Lorbeer, B.; Ludwig, F.; Matthiesen, K.-H.; Schlarb, H.; Schmidt, B.; Schmüser, P.; Schulz, S.; Szewinski, J.; Winter, A.; Zemella, J.

    2010-04-01

    High-gain free-electron lasers (FELs) are capable of generating femtosecond x-ray pulses with peak brilliances many orders of magnitude higher than at other existing x-ray sources. In order to fully exploit the opportunities offered by these femtosecond light pulses in time-resolved experiments, an unprecedented synchronization accuracy is required. In this Letter, we distributed the pulse train of a mode-locked fiber laser with femtosecond stability to different locations in the linear accelerator of the soft x-ray FEL FLASH. A novel electro-optic detection scheme was applied to measure the electron bunch arrival time with an as yet unrivaled precision of 6 fs (rms). With two beam-based feedback systems we succeeded in stabilizing both the arrival time and the electron bunch compression process within two magnetic chicanes, yielding a significant reduction of the FEL pulse energy jitter.