Science.gov

Sample records for accuracy precision limits

  1. Accuracy, precision, and method detection limits of quantitative PCR for airborne bacteria and fungi.

    PubMed

    Hospodsky, Denina; Yamamoto, Naomichi; Peccia, Jordan

    2010-11-01

    Real-time quantitative PCR (qPCR) for rapid and specific enumeration of microbial agents is finding increased use in aerosol science. The goal of this study was to determine qPCR accuracy, precision, and method detection limits (MDLs) within the context of indoor and ambient aerosol samples. Escherichia coli and Bacillus atrophaeus vegetative bacterial cells and Aspergillus fumigatus fungal spores loaded onto aerosol filters were considered. Efficiencies associated with recovery of DNA from aerosol filters were low, and excluding these efficiencies in quantitative analysis led to underestimating the true aerosol concentration by 10 to 24 times. Precision near detection limits ranged from a 28% to 79% coefficient of variation (COV) for the three test organisms, and the majority of this variation was due to instrument repeatability. Depending on the organism and sampling filter material, precision results suggest that qPCR is useful for determining dissimilarity between two samples only if the true differences are greater than 1.3 to 3.2 times (95% confidence level at n = 7 replicates). For MDLs, qPCR was able to produce a positive response with 99% confidence from the DNA of five B. atrophaeus cells and less than one A. fumigatus spore. Overall MDL values that included sample processing efficiencies ranged from 2,000 to 3,000 B. atrophaeus cells per filter and 10 to 25 A. fumigatus spores per filter. Applying the concepts of accuracy, precision, and MDL to qPCR aerosol measurements demonstrates that sample processing efficiencies must be accounted for in order to accurately estimate bioaerosol exposure, provides guidance on the necessary statistical rigor required to understand significant differences among separate aerosol samples, and prevents undetected (i.e., nonquantifiable) values for true aerosol concentrations that may be significant.
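    The detectable fold-difference quoted above follows from the replicate precision; a rough sketch of that calculation is given below, assuming approximately lognormal replicate scatter and a two-sample t-test on log-transformed concentrations. The function name and the test choice are illustrative, not the authors' exact statistical procedure.

```python
from math import exp, log, sqrt

from scipy import stats


def min_detectable_ratio(cov, n=7, alpha=0.05):
    """Rough minimum fold-difference resolvable between two groups of n
    replicates whose linear-scale coefficient of variation is `cov`.
    Assumes lognormal scatter and a two-sample t-test on log values."""
    sd_log = sqrt(log(1.0 + cov ** 2))            # SD on the natural-log scale
    t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=2 * n - 2)
    return exp(t_crit * sd_log * sqrt(2.0 / n))   # detectable ratio of group means


for cov in (0.28, 0.79):                          # COV range reported near the MDL
    print(f"COV = {cov:.2f} -> detectable ratio = {min_detectable_ratio(cov):.1f}x")
```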

  2. Accuracy and precision of manual baseline determination.

    PubMed

    Jirasek, A; Schulze, G; Yu, M M L; Blades, M W; Turner, R F B

    2004-12-01

    Vibrational spectra often require baseline removal before further data analysis can be performed. Manual (i.e., user) baseline determination and removal is a common technique used to perform this operation. Currently, little data exists detailing the accuracy and precision that can be expected from manual baseline removal techniques. This study addresses that lack of data. One hundred spectra of varying signal-to-noise ratio (SNR), signal-to-baseline ratio (SBR), baseline slope, and spectral congestion were constructed, and baselines were subtracted by 16 volunteers who were categorized as either experienced or inexperienced in baseline determination. In total, 285 baseline determinations were performed. The general level of accuracy and precision that can be expected for manually determined baselines from spectra of varying SNR, SBR, baseline slope, and spectral congestion is established. Furthermore, the effects of user experience on the accuracy and precision of baseline determination are estimated. The interactions between the above factors in affecting the accuracy and precision of baseline determination are highlighted. Where possible, the functional relationships between accuracy, precision, and the given spectral characteristic are detailed. The results provide users of manual baseline determination with useful guidelines for establishing limits of accuracy and precision, as well as highlighting conditions that confound the accuracy and precision of manual baseline determination.

  3. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only at the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval for the specific measurement. PMID:27044032

  4. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J.; Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) were investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ± 3% in dose over the range of sample points. For some points in high-dose gradients

  5. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

    Self-monitoring of blood glucose started only fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests to monitor blood glucose, and in the late seventies meters were launched on the market. Initially the use of such devices was intended for medical staff, but thanks to improvements in ease of use they became more and more suitable for patients and are now a necessary tool for self-monitoring of blood glucose. Advances in technology allowed the development of photometric measurements and, more recently, electrochemical ones. In the nineties, improvements were made mainly in meter miniaturisation, reduction of reaction and reading times, and simplification of blood sampling and capillary blood application. Although concern for accuracy and precision was at the heart of considerations from the beginning of self-monitoring of blood glucose, recommendations from diabetology societies only appeared in the late eighties. Now, the French drug agency AFSSAPS requires that meters be evaluated before any launch on the market. According to recent publications, very few meters meet the reliability criteria set up by diabetology societies in the late nineties. Finally, because devices may be handled by numerous persons in hospitals, the use of meters as a possible source of nosocomial infections has recently been questioned and is the subject of very strict guidelines published by AFSSAPS.

  6. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in a few millimetres of positioning rms error in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment, as unlike the epoch-by-epoch processing in the Standard Position Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
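    Steps 1) and 2) of the error-budget outline above can be illustrated in a few lines of code: independent ranging errors are combined in quadrature into a user-equivalent range error and then scaled by DOP. The error magnitudes and DOP values below are placeholders, not figures from the study, and the filtering step 3) is not modelled.

```python
from math import sqrt

# Hypothetical one-sigma ranging errors (metres) for a PPP-style budget.
error_sources_m = {
    "orbit": 0.025,
    "clock": 0.030,
    "troposphere_residual": 0.050,
    "multipath": 0.100,
    "receiver_noise": 0.050,
}

# Step 1: combine independent error sources in quadrature (UERE).
uere = sqrt(sum(v ** 2 for v in error_sources_m.values()))

# Step 2: map ranging error to position error via Dilution Of Precision.
hdop, vdop = 1.0, 1.8   # assumed satellite-geometry values
print(f"UERE = {uere:.3f} m")
print(f"horizontal = {uere * hdop:.3f} m, vertical = {uere * vdop:.3f} m")
```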

  7. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

    We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines and outline its precision and the accuracy limiting factors.

  8. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

    A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 µin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 µin. for the 0.120 in. standard, 3.83 µin. for the 0.100 in. standard, and 4.2 µin. for the 0.050 in. standard.
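    For readers unfamiliar with how such figures are derived, a minimal sketch follows, taking accuracy as the deviation of the mean reading from the gauge-block reference and precision as the spread (1 SD) of repeats. The readings are invented for illustration and are not data from the study.

```python
import numpy as np


def accuracy_precision_uin(readings_in, reference_in):
    """Accuracy = |mean - reference|, precision = 1 SD of repeated readings,
    both converted from inches to microinches."""
    r = np.asarray(readings_in, dtype=float)
    return abs(r.mean() - reference_in) * 1e6, r.std(ddof=1) * 1e6


readings = [0.1200032, 0.1200041, 0.1199995, 0.1200018, 0.1200024]
acc, prec = accuracy_precision_uin(readings, reference_in=0.120)
print(f"accuracy = {acc:.1f} uin, precision = {prec:.1f} uin")
```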

  9. Precision and accuracy in diffusion tensor magnetic resonance imaging.

    PubMed

    Jones, Derek K

    2010-04-01

    This article reviews some of the key factors influencing the accuracy and precision of quantitative metrics derived from diffusion magnetic resonance imaging data. It focuses on the study pipeline beginning at the choice of imaging protocol, through preprocessing and model fitting up to the point of extracting quantitative estimates for subsequent analysis. The aim was to provide the newcomers to the field with sufficient knowledge of how their decisions at each stage along this process might impact on precision and accuracy, to design their study/approach, and to use diffusion tensor magnetic resonance imaging in the clinic. More specifically, emphasis is placed on improving accuracy and precision. I illustrate how careful choices along the way can substantially affect the sample size needed to make an inference from the data.

  10. Accuracy and precision of temporal artery thermometers in febrile patients.

    PubMed

    Wolfson, Margaret; Granstrom, Patsy; Pomarico, Bernie; Reimanis, Cathryn

    2013-01-01

    The noninvasive temporal artery thermometer offers a way to measure temperature when oral assessment is contraindicated, uncomfortable, or difficult to obtain. In this study, the accuracy and precision of the temporal artery thermometer exceeded levels recommended by experts for use in acute care clinical practice.

  11. Accuracy-precision trade-off in visual orientation constancy.

    PubMed

    De Vrijer, M; Medendorp, W P; Van Gisbergen, J A M

    2009-02-09

    Using the subjective visual vertical task (SVV), previous investigations on the maintenance of visual orientation constancy during lateral tilt have found two opposite bias effects in different tilt ranges. The SVV typically shows accurate performance near upright but severe undercompensation at tilts beyond 60 deg (A-effect), frequently with slight overcompensation responses (E-effect) in between. Here we investigate whether a Bayesian spatial-perception model can account for this error pattern. The model interprets A- and E-effects as the drawback of a computational strategy, geared at maintaining visual stability with optimal precision at small tilt angles. In this study, we test whether these systematic errors can be seen as the consequence of a precision-accuracy trade-off when combining a veridical but noisy signal about eye orientation in space with the visual signal. To do so, we used a psychometric approach to assess both precision and accuracy of the SVV in eight subjects laterally tilted at 9 different tilt angles (-120 degrees to 120 degrees). Results show that SVV accuracy and precision worsened with tilt angle, according to a pattern that could be fitted quite adequately by the Bayesian model. We conclude that spatial vision essentially follows the rules of Bayes' optimal observer theory.
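    A toy Gaussian version of the kind of Bayesian observer invoked above makes the accuracy-precision trade-off concrete: shrinking a noisy tilt signal toward an "upright" prior improves precision at small tilts but produces systematic undercompensation (A-effect) at large tilts. The noise parameters below are arbitrary, and the sketch is not the authors' fitted model.

```python
from math import sqrt


def tilt_estimate(true_tilt_deg, sigma_sensor=15.0, sigma_prior=30.0):
    """Posterior-mean head-tilt estimate from a noisy tilt signal combined
    with a zero-centred (upright) Gaussian prior, plus the posterior SD."""
    w = sigma_prior ** 2 / (sigma_prior ** 2 + sigma_sensor ** 2)
    return w * true_tilt_deg, sqrt(w) * sigma_sensor


for tilt in (0, 30, 60, 90, 120):
    est, sd = tilt_estimate(tilt)
    print(f"tilt {tilt:4d} deg: bias {est - tilt:6.1f} deg, sd {sd:4.1f} deg")
```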

  12. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitations of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios involving these factors are evaluated according to normal operating conditions of power grid measurement. Based on the evaluation and simulation, the errors in phase angle and frequency caused by each factor are calculated and discussed.

  13. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  14. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  15. Fundamental Limits of Scintillation Detector Timing Precision

    PubMed Central

    Derenzo, Stephen E.; Choong, Woon-Seng; Moses, William W.

    2014-01-01

    In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10,000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A^(-1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(-1/2). An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10,000 photoelectrons/ns. A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10,000 photoelectrons/ns. PMID:24874216

  16. Fundamental limits of scintillation detector timing precision.

    PubMed

    Derenzo, Stephen E; Choong, Woon-Seng; Moses, William W

    2014-07-01

    In this paper we review the primary factors that affect the timing precision of a scintillation detector. Monte Carlo calculations were performed to explore the dependence of the timing precision on the number of photoelectrons, the scintillator decay and rise times, the depth of interaction uncertainty, the time dispersion of the optical photons (modeled as an exponential decay), the photodetector rise time and transit time jitter, the leading-edge trigger level, and electronic noise. The Monte Carlo code was used to estimate the practical limits on the timing precision for an energy deposition of 511 keV in 3 mm × 3 mm × 30 mm Lu2SiO5:Ce and LaBr3:Ce crystals. The calculated timing precisions are consistent with the best experimental literature values. We then calculated the timing precision for 820 cases that sampled scintillator rise times from 0 to 1.0 ns, photon dispersion times from 0 to 0.2 ns, photodetector time jitters from 0 to 0.5 ns fwhm, and A from 10 to 10,000 photoelectrons per ns decay time. Since the timing precision R was found to depend on A^(-1/2) more than any other factor, we tabulated the parameter B, where R = B·A^(-1/2). An empirical analytical formula was found that fit the tabulated values of B with an rms deviation of 2.2% of the value of B. The theoretical lower bound of the timing precision was calculated for the example of 0.5 ns rise time, 0.1 ns photon dispersion, and 0.2 ns fwhm photodetector time jitter. The lower bound was at most 15% lower than leading-edge timing discrimination for A from 10 to 10,000 photoelectrons ns^(-1). A timing precision of 8 ps fwhm should be possible for an energy deposition of 511 keV using currently available photodetectors if a theoretically possible scintillator were developed that could produce 10,000 photoelectrons ns^(-1).
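    The scaling R = B·A^(-1/2) reported in the two records above can be applied directly; a small sketch is given below, with an illustrative value of B rather than one taken from the paper's tables.

```python
def timing_fwhm_ps(B_ps, A_pe_per_ns):
    """Timing precision R (fwhm, ps) from the empirical scaling
    R = B * A**-0.5, where A is photoelectrons per ns of decay time."""
    return B_ps * A_pe_per_ns ** -0.5


B_example = 800.0  # ps; assumed prefactor, not a tabulated value
for A in (10, 100, 1000, 10000):
    print(f"A = {A:5d} pe/ns -> R = {timing_fwhm_ps(B_example, A):6.1f} ps fwhm")
```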

  17. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

    Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in the post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development which promise to provide satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth order reflected light from a grating of lines. Normal incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library based spectra matching and realtime regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares measurement performance of several scatterometers. How well do scatterometers track process induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports upon work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  18. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface "macroroughness" and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface "macroroughness" and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  19. Measuring changes in Plasmodium falciparum transmission: precision, accuracy and costs of metrics.

    PubMed

    Tusting, Lucy S; Bousema, Teun; Smith, David L; Drakeley, Chris

    2014-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review 11 metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs and presenting an overall critique. We also review the nonlinear scaling relationships between five metrics of malaria transmission: the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our chapter highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, seroconversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection.

  20. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the ⁷⁵Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  1. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, an accuracy of 20 cm is now available in real time with DIODE and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  2. Higgs triplets and limits from precision measurements

    SciTech Connect

    Chen, Mu-Chun; Dawson, Sally; Krupovnickas, Tadas; /Brookhaven

    2006-04-01

    In this letter, the authors present the results of a global fit to precision electroweak data in a Higgs triplet model. In models with a triplet Higgs boson, a consistent renormalization scheme differs from that of the Standard Model, and the global fit shows that a light Higgs boson with mass of 100-200 GeV is preferred. Triplet Higgs bosons arise in many extensions of the Standard Model, including the left-right model and the Little Higgs models. The result demonstrates the importance of the scalar loops when there is a large mass splitting between the heavy scalars. It also indicates the significance of the global fit.

  3. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  4. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases into sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than sample scenarios with few sample units of large area. However, sample scenarios with few sample units of large area provided more precise abundance estimates than abundance estimates derived from sample scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, such consideration is often an afterthought that occurs during the data analysis process.

  5. [Accuracy and precision in the evaluation of computer assisted surgical systems. A definition].

    PubMed

    Strauss, G; Hofer, M; Korb, W; Trantakis, C; Winkler, D; Burgert, O; Schulz, T; Dietz, A; Meixensberger, J; Koulechov, K

    2006-02-01

    Accuracy represents the outstanding criterion for navigation systems. Surgeons have noticed a great discrepancy between the values from the literature and system specifications on the one hand, and intraoperative accuracy on the other. A unified understanding of the term accuracy does not exist in clinical practice. Furthermore, an incorrect equating of the terms precision and accuracy can be found in the literature. On top of this, clinical accuracy differs from mechanical (technical) accuracy. From a clinical point of view, we had to deal with a remarkable number of different terms, all describing accuracy. This study has the goals of: 1. Defining "accuracy" and related terms, 2. Differentiating between "precision" and "accuracy", 3. Deriving the term "surgical accuracy", 4. Recommending use of the term "surgical accuracy" for a navigation system. To a great extent, definitions were applied from the International Organization for Standardization (ISO) and the norm from the Deutsches Institut für Normung e.V. (DIN, the German Institute for Standardization). For defining surgical accuracy, the terms reference value, expectation, accuracy and precision are of major interest. Surgical accuracy should indicate the maximum values for the deviation between test results and the reference value (true value) A(max), and additionally indicate precision P(surg). As a basis for measurements, a standardized technical model was used. Coordinates of the model were acquired by CT. To determine statistically relevant and realistic results for head surgery, 50 measurements at distances of 50, 75, 100 and 150 mm from the centre of the registration geometry are adequate. In the future, we recommend labeling the system's overall performance with the following specifications: maximum accuracy deviation A(max), precision P and information on the measurement method. This could be displayed on a seal of quality.
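    Under the definition proposed above, a navigation system's performance would be reported as the maximum deviation A(max) together with a precision figure P; a minimal sketch of that reporting is shown below, using simulated deviations and taking P as the standard deviation, which is an assumption about the exact precision statistic intended.

```python
import numpy as np

rng = np.random.default_rng(1)
# 50 simulated target-registration deviations (mm); illustrative values only.
deviations = np.abs(rng.normal(0.0, 0.4, size=50))

a_max = deviations.max()          # maximum deviation from the reference value
p_surg = deviations.std(ddof=1)   # precision, here taken as 1 SD
print(f"A(max) = {a_max:.2f} mm, P = {p_surg:.2f} mm")
```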

  6. Accuracy and precision in measurements of biomass oxidative ratios

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.

    2005-12-01

    One fundamental property of the Earth system is the oxidative ratio (OR) of the terrestrial biosphere, or the mols CO2 fixed per mols O2 released via photosynthesis. This is also an essential, poorly constrained parameter in the calculation of the size of the terrestrial and oceanic carbon sinks via atmospheric O2 and CO2 measurements. We are pursuing a number of techniques to accurately measure natural variations in above- and below-ground OR. For aboveground biomass, OR can be calculated directly from percent C, H, N, and O data measured via elemental analysis; however, the precision of this technique is a function of 4 measurements, resulting in increased data variability. It is also possible to measure OR via bomb calorimetry and percent C, using relationships between the heat of combustion of a sample and its OR. These measurements hold the potential for generation of more precise data, as error depends only on 2 measurements instead of 4. We present data comparing these two OR measurement techniques.

  7. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  8. Gamma-Ray Peak Integration: Accuracy and Precision

    SciTech Connect

    Richard M. Lindstrom

    2000-11-12

    The accuracy of singlet gamma-ray peak areas obtained by a peak analysis program is immaterial. If the same algorithm is used for sample measurement as for calibration and if the peak shapes are similar, then biases in the integration method cancel. Reproducibility is the only important issue. Even the uncertainty of the areas computed by the program is trivial because the true standard uncertainty can be experimentally assessed by repeated measurements of the same source. Reproducible peak integration was important in a recent standard reference material certification task. The primary tool used for spectrum analysis was SUM, a National Institute of Standards and Technology interactive program to sum peaks and subtract a linear background, using the same channels to integrate all 20 spectra. For comparison, this work examines other peak integration programs. Unlike some published comparisons of peak performance in which synthetic spectra were used, this experiment used spectra collected for a real (though exacting) analytical project, analyzed by conventional software used in routine ways. Because both components of the 559- to 564-keV doublet are from ⁷⁶As, they were integrated together with SUM. The other programs, however, deconvoluted the peaks. A sensitive test of the fitting algorithm is the ratio of reported peak areas. In almost all the cases, this ratio was much more variable than expected from the uncertainties reported by the program. Other comparisons to be reported indicate that peak integration is still an imperfect tool in the analysis of gamma-ray spectra.
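    The sum-and-subtract strategy attributed to SUM above can be sketched in a few lines: gross counts over the peak channels minus a linear background interpolated from flanking regions, with a counting-statistics uncertainty. The channel limits and synthetic spectrum are illustrative; this sketch is not the NIST program itself.

```python
import numpy as np


def net_peak_area(counts, peak_lo, peak_hi, bg_width=5):
    """Net peak area and its uncertainty: sum channels peak_lo..peak_hi and
    subtract a linear background estimated from bg_width channels on each
    side (assumes the peak is not at the spectrum edge)."""
    counts = np.asarray(counts, dtype=float)
    n_chan = peak_hi - peak_lo + 1
    gross = counts[peak_lo:peak_hi + 1].sum()
    bg_sum = (counts[peak_lo - bg_width:peak_lo].sum()
              + counts[peak_hi + 1:peak_hi + 1 + bg_width].sum())
    background = bg_sum / (2.0 * bg_width) * n_chan
    sigma = np.sqrt(gross + bg_sum * (n_chan / (2.0 * bg_width)) ** 2)
    return gross - background, sigma


# Synthetic spectrum: flat ~50-count background with a peak in channels 95-105.
rng = np.random.default_rng(2)
spectrum = rng.poisson(50, size=200)
spectrum[95:106] += rng.poisson(400, size=11)
print(net_peak_area(spectrum, 95, 105))
```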

  9. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry- based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.
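    As an illustration of the elemental-analysis route to OR mentioned in this record and in record 6 above, the sketch below uses the commonly cited relation Cox = (2·O − H + 3·N)/C (molar) and OR = 1 − Cox/4, which assumes organic nitrogen at the −3 oxidation state and neglects nitrogen-source corrections; it is a generic formula, not necessarily the authors' exact protocol.

```python
MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol


def oxidative_ratio(pct_c, pct_h, pct_n, pct_o):
    """OR from elemental mass percentages via carbon oxidation state:
    Cox = (2*O - H + 3*N)/C in moles, OR = 1 - Cox/4."""
    n_c = pct_c / MASS["C"]
    n_h = pct_h / MASS["H"]
    n_n = pct_n / MASS["N"]
    n_o = pct_o / MASS["O"]
    cox = (2.0 * n_o - n_h + 3.0 * n_n) / n_c
    return 1.0 - cox / 4.0


# Cellulose-like composition (%C 44.4, %H 6.2, %O 49.3) gives OR close to 1.0.
print(round(oxidative_ratio(44.4, 6.2, 0.0, 49.3), 3))
```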

  10. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its limited ability to image in detail the three-dimensional structure of the mantle or to determine with certainty the strengths of anomalies. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups, in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge is poor regarding the size, distribution and location of protoliths, and of metasomatism of magma sources, the nature of the partial-melting and melt-extraction process, the mixing of disparate

  11. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  12. Accuracy and precision of silicon based impression media for quantitative areal texture analysis.

    PubMed

    Goodall, Robert H; Darras, Laurent P; Purnell, Mark A

    2015-05-20

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  13. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  14. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits with 1.5 cm RMS radial accuracy represent a significant improvement over the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  15. Noise limitations on monopulse accuracy in a multibeam antenna

    NASA Astrophysics Data System (ADS)

    Loraine, J.; Wallington, J. R.

    A multibeam system allowing target tracking using monopulse processing switched from beamset to beamset is considered. Attention is given to the accuracy of target angular position estimation. An analytical method is used to establish performance limits under low SNR conditions for a multibeam system. It is shown that, in order to achieve accuracies comparable to those of conventional monopulse systems, much higher SNRs are needed.

  16. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if remaining unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes the more significant the larger the relative contribution of secondary electrons to the m/z 47 signal is and the higher the flux rate of CO2 into the ion source is set. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The
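    A schematic version of the correction described above (scaling the secondary-electron contribution on m/z 47 to the simultaneously recorded m/z 49 signal) is sketched below. The scaling factor k and the cycle intensities are invented, and the derivation of k from background scans is not shown.

```python
import numpy as np


def correct_m47(i47_raw, i49_raw, k):
    """Subtract the pressure-dependent negative background on m/z 47,
    estimated by scaling the m/z 49 intensities (which carry essentially
    no true CO2 signal) by an empirically determined factor k."""
    return np.asarray(i47_raw, dtype=float) - k * np.asarray(i49_raw, dtype=float)


i47_sample = np.array([5.021, 5.019, 5.023])     # arbitrary units
i49_sample = np.array([-0.012, -0.013, -0.012])  # negative background signal
print(correct_m47(i47_sample, i49_sample, k=2.5))
```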

  17. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if remaining unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes the more significant the larger the relative contribution of secondary electrons to the m/z 47 signal is and the higher the flux rate of CO2 into the ion source is set. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  18. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Robinson, L. B.; Wei, M. Z.; Borucki, W. J.; Dunham, E. W.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10^5. Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  19. Test of CCD Precision Limits for Differential Photometry

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Dunham, E. W.; Wei, M. Z.; Robinson, L. B.; Ford, C. H.; Granados, A. F.

    1995-01-01

    Results of tests to demonstrate the very high differential-photometric stability of CCD light sensors are presented. The measurements reported here demonstrate that in a controlled laboratory environment, a front-illuminated CCD can provide differential-photometric measurements with reproducible precision approaching one part in 10^5. Practical limitations to the precision of differential-photometric measurements with CCDs and implications for spaceborne applications are discussed.

  20. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.

  1. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on the computation of the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacement in the range of human chest wall displacement during quiet breathing. OEP performance was investigated by the use of a fully programmable chest wall simulator (CWS). The CWS was programmed to move its eight shafts 10 times in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed with the aim to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, (ii) evaluate the OEP volume measurement accuracy due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m³ calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504
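    The two figures of merit used above (mean measurement discrepancy and precision error as uncertainty relative to recorded displacement) can be computed as in the sketch below; the repeat values are simulated and the aggregation choices are assumptions.

```python
import numpy as np


def discrepancy_and_precision_error(recorded_mm, commanded_mm):
    """Mean absolute discrepancy (mm) between recorded and commanded shaft
    displacement, and precision error (%) as the SD of repeats relative to
    the mean recorded displacement."""
    r = np.asarray(recorded_mm, dtype=float)
    discrepancy = abs(r.mean() - commanded_mm)
    precision_error = 100.0 * r.std(ddof=1) / r.mean()
    return discrepancy, precision_error


reps = [4.01, 3.99, 4.02, 4.00, 3.98, 4.01, 4.00, 4.02, 3.99, 4.01]
d, p = discrepancy_and_precision_error(reps, commanded_mm=4.0)
print(f"discrepancy = {d:.3f} mm, precision error = {p:.2f}%")
```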

  2. A limiter for PPM that preserves accuracy at smooth extrema

    NASA Astrophysics Data System (ADS)

    Colella, Phillip; Sekora, Michael D.

    2008-07-01

    We present a new limiter for the PPM method of Colella and Woodward [P. Colella, P.R. Woodward, The Piecewise Parabolic Method (PPM) for gas-dynamical simulations, Journal of Computational Physics 54 (1984) 174-201] that preserves accuracy at smooth extrema. It is based on constraining the interpolated values at extrema (and only at extrema) using non-linear combinations of various difference approximations of the second derivatives. Otherwise, we use a standard geometric limiter to preserve monotonicity away from extrema. This leads to a method that has the same accuracy for smooth initial data as the underlying PPM method without limiting, while providing sharp, non-oscillatory representations of discontinuities.
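
    The key ingredient described above, detecting an extremum from a sign change of the one-sided slopes and then constraining the curvature with a non-linear (minmod) combination of several difference approximations of the second derivative, can be sketched in a few lines of Python. This is only a schematic illustration, not the Colella-Sekora limiter itself; the grid, the test profile, and the particular set of second-derivative stencils are assumptions made for the example.

        import numpy as np

        def minmod(*args):
            # Zero if the arguments disagree in sign; otherwise the one of least magnitude.
            s = np.sign(args[0])
            same_sign = np.all([np.sign(a) == s for a in args], axis=0)
            least = np.min(np.abs(np.stack(args)), axis=0)
            return np.where(same_sign, s * least, 0.0)

        def second_derivatives(u, dx):
            # Three difference approximations of d2u/dx2 in each cell (periodic grid).
            d2_c = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # centred
            d2_l = (u - 2 * np.roll(u, 1) + np.roll(u, 2)) / dx**2    # left-biased
            d2_r = (np.roll(u, -2) - 2 * np.roll(u, -1) + u) / dx**2  # right-biased
            return d2_l, d2_c, d2_r

        def detect_extrema(u):
            # A cell hosts an extremum when the one-sided slopes change sign there.
            return (u - np.roll(u, 1)) * (np.roll(u, -1) - u) < 0.0

        dx = 2 * np.pi / 64
        u = np.sin(np.arange(64) * dx)                  # smooth profile with two extrema
        curvature = minmod(*second_derivatives(u, dx))  # non-linear combination
        cells = np.where(detect_extrema(u))[0]
        print("extremum cells:", cells)
        print("limited curvature kept there:", np.round(curvature[cells], 3))

    Because all three approximations agree in sign at a smooth extremum, the limited curvature stays close to the true second derivative instead of being zeroed, which is what allows the interpolant to keep its accuracy there.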

  3. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.
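
    The paper's working definitions, accuracy as the closeness of a judgement to the actual time to arrival and precision as the consistency of a participant's repeated judgements, reduce to two summary statistics. The sketch below illustrates them with invented response data; the values and variable names are not taken from the study.

        import numpy as np

        true_tta = 3.0                                  # actual time to arrival in seconds (illustrative)
        # One participant's repeated judgements of the docking moment (illustrative values).
        responses = np.array([2.70, 2.80, 2.90, 2.75, 2.85])

        accuracy_error = responses.mean() - true_tta    # mean signed error; negative = underestimation
        precision_sd = responses.std(ddof=1)            # intra-participant variation

        print(f"mean signed error: {accuracy_error:+.3f} s")
        print(f"intra-participant SD (precision): {precision_sd:.3f} s")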

  4. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  5. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures in the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures also serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, yielding mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, the isocenter accuracy was shown to be better than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with a mask. All uncertainties were below tolerances. The largest contribution to the mechanical variations comes from table rotation, so it is important to correct for these variations using a localization frame with printed overlays. Knowledge of the mechanical precision makes it possible to account for the statistical errors in the treatment planning volume margins.

  6. Maximum angular accuracy of pulsed laser radar in photocounting limit.

    PubMed

    Elbaum, M; Diament, P; King, M; Edelson, W

    1977-07-01

    To estimate the angular position of targets with pulsed laser radars, their images may be sensed with a four-quadrant noncoherent detector and the image photocounting distribution processed to obtain the angular estimates. The limits imposed on the accuracy of angular estimation by signal and background radiation shot noise, dark current noise, and target cross-section fluctuations are calculated. Maximum likelihood estimates of angular positions are derived for optically rough and specular targets, and their performance is compared with theoretical lower bounds.

  7. Accuracy limit of rigid 3-point water models

    NASA Astrophysics Data System (ADS)

    Izadi, Saeed; Onufriev, Alexey V.

    2016-08-01

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water — a characteristic dependence of hydration free energy on the sign of the solute charge — in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed.

  8. Accuracy limit of rigid 3-point water models.

    PubMed

    Izadi, Saeed; Onufriev, Alexey V

    2016-08-21

    Classical 3-point rigid water models are most widely used due to their computational efficiency. Recently, we introduced a new approach to constructing classical rigid water models [S. Izadi et al., J. Phys. Chem. Lett. 5, 3863 (2014)], which permits a virtually exhaustive search for globally optimal model parameters in the sub-space that is most relevant to the electrostatic properties of the water molecule in liquid phase. Here we apply the approach to develop a 3-point Optimal Point Charge (OPC3) water model. OPC3 is significantly more accurate than the commonly used water models of same class (TIP3P and SPCE) in reproducing a comprehensive set of liquid bulk properties, over a wide range of temperatures. Beyond bulk properties, we show that OPC3 predicts the intrinsic charge hydration asymmetry (CHA) of water - a characteristic dependence of hydration free energy on the sign of the solute charge - in very close agreement with experiment. Two other recent 3-point rigid water models, TIP3PFB and H2ODC, each developed by its own, completely different optimization method, approach the global accuracy optimum represented by OPC3 in both the parameter space and accuracy of bulk properties. Thus, we argue that an accuracy limit of practical 3-point rigid non-polarizable models has effectively been reached; remaining accuracy issues are discussed. PMID:27544113

  9. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324

  10. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
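
    The flavour of the experiment can be sketched with the standard two-tier Lorenz '96 system, holding the small-scale tier in half precision; this is a simplified stand-in for the three-tier model introduced in the abstract, and the parameter values, time step, and use of NumPy's float16 are assumptions made for the example.

        import numpy as np

        K, J = 8, 4                       # K large-scale variables X_k, J small-scale Y per X
        F, h, c, b = 10.0, 1.0, 10.0, 10.0
        rng = np.random.default_rng(0)
        X = rng.standard_normal(K)
        Y = (0.1 * rng.standard_normal(K * J)).astype(np.float16)   # small scales in 16 bits

        def tendencies(X, Y):
            Yf = Y.astype(np.float64)
            dX = (np.roll(X, 1) * (np.roll(X, -1) - np.roll(X, 2)) - X + F
                  - (h * c / b) * Yf.reshape(K, J).sum(axis=1))
            dY = (-c * b * np.roll(Yf, -1) * (np.roll(Yf, -2) - np.roll(Yf, 1))
                  - c * Yf + (h * c / b) * np.repeat(X, J))
            return dX, dY

        dt = 0.001
        for _ in range(1000):             # simple forward-Euler steps, for illustration only
            dX, dY = tendencies(X, Y)
            X = X + dt * dX
            Y = (Y.astype(np.float64) + dt * dY).astype(np.float16)   # re-round the small scales

        print("large-scale state after integration:", np.round(X, 3))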

  11. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.
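
    The combinatorial rule itself is not reproduced in the abstract, so the sketch below simply illustrates the most common way of merging an error budget into a single predicted accuracy, a root-sum-square of independent error sources; the source names and magnitudes are invented for the example.

        import math

        # Hypothetical error budget entries (1-sigma contributions, micrometres).
        error_budget_um = {
            "scale calibration": 0.05,
            "thermal drift": 0.08,
            "Abbe offset": 0.04,
            "probe repeatability": 0.03,
        }

        # Root-sum-square combination, which assumes the error sources are independent.
        combined = math.sqrt(sum(v ** 2 for v in error_budget_um.values()))
        print(f"predicted overall accuracy: {combined:.3f} um (1 sigma)")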

  12. The limits of precision monomer placement in chain growth polymerization.

    PubMed

    Gody, Guillaume; Zetterlund, Per B; Perrier, Sébastien; Harrisson, Simon

    2016-01-01

    Precise control over the location of monomers in a polymer chain has been described as the 'Holy Grail' of polymer synthesis. Controlled chain growth polymerization techniques have brought this goal closer, allowing the preparation of multiblock copolymers with ordered sequences of functional monomers. Such structures have promising applications ranging from medicine to materials engineering. Here we show, however, that the statistical nature of chain growth polymerization places strong limits on the control that can be obtained. We demonstrate that monomer locations are distributed according to surprisingly simple laws related to the Poisson or beta distributions. The degree of control is quantified in terms of the yield of the desired structure and the standard deviation of the appropriate distribution, allowing comparison between different synthetic techniques. This analysis establishes experimental requirements for the design of polymeric chains with controlled sequence of functionalities, which balance precise control of structure with simplicity of synthesis. PMID:26830125

  13. The limits of precision monomer placement in chain growth polymerization

    NASA Astrophysics Data System (ADS)

    Gody, Guillaume; Zetterlund, Per B.; Perrier, Sébastien; Harrisson, Simon

    2016-02-01

    Precise control over the location of monomers in a polymer chain has been described as the `Holy Grail' of polymer synthesis. Controlled chain growth polymerization techniques have brought this goal closer, allowing the preparation of multiblock copolymers with ordered sequences of functional monomers. Such structures have promising applications ranging from medicine to materials engineering. Here we show, however, that the statistical nature of chain growth polymerization places strong limits on the control that can be obtained. We demonstrate that monomer locations are distributed according to surprisingly simple laws related to the Poisson or beta distributions. The degree of control is quantified in terms of the yield of the desired structure and the standard deviation of the appropriate distribution, allowing comparison between different synthetic techniques. This analysis establishes experimental requirements for the design of polymeric chains with controlled sequence of functionalities, which balance precise control of structure with simplicity of synthesis.
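
    A rough numerical illustration of this statistical limit is sketched below: in an idealized living chain-growth polymerization the number of units each chain adds in a given feed is approximately Poisson-distributed, so the position at which the second block begins varies from chain to chain even under perfect control. The target block length and the Poisson assumption are illustrative choices, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        n_chains = 100_000
        target_block_A = 10                      # intended length of the first (A) block

        # Idealized living polymerization: units added per chain ~ Poisson(target length).
        block_A_lengths = rng.poisson(target_block_A, size=n_chains)
        first_B_position = block_A_lengths + 1   # where the second block actually starts

        exact_yield = np.mean(block_A_lengths == target_block_A)
        print(f"chains with exactly {target_block_A} A units: {exact_yield:.1%}")
        print(f"std. dev. of the junction position: {first_B_position.std():.2f} monomer units")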

  14. Direct illumination calibration of telescopes at the quantum precision limit

    NASA Astrophysics Data System (ADS)

    Barrelet, E.

    2016-10-01

    The electronic response of a telescope under direct illumination by a point-like light source is based on photon counting. With the data obtained using the SNDICE light source and the Megacam camera on the CFHT telescope, we show that the ultimate precision is limited only by the photon statistical fluctuation, which is below 1 ppm. A key feature of the analysis is the incorporation of diffuse light that interferes with specularly reflected light in the transmission model to explain the observed diffraction patterns. The effect of diffuse light, usually hidden conveniently in the Strehl ratio for an object at infinity, is characterized with a precision of 10 ppm. In particular, the spatial frequency representation provides strong physical constraints and practical monitoring of the roughness of various optical surfaces.

  15. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  16. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The continuing increase in focal plane array size drives the required alignment capability. Depending on the optical system, a focal plane flatness of less than 25 μm (0.001") is required over the transition from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of integrating the detector into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing the alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing the measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced in the detector. When required, the co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability, and precision of the measurements.

  17. Speckle phase noise in coherent laser ranging: fundamental precision limitations.

    PubMed

    Baumann, Esther; Deschênes, Jean-Daniel; Giorgetta, Fabrizio R; Swann, William C; Coddington, Ian; Newbury, Nathan R

    2014-08-15

    Frequency-modulated continuous-wave laser detection and ranging (FMCW LADAR) measures the range to a surface through coherent detection of the backscattered light from a frequency-swept laser source. The ultimate limit to the range precision of FMCW LADAR, or any coherent LADAR, to a diffusely scattering surface will be determined by the unavoidable speckle phase noise. Here, we demonstrate the two main manifestations of this limit. First, frequency-dependent speckle phase noise leads to a non-Gaussian range distribution having outliers that approach the system range resolution, regardless of the signal-to-noise ratio. These outliers are reduced only through improved range resolution (i.e., higher optical bandwidths). Second, if the range is measured during a continuous lateral scan across a surface, the spatial pattern of speckle phase is converted to frequency noise, which leads to additional excess range uncertainty. We explore these two effects and show that laboratory results agree with analytical expressions and numerical simulations. We also show that at 1 THz optical bandwidth, range precisions below 10 μm are achievable regardless of these effects. PMID:25121872

  18. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand King et al (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to +/- 3m using a 3 MHz radar system (which has a quarter wavelength of 14m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
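
    The quarter-wavelength figure quoted for the 3 MHz system follows from simple arithmetic; the relative permittivity of glacier ice used below (about 3.17) is an assumed textbook value rather than one stated in the abstract.

        import math

        c = 3.0e8                # speed of light in vacuum, m/s
        eps_ice = 3.17           # assumed relative permittivity of glacier ice
        f = 3.0e6                # radar centre frequency, Hz

        v_ice = c / math.sqrt(eps_ice)       # wave speed in ice
        wavelength = v_ice / f
        print(f"wavelength in ice: {wavelength:.1f} m; quarter wavelength: {wavelength / 4:.1f} m")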

  19. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter for assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used for wound area measurement and therapy assessment when it is applied properly, but a common problem is the orientation of the camera lens when the picture is taken. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that calibration using 2 rulers placed in parallel below and above the wound increases the precision of area measurement on average 3.8-fold compared with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry using 2 rulers than with the Visitrak device, the Silhouette Mobile device, or the AreaMe software-based method. Conclusion Calibration in digital planimetry using 2 rulers remarkably increases the precision and accuracy of measurement and therefore should be recommended instead of calibration based on a single ruler. PMID:26252747

  20. On the Accuracy and Limits of Peptide Fragmentation Spectrum Prediction

    PubMed Central

    Li, Sujun; Arnold, Randy J.; Tang, Haixu; Radivojac, Predrag

    2011-01-01

    We estimated the reproducibility of tandem mass fragmentation spectra for the widely-used collision-induced dissociation (CID) instruments. Using the Pearson correlation coefficient as a measure of spectral similarity, we found that the within-experiment reproducibility of fragment ion intensities is very high (about 0.85). However, across different experiments and instrument types/setups, the correlation decreases by more than 15% (to about 0.70). We further investigated the accuracy of current predictors of peptide fragmentation spectra and found that they are more accurate than the ad-hoc models generally used by search engines (e.g. SEQUEST) and, surprisingly, approaching the empirical upper limit set by the average across-experiment spectral reproducibility (especially for charge +1 and charge +2 precursor ions). These results provide evidence that, in terms of accuracy of modeling, predicted peptide fragmentation spectra provide a viable alternative to spectral libraries for peptide identification, with a higher coverage of peptides and lower storage requirements. Furthermore, using five data sets of proteome digests by two different proteases, we find that PeptideART (a data-driven machine learning approach) is generally more accurate than MassAnalyzer (an approach based on a kinetic model for peptide fragmentation) in predicting fragmentation spectra, but that both models are significantly more accurate than the ad-hoc models. Availability: PeptideART is freely available at www.informatics.indiana.edu/predrag. PMID:21175207
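
    Because the similarity measure used in the study is just the Pearson correlation between matched fragment-ion intensities, the computation is a one-liner; the two intensity vectors below are invented placeholders for aligned peak lists from replicate spectra.

        import numpy as np

        # Aligned fragment-ion intensities from two replicate spectra (illustrative values).
        spectrum_a = np.array([120.0, 35.0, 980.0, 0.0, 410.0, 15.0, 260.0])
        spectrum_b = np.array([100.0, 40.0, 900.0, 5.0, 450.0, 10.0, 300.0])

        r = np.corrcoef(spectrum_a, spectrum_b)[0, 1]   # Pearson correlation coefficient
        print(f"spectral similarity (Pearson r): {r:.3f}")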

  1. Evaluation of Accuracy in Kinematic GPS Analyses Using a Precision Roving Antenna Platform

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sweeney, A.; Fujimoto, H.; Osaki, H.; Kawai, E.; Ichikawa, R.; Kondo, T.; Osada, Y.; Chadwell, C. D.

    2002-12-01

    Most tectonic plate boundaries and seismogenic zones of interplate earthquakes exist beneath the ocean, and our knowledge of interplate coupling and of the generation processes of those earthquakes remains limited. Seafloor geodesy will consequently play a very important role in improving our understanding of the physical process near plate boundaries. Seafloor positioning using a GPS/Acoustic technique is one potential method to detect the displacement occurring at the ocean bottom. The accuracy of the technique depends on two parts: acoustic ranging in seawater, and kinematic GPS (KGPS) analysis. The accuracy of KGPS was evaluated in the following way: 1) Static test: First, we carried out an experiment to confirm the capability of the KGPS analysis using GIPSY/OASIS-II for a long baseline of about 310 km. We used two GPS stations on land, one as a reference station in Sendai, and the other in Tokyo as a rover one, whose coordinate can vary from epoch to epoch. This baseline length is required for our project because the farthest seafloor transponder array is 280 km east of the nearest coastal GPS station. The 1 cm stability of the KGPS solution was achieved in the horizontal components of the 310-km baseline over the course of one day. The vertical component showed fluctuation probably due to parameters unmodeled in the analysis such as multipath and/or tropospheric delay. 2) Sea surface experiment: During cruise KT01-11 of the R/V Tansei-maru, Ocean Research Institute (ORI), University of Tokyo, around the Japan Trench in late July 2001, we deployed three precision acoustic transponders on both the Pacific plate (280 km from the coast, depth around 5450 m) and the landward slope (110 km from the coast, depth around 1600 m). We used a surface buoy with 3 GPS antennas, a motion sensor, a hydrophone, and a computer for data acquisition and control to make combined GPS/Acoustic observations. The buoy was towed about 80 m away from the R/V to reduce the impact of ship

  2. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler.

    PubMed

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-10-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
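
    Step (2) of the procedure, adjusting accuracy from a calibration curve, amounts to regressing gravimetrically determined volumes against target volumes and inverting the fit. The sketch below shows that idea only; the target volumes, weighed masses, assumed water density, and function names are illustrative and do not reflect the EVOware liquid-class format.

        import numpy as np

        # Target volumes (uL) and weighed masses (mg) for one liquid class (invented readings).
        target_uL = np.array([3.0, 10.0, 50.0, 100.0, 300.0, 900.0])
        mass_mg = np.array([2.8, 9.6, 48.9, 98.0, 295.5, 888.0])
        density_mg_per_uL = 0.998            # water near room temperature (assumed)

        delivered_uL = mass_mg / density_mg_per_uL

        # Linear calibration curve: delivered = slope * target + offset.
        slope, offset = np.polyfit(target_uL, delivered_uL, 1)

        def corrected_request(desired_uL):
            # Volume to request so that the delivered volume matches the desired one.
            return (desired_uL - offset) / slope

        print(f"slope = {slope:.4f}, offset = {offset:.3f} uL")
        print(f"to deliver 100 uL, request {corrected_request(100.0):.2f} uL")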

  3. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  4. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler

    PubMed Central

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-01-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719

  5. Accuracy, precision, usability, and cost of free chlorine residual testing methods.

    PubMed

    Murray, Anna; Lantagne, Daniele

    2015-03-01

    Chlorine is the most widely used disinfectant worldwide, partially because residual protection is maintained after treatment. This residual is measured using colorimetric test kits varying in accuracy, precision, training required, and cost. Seven commercially available colorimeters, color wheel and test tube comparator kits, pool test kits, and test strips were evaluated for use in low-resource settings by: (1) measuring in quintuplicate 11 samples from 0.0-4.0 mg/L free chlorine residual in laboratory and natural light settings to determine accuracy and precision; (2) conducting volunteer testing where participants used and evaluated each test kit; and (3) comparing costs. Laboratory accuracy ranged from 5.1-40.5% measurement error, with colorimeters the most accurate and test strip methods the least. Variation between laboratory and natural light readings occurred with one test strip method. Volunteer participants found test strip methods easiest and color wheel methods most difficult, and were most confident in the colorimeter and least confident in test strip methods. Costs range from 3.50-444 USD for 100 tests. Application of a decision matrix found colorimeters and test tube comparator kits were most appropriate for use in low-resource settings; it is recommended users apply the decision matrix themselves, as the appropriate kit might vary by context.

  6. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  7. Breaking Quantum and Thermal Limits on Precision Measurements

    NASA Astrophysics Data System (ADS)

    Thompson, James K.

    2016-05-01

    I will give an overview of our efforts to use correlations and entanglement between many atoms to overcome quantum and thermal limits on precision measurements. In the first portion of my talk, I will present a path toward a 10000 times reduced sensitivity to the thermal mirror motion that limits the linewidth of today's best lasers. By utilizing narrow atomic transitions, the laser's phase information is primarily stored in the atomic gain medium rather than in the vibration-sensitive cavity field. To this end, I will present the first observation of lasing based on the mHz linewidth optical-clock transition in a laser-cooled ensemble of strontium atoms. In the second portion of my talk, I will describe how we use collective measurements to surpass the standard quantum limit on phase estimation, 1/√N, for N unentangled atoms. We achieve a directly observed reduction in phase variance relative to the standard quantum limit of as much as 17.7(6) dB. Supported by DARPA QuASAR, NIST, ARO, and NSF PFC. This material is based upon work supported by the National Science Foundation under Grant Number 1125844 Physics Frontier Center.
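
    The quoted 17.7 dB of noise reduction converts directly into a variance factor relative to the standard quantum limit; the atom number used below is an arbitrary example, not a value from the abstract.

        import math

        db_reduction = 17.7
        variance_factor = 10 ** (db_reduction / 10)   # phase variance roughly 59 times below the SQL

        N = 100_000                                   # illustrative atom number
        sql_phase_var = 1.0 / N                       # standard quantum limit for N unentangled atoms
        squeezed_phase_var = sql_phase_var / variance_factor

        print(f"variance reduction factor: {variance_factor:.1f}")
        print(f"phase std at the SQL: {math.sqrt(sql_phase_var):.2e} rad; "
              f"with squeezing: {math.sqrt(squeezed_phase_var):.2e} rad")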

  8. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    PubMed

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53: 125-138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of 1H-15N 2D HSQC NMR spectra, acquired using the precise protein-ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ~ 3,000 s^-1 in this work, the accuracy of classical line shape analysis was determined to be better than 5% by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13%, in agreement with the theoretical precision of 12% from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s^-1. The validity of line shape analysis for k_off values approaching intermediate exchange (~100 s^-1) may be facilitated by more accurate K_D measurements

  9. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis, and also therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods: oral, axillary, tympanic, and forehead, along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between all the peripheral methods when compared with the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membranes, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient’s body temperature in the intensive care units because of high accuracy and acceptable precision.

  10. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis, and also therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods: oral, axillary, tympanic, and forehead, along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, receiver operating characteristic curve, and using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between all the peripheral methods when compared with the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membranes, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient’s body temperature in the intensive care units because of high accuracy and acceptable precision. PMID:27621673

  11. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tend to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although the concepts for assessing precision and accuracy of ecorestoration field data are conceptually the same as laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  12. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work the pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plum, and watermelons from the Valencian Community (Spain) has been investigated. The nine additional pesticides were chosen as they have been recommended for orchard treatment together with imidacloprid. The Mulliken population analysis has been applied to present the charge distribution in imidacloprid. Partitioned energy terms and the virial ratios have been calculated for certain molecules entering into interaction. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work. The interaction ability could be established correctly in the studied case. An attempt is also made in this work to address accuracy and precision. These quantities are well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and also for the interacting complex structure, some properties of the latter system can be predicted with quite good accuracy. Based on simple hypothetical considerations we estimate the impact of applying computations on reducing the amount of analytical work.

  13. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised an interest in precise and accurate pH measurement in order to assess the impact on the marine CO2-system. Spectrophotometric pH measurements were refined during the last decade yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. However, until now the method has only been tested in oceanic systems with a relatively stable and high salinity and a small pH range. This paper describes the first application of such a pH measurement system at conditions in the Baltic Sea, which is characterized by a wide salinity and pH range. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges in the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes in the Baltic Sea. Because of these encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, it is recommended to use the spectrophotometric method also for pH monitoring and trend detection in the Baltic Sea.

  14. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented for the purpose of providing a revised methodology having the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart Classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.
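
    The Ergun relation mentioned above links the pressure gradient across a packed bed to its void fraction, so a measured pressure drop can be inverted numerically for the void fraction. The sketch below uses the textbook form of the equation, not the novel form developed by the authors, and the fluid and particle values are invented for illustration.

        from scipy.optimize import brentq

        def ergun_dp_per_L(eps, U, dp, mu, rho):
            # Textbook Ergun equation: pressure gradient (Pa/m) through a packed bed.
            viscous = 150.0 * mu * (1 - eps) ** 2 * U / (eps ** 3 * dp ** 2)
            inertial = 1.75 * rho * (1 - eps) * U ** 2 / (eps ** 3 * dp)
            return viscous + inertial

        # Illustrative operating point: air flowing through a bed of ~200 um particles.
        U, dp = 0.05, 200e-6          # superficial velocity (m/s), particle diameter (m)
        mu, rho = 1.8e-5, 1.2         # air viscosity (Pa s) and density (kg/m3)
        measured_gradient = 6.0e3     # measured pressure drop per unit bed height (Pa/m), invented

        # Invert the Ergun equation for the bulk void fraction.
        eps = brentq(lambda e: ergun_dp_per_L(e, U, dp, mu, rho) - measured_gradient, 0.3, 0.7)
        print(f"inferred bulk void fraction: {eps:.3f}")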

  15. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min-1 (100 g)-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min-1 (100 g)-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min-1 (100 g)-1 and R2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min-1 (100 g)-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
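
    A Bland-Altman comparison of the kind summarized above takes only a few lines; the paired HABF values below are invented stand-ins for the microsphere and DCE-CT measurements, not data from the study.

        import numpy as np

        # Paired hepatic arterial blood flow values, ml/min per 100 g (illustrative only).
        microsphere = np.array([52.0, 40.5, 99.0, 47.8, 101.2, 38.9])
        dce_ct = np.array([50.1, 37.4, 99.8, 49.0, 98.6, 41.0])

        diff = dce_ct - microsphere
        bias = diff.mean()                      # mean difference between the two methods
        loa = 1.96 * diff.std(ddof=1)           # half-width of the 95% limits of agreement

        print(f"mean difference (bias): {bias:+.2f} ml/min/100 g")
        print(f"95% limits of agreement: {bias - loa:+.2f} to {bias + loa:+.2f}")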

  16. Accuracy and precision of integumental linear dimensions in a three-dimensional facial imaging system

    PubMed Central

    Kim, Soo-Hwan; Jung, Woo-Young; Seo, Yu-Jin; Kim, Kyung-A; Park, Ki-Ho

    2015-01-01

    Objective A recently developed facial scanning method uses three-dimensional (3D) surface imaging with a light-emitting diode. Such scanning enables surface data to be captured in high-resolution color and at relatively fast speeds. The purpose of this study was to evaluate the accuracy and precision of 3D images obtained using the Morpheus 3D® scanner (Morpheus Co., Seoul, Korea). Methods The sample comprised 30 subjects aged 24-34 years (mean 29.0 ± 2.5 years). To test the correlation between direct and 3D image measurements, 21 landmarks were labeled on the face of each subject. Sixteen direct measurements were obtained twice using digital calipers; the same measurements were then made on two sets of 3D facial images. The mean values of measurements obtained from both methods were compared. To investigate the precision, a comparison was made between two sets of measurements taken with each method. Results When comparing the variables from both methods, five of the 16 possible anthropometric variables were found to be significantly different. However, in 12 of the 16 cases, the mean difference was under 1 mm. The average value of the differences for all variables was 0.75 mm. Precision was high in both methods, with error magnitudes under 0.5 mm. Conclusions 3D scanning images have high levels of precision and fairly good congruence with traditional anthropometry methods, with mean differences of less than 1 mm. 3D surface imaging using the Morpheus 3D® scanner is therefore a clinically acceptable method of recording facial integumental data. PMID:26023538

  17. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Vasileios Psychas, Dimitrios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and much more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements can allow for robust simultaneous estimation of static or mobile user states including more parameters such as real-time tropospheric biases and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS), as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the time convergence it takes to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used in order to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant nowadays resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  18. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24-h data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the hardware delay bias from the receiver, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When the satellite difference is used, the hardware delay bias can be canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the centimeter level.
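
    The cancellation of the receiver hardware delay bias mentioned above can be written compactly. As a hedged sketch in assumed notation (not taken from the paper), if the slant ionosphere delay estimated by PPP at receiver r for satellite j absorbs a receiver bias b_r, then differencing two satellites observed by the same receiver removes that bias:

```latex
% Estimated slant delays at receiver r for satellites j and k,
% both contaminated by the same receiver hardware delay bias b_r:
\tilde{I}_r^{\,j} = I_r^{\,j} + b_r, \qquad \tilde{I}_r^{\,k} = I_r^{\,k} + b_r
\quad\Longrightarrow\quad
\tilde{I}_r^{\,j} - \tilde{I}_r^{\,k} = I_r^{\,j} - I_r^{\,k}
```

    The receiver-dependent term therefore drops out of the satellite-differenced delay, which is consistent with the better-than-0.11 m interpolation result quoted above.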

  19. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

    The calibration and traceability of high-resolution frequency modulated continuous wave (FMCW) ladar sources are a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of the molecular frequency references HCN (C-band) or CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST and the measurement errors provides an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to make comparisons of the relative precision and accuracy. The comparisons performed show <10 ppm agreement, which was within the combined estimated uncertainties of the FMCW system and interferometer. PMID:26193146
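
    The abstract above propagates the stated uncertainty of the molecular frequency references together with measurement errors to obtain a combined few-ppm uncertainty. A minimal sketch of such a propagation, assuming the individual contributions are independent and therefore combine in quadrature (the ppm values below are invented for illustration), is:

```python
import math

# Hypothetical independent relative-uncertainty contributions, in ppm
contributions_ppm = {
    "molecular reference line centers": 2.0,
    "chirp-rate fit statistics": 1.5,
    "interferometer alignment / cosine error": 1.0,
}

# Root-sum-square combination of independent contributions
combined_ppm = math.sqrt(sum(u**2 for u in contributions_ppm.values()))
print(f"combined relative uncertainty: {combined_ppm:.1f} ppm")   # ~2.7 ppm, i.e. 'a few ppm'
```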

  20. Accuracy improvement of protrusion angle of carbon nanotube tips by precision multiaxis nanomanipulator

    SciTech Connect

    Young Song, Won; Young Jung, Ki; O, Beom-Hoan; Park, Byong Chon

    2005-02-01

    In order to manufacture a carbon nanotube (CNT) tip in which the attachment angle and position of the CNT were precisely adjusted, a nanomanipulator was installed inside a scanning electron microscope (SEM). A CNT tip, an atomic force microscopy (AFM) probe to which a nanotube is attached, is known to be the most appropriate probe for measuring high-aspect-ratio shapes. The developed nanomanipulator has two sets of modules with degrees of freedom for three-directional rectilinear motion and one-directional rotational motion at an accuracy of tens of nanometers, so it enables the manufacturing of more accurate CNT tips. The present study developed a CNT tip with an attachment-angle error of less than 10 deg. through three-dimensional manipulation of a multiwalled carbon nanotube and an AFM probe inside an SEM.

  1. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.
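
    Bracketing the unknowns with a Pu isotopic standard, as described above, lets an instrument-response (mass-bias) correction be transferred to the sample ratio. A hedged sketch of that correction, using invented numbers and the simplest linear correction factor (the actual RIMS data reduction may differ), is:

```python
# Hypothetical standard-bracketing correction for a 240Pu/239Pu ratio.
R_certified_std = 0.24110   # certified ratio of the bracketing standard (invented value)
R_measured_std  = 0.23920   # measured ratio of the standard, averaged over the bracket
R_measured_smp  = 0.06150   # measured ratio of the unknown sample

correction = R_certified_std / R_measured_std      # instrument bias factor
R_corrected_smp = R_measured_smp * correction

bias_percent = 100 * (R_corrected_smp / 0.06190 - 1)  # vs. an assumed 'true' sample ratio
print(f"corrected ratio: {R_corrected_smp:.5f} ({bias_percent:+.2f}% vs assumed truth)")
```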

  2. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  3. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ± 1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or severely broken and missing teeth.

  4. Accuracy, Precision, and Reliability of Chemical Measurements in Natural Products Research

    PubMed Central

    Betz, Joseph M.; Brown, Paula N.; Roman, Mark C.

    2010-01-01

    Natural products chemistry is the discipline that lies at the heart of modern pharmacognosy. The field encompasses qualitative and quantitative analytical tools that range from spectroscopy and spectrometry to chromatography. Among other things, modern research on crude botanicals is engaged in the discovery of the phytochemical constituents necessary for therapeutic efficacy, including the synergistic effects of components of complex mixtures in the botanical matrix. In the phytomedicine field, these botanicals and their contained mixtures are considered the active pharmaceutical ingredient (API), and pharmacognosists are increasingly called upon to supplement their molecular discovery work by assisting in the development and utilization of analytical tools for assessing the quality and safety of these products. Unlike single-chemical entity APIs, botanical raw materials and their derived products are highly variable because their chemistry and morphology depend on the genotypic and phenotypic variation, geographical origin and weather exposure, harvesting practices, and processing conditions of the source material. Unless controlled, this inherent variability in the raw material stream can result in inconsistent finished products that are under-potent, over-potent, and/or contaminated. Over the decades, natural products chemists have routinely developed quantitative analytical methods for phytochemicals of interest. Quantitative methods for the determination of product quality bear the weight of regulatory scrutiny. These methods must be accurate, precise, and reproducible. Accordingly, this review discusses the principles of accuracy (relationship between experimental and true value), precision (distribution of data values), and reliability in the quantitation of phytochemicals in natural products. PMID:20884340

  5. Transfer accuracy and precision scoring in planar bone cutting validated with ex vivo data.

    PubMed

    Milano, Federico Edgardo; Ritacco, Lucas Eduardo; Farfalli, Germán Luis; Bahamonde, Luis Alberto; Aponte-Tinao, Luis Alberto; Risk, Marcelo

    2015-05-01

    The use of interactive surgical scenarios for virtual preoperative planning of osteotomies has increased in the last 5 years. As reported by several authors, this technology has been used in tumor resection osteotomies, knee osteotomies, and spine surgery with good results. A digital three-dimensional preoperative plan makes it possible to quantitatively evaluate the transfer process from the virtual plan to the anatomy of the patient. We introduce an exact definition of accuracy and precision of this transfer process for planar bone cutting. We present a method to compute these properties from ex vivo data. We also propose a clinical score to assess the goodness of a cut. A computer simulation is used to characterize the definitions and the data generated by the measurement method. The definitions and method are evaluated in 17 ex vivo planar cuts of tumor resection osteotomies. The results show that the proposed method and definitions are highly correlated with a previous definition of accuracy based on ISO 1101. The score is also evaluated by showing that it distinguishes among different transfer techniques based on its distribution location and shape. The introduced definitions produce acceptable results in cases where the ISO-based definition produces counterintuitive results.

  6. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

    This study aimed to create an evidence base for detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) for five published algorithms for the detection of hoof-on and hoof-off using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) Fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the Centre of Mass and; (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot we derived a combination of method A for hoof-on and method E for hoof-off resulting in a bias of -6.2% of stance in the front limbs and method B for the hind limbs with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot.
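
    Several of the algorithms compared above reduce to thresholding a kinematic signal, e.g. method A uses the horizontal velocity of the hoof. A minimal sketch of such a velocity-threshold stance detector is given below; the threshold, sampling rate, and synthetic signal are illustrative assumptions, not the published parameters.

```python
import numpy as np

def detect_stance(hoof_x_m, fs_hz, vel_threshold_m_s=0.2):
    """Return (hoof_on_idx, hoof_off_idx) from the horizontal hoof position signal.

    Stance is taken as the longest run of samples where |horizontal velocity| < threshold.
    """
    vel = np.gradient(hoof_x_m) * fs_hz            # horizontal velocity, m/s
    slow = np.abs(vel) < vel_threshold_m_s
    best_start, best_len, start = 0, 0, None
    for i, s in enumerate(np.append(slow, False)): # appended False closes an open run
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start > best_len:
                best_start, best_len = start, i - start
            start = None
    return best_start, best_start + best_len - 1

# Synthetic example: hoof advances, stops for ~0.3 s, then advances again (240 Hz capture)
fs = 240
t = np.arange(0, 1.0, 1 / fs)
x = np.where(t < 0.4, t, np.where(t < 0.7, 0.4, 0.4 + (t - 0.7)))
on, off = detect_stance(x, fs)
print(f"hoof-on at {on / fs:.3f} s, hoof-off at {off / fs:.3f} s")
```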

  7. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is a classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option to determine the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity as the analyst has to select areas in the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years on the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
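
    The multi-point background idea above amounts to fitting a smooth (often exponential) background curve through many off-peak measurements while statistically rejecting points contaminated by interferences. A hedged sketch of one possible implementation, fitting an exponential in log space and excluding outliers by a simple residual criterion (the published MPB algorithm may differ in both the model and the rejection statistics), is:

```python
import numpy as np

def fit_background(positions, counts, n_iter=3, k_sigma=2.0):
    """Fit an exponential background b(x) = a * exp(c * x) through off-peak points,
    iteratively excluding measurements whose residuals exceed k_sigma * sigma."""
    keep = np.ones_like(counts, dtype=bool)
    coeffs = None
    for _ in range(n_iter):
        coeffs = np.polyfit(positions[keep], np.log(counts[keep]), 1)  # line in log space
        model = np.exp(np.polyval(coeffs, positions))
        resid = counts - model
        sigma = resid[keep].std()
        keep = np.abs(resid) < k_sigma * sigma
    return coeffs, keep

# Synthetic example: 12 off-peak background points, two contaminated by an interference
x = np.linspace(-6, 6, 12)                       # spectrometer positions (arbitrary units)
counts = 500 * np.exp(-0.08 * x) + np.random.default_rng(1).normal(0, 5, x.size)
counts[[3, 8]] += 80                             # interference on two points

coeffs, kept = fit_background(x, counts)
background_under_peak = np.exp(np.polyval(coeffs, 0.0))
print(f"background under the peak: {background_under_peak:.1f} counts, "
      f"{np.count_nonzero(~kept)} points excluded")
```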

  8. Systematic accuracy and precision analysis of video motion capturing systems--exemplified on the Vicon-460 system.

    PubMed

    Windolf, Markus; Götzen, Nils; Morlock, Michael

    2008-08-28

    With rising demand for highly accurate acquisition of small motion, the use of video-based motion capturing becomes more and more popular. However, the performance of these systems strongly depends on a variety of influencing factors. A method was developed in order to systematically assess accuracy and precision of motion capturing systems with regard to influential system parameters. A calibration and measurement robot was designed to perform a repeatable dynamic calibration and to determine the resultant system accuracy and precision in a control volume investigating small motion magnitudes (180 x 180 x 150 mm³). The procedure was exemplified on the Vicon-460 system. The following parameters were analyzed: camera setup, calibration volume, marker size and lens filter application. Equipped with four cameras the Vicon-460 system provided an overall accuracy of 63 ± 5 μm and overall precision (noise level) of 15 μm for the most favorable parameter setting. Arbitrary changes in camera arrangement revealed variations in mean accuracy between 76 and 129 μm. The noise level normal to the cameras' projection plane was found to be higher than in the other coordinate directions. Measurements including regions unaffected by the dynamic calibration reflected considerably lower accuracy (221 ± 79 μm). Larger marker diameters led to higher accuracy and precision. Accuracy dropped significantly when using an optical lens filter. This study revealed significant influence of the system environment on the performance of video-based motion capturing systems. With careful configuration, optical motion capturing provides a powerful measuring opportunity for the majority of biomechanical applications.

  9. Improving accuracy and precision in biological applications of fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Wei

    The quantitative understanding of cellular and molecular responses in living cells is important for many reasons, including identifying potential molecular targets for treatments of diseases like cancer. Fluorescence lifetime imaging microscopy (FLIM) can quantitatively measure these responses in living cells by producing spatially resolved images of fluorophore lifetime, and has advantages over intensity-based measurements. However, in live-cell microscopy applications using high-intensity light sources such as lasers, maintaining biological viability remains critical. Although high-speed, time-gated FLIM significantly reduces light delivered to live cells, making measurements at low light levels remains a challenge affecting quantitative FLIM results. We can significantly improve both accuracy and precision in gated FLIM applications. We use fluorescence resonance energy transfer (FRET) with fluorescent proteins to detect molecular interactions in living cells: the use of FLIM, better fluorophores, and temperature/CO2 controls can improve live-cell FRET results with higher consistency, better statistics, and less non-specific FRET (for negative control comparisons, p-value = 0.93 (physiological) vs. 9.43E-05 (non-physiological)). Several lifetime determination methods are investigated to optimize gating schemes. We demonstrate a reduction in relative standard deviation (RSD) from 52.57% to 18.93% with optimized gating in an example under typical experimental conditions. We develop two novel total variation (TV) image denoising algorithms, FWTV ( f-weighted TV) and UWTV (u-weighted TV), that can achieve significant improvements for real imaging systems. With live-cell images, they improve the precision of local lifetime determination without significantly altering the global mean lifetime values (<5% lifetime changes). Finally, by combining optimal gating and TV denoising, even low-light excitation can achieve precision better than that obtained in high
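
    In time-gated FLIM the lifetime is typically recovered from the intensities collected in two or more gates, and the choice of gate timing is exactly the kind of parameter the optimization above addresses. A minimal two-gate (rapid lifetime determination) sketch, with invented gate settings and photon counts rather than values from the study, is:

```python
import numpy as np

def rld_lifetime(I1, I2, gate_separation_ns):
    """Two-gate rapid lifetime determination: tau = dt / ln(I1 / I2)."""
    return gate_separation_ns / np.log(I1 / I2)

# Hypothetical gate intensities from repeated low-light acquisitions (photon counts)
I1 = np.array([980., 1010., 950., 1005., 990.])   # early gate
I2 = np.array([480., 520., 470., 515., 505.])     # late gate, opened 2.0 ns later

taus = rld_lifetime(I1, I2, gate_separation_ns=2.0)
rsd = 100 * taus.std(ddof=1) / taus.mean()        # relative standard deviation, as in the text
print(f"mean lifetime {taus.mean():.2f} ns, RSD {rsd:.1f}%")
```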

  10. Parallaxes and Proper Motions of QSOs: A Test of Astrometric Precision and Accuracy

    NASA Astrophysics Data System (ADS)

    Harris, Hugh C.; Dahn, Conard C.; Zacharias, Norbert; Canzian, Blaise; Guetter, Harry H.; Levine, Stephen E.; Luginbuhl, Christian B.; Monet, Alice K. B.; Monet, David G.; Pier, Jeffrey R.; Stone, Ronald C.; Subasavage, John P.; Tilleman, Trudy; Walker, Richard L.; Johnston, Kenneth J.

    2016-11-01

    Optical astrometry of 12 fields containing quasi-stellar objects (QSOs) is presented. The targets are radio sources in the International Celestial Reference Frame with accurate radio positions that also have optical counterparts. The data are used to test several quantities: the internal precision of the relative optical astrometry, the relative parallaxes and proper motions, the procedures to correct from relative to absolute parallax and proper motion, the accuracy of the absolute parallaxes and proper motions, and the stability of the optical photocenters for these optically variable QSOs. For these 12 fields, the mean error in absolute parallax is 0.38 mas and the mean error in each coordinate of absolute proper motion is 1.1 mas yr‑1. The results yield a mean absolute parallax of ‑0.03 ± 0.11 mas. For 11 targets, we find no significant systematic motions of the photocenters at the level of 1–2 mas over the 10 years of this study; for one BL Lac object, we find a possible motion of 4 mas correlated with its brightness.

  11. Asymptotic Diffusion-Limit Accuracy of Sn Angular Differencing Schemes

    SciTech Connect

    Bailey, T S; Morel, J E; Chang, J H

    2009-11-05

    In a previous paper, Morel and Montry used a Galerkin-based diffusion analysis to define a particular weighted diamond angular discretization for Sn calculations in curvilinear geometries. The weighting factors were chosen to ensure that the Galerkin diffusion approximation was preserved, which eliminated the discrete-ordinates flux dip. It was also shown that the step and diamond angular differencing schemes, which both suffer from the flux dip, do not preserve the diffusion approximation in the Galerkin sense. In this paper we re-derive the Morel and Montry weighted diamond scheme using a formal asymptotic diffusion-limit analysis. The asymptotic analysis yields more information than the Galerkin analysis and demonstrates that the step and diamond schemes do in fact formally preserve the diffusion limit to leading order, while the Morel and Montry weighted diamond scheme preserves it to first order, which is required for full consistency in this limit. Nonetheless, the fact that the step and diamond differencing schemes preserve the diffusion limit to leading order suggests that the flux dip should disappear as the diffusion limit is approached for these schemes. Computational results are presented that confirm this conjecture. We further conjecture that preserving the Galerkin diffusion approximation is equivalent to preserving the asymptotic diffusion limit to first order.

  12. International normalised ratio (INR) measured on the CoaguChek S and XS compared with the laboratory for determination of precision and accuracy.

    PubMed

    Christensen, Thomas D; Larsen, Torben B; Jensen, Claus; Maegaard, Marianne; Sørensen, Benny

    2009-03-01

    Oral anticoagulation therapy is monitored by the use of the international normalised ratio (INR). Patients performing self-management estimate INR using a coagulometer, but studies have been partly flawed regarding the estimated precision and accuracy. The objective was to estimate the imprecision and accuracy for two different coagulometers (CoaguChek S and XS). Twenty-four patients treated with coumarin were prospectively followed for six weeks. INRs were analyzed weekly in duplicate on both coagulometers, and compared with results from the hospital laboratory. Statistical analysis included a Bland-Altman plot, 95% limits of agreement, the coefficient of variation (CV), and an analysis of variance using a mixed-effect model. Comparing 141 duplicate measurements (a total of 564 measurements) of INR, we found that the CoaguChek S and CoaguChek XS had a precision (CV) of 3.4% and 2.3%, respectively. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers, and regarding diagnostic accuracy the CoaguChek S and CoaguChek XS deviated more than 15% from the laboratory measurements in 40% and 43% of the measurements, respectively. In conclusion, the precision of the coagulometers was found to be good, but only the CoaguChek XS had a precision within the predefined limit of 3%. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers, compared to the laboratory. A large proportion of measurements from the coagulometers deviated more than 15% from the laboratory measurements. Whether this will have a clinical impact awaits further studies.
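
    The statistics quoted above (a CV from duplicate measurements and Bland-Altman agreement against the laboratory) can be illustrated with a few lines of arithmetic. A hedged sketch with invented INR values is shown below; the within-subject CV formula is one common choice for duplicates, whereas the paper's mixed-model analysis of variance is more involved.

```python
import numpy as np

# Hypothetical duplicate INR readings on a coagulometer and matching laboratory INRs
dup1 = np.array([2.4, 3.1, 2.0, 2.8, 3.5])
dup2 = np.array([2.5, 3.0, 2.1, 2.7, 3.6])
lab  = np.array([2.6, 3.0, 2.2, 2.8, 3.8])

# Within-subject CV from duplicates (root-mean-square of pairwise CVs)
pair_mean = (dup1 + dup2) / 2
pair_sd   = np.abs(dup1 - dup2) / np.sqrt(2)       # SD of a duplicate pair
cv_percent = 100 * np.sqrt(np.mean((pair_sd / pair_mean) ** 2))

# Bland-Altman comparison of the device mean against the laboratory
diff = pair_mean - lab
bias = diff.mean()
loa  = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"precision (CV): {cv_percent:.1f}%")
print(f"bias vs laboratory: {bias:+.2f} INR, 95% limits of agreement {loa[0]:+.2f} to {loa[1]:+.2f}")
```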

  13. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  14. Precision limits of the twin-beam multiband URSULA

    NASA Technical Reports Server (NTRS)

    Debiase, G. A.; Paterno, L.; Fedel, B.; Santagati, G.; Ventura, R.

    1988-01-01

    URSULA is a multiband astronomical photoelectric photometer which minimizes errors introduced by the presence of the atmosphere. It operates with two identical channels, one for the star to be measured and the other for a reference star. After a technical description of the present version of the apparatus, measurements of stellar sources of different brightness, taken under different atmospheric conditions, are presented. These measurements, based on observations made with the 91 cm Cassegrain telescope of the Catania Astrophysical Observatory, are used to check the photometer accuracy and compare its performance with that of standard photometers.

  15. The ultimate quantum limits on the accuracy of measurements

    NASA Technical Reports Server (NTRS)

    Yuen, Horace P.

    1992-01-01

    A quantum generalization of rate-distortion theory from standard communication and information theory is developed for application to determining the ultimate performance limit of measurement systems in physics. For the estimation of a real or a phase parameter, it is shown that the root-mean-square error obtained in a measurement with a single-mode photon level N cannot do better than approximately 1/N, while approximately exp(-N) may be obtained for multi-mode fields with the same photon level N. Possible ways to achieve the remarkable exponential performance are indicated.
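
    The two scaling regimes quoted above can be written explicitly. As a hedged paraphrase in standard notation (assumed here, not taken from the paper), with δ denoting the root-mean-square estimation error and N the photon level:

```latex
% Scaling of the RMS estimation error with photon level N
\delta_{\text{single-mode}} \;\gtrsim\; \frac{1}{N},
\qquad
\delta_{\text{multi-mode}} \;\gtrsim\; e^{-N}
```

    For the same photon budget, a multi-mode strategy can thus in principle reduce the error exponentially in N rather than only polynomially.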

  16. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is a common practice in the world at both a commercial and academic research level. Over the last ten years, Continuous Operating Reference Stations (CORSs) networks have been established in order to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments in order to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas if a CORSs network is considered. This work builds on the research described above, focusing in particular on the usefulness of single-frequency permanent stations in order to thicken the existing CORSs, especially for monitoring purposes. Two different types of CORSs network are available today in Italy: the first one is the so-called "regional network" and the second one is the "national network", where the mean inter-station distances are about 25/30 and 50/70 km respectively. These distances are useful for many applications (e.g. mobile mapping) if geodetic instruments are considered but become less useful if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers introduces some limitations, especially due to the limited usable baseline length and the need to fix the phase ambiguities correctly both for the network and for the rover. These factors play a crucial role in reaching a positioning with a good level of accuracy (centimetric or better) in a short time and with high reliability. The goal of this work is to investigate the

  17. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  18. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and criticized. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Under the limitation of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  20. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting sub-Millimeter Limb Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As the first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. However, O3 chemistry can be studied purely in the mesosphere, without the complications of heterogeneous chemistry and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km, to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between the predictions from the existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japanese Aerospace eXploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) and is located at the Japanese Experiment Module (JEM) on the International Space Station (ISS). SMILES has successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range of 38S to 65N from October 2009 to April 2010. A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  1. Force sensors with precision beyond the standard quantum limit

    NASA Astrophysics Data System (ADS)

    Ivanov, Peter A.

    2016-08-01

    We propose force sensing protocols using a linear ion chain which can operate beyond the standard quantum limit. We show that oscillating forces that are off resonance with the motional trap frequency can be detected very efficiently by using quantum probes represented by various spin-boson models. We demonstrate that the temporal evolution of a quantum probe described by the Dicke model can be mapped onto nonlinear Ramsey interferometry, which allows us to detect far-detuned forces simply by measuring the collective spin populations. Moreover, we show that the measurement uncertainty can reach the Heisenberg limit by using initial spin-correlated states, instead of motional entangled states. An important advantage of the sensing technique is its natural robustness against thermally induced dephasing, which extends the coherence time of the measurement protocol. Furthermore, we introduce a sensing scheme that utilizes the strong spin-phonon coupling to improve the force estimation. We show that for a quantum probe represented by the quantum Rabi model the force sensitivity can surpass that achieved by a simple harmonic oscillator force sensor.

  2. Performance of Airborne Precision Spacing Under Realistic Wind Conditions and Limited Surveillance Range

    NASA Technical Reports Server (NTRS)

    Wieland, Frederick; Santos, Michel; Krueger, William; Houston, Vincent E.

    2011-01-01

    With the expected worldwide increase of air traffic during the coming decade, both the Federal Aviation Administration's (FAA's) Next Generation Air Transportation System (NextGen), as well as Eurocontrol's Single European Sky ATM Research (SESAR) program have, as part of their plans, air traffic management (ATM) solutions that can increase performance without requiring time-consuming and expensive infrastructure changes. One such solution involves the ability of both controllers and flight crews to deliver aircraft to the runway with greater accuracy than they can today. Previous research has shown that time-based spacing techniques, wherein the controller assigns a time spacing to each pair of arriving aircraft, can achieve this goal by providing greater runway delivery accuracy and producing a concomitant increase in system-wide performance. The research described herein focuses on one specific application of time-based spacing, called Airborne Precision Spacing (APS), which has evolved over the past ten years. This research furthers APS understanding by studying its performance with realistic wind conditions obtained from atmospheric sounding data and with realistic wind forecasts obtained from the Rapid Update Cycle (RUC) short-range weather forecast. In addition, this study investigates APS performance with limited surveillance range, as provided by the Automatic Dependent Surveillance-Broadcast (ADS-B) system, and with an algorithm designed to improve APS performance when ADS-B surveillance data is unavailable. The results presented herein quantify the runway threshold delivery accuracy of APS under these conditions, and also quantify resulting workload metrics such as the number of speed changes required to maintain spacing.

  3. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

    Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following an 89.7 nCi (100 μg) intravenous (iv) dose of (14)C-SCH 900518 2 h post 200 mg oral administration of nonradiolabeled SCH 900518 to six healthy male subjects has been described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative plasma (14)C-SCH 900518 concentration determination. Calibration standards and quality controls were included for every batch of sample analysis by AMS to ensure acceptable quality of the assay. Plasma (14)C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from isotopic ratios from AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range from 0.0159 to 9.07 dpm/mL for plasma (14)C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of iv and oral doses after the plasma concentrations were plotted vs the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%). The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these
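
    The abstract above stresses that plasma concentrations were back-calculated through a regression on calibration standards rather than read directly from isotope ratios, and that QC accuracy and CV were then judged against bioanalytical acceptance criteria. A minimal sketch of that workflow is shown below; all numbers are invented, and a validated assay would typically use weighted regression and predefined acceptance limits.

```python
import numpy as np

# Hypothetical calibration standards: nominal concentration (dpm/mL) vs AMS response
nominal  = np.array([0.016, 0.05, 0.2, 0.8, 3.0, 9.0])
response = np.array([0.017, 0.049, 0.21, 0.79, 3.05, 8.9])

slope, intercept = np.polyfit(nominal, response, 1)      # linear calibration function

def back_calculate(measured_response):
    """Convert an AMS response back to a concentration via the calibration line."""
    return (measured_response - intercept) / slope

# Hypothetical quality-control samples measured in replicate
qc_nominal  = 0.8
qc_response = np.array([0.82, 0.78, 0.80, 0.84, 0.79])
qc_conc     = back_calculate(qc_response)

accuracy_percent = 100 * qc_conc.mean() / qc_nominal
cv_percent       = 100 * qc_conc.std(ddof=1) / qc_conc.mean()
print(f"QC accuracy {accuracy_percent:.1f}% of nominal, CV {cv_percent:.1f}%")
```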

  4. Multifluorophore localization as a percolation problem: limits to density and precision.

    PubMed

    Small, Alex

    2016-07-01

    We show that the maximum desirable density of activated fluorophores in a superresolution experiment can be determined by treating the overlapping point spread functions as a problem in percolation theory. We derive a bound on the density of activated fluorophores, taking into account the desired localization accuracy and precision, as well as the number of photons emitted. Our bound on density is close to that reported in experimental work, suggesting that further increases in the density of imaged fluorophores will come at the expense of localization accuracy and precision. PMID:27409704

  5. Accuracy and precision of total mixed rations fed on commercial dairy farms.

    PubMed

    Sova, A D; LeBlanc, S J; McBride, B W; DeVries, T J

    2014-01-01

    Despite the significant time and effort spent formulating total mixed rations (TMR), it is evident that the ration delivered by the producer and that consumed by the cow may not accurately reflect that originally formulated. The objectives of this study were to (1) determine how TMR fed agrees with or differs from TMR formulation (accuracy), (2) determine daily variability in physical and chemical characteristics of TMR delivered (precision), and (3) investigate the relationship between daily variability in ration characteristics and group-average measures of productivity [dry matter intake (DMI), milk yield, milk components, efficiency, and feed sorting] on commercial dairy farms. Twenty-two commercial freestall herds were visited for 7 consecutive days in both summer and winter months. Fresh and refusal feed samples were collected daily to assess particle size distribution, dry matter, and chemical composition. Milk test data, including yield, fat, and protein were collected from a coinciding Dairy Herd Improvement test. Multivariable mixed-effect regression models were used to analyze associations between productivity measures and daily ration variability, measured as coefficient of variation (CV) over 7d. The average TMR [crude protein=16.5%, net energy for lactation (NEL) = 1.7 Mcal/kg, nonfiber carbohydrates = 41.3%, total digestible nutrients = 73.3%, neutral detergent fiber=31.3%, acid detergent fiber=20.5%, Ca = 0.92%, p=0.42%, Mg = 0.35%, K = 1.45%, Na = 0.41%] delivered exceeded TMR formulation for NEL (+0.05 Mcal/kg), nonfiber carbohydrates (+1.2%), acid detergent fiber (+0.7%), Ca (+0.08%), P (+0.02%), Mg (+0.02%), and K (+0.04%) and underfed crude protein (-0.4%), neutral detergent fiber (-0.6%), and Na (-0.1%). Dietary measures with high day-to-day CV were average feed refusal rate (CV = 74%), percent long particles (CV = 16%), percent medium particles (CV = 7.7%), percent short particles (CV = 6.1%), percent fine particles (CV = 13%), Ca (CV = 7

  6. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  7. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    SciTech Connect

Ballhausen, Hendrik; Hieber, Sheila; Li, Minglun; Belka, Claus; Reiner, Michael

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
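
    The 1.05 mm figure quoted above is the quadrature sum of the individual error sources listed in the abstract, and the ~3 mm margin follows from inserting systematic and random components into a margin recipe. The sketch below reproduces the quadrature sum from the quoted numbers; the margin line uses the common 2.5Σ + 0.7σ recipe and an assumed random component as illustrations, since the abstract does not state which recipe or values were applied.

```python
import math

# Error sources quoted in the abstract (mm)
sources_mm = {
    "phantom-to-laser alignment": 0.78,
    "treatment vs planning room lasers": 0.51,
    "3D US probe calibration and tracking": 0.49,
    "IR camera to phantom alignment": 0.03,
}

radial_mm = math.sqrt(sum(v**2 for v in sources_mm.values()))
print(f"combined radial error: {radial_mm:.2f} mm")      # ~1.05 mm, as stated above

# Illustrative margin, assuming the popular 2.5*Sigma + 0.7*sigma recipe, with the
# combined value taken as the systematic component and an assumed random component.
sigma_sys  = radial_mm
sigma_rand = 0.5                                          # assumed random component (mm)
margin_mm  = 2.5 * sigma_sys + 0.7 * sigma_rand
print(f"illustrative CTV-to-PTV margin: {margin_mm:.1f} mm")
```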

  8. Uncertainty in the Timing of Origin of Animals and the Limits of Precision in Molecular Timescales.

    PubMed

    dos Reis, Mario; Thawornwattana, Yuttapong; Angelis, Konstantinos; Telford, Maximilian J; Donoghue, Philip C J; Yang, Ziheng

    2015-11-16

    The timing of divergences among metazoan lineages is integral to understanding the processes of animal evolution, placing the biological events of species divergences into the correct geological timeframe. Recent fossil discoveries and molecular clock dating studies have suggested a divergence of bilaterian phyla >100 million years before the Cambrian, when the first definite crown-bilaterian fossils occur. Most previous molecular clock dating studies, however, have suffered from limited data and biases in methodologies, and virtually all have failed to acknowledge the large uncertainties associated with the fossil record of early animals, leading to inconsistent estimates among studies. Here we use an unprecedented amount of molecular data, combined with four fossil calibration strategies (reflecting disparate and controversial interpretations of the metazoan fossil record) to obtain Bayesian estimates of metazoan divergence times. Our results indicate that the uncertain nature of ancient fossils and violations of the molecular clock impose a limit on the precision that can be achieved in estimates of ancient molecular timescales. For example, although we can assert that crown Metazoa originated during the Cryogenian (with most crown-bilaterian phyla diversifying during the Ediacaran), it is not possible with current data to pinpoint the divergence events with sufficient accuracy to test for correlations between geological and biological events in the history of animals. Although a Cryogenian origin of crown Metazoa agrees with current geological interpretations, the divergence dates of the bilaterians remain controversial. Thus, attempts to build evolutionary narratives of early animal evolution based on molecular clock timescales appear to be premature.

  9. Accuracy and precisions of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

    Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion/optimisation approach, a forward model is used to simulate the water reflectance spectra from a set of parameters, and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper, Particle Swarm Optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depend on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band-specific, empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality for freshwater impoundments.

  10. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combined with the advantages of the PPLTP and the autocollimator ELCOMAT of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, along with our optimized noise-reduction analytic method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.

  11. Simulations of thermally transferred OSL signals in quartz: Accuracy and precision of the protocols for equivalent dose evaluation

    NASA Astrophysics Data System (ADS)

    Pagonis, Vasilis; Adamiec, Grzegorz; Athanassas, C.; Chen, Reuven; Baker, Atlee; Larsen, Meredith; Thompson, Zachary

    2011-06-01

    Thermally-transferred optically stimulated luminescence (TT-OSL) signals in sedimentary quartz have been the subject of several recent studies, due to the potential shown by these signals to increase the range of luminescence dating by an order of magnitude. Based on these signals, a single aliquot protocol termed the ReSAR protocol has been developed and tested experimentally. This paper presents extensive numerical simulations of this ReSAR protocol. The purpose of the simulations is to investigate several aspects of the ReSAR protocol which are believed to cause difficulties during application of the protocol. Furthermore, several modified versions of the ReSAR protocol are simulated, and their relative accuracy and precision are compared. The simulations are carried out using a recently published kinetic model for quartz, consisting of 11 energy levels. One hundred random variants of the natural samples were generated by keeping the transition probabilities between energy levels fixed, while allowing simultaneous random variations of the concentrations of the 11 energy levels. The relative intrinsic accuracy and precision of the protocols are simulated by calculating the equivalent dose (ED) within the model, for a given natural burial dose of the sample. The complete sequence of steps undertaken in several versions of the dating protocols is simulated. The relative intrinsic precision of these techniques is estimated by fitting Gaussian probability functions to the resulting simulated distribution of ED values. New simulations are presented for commonly used OSL sensitivity tests, consisting of successive cycles of sample irradiation with the same dose, followed by measurements of the sensitivity corrected L/T signals. We investigate several experimental factors which may be affecting both the intrinsic precision and intrinsic accuracy of the ReSAR protocol. The results of the simulation show that the four different published versions of the ReSAR protocol can

  12. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  13. Uncertainty in the Timing of Origin of Animals and the Limits of Precision in Molecular Timescales

    PubMed Central

    dos Reis, Mario; Thawornwattana, Yuttapong; Angelis, Konstantinos; Telford, Maximilian J.; Donoghue, Philip C.J.; Yang, Ziheng

    2015-01-01

    Summary The timing of divergences among metazoan lineages is integral to understanding the processes of animal evolution, placing the biological events of species divergences into the correct geological timeframe. Recent fossil discoveries and molecular clock dating studies have suggested a divergence of bilaterian phyla >100 million years before the Cambrian, when the first definite crown-bilaterian fossils occur. Most previous molecular clock dating studies, however, have suffered from limited data and biases in methodologies, and virtually all have failed to acknowledge the large uncertainties associated with the fossil record of early animals, leading to inconsistent estimates among studies. Here we use an unprecedented amount of molecular data, combined with four fossil calibration strategies (reflecting disparate and controversial interpretations of the metazoan fossil record) to obtain Bayesian estimates of metazoan divergence times. Our results indicate that the uncertain nature of ancient fossils and violations of the molecular clock impose a limit on the precision that can be achieved in estimates of ancient molecular timescales. For example, although we can assert that crown Metazoa originated during the Cryogenian (with most crown-bilaterian phyla diversifying during the Ediacaran), it is not possible with current data to pinpoint the divergence events with sufficient accuracy to test for correlations between geological and biological events in the history of animals. Although a Cryogenian origin of crown Metazoa agrees with current geological interpretations, the divergence dates of the bilaterians remain controversial. Thus, attempts to build evolutionary narratives of early animal evolution based on molecular clock timescales appear to be premature. PMID:26603774

  14. Note: electronic circuit for two-way time transfer via a single coaxial cable with picosecond accuracy and precision.

    PubMed

    Prochazka, Ivan; Kodet, Jan; Panek, Petr

    2012-11-01

    We have designed, constructed, and tested the overall performance of the electronic circuit for the two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The concept of the electronic circuit enables time tagging of pulses of interest to be carried out in parallel with the comparison of the time scales of these timing devices. The key timing parameters of the circuit are: a temperature coefficient of the delay below 100 fs/K, a timing stability (time deviation) better than 8 fs for averaging times from minutes to hours, sub-picosecond time transfer precision, and a time transfer accuracy of a few picoseconds.

  15. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

    Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  16. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  17. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  18. Probing the limits of accuracy in electronic structure calculations: is theory capable of results uniformly better than "chemical accuracy"?

    PubMed

    Feller, David; Peterson, Kirk A

    2007-03-21

    Current limitations in electronic structure methods are discussed from the perspective of their potential to contribute to inherent uncertainties in predictions of molecular properties, with an emphasis on atomization energies (or heats of formation). The practical difficulties arising from attempts to achieve high accuracy are illustrated via two case studies: the carbon dimer (C2) and the hydroperoxyl radical (HO2). While the HO2 wave function is dominated by a single configuration, the carbon dimer involves considerable multiconfigurational character. In addition to these two molecules, statistical results will be presented for a much larger sample of molecules drawn from the Computational Results Database. The goal of this analysis will be to determine if a combination of coupled cluster theory with large 1-particle basis sets and careful incorporation of several computationally expensive smaller corrections can yield uniform agreement with experiment to better than "chemical accuracy" (±1 kcal/mol). In the case of HO2, the best current theoretical estimate of the zero-point-inclusive, spin-orbit corrected atomization energy (ΣD0 = 166.0 ± 0.3 kcal/mol) and the most recent Active Thermochemical Table (ATcT) value (165.97 ± 0.06 kcal/mol) are in excellent agreement. For C2 the agreement is only slightly poorer, with theory (D0 = 143.7 ± 0.3 kcal/mol) almost encompassing the most recent ATcT value (144.03 ± 0.13 kcal/mol). For a larger collection of 68 molecules, a mean absolute deviation of 0.3 kcal/mol was found. The same high level of theory that produces good agreement for atomization energies also appears capable of predicting bond lengths to an accuracy of ±0.001 Å. PMID:17381194

  19. Probing the limits of accuracy in electronic structure calculations: Is theory capable of results uniformly better than ``chemical accuracy''?

    NASA Astrophysics Data System (ADS)

    Feller, David; Peterson, Kirk A.

    2007-03-01

    Current limitations in electronic structure methods are discussed from the perspective of their potential to contribute to inherent uncertainties in predictions of molecular properties, with an emphasis on atomization energies (or heats of formation). The practical difficulties arising from attempts to achieve high accuracy are illustrated via two case studies: the carbon dimer (C2) and the hydroperoxyl radical (HO2). While the HO2 wave function is dominated by a single configuration, the carbon dimer involves considerable multiconfigurational character. In addition to these two molecules, statistical results will be presented for a much larger sample of molecules drawn from the Computational Results Database. The goal of this analysis will be to determine if a combination of coupled cluster theory with large 1-particle basis sets and careful incorporation of several computationally expensive smaller corrections can yield uniform agreement with experiment to better than "chemical accuracy" (±1 kcal/mol). In the case of HO2, the best current theoretical estimate of the zero-point-inclusive, spin-orbit corrected atomization energy (ΣD0 = 166.0 ± 0.3 kcal/mol) and the most recent Active Thermochemical Table (ATcT) value (165.97 ± 0.06 kcal/mol) are in excellent agreement. For C2 the agreement is only slightly poorer, with theory (D0 = 143.7 ± 0.3 kcal/mol) almost encompassing the most recent ATcT value (144.03 ± 0.13 kcal/mol). For a larger collection of 68 molecules, a mean absolute deviation of 0.3 kcal/mol was found. The same high level of theory that produces good agreement for atomization energies also appears capable of predicting bond lengths to an accuracy of ±0.001 Å.

  20. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. Roadblocks to this vision of integration and interoperability include ethical, legal

  1. Precise and Continuous Time and Frequency Synchronisation at the 5×10-19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385

  2. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
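
    A minimal sketch of the "regress out pupil size" correction described above, assuming a simple linear coupling between pupil diameter and the gaze estimate; the traces are synthetic and the single-regressor model is an illustration, not the authors' exact pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    # Hypothetical pupil diameter (mm) drifting slowly, and a small true fixational gaze signal (deg).
    pupil = 3.0 + 0.5 * np.sin(np.linspace(0, 20, n)) + 0.05 * rng.standard_normal(n)
    true_gaze = 0.05 * rng.standard_normal(n)
    # Measured gaze contaminated by a pupil-size artifact.
    gaze = true_gaze + 0.8 * (pupil - pupil.mean())

    # Fit gaze ~ a*pupil + b and keep the residuals as the corrected gaze trace.
    a, b = np.polyfit(pupil, gaze, 1)
    gaze_corrected = gaze - (a * pupil + b)

    print("RMS error before correction:", np.sqrt(np.mean((gaze - true_gaze) ** 2)))
    print("RMS error after correction: ", np.sqrt(np.mean((gaze_corrected - true_gaze) ** 2)))
    ```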

  3. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924

  4. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computer tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  5. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  6. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  7. Precision of high-resolution multibeam echo sounding coupled with high-accuracy positioning in a shallow water coastal environment

    NASA Astrophysics Data System (ADS)

    Ernstsen, Verner B.; Noormets, Riko; Hebbeln, Dierk; Bartholomä, Alex; Flemming, Burg W.

    2006-09-01

    Over 4 years, repetitive bathymetric measurements of a shipwreck in the Grådyb tidal inlet channel in the Danish Wadden Sea were carried out using a state-of-the-art high-resolution multibeam echosounder (MBES) coupled with a real-time long range kinematic (LRK™) global positioning system. Seven measurements during a single survey in 2003 (n=7) revealed a horizontal and vertical precision of the MBES system of ±20 and ±2 cm, respectively, at a 95% confidence level. By contrast, four annual surveys from 2002 to 2005 (n=4) yielded a horizontal and vertical precision (at 95% confidence level) of only ±30 and ±8 cm, respectively. This difference in precision can be explained by three main factors: (1) the dismounting of the system between the annual surveys, (2) rougher sea conditions during the survey in 2004, and (3) the limited number of annual surveys. In general, the precision achieved here did not correspond to the full potential of the MBES system, as this could certainly have been improved by an increase in coverage density (soundings/m²), achievable by reducing the survey speed of the vessel. Nevertheless, precision was higher than that reported to date for earlier offshore test surveys using comparable equipment.
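
    The ±20 cm / ±2 cm precision figures quoted above (95% confidence, n=7 repeat measurements) correspond roughly to twice the standard deviation of the repeated positions; a hedged sketch with invented coordinates follows.

    ```python
    import numpy as np

    # Hypothetical repeated positions (easting, northing, depth in metres) of the
    # same wreck feature from n = 7 within-survey measurements.
    positions = np.array([
        [463210.12, 6148522.30, -10.42],
        [463210.25, 6148522.21, -10.44],
        [463210.05, 6148522.35, -10.41],
        [463210.18, 6148522.28, -10.43],
        [463210.09, 6148522.40, -10.45],
        [463210.21, 6148522.25, -10.40],
        [463210.15, 6148522.33, -10.42],
    ])

    sd = positions.std(axis=0, ddof=1)
    horizontal_95 = 1.96 * np.hypot(sd[0], sd[1])   # combined E/N spread at ~95% confidence
    vertical_95 = 1.96 * sd[2]
    print(f"95% horizontal precision: ±{horizontal_95 * 100:.0f} cm")
    print(f"95% vertical precision:   ±{vertical_95 * 100:.0f} cm")
    ```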

  8. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
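
    A generic sketch of the Model II (random-effects) one-way ANOVA approach described above, applied to hypothetical performance quotients from accreditation testing; the group layout, the values, and the k=2 coverage factor are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np

    # Hypothetical accreditation data: performance quotients P = (reported - delivered)/delivered
    # for dosimeters irradiated in several test groups (rows = groups, columns = replicates).
    P = np.array([
        [0.03, 0.05, 0.02, 0.06],
        [0.08, 0.06, 0.09, 0.07],
        [0.04, 0.02, 0.05, 0.03],
        [0.06, 0.07, 0.05, 0.08],
    ])
    k, n = P.shape                      # number of groups, replicates per group

    grand_mean = P.mean()
    group_means = P.mean(axis=1)

    ms_between = n * np.sum((group_means - grand_mean) ** 2) / (k - 1)
    ms_within = np.sum((P - group_means[:, None]) ** 2) / (k * (n - 1))

    var_between = max((ms_between - ms_within) / n, 0.0)   # Model II variance component
    var_total = var_between + ms_within

    bias = grand_mean                                      # systematic component ("accuracy")
    expanded_u = 2.0 * np.sqrt(var_total)                  # ~95% coverage with factor k = 2
    print(f"bias = {bias:.3f}, expanded uncertainty (95%) = {expanded_u:.3f}")
    ```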

  9. A method of determining accuracy and precision for dosimeter systems using accreditation data.

    PubMed

    Cummings, Frederick; Flood, John R

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively. PMID:21068596

  10. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebrae and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392
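
    Accuracy (bias) and precision (SD of bias) as used above reduce to the mean and standard deviation of the IMU-minus-force-plate timing differences; a short sketch with invented hoof-on times:

    ```python
    import numpy as np

    # Hypothetical hoof-on times (ms) for the same steps from the force plate
    # (reference) and from the limb-mounted IMU detection algorithm.
    t_forceplate = np.array([1032.0, 1541.0, 2050.0, 2563.0, 3071.0, 3580.0])
    t_imu        = np.array([1025.0, 1536.0, 2048.0, 2555.0, 3060.0, 3575.0])

    diff = t_imu - t_forceplate
    bias = diff.mean()             # accuracy: systematic timing offset
    precision = diff.std(ddof=1)   # precision: spread (SD) of the offsets
    print(f"hoof-on bias = {bias:.1f} ms, precision (SD) = {precision:.1f} ms")
    ```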

  11. Limitations on long-term stability and accuracy in atomic clocks

    NASA Technical Reports Server (NTRS)

    Wineland, D. J.

    1979-01-01

    The limits to accuracy and long term stability in present atomic clocks are examined. Recent proposals for new frequency standards are discussed along with the advantages and disadvantages of frequency standards based on such ideas as laser transitions, single atoms, and atom cooling. The applicability of some of these new techniques to existing standards is examined.

  12. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. Given the
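
    The accuracy (percent error against the reference method) and precision (spread of the quintuplicate readings) reported above can be computed as in the sketch below; the readings and reference value are invented, and the coefficient of variation is used here as one plausible precision metric.

    ```python
    import numpy as np

    # Hypothetical quintuplicate readings (% free chlorine) of a nominal 0.50%
    # solution from one test method, with the laboratory reference value.
    reference = 0.50
    readings = np.array([0.47, 0.49, 0.46, 0.48, 0.50])

    percent_error = 100 * abs(readings.mean() - reference) / reference   # accuracy
    cv = 100 * readings.std(ddof=1) / readings.mean()                    # precision
    print(f"accuracy: {percent_error:.1f}% error; precision: {cv:.1f}% CV")
    ```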

  13. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. Given the

  14. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration

  15. Limits to the precision of gradient sensing with spatial communication and temporal integration

    PubMed Central

    Mugler, Andrew; Levchenko, Andre; Nemenman, Ilya

    2016-01-01

    Gradient sensing requires at least two measurements at different points in space. These measurements must then be communicated to a common location to be compared, which is unavoidably noisy. Although much is known about the limits of measurement precision by cells, the limits placed by the communication are not understood. Motivated by recent experiments, we derive the fundamental limits to the precision of gradient sensing in a multicellular system, accounting for communication and temporal integration. The gradient is estimated by comparing a “local” and a “global” molecular reporter of the external concentration, where the global reporter is exchanged between neighboring cells. Using the fluctuation–dissipation framework, we find, in contrast to the case when communication is ignored, that precision saturates with the number of cells independently of the measurement time duration, because communication establishes a maximum length scale over which sensory information can be reliably conveyed. Surprisingly, we also find that precision is improved if the local reporter is exchanged between cells as well, albeit more slowly than the global reporter. The reason is that whereas exchange of the local reporter weakens the comparison, it decreases the measurement noise. We term such a model “regional excitation–global inhibition.” Our results demonstrate that fundamental sensing limits are necessarily sharpened when the need to communicate information is taken into account. PMID:26792517

  16. Quantum limits on optical phase estimation accuracy from classical rate-distortion theory

    SciTech Connect

    Nair, Ranjith

    2014-12-04

    The classical information-theoretic lower bound on the distortion of a random variable upon transmission through a noisy channel is applied to quantum-optical phase estimation. An approach for obtaining Bayesian lower bounds on the phase estimation accuracy is described that employs estimates of the classical capacity of the relevant quantum-optical channels. The Heisenberg limit for lossless phase estimation is derived for arbitrary probe state and prior distributions of the phase, and shot-noise scaling of the phase accuracy is established in the presence of nonzero loss for a parallel entanglement-assisted strategy with a single probe mode.

  17. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

    Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, Cochrane Library of Clinical Trials, Scopus, and Web of Science from their inception to June 2014 assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of included studies were calculated using a random-effects model. The inter-study heterogeneity was also assessed using the I² statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min⁻¹ and 29.1%, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I² = 98.3%). In the sub-analysis regarding the device, electrical cardiometry showed the smallest bias (−0.03 l min⁻¹) and lowest percentage error (23.6%). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on weighted mean difference between reference and tested methods. Although the pooled bias was small, the mean pooled percentage error was in the gray zone of clinical applicability. In the sub-group analysis, electrical cardiometry was the device that provided the most accurate measurement. However, a high heterogeneity between studies was found, likely due to a wide range of study characteristics. PMID:26315477
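
    A compact sketch of random-effects pooling of per-study bias estimates, using the DerSimonian-Laird estimator and Higgins' I²; the study-level values are invented and this is a generic illustration, not a reproduction of the meta-analysis above.

    ```python
    import numpy as np

    # Hypothetical per-study bias estimates (l/min) and their variances for a
    # minimally-invasive CO monitor compared with a reference method.
    bias = np.array([0.10, 0.25, -0.05, 0.30, 0.12])
    var = np.array([0.010, 0.020, 0.015, 0.025, 0.012])
    k = len(bias)

    # Fixed-effect weights and Cochran's Q heterogeneity statistic.
    w = 1.0 / var
    mean_fixed = np.sum(w * bias) / np.sum(w)
    Q = np.sum(w * (bias - mean_fixed) ** 2)

    # DerSimonian-Laird between-study variance and I² heterogeneity.
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    I2 = max(0.0, (Q - (k - 1)) / Q) * 100

    # Random-effects pooled bias.
    w_re = 1.0 / (var + tau2)
    pooled = np.sum(w_re * bias) / np.sum(w_re)
    print(f"pooled bias = {pooled:.2f} l/min, I^2 = {I2:.0f}%")
    ```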

  18. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to simulate samples of three different ages, to be run as internal

  19. An experimental analysis of accuracy and precision of a high-speed strain-gage system based on the direct-resistance method

    NASA Astrophysics Data System (ADS)

    Cappa, P.; del Prete, Z.

    1992-03-01

    An experimental study on the relative merits of using a high-speed digital-acquisition system to measure the strain-gage resistance directly, rather than using a conventional Wheatstone bridge, is carried out. Strain gages with nominal resistances of 120 ohm and 1 kohm were simulated with precision resistors, and the output signals were acquired over periods of 48 and 144 hours; furthermore, the effects of statistical filtering on metrological performance were evaluated. The results show that statistical filtering yields a considerable improvement in the strain-gage-resistance readings. On the other hand, such a procedure obviously reduces the effective acquisition rate, and therefore the dynamic data-collecting capability. In any case, the intrinsic resolution of the 12-bit A/D converter used in the present experimental analysis limits the measurement accuracy to the range of hundreds of microns/m.

  20. Cascade impactor (CI) mensuration--an assessment of the accuracy and precision of commercially available optical measurement systems.

    PubMed

    Chambers, Frank; Ali, Aziz; Mitchell, Jolyon; Shelton, Christopher; Nichols, Steve

    2010-03-01

    Multi-stage cascade impactors (CIs) are the preferred measurement technique for characterizing the aerodynamic particle size distribution of an inhalable aerosol. Stage mensuration is the recommended pharmacopeial method for monitoring CI "fitness for purpose" within a GxP environment. The Impactor Sub-Team of the European Pharmaceutical Aerosol Group has undertaken an inter-laboratory study to assess both the precision and accuracy of a range of makes and models of instruments currently used for optical inspection of impactor stages. Measurement of two Andersen 8-stage 'non-viable' cascade impactor "reference" stages that were representative of jet sizes for this instrument type (stages 2 and 7) confirmed that all instruments evaluated were capable of reproducible jet measurement, with the overall capability being within the current pharmacopeial stage specifications for both stages. In the assessment of absolute accuracy, small but consistent differences (ca. 0.6% of the certified value) were observed between 'dots' and 'spots' of a calibrated chromium-plated reticule, most likely the result of treatment of partially lit pixels along the circumference of this calibration standard. Measurements of three certified ring gauges, the smallest having a nominal diameter of 1.0 mm, were consistent with the observation that treatment of partially illuminated pixels at the periphery of the projected image can result in undersizing. However, the bias was less than 1% of the certified diameter. The optical inspection instruments evaluated are fully capable of confirming cascade impactor suitability in accordance with pharmacopeial practice.

  1. Cancer Evolution and the Limits of Predictability in Precision Cancer Medicine

    PubMed Central

    Lipinski, Kamil A.; Barber, Louise J.; Davies, Matthew N.; Ashenden, Matthew; Sottoriva, Andrea; Gerlinger, Marco

    2016-01-01

    The ability to predict the future behavior of an individual cancer is crucial for precision cancer medicine. The discovery of extensive intratumor heterogeneity and ongoing clonal adaptation in human tumors substantiated the notion of cancer as an evolutionary process. Random events are inherent in evolution and tumor spatial structures hinder the efficacy of selection, which is the only deterministic evolutionary force. This review outlines how the interaction of these stochastic and deterministic processes, which have been extensively studied in evolutionary biology, limits cancer predictability and develops evolutionary strategies to improve predictions. Understanding and advancing the cancer predictability horizon is crucial to improve precision medicine outcomes. PMID:26949746

  2. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or they were taken continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
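
    The overall-accuracy rule quoted above (the square root of the sum of squares of the individual accuracy values) is easy to reproduce; the component values below are placeholders rather than the report's actual error budget.

    ```python
    import math

    # Hypothetical individual accuracy terms for one steel-tape measurement (feet),
    # e.g. tape calibration, reference comparison, and hold-point error.
    components = [0.078, 0.15, 0.30]

    overall = math.sqrt(sum(c ** 2 for c in components))   # root-sum-of-squares combination
    print(f"overall accuracy = {overall:.2f} ft")
    ```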

  3. Limiting Energy Dissipation Induces Glassy Kinetics in Single-Cell High-Precision Responses

    NASA Astrophysics Data System (ADS)

    Das, Jayajit

    2016-03-01

    Single cells often generate precise responses by involving dissipative out-of-thermodynamic equilibrium processes in signaling networks. The available free energy to fuel these processes could become limited depending on the metabolic state of an individual cell. How does limiting dissipation affect the kinetics of high precision responses in single cells? I address this question in the context of a kinetic proofreading scheme used in a simple model of early time T cell signaling. I show using exact analytical calculations and numerical simulations that limiting dissipation qualitatively changes the kinetics in single cells, marked by the emergence of slow kinetics, large cell-to-cell variations of copy numbers, temporally correlated stochastic events (dynamic facilitation), and ergodicity breaking. Thus, constraints in energy dissipation, in addition to negatively affecting ligand discrimination in T cells, can create a fundamental difficulty in interpreting single cell kinetics from cell population level results.

  4. Accounting for Limited Detection Efficiency and Localization Precision in Cluster Analysis in Single Molecule Localization Microscopy

    PubMed Central

    Shivanandan, Arun; Unnikrishnan, Jayakrishnan; Radenovic, Aleksandra

    2015-01-01

    Single Molecule Localization Microscopy techniques like PhotoActivated Localization Microscopy, with their sub-diffraction limit spatial resolution, have been popularly used to characterize the spatial organization of membrane proteins, by means of quantitative cluster analysis. However, such quantitative studies remain challenged by the techniques' inherent sources of error such as a limited detection efficiency of less than 60%, due to incomplete photo-conversion, and a limited localization precision in the range of 10–30 nm, varying across the detected molecules, mainly depending on the number of photons collected from each. We provide analytical methods to estimate the effect of these errors in cluster analysis and to correct for them. These methods, based on Ripley's L(r) – r or the Pair Correlation Function popularly used by the community, can facilitate potentially breakthrough results in quantitative biology by providing a more accurate and precise quantification of protein spatial organization. PMID:25794150
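
    For reference, a naive Ripley's L(r) − r estimate for a set of 2D localizations can be computed as below; this sketch ignores edge correction and does not implement the detection-efficiency or localization-precision corrections derived in the paper, and the simulated localizations and 20 nm error are assumptions.

    ```python
    import numpy as np

    def ripley_L_minus_r(points, radii, area):
        """Naive Ripley's K -> L(r) - r for 2D points (no edge correction)."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)                              # exclude self-pairs
        K = np.array([area * np.sum(d <= r) / (n * (n - 1)) for r in radii])
        L = np.sqrt(K / np.pi)
        return L - radii

    rng = np.random.default_rng(2)
    side = 1000.0                                    # nm, square region of interest
    pts = rng.uniform(0, side, size=(500, 2))        # hypothetical localizations (spatially random)
    pts += rng.normal(0, 20.0, size=pts.shape)       # add ~20 nm localization error

    radii = np.linspace(10, 200, 20)
    print(ripley_L_minus_r(pts, radii, side ** 2))   # ~0 expected for complete spatial randomness
    ```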

  5. On the accuracy of limiters and convergence to steady state solutions

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.

    1993-01-01

    This paper addresses the practical problem of obtaining convergence to steady state solutions when limiters are used in conjunction with upwind schemes on unstructured grids. The base scheme forms a gradient and limits it by imposing monotonicity conditions in the reconstruction stage. It is shown by analysis in one dimension that such an approach leads to various schemes meeting TVD requirements in one dimension. It is further shown that these formally second order accurate schemes are less than second order accurate in practice because of the action of the limiter function in smooth regions of the solution. Modifications are proposed to the limiter that restore the second order accuracy. In multiple dimensions these schemes produce steady state solutions that are monotone and devoid of oscillations. However, convergence stalls after a few orders of reduction in the residual. With the modified limiter, on the other hand, it is shown that converged steady state solutions can be obtained.
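
    A minimal sketch of a limited piecewise-linear (MUSCL-type) reconstruction with the classic minmod limiter, illustrating how the limiter clips slopes near smooth extrema, the mechanism blamed above for the loss of second-order accuracy; the grid, data, and limiter choice are illustrative, not the paper's exact scheme.

    ```python
    import numpy as np

    def minmod(a, b):
        """Classic minmod slope limiter: zero at extrema, the smaller-magnitude slope otherwise."""
        return np.where(a * b <= 0.0, 0.0, np.where(np.abs(a) < np.abs(b), a, b))

    # Piecewise-linear reconstruction of cell data u on a uniform 1-D grid.
    x = np.linspace(0.0, 1.0, 11)
    u = np.sin(2 * np.pi * x)                       # smooth test data
    dx = x[1] - x[0]

    slope_left = np.diff(u)[:-1] / dx               # backward differences for interior cells
    slope_right = np.diff(u)[1:] / dx               # forward differences for interior cells
    limited_slope = minmod(slope_left, slope_right)

    # Right-face values used by an upwind scheme; near extrema the limiter reduces the slope
    # to zero, which is exactly what degrades accuracy in smooth regions of the solution.
    u_face_right = u[1:-1] + 0.5 * dx * limited_slope
    print(u_face_right)
    ```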

  6. An evaluation of the accuracy and precision of a stand-alone submersible continuous ruminal pH measurement system.

    PubMed

    Penner, G B; Beauchemin, K A; Mutsvangwa, T

    2006-06-01

    The objectives of this study were 1) to develop and evaluate the accuracy and precision of a new stand-alone submersible continuous ruminal pH measurement system called the Lethbridge Research Centre ruminal pH measurement system (LRCpH; Experiment 1); 2) to establish the accuracy and precision of a well-documented, previously used continuous indwelling ruminal pH system (CIpH) to ensure that the new system (LRCpH) was as accurate and precise as the previous system (CIpH; Experiment 2); and 3) to determine the required frequency for pH electrode standardization by comparing baseline millivolt readings of pH electrodes in pH buffers 4 and 7 after 0, 24, 48, and 72 h of ruminal incubation (Experiment 3). In Experiment 1, 6 pregnant Holstein heifers, 3 lactating, primiparous Holstein cows, and 2 Black Angus heifers were used. All experimental animals were fitted with permanent ruminal cannulas. In Experiment 2, the 3 cannulated, lactating, primiparous Holstein cows were used. In both experiments, ruminal pH was determined continuously using indwelling pH electrodes. Subsequently, mean pH values were then compared with ruminal pH values obtained using spot samples of ruminal fluid (MANpH) obtained at the same time. A correlation coefficient accounting for repeated measures was calculated and results were used to calculate the concordance correlation to examine the relationships between the LRCpH-derived values and MANpH, and the CIpH-derived values and MANpH. In Experiment 3, the 6 pregnant Holstein heifers were used along with 6 new submersible pH electrodes. In Experiments 1 and 2, the comparison of the LRCpH output (1- and 5-min averages) to MANpH had higher correlation coefficients after accounting for repeated measures (0.98 and 0.97 for 1- and 5-min averages, respectively) and concordance correlation coefficients (0.96 and 0.97 for 1- and 5-min averages, respectively) than the comparison of CIpH to MANpH (0.88 and 0.87, correlation coefficient and concordance
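
    The concordance correlation coefficient used above to compare continuous and spot-sample pH can be computed in a few lines (Lin's formula); the paired pH values below are invented and the simple version shown here omits the repeated-measures adjustment mentioned in the abstract.

    ```python
    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient between two measurement series."""
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()          # population variances, as in Lin (1989)
        cov = np.mean((x - mx) * (y - my))
        return 2 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical paired ruminal pH values: indwelling-electrode averages vs. spot samples.
    lrc_ph = np.array([6.45, 6.12, 5.98, 6.30, 6.55, 6.05, 6.20, 6.40])
    man_ph = np.array([6.42, 6.15, 6.01, 6.28, 6.50, 6.08, 6.18, 6.37])
    print(f"CCC = {concordance_ccc(lrc_ph, man_ph):.3f}")
    ```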

  7. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  9. Tracking gaze while walking on a treadmill: spatial accuracy and limits of use of a stationary remote eye-tracker.

    PubMed

    Serchi, V; Peruzzi, A; Cereatti, A; Della Croce, U

    2014-01-01

    Inaccurate visual sampling and foot placement may lead to unsafe walking. Virtual environments that challenge obstacle negotiation may be used to investigate the relationship between the point of gaze and stepping accuracy. A measurement of the point of gaze during walking can be obtained using a remote eye-tracker. The assessment of its performance and limits of applicability is essential to define the areas of interest in a virtual environment and to collect information for the analysis of the visual strategy. The current study aims to characterize a remote eye-tracker in static and dynamic conditions. Three different conditions were analyzed: a) looking at a single stimulus during selected head movements; b) looking at multiple stimuli distributed on the screen from different distances; and c) looking at multiple stimuli distributed on the screen while walking. The eye-tracker measured the point of gaze during head motion along the medio-lateral and vertical directions with an accuracy consistent with the device specifications, whereas tracking during head motion along the anterior-posterior direction fell short of the device specifications. During head rotation around the vertical direction, the error of the point of gaze was lower than 23 mm. The best accuracy (10 mm) was achieved, consistent with the device specifications, in the static condition performed at 650 mm from the eye-tracker, whereas point-of-gaze data were lost when moving closer to the eye-tracker. In general, the accuracy and precision of the point of gaze did not appear to be related to the stimulus position. During fast walking (1.1 m/s), the eye-tracker did not lose any data, since the head range of motion was always within the ranges of trackability. The values of accuracy and precision during walking were similar to those resulting from static conditions. These values will be considered in the definition of the size and shape of the areas of interest in the virtual environment. PMID
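
    For context, two metrics commonly reported for remote eye-trackers are accuracy (mean offset of gaze samples from a known stimulus position) and precision (RMS of sample-to-sample dispersion). The sketch below uses these generic definitions, which are not necessarily the exact ones used in the study, and hypothetical gaze samples in screen millimetres.

    ```python
    # Minimal sketch: accuracy as mean offset from a known target and precision as
    # RMS sample-to-sample dispersion. Generic definitions; gaze data hypothetical.
    import numpy as np

    def gaze_accuracy_precision(gaze_xy, target_xy):
        g = np.asarray(gaze_xy, float)
        t = np.asarray(target_xy, float)
        accuracy = np.linalg.norm(g - t, axis=1).mean()      # mean offset from target
        steps = np.linalg.norm(np.diff(g, axis=0), axis=1)   # sample-to-sample distances
        precision = np.sqrt((steps ** 2).mean())             # RMS dispersion
        return accuracy, precision

    gaze = [(651, 4), (655, -2), (648, 6), (653, 1), (650, -3)]  # mm on screen
    acc, prec = gaze_accuracy_precision(gaze, target_xy=(650, 0))
    print(f"accuracy = {acc:.1f} mm, precision = {prec:.1f} mm")
    ```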

  10. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensors-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374
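
    The correction idea described above can be written down in a few lines: fit reference (MBS) values against raw Kinect (MMC) values and use the fitted line to correct new MMC measurements. The numbers below are hypothetical, not the study's regression equations.

    ```python
    # Minimal sketch of a regression correction: MBS (reference) vs. raw MMC values,
    # then the fitted line corrects new MMC measurements. All numbers hypothetical.
    import numpy as np

    mmc = np.array([81.0, 84.5, 78.2, 90.1, 86.3, 79.9])  # Kinect estimates (cm)
    mbs = np.array([83.2, 86.9, 80.6, 92.8, 88.7, 82.1])  # stereophotogrammetry (cm)

    slope, intercept = np.polyfit(mmc, mbs, deg=1)         # least-squares line

    def correct_mmc(value):
        """Apply the regression equation to a raw MMC measurement."""
        return slope * value + intercept

    print(f"corrected = {correct_mmc(85.0):.1f} cm "
          f"(slope {slope:.3f}, intercept {intercept:.2f})")
    ```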

  11. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 is using HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  12. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 is using HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC
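
    The quantity being retrieved, the column-averaged dry-air mole fraction, is by definition the total CH4 column divided by the dry-air column (total air minus water vapour). The sketch below illustrates only that definition with hypothetical round-number columns; the actual retrieval additionally uses the NCEP pressure-temperature-humidity profiles described above.

    ```python
    # Minimal sketch of the dry-air mole fraction definition behind XCH4.
    # Column amounts (molecules per cm^2) are hypothetical round numbers.
    ch4_column = 3.8e19    # retrieved CH4 column
    air_column = 2.15e25   # total air column (from surface pressure)
    h2o_column = 5.0e22    # water-vapour column

    xch4 = ch4_column / (air_column - h2o_column)
    print(f"XCH4 = {xch4 * 1e9:.0f} ppb")
    ```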

  13. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) the concentrations of the analyte element (Hg) and of the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN nor with CVG was any evidence of mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation that affected the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
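
    Sample-standard bracketing (SSB), mentioned above as one of the mass bias correction approaches, can be illustrated in a few lines: the sample's measured ratio is corrected by the mean bias of the standard runs measured immediately before and after it, and the result is usually reported in delta notation relative to the standard. All ratios below are hypothetical.

    ```python
    # Minimal sketch of sample-standard bracketing (SSB) mass-bias correction with
    # delta notation. The 202Hg/198Hg ratios below are hypothetical.
    r_true_std   = 2.9630   # accepted ratio of the bracketing standard
    r_std_before = 2.9215   # measured standard, run before the sample
    r_std_after  = 2.9221   # measured standard, run after the sample
    r_sample     = 2.9183   # measured sample ratio

    bias = ((r_std_before + r_std_after) / 2) / r_true_std   # instrumental mass bias
    r_sample_corr = r_sample / bias
    delta_permil = (r_sample_corr / r_true_std - 1) * 1000.0

    print(f"corrected ratio = {r_sample_corr:.4f}, delta = {delta_permil:+.2f} per mil")
    ```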

  14. Miniaturized orb-weaving spiders: behavioural precision is not limited by small size.

    PubMed

    Eberhard, William G

    2007-09-01

    The special problems confronted by very small animals in nervous system design that may impose limitations on their behaviour and evolution are reviewed. Previous attempts to test for such behavioural limitations have suffered from lack of detail in behavioural observations of tiny species and unsatisfactory measurements of their behavioural capacities. This study presents partial solutions to both problems. The orb-web construction behaviour of spiders provided data on the comparative behavioural capabilities of tiny animals in heretofore unparalleled detail; species ranged about five orders of magnitude in weight, from approximately 50-100mg down to some of the smallest spiders known (less than 0.005mg), whose small size is a derived trait. Previous attempts to quantify the 'complexity' of behaviour were abandoned in favour of using comparisons of behavioural imprecision in performing the same task. The prediction of the size limitation hypothesis that very small spiders would have a reduced ability to repeat one particular behaviour pattern precisely was not confirmed. The anatomical and physiological mechanisms by which these tiny animals achieve this precision and the possibility that they are more limited in the performance of higher-order behaviour patterns await further investigation.

  15. Bound on range precision for shot-noise limited ladar systems.

    PubMed

    Johnson, Steven; Cain, Stephen

    2008-10-01

    The precision of ladar range measurements is limited by noise. The fundamental source of noise in a laser signal is the random time between photon arrivals. This phenomenon, called shot noise, is modeled as a Poisson random process. Other noise sources in the system are also modeled as Poisson processes. Under the Poisson-noise assumption, the Cramer-Rao lower bound (CRLB) on range measurements is derived. This bound on the variance of any unbiased range estimate is greater than the CRLB derived by assuming Gaussian noise of equal variance. Finally, it is shown that, for a ladar capable of dividing a fixed amount of energy into multiple laser pulses, the range precision is maximized when all energy is transmitted in a single pulse.
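
    The bound referred to above follows from the Fisher information of independent Poisson-distributed counts. The expression below is the general form of that result (a sketch of the standard derivation, not the paper's specific signal model), with n_i the counts in range bin i and lambda_i(tau) their means as a function of the range delay tau.

    ```latex
    % Cramer-Rao lower bound for a parameter \tau estimated from independent
    % Poisson counts n_i with means \lambda_i(\tau): Var(\hat{\tau}) >= 1/I(\tau).
    \mathrm{Var}(\hat{\tau}) \;\ge\; I(\tau)^{-1},
    \qquad
    I(\tau) \;=\; \sum_i \frac{1}{\lambda_i(\tau)}
                  \left(\frac{\partial \lambda_i(\tau)}{\partial \tau}\right)^{2}.
    ```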

  16. Exploring the Accuracy Limits of Local Pair Natural Orbital Coupled-Cluster Theory.

    PubMed

    Liakos, Dimitrios G; Sparta, Manuel; Kesharwani, Manoj K; Martin, Jan M L; Neese, Frank

    2015-04-14

    The domain based local pair natural orbital coupled cluster method with single-, double-, and perturbative triple excitations (DLPNO–CCSD(T)) is an efficient quantum chemical method that allows for coupled cluster calculations on molecules with hundreds of atoms. Because coupled-cluster theory is the method of choice if high-accuracy is needed, DLPNO–CCSD(T) is very promising for large-scale chemical application. However, the various approximations that have to be introduced in order to reach near linear scaling also introduce limited deviations from the canonical results. In the present work, we investigate how far the accuracy of the DLPNO–CCSD(T) method can be pushed for chemical applications. We also address the question at which additional computational cost improvements, relative to the previously established default scheme, come. To answer these questions, a series of benchmark sets covering a broad range of quantum chemical applications including reaction energies, hydrogen bonds, and other noncovalent interactions, conformer energies, and a prototype organometallic problem were selected. An accuracy of 1 kcal/mol or better can readily be obtained for all data sets using the default truncation scheme, which corresponds to the stated goal of the original implementation. Tightening of the three thresholds that control DLPNO leads to mean absolute errors and standard deviations from the canonical results of less than 0.25 kcal/mol (<1 kJ/mol). The price one has then to pay is an increased computational time by a factor close to 3. The applicability of the method is shown to be independent of the nature of the reaction. On the basis of the careful analysis of the results, three different sets of truncation thresholds (termed “LoosePNO”, “NormalPNO”, and “TightPNO”) have been chosen for “black box” use of DLPNO–CCSD(T). This will allow users of the method to optimally balance performance and accuracy. PMID:26889511

  17. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision to better than 0.1% on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ashbed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is the relatively coarse spatial resolution compared to in situ techniques, and thus averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. The same grains are then removed from grain-mount, fragmented, and

  18. Phase noise in pulsed Doppler lidar and limitations on achievable single-shot velocity accuracy

    NASA Technical Reports Server (NTRS)

    Mcnicholl, P.; Alejandro, S.

    1992-01-01

    The smaller sampling volumes afforded by Doppler lidars compared to radars allow for spatial resolutions at and below some shear and turbulence wind structure scale sizes. This has brought new emphasis on achieving the optimum product of wind velocity and range resolutions. Several recent studies have considered the effects of amplitude noise, reduction algorithms, and possible hardware related signal artifacts on obtainable velocity accuracy. We discuss here the limitation on this accuracy resulting from the incoherent nature and finite temporal extent of backscatter from aerosols. For a lidar return from a hard (or slab) target, the phase of the intermediate frequency (IF) signal is random and the total return energy fluctuates from shot to shot due to speckle; however, the offset from the transmitted frequency is determinable with an accuracy subject only to instrumental effects and the signal to noise ratio (SNR), the noise being determined by the LO power in the shot noise limited regime. This is not the case for a return from a medium extending over a range on the order of or greater than the spatial extent of the transmitted pulse, such as from atmospheric aerosols. In this case, the phase of the IF signal will exhibit a temporal random-walk-like behavior. It will be uncorrelated over times greater than the pulse duration as the transmitted pulse samples non-overlapping volumes of scattering centers. Frequency analysis of the IF signal in a window similar to the transmitted pulse envelope will therefore show shot-to-shot frequency deviations on the order of the inverse pulse duration reflecting the random phase rate variations. Like speckle, these deviations arise from the incoherent nature of the scattering process and diminish if the IF signal is averaged over times greater than a single range resolution cell (here the pulse duration). Apart from limiting the high SNR performance of a Doppler lidar, this shot-to-shot variance in velocity estimates has a

  19. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  20. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…
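
    The MDES quantity assessed in this article has a standard closed-form approximation for a two-arm, two-level cluster-randomized design. The sketch below uses the common Bloom-style formula with equal allocation and no covariates; the multiplier and the design parameters are illustrative assumptions, not values from the reviewed studies.

    ```python
    # Minimal sketch: minimum detectable effect size (MDES) for a two-arm cluster-
    # randomized trial, equal allocation, no covariates (Bloom-style approximation).
    # J = total clusters, n = individuals per cluster, rho = intraclass correlation.
    from scipy.stats import norm

    def mdes(J, n, rho, alpha=0.05, power=0.80):
        multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80
        return multiplier * (4 * rho / J + 4 * (1 - rho) / (J * n)) ** 0.5

    print(round(mdes(J=40, n=20, rho=0.15), 3))   # illustrative design
    ```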

  1. Precision of three-dimensional atomic scale measurements from HRTEM images: what are the limits?

    PubMed

    Wang, A; Van Aert, S; Goos, P; Van Dyck, D

    2012-03-01

    In this paper, we investigate to what extent high resolution transmission electron microscopy images can be used to measure the mass, in terms of thickness, and surface profile, corresponding to the defocus offset, of an object at the atomic scale. Therefore, we derive an expression for the statistical precision with which these object parameters can be estimated in a quantitative analysis. Evaluating this expression as a function of the microscope settings allows us to derive the optimal microscope design. Acquiring three-dimensional structure information in terms of thickness turns out to be much more difficult than obtaining two-dimensional information on the projected atom column positions. The attainable precision is found to be more strongly affected by processes influencing the image contrast, such as phonon scattering, than by the specific choice of microscope settings. For a realistic incident electron dose, it is expected that atom columns can be distinguished with single atom sensitivity up to a thickness of the order of the extinction distance. A comparable thickness limit is determined to measure surface steps of one atom. An increase of the electron dose shifts the limiting thickness upward due to an increase in the signal-to-noise ratio.

  2. Inhomogeneity in films: limitation of the accuracy of optical monitoring of thin films.

    PubMed

    Borgogno, J P; Bousquet, P; Flory, F; Lazarides, B; Pelletier, E; Roche, P

    1981-01-01

    With present-day refinements, thin film multilayers can be designed theoretically to meet virtually any reasonable filtering requirements. Often, when the optical properties are specified over very wide spectral regions the thicknesses of the various layers are not related in any simple way. The manufacture of such multilayers presents many difficulties. The tolerances on layer thickness and refractive indices in some designs are often very narrow. We have developed an optical method for the accurate control of layer thickness that involves the measurement of transmittance over a wide spectral region (400-1000 nm). This measurement is performed continuously during deposition by a rapid scanning monochromator. The accuracy of such a system depends on a precise knowledge of the indices of refraction that are produced during the multilayer deposition. In addition, the structure of many optical thin films used for hard coatings departs considerably from the simple model that is traditionally used in optical coating designs. In the method we have developed to compensate for such discrepancies, optical inhomogeneity is included by assuming a linear refractive-index profile, determined by analyzing experimental results. These results are in agreement with other studies of structure.

  3. Cognitive Reserve in Alzheimer's Dementia: Diagnostic Accuracy of a Testing-the-Limits Paradigm.

    PubMed

    Küster, Olivia C; Kösel, Jonas; Spohn, Stephanie; Schurig, Niklas; Tumani, Hayrettin; von Arnim, Christine A F; Uttner, Ingo

    2016-03-29

    Individuals with higher cognitive reserve are more able to cope with pathological brain alterations, potentially due to the application of more efficient cognitive strategies. The extent to which an individual's cognitive performance can be increased by advantageous conditions differs substantially between patients with Alzheimer's dementia (AD) and healthy older adults and can be assessed with the Testing-the-Limits (TtL) approach. Thus, TtL has been proposed as a tool for the early diagnosis of AD. Here, we report the diagnostic accuracy of a memory TtL paradigm to discriminate between AD patients and controls. The TtL paradigm was administered to 57 patients with clinically diagnosed AD and 94 controls. It consisted of a pre-test condition, representing baseline cognitive performance, the presentation of an encoding strategy, and two subsequent post-test conditions, representing learning potential. Receiver operating characteristic (ROC) curves were analyzed for each condition in order to receive optimal cutoff points along with their sensitivity and specificity and to compare the diagnostic accuracy of the conditions. Differentiation between AD patients and controls, indicated by the area under the ROC curve, increased significantly for the TtL post-test and total error scores compared to the pre-test score. The combined error score in the two post-tests could differentiate between AD patients and controls with a sensitivity of 0.93 and a specificity of 0.80. The presented approach can be carried out in 25 minutes and thus constitutes a time- and cost-effective way to diagnose AD with high accuracy. PMID:27031485
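
    The ROC-based cutoff selection described above can be sketched with standard tooling: compute the ROC curve and AUC, then pick the threshold that maximizes the Youden index (sensitivity + specificity - 1). Scores and labels below are hypothetical, and the study's own cutoff-selection criteria may differ.

    ```python
    # Minimal sketch: ROC curve, AUC, and a Youden-index cutoff with sensitivity
    # and specificity. Labels (1 = AD patient) and scores are hypothetical.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    y_true  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
    y_score = np.array([9, 4, 8, 6, 8, 3, 2, 5, 1, 7])   # e.g. TtL error score

    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    auc = roc_auc_score(y_true, y_score)
    best = np.argmax(tpr - fpr)                           # Youden index J
    print(f"AUC = {auc:.2f}; cutoff = {thresholds[best]}, "
          f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
    ```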

  4. Flame acceleration due to wall friction: Accuracy and intrinsic limitations of the formulations

    NASA Astrophysics Data System (ADS)

    Demirgok, Berk; Sezer, Hayri; Akkerman, V'yacheslav

    2015-11-01

    The analytical formulations on the premixed flame acceleration induced by wall friction in two-dimensional (2D) channels [Bychkov et al., Phys. Rev. E 72 (2005) 046307] and cylindrical tubes [Akkerman et al., Combust. Flame 145 (2006) 206] are revisited. Specifically, pipes with one end closed are considered, with a flame front propagating from the closed pipe end to the open one. The original studies provide the analytical formulas for the basic flame and fluid characteristics such as the flame acceleration rate, the flame shape and its propagation speed, as well as the flame-generated flow velocity profile. In the present work, the accuracy of these approaches is verified, computationally, and the intrinsic limitations and validity domains of the formulations are identified. Specifically, the error diagrams are presented to demonstrate how the accuracy of the formulations depends on the thermal expansion in the combustion process and the Reynolds number associated with the flame propagation. It is shown that the 2D theory is accurate enough for a wide range of parameters. In contrast, the zeroth-order approximation for the cylindrical configuration appeared to be quite inaccurate and had to be revisited. It is subsequently demonstrated that the first-order approximation for the cylindrical geometry is very accurate for realistically large thermal expansions and Reynolds numbers. Consequently, unlike the zeroth-order approach, the first-order formulation can constitute a backbone for the comprehensive theory of the flame acceleration and detonation initiation in cylindrical tubes. Cumulatively, the accuracy of the formulations deteriorates with the reduction of the Reynolds number and thermal expansion.

  5. Monthly Strontium/Calcium oscillations in symbiotic coral aragonite: Biological effects limiting the precision of the paleotemperature proxy

    USGS Publications Warehouse

    Meibom, A.; Stage, M.; Wooden, J.; Constantz, B.R.; Dunbar, R.B.; Owen, A.; Grumet, N.; Bacon, C.R.; Chamberlain, C.P.

    2003-01-01

    In thermodynamic equilibrium with sea water, the Sr/Ca ratio of aragonite varies predictably with temperature, and the Sr/Ca ratio in corals has thus become a frequently used proxy for past Sea Surface Temperature (SST). However, biological effects can offset the Sr/Ca ratio from its equilibrium value. We report high spatial resolution ion microprobe analyses of well defined skeletal elements in the reef-building coral Porites lutea that reveal distinct monthly oscillations in the Sr/Ca ratio, with an amplitude in excess of ten percent. The extreme Sr/Ca variations, which we propose result from metabolic changes synchronous with the lunar cycle, introduce variability in Sr/Ca measurements based on conventional sampling techniques well beyond the analytical precision. These variations can limit the accuracy of Sr/Ca paleothermometry by conventional sampling techniques to about 2 °C. Our results may help explain the notorious difficulties involved in obtaining an accurate and consistent calibration of the Sr/Ca vs. SST relationship.

  6. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On spacecraft (GFO) primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit through degree 70 to 13.7 and 10.0 mm. A macromodel was developed to model the nonconservative forces and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite-ranging data, altimeter crossover data, and Doppler data are used to compute both daily medium precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDR s). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  7. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated, and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.

  8. Cluster model for the ionic product of water: accuracy and limitations of common density functional methods.

    PubMed

    Svozil, Daniel; Jungwirth, Pavel

    2006-07-27

    In the present study, the performance of six popular density functionals (B3LYP, PBE0, BLYP, BP86, PBE, and SVWN) for the description of the autoionization process in the water octamer was studied. As a benchmark, MP2 energies with complete basis set limit extrapolation and a CCSD(T) correction were used. At this level, the autoionized structure lies 28.5 kcal.mol(-1) above the neutral water octamer. Accounting for zero-point energy lowers this value by 3.0 kcal.mol(-1). The transition state of the proton transfer reaction, lying only 0.7 kcal.mol(-1) above the energy of the ionized system, was identified at the MP2/aug-cc-pVDZ level of theory. Different density functionals describe the reactant and product with varying accuracy, while they all fail to characterize the transition state. We find improved results with hybrid functionals compared to the gradient-corrected ones. In particular, B3LYP describes the reaction energetics within 2.5 kcal.mol(-1) of the benchmark value. Therefore, this functional is suggested as the preferred choice both for Car-Parrinello molecular dynamics and for quantum mechanics/molecular mechanics (QM/MM) simulations of the autoionization of water.

  9. Improved Accuracy and Precision in LA-ICP-MS U-Th/Pb Dating of Zircon through the Reduction of Crystallinity Related Bias

    NASA Astrophysics Data System (ADS)

    Matthews, W.; McDonald, A.; Hamilton, B.; Guest, B.

    2015-12-01

    The accuracy of zircon U-Th/Pb ages generated by LA-ICP-MS is limited by systematic bias resulting from differences in crystallinity of the primary reference and that of the unknowns being analyzed. In general, the use of a highly crystalline primary reference will tend to bias analyses of materials of lesser crystallinity toward older ages. When dating igneous rocks, bias can be minimized by matching the crystallinity of the primary reference to that of the unknowns. However, the crystallinity of the unknowns is often not well constrained prior to ablation, as it is a function of U and Th concentration, crystallization age, and thermal history. Likewise, selecting an appropriate primary reference is impossible when dating detrital rocks where zircons with differing ages, protoliths, and thermal histories are analyzed in the same session. We investigate the causes of systematic bias using Raman spectroscopy and measurements of the ablated pit geometry. The crystallinity of five zircon reference materials with ages between 28.2 Ma and 2674 Ma was estimated using Raman spectroscopy. Zircon references varied from being highly crystalline to highly metamict, with individual reference materials plotting as distinct clusters in peak wavelength versus Full-Width Half-Maximum (FWHM) space. A strong positive correlation (R2=0.69) was found between the FWHM for the band at ~1000 cm-1 in the Raman spectrum of the zircon and its ablation rate, suggesting the degree of crystallinity is a primary control on ablation rate in zircons. A moderate positive correlation (R2=0.37) was found between ablation rate and the difference between the age determined by LA-ICP-MS and the accepted ID-TIMS age (ΔAge). We use the measured, intra-sessional relationship between ablation rate and ΔAge of secondary references to reduce systematic bias. Rapid, high-precision measurement of ablated pit geometries using an optical profilometer and custom MatLab algorithm facilitates the implementation

  10. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  11. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  12. New Precision Limit on the Strange Vector Form Factors of the Proton

    SciTech Connect

    Ahmed, Z.; Allada, K.; Aniol, K. A.; Armstrong, D. S.; Arrington, J.; Baturin, P.; Bellini, V.; Benesch, J.; Beminiwattha, R.; Benmokhtar, F.; Canan, M.; Camsonne, A.; Cates, G. D.; Chen, J. -P.; Chudakov, E.; Cisbani, E.; Dalton, M. M.; de Jager, C. W.; De Leo, R.; Deconinck, W.; Decowski, P.; Deng, X.; Deur, A.; Dutta, C.; Franklin, G. B.; Friend, M.; Frullani, S.; Garibaldi, F.; Giusa, A.; Glamazdin, A.; Golge, S.; Grimm, K.; Hansen, O.; Higinbotham, D. W.; Holmes, R.; Holmstrom, T.; Huang, J.; Huang, M.; Hyde, C. E.; Jen, C. M.; Jin, G.; Jones, D.; Kang, H.; King, P.; Kowalski, S.; Kumar, K. S.; Lee, J. H.; LeRose, J. J.; Liyanage, N.; Long, E.; McNulty, D.; Margaziotis, D.; Meddi, F.; Meekins, D. G.; Mercado, L.; Meziani, Z. -E.; Michaels, R.; Muñoz-Camacho, C.; Mihovilovic, M.; Muangma, N.; Myers, K. E.; Nanda, S.; Narayan, A.; Nelyubin, V.; Nuruzzaman, None; Oh, Y.; Pan, K.; Parno, D.; Paschke, K. D.; Phillips, S. K.; Qian, X.; Qiang, Y.; Quinn, B.; Rakhman, A.; Reimer, P. E.; Rider, K.; Riordan, S.; Roche, J.; Rubin, J.; Russo, G.; Saenboonruang, K.; Saha, A.; Sawatzky, B.; Silwal, R.; Sirca, S.; Souder, P. A.; Sperduto, M.; Subedi, R.; Suleiman, R.; Sulkosky, V.; Sutera, C. M.; Tobias, W. A.; Urciuoli, G. M.; Waidyawansa, B.; Wang, D.; Wexler, J.; Wilson, R.; Wojtsekhowski, B.; Zhan, X.; Yan, X.; Yao, H.; Ye, L.; Zhao, B.; Zheng, X.

    2012-03-01

    The parity-violating cross-section asymmetry in the elastic scattering of polarized electrons from unpolarized protons has been measured at a four-momentum transfer squared Q2 = 0.624 GeV2 and beam energy Eb = 3.48 GeV to be APV = -23.80 ± 0.78 (stat) ± 0.36 (syst) parts per million. This result is consistent with zero contribution of strange quarks to the combination of electric and magnetic form factors GEs + 0.517 GMs = 0.003 ± 0.010 (stat) ± 0.004 (syst) ± 0.009 (ff), where the third error is due to the limits of precision on the electromagnetic form factors and radiative corrections. With this measurement, the world data on strange contributions to nucleon form factors are seen to be consistent with zero and not more than a few percent of the proton form factors.

  13. New Precision Limit on the Strange Vector Form Factors of the Proton

    DOE PAGES

    Ahmed, Z.; Allada, K.; Aniol, K. A.; Armstrong, D. S.; Arrington, J.; Baturin, P.; Bellini, V.; Benesch, J.; Beminiwattha, R.; Benmokhtar, F.; et al

    2012-03-01

    The parity-violating cross-section asymmetry in the elastic scattering of polarized electrons from unpolarized protons has been measured at a four-momentum transfer squared Q2 = 0.624 GeV2 and beam energy Eb = 3.48 GeV to be APV = -23.80 ± 0.78 (stat) ± 0.36 (syst) parts per million. This result is consistent with zero contribution of strange quarks to the combination of electric and magnetic form factors GEs + 0.517 GMs = 0.003 ± 0.010 (stat) ± 0.004 (syst) ± 0.009 (ff), where the third error is due to the limits of precision on the electromagnetic form factors and radiative corrections. With this measurement, the world data on strange contributions to nucleon form factors are seen to be consistent with zero and not more than a few percent of the proton form factors.
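
    The quoted result carries separate statistical, systematic, and form-factor uncertainties. When such contributions are treated as independent, a common convention is to combine them in quadrature, as the short sketch below does for the quoted numbers (the collaboration's own error treatment may differ in detail).

    ```python
    # Minimal sketch: combining the quoted uncertainties on GEs + 0.517 GMs in
    # quadrature, assuming they are independent.
    value = 0.003
    stat, syst, ff = 0.010, 0.004, 0.009
    total = (stat**2 + syst**2 + ff**2) ** 0.5
    print(f"{value} +/- {total:.3f}")   # approximately 0.003 +/- 0.014
    ```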

  14. Precision mass measurements of neutron-rich nuclei, and limitations on the r-process environment

    NASA Astrophysics Data System (ADS)

    Van Schelt, Jonathon A.

    2012-05-01

    The masses of 65 neutron-rich nuclides and 6 metastable states from Z = 49 to 64 were measured at a typical precision of δm/m = 10-7 using the Canadian Penning Trap mass spectrometer at Argonne National Laboratory. The measurements are on fission fragments from 252Cf spontaneous fission sources, including measurements made at the new Californium Rare Isotope Breeder Upgrade facility (CARIBU) and with an earlier source. The measured nuclides lie on or approach the predicted path of the astrophysical r process. Where overlap exists, this data set is largely consistent with previous measurements from Penning traps, storage rings, and reaction energetics, but large systematic deviations are apparent in β-endpoint measurements. Simulations of the r process were undertaken to determine how quickly material can pass through the studied elements for a variety of conditions, placing limits on what temperatures and densities allow passage on a desired timescale. The new masses produce manifold differences in effective lifetime compared to simulations performed with some model masses.

  15. Spatial Structure of Above-Ground Biomass Limits Accuracy of Carbon Mapping in Rainforest but Large Scale Forest Inventories Can Help to Overcome

    PubMed Central

    Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre

    2015-01-01

    Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, for a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. Accuracy of the best model including spatial effects was 90 Mg.ha-1 at plot scale but coarse graining up to 2-km resolution allowed mapping AGB with accuracy lower than 50 Mg.ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combined weak autocorrelation and weak environmental effect limit AGB map accuracy in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate “wall-to-wall” remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution. PMID

  16. Spatial Structure of Above-Ground Biomass Limits Accuracy of Carbon Mapping in Rainforest but Large Scale Forest Inventories Can Help to Overcome.

    PubMed

    Guitet, Stéphane; Hérault, Bruno; Molto, Quentin; Brunaux, Olivier; Couteron, Pierre

    2015-01-01

    Precise mapping of above-ground biomass (AGB) is a major challenge for the success of REDD+ processes in tropical rainforest. The usual mapping methods are based on two hypotheses: a large and long-ranged spatial autocorrelation and a strong environmental influence at the regional scale. However, there are no studies of the spatial structure of AGB at the landscape scale to support these assumptions. We studied spatial variation in AGB at various scales using two large forest inventories conducted in French Guiana. The dataset comprised 2507 plots (0.4 to 0.5 ha) of undisturbed rainforest distributed over the whole region. After checking the uncertainties of estimates obtained from these data, we used half of the dataset to develop explicit predictive models including spatial and environmental effects and tested the accuracy of the resulting maps according to their resolution using the rest of the data. Forest inventories provided accurate AGB estimates at the plot scale, for a mean of 325 Mg.ha-1. They revealed high local variability combined with a weak autocorrelation up to distances of no more than 10 km. Environmental variables accounted for a minor part of spatial variation. Accuracy of the best model including spatial effects was 90 Mg.ha-1 at plot scale but coarse graining up to 2-km resolution allowed mapping AGB with accuracy lower than 50 Mg.ha-1. No agreement was found with available pan-tropical reference maps at any resolution. We concluded that the combined weak autocorrelation and weak environmental effect limit AGB map accuracy in rainforest, and that a trade-off has to be found between spatial resolution and effective accuracy until adequate "wall-to-wall" remote sensing signals provide reliable AGB predictions. In the meantime, using large forest inventories with low sampling rate (<0.5%) may be an efficient way to increase the global coverage of AGB maps with acceptable accuracy at kilometric resolution.

  17. Effect of predictor traits on accuracy of genomic breeding values for feed intake based on a limited cow reference population.

    PubMed

    Pszczola, M; Veerkamp, R F; de Haas, Y; Wall, E; Strabel, T; Calus, M P L

    2013-11-01

    The genomic breeding value accuracy of scarcely recorded traits is low because of the limited number of phenotypic observations. One solution to increase the breeding value accuracy is to use predictor traits. This study investigated the impact of recording additional phenotypic observations for predictor traits on reference and evaluated animals on the genomic breeding value accuracy for a scarcely recorded trait. The scarcely recorded trait was dry matter intake (DMI, n = 869) and the predictor traits were fat-protein-corrected milk (FPCM, n = 1520) and live weight (LW, n = 1309). All phenotyped animals were genotyped and originated from research farms in Ireland, the United Kingdom and the Netherlands. Multi-trait REML was used to simultaneously estimate variance components and breeding values for DMI using available predictors. In addition, analyses using only pedigree relationships were performed. Breeding value accuracy was assessed through cross-validation (CV) and prediction error variance (PEV). CV groups (n = 7) were defined by splitting animals across genetic lines and management groups within country. With no additional traits recorded for the evaluated animals, both CV- and PEV-based accuracies for DMI were substantially higher for genomic than for pedigree analyses (CV: max. 0.26 for pedigree and 0.33 for genomic analyses; PEV: max. 0.45 and 0.52, respectively). With additional traits available, the differences between pedigree and genomic accuracies diminished. With additional recording for FPCM, pedigree accuracies increased from 0.26 to 0.47 for CV and from 0.45 to 0.48 for PEV. Genomic accuracies increased from 0.33 to 0.50 for CV and from 0.52 to 0.53 for PEV. With additional recording for LW instead of FPCM, pedigree accuracies increased to 0.54 for CV and to 0.61 for PEV. Genomic accuracies increased to 0.57 for CV and to 0.60 for PEV. With both FPCM and LW available for evaluated animals, accuracy was highest (0.62 for CV and 0.61 for PEV in

  18. Limits of Astrometric and Photometric Precision on KBOs using Small Telescopes

    NASA Astrophysics Data System (ADS)

    Markatou, Evangelia Anna; Wang, Amanda; Kosiarek, Molly; Dunham, Emilie

    2014-06-01

    We conducted photometric and astrometric measurements on Haumea and Makemake, two Kuiper Belt Objects which are typically observed by 1-meter class telescopes or larger, with the goal of testing the limitations of small telescopes. Here we present our measurements of Haumea and Makemake obtained between June 5th, 2013 and July 31st, 2013 with the 14-inch Wallace Astrophysical Observatory (WAO) telescopes. Using photometry, we determined that Haumea and Makemake have R-band magnitudes of 17.225±0.347 and 16.850±0.107 respectively. These values agree with the previous R-band measurements of 17.240±0.030 (Lacerda et al. 2008) and 16.802±0.041 (Rabinowitz et al. 2007) for Haumea and Makemake respectively. We obtained rotational light curves for Haumea and Makemake over eight separate nights, again by analysing the photometry observations. Astrometry yielded residuals of -0.039±0.025 arcseconds in RA and 0.234±0.017 arcseconds in DEC for Makemake, and 0.295±0.077 arcseconds in RA and 0.184±0.0554 arcseconds in DEC for Haumea. These results, when submitted to the minor planet center, are able to increase the accuracy of the JPL ephemeris. Additionally, we calculated that observing Haumea with two 14-inch telescopes and Makemake with four 14-inch telescopes, in ideal conditions, could resolve their periodicity. We conclude that with improved observing techniques and modern CCD cameras, it is possible to utilize small telescopes in universities around the world to observe large KBOs and obtain accurate photometric and astrometric results.

  19. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of the intensity ratios of equivalent intensities obtained in different measurements of 3–4% can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  20. Precise, Self-Limited Epitaxy of Ultrathin Organic Semiconductors and Heterojunctions Tailored by van der Waals Interactions.

    PubMed

    Wu, Bing; Zhao, Yinghe; Nan, Haiyan; Yang, Ziyi; Zhang, Yuhan; Zhao, Huijuan; He, Daowei; Jiang, Zonglin; Liu, Xiaolong; Li, Yun; Shi, Yi; Ni, Zhenhua; Wang, Jinlan; Xu, Jian-Bin; Wang, Xinran

    2016-06-01

    Precise assembly of semiconductor heterojunctions is the key to realize many optoelectronic devices. By exploiting the strong and tunable van der Waals (vdW) forces between graphene and organic small molecules, we demonstrate layer-by-layer epitaxy of ultrathin organic semiconductors and heterostructures with unprecedented precision with well-defined number of layers and self-limited characteristics. We further demonstrate organic p-n heterojunctions with molecularly flat interface, which exhibit excellent rectifying behavior and photovoltaic responses. The self-limited organic molecular beam epitaxy (SLOMBE) is generically applicable for many layered small-molecule semiconductors and may lead to advanced organic optoelectronic devices beyond bulk heterojunctions.

  1. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
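
    The Bland-Altman analysis mentioned above reduces to a mean bias and 95% limits of agreement (bias ± 1.96 SD of the paired differences). The sketch below applies that calculation to hypothetical paired distances from plaster and polyurethane models.

    ```python
    # Minimal sketch of a Bland-Altman analysis: mean bias and 95% limits of
    # agreement for paired measurements. Distances (mm) are hypothetical.
    import numpy as np

    plaster = np.array([35.10, 42.25, 28.40, 51.05, 33.80, 46.60])
    put     = np.array([35.32, 42.05, 28.71, 51.30, 34.02, 46.49])

    diff = put - plaster
    bias = diff.mean()
    sd = diff.std(ddof=1)
    print(f"bias = {bias:.2f} mm, limits of agreement = "
          f"[{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}] mm")
    ```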

  2. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r (2) = 0.90-0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
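
    A limited sampling strategy of the kind reviewed above is typically a multiple linear regression of the full AUC12 on a few concentrations (for example pre-dose, 1 h, and 3 h), judged by the percentage bias and imprecision of its predictions. The sketch below illustrates this with hypothetical concentrations and AUCs; published LSS equations and validation metrics differ.

    ```python
    # Minimal sketch of a limited sampling strategy: regress full MPA AUC12 on three
    # concentrations (pre-dose, 1 h, 3 h) and report percentage bias and imprecision.
    # Concentrations (mg/L) and AUCs (mg*h/L) are hypothetical.
    import numpy as np

    conc = np.array([[2.1, 10.5, 4.2],
                     [1.5,  8.0, 3.1],
                     [3.0, 12.2, 5.0],
                     [1.0,  6.4, 2.5],
                     [2.6,  9.8, 4.6],
                     [1.8,  7.5, 3.4]])
    auc12 = np.array([48.0, 35.5, 58.2, 27.9, 46.8, 33.6])

    X = np.column_stack([np.ones(len(auc12)), conc])     # add an intercept column
    coef, *_ = np.linalg.lstsq(X, auc12, rcond=None)     # multiple linear regression
    pred = X @ coef

    pe = (pred - auc12) / auc12 * 100.0                  # percentage prediction error
    print(f"bias = {pe.mean():.1f}%, imprecision (RMSE%) = {np.sqrt((pe**2).mean()):.1f}%")
    ```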

  3. Clock accuracy and precision evolve as a consequence of selection for adult emergence in a narrow window of time in fruit flies Drosophila melanogaster.

    PubMed

    Kannan, Nisha N; Vaze, Koustubh M; Sharma, Vijay Kumar

    2012-10-15

    Although circadian clocks are believed to have evolved under the action of periodic selection pressures (selection on phasing) present in the geophysical environment, there is very little rigorous and systematic empirical evidence to support this. In the present study, we examined the effect of selection for adult emergence in a narrow window of time on the circadian rhythms of fruit flies Drosophila melanogaster. Selection was imposed in every generation by choosing flies that emerged during a 1 h window of time close to the emergence peak of baseline/control flies under 12 h:12 h light:dark cycles. To study the effect of selection on circadian clocks we estimated several quantifiable features that reflect inter- and intra-individual variance in adult emergence and locomotor activity rhythms. The results showed that with increasing generations, incidence of adult emergence and activity of adult flies during the 1 h selection window increased gradually in the selected populations. Flies from the selected populations were more homogenous in their clock period, were more coherent in their phase of entrainment, and displayed enhanced accuracy and precision in their emergence and activity rhythms compared with controls. These results thus suggest that circadian clocks in D. melanogaster evolve enhanced accuracy and precision when subjected to selection for emergence in a narrow window of time.

  4. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset [Thompson et al., JGR, 108, 8238, 2003], we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecisions and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  5. Occupational exposure decisions: can limited data interpretation training help improve accuracy?

    PubMed

    Logan, Perry; Ramachandran, Gurumurthy; Mulhausen, John; Hewett, Paul

    2009-06-01

    Accurate exposure assessments are critical for ensuring that potentially hazardous exposures are properly identified and controlled. The availability and accuracy of exposure assessments can determine whether resources are appropriately allocated to engineering and administrative controls, medical surveillance, personal protective equipment and other programs designed to protect workers. A desktop study was performed using videos, task information and sampling data to evaluate the accuracy and potential bias of participants' exposure judgments. Desktop exposure judgments were obtained from occupational hygienists for material handling jobs with small air sampling data sets (0-8 samples) and without the aid of computers. In addition, data interpretation tests (DITs) were administered to participants where they were asked to estimate the 95th percentile of an underlying log-normal exposure distribution from small data sets. Participants were presented with an exposure data interpretation or rule of thumb training which included a simple set of rules for estimating 95th percentiles for small data sets from a log-normal population. DIT was given to each participant before and after the rule of thumb training. Results of each DIT and qualitative and quantitative exposure judgments were compared with a reference judgment obtained through a Bayesian probabilistic analysis of the sampling data to investigate overall judgment accuracy and bias. There were a total of 4386 participant-task-chemical judgments for all data collections: 552 qualitative judgments made without sampling data and 3834 quantitative judgments with sampling data. The DITs and quantitative judgments were significantly better than random chance and much improved by the rule of thumb training. In addition, the rule of thumb training reduced the amount of bias in the DITs and quantitative judgments. The mean DIT % correct scores increased from 47 to 64% after the rule of thumb training (P < 0.001). The
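
    The abstract does not reproduce the rule of thumb that was taught, but the parametric quantity it approximates is the log-normal 95th percentile, exp(mean + 1.645 × SD) of the log-transformed measurements. A minimal sketch with hypothetical sample values:

        import math

        def lognormal_p95(samples):
            """Parametric 95th-percentile estimate, exp(mean + 1.645*sd) of the log-transformed data."""
            logs = [math.log(x) for x in samples]
            n = len(logs)
            mean = sum(logs) / n
            sd = math.sqrt(sum((v - mean) ** 2 for v in logs) / (n - 1))
            return math.exp(mean + 1.645 * sd)

        # hypothetical 4-sample data set (mg/m^3):
        print(round(lognormal_p95([0.12, 0.34, 0.08, 0.21]), 3))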

  7. TanDEM-X IDEM precision and accuracy assessment based on a large assembly of differential GNSS measurements in Kruger National Park, South Africa

    NASA Astrophysics Data System (ADS)

    Baade, J.; Schmullius, C.

    2016-09-01

    High resolution Digital Elevation Models (DEM) represent fundamental data for a wide range of Earth surface process studies. Over the past years, the German TanDEM-X mission acquired data for a new, truly global Digital Elevation Model with unprecedented geometric resolution, precision and accuracy. First TanDEM Intermediate Digital Elevation Models (i.e. IDEM) with a geometric resolution from 0.4 to 3 arcsec have been made available for scientific purposes in November 2014. This includes four 1° × 1° tiles covering the Kruger National Park in South Africa. Here, we document the results of a local scale IDEM height accuracy validation exercise utilizing over 10,000 RTK-GNSS-based ground survey points from fourteen sites characterized by mainly pristine Savanna vegetation. The vertical precision of the ground checkpoints is 0.02 m (1σ). Selected precursor data sets (SRTMGL1, SRTM41, ASTER-GDEM2) are included in the analysis to facilitate the comparison. Although IDEM represents an intermediate product on the way to the new global TanDEM-X DEM, expected to be released in late 2016, it allows first insight into the properties of the forthcoming product. Remarkably, the TanDEM-X tiles include a number of auxiliary files providing detailed information pertinent to a user-based quality assessment. We present examples for the utilization of this information in the framework of a local scale study including the identification of height readings contaminated by water. Furthermore, this study provides evidence for the high precision and accuracy of IDEM height readings and the sensitivity to canopy cover. For open terrain, the 0.4 arcsec resolution edition (IDEM04) yields an average bias of 0.20 ± 0.05 m (95% confidence interval, CI95), an RMSE of 1.03 m and an absolute vertical height error (LE90) of 1.5 [1.4, 1.7] m (CI95). The corresponding values for the lower resolution IDEM editions are about the same and provide evidence for the high quality of the IDEM products.
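
    For orientation, the checkpoint statistics quoted here (bias with its 95% confidence interval, RMSE, LE90) can be computed from DEM-minus-GNSS height differences roughly as follows; this is a generic sketch, not the authors' processing code.

        import numpy as np

        def dem_error_stats(dem_heights, gnss_heights):
            """Vertical bias (with 95% CI half-width), RMSE and LE90 of DEM-minus-GNSS differences."""
            diff = np.asarray(dem_heights, float) - np.asarray(gnss_heights, float)
            bias = diff.mean()
            ci95_halfwidth = 1.96 * diff.std(ddof=1) / np.sqrt(diff.size)
            rmse = np.sqrt((diff ** 2).mean())
            le90 = np.percentile(np.abs(diff), 90)   # absolute vertical height error, LE90
            return bias, ci95_halfwidth, rmse, le90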

  8. Calculation of the performance of magnetic lenses with limited machining precision.

    PubMed

    Sháněl, O; Zlámal, J; Oral, M

    2014-02-01

    To meet a required STEM resolution, the mechanical precision of the pole pieces of a magnetic lens needs to be determined. A tolerancing plugin in the EOD software is used to determine a configuration which both meets the optical specifications and is cost effective under the constraints of current manufacturing technologies together with a suitable combination of correction elements.

  9. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  10. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
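
    A simplified, generic illustration of how a constant-contamination (mass-balance) blank of this size propagates into a corrected fraction-modern value; the exact two-component correction used in the study may differ.

        def blank_corrected_f14c(f_measured, m_sample_ug, f_blank, m_blank_ug):
            """Mass-balance removal of a constant extraneous-carbon blank from a fraction-modern value."""
            m_total = m_sample_ug + m_blank_ug
            return (f_measured * m_total - f_blank * m_blank_ug) / m_sample_ug

        # e.g. a 20 ug C OC fraction measured at F14C = 0.55 with a 0.8 ug modern blank (F14C ~ 1.0):
        print(round(blank_corrected_f14c(0.55, 20.0, 1.0, 0.8), 3))   # ~0.532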

  11. The science of and advanced technology for cost-effective manufacture of high precision engineering products. Volume 4. Thermal effects on the accuracy of numerically controlled machine tool

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Barash, M. M.; Liu, C. R.

    1985-10-01

    Thermal effects on the accuracy of numerically controlled machine tools are especially important in the context of unmanned manufacture or under conditions of precision metal cutting. Removal of the operator from the direct control of the metal cutting process has created problems in terms of maintaining accuracy. The objective of this research is to study thermal effects on the accuracy of numerically controlled machine tools. The initial part of the research report is concerned with the analysis of a hypothetical machine. The thermal characteristics of this machine are studied. Numerical methods for evaluating the errors exhibited by the slides of the machine are proposed and the possibility of predicting thermally induced errors by the use of regression equations is investigated. A method for computing the workspace error is also presented. The final part is concerned with the actual measurement of errors on a modern CNC machining center. Thermal influences on the errors are the main focus of the experimental work. Thermal influences on the errors of machine tools are predictable. Techniques for determining thermal effects on machine tools at a design stage are also presented. Keywords: error models and prediction; metrology; automation.
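
    The regression idea mentioned above can be sketched as an ordinary least-squares fit of measured slide errors against temperature rises at a few points on the machine structure; the sensor layout and data below are hypothetical.

        import numpy as np

        def fit_thermal_error_model(temps, errors):
            """temps: (n_obs, n_sensors) temperature rises in K; errors: (n_obs,) measured slide errors in um."""
            X = np.column_stack([np.ones(len(errors)), np.atleast_2d(temps)])
            coef, *_ = np.linalg.lstsq(X, np.asarray(errors, float), rcond=None)
            return coef  # intercept followed by one sensitivity per temperature sensor

        def predict_thermal_error(coef, temps):
            temps = np.atleast_2d(temps)
            X = np.column_stack([np.ones(temps.shape[0]), temps])
            return X @ coef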

  12. Mesoscopic atomic entanglement for precision measurements beyond the standard quantum limit.

    PubMed

    Appel, J; Windpassinger, P J; Oblak, D; Hoff, U B; Kjaergaard, N; Polzik, E S

    2009-07-01

    Squeezing of quantum fluctuations by means of entanglement is a well-recognized goal in the field of quantum information science and precision measurements. In particular, squeezing the fluctuations via entanglement between 2-level atoms can improve the precision of sensing, clocks, metrology, and spectroscopy. Here, we demonstrate 3.4 dB of metrologically relevant squeezing and entanglement for ≳10⁵ cold caesium atoms via a quantum nondemolition (QND) measurement on the atom clock levels. We show that there is an optimal degree of decoherence induced by the quantum measurement which maximizes the generated entanglement. A 2-color QND scheme used in this paper is shown to have a number of advantages for entanglement generation as compared with a single-color QND measurement.
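
    As a worked conversion (not taken from the paper's text), 3.4 dB of squeezing corresponds to a measured variance of 10^(-3.4/10) ≈ 0.46 of the standard-quantum-limit variance, i.e. roughly a factor 2.2 reduction:

        sq_db = 3.4
        variance_ratio = 10 ** (-sq_db / 10)          # ~0.457 of the coherent-state (SQL) variance
        print(variance_ratio, 1 / variance_ratio)     # ~0.46 and ~2.2-fold noise-variance reduction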

  13. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: To quantify the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons into the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1% for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)
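
    The accuracy and precision figures quoted for each insert are conventionally obtained from the reconstructed RSP values inside an insert ROI, accuracy as the relative error of the ROI mean and precision as the relative standard deviation; a minimal sketch (assumed post-processing, not the authors' code):

        import numpy as np

        def rsp_accuracy_precision(roi_values, rsp_true):
            """Accuracy: relative error of the ROI mean; precision: relative SD of the ROI voxels (both in %)."""
            roi = np.asarray(roi_values, dtype=float)
            accuracy_pct = 100.0 * (roi.mean() - rsp_true) / rsp_true
            precision_pct = 100.0 * roi.std(ddof=1) / roi.mean()
            return accuracy_pct, precision_pct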

  14. Limits of active laser triangulation as an instrument for high precision plant imaging.

    PubMed

    Paulus, Stefan; Eichert, Thomas; Goldbach, Heiner E; Kuhlmann, Heiner

    2014-02-05

    Laser scanning is a non-invasive method for collecting and parameterizing 3D data of well reflecting objects. These systems have been used for 3D imaging of plant growth and structure analysis. A prerequisite is that the recorded signals originate from the true plant surface. In this paper we studied the effects of species, leaf chlorophyll content and sensor settings on the suitability and accuracy of a commercial 660 nm active laser triangulation scanning device. We found that surface images of Ficus benjamina leaves were inaccurate at low chlorophyll concentrations and a long sensor exposure time. Imaging of the rough waxy leaf surface of leek (Allium porrum) was possible using very low exposure times, whereas at higher exposure times penetration and multiple refraction prevented the correct imaging of the surface. A comparison of scans with varying exposure time enabled the target-oriented analysis to identify chlorotic, necrotic and healthy leaf areas or mildew infestations. We found plant properties and sensor settings to have a strong influence on the accuracy of measurements. These interactions have to be further elucidated before laser imaging of plants is possible with the high accuracy required for e.g., the observation of plant growth or reactions to water stress.

  15. Assessing a dual-frequency identification sonars' fish-counting accuracy, precision, and turbid river range capability.

    PubMed

    Maxwell, Suzanne L; Gove, Nancy E

    2007-12-01

    Accurately assessing migrating salmon populations in turbid rivers with hydroacoustics is challenging. Using single, dual, or split-beam sonars, difficulties occur fitting acoustic beams between the river's narrow boundaries, distinguishing fish from nonfish echoes, and resolving individual fish at high densities. To address these issues, the fish-counting capability of a dual-frequency identification sonar (DIDSON), which produces high resolution, video-like images, was assessed. In a clear river, fish counts generated from a DIDSON, an echo counter, split-beam sonar, and video were compared to visual counts from a tower, a method frequently used to ground-truth sonars. The DIDSON and tower counts were very similar and showed the strongest agreement and least variability compared to the other methods. In a highly turbid river, the DIDSON's maximum detection range for a 10.16 cm spherical target was 17 m, less than absorption and wave spreading losses predict, and 26 m in clear water. Unlike tower and video methods, the DIDSON was not limited by surface disturbances or turbidity. DIDSON advantages over other sonars include: better target resolution; wider viewing angle; better coverage of the water column; accurate direction of travel; and simpler to aim and operate.

  16. Measuring the bias, precision, accuracy, and validity of self-reported height and weight in assessing overweight and obesity status among adolescents using a surveillance system

    PubMed Central

    2015-01-01

    Background Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. Methods A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000–2002 and 2004–2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin's concordance correlation coefficient was used. BMI was estimated for self-reported and objective measures. The prevalence of students' weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations are assessed by sex and grade. Results Students in 8th and 11th grade overestimated their height by 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight by 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m2 (White boys) to -0.7 kg/m2 (African-American girls). The sensitivity of self-reported measures to classify weight
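
    A correction equation of the type developed here is applied as a simple linear transform of the self-reported values before BMI is computed; the coefficients in this sketch are hypothetical placeholders, not the SPAN equations.

        def corrected_bmi(self_height_cm, self_weight_kg,
                          h_coef=(2.9, 0.985), w_coef=(0.6, 1.01)):
            """Apply hypothetical linear correction equations, then compute BMI (kg/m^2)."""
            height_cm = h_coef[0] + h_coef[1] * self_height_cm   # placeholder coefficients
            weight_kg = w_coef[0] + w_coef[1] * self_weight_kg   # placeholder coefficients
            return weight_kg / (height_cm / 100.0) ** 2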

  17. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
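
    The power question posed here can be framed, in a much-simplified form, as a Monte Carlo simulation: generate declining abundance series with survey error, fit a log-linear trend, and count detections. The sketch below uses a hypothetical survey CV and is only meant to illustrate the calculation, not reproduce the authors' simulations.

        import numpy as np
        from scipy import stats

        def power_to_detect_decline(n_years=15, total_decline=0.25, cv=0.30,
                                    n_sims=2000, alpha=0.05, n0=1000.0, seed=1):
            """Fraction of simulated surveys in which a one-sided trend test detects the decline."""
            rng = np.random.default_rng(seed)
            years = np.arange(n_years)
            slope_true = np.log(1.0 - total_decline) / (n_years - 1)
            detections = 0
            for _ in range(n_sims):
                expected = n0 * np.exp(slope_true * years)
                # lognormal survey error; for modest CVs, the sigma of the log roughly equals the CV
                observed = expected * rng.lognormal(mean=0.0, sigma=cv, size=n_years)
                fit = stats.linregress(years, np.log(observed))
                if fit.slope < 0 and fit.pvalue / 2 < alpha:   # one-sided test for a negative trend
                    detections += 1
            return detections / n_sims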

  18. High-precision, high-accuracy ultralong-range swept-source optical coherence tomography using vertical cavity surface emitting laser light source.

    PubMed

    Grulkowski, Ireneusz; Liu, Jonathan J; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Jiang, James; Fujimoto, James G; Cable, Alex E

    2013-03-01

    We demonstrate ultralong-range swept-source optical coherence tomography (OCT) imaging using vertical cavity surface emitting laser technology. The ability to adjust laser parameters and high-speed acquisition enables imaging ranges from a few centimeters up to meters using the same instrument. We discuss the challenges of long-range OCT imaging. In vivo human-eye imaging and optical component characterization are presented. The precision and accuracy of OCT-based measurements are assessed and are important for ocular biometry and reproducible intraocular distance measurement before cataract surgery. Additionally, meter-range measurement of fiber length and multicentimeter-range imaging are reported. 3D visualization supports a class of industrial imaging applications of OCT.

  19. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

    Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ± 0.3‰ for chalcopyrite and pyrrhotite, and ± 0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ± 0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (± 1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values, to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain-to-grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite occur when the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented, thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical
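
    For background, δ34S values are conventionally reported relative to V-CDT, and SIMS instrumental bias is removed by reference to a matrix-matched standard of known composition; the following sketch restates those standard relations (generic practice, not the authors' exact data reduction).

        R_VCDT = 0.0441626   # 34S/32S of the V-CDT reference scale (commonly quoted value)

        def delta34S_permil(r_sample, r_reference=R_VCDT):
            """Delta notation: per-mil deviation of a measured 34S/32S ratio from the reference."""
            return (r_sample / r_reference - 1.0) * 1000.0

        def bias_corrected(delta_raw_unknown, delta_raw_standard, delta_true_standard):
            """Subtract the instrumental bias measured on a matrix-matched standard."""
            return delta_raw_unknown - (delta_raw_standard - delta_true_standard)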

  20. Advantages, limitations, and diagnostic accuracy of photoscreeners in early detection of amblyopia: a review

    PubMed Central

    Sanchez, Irene; Ortiz-Toquero, Sara; Martin, Raul; de Juan, Victoria

    2016-01-01

    Amblyopia detection is important to ensure proper visual development and avoid permanent decrease of visual acuity. This condition does not produce symptoms, so it is difficult to diagnose if a vision problem actually exists. However, because amblyopia treatment is limited by age, early diagnosis is of paramount relevance. Traditional vision screening (conducted in children <3 years of age) is associated with difficulty in getting cooperation from a subject to conduct the eye exam, so accurate objective methods to improve amblyopia detection are necessary. Handheld devices used for photoscreening or autorefraction could offer advantages to improve amblyopia screening because they reduce exploration time to just a few seconds, no subject collaboration is needed, and they provide objective information. The purpose of this review is to summarize the main functions and clinical applicability of commercially available devices for early detection of amblyopia and to describe their differences, advantages, and limitations. Although the studies reviewed are heterogeneous (due to wide differences in referral criteria, use of different risk factors, different types of samples studied, etc.), these devices quickly provide objective measures with a simple outcome report: retest, pass, or refer. However, due to major limitations, these devices are not recommended, and their use in clinical practice is limited. PMID:27555744

  2. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion to obtain accurate detector time lag. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer sized uranium particles (1-3.5 μm). Linear regression slope (LRS) (one isotope plotted over the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using LRS calculation, time lag, and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
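
    The linear-regression-slope (LRS) approach described above amounts to plotting the minor-isotope intensity against the major-isotope intensity over the transient and taking the fitted slope, after the detector time-lag correction, as the raw ratio; a schematic sketch with an illustrative external mass-bias correction:

        import numpy as np

        def lrs_ratio(signal_minor, signal_major):
            """Raw minor/major isotope ratio from the slope of minor-vs-major intensities."""
            slope, _intercept = np.polyfit(np.asarray(signal_major, float),
                                           np.asarray(signal_minor, float), 1)
            return slope

        def mass_bias_corrected(raw_ratio, measured_standard_ratio, certified_standard_ratio):
            """Simple external correction against a standard of known isotope ratio."""
            return raw_ratio * certified_standard_ratio / measured_standard_ratio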

  3. The Effect of Aging on the Accuracy of New Friction-Style Mechanical Torque Limiting Devices for Dental Implants

    PubMed Central

    Saboury, Aboulfazl; Sadr, Seyed Jalil; Fayaz, Ali; Mahshid, Minoo

    2013-01-01

    Objective: High variability in delivering the target torque is reported for friction-style mechanical torque limiting devices (F-S MTLDs). The effect of aging (number of uses) on the accuracy of these devices is not clear. The purpose of this study was to assess the effect of aging on the accuracy (±10% of the target torque) of F-S MTLDs. Materials and Methods: Fifteen new F-S MTLDs and their appropriate drivers from three different implant manufacturers (Astra Tech, Biohorizon and Dr Idhe), five for each type, were selected. The procedure of peak torque measurement was performed in ten sequences before and after aging. In each sequence, ten repetitions of peak torque values were registered for the aging procedure. To measure the output of each device, a Tohnichi torque gauge was used. Results: Before aging, peak torque measurements of all the devices tested in this study fell within 10% of their preset target values. After aging, a significant difference was seen between the raw error values of the three groups of MTLDs (P<0.05). More than 50% of all peak torque measurements demonstrated more than 10% difference from their torque values after aging. Conclusion: Within the limitations of this study, aging as an independent factor affects the accuracy of F-S MTLDs. Astra Tech MTLDs presented the most consistent torque output for the 25 Ncm target torque. PMID:23724202

  4. Maximum precision closed-form solution for localizing diffraction-limited spots in noisy images.

    PubMed

    Larkin, Joshua D; Cook, Peter R

    2012-07-30

    Super-resolution techniques like PALM and STORM require accurate localization of single fluorophores detected using a CCD. Popular localization algorithms inefficiently assume each photon registered by a pixel can only come from an area in the specimen corresponding to that pixel (not from neighboring areas), before iteratively (slowly) fitting a Gaussian to pixel intensity; they fail with noisy images. We present an alternative; a probability distribution extending over many pixels is assigned to each photon, and independent distributions are joined to describe emitter location. We compare algorithms, and recommend which serves best under different conditions. At low signal-to-noise ratios, ours is 2-fold more precise than others, and 2 orders of magnitude faster; at high ratios, it closely approximates the maximum likelihood estimate.
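
    A minimal sketch of the joint-distribution idea, under the simplifying assumptions of a Gaussian PSF and negligible background: each photon contributes a Gaussian location distribution, and the product of independent Gaussians peaks at the mean of the photon coordinates, with an uncertainty that shrinks as 1/√N. This is an illustration of the principle, not the authors' algorithm.

        import numpy as np

        def localize_emitter(photon_xy, sigma_psf, pixel_size):
            """photon_xy: (N, 2) pixel-centre coordinates of detected photons (hypothetical input)."""
            photon_xy = np.asarray(photon_xy, dtype=float)
            # per-photon positional uncertainty: PSF width plus the pixelation term a^2/12
            sigma_i2 = sigma_psf ** 2 + pixel_size ** 2 / 12.0
            centre = photon_xy.mean(axis=0)                     # peak of the product of the per-photon Gaussians
            sigma_centre = np.sqrt(sigma_i2 / len(photon_xy))   # shrinks as 1/sqrt(N)
            return centre, sigma_centre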

  5. Monolayer stress microscopy: limitations, artifacts, and accuracy of recovered intercellular stresses.

    PubMed

    Tambe, Dhananjay T; Croutelle, Ugo; Trepat, Xavier; Park, Chan Young; Kim, Jae Hun; Millet, Emil; Butler, James P; Fredberg, Jeffrey J

    2013-01-01

    In wound healing, tissue growth, and certain cancers, the epithelial or the endothelial monolayer sheet expands. Within the expanding monolayer sheet, migration of the individual cell is strongly guided by physical forces imposed by adjacent cells. This process is called plithotaxis and was discovered using Monolayer Stress Microscopy (MSM). MSM rests upon certain simplifying assumptions, however, concerning boundary conditions, cell material properties and system dimensionality. To assess the validity of these assumptions and to quantify associated errors, here we report new analytical, numerical, and experimental investigations. For several commonly used experimental monolayer systems, the simplifying assumptions used previously lead to errors that are shown to be quite small. Out-of-plane components of displacement and traction fields can be safely neglected, and characteristic features of intercellular stresses that underlie plithotaxis remain largely unaffected. Taken together, these findings validate Monolayer Stress Microscopy within broad but well-defined limits of applicability.

  6. Present Limits on the Precision of SM Predictions for Jet Energies

    SciTech Connect

    Paramonov, A.A.; Canelli, F.; D'Onofrio, M.; Frisch, H.J.; Mrenna, S.; /Fermilab

    2010-08-01

    We investigate the impact of theoretical uncertainties on the accuracy of measurements involving hadronic jets. The analysis is performed using events with a Z boson and a single jet observed in pp̄ collisions at √s = 1.96 TeV in 4.6 fb⁻¹ of data from the Collider Detector at Fermilab (CDF). The transverse momenta (pT) of the jet and the boson should balance each other due to momentum conservation in the plane transverse to the direction of the p and p̄ beams. We evaluate the dependence of the measured pT-balance on theoretical uncertainties associated with initial and final state radiation, choice of renormalization and factorization scales, parton distribution functions, jet-parton matching, calculations of matrix elements, and parton showering. We find that the uncertainty caused by parton showering at large angles is the largest amongst the listed uncertainties. The proposed method can be re-applied at the LHC experiments to investigate and evaluate the uncertainties on the predicted jet energies. The distributions produced at the CDF environment are intended for comparison to those from modern event generators and new tunes of parton showering.

  7. Assessment of microcalcifications with limited number of high-precision macrobiopsies.

    PubMed

    Harries, Richard; Lawson, Sarah; Bruckers, Liesbeth

    2010-09-01

    Stereotactic biopsy assessment of microcalcification clusters with direct and frontal macrobiopsies was performed in a population of 46 women screened for breast cancer. The only clinical finding in these women was microcalcification. Sensitivity of the procedure was 98% and calcifications were detected in 107 out of 148 tissue specimens (73%). This is the highest reported ratio so far. Interestingly, the total number of cores was inversely correlated with the success rate, suggesting that the accuracy of the direct and frontal approach is high. Four out of 46 women underwent surgery for malignancy indicating that 41 women escaped intervention with a mean follow-up of at least 1 year. Patient satisfaction is high, in particular regarding reported pain, fear and overall appreciation. No complications were seen. The data suggest that a lower number of macrobiopsies for microcalcifications could be acceptable with direct and frontal biopsy methods without reducing sensitivity. Lowering the number of biopsies can optimize the interpretation of surgical margin and reduce the number of biopsy-related mastectomies. PMID:20495463

  8. Pushing the limits: latest developments in angle metrology for the inspection of ultra-precise synchrotron optics

    NASA Astrophysics Data System (ADS)

    Yandayan, Tanfer; Geckeler, Ralf D.; Siewert, Frank

    2014-09-01

    The requirements on the quality of ultra-precise X-ray optical components for application in the Synchrotron Radiation (SR) community are increasing continually and strongly depend on the quality of the metrology devices available to measure such optics. To meet the upcoming accuracy goal of 50 nrad rms for slope measuring profilers, a dedicated project, SIB58 Angles, consisting of 16 worldwide partners and supported by the European Metrology Research Programme (EMRP) was started in Sep 2013. The project covers investigations on autocollimators under extremely challenging measuring conditions, ray-tracing models, 2D autocollimator calibration (for the first time worldwide), determination of error sources in angle encoders providing traceability by `sub-division of 2π rad' with nrad uncertainty, angle generation by 'ratio of two lengths' in nrad level, and on the development of portable precise Small Angle Generators (SAGs) for regular in-situ checks of autocollimators' performance. Highlights from the project will be reported in the paper and the community of metrology for X-Ray and EUV Optics will be informed about its progress and the latest work in angle metrology.

  9. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  10. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods.

    PubMed

    He, Bin; Frey, Eric C

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed (111)In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
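
    The misregistration experiment can be pictured as shifting the reconstructed volume by a sub-voxel amount, re-summing the counts inside a fixed VOI mask, and reporting the percentage change; a schematic sketch (illustrative only, not the study's code):

        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def voi_shift_error_pct(volume, voi_mask, shift_voxels=(0.3, 0.0, 0.0)):
            """Percentage change in the VOI-summed counts after a sub-voxel shift of the volume."""
            reference = volume[voi_mask].sum()
            shifted = nd_shift(volume, shift_voxels, order=1)   # trilinear interpolation
            return 100.0 * (shifted[voi_mask].sum() - reference) / reference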

  11. Fundamental limitations to high-precision tests of the universality of free fall by dropping atoms

    NASA Astrophysics Data System (ADS)

    Nobili, Anna M.

    2016-02-01

    Tests of the universality of free fall and the weak equivalence principle probe the foundations of general relativity. Evidence of a violation may lead to the discovery of a new force. The best torsion balance experiments have ruled it out to 10⁻¹³. Cold-atom drop tests have reached 10⁻⁷ and promise to do 7 to 10 orders of magnitude better, on the ground or in space. They are limited by the random shot noise, which depends on the number N of atoms in the clouds (as 1/√N). As mass-dropping experiments in the nonuniform gravitational field of Earth, they are sensitive to the initial conditions. Random accelerations due to initial condition errors of the clouds are designed to be at the same level as shot noise, so that they can be reduced with the number of drops along with it. This sets the requirements for the initial position and velocity spreads of the clouds with given N. In the STE-QUEST space mission proposal aiming at 2×10⁻¹⁵, they must be about a factor 8 above the limit established by Heisenberg's uncertainty principle, and the integration time required to reduce both errors is 3 years, with a mission duration of 5 years. Instead, offset errors at release between position and velocity of different atom clouds are systematic and give rise to a systematic effect which mimics a violation. Such systematic offsets must be demonstrated to be as small as required in all drops, i.e., they must be kept small by design, and they must be measured. For STE-QUEST to meet its goal they must be several orders of magnitude smaller than the size (in position and velocity space) of each individual cloud, which in its turn must be at most 8 times larger than the uncertainty principle limit. Even if all technical problems are solved and different atom clouds are released with negligible systematic errors, still these errors must be measured; and Heisenberg's principle dictates that such measurement lasts as long as the experiment. While shot noise is random, hence

  12. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients of the new method ranging from 0.743 to 0.945 and Levene's test showing that there were significantly lower variances in the data generated by the new method both verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites and they were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method.

  13. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients of the new method ranging from 0.743 to 0.945 and Levene's test showing that there were significantly lower variances in the data generated by the new method both verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites and they were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416
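
    The Bland-Altman agreement cited in both records uses the standard bias and 95% limits-of-agreement formulas, which can be reproduced generically as follows (not the authors' exact analysis):

        import numpy as np

        def bland_altman(dxa_bmc, microct_bmc):
            """Bias and 95% limits of agreement between two sets of paired measurements."""
            a = np.asarray(dxa_bmc, dtype=float)
            b = np.asarray(microct_bmc, dtype=float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)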

  14. Accuracy of Inferior Vena Cava Ultrasound for Predicting Dehydration in Children with Acute Diarrhea in Resource-Limited Settings

    PubMed Central

    Modi, Payal; Glavis-Bloom, Justin; Nasrin, Sabiha; Guy, Allysia; Rege, Soham; Noble, Vicki E.; Alam, Nur H.; Levine, Adam C.

    2016-01-01

    Introduction Although dehydration from diarrhea is a leading cause of morbidity and mortality in children under five, existing methods of assessing dehydration status in children have limited accuracy. Objective To assess the accuracy of point-of-care ultrasound measurement of the aorta-to-IVC ratio as a predictor of dehydration in children. Methods A prospective cohort study of children under five years with acute diarrhea was conducted in the rehydration unit of the International Centre for Diarrhoeal Disease Research, Bangladesh (icddr,b). Ultrasound measurements of aorta-to-IVC ratio and dehydrated weight were obtained on patient arrival. Percent weight change was monitored during rehydration to classify children as having “some dehydration” with weight change 3–9% or “severe dehydration” with weight change > 9%. Logistic regression analysis and Receiver-Operator Characteristic (ROC) curves were used to evaluate the accuracy of aorta-to-IVC ratio as a predictor of dehydration severity. Results 850 children were enrolled, of which 771 were included in the final analysis. Aorta to IVC ratio was a significant predictor of the percent dehydration in children with acute diarrhea, with each 1-point increase in the aorta to IVC ratio predicting a 1.1% increase in the percent dehydration of the child. However, the area under the ROC curve (0.60), sensitivity (67%), and specificity (49%), for predicting severe dehydration were all poor. Conclusions Point-of-care ultrasound of the aorta-to-IVC ratio was statistically associated with volume status, but was not accurate enough to be used as an independent screening tool for dehydration in children under five years presenting with acute diarrhea in a resource-limited setting. PMID:26766306

  15. Exploring the Limits of Lateral Resolution and Anomaly Precision of Marine Gravimeter Data

    NASA Astrophysics Data System (ADS)

    Scheirer, D. S.; Kinsey, J. C.; Childs, J. R.

    2011-12-01

    horizontal velocity and course-over-ground, are often noisier at periods shorter than a few minutes than are smoothed total-gravity estimates; this introduces noise into Eotvos-corrected gravity anomalies. We investigate a number of filtering approaches to minimize this introduced noise. Applying lever-arm corrections to GPS positions that are offset from the BGM3 sensor is one straightforward method to improve the navigation, and hence Eotvos correction, of the gravimeter data. In September 2011, we collected BGM3 data from a small boat throughout Puget Sound, where density contrasts are much shallower and hence gravity gradients are steeper than in typical deep-water marine gravity surveys. By conducting surveys in narrow inlets surrounded by high-quality land gravity stations, we have an ideal BGM3 data set to evaluate the lateral resolution and anomaly precision of this type of marine gravimeter and to investigate the effect of GPS vertical-motion estimates and offset corrections.
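
    The Eötvös correction referred to here is the standard one for a moving gravimeter; in practical units (speed in knots, latitude, course from true north) it is E[mGal] = 7.503·v·cos(φ)·sin(α) + 0.004154·v², added to the measured gravity. A small sketch of the textbook formula, offered for orientation rather than as the survey's processing code:

        import math

        def eotvos_correction_mgal(speed_knots, latitude_deg, course_deg):
            """Eotvos correction in mGal for a platform moving at the given speed, latitude and course."""
            phi = math.radians(latitude_deg)
            alpha = math.radians(course_deg)
            return 7.503 * speed_knots * math.cos(phi) * math.sin(alpha) + 0.004154 * speed_knots ** 2

        # e.g. a launch running due east at 6 kn at 47.6 N latitude (roughly Puget Sound):
        print(round(eotvos_correction_mgal(6.0, 47.6, 90.0), 2))   # ~30.5 mGal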

  16. Precision, limit of detection and range of quantitation in competitive ELISA.

    PubMed

    Hayashi, Yuzuru; Matsuda, Rieko; Maitani, Tamio; Imai, Kazuhiro; Nishimura, Waka; Ito, Katsutoshi; Maeda, Masako

    2004-03-01

    This paper develops a mathematical model for describing the within-plate variation as the RSD (relative standard deviation) of absorbance measurements in a wide concentration range in competitive ELISA and proposes a method for determining the limit of detection (LOD) and range of quantitation (ROQ). The ELISA for 17 alpha-hydroxyprogesterone is taken as an example. The theoretical RSD description involves analyte concentration as an independent variable and error sources as parameters which concern the pipetting and absorbance measurement. Our model can dispense with repeated experiments of real samples, but the error parameters should be determined experimentally. The theory is in good agreement with the experiments. The most influential error sources at low and high sample concentrations are shown to be the pipetting of a viscous solution of antiserum and the absorbance inherent to the wells of a plate, respectively. The LOD and ROQ are defined as the concentration with 30% RSD and the region with <10% RSD, respectively, and are found in the theoretical plot of the RSD of concentration estimates vs concentration.
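
    To illustrate the definitions used above, the sketch below locates the LOD (concentration at 30% RSD) and ROQ (region with <10% RSD) on a precision profile. The rsd_percent() function is a hypothetical stand-in, not the paper's error-propagation model for competitive ELISA.

```python
# Illustrative only: reading the LOD (RSD = 30%) and ROQ (RSD < 10%) off a
# precision profile built from a toy RSD-vs-concentration function.
import numpy as np

def rsd_percent(c):
    # toy profile: precision is poor at low and very high concentrations
    return 5.0 + 80.0 / (c + 1.0) + 0.02 * c

conc = np.logspace(-1, 3, 2000)          # hypothetical concentration grid
rsd = rsd_percent(conc)

lod = conc[rsd <= 30.0].min()            # lowest concentration with RSD <= 30%
roq = conc[rsd < 10.0]                   # concentrations with RSD < 10%
print(f"LOD ~ {lod:.2f}; ROQ ~ {roq.min():.1f} to {roq.max():.1f} (arbitrary units)")
```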

  17. Accuracy of a Low-Cost Novel Computer-Vision Dynamic Movement Assessment: Potential Limitations and Future Directions

    NASA Astrophysics Data System (ADS)

    McGroarty, M.; Giblin, S.; Meldrum, D.; Wetterling, F.

    2016-04-01

    The aim of the study was to perform a preliminary validation of a low-cost markerless motion capture system (CAPTURE) against an industry gold standard (Vicon). Measurements of knee valgus and flexion during the performance of a countermovement jump (CMJ) between CAPTURE and Vicon were compared. After correction algorithms were applied to the raw CAPTURE data, acceptable levels of accuracy and precision were achieved. The knee flexion angle measured for three trials using CAPTURE deviated by -3.8° ± 3° (left) and 1.7° ± 2.8° (right) compared to Vicon. The findings suggest that low-cost markerless motion capture has potential to provide an objective method for assessing lower limb jump and landing mechanics in an applied sports setting. Furthermore, the outcome of the study warrants the need for future research to examine more fully the potential implications of the use of low-cost markerless motion capture in the evaluation of dynamic movement for injury prevention.

  18. The LLNL High Accuracy Volume Renderer for Unstructured Data: Capabilities, Current Limits, and Potential for ASCI/VIEWS Deployment

    SciTech Connect

    Williams, P L; Max, N L

    2001-06-04

    This report describes a volume rendering system for unstructured data, especially finite element data, that creates images with very high accuracy. The system will currently handle meshes whose cells are either linear or quadratic tetrahedra, or meshes with mixed cell types: tetrahedra, bricks, prisms, and pyramids. The cells may have nonplanar facets. Whenever possible, exact mathematical solutions for the radiance integrals and for interpolation are used. Accurate semitransparent shaded isosurfaces may be embedded in the volume rendering. For very small cells, subpixel accumulation by splatting is used to avoid sampling error. A new exact and efficient visibility ordering algorithm is described. The most accurate images are generated in software; however, more efficient algorithms utilizing graphics hardware may also be selected. The report describes the parallelization of the system for a distributed-shared memory multiprocessor machine, and concludes by discussing the system's limits, desirable future work, and ways to extend the system so as to be compatible with projected ASCI/VIEWS architectures.

  19. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  20. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
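
    The two summary statistics reported in the record above, per-condition precision (percent coefficient of variation over repeated readings) and the RMS percent difference between paired OSL and ion-chamber CTDI100 values, can be computed as in the following sketch. The numbers are hypothetical, not the published measurements.

```python
# Illustrative only: percent coefficient of variation over repeated readings and
# the RMS percent difference between paired dosimeter and ion-chamber values.
import numpy as np

osl_repeats = np.array([14.8, 15.1, 14.6, 15.0, 14.9])   # five repeated OSL readings (mGy)
cv_percent = 100.0 * osl_repeats.std(ddof=1) / osl_repeats.mean()

osl = np.array([14.9, 22.0, 8.1])        # hypothetical paired CTDI100 values (mGy)
chamber = np.array([16.5, 23.1, 8.9])
pct_diff = 100.0 * (osl - chamber) / chamber
rms_pct_diff = np.sqrt(np.mean(pct_diff ** 2))

print(f"precision (CV) = {cv_percent:.1f}%, RMS percent difference = {rms_pct_diff:.1f}%")
```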

  1. Acceptability, Precision and Accuracy of 3D Photonic Scanning for Measurement of Body Shape in a Multi-Ethnic Sample of Children Aged 5-11 Years: The SLIC Study

    PubMed Central

    Wells, Jonathan C. K.; Stocks, Janet; Bonner, Rachel; Raywood, Emma; Legg, Sarah; Lee, Simon; Treleaven, Philip; Lum, Sooky

    2015-01-01

    Background Information on body size and shape is used to interpret many aspects of physiology, including nutritional status, cardio-metabolic risk and lung function. Such data have traditionally been obtained through manual anthropometry, which becomes time-consuming when many measurements are required. 3D photonic scanning (3D-PS) of body surface topography represents an alternative digital technique, previously applied successfully in large studies of adults. The acceptability, precision and accuracy of 3D-PS in young children have not been assessed. Methods We attempted to obtain data on girth, width and depth of the chest and waist, and girth of the knee and calf, manually and by 3D-PS in a multi-ethnic sample of 1484 children aged 5–11 years. The rate of 3D-PS success, and reasons for failure, were documented. Precision and accuracy of 3D-PS were assessed relative to manual measurements using the methods of Bland and Altman. Results Manual measurements were successful in all cases. Although 97.4% of children agreed to undergo 3D-PS, successful scans were only obtained in 70.7% of these. Unsuccessful scans were primarily due to body movement, or inability of the software to extract shape outputs. The odds of scan failure, and the underlying reason, differed by age, size and ethnicity. 3D-PS measurements tended to be greater than those obtained manually (p<0.05); however, ranking consistency was high (r2>0.90 for most outcomes). Conclusions 3D-PS is acceptable in children aged ≥5 years, though with current hardware/software and body movement artefacts, approximately one third of scans may be unsuccessful. The technique had poorer technical success than manual measurements, and had poorer precision when the measurements were viable. Compared to manual measurements, 3D-PS showed modest average biases but acceptable limits of agreement for large surveys, and little evidence that bias varied substantially with size. Most of the issues we identified could be

  2. Accuracy and precision of reconstruction of complex refractive index in near-field single-distance propagation-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Gureyev, Timur; Mohammadi, Sara; Nesterets, Yakov; Dullin, Christian; Tromba, Giuliana

    2013-10-01

    We investigate the quantitative accuracy and noise sensitivity of reconstruction of the 3D distribution of complex refractive index, n(r)=1-δ(r)+iβ(r), in samples containing materials with different refractive indices using propagation-based phase-contrast computed tomography (PB-CT). Our present study is limited to the case of parallel-beam geometry with monochromatic synchrotron radiation, but can be readily extended to cone-beam CT and partially coherent polychromatic X-rays at least in the case of weakly absorbing samples. We demonstrate that, except for regions near the interfaces between distinct materials, the distribution of imaginary part of the refractive index, β(r), can be accurately reconstructed from a single projection image per view angle using phase retrieval based on the so-called homogeneous version of the Transport of Intensity equation (TIE-Hom) in combination with conventional CT reconstruction. In contrast, the accuracy of reconstruction of δ(r) depends strongly on the choice of the "regularization" parameter in TIE-Hom. We demonstrate by means of an instructive example that for some multi-material samples, a direct application of the TIE-Hom method in PB-CT produces qualitatively incorrect results for δ(r), which can be rectified either by collecting additional projection images at each view angle, or by utilising suitable a priori information about the sample. As a separate observation, we also show that, in agreement with previous reports, it is possible to significantly improve signal-to-noise ratio by increasing the sample-to-detector distance in combination with TIE-Hom phase retrieval in PB-CT compared to conventional ("contact") CT, with the maximum achievable gain of the order of 0.3δ /β. This can lead to improved image quality and/or reduction of the X-ray dose delivered to patients in medical imaging.

  3. [CONTROVERSIES REGARDING THE ACCURACY AND LIMITATIONS OF FROZEN SECTION IN THYROID PATHOLOGY: AN EVIDENCE-BASED ASSESSMENT].

    PubMed

    Stanciu-Pop, C; Pop, F C; Thiry, A; Scagnol, I; Maweja, S; Hamoir, E; Beckers, A; Meurisse, M; Grosu, F; Delvenne, Ph

    2015-12-01

    Palpable thyroid nodules are present clinically in 4-7% of the population and their prevalence increases to 50%-67% when using high-resolution neck ultrasonography. By contrast, thyroid carcinoma (TC) represents only 5-20% of these nodules, which underlines the need for an appropriate approach to avoid unnecessary surgery. Frozen section (FS) has been used for more than 40 years in thyroid surgery to establish the diagnosis of malignancy. However, a controversy persists regarding the accuracy of FS, and its place in thyroid pathology has changed with the emergence of fine-needle aspiration (FNA). A PubMed Medline and SpringerLink search was made covering the period from January 2000 to June 2012 to assess the accuracy of FS, its limitations and its indications for the diagnosis of thyroid nodules. Twenty publications encompassing 8,567 subjects were included in our study. The average value of TC among thyroid nodules in the analyzed studies was 15.5%. The ability of FS to detect cancer, expressed by its sensitivity (Ss), was 67.5%. More than two thirds of the authors considered FS useful exclusively in the presence of doubtful FNA and for guiding the surgical extension in cases confirmed as malignant by FNA; however, only 33% accepted FS as a routine examination for the management of thyroid nodules. The influence of FS on the surgical reintervention rate in nodular thyroid pathology was considered to be negligible by most studies, whereas 31% of the authors thought that FS has a favorable benefit by decreasing the number of surgical re-interventions. In conclusion, the role of FS in thyroid pathology evolved from a mandatory component for thyroid surgery to an optional examination after a pre-operative FNA cytology. The accuracy of FS seems to provide insufficient additional benefit and most experts support its use only in the presence of equivocal or suspicious cytological features, for guiding the surgical extension in cases confirmed as malignant by FNA and for the

  4. Evaluation of the Accuracy and Related Factors of the Mechanical Torque-Limiting Device for Dental Implants

    PubMed Central

    Kazemi, Mahmood; Rohanian, Ahmad; Monzavi, Abbas; Nazari, Mohammad Sadegh

    2013-01-01

    Objective: Accurate delivery of torque to implant screws is critical to generate ideal preload in the screw joint and to offer protection against screw loosening. Mechanical torque-limiting devices (MTLDs) are available for this reason. In this study, the accuracy of one type of friction-style and two types of spring-style MTLDs was determined at baseline, following fatigue conditioning, and following sterilization processes. Materials and Methods: Five unused MTLDs were selected from each of the Straumann (ITI), Astra TECH and CWM systems. To measure the output of each MTLD, a digital torque gauge with a 3-jaw chuck was used to hold the driver. Force was applied to the MTLDs until either the friction styles released at a pre-calibrated torque value or the spring styles flexed to a pre-calibrated limit (target torque value). The peak torque value was recorded and the procedure was repeated 5 times for each MTLD. The MTLDs were then subjected to 500 and 1000 cycles of fatigue conditioning and to 50 and 100 cycles of steam sterilization, and the peak torque value was recorded again at each stage. Results: The adjusted difference between measured torque values and target torque values differed significantly between stages for all 3 systems. The adjusted difference did not differ significantly between systems at any stage, but differed significantly between the two styles at the baseline and 500-cycle fatigue stages. Conclusion: Straumann (ITI) devices differed minimally from target torque values at all stages. Spring-style MTLDs were significantly more accurate than the friction-style device in achieving their target torque values at baseline and after 500 fatigue cycles. PMID:23724209

  5. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy

    PubMed Central

    Akintola, Abimbola A.; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C.; van Heemst, Diana

    2016-01-01

    Background: The Equivital (EQ02) is a multi-parameter telemetric device offering both real-time and/or retrospective, synchronized monitoring of ECG, HR, and HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore, simultaneously for 24 h, the Holter and EQ02 monitors. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93–56.45%) and within (range 0.75–9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For datasets containing all data, data with < 50% artifacts, or data with < 20% artifacts, respectively, bias estimated by Bland-Altman analysis was −2.8, −1.0, and −0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, Pearson correlation was 0.786 for HRV measured as standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed
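
    The mean absolute relative difference (MARD) used above as an accuracy metric is simply the mean of |device − reference|/reference over matched averaging windows. A minimal sketch with hypothetical heart-rate values (artifact screening omitted) is:

```python
# Illustrative only: mean absolute relative difference (MARD) between device and
# reference heart-rate series over matched 5-min windows. Values are hypothetical.
import numpy as np

eq02_hr   = np.array([62.0, 75.5, 80.2, 69.8, 90.1])   # hypothetical device HR (bpm)
holter_hr = np.array([63.1, 74.9, 82.0, 70.5, 88.7])   # hypothetical reference HR (bpm)

mard = 100.0 * np.mean(np.abs(eq02_hr - holter_hr) / holter_hr)
print(f"MARD = {mard:.2f}%")
```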

  6. Limited Production (LP) Precision Runway Monitor (PRM) Operational Test and Evaluation integration and OT and E Operational Test Plan

    NASA Astrophysics Data System (ADS)

    Livings, Jeffrey

    1995-05-01

    This document defines the Test Plan and corresponding Test Verification Requirements Traceability Matrix (TVRTM) that will be used to conduct the Limited Production (LP) Precision Runway Monitor (PRM) Operational Test and Evaluation (OT and E) Integration and OT and E Operational tests. These tests will be conducted at the Minneapolis-St. Paul International Airport following the Contractor Site Acceptance Test. The LP PRM OT and E test effort will concentrate on Operational Effectiveness and Suitability. The Operational Effectiveness Test consists of a review of the contractor-performed Development Test and Evaluation (DT and E) and Site Acceptance Tests. This review will evaluate whether each of the Measures of Effectiveness has been satisfactorily tested and whether the results meet the Minimum Acceptable Operational Requirements (MAORs). This review will be conducted solely by test engineers and does not require the PRM system. The Operational Suitability Tests will expose the test participants (Air Traffic (AT) Controllers and Airway Facilities (AF) Technicians) to the PRM system in an operational environment while they perform specified operational procedures. These tests will be conducted in two separate phases: AT Suitability and AF Suitability. Each of these phases is focused on the specific test participants.

  7. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
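
    As a rough illustration of the idea described above, the sketch below runs explicit spectral deferred correction on one time step of y' = -y, with the early correction sweeps evaluating the right-hand side in float32 and the final sweeps in float64. It is a toy example, not the solver or node/sweep schedule from the report, and the multi-level (FAS) machinery is omitted.

```python
# Illustrative only: explicit spectral deferred correction (SDC) on one step of
# y' = -y; early correction sweeps evaluate f in float32, the last in float64.
import numpy as np

def f(t, y, dtype=np.float64):
    return dtype(-y)                                   # test problem y' = -y

def sdc_step(y0, t0, dt, n_nodes=5, sweeps=(np.float32,) * 3 + (np.float64,) * 2):
    t = t0 + dt * np.linspace(0.0, 1.0, n_nodes)       # quadrature nodes
    y = np.empty(n_nodes)
    y[0] = y0
    for m in range(n_nodes - 1):                       # provisional forward-Euler pass
        y[m + 1] = y[m] + (t[m + 1] - t[m]) * f(t[m], y[m])
    for prec in sweeps:                                # correction sweeps
        fk = np.array([float(f(tm, ym, prec)) for tm, ym in zip(t, y)])
        F = np.polyint(np.polyfit(t, fk, n_nodes - 1)) # antiderivative of the interpolant of f
        y_new = np.empty(n_nodes)
        y_new[0] = y0
        for m in range(n_nodes - 1):
            quad = np.polyval(F, t[m + 1]) - np.polyval(F, t[m])
            y_new[m + 1] = (y_new[m]
                            + (t[m + 1] - t[m]) * (f(t[m], y_new[m]) - fk[m])
                            + quad)
        y = y_new
    return y[-1]

print(sdc_step(1.0, 0.0, 0.5), np.exp(-0.5))           # SDC result vs exact solution
```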

  8. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many

  9. Precision orbit determination for Topex

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Schutz, B. E.; Ries, J. C.; Shum, C. K.

    1990-01-01

    The ability of radar altimeters to measure the distance from a satellite to the ocean surface with a precision of the order of 2 cm imposes unique requirements for the orbit determination accuracy. The orbit accuracy requirements will be especially demanding for the joint NASA/CNES Ocean Topography Experiment (Topex/Poseidon). For this mission, a radial orbit accuracy of 13 centimeters will be required for a mission period of three to five years. This is an order of magnitude improvement in the accuracy achieved during any previous satellite mission. This investigation considers the factors which limit the orbit accuracy for the Topex mission. Particular error sources which are considered include the geopotential, the radiation pressure and the atmospheric drag model.

  10. Increasing consistency and accuracy in radiation therapy via educational interventions is not just limited to radiation oncologists.

    PubMed

    Bell, Linda J

    2016-09-01

    This editorial advocates that increasing consistency and accuracy in radiation therapy via educational interventions is important for radiation therapists. Education and training with ongoing refreshers are the key to maintaining consistency throughout the radiotherapy process, which in turn will ensure all patients receive accurate treatment. PMID:27648277

  11. Improved precision and accuracy for high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometric exact mass measurement of small molecules from the simultaneous and controlled introduction of internal calibrants via a second electrospray nebuliser.

    PubMed

    Herniman, Julie M; Bristow, Tony W T; O'Connor, Gavin; Jarvis, Jackie; Langley, G John

    2004-01-01

    The use of a second electrospray nebuliser has proved to be highly successful for exact mass measurement during high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometry (HPLC/FTICRMS). Much improved accuracy and precision of mass measurement were afforded by the introduction of the internal calibration solution, thus overcoming space charge issues due to the lack of control over relative ion abundances of the species eluting from the HPLC column. Further, issues of suppression of ionisation, observed when using a T-piece method, are addressed; this simple system has significant benefits over other, more elaborate approaches, providing data that compare very favourably with them. The technique is robust, flexible and transferable and can be used in conjunction with HPLC, infusion or flow injection analysis (FIA) to provide constant internal calibration signals to allow routine, accurate and precise mass measurements to be recorded.

  12. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-1990; Water-resources investigations report 93-4025

    SciTech Connect

    Boucher, M.S.

    1994-05-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the US Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90.

  13. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  14. Isotopic analysis of small Pb samples using MC-ICPMS: The limits of precision and comparison to TIMS

    NASA Astrophysics Data System (ADS)

    Amelin, Y.; Janney, P.; Chakrabarti, R.; Wadhwa, M.; Jacobsen, S. B.

    2008-12-01

    Multicollector ICP-MS is a mainstream method for precise isotopic analyses of large (over 10⁻⁸ g) quantities of Pb, and is becoming increasingly popular for very rapid, even if relatively imprecise, U-Pb dating of U-bearing minerals using laser ablation. At the same time, high precision U-Pb geo- and cosmochronology mainly utilizes isotope dilution thermal ionization mass spectrometry, recently enhanced by application of double spikes for both Pb and U. Here we explore the suitability of MC-ICPMS for analysis of 10⁻¹¹ to 10⁻⁹ g quantities of radiogenic Pb, contained in small single grains of zircon and other U-bearing minerals, and in chondrules, refractory inclusions and mineral fractions from meteorites. Analyses were performed at the Geological Survey of Canada using a Nu Plasma with DSN-100 desolvating nebulizer, at Arizona State University using a Neptune with Apex nebulizer, and at Harvard University using an Isoprobe P with Apex nebulizer. A total ion yield of 0.4-0.5% was achieved in all three instruments in 2.5-4 minute analyses. The fractions of SRM-981 and SRM-983 standards, spiked with 202Pb-205Pb-233U-235U [1], containing between 3×10⁻¹¹ and 10⁻⁹ g Pb, were analyzed in all three labs. Precision of 207Pb/206Pb ratios in SRM-981 was 0.1-0.3% for 3×10⁻¹¹ g fractions, 0.03-0.1% for 10⁻¹⁰ g fractions, and 0.006-0.013% for 10⁻⁹ g fractions. Precision of the best MC-ICPMS analyses was similar to precision of average TIMS analyses from the same quantities of Pb. Reproducibility of analyses depends on accurate blank and background subtraction as much as on the counting statistics. A series of analyses of the same solution run within a short period of time (i.e. with constant background) yielded a reproducibility similar to that of TIMS, whereas the analyses of a series of separately prepared aliquots were less reproducible. Our data demonstrate that the quality of analyses of 10⁻¹¹ to 10⁻⁹ g Pb fractions by modern MC-ICPMS approaches the quality of TIMS analyses.

  15. Accuracy and Precision of a Custom Camera-Based System for 2-D and 3-D Motion Tracking during Speech and Nonspeech Motor Tasks

    ERIC Educational Resources Information Center

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable…

  16. The accuracy and precision of a micro computer tomography volumetric measurement technique for the analysis of in-vitro tested total disc replacements.

    PubMed

    Vicars, R; Fisher, J; Hall, R M

    2009-04-01

    Total disc replacements (TDRs) in the spine have been clinically successful in the short term, but there are concerns over long-term failure due to wear, as seen in other joint replacements. Simulators have been used to investigate the wear of TDRs, but only gravimetric measurements have been used to assess material loss. Micro computer tomography (microCT) has been used for volumetric measurement of explanted components but has yet to be used for in-vitro studies, where the wear is typically less than 20 mm³ per 10⁶ cycles. The aim of this study was to compare microCT volume measurements with gravimetric measurements and to assess whether microCT can quantify wear volumes of in-vitro tested TDRs. microCT measurements of TDR polyethylene cores were undertaken and the results compared with gravimetric assessments. The effects of repositioning, integration time, and scan resolution were investigated. The best volume measurement resolution was found to be ±3 mm³, at least three orders of magnitude greater than those determined for gravimetric measurements. In conclusion, the microCT measurement technique is suitable for quantifying in-vitro TDR polyethylene wear volumes and can provide qualitative data (e.g. wear location), and also further quantitative data (e.g. height loss), assisting comparisons with in-vivo and ex-vivo data. It is best used alongside gravimetric measurements to maintain the high level of precision that these measurements provide.

  17. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  18. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  19. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station-to-station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  20. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentrations estimates while retaining the greatest assay sensitivity.

  1. Application of terrestrial 'structure-from-motion' photogrammetry on a medium-size Arctic valley glacier: potential, accuracy and limitations

    NASA Astrophysics Data System (ADS)

    Hynek, Bernhard; Binder, Daniel; Boffi, Geo; Schöner, Wolfgang; Verhoeven, Geert

    2014-05-01

    Terrestrial photogrammetry was the standard method for mapping high mountain terrain in the early days of mountain cartography, until it was replaced by aerial photogrammetry and airborne laser scanning. Modern low-cost digital single-lens reflex (DSLR) cameras and inexpensive, highly automated computer vision software with automatic image matching and multiview-stereo routines suggest a rebirth of terrestrial photogrammetry, especially in remote regions, where airborne surveying methods are expensive due to high flight costs. Terrestrial photogrammetry with modern automated image matching is widely used in geodesy; however, its application in glaciology is still rare, especially for surveying ice bodies at the scale of some km², which is typical for valley glaciers. In August 2013 a terrestrial photogrammetric survey was carried out on Freya Glacier, a 6 km² valley glacier next to Zackenberg Research Station in NE-Greenland, where a detailed glacier mass balance monitoring was initiated during the last IPY. Photos were taken with a consumer-grade digital camera (Nikon D7100) from the ridges surrounding the glacier. To create a digital elevation model, the photos were processed with the software PhotoScan. A set of ~100 dGPS-surveyed ground control points on the glacier surface was used to georeference and validate the final DEM. The aim of this study was to produce a high-resolution, high-accuracy DEM of the actual surface topography of the Freya glacier catchment with a novel approach, to explore the potential of modern low-cost terrestrial photogrammetry combined with state-of-the-art automated image matching and multiview-stereo routines for glacier monitoring, and to communicate this powerful and inexpensive method within the environmental research and glacier monitoring community.

  2. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  3. High-precision radiogenic strontium isotope measurements of the modern and glacial ocean: Limits on glacial-interglacial variations in continental weathering

    NASA Astrophysics Data System (ADS)

    Mokadem, Fatima; Parkinson, Ian J.; Hathorne, Ed C.; Anand, Pallavi; Allen, John T.; Burton, Kevin W.

    2015-04-01

    Existing strontium radiogenic isotope (87Sr/86Sr) measurements for foraminifera over Quaternary glacial-interglacial climate cycles provide no evidence for variations in the isotope composition of seawater at the ±9-13 ppm level of precision. However, modelling suggests that even within this level of uncertainty significant (up to 30%) variations in chemical weathering of the continents are permitted, accounting for the longer-term rise in 87Sr/86Sr over the Quaternary, and the apparent imbalance of Sr in the oceans at the present day. This study presents very high-precision 87Sr/86Sr isotope data for modern seawater from each of the major oceans, and a glacial-interglacial seawater record preserved by planktic foraminifera from Ocean Drilling Program (ODP) Site 758 in the north-east Indian Ocean. Strontium isotope 87Sr/86Sr measurements for modern seawater from the Atlantic, Pacific and Indian Oceans are indistinguishable from one another (87Sr/86Sr = 0.7091792 ± 0.0000021, n = 17) at the level of precision obtained in this study (±4.9 ppm 2σ). This observation is consistent with the very long residence time of Sr in seawater, and underpins the utility of this element for high precision isotope stratigraphy. The 87Sr/86Sr seawater record preserved by planktic foraminifera shows no resolvable glacial-interglacial variation (87Sr/86Sr = 0.7091784 ± 0.0000035, n = 10), and limits the response of seawater to variations in the chemical weathering flux and/or composition to ±4.9 ppm or less. Calculations suggest that a variation of ±12% around the steady-state weathering flux can be accommodated by the uncertainties obtained here. The new data cannot accommodate a short-term weathering pulse during de-glaciation, although a more diffuse weathering pulse accompanying protracted ice retreat is permissible. However, these results still indicate that modern weathering fluxes are potentially higher than average over the Quaternary, and such variations through

  4. Precise limits on cosmological variability of the fine-structure constant with zinc and chromium quasar absorption lines

    NASA Astrophysics Data System (ADS)

    Murphy, Michael T.; Malec, Adrian L.; Prochaska, J. Xavier

    2016-09-01

    The strongest transitions of Zn II and Cr II are the most sensitive to relative variations in the fine-structure constant (Δα/α) among the transitions commonly observed in quasar absorption spectra. They also lie within just 40 Å of each other (rest frame), so they are resistant to the main systematic error affecting most previous measurements of Δα/α: long-range distortions of the wavelength calibration. While Zn II and Cr II absorption is normally very weak in quasar spectra, we obtained high signal-to-noise, high-resolution echelle spectra from the Keck and Very Large Telescopes of nine rare systems where it is strong enough to constrain Δα/α from these species alone. These provide 12 independent measurements (three quasars were observed with both telescopes) at redshifts 1.0-2.4, 11 of which pass stringent reliability criteria. These 11 are all consistent with Δα/α = 0 within their individual uncertainties of 3.5-13 parts per million (ppm), with a weighted mean Δα/α = 0.4 ± 1.4 (stat) ± 0.9 (sys) ppm (1σ statistical and systematic uncertainties), indicating no significant cosmological variations in α. This is the first statistical sample of absorbers that is resistant to long-range calibration distortions (at the <1 ppm level), with a precision comparable to previous large samples of ~150 (distortion-affected) absorbers. Our systematic error budget is instead dominated by much shorter-range distortions repeated across echelle orders of individual spectra.
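
    The weighted mean quoted above is the usual inverse-variance combination of the individual measurements. A minimal sketch with hypothetical Δα/α values (statistical errors only; the paper's systematic term is budgeted separately) is:

```python
# Illustrative only: inverse-variance weighted mean of independent measurements
# and its statistical uncertainty. The values are hypothetical placeholders.
import numpy as np

da_ppm  = np.array([1.2, -0.5, 3.0, -2.1, 0.8])      # hypothetical delta-alpha/alpha (ppm)
sig_ppm = np.array([4.0,  6.5, 9.0,  5.5, 13.0])     # 1-sigma statistical errors (ppm)

w = 1.0 / sig_ppm ** 2
mean = np.sum(w * da_ppm) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))
print(f"weighted mean = {mean:.2f} +/- {err:.2f} ppm (statistical only)")
```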

  5. Precision atomic spectroscopy for improved limits on variation of the fine structure constant and local position invariance.

    PubMed

    Fortier, T M; Ashby, N; Bergquist, J C; Delaney, M J; Diddams, S A; Heavner, T P; Hollberg, L; Itano, W M; Jefferts, S R; Kim, K; Levi, F; Lorini, L; Oskay, W H; Parker, T E; Shirley, J; Stalnaker, J E

    2007-02-16

    We report tests of local position invariance and the variation of fundamental constants from measurements of the frequency ratio of the 282-nm 199Hg+ optical clock transition to the ground state hyperfine splitting in 133Cs. Analysis of the frequency ratio of the two clocks, extending over 6 yr at NIST, is used to place a limit on its fractional variation of <5.8 × 10⁻⁶ per change in normalized solar gravitational potential. The same frequency ratio is also used to obtain a 20-fold improvement over previous limits on the fractional variation of the fine structure constant of |α̇/α| < 1.3 × 10⁻¹⁶ yr⁻¹, assuming invariance of other fundamental constants. Comparisons of our results with those previously reported for the absolute optical frequency measurements in H and 171Yb+ vs other 133Cs standards yield a coupled constraint of -1.5 × 10⁻¹⁵

  6. Limits of diagnostic accuracy of anti-hepatitis C virus antibodies detection by ELISA and immunoblot assay.

    PubMed

    Suslov, Anatoly P; Kuzin, Stanislav N; Golosova, Tatiana V; Shalunova, Nina V; Malyshev, Nikolai A; Sadikova, Natalia V; Vavilova, Lubov M; Somova, Anna V; Musina, Elena E; Ivanova, Maria V; Kipor, Tatiana T; Timonin, Igor M; Kuzina, Lubov E; Godkov, Mihail A; Bajenov, Alexei I; Nesterenko, Vladimir G

    2002-07-01

    When human sera samples are tested for anti-hepatitis C virus (HCV) antibodies using different ELISA kits as well as immunoblot assay kits, discrepant results often occur. As a result, the diagnosis of HCV infection in such sera remains unclear. The purpose of this investigation is to define the limits of HCV serodiagnostics. Overall, 7 different test kits from domestic and foreign manufacturers were used for testing the sampled sera. A preliminary comparative study using seroconversion panels PHV905, PHV907 and PHV908 was performed, and a reference kit (Murex anti-HCV version 4) was chosen as the most sensitive kit on the basis of these results. Overall, 1640 sera samples were screened using different anti-HCV ELISA kits, and 667 of them gave discrepant results in at least two kits. These sera were then tested using three anti-HCV ELISA kits (first set of 377 samples) or four anti-HCV ELISA kits (second set of 290 samples) under reference laboratory conditions. In the first set 17.2% of samples remained discrepant and in the second set 13.4%. "Discrepant" sera were further tested in RIBA 3.0 and INNO-LIA immunoblot confirmatory assays, but approximately 5-7% of them remained undetermined after all the tests. For samples with a signal-to-cutoff ratio higher than 3.0, a high rate of result consistency among the reference kit, routine ELISA kits and the INNO-LIA immunoblot assay was observed. On the other hand, the results of testing 27 "problematic" sera in RIBA 3.0 and INNO-LIA were consistent in only 55.5% of cases. Analysis of the antigen spectrum reactive with antibodies in "problematic" sera demonstrated a predominance of Core, NS3 and NS4 antigens for sera positive in RIBA 3.0, and Core and NS3 antigens for sera positive in INNO-LIA. To overcome the problem of undetermined sera, methods based on other principles, as well as alternative criteria for the diagnosis of HCV infection, are discussed.

  7. Overview of the national precision database for ozone

    SciTech Connect

    Mikel, D.K.

    1999-07-01

    One of the most important ambient air monitoring quality assurance indicators is the precision test. Code of Federal Regulations Title 40, Part 58 (40 CFR 58), Appendix A states that all automated analyzers must have precision tests performed at least once every two weeks. Precision tests can be the best indicator of quality of data for the following reasons: Precision tests are performed once every two weeks. There are approximately 24 to 26 tests per year per instrument. Accuracy tests (audits) usually occur only 1-2 times per year. Precision tests and the subsequent statistical tests can be used to calculate the bias in a set of data. Precision tests are used to calculate 95% confidence (probability) limits for the data set. This is important because the confidence of any data point can be determined. If the authors examine any exceedances or near exceedances of the ozone NAAQS, the confidence limits must be examined as well. Precision tests are performed by the monitoring staff and the precision standards are certified against the internal agency primary standards. Precision data are submitted by all state and local agencies that are required to submit criteria pollutant data to the Aerometric Information Retrieval System (AIRS) database. This subset of the AIRS database is named Precision and Accuracy Retrieval Systems (PARS). In essence, the precision test is an internal test performed by the agency collecting and reporting the data.
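
    The basic precision-check arithmetic described above starts from the percent difference between the analyzer response and the precision standard for each biweekly check. The sketch below computes those differences and simple 95% limits (mean ± 1.96·SD); the actual 40 CFR 58 Appendix A probability-limit formulas aggregate the checks differently, so this is illustrative only.

```python
# Illustrative only: percent differences from one-point precision checks and
# simple 95% limits (mean +/- 1.96*SD). All numbers are hypothetical.
import numpy as np

indicated = np.array([0.071, 0.069, 0.072, 0.068, 0.070])  # analyzer response (ppm)
actual    = np.array([0.070, 0.070, 0.070, 0.070, 0.070])  # precision standard (ppm)

d = 100.0 * (indicated - actual) / actual       # percent difference for each check
lower = d.mean() - 1.96 * d.std(ddof=1)
upper = d.mean() + 1.96 * d.std(ddof=1)
print(f"mean bias = {d.mean():.2f}%, 95% limits = ({lower:.2f}%, {upper:.2f}%)")
```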

  8. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  9. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
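
    The ideal-gas ratio method referred to above is, in its simplest isothermal form, a gas expansion from a calibrated reference volume into the evacuated unknown volume, so that P1·Vref = P2·(Vref + Vunk). The sketch below uses hypothetical pressures and a hypothetical reference volume; details of the actual KCP/SNL system (manifold volumes, temperature control, corrections) are not given in the abstract.

```python
# Illustrative only: isothermal gas expansion from a calibrated reference volume
# into an evacuated unknown volume, P1 * V_ref = P2 * (V_ref + V_unk).
p1 = 100.00    # kPa, pressure in the reference volume before expansion (hypothetical)
p2 = 63.70     # kPa, pressure after expansion into the unknown volume (hypothetical)
v_ref = 500.0  # cm^3, calibrated reference volume (hypothetical)

v_unk = v_ref * (p1 / p2 - 1.0)
print(f"unknown volume ~ {v_unk:.1f} cm^3")
```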

  10. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  11. Age-related changes in speed and accuracy during rapid targeted center of pressure movements near the posterior limit of the base of support

    PubMed Central

    Hernandez, Manuel E.; Ashton-Miller, James A.; Alexander, Neil B.

    2012-01-01

    Background Backward falls are often associated with injury, particularly among older women. An age-related increase occurs in center of pressure variability when standing and leaning. So, we hypothesized that, in comparison to young women, older women would display a disproportionate decrease of speed and accuracy in the primary center of pressure submovements as movement amplitude increases. Methods Ground reaction forces were recorded from thirteen healthy young and twelve older women while performing rapid, targeted, center of pressure movements of small and large amplitude in upright stance. Measures included center of pressure speed, the number of center of pressure submovements, and the incidence rate of primary center of pressure submovements undershooting the target. Findings In comparison to young women, older women used slower primary submovements, particularly as movement amplitude increased (P < 0.01). Even though older women achieved similar endpoint accuracy, they demonstrated a 2 to 5-fold increase in the incidence of primary submovement undershooting for large-amplitude movements (P < 0.01). Overall, posterior center of pressure movements of older women were 41% slower and exhibited 43% more secondary submovements than in young women (P < 0.01). Interpretations We conclude that the increased primary submovement undershoots and secondary center of pressure submovements in the older women reflect the use of a conservative control strategy near the posterior limit of their base of support. PMID:22770467

  12. Using measurements of muscle color, pH, and electrical impedance to augment the current USDA beef quality grading standards and improve the accuracy and precision of sorting carcasses into palatability groups.

    PubMed

    Wulf, D M; Page, J K

    2000-10-01

    This research was conducted to determine whether objective measures of muscle color, muscle pH, and/or electrical impedance are useful in segregating palatable beef from unpalatable beef, and to determine whether the current USDA quality grading standards for beef carcasses could be revised to improve their effectiveness at distinguishing palatable from unpalatable beef. One hundred beef carcasses were selected from packing plants in Texas, Illinois, and Ohio to represent the full range of muscle color observed in the U.S. beef carcass population. Steaks from these 100 carcasses were used to determine shear force on eight cooked beef muscles and taste panel ratings on three cooked beef muscles. It was discovered that the darkest-colored 20 to 25% of the beef carcasses sampled were less palatable and considerably less consistent than the other 75 to 80% sampled. Marbling score, by itself, explained 12% of the variation in beef palatability; hump height, by itself, explained 8% of the variation in beef palatability; measures of muscle color or pH, by themselves, explained 15 to 23% of the variation in beef palatability. When combined, marbling score, hump height, and some measure of muscle color or pH explained 36 to 46% of the variation in beef palatability. Alternative quality grading systems were proposed to improve the accuracy and precision of sorting carcasses into palatability groups. The two proposed grading systems decreased palatability variation by 29% and 39%, respectively, within the Choice grade and decreased palatability variation by 37% and 12%, respectively, within the Select grade, when compared with current USDA standards. The percentage of unpalatable Choice carcasses was reduced from 14% under the current USDA grading standards to 4% and 1%, respectively, for the two proposed systems. The percentage of unpalatable Select carcasses was reduced from 36% under the current USDA standards to 7% and 29%, respectively, for the proposed systems.

  13. Limited proteolysis and peptide mapping for comparability of biopharmaceuticals: An evaluation of repeatability, intra-assay precision and capability to detect structural change.

    PubMed

    Perrin, Camille; Burkitt, Will; Perraud, Xavier; O'Hara, John; Jone, Carl

    2016-05-10

    The use of limited proteolysis followed by peptide mapping for the comparability of the higher-order structure of biopharmaceuticals was investigated. In this approach the proteolysis is performed under non-reducing and non-denaturing conditions, and the resulting peptide map is determined by the sample's primary and higher-order structures. This allows comparisons of biopharmaceuticals to be made in terms of their higher-order structure, using a method that is relatively simple to implement. The digestion of a monoclonal antibody under non-denaturing conditions was analyzed using peptide mapping, circular dichroism (CD) and sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). This allowed an optimal digestion time to be chosen. This method was then assessed for its ability to detect structural change using a monoclonal antibody, which had been subjected to a range of stresses: deglycosylation, mild denaturation, and a batch that had failed specifications due to in-process reduction. The repeatability and inter-assay precision were assessed. It was demonstrated that the limited proteolysis peptide maps of the three stressed samples were significantly different from control samples and that the differences observed were consistent between the occasions when the assays were run. A combination of limited proteolysis and CD or SDS-PAGE analysis was shown to enhance the capacity of these techniques to detect structural change, which otherwise would not have been observed.

  14. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful subset is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the relative accuracy of the results. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
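
    The framework itself is not given in the abstract, but the precision and recall it reports for query results are the standard set-overlap statistics, sketched below with hypothetical identifiers and data.

    ```python
    # Illustrative only: standard precision/recall of a query result against a
    # reference ("ground-truth") answer set, the kind of statistic reported as
    # relative accuracy. Identifiers and data are hypothetical.

    def precision_recall(returned_ids, correct_ids):
        returned, correct = set(returned_ids), set(correct_ids)
        true_pos = len(returned & correct)
        precision = true_pos / len(returned) if returned else 0.0
        recall = true_pos / len(correct) if correct else 0.0
        return precision, recall

    p, r = precision_recall(returned_ids=[1, 2, 3, 5], correct_ids=[2, 3, 4, 5, 6])
    print(f"precision={p:.2f}, recall={r:.2f}")   # precision=0.75, recall=0.60
    ```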

  15. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices, which were equipped on the floor and ceiling of the treatment room. This system determines the positioning error as three translations and three rotations from the registration of two X-ray images acquired online with the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-Board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences between the results of the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09 mm, 0.03±0.25 mm, and 0.04±0.31 mm, respectively. The maximum difference was only 0.51 mm in all directions, and the angular discrepancy was 0.3±0.5° between the two systems. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal, and there is general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.
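
    A minimal sketch of the comparison statistics reported above: paired per-axis differences between the setup errors from the two systems, summarized as mean ± standard error and maximum absolute discrepancy. The arrays are placeholders, not the phantom measurements.

    ```python
    import numpy as np

    # Placeholder setup errors in mm from two positioning systems for the same
    # set-ups; columns = (left-right, anterior-posterior, superior-inferior).

    igps = np.array([[-0.2, 0.1, 0.0], [0.3, -0.1, 0.2], [0.0, 0.4, -0.3], [0.1, 0.0, 0.1]])
    cbct = np.array([[-0.1, 0.0, 0.1], [0.2, -0.2, 0.3], [0.1, 0.2, -0.2], [0.0, 0.1, 0.0]])

    diff = igps - cbct
    for axis, name in enumerate(("left-right", "anterior-posterior", "superior-inferior")):
        d = diff[:, axis]
        se = d.std(ddof=1) / np.sqrt(len(d))        # standard error of the mean difference
        print(f"{name}: {d.mean():+.2f} +/- {se:.2f} mm, max |diff| = {np.abs(d).max():.2f} mm")
    ```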

  16. Analysis and design of numerical schemes for gas dynamics 1: Artificial diffusion, upwind biasing, limiters and their effect on accuracy and multigrid convergence

    NASA Technical Reports Server (NTRS)

    Jameson, Antony

    1994-01-01

    The theory of non-oscillatory scalar schemes is developed in this paper in terms of the local extremum diminishing (LED) principle that maxima should not increase and minima should not decrease. This principle can be used for multi-dimensional problems on both structured and unstructured meshes, while it is equivalent to the total variation diminishing (TVD) principle for one-dimensional problems. A new formulation of symmetric limited positive (SLIP) schemes is presented, which can be generalized to produce schemes with arbitrary high order of accuracy in regions where the solution contains no extrema, and which can also be implemented on multi-dimensional unstructured meshes. Systems of equations lead to waves traveling with distinct speeds and possibly in opposite directions. Alternative treatments using characteristic splitting and scalar diffusive fluxes are examined, together with modification of the scalar diffusion through the addition of pressure differences to the momentum equations to produce full upwinding in supersonic flow. This convective upwind and split pressure (CUSP) scheme exhibits very rapid convergence in multigrid calculations of transonic flow, and provides excellent shock resolution at very high Mach numbers.
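
    The SLIP and CUSP constructions themselves are not reproduced here; as a generic illustration of the LED idea that a properly limited scheme creates no new maxima or minima, the sketch below applies a standard minmod-limited (MUSCL-type) update to linear advection. It is a minimal sketch, not the paper's scheme.

    ```python
    import numpy as np

    # Generic illustration of a limited, local-extremum-diminishing update for
    # u_t + u_x = 0 on a periodic grid (wave speed 1); not the SLIP/CUSP scheme.

    def minmod(a, b):
        return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    def step(u, cfl):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
        flux = u + 0.5 * (1.0 - cfl) * slope                    # upwind face values
        return u - cfl * (flux - np.roll(flux, 1))

    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)               # square-wave initial data
    for _ in range(100):
        u = step(u, cfl=0.5)
    print(u.min(), u.max())   # no new extrema beyond the initial [0, 1] range
    ```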

  17. Exploring the limit of accuracy for density functionals based on the generalized gradient approximation: local, global hybrid, and range-separated hybrid functionals with and without dispersion corrections.

    PubMed

    Mardirossian, Narbe; Head-Gordon, Martin

    2014-05-14

    The limit of accuracy for semi-empirical generalized gradient approximation (GGA) density functionals is explored by parameterizing a variety of local, global hybrid, and range-separated hybrid functionals. The training methodology employed differs from conventional approaches in 2 main ways: (1) Instead of uniformly truncating the exchange, same-spin correlation, and opposite-spin correlation functional inhomogeneity correction factors, all possible fits up to fourth order are considered, and (2) Instead of selecting the optimal functionals based solely on their training set performance, the fits are validated on an independent test set and ranked based on their overall performance on the training and test sets. The 3 different methods of accounting for exchange are trained both with and without dispersion corrections (DFT-D2 and VV10), resulting in a total of 491 508 candidate functionals. For each of the 9 functional classes considered, the results illustrate the trade-off between improved training set performance and diminished transferability. Since all 491 508 functionals are uniformly trained and tested, this methodology allows the relative strengths of each type of functional to be consistently compared and contrasted. The range-separated hybrid GGA functional paired with the VV10 nonlocal correlation functional emerges as the most accurate form for the present training and test sets, which span thermochemical energy differences, reaction barriers, and intermolecular interactions involving lighter main group elements.

  18. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
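
    As a sketch of the subsampling design described above: for each censoring level, a set of 30 random subsets of the 75 gauges is built with roughly the indicated fraction removed. The gauge identifiers are plain indices, not real station numbers, and the AFINCH estimation step itself is not represented.

    ```python
    import numpy as np

    # Illustrative random-subsampling (censoring) scheme: 30 subsets per level.
    rng = np.random.default_rng(42)
    gauges = np.arange(75)

    subsets = {}
    for frac in (0.10, 0.20, 0.30, 0.40, 0.50, 0.75):
        keep_n = int(round(len(gauges) * (1.0 - frac)))
        subsets[frac] = [rng.choice(gauges, size=keep_n, replace=False) for _ in range(30)]

    for frac, sets in subsets.items():
        print(f"censor {int(frac * 100):>2d}%: 30 subsets of {len(sets[0])} gauges each")
    ```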

  19. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the "foot" and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 μm rms. We have also added a "fail-safe chirp" system to avoid Stimulated Brillouin Scattering (SBS) in optical components during high energy shots.

  20. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  1. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives, so ALOS carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  2. A Day in the Life of Millisecond Pulsar J1713+0747: Limits on Timing Precision Over 24 Hours and Implications for Gravitational Wave Detection

    NASA Astrophysics Data System (ADS)

    Dolch, Timothy; Bailes, M.; Bassa, C.; Bhat, R.; Bhattacharyya, B.; Champion, D.; Chatterjee, S.; Cognard, I.; Cordes, J. M.; Crowter, K.; Demorest, P.; Finn, L. S.; Fonseca, E.; Hessels, J.; Hobbs, G.; Janssen, G.; Jones, G.; Jordan, C.; Karuppusamy, R.; Keith, M.; Kramer, M.; Kraus, A.; Lam, M. T.; Lazarus, P.; Lazio, J.; Lee, K.; Levin, L.; Liu, K.; Lorimer, D.; Manchester, R. N.; McLaughlin, M.; Palliyaguru, N.; Perrodin, D.; Petroff, E.; Rajwade, K.; Rankin, J. M.; Ransom, S. M.; Rosenblum, J.; Roy, J.; Shannon, R.; Stappers, B.; Stinebring, D.; Stovall, K.; Teixeira, M.; van Leeuwen, J.; van Straten, W.; Verbiest, J.; Zhu, W.

    2014-01-01

    A 24-hour global observation of millisecond radio pulsar J1713+0747 was undertaken by the International Pulsar Timing Array (IPTA) collaboration as an effort to better quantify sources of noise in this object, which is regularly timed for the purpose of detecting gravitational waves (GWs). Given an 8-year timing RMS of 30ns, it is regarded as one of the best precision clocks in the PTA. However, sources of timing noise visible on timescales longer than the usual 20-30min biweekly observation may nonetheless be present. Data from the campaign were taken contiguously with the Parkes, Arecibo, Green Bank, GMRT, LOFAR, Effelsberg, WSRT, Lovell, and Nancay radio telescopes. The combined pulse times-of-arrival provide an estimate of the absolute noise floor, in other words, what unaccounted sources of timing noise impede an otherwise simple sqrt(N) improvement in timing precision, where N is the number of pulses in a single observing session. We present first results of specific phenomena probed on the unusual timescale of tens of hours, in particular interstellar scattering (ISS), and discuss the degree to which ISS affects precision timing. Finally, we examine single pulse information during selected portions of the observation and determine the degree to which the pulse jitter of J1713+0747 varies throughout the course of the day-long dataset.

  3. New multi-station and multi-decadal trend data on precipitable water. Recipe to match FTIR retrievals from NDACC long-time records to radio sondes within 1 mm accuracy/precision

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Borsdorff, T.; Rettinger, M.; Camy-Peyret, C.; Demoulin, P.; Duchatelet, P.; Mahieu, E.

    2009-04-01

    We present an original optimum strategy for retrieval of precipitable water from routine ground-based mid-infrared FTS measurements performed at a number of globally distributed stations within the NDACC network. The strategy utilizes FTIR retrievals which are set up in a way that matches standard radio sonde operations. Thereby, an unprecedented accuracy and precision for measurements of precipitable water can be demonstrated: the correlation between Zugspitze FTIR water vapor columns from a 3-month measurement campaign and total columns derived from coincident radio sondes shows a regression coefficient of R = 0.988, a bias of 0.05 mm, a standard deviation of 0.28 mm, an intercept of 0.01 mm, and a slope of 1.01. This appears to be even better than what can be achieved with state-of-the-art microwave techniques, see e.g., Morland et al. (2006, Fig. 9 therein). Our approach is based upon a careful selection of spectral micro windows, comprising a set of both weak and strong water vapor absorption lines between 839.4 - 840.6 cm-1, 849.0 - 850.2 cm-1, and 852.0 - 853.1 cm-1, which is not contaminated by interfering absorptions of any other trace gases. From existing spectroscopic line lists, a careful selection of the best available parameter set was performed, leading to nearly perfect spectral fits without significant forward model parameter errors. To set up the FTIR water vapor profile inversion, a set of FTIR measurements and coincident radio sondes has been utilized. To eliminate or minimize mismatch in time and space, the Tobin "best estimate of the state of the atmosphere" principle has been applied to the radio sondes. This concept uses pairs of radio sondes launched with a 1-hour separation, and derives the gradient from the two radio sonde measurements, in order to construct a virtual PTU profile for a certain time and location. Coincident FTIR measurements of water vapor columns (two hour mean values) have then been matched to the water columns obtained by
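
    The comparison statistics quoted above (regression coefficient, bias, scatter, slope, and intercept) can be computed for any set of paired total columns as sketched below; the two arrays are placeholders, not the Zugspitze campaign data.

    ```python
    import numpy as np

    # Placeholder paired precipitable-water columns in mm (not campaign data).
    ftir  = np.array([3.1, 4.8, 2.2, 6.5, 5.0, 3.9])
    sonde = np.array([3.0, 4.9, 2.1, 6.3, 5.2, 3.8])

    slope, intercept = np.polyfit(sonde, ftir, 1)   # FTIR regressed on sonde columns
    r = np.corrcoef(sonde, ftir)[0, 1]
    bias = np.mean(ftir - sonde)
    scatter = np.std(ftir - sonde, ddof=1)

    print(f"R={r:.3f}  slope={slope:.2f}  intercept={intercept:.2f} mm  "
          f"bias={bias:.2f} mm  std={scatter:.2f} mm")
    ```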

  4. Precision Electroforming For Optical Disk Manufacturing

    NASA Astrophysics Data System (ADS)

    Rodia, Carl M.

    1985-04-01

    Precision electroforming in replication of optical discs is discussed with overview of electro-forming technology capabilities, limitations, and tolerance criteria. Use of expendable and reusable mandrels is treated along with techniques for resist master preparation and processing. A review of applications and common reasons for success and failure is offered. Problems such as tensile/compressive stress, roughness and flatness are discussed. Advice is given on approaches, classic and novel, for remedying and avoiding specific problems. An abridged process description of optical memory disk mold electroforming is presented from resist master through metallization and electroforming. Emphasis is placed on methods of achieving accuracy and quality assurance.

  5. Exploring the Limit of Accuracy of the Global Hybrid Meta Density Functional for Main-Group Thermochemistry, Kinetics, and Noncovalent Interactions

    SciTech Connect

    Zhao, Yan; Truhlar, Donald G.

    2008-11-11

    The hybrid meta density functionals M05-2X and M06-2X have been shown to provide broad accuracy for main group chemistry. In the present article we make the functional form more flexible and improve the self-interaction term in the correlation functional to improve its self-consistent-field convergence. We also explore the constraint of enforcing the exact forms of the exchange and correlation functionals through second order (SO) in the reduced density gradient. This yields two new functionals called M08-HX and M08-SO, with different exact constraints. The new functionals are optimized against 267 diverse main-group energetic data consisting of atomization energies, ionization potentials, electron affinities, proton affinities, dissociation energies, isomerization energies, barrier heights, noncovalent complexation energies, and atomic energies. Then the M08-HX, M08-SO, M05-2X, and M06-2X functionals and the popular B3LYP functional are tested against 250 data that were not part of the original training data for any of the functionals, in particular 164 main-group energetic data in 7 databases, 39 bond lengths, 38 vibrational frequencies, and 9 multiplicity-changing electronic transition energies. These tests include a variety of new challenges for complex systems, including large-molecule atomization energies, organic isomerization energies, interaction energies in uracil trimers, and bond distances in crowded molecules (in particular, cyclophanes). The M08-HX functional performs slightly better than M08-SO and M06-2X on average, significantly better than M05-2X, and much better than B3LYP for a combination of main-group thermochemistry, kinetics, noncovalent interactions, and electronic spectroscopy. More important than the slight improvement in accuracy afforded by M08-HX is the confirmation that the optimization procedure works well for data outside the training set. Problems for which the accuracy is especially improved by the new M08-HX functional include

  6. Acclimation of Emiliania huxleyi (1516) to nutrient limitation involves precise modification of the proteome to scavenge alternative sources of N and P

    PubMed Central

    Metodieva, Gergana; Raines, Christine A.; Metodiev, Metodi V.; Geider, Richard J.

    2015-01-01

    Summary: Limitation of marine primary production by the availability of nitrogen or phosphorus is common. Emiliania huxleyi, a ubiquitous phytoplankter that plays key roles in primary production, calcium carbonate precipitation and production of dimethyl sulfide, often blooms in mid-latitude at the beginning of summer when inorganic nutrient concentrations are low. To understand physiological mechanisms that allow such blooms, we examined how the proteome of E. huxleyi (strain 1516) responds to N and P limitation. We observed modest changes in much of the proteome despite large physiological changes (e.g. cellular biomass, C, N and P) associated with nutrient limitation of growth rate. Acclimation to nutrient limitation did however involve significant increases in the abundance of transporters for ammonium and nitrate under N limitation and for phosphate under P limitation. More notable were large increases in proteins involved in the acquisition of organic forms of N and P, including urea and amino acid/polyamine transporters and numerous C-N hydrolases under N limitation and a large upregulation of alkaline phosphatase under P limitation. This highly targeted reorganization of the proteome towards scavenging organic forms of macronutrients gives unique insight into the molecular mechanisms that underpin how E. huxleyi has found its niche to bloom in surface waters depleted of inorganic nutrients. PMID:26119724

  7. Acclimation of Emiliania huxleyi (1516) to nutrient limitation involves precise modification of the proteome to scavenge alternative sources of N and P.

    PubMed

    McKew, Boyd A; Metodieva, Gergana; Raines, Christine A; Metodiev, Metodi V; Geider, Richard J

    2015-10-01

    Limitation of marine primary production by the availability of nitrogen or phosphorus is common. Emiliania huxleyi, a ubiquitous phytoplankter that plays key roles in primary production, calcium carbonate precipitation and production of dimethyl sulfide, often blooms in mid-latitude at the beginning of summer when inorganic nutrient concentrations are low. To understand physiological mechanisms that allow such blooms, we examined how the proteome of E. huxleyi (strain 1516) responds to N and P limitation. We observed modest changes in much of the proteome despite large physiological changes (e.g. cellular biomass, C, N and P) associated with nutrient limitation of growth rate. Acclimation to nutrient limitation did however involve significant increases in the abundance of transporters for ammonium and nitrate under N limitation and for phosphate under P limitation. More notable were large increases in proteins involved in the acquisition of organic forms of N and P, including urea and amino acid/polyamine transporters and numerous C-N hydrolases under N limitation and a large upregulation of alkaline phosphatase under P limitation. This highly targeted reorganization of the proteome towards scavenging organic forms of macronutrients gives unique insight into the molecular mechanisms that underpin how E. huxleyi has found its niche to bloom in surface waters depleted of inorganic nutrients. PMID:26119724

  9. Atmospheric effects and ultimate ranging accuracy for lunar laser ranging

    NASA Astrophysics Data System (ADS)

    Currie, Douglas G.; Prochazka, Ivan

    2014-10-01

    The deployment of next generation lunar laser retroreflectors is planned in the near future. With proper robotic deployment, these will support single-shot, single-photoelectron ranging accuracy at the 100 micron level or better. Technologies are available for advanced ground stations to support this accuracy; however, the major question is the ultimate limit imposed on ranging accuracy by changing timing delays caused by turbulence and horizontal gradients in the Earth's atmosphere. In particular, there are questions of the delay and temporal broadening of a very narrow laser pulse. Theoretical and experimental results will be discussed that address estimates of the magnitudes of these effects and the issue of precision vs. accuracy.

  10. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  11. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. These members have two prongs each, with the separation adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  12. Extraction of CO2 from air samples for isotopic analysis and limits to ultra high precision delta18O determination in CO2 gas.

    PubMed

    Werner, R A; Rothe, M; Brand, W A

    2001-01-01

    The determination of delta18O values in CO2 at a precision level of +/-0.02 per thousand (delta-notation) has always been a challenging, if not impossible, analytical task. Here, we demonstrate that beyond the usually assumed major cause of uncertainty - water contamination - there are other, hitherto underestimated sources of contamination and processes which can alter the oxygen isotope composition of CO2. Active surfaces in the preparation line with which CO2 comes into contact, as well as traces of air in the sample, can alter the apparent delta18O value both temporarily and permanently. We investigated the effects of different surface materials including electropolished stainless steel, Duran glass, gold and quartz, the latter both untreated and silanized. CO2 frozen with liquid nitrogen showed a transient alteration of the 18O/16O ratio on all surfaces tested. The time to recover from the alteration as well as the size of the alteration varied with surface type. Quartz that had been ultrasonically cleaned for several hours with high purity water (0.05 microS) exhibited the smallest effect on the measured oxygen isotopic composition of CO2 before and after freezing. However, quartz proved to be mechanically unstable with time when subjected to repeated large temperature changes during operation. After several days of operation the gas released from the freezing step contained progressively increasing trace amounts of O2 probably originating from inclusions within the quartz, which precludes the use of quartz for cryogenically trapping CO2. Stainless steel or gold proved to be suitable materials after proper pre-treatment. To ensure a high trapping efficiency of CO2 from a flow of gas, a cold trap design was chosen comprising a thin wall 1/4" outer tube and a 1/8" inner tube, made respectively from electropolished stainless steel and gold. Due to a considerable 18O specific isotope effect during the release of CO2 from the cold surface, the thawing time had to

  13. The Effect of Limited Sample Sizes on the Accuracy of the Estimated Scaling Parameter for Power-Law-Distributed Solar Data

    NASA Astrophysics Data System (ADS)

    D'Huys, Elke; Berghmans, David; Seaton, Daniel B.; Poedts, Stefaan

    2016-05-01

    Many natural processes exhibit a power-law behavior. The power-law exponent is linked to the underlying physical process, and therefore its precise value is of interest. With respect to the energy content of nanoflares, for example, a power-law exponent steeper than 2 is believed to be a necessary condition for solving the enigmatic coronal heating problem. Studying power-law distributions over several orders of magnitudes requires sufficient data and appropriate methodology. In this article we demonstrate the shortcomings of some popular methods in solar physics that are applied to data of typical sample sizes. We use synthetic data to study the effect of the sample size on the performance of different estimation methods. We show that vast amounts of data are needed to obtain a reliable result with graphical methods (where the power-law exponent is estimated by a linear fit on a log-transformed histogram of the data). We revisit published results on power laws for the angular width of solar coronal mass ejections and the radiative losses of nanoflares. We demonstrate the benefits of the maximum likelihood estimator and advocate its use.
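
    As a compact illustration of the contrast the authors draw, the sketch below generates synthetic power-law data and compares the continuous maximum-likelihood estimator of the exponent with a straight-line fit to a log-transformed histogram; the sample size and exponent are arbitrary choices, not the CME or nanoflare data.

    ```python
    import numpy as np

    # Synthetic power-law sample (not the solar data sets discussed above).
    rng = np.random.default_rng(1)
    alpha_true, x_min, n = 2.2, 1.0, 500
    x = x_min * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))   # inverse-CDF draws

    # Continuous maximum-likelihood estimator (Clauset, Shalizi & Newman 2009).
    alpha_mle = 1.0 + n / np.sum(np.log(x / x_min))

    # "Graphical" method: linear fit to a log-log histogram (density, log-spaced bins).
    density, edges = np.histogram(x, bins=np.logspace(0.0, np.log10(x.max()), 20), density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])
    keep = density > 0
    slope, _ = np.polyfit(np.log10(centers[keep]), np.log10(density[keep]), 1)

    print(f"true alpha = {alpha_true}, MLE = {alpha_mle:.2f}, histogram fit = {-slope:.2f}")
    ```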

  14. On Issues of Precision for Hardware-based Volume Visualization

    SciTech Connect

    LaMar, E C

    2003-04-11

    This paper discusses issues with the limited precision of hardware-based volume visualization. We will describe the compositing OVER operator and how fixed-point arithmetic affects it. We propose two techniques to improve the precision of fixed-point compositing and the accuracy of hardware-based volume visualization. The first technique is to perform dithering of color and alpha values. The second technique, which we call exponent-factoring, captures significantly more numeric resolution than dithering, but can only produce monochromatic images.
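
    A toy sketch of front-to-back OVER compositing with the accumulators rounded to a fixed number of bits, and of dithering as sub-least-significant-bit noise added before quantization. The opacities, sample count, and bit depths are illustrative; the exponent-factoring technique is not shown.

    ```python
    import numpy as np

    # Front-to-back OVER compositing with quantized accumulators (illustrative only).
    rng = np.random.default_rng(0)

    def composite(colors, alphas, bits=8, dither=False):
        """C += (1 - A) * a_i * c_i ;  A += (1 - A) * a_i, quantized to `bits` bits."""
        scale = (1 << bits) - 1
        C, A = 0.0, 0.0
        for c, a in zip(colors, alphas):
            C += (1.0 - A) * a * c
            A += (1.0 - A) * a
            noise = (rng.random() - 0.5) / scale if dither else 0.0
            C = np.round(C * scale + noise) / scale    # quantize accumulated color
            A = np.round(A * scale + noise) / scale    # quantize accumulated opacity
        return C

    colors = np.full(200, 0.6)                 # 200 low-opacity samples along one ray
    alphas = np.full(200, 0.01)
    exact = composite(colors, alphas, bits=52)          # effectively full precision
    print(abs(composite(colors, alphas, 8) - exact),    # plain fixed-point error
          abs(composite(colors, alphas, 8, dither=True) - exact))
    # Dithering converts systematic quantization bias into zero-mean noise, which
    # averages out over many rays/pixels.
    ```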

  15. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on the research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of, and future work needed on, orbit accuracy. There are several aspects which need to be addressed in discussing determination of precise orbits, such as force models, kinematic models, measurement models, data reduction/estimation methods, etc. Although each one of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed.

  16. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, a SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of over two years (30+ analyses by each technique). We show that both digestion
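
    The two figures of merit discussed above can be computed directly from repeated analyses of an SRM run as an unknown: accuracy as the deviation of the mean from the accepted value, and external reproducibility as the relative standard deviation across sessions. The numbers in this sketch are placeholders, not BHVO-2 results.

    ```python
    import numpy as np

    # Placeholder repeated analyses of one element in an SRM run as an unknown (wt%).
    measured = np.array([7.18, 7.25, 7.21, 7.30, 7.16, 7.24])
    accepted = 7.23                                   # accepted/certified value

    accuracy_pct = 100.0 * (measured.mean() - accepted) / accepted
    rsd_pct = 100.0 * measured.std(ddof=1) / measured.mean()
    print(f"accuracy: {accuracy_pct:+.2f}% of accepted value, "
          f"reproducibility: {rsd_pct:.2f}% RSD")
    ```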

  17. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  18. Ultra precision machining

    NASA Astrophysics Data System (ADS)

    Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas

    1990-05-01

    There are a number of fields that require, or can use to advantage, very high precision in machining. For example, further development of high energy lasers and x-ray astronomy depends critically on the manufacture of lightweight reflecting metal optical components. To fabricate these optical components with machine tools, they will be made of metal with a mirror-quality surface finish, meaning dimensional tolerances on the order of 0.02 microns and surface roughness of 0.07 microns. These accuracy targets fall in the category of ultra precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques. They require single crystal diamond tools, special attention to vibration isolation, special isolation of machine metrology, and on-line correction of imperfections in the motion of the machine carriages.

  19. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  20. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS has been previously used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.

  1. High-precision Radio and Infrared Astrometry of LSPM J1314+1320AB. I. Parallax, Proper Motions, and Limits on Planets

    NASA Astrophysics Data System (ADS)

    Forbrich, Jan; Dupuy, Trent J.; Reid, Mark J.; Berger, Edo; Rizzuto, Aaron; Mann, Andrew W.; Liu, Michael C.; Aller, Kimberly; Kraus, Adam L.

    2016-08-01

    We present multi-epoch astrometric radio observations with the Very Long Baseline Array (VLBA) of the young ultracool-dwarf binary LSPM J1314+1320AB. The radio emission comes from the secondary star. Combining the VLBA data with Keck near-infrared adaptive-optics observations of both components, a full astrometric fit of parallax (π abs = 57.975 ± 0.045 mas, corresponding to a distance of d = 17.249 ± 0.013 pc), proper motion (μ αcos δ = ‑247.99 ± 0.10 mas yr‑1, μ δ = ‑183.58 ± 0.22 mas yr‑1), and orbital motion is obtained. Despite the fact that the two components have nearly identical masses to within ±2%, the secondary’s radio emission exceeds that of the primary by a factor of ≳30, suggesting a difference in stellar rotation history, which could result in different magnetic field configurations. Alternatively, the emission could be anisotropic and beamed toward us for the secondary but not for the primary. Using only reflex motion, we exclude planets of mass 0.7–10 M jup with orbital periods of 600–10 days, respectively. Additionally, we use the full orbital solution of the binary to derive an upper limit for the semimajor axis of 0.23 au for stable planetary orbits within this system. These limits cover a parameter space that is inaccessible with, and complementary to, near-infrared radial velocity surveys of ultracool dwarfs. Our absolute astrometry will constitute an important test for the astrometric calibration of Gaia.

  3. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100 m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface and real-time corrections to pointing and focus on the basis of structural temperatures already allow observations at frequencies up to 50 GHz. Our ultimate goal is to extend the observing frequency limit up to 115 GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210 μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are currently targeting the effects of wind as the next, currently uncompensated, source of error.

  4. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
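
    As an illustration of why sparsity helps, the sketch below draws mock Gaussian samples from a known sparse (tridiagonal) precision matrix and compares the inverse sample covariance with a generic sparsity-exploiting estimator; scikit-learn's GraphicalLasso is used here only as a stand-in, not the estimator adopted in the paper.

    ```python
    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(3)
    p, n_sim = 30, 200

    # Sparse (tridiagonal) "true" precision matrix and mock Gaussian samples.
    prec_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
    cov_true = np.linalg.inv(prec_true)
    samples = rng.multivariate_normal(np.zeros(p), cov_true, size=n_sim)

    prec_sample = np.linalg.inv(np.cov(samples, rowvar=False))        # naive estimate
    prec_sparse = GraphicalLasso(alpha=0.05).fit(samples).precision_  # sparsity-exploiting

    err = lambda est: np.linalg.norm(est - prec_true) / np.linalg.norm(prec_true)
    print(f"inverse sample covariance error: {err(prec_sample):.3f}")
    print(f"sparsity-exploiting estimate error: {err(prec_sparse):.3f}")
    ```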

  5. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-01-01

    The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes. PMID:27173351

  6. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-05-09

    The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
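
    Assuming the commonly used relationships between the genotype F-value, heritability on a genotype-mean basis (h² = 1 − 1/F), and selective accuracy (SA = √(1 − 1/F)), the sketch below computes the precision statistics named in the abstract from hypothetical ANOVA mean squares.

    ```python
    import math

    # Illustrative only: hypothetical mean squares, not values from the cowpea trials.
    def precision_stats(ms_genotype, ms_error, grand_mean):
        F = ms_genotype / ms_error                      # F-test value for genotype
        h2 = max(0.0, 1.0 - 1.0 / F)                    # heritability, genotype-mean basis
        sa = math.sqrt(h2)                              # selective accuracy
        cv = 100.0 * math.sqrt(ms_error) / grand_mean   # coefficient of variation, %
        return F, h2, sa, cv

    F, h2, sa, cv = precision_stats(ms_genotype=0.90, ms_error=0.15, grand_mean=1.2)
    print(f"F={F:.2f}  h2={h2:.2f}  selective accuracy={sa:.2f}  CV={cv:.1f}%")
    ```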

  7. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations that typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, −0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ −0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
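
    As a toy version of the mismatch statistic referred to above: one minus the normalized overlap, here maximized only over time shift and sign and computed with a white-noise weighting. The damped sinusoids stand in for model waveforms; none of this reproduces the SEOBNR or NR comparisons.

    ```python
    import numpy as np

    def mismatch(h1, h2):
        """1 - (overlap maximized over circular time shift and sign), white-noise PSD."""
        H1, H2 = np.fft.rfft(h1), np.fft.rfft(h2)
        corr = np.fft.irfft(H1 * np.conj(H2), n=len(h1))    # cross-correlation vs. lag
        match = np.abs(corr).max() / np.sqrt(np.sum(h1**2) * np.sum(h2**2))
        return 1.0 - match

    t = np.linspace(0.0, 1.0, 4096)
    h1 = np.exp(-2.0 * t) * np.sin(2.0 * np.pi * 60.0 * t)              # "reference" waveform
    h2 = np.exp(-2.0 * t) * np.sin(2.0 * np.pi * 60.5 * t + 0.3)        # slightly dephased model
    print(f"mismatch = {mismatch(h1, h2):.4f}")
    ```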

  8. Nickel solution prepared for precision electroforming

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Lightweight, precision optical reflectors are made by electroforming nickel onto masters. Steps for the plating bath preparation, process control testing, and bath composition adjustments are prescribed to avoid internal stresses and maintain dimensional accuracy of the electrodeposited metal.

  9. A novel two-step laser ranging technique for a precision test of the theory of gravity

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin; Chui, Talso

    2003-01-01

    All powered spacecraft experience residual systematic acceleration due to anisotropy of the thermal radiation pressure and fuel leakage. The residual acceleration limits the accuracy of any test of gravity that relies on the precise determination of the spacecraft trajectory. We describe a novel two-step laser ranging technique, which largely eliminates the effects of non-gravity acceleration sources and enables celestial mechanics checks with unprecedented precision.

  10. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate Quantum-Electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser spectroscopic measurements in atomic Helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and, then, to determine the fine structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, determination of the He isotopes' nuclear structure and, in particular, a measurement of the nuclear charge radius, are performed by using hyperfine structure and isotope-shift measurements.

  11. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K) an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies such as measurements of cross sections and reaction rates.
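
    As an illustration of how a latent heat follows from such a fit, the sketch below generates synthetic points from the integrated Clausius-Clapeyron form, anchored at the quoted 0.0403 Torr at 87.3 K and the quoted 82.7 cal/g, and then recovers the latent heat from the slope of ln p versus 1/T. The synthetic points stand in for the actual measurements, which are not reproduced here.

    ```python
    import numpy as np

    R = 1.987          # gas constant, cal mol^-1 K^-1
    M_O3 = 48.0        # molar mass of ozone, g mol^-1
    L = 82.7           # latent heat from the abstract, cal g^-1

    # synthetic vapor pressures over 85-95 K, anchored at 0.0403 Torr at 87.3 K
    T = np.linspace(85.0, 95.0, 11)
    p = 0.0403 * np.exp(-(L * M_O3 / R) * (1.0 / T - 1.0 / 87.3))

    # Clausius-Clapeyron: ln p = const - (dH/R)*(1/T), so the fitted slope gives dH
    slope, _ = np.polyfit(1.0 / T, np.log(p), 1)
    print(f"recovered latent heat: {-slope * R / M_O3:.1f} cal/g")   # ~82.7
    ```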

  12. Frequency combs and precision spectroscopy in the extreme ultraviolet

    NASA Astrophysics Data System (ADS)

    Cingöz, Arman

    2012-06-01

    Development of the optical frequency comb has revolutionized optical metrology and precision spectroscopy due to its ability to provide a precise link between microwave and optical frequencies. A novel application that aims to extend the precision and accuracy obtained to the extreme ultraviolet (XUV) is the generation of XUV frequency combs via intracavity high harmonic generation (HHG). Recently, we have been able to generate > 200 μW average power per harmonic and demonstrate the comb structure of the high harmonics by resolving atomic argon and neon lines at 82 and 63 nm, respectively [1]. The argon transition linewidth of 10 MHz, limited by residual Doppler broadening, is unprecedented in this spectral region and places a stringent upper limit on the linewidth of individual comb teeth. To overcome this limitation, we have constructed two independent intracavity HHG sources to study the phase coherence directly via the heterodyne beats between them. With these developments, ultrahigh precision spectroscopy in the XUV is within grasp and has a wide range of applications that include tests of bound state quantum electrodynamics, development of nuclear clocks, and searches for variation of fundamental constants using the enhanced sensitivity of highly charged ions. [1] Arman Cingöz et al., Nature 482, 68 (2012).

  13. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.

  14. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.

  15. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    Forty years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link petrology-microtextures-microchemistry-isotope record but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times self-assessed uncertainty. Systematic errors may have analytical and petrological causes. Metrological traceability is not yet implemented (substantial advance may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. Finally, sample size reduction is often associated with reducing clean-up time to increase sample/blank ratio; this may be self-defeating, as "dry blanks" [4] do not represent either the isotopic composition or the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. Petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed, 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719

  16. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  17. Global positioning system measurements for crustal deformation: precision and accuracy.

    PubMed

    Prescott, W H; Davis, J L; Svarc, J L

    1989-06-16

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million. PMID:17820661

  18. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

    This paper describes the background, method, and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on the delivery accuracy at the meter fix. The 10-second rounded and 1-minute rounded DCT resolutions resulted in more accurate delivery than the 1-minute truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.

  19. The reliability of single precision computations in the simulation of deep soil heat diffusion in a land surface model

    NASA Astrophysics Data System (ADS)

    Harvey, Richard; Verseghy, Diana L.

    2016-06-01

    Climate models need discretized numerical algorithms and finite precision arithmetic to solve their differential equations. Most efforts to date have focused on reducing truncation errors due to discretization effects, whereas rounding errors due to the use of floating-point arithmetic have received little attention. However, there are increasing concerns about more frequent occurrences of rounding errors in larger parallel computing platforms (due to the conflicting needs of stability and accuracy vs. performance), and while this has not been the norm in climate and forecast models using double precision, this could change with some models that are now compiled with single precision, which raises questions about the validity of using such low precision in climate applications. For example, processes occurring over large time scales such as permafrost thawing are potentially more vulnerable to this issue. In this study we analyze the theoretical and experimental effects of using single and double precision on simulated deep soil temperature from the Canadian LAnd Surface Scheme (CLASS), a state-of-the-art land surface model. We found that reliable single precision temperatures are limited to depths of less than about 20-25 m while double precision shows no loss of accuracy to depths of at least several hundred meters. We also found that, for a given precision level, model accuracy deteriorates when using smaller time steps, further reducing the usefulness of single precision. There is thus a clear danger of using single precision in some climate model applications, in particular any scientifically meaningful study of deep soil permafrost must at least use double precision. In addition, climate modelling teams might well benefit from paying more attention to numerical precision and roundoff issues to offset the potentially more frequent numerical anomalies in future large-scale parallel climate applications.
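
    The mechanism is easy to reproduce outside the model: in float32, increments that are small relative to the stored temperature are rounded away, so a long integration drifts from the float64 result. The sketch below is a generic explicit 1-D diffusion loop with arbitrary grid, time step, and forcing choices; it is not the CLASS soil scheme.

    ```python
    import numpy as np

    def diffuse(dtype, nz=200, dz=0.5, dt=600.0, kappa=1e-6, nsteps=100000):
        """Explicit 1-D heat diffusion with a sinusoidal surface forcing."""
        T = np.full(nz, 275.0, dtype=dtype)
        r = dtype(kappa * dt / dz**2)                 # stability requires r <= 0.5
        for n in range(nsteps):
            T[0] = 275.0 + 10.0 * np.sin(2.0 * np.pi * n * dt / 3.15e7)
            T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])
        return T

    drift = np.abs(diffuse(np.float32).astype(np.float64) - diffuse(np.float64))
    print(f"max single/double discrepancy after ~2 model years: {drift.max():.2e} K")
    ```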

  20. Dynamics of statistical distance: Quantum limits for two-level clocks

    SciTech Connect

    Braunstein, S.L.; Milburn, G.J.

    1995-03-01

    We study the evolution of statistical distance on the Bloch sphere under unitary and nonunitary dynamics. This corresponds to studying the limits to clock precision for a clock constructed from a two-state system. We find that the initial motion away from pure states under nonunitary dynamics yields the greatest accuracy for a "one-tick" clock; in this case the clock's precision is not limited by the largest frequency of the system.

  1. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of FDF ephemeris files to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  2. Comparative study of application accuracy of two frameless neuronavigation systems: experimental error assessment quantifying registration methods and clinically influencing factors.

    PubMed

    Paraskevopoulos, Dimitrios; Unterberg, Andreas; Metzner, Roland; Dreyhaupt, Jens; Eggers, Georg; Wirtz, Christian Rainer

    2010-04-01

    This study aimed at comparing the accuracy of two commercial neuronavigation systems. Error assessment and quantification of clinical factors and surface registration, often resulting in decreased accuracy, were intended. Active (Stryker Navigation) and passive (VectorVision Sky, BrainLAB) neuronavigation systems were tested with an anthropomorphic phantom with a deformable layer, simulating skin and soft tissue. True coordinates measured by computer numerical control were compared with coordinates on image data and during navigation, to calculate software and system accuracy respectively. Comparison of image and navigation coordinates was used to evaluate navigation accuracy. Both systems achieved an overall accuracy of <1.5 mm. Stryker achieved better software accuracy, whereas BrainLAB better system and navigation accuracy. Factors with conspicuous influence (P<0.01) were imaging, instrument replacement, sterile cover drape and geometry of instruments. Precision data indicated by the systems did not reflect measured accuracy in general. Surface matching resulted in no improvement of accuracy, confirming former studies. Laser registration showed no differences compared to conventional pointers. Differences between the two systems were limited. Surface registration may improve inaccurate point-based registrations but does not in general affect overall accuracy. Accuracy feedback by the systems does not always match with true target accuracy and requires critical evaluation from the surgeon.

  3. Presentation accuracy of the web revisited: animation methods in the HTML5 era.

    PubMed

    Garaizar, Pablo; Vadillo, Miguel A; López-de-Ipiña, Diego

    2014-01-01

    Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies are acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies.

  4. Presentation Accuracy of the Web Revisited: Animation Methods in the HTML5 Era

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.; López-de-Ipiña, Diego

    2014-01-01

    Using the Web to run behavioural and social experiments quickly and efficiently has become increasingly popular in recent years, but there is some controversy about the suitability of using the Web for these objectives. Several studies have analysed the accuracy and precision of different web technologies in order to determine their limitations. This paper updates the extant evidence about presentation accuracy and precision of the Web and extends the study of the accuracy and precision in the presentation of multimedia stimuli to HTML5-based solutions, which were previously untested. The accuracy and precision in the presentation of visual content in classic web technologies are acceptable for use in online experiments, although some results suggest that these technologies should be used with caution in certain circumstances. Declarative animations based on CSS are the best alternative when animation intervals are above 50 milliseconds. The performance of procedural web technologies based on the HTML5 standard is similar to that of previous web technologies. These technologies are being progressively adopted by the scientific community and have promising futures, which makes their use advisable over more obsolete technologies. PMID:25302791

  6. Ionospheric limitations to time transfer by satellite

    NASA Technical Reports Server (NTRS)

    Knowles, S. H.

    1983-01-01

    The ionosphere can contribute appreciable group delay and phase change to radio signals traversing it; this can constitute a fundamental limitation to the accuracy of time and frequency measurements using satellites. Because of the dispersive nature of the ionosphere, the amount of delay is strongly frequency-dependent. Ionospheric compensation is necessary for the most precise time transfer and frequency measurements, with a group delay accuracy better than 10 nanoseconds. A priori modeling is not accurate to better than 25%. The dual-frequency compensation method holds promise, but has not been rigorously experimentally tested. Irregularities in the ionosphere must be included in the compensation process.
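
    The dual-frequency method works because the first-order ionospheric group delay scales as 1/f², so delays observed at two frequencies can be combined to cancel it. A minimal numerical sketch follows; the GPS L1/L2 frequencies and the TEC value are illustrative choices, not figures from this report.

    ```python
    C = 299_792_458.0        # speed of light, m/s
    K = 40.3                 # first-order ionospheric coefficient, m^3 s^-2

    def iono_delay(tec, f):
        """First-order group delay (s) for slant TEC (electrons/m^2) at frequency f (Hz)."""
        return K * tec / (C * f**2)

    f1, f2 = 1.57542e9, 1.22760e9          # GPS L1/L2, used only as an example pair
    tec = 5e17                             # ~50 TECU, a plausible daytime slant value
    t_geo = 0.067                          # pretend ionosphere-free signal delay, s

    t_obs1 = t_geo + iono_delay(tec, f1)   # delays a receiver would actually measure
    t_obs2 = t_geo + iono_delay(tec, f2)

    # Ionosphere-free combination removes the 1/f^2 term (to first order):
    gamma = (f1 / f2) ** 2
    t_if = (gamma * t_obs1 - t_obs2) / (gamma - 1.0)
    print(f"L1 iono delay: {iono_delay(tec, f1) * 1e9:.1f} ns, "
          f"residual after combination: {abs(t_if - t_geo) * 1e9:.3f} ns")
    ```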

  7. Precision injection molding of freeform optics

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong

    2016-08-01

    Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in the fields of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. These applications require high optical quality with high form accuracy and low residual stresses, which challenges both the machining of optical tool inserts and the precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on precision injection molding. The challenges and future development trends are also discussed.

  8. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  9. Factors affecting the accuracy of airborne quartz determination.

    PubMed

    Reut, Stepan; Stadnichenko, Raisa; Hillis, Derek; Pityn, Peter

    2007-02-01

    Samples collected in a foundry were used to analyze sources of variation and factors influencing the overall accuracy of sampling results. Air samples were analyzed by Fourier Transform Infrared Spectroscopy (FTIR) using NIOSH Method 7602 to study particle size effects, analytical precision, sampling equipment performance, and production factors. The FTIR technique provides accuracy when silica particle size is taken into consideration. In this case, the variability due to analytical factors is small compared with other sources of error. The typical coefficient of variation of the analytical procedure is 0.08; variation associated with sampling reaches 0.21; and interday coefficient of variation can be as high as 0.48. The IR method has advantages over XRD analysis, including cost effectiveness, sensitivity, and a lower detection limit. PMID:17249146
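
    To see why the analytical contribution is minor, note that independent relative errors combine roughly in quadrature; a quick check under that (simplifying) independence assumption:

    ```python
    import math

    cv_analytical = 0.08   # coefficient of variation of the FTIR procedure
    cv_sampling = 0.21     # variation associated with sampling

    combined = math.sqrt(cv_analytical**2 + cv_sampling**2)
    print(f"combined CV ~ {combined:.3f}")   # ~0.225, barely above the sampling CV alone
    ```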

  10. High-accuracy EUV reflectometer

    NASA Astrophysics Data System (ADS)

    Hinze, U.; Fokoua, M.; Chichkov, B.

    2007-03-01

    Developers and users of EUV-optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1% an EUV-source is required which provides excellent long-time stability, namely power stability, spatial stability and spectral stability. Naturally, it should be free of debris. An EUV-source particularly suitable for this task is an advanced electron-based EUV-tube. This EUV source provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-time stability of this tool. We design and set up different reflectometers using EUV-tubes for the precise characterisation of EUV-optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors and masks. Reflectivity measurements from grazing incidence to near normal incidence as well as transmission studies were realised at a precision of down to 0.1%. The reflectometers are computer-controlled and allow varying and scanning all important parameters online. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.

  11. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The accuracy and precision of ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, and vertical precision 3 cm.

  12. Accelerometers for Precise GNSS Orbit Determination

    NASA Astrophysics Data System (ADS)

    Hugentobler, Urs; Schlicht, Anja

    2016-07-01

    The solar radiation pressure is the largest non-gravitational acceleration on GNSS satellites, limiting the accuracy of precise orbit models. Other non-gravitational accelerations may be thrusts for station-keeping maneuvers. Accelerometers measure the motion of a test mass that is shielded against satellite surface forces with respect to a cage that is rigidly connected to the satellite. They can thus be used to measure these difficult-to-model non-gravitational accelerations. Accelerometers, however, typically show correlated noise as well as a drift of the scaling factors converting measured voltages to accelerations. The scaling thus needs to be regularly calibrated. The presented study is based on several simulated scenarios including orbit determination of accelerometer-equipped Galileo satellites. It evaluates different options for accommodating accelerometer measurements in the orbit integrator, indicates to what extent currently available accelerometers can be used to improve the modeling of non-gravitational accelerations on GNSS satellites for precise orbit determination, and assesses the necessary requirements for an accelerometer that can serve this purpose.

  13. High precision innovative micropump for artificial pancreas

    NASA Astrophysics Data System (ADS)

    Chappel, E.; Mefti, S.; Lettieri, G.-L.; Proennecke, S.; Conan, C.

    2014-03-01

    The concept of the artificial pancreas, which comprises an insulin pump, a continuous glucose meter and a control algorithm, is a major step forward in managing patients with type 1 diabetes mellitus. The stability of the control algorithm relies on a micropump with good short-term precision to deliver rapid-acting insulin and on specific integrated sensors able to monitor any failure leading to a loss of accuracy. Debiotech's MEMS micropump, based on the membrane pump principle, is made of a stack of 3 silicon wafers. The pumping chamber comprises a pillar check-valve at the inlet, a pumping membrane which is actuated against stop limiters by a piezo cantilever, an anti-free-flow outlet valve and a pressure sensor. The micropump inlet is tightly connected to the insulin reservoir while the outlet is in direct communication with the patient's skin via a cannula. To meet the requirements of a pump dedicated to closed-loop applications for diabetes care, in addition to the well-controlled displacement of the pumping membrane, the high precision of the micropump is based on specific actuation profiles that balance the effect of pump elasticity in a low-consumption push-pull mode.

  14. Precision enhancement of pavement roughness localization with connected vehicles

    NASA Astrophysics Data System (ADS)

    Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.

    2016-02-01

    Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
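
    The re-tagging step described above amounts to detecting the known reference feature in the accelerometer trace and then assigning every later sample a path distance obtained by integrating vehicle speed from that instant. A minimal sketch of the idea follows; the threshold detector and the synthetic signal are stand-ins, not the study's algorithm or data.

    ```python
    import numpy as np

    def retag_positions(t, accel_z, speed, ref_threshold=4.0):
        """Path distance of each sample relative to the first vertical-acceleration
        spike, taken to be the known reference marker (e.g. a speed bump)."""
        ref_idx = int(np.argmax(np.abs(accel_z) > ref_threshold))   # first crossing
        path = np.cumsum(speed * np.diff(t, prepend=t[0]))          # integrate speed
        return path - path[ref_idx]                                 # zero at the marker

    # synthetic 100 Hz drive-by at 15 m/s with a bump signature at t = 3 s
    t = np.arange(0.0, 10.0, 0.01)
    speed = np.full_like(t, 15.0)
    accel_z = np.random.normal(0.0, 0.3, t.size)
    accel_z[300] = 6.0
    print(retag_positions(t, accel_z, speed)[300])   # ~0 m at the reference marker
    ```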

  15. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without the reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  16. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    The generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. It is important to understand road environments and make decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm in machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the classified data will be utilized to generate a highly precise road map.
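
    A minimal sketch of the pipeline described above — per-patch geometric features derived from local surface normals, fed to a support vector machine — using scikit-learn. The two features, the synthetic patches, and the two classes are placeholders for illustration, not the paper's actual descriptor or data.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    def normal_features(points):
        """Verticality and planarity of a small point-cloud patch from the
        eigen-decomposition of its covariance (normal = least-variance axis)."""
        c = points - points.mean(axis=0)
        w, v = np.linalg.eigh(np.cov(c.T))           # eigenvalues in ascending order
        verticality = abs(v[2, 0])                   # |z component of the normal|
        planarity = (w[1] - w[0]) / max(w[2], 1e-9)
        return [verticality, planarity]

    # hypothetical labelled patches: 0 = road surface, 1 = pole/facade
    rng = np.random.default_rng(0)
    ground = [normal_features(rng.normal(size=(50, 3)) * [5.0, 5.0, 0.05]) for _ in range(40)]
    poles = [normal_features(rng.normal(size=(50, 3)) * [0.05, 0.05, 3.0]) for _ in range(40)]
    X = np.array(ground + poles)
    y = np.array([0] * 40 + [1] * 40)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```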

  17. Precision laser cutting

    SciTech Connect

    Kautz, D.D.; Anglin, C.D.; Ramos, T.J.

    1990-01-19

    Many materials that are otherwise difficult to fabricate can be cut precisely with lasers. This presentation discusses the advantages and limitations of laser cutting for refractory metals, ceramics, and composites. Cutting in these materials was performed with a 400-W, pulsed Nd:YAG laser. Important cutting parameters such as beam power, pulse waveforms, cutting gases, travel speed, and laser coupling are outlined. The effects of process parameters on cut quality are evaluated. Three variables are used to determine the cut quality: kerf width, slag adherence, and metallurgical characteristics of recast layers and heat-affected zones around the cuts. Results indicate that ductile materials with good coupling characteristics (such as stainless steel alloys and tantalum) cut well. Materials lacking one or both of these properties (such as tungsten and ceramics) are difficult to cut without proper part design, stress relief, or coupling aids. 3 refs., 2 figs., 1 tab.

  18. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
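
    A small illustration of the kind of failure that motivates going beyond IEEE 64-bit arithmetic: a sum whose low-order term is absorbed in double precision but preserved once the working precision is raised with Python's decimal module. This is a generic example, not one drawn from the article.

    ```python
    from decimal import Decimal, getcontext

    # In IEEE 64-bit arithmetic the spacing between doubles near 1e16 is 2,
    # so adding 1 is rounded away and the difference cancels to zero.
    print((1e16 + 1.0) - 1e16)            # 0.0

    # With 30 significant digits the same expression keeps the 1.
    getcontext().prec = 30
    print((Decimal(10) ** 16 + 1) - Decimal(10) ** 16)   # 1
    ```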

  19. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  20. Precision measurement of muonium hyperfine splitting at J-PARC

    NASA Astrophysics Data System (ADS)

    Kanda, Sohtaro; J-PARC MuHFS Collaboration

    2014-09-01

    Muonium is the bound state of a positive muon and an electron. Because neither muon nor electron has internal structure, muonium's ground state hyperfine splitting (MuHFS) can be the most precise probe for testing bound-state QED and for determining the ratio of the magnetic moments of the muon and the proton. At J-PARC, we plan to perform a precision measurement of the MuHFS via microwave spectroscopy of muonium. Muonium is formed in a Kr gas target and state transitions between energy levels are induced by microwave resonance. Spectroscopy of the muonium states can be performed by measurement of the positron asymmetry from muonium decay. The precision of the most recent experimental result (LAMPF1999) was mostly statistically limited. Hence, improved statistics is essential for higher precision of the measurement. Our goal is to improve accuracy by an order of magnitude compared to the most recent experiment. In order to achieve the goal, we utilize J-PARC's highest-intensity pulsed muon beam (expected intensity is 1 × 10⁸ μ⁺/s), a highly segmented positron detector with SiPM (Silicon PhotoMultiplier), and an online/offline muon beam profile monitor. In this presentation, we discuss the experimental overview and development status of each component.

  1. GPS and Glonass Combined Static Precise Point Positioning (ppp)

    NASA Astrophysics Data System (ADS)

    Pandey, D.; Dwivedi, R.; Dikshit, O.; Singh, A. K.

    2016-06-01

    With the rapid development of multi-constellation Global Navigation Satellite Systems (GNSSs), satellite navigation is undergoing drastic changes. Presently, more than 70 satellites are already available, and nearly 120 more satellites will be available in the coming years once the complete constellations of all four systems (GPS, GLONASS, Galileo, and BeiDou) have been achieved. The significant improvement in satellite visibility, spatial geometry, dilution of precision, and accuracy calls for combining multi-GNSS observations for Precise Point Positioning (PPP), especially in constrained environments. Currently, PPP is commonly performed by processing only GPS observations. Static and kinematic PPP solutions based on the processing of only GPS observations are limited by satellite visibility, which is often insufficient in mountainous areas and open-pit mines. One of the easiest options available to enhance positioning reliability is to integrate GPS and GLONASS observations. This research investigates the efficacy of combining GPS and GLONASS observations for achieving a static PPP solution and its sensitivity to different processing methodologies. Two static PPP solutions, namely standalone GPS and combined GPS-GLONASS solutions, are compared. The datasets are processed using the open-source GNSS processing environment gLAB 2.2.7 as well as the magicGNSS software package. The results reveal that the addition of GLONASS observations improves the static positioning accuracy in comparison with standalone GPS point positioning. Further, results show that there is an improvement in the three-dimensional positioning accuracy. It is also shown that the addition of the GLONASS constellation improves the total number of visible satellites by more than 60%, which leads to an improvement of the satellite geometry, represented by Position Dilution of Precision (PDOP), of more than 30%.
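
    The PDOP improvement follows directly from geometry: with unit line-of-sight vectors to the tracked satellites, the dilution of precision comes from the diagonal of (AᵀA)⁻¹ for the positioning design matrix, so every additional GLONASS satellite tightens it. A sketch with made-up azimuth/elevation geometry (not the paper's data):

    ```python
    import numpy as np

    def los_vectors(az_el_deg):
        """Unit line-of-sight vectors (east, north, up) from azimuth/elevation in degrees."""
        az, el = np.radians(np.asarray(az_el_deg, dtype=float)).T
        return np.column_stack([np.cos(el) * np.sin(az),
                                np.cos(el) * np.cos(az),
                                np.sin(el)])

    def pdop(unit_vectors):
        """Position dilution of precision from the design matrix [e_x e_y e_z 1]."""
        A = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
        Q = np.linalg.inv(A.T @ A)
        return float(np.sqrt(np.trace(Q[:3, :3])))

    gps = los_vectors([(30, 20), (120, 40), (210, 60), (300, 25), (0, 80)])
    glonass = los_vectors([(60, 35), (180, 15), (270, 70)])

    print(f"GPS-only PDOP: {pdop(gps):.2f}")
    print(f"GPS+GLONASS PDOP: {pdop(np.vstack([gps, glonass])):.2f}")
    ```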

  2. A review on the processing accuracy of two-photon polymerization

    SciTech Connect

    Zhou, Xiaoqin; Hou, Yihong; Lin, Jieqiong

    2015-03-15

    Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS, and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy, and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, the experimental set-up for TPP, and the scaling laws of TPP resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods to improve the processing accuracy by improving the resolution and by changing the spatial arrangement of voxels.

  3. A review on the processing accuracy of two-photon polymerization

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaoqin; Hou, Yihong; Lin, Jieqiong

    2015-03-01

    Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS, and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including dimensional accuracy, shape accuracy, and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, the experimental set-up for TPP, and the scaling laws of TPP resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods to improve the processing accuracy by improving the resolution and by changing the spatial arrangement of voxels.

  4. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.

  5. High-precision measurement of pixel positions in a charge-coupled device.

    PubMed

    Shaklan, S; Sharman, M C; Pravdo, S H

    1995-10-10

    The high level of spatial uniformity in modern CCD's makes them excellent devices for astrometric instruments. However, at the level of accuracy envisioned by the more ambitious projects such as the Astrometric Imaging Telescope, current technology produces CCD's with significant pixel registration errors. We describe a technique for making high-precision measurements of relative pixel positions. We measured CCD's manufactured for the Wide Field Planetary Camera II installed in the Hubble Space Telescope. These CCD's are shown to have significant step-and-repeat errors of 0.033 pixel along every 34th row, as well as a 0.003-pixel curvature along 34-pixel stripes. The source of these errors is described. Our experiments achieved a per-pixel accuracy of 0.011 pixel. The ultimate shot-noise limited precision of the method is less than 0.001 pixel.

  6. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  7. A new chapter in precise orbit determination

    NASA Technical Reports Server (NTRS)

    Yunck, T. P.

    1992-01-01

    A report is presented on the use of GPS receivers on board orbiting spacecraft to determine their orbits with unprecedented accuracy. By placing a GPS receiver aboard a satellite one can observe its true motion and reconstruct its trajectory in great detail without knowledge of the forces acting on it. Only the accuracy of the GPS carrier-phase observable, which can be better than 1 cm for a 1 sec duration observation, ultimately limits 'user orbit' accuracy.

  8. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters and into the precision repeatability of performance. All lamps are not equal. There are a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of precision performance lamps will include (must include) the factors of a precision lamp. But what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered, not individually but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager, and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal to noise ratios, (4) higher manufacturing yields, (5) lower system costs, (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics actually aids in system product design and manufacturing to build and market more, market-acceptable products in the

  9. Precise baseline determination for the TanDEM-X mission

    NASA Astrophysics Data System (ADS)

    Koenig, Rolf; Moon, Yongjin; Neumayer, Hans; Wermuth, Martin; Montenbruck, Oliver; Jäggi, Adrian

    The TanDEM-X mission will strive to generate a precise global Digital Elevation Model (DEM) by way of bi-static SAR in a close formation of the TerraSAR-X satellite, already launched on June 15, 2007, and the TanDEM-X satellite to be launched in May 2010. Both satellites carry the Tracking, Occultation and Ranging (TOR) payload supplied by the GFZ German Research Centre for Geosciences. The TOR consists of a high-precision dual-frequency GPS receiver, called Integrated GPS Occultation Receiver (IGOR), and a Laser retro-reflector (LRR) for precise orbit determination (POD) and atmospheric sounding. The IGOR is of vital importance for the TanDEM-X mission objectives, as the millimeter-level determination of the baseline or distance between the two spacecraft is needed to derive meter-level accurate DEMs. Within the TanDEM-X ground segment GFZ is responsible for the operational provision of precise baselines. For this GFZ uses two software chains, first its Earth Parameter and Orbit System (EPOS) software and second the BERNESE software, for backup purposes and quality control. In a concerted effort, the German Aerospace Center (DLR) also generates precise baselines independently with a dedicated Kalman filter approach realized in its FRNS software. Using the example of GRACE, the generation of baselines with millimeter accuracy from on-board GPS data can be validated directly by comparing them to the intersatellite K-band range measurements. The K-band ranges are accurate down to the micrometer level and therefore may be considered as truth. Both TanDEM-X baseline providers are able to generate GRACE baselines with sub-millimeter accuracy. By merging the independent baselines by GFZ and DLR, the accuracy can even be increased. The K-band validation, however, covers solely the along-track component, as the K-band data measure just the distance between the two GRACE satellites. In addition they exhibit an unknown bias which must be modelled in the comparison, so the

  10. A Precise Lunar Photometric Function

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.

    1996-03-01

    The Clementine multispectral dataset will enable compositional mapping of the entire lunar surface at a resolution of ~100-200 m, but a highly accurate photometric normalization is needed to achieve challenging scientific objectives such as mapping petrographic or elemental compositions. The goal of this work is to normalize the Clementine data to an accuracy of 1% for the UVVIS images (0.415, 0.75, 0.9, 0.95, and 1.0 micrometers) and 2% for NIR images (1.1, 1.25, 1.5, 2.0, 2.6, and 2.78 micrometers), consistent with radiometric calibration goals. The data will be normalized to R30, the reflectance expected at an incidence angle (i) and phase angle (alpha) of 30 degrees and emission angle (e) of 0 degrees, matching the photometric geometry of lunar samples measured at the reflectance laboratory (RELAB) at Brown University. The focus here is on the precision of the normalization, not the putative physical significance of the photometric function parameters. The 2% precision achieved is significantly better than the ~10% precision of a previous normalization.
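
    Schematically, normalizing to R30 means scaling each observed reflectance by the ratio of the photometric function evaluated at the standard geometry (i = 30°, e = 0°, alpha = 30°) to its value at the observed geometry. The sketch below uses a Lommel-Seeliger disk function and a toy exponential phase curve purely as placeholders; the Clementine work fits its own empirical lunar photometric function.

    ```python
    import numpy as np

    def lommel_seeliger(i_deg, e_deg):
        """Lommel-Seeliger limb-darkening (disk) function."""
        ci, ce = np.cos(np.radians(i_deg)), np.cos(np.radians(e_deg))
        return ci / (ci + ce)

    def normalize_to_r30(r_obs, i_deg, e_deg, alpha_deg, phase):
        """Scale an observed reflectance to the standard geometry i=30, e=0, alpha=30."""
        standard = lommel_seeliger(30.0, 0.0) * phase(30.0)
        observed = lommel_seeliger(i_deg, e_deg) * phase(alpha_deg)
        return r_obs * standard / observed

    toy_phase = lambda alpha: np.exp(-alpha / 60.0)       # placeholder phase curve
    print(normalize_to_r30(0.12, i_deg=45.0, e_deg=10.0, alpha_deg=55.0, phase=toy_phase))
    ```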

  11. Superior accuracy of model-based radiostereometric analysis for measurement of polyethylene wear

    PubMed Central

    Stilling, M.; Kold, S.; de Raedt, S.; Andersen, N. T.; Rahbek, O.; Søballe, K.

    2012-01-01

    Objectives: The accuracy and precision of two new methods of model-based radiostereometric analysis (RSA) were hypothesised to be superior to a plain radiograph method in the assessment of polyethylene (PE) wear. Methods: A phantom device was constructed to simulate three-dimensional (3D) PE wear. Images were obtained consecutively for each simulated wear position for each modality. Three commercially available packages were evaluated: model-based RSA using laser-scanned cup models (MB-RSA), model-based RSA using computer-generated elementary geometrical shape models (EGS-RSA), and PolyWare. Precision (95% repeatability limits) and accuracy (Root Mean Square Errors) for two-dimensional (2D) and 3D wear measurements were assessed. Results: The precision for 2D wear measures was 0.078 mm, 0.102 mm, and 0.076 mm for EGS-RSA, MB-RSA, and PolyWare, respectively. For the 3D wear measures the precision was 0.185 mm, 0.189 mm, and 0.244 mm for EGS-RSA, MB-RSA, and PolyWare respectively. Repeatability was similar for all methods within the same dimension, when compared between 2D and 3D (all p > 0.28). For the 2D RSA methods, accuracy was below 0.055 mm and at least 0.335 mm for PolyWare. For 3D measurements, accuracy was 0.1 mm, 0.2 mm, and 0.3 mm for EGS-RSA, MB-RSA and PolyWare respectively. PolyWare was less accurate compared with RSA methods (p = 0.036). No difference was observed between the RSA methods (p = 0.10). Conclusions: For all methods, precision and accuracy were better in 2D, with RSA methods being superior in accuracy. Although less accurate and precise, 3D RSA defines the clinically relevant wear pattern (multidirectional). PolyWare is a good and low-cost alternative to RSA, despite being less accurate and requiring a larger sample size. PMID:23610688

  12. Extensive mapping of ice marginal landforms in northern Russia (25°E - 112°E); new precise constraints on ice sheet limits of the Eurasian ice sheets in Russia

    NASA Astrophysics Data System (ADS)

    Fredin, O.; Rubensdotter, L.; van Welden, A.; Larsen, E.; Lyså, A.; Jensen, M.

    2009-12-01

    Ice sheet extent for the last glaciation(s) is well established in most previously glaciated areas, most notably in North America and Europe. However, in Russia, which hosted major sectors of the Scandinavian, Barents Sea, and Kara Sea ice sheets, knowledge of exact ice marginal positions is sporadic. Most evidence of ice sheet extent so far has been from drift distribution, and only limited attempts have been made to use remote sensing data to precisely locate ice marginal zones. This is probably because of difficulties in using optical remote sensing data (typically Landsat ETM+ and ASTER) in low-relief, densely forested areas (taiga), and the sheer scale of the mapped areas. Furthermore, no reliable elevation model has existed north of 60°N to aid interpretation of optical remote sensing data. We have used recently digitized Russian topographic maps (scale 1:100,000) and the new ASTER GDEM 15 m resolution elevation model to map ice marginal moraines in Russia (25°E - 112°E), thereby covering most formerly glaciated areas in Russia. The majority of the mapping was made using shaded relief maps. Critical interpretation was made using the ASTER GDEM elevation model combined with multispectral Landsat ETM+ data to construct a synthetic stereo-model, which was analyzed in 3D using ERDAS Stereo Analyst® software. Several operators have worked independently to ensure unbiased interpretation of the landforms. So far we have mapped about 2.1 × 10⁶ km². Many of the mapped moraines are distinct at the mapping scale, with a typical relief of 20 - 120 m and a cross-sectional width of 500 - 1500 m. Moreover, several moraines are hundreds of kilometers long! Many mapped ice marginal moraines exhibit a very lobate morphology, reflecting low-gradient ice lobes extending into the low-relief river valleys. We infer very low basal shear stresses in the valleys, indicating glacier flow on soft sediments and possible flotation of ice tongues on ice-dammed lakes. There are

  13. Environment Assisted Precision Magnetometry

    NASA Astrophysics Data System (ADS)

    Cappellaro, P.; Goldstein, G.; Maze, J. R.; Jiang, L.; Hodges, J. S.; Sorensen, A. S.; Lukin, M. D.

    2010-03-01

    We describe a method to enhance the sensitivity of magnetometry and achieve nearly Heisenberg-limited precision measurement using a novel class of entangled states. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas strongly coupled to the sensor qubit; it does not depend on the exact values of the couplings (allowing the use of disordered systems) and is resilient to decoherence. As a specific example we consider electronic spins in the solid state, where the ancillary system is associated with the surrounding spin bath. The conventional approach has been to consider these spins only as a source of decoherence and to adopt decoupling schemes to mitigate their effects. Here we describe novel control techniques that transform the environment spins into a resource used to amplify the sensor spin response to weak external perturbations, while maintaining the beneficial effects of dynamical decoupling sequences. We discuss specific applications to improve magnetic sensing with diamond nano-crystals, using one Nitrogen-Vacancy center spin coupled to Nitrogen electronic spins.

  14. Precision volume measuring system

    SciTech Connect

    Klevgard, P.A.

    1984-11-01

    An engineering study was undertaken to calibrate and certify a precision volume measurement system that uses the ideal gas law and precise pressure measurements (of low-pressure helium) to ratio a known to an unknown volume. The constant-temperature, computer-controlled system was tested for thermodynamic instabilities, for precision (0.01%), and for bias (0.01%). Ratio scaling was used to optimize the quartz crystal pressure transducer calibration.
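
    In ideal-gas terms at constant temperature, the known-to-unknown volume ratioing described above can be summarized by the standard gas-expansion relation (a sketch only; the actual procedure may include additional correction terms for non-ideality and dead volumes):

        P_1 V_k = P_2\,(V_k + V_u) \quad\Longrightarrow\quad V_u = V_k\left(\frac{P_1}{P_2} - 1\right),

    where V_k is the known volume, V_u the unknown volume, and P_1, P_2 the helium pressures measured before and after expansion.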

  15. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated by an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bit due to systematic errors.

  16. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  17. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided with the precision measuring means secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate with the first plate being positioned between the second plate and the third plate. A fourth plate is secured to the third plate with the second plate being positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the position of the first plate, the second plate, the third plate, and the fourth plate relative to each other.

  18. Precise and accurate isotopic measurements using multiple-collector ICPMS

    NASA Astrophysics Data System (ADS)

    Albarède, F.; Telouk, Philippe; Blichert-Toft, Janne; Boyet, Maud; Agranier, Arnaud; Nelson, Bruce

    2004-06-01

    New techniques of isotopic measurement by a new generation of mass spectrometers equipped with an inductively-coupled-plasma source, a magnetic mass filter, and multiple collection (MC-ICPMS) are quickly developing. These techniques are valuable because of (1) the ability of ICP sources to ionize virtually every element in the periodic table, and (2) the large sample throughput. However, because of the complex trajectories of multiple ion beams produced in the plasma source, whether from the same or different elements, the acquisition of precise and accurate isotopic data with this type of instrument still requires a good understanding of instrumental fractionation processes, both mass-dependent and mass-independent. Although the physical processes responsible for the instrumental mass bias are still to be understood more fully, we here present a theoretical framework that allows most of the analytical limitations to high precision and accuracy to be overcome. After a presentation of a unifying phenomenological theory for mass-dependent fractionation in mass spectrometers, we show how this theory accounts for the techniques of standard bracketing and of isotopic normalization by a ratio of either the same or a different element, such as the use of Tl to correct mass bias on Pb. Accuracy is discussed with reference to the concept of cup efficiencies. Although these can be simply calibrated by analyzing standards, we derive a straightforward, very general method to calculate accurate isotopic ratios from dynamic measurements. In this study, we successfully applied the dynamic method to Nd and Pb as examples. We confirm that the assumption of identical mass bias for neighboring elements (notably Pb and Tl, and Yb and Lu) is both unnecessary and incorrect. We further discuss the dangers of straightforward standard-sample bracketing when chemical purification of the element to be analyzed is imperfect. Pooling runs to improve precision is acceptable provided the pooled
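
    One widely used member of the family of mass-dependent fractionation laws referred to above is the exponential law, in which the measured ratio is corrected through a normalization ratio of the same or a neighbouring element (a generic sketch of the law, not necessarily the exact parameterization adopted by the authors):

        \frac{R_{\mathrm{true}}}{R_{\mathrm{meas}}} = \left(\frac{m_i}{m_j}\right)^{\beta}, \qquad \beta = \frac{\ln\left(R^{\mathrm{norm}}_{\mathrm{true}}/R^{\mathrm{norm}}_{\mathrm{meas}}\right)}{\ln\left(m_k/m_l\right)},

    where R is the isotope ratio of interest with isotope masses m_i and m_j, and the exponent beta is obtained from a normalization ratio of isotopes with masses m_k and m_l (for example, a Tl ratio used to correct the mass bias on Pb).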

  19. System and method for high precision isotope ratio destructive analysis

    SciTech Connect

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  20. IRCM spectral signature measurements instrumentation featuring enhanced radiometric accuracy

    NASA Astrophysics Data System (ADS)

    Lantagne, Stéphane; Prel, Florent; Moreau, Louis; Roy, Claude; Willers, Cornelius J.

    2015-10-01

    Hyperspectral infrared (IR) signature measurements are performed in military applications including aircraft and naval vessel stealth characterization, detection/lock-on range determination, and flare efficiency characterization. Numerous military applications require high-precision infrared signature measurements. For instance, Infrared Countermeasure (IRCM) systems and Infrared Counter-Countermeasure (IRCCM) systems are continuously evolving: infrared flares initially defeated IR-guided seekers, intelligent IR-guided seekers then defeated the flares, and jammers in turn defeated the intelligent IR-guided seekers [7]. A precise knowledge of the target infrared signature phenomenology is crucial for the development and improvement of countermeasure and counter-countermeasure systems, and precise quantification of the infrared energy emitted from the targets therefore requires accurate spectral signature measurements. Errors in infrared characterization measurements can lead to weaknesses in the safety of the countermeasure system and errors in the determination of the detection/lock-on range of an aircraft. The infrared signatures are analyzed, modeled, and simulated to provide a good understanding of the signature phenomenology and to improve the efficiency of IRCM and IRCCM technologies [7,8,9]. There is a growing need for infrared spectral signature measurement technology in order to further improve and validate infrared-based models and simulations. The addition of imagery to spectroradiometers is improving the measurement capability of complex targets and scenes because all elements in the scene can now be measured simultaneously. However, the limited dynamic range of the Focal Plane Array (FPA) sensors used in these instruments confines the range of measurable radiance intensities. This ultimately affects the radiometric accuracy of these complex signatures. We will describe and demonstrate how the ABB hyperspectral imaging spectroradiometer's features enhance the radiometric accuracy

  1. Fundamental Limits to Cellular Sensing

    NASA Astrophysics Data System (ADS)

    ten Wolde, Pieter Rein; Becker, Nils B.; Ouldridge, Thomas E.; Mugler, Andrew

    2016-03-01

    In recent years experiments have demonstrated that living cells can measure low chemical concentrations with high precision, and much progress has been made in understanding what sets the fundamental limit to the precision of chemical sensing. Chemical concentration measurements start with the binding of ligand molecules to receptor proteins, which is an inherently noisy process, especially at low concentrations. The signaling networks that transmit the information on the ligand concentration from the receptors into the cell have to filter this receptor input noise as much as possible. These networks, however, are also intrinsically stochastic in nature, which means that they will also add noise to the transmitted signal. In this review, we will first discuss how the diffusive transport and binding of ligand to the receptor sets the receptor correlation time, which is the timescale over which fluctuations in the state of the receptor, arising from the stochastic receptor-ligand binding, decay. We then describe how downstream signaling pathways integrate these receptor-state fluctuations, and how the number of receptors, the receptor correlation time, and the effective integration time set by the downstream network, together impose a fundamental limit on the precision of sensing. We then discuss how cells can remove the receptor input noise while simultaneously suppressing the intrinsic noise in the signaling network. We describe why this mechanism of time integration requires three classes (groups) of resources—receptors and their integration time, readout molecules, energy—and how each resource class sets a fundamental sensing limit. We also briefly discuss the scheme of maximum-likelihood estimation, the role of receptor cooperativity, and how cellular copy protocols differ from canonical copy protocols typically considered in the computational literature, explaining why cellular sensing systems can never reach the Landauer limit on the optimal trade
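
    A convenient reference point for the fundamental limit discussed in this review is the classic Berg-Purcell estimate for a receptor (or cell) of linear size a that integrates ligand arrivals for a time T; up to an order-one prefactor that depends on the receptor model, the relative concentration error obeys

        \left(\frac{\delta c}{\bar{c}}\right)^{2} \gtrsim \frac{1}{D\, a\, \bar{c}\, T},

    where D is the ligand diffusion constant and \bar{c} the mean concentration. The review refines this estimate by accounting for the number of receptors, the receptor correlation time, and the resources of the downstream signaling network.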

  2. Precision Teaching: An Introduction.

    ERIC Educational Resources Information Center

    West, Richard P.; And Others

    1990-01-01

    Precision teaching is introduced as a method of helping students develop fluency or automaticity in the performance of academic skills. Precision teaching involves being aware of the relationship between teaching and learning, measuring student performance regularly and frequently, and analyzing the measurements to develop instructional and…

  3. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  4. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to premature conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique of both avoiding the use of approximate computational codes as well as allowing the application of new, more precise analysis

  5. An optical lattice clock with accuracy and stability at the 10(-18) level.

    PubMed

    Bloom, B J; Nicholson, T L; Williams, J R; Campbell, S L; Bishof, M; Zhang, X; Zhang, W; Bromley, S L; Ye, J

    2014-02-01

    Progress in atomic, optical and quantum science has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks, their accuracy has remained 16 times worse. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10(-18), which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard: stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units, the search for time variation of fundamental constants, clock-based geodesy and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering (such as spin squeezing) to advance measurement precision beyond the standard quantum limit. PMID:24463513

  7. High accuracy flexural hinge development

    NASA Astrophysics Data System (ADS)

    Santos, I.; Ortiz de Zárate, I.; Migliorero, G.

    2005-07-01

    This document provides a synthesis of the technical results obtained in the frame of the HAFHA (High Accuracy Flexural Hinge Assembly) development performed by SENER (in charge of design, development, manufacturing and testing at component and mechanism levels) with EADS Astrium as subcontractor (in charge of making an inventory of candidate applications among existing and emerging projects, establishing the requirements and performing system level testing) under ESA contract. The purpose of this project has been to develop a competitive technology for a flexural pivot, usable in highly accurate and dynamic pointing/scanning mechanisms. Compared with other solutions (e.g. magnetic or ball bearing technologies), flexural hinges are the appropriate technology for accurately guiding a mobile payload over a limited angular range around one rotation axis.

  8. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  9. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in the classification of pixels, resulting in significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  10. High-precision three-dimensional coordinate measurement with subwavelength-aperture-fiber point diffraction interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Daodang; Xu, Yangbo; Chen, Xixi; Wang, Fumin; Kong, Ming; Zhao, Jun

    2014-11-01

    To overcome the accuracy limitation due to the machining error of standard parts in a measurement system, a three-dimensional coordinate measurement method with a subwavelength-aperture-fiber point diffraction interferometer (PDI) is proposed, in which the high-precision measurement standard is obtained from the ideal point-diffracted spherical wavefront instead of standard components. On the basis of the phase distribution demodulated from the point-diffraction interference field, high-precision three-dimensional coordinate measurement is realized with a numerical iterative optimization algorithm. The subwavelength-aperture fiber is used as the point-diffraction source to obtain a precise, high-energy spherical wavefront over a high aperture angle range, by which the conflict between diffraction wave angle and energy in traditional PDI can be avoided. Besides, a double-iterative method based on the Levenberg-Marquardt algorithm is proposed to precisely reconstruct the three-dimensional coordinates. The analysis shows that the proposed method can reach a measurement precision better than the micrometer level within a 200 mm × 200 mm × 300 mm working volume. This measurement method does not rely on the initial iteration value in numerical coordinate reconstruction, and also has high measurement precision, a large measuring range, fast processing speed and preferable anti-noise ability. It is of great practicality for the measurement of three-dimensional coordinates and the calibration of measurement systems.
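
    The iterative coordinate reconstruction described above is, at its core, a nonlinear least-squares fit of an unknown point position to a set of interferometrically derived optical path lengths. The sketch below (Python with SciPy; all source positions and distances are hypothetical) illustrates this kind of Levenberg-Marquardt reconstruction under the simplifying assumption that each point-diffraction source yields an absolute distance to the target point.

        import numpy as np
        from scipy.optimize import least_squares

        # Hypothetical positions (mm) of point-diffraction sources in the instrument frame.
        sources = np.array([
            [0.0,   0.0,   0.0],
            [200.0, 0.0,   0.0],
            [0.0,   200.0, 0.0],
            [200.0, 200.0, 50.0],
        ])

        target_true = np.array([80.0, 120.0, 250.0])                  # "unknown" point (mm)
        measured_d = np.linalg.norm(sources - target_true, axis=1)    # ideal distances
        measured_d += np.random.normal(0.0, 1e-3, size=len(sources))  # ~1 um measurement noise

        def residuals(p):
            # Difference between modelled and measured source-to-point distances.
            return np.linalg.norm(sources - p, axis=1) - measured_d

        # Levenberg-Marquardt solution from a rough initial guess.
        fit = least_squares(residuals, x0=np.array([0.0, 0.0, 100.0]), method="lm")
        print("reconstructed point (mm):", fit.x)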

  11. Does DFT-SAPT method provide spectroscopic accuracy?

    SciTech Connect

    Shirkov, Leonid; Makarewicz, Jan

    2015-02-14

    Ground state potential energy curves for homonuclear and heteronuclear dimers consisting of noble gas atoms from He to Kr were calculated within the symmetry adapted perturbation theory based on the density functional theory (DFT-SAPT). These potentials together with spectroscopic data derived from them were compared to previous high-precision coupled cluster with singles and doubles including the connected triples theory calculations (or better if available) as well as to experimental data used as the benchmark. The impact of midbond functions on DFT-SAPT results was tested to study the convergence of the interaction energies. It was shown that, for most of the complexes, DFT-SAPT potential calculated at the complete basis set (CBS) limit is lower than the corresponding benchmark potential in the region near its minimum and hence, spectroscopic accuracy cannot be achieved. The influence of the residual term δ(HF) on the interaction energy was also studied. As a result, we have found that this term improves the agreement with the benchmark in the repulsive region for the dimers considered, but leads to even larger overestimation of potential depth D{sub e}. Although the standard hybrid exchange-correlation (xc) functionals with asymptotic correction within the second order DFT-SAPT do not provide the spectroscopic accuracy at the CBS limit, it is possible to adjust empirically basis sets yielding highly accurate results.

  12. Precision and Power Grip Priming by Observed Grasping

    ERIC Educational Resources Information Center

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-01-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power…

  13. Research of limits of applicability of an open-source equipment for development the optical equipment kit

    NASA Astrophysics Data System (ADS)

    Saitgalina, Azaiya; Mityushkin, Anthon; Tolstoba, Nadezhda D.

    2016-04-01

    This work is devoted to a comparative study of different designs of optical equipment models in order to identify the best way of creating them, as well as a comparison of the conditions and materials of which these mounts are made. Fasteners for optical elements require considerable precision. With affordable 3D printers, the precision of fasteners depends on many parameters. The relevance of the work lies in studying the accuracy characteristics of three-dimensional printing and the limits of its application.

  14. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  15. Precision spectroscopy with a frequency-comb-calibrated solar spectrograph

    NASA Astrophysics Data System (ADS)

    Doerr, H.-P.

    2015-06-01

    The measurement of the velocity field of the plasma at the solar surface is a standard diagnostic tool in observational solar physics. Detailed information about the energy transport as well as on the stratification of temperature, pressure and magnetic fields in the solar atmosphere is encoded in Doppler shifts and in the precise shape of the spectral lines. The available instruments deliver data of excellent quality and precision. However, absolute wavelength calibration in solar spectroscopy has so far mostly been limited to indirect methods and in general suffers from large systematic uncertainties of the order of 100 m/s. During the course of this thesis, a novel wavelength calibration system based on a laser frequency comb was deployed to the solar Vacuum Tower Telescope (VTT), Tenerife, with the goal of enabling highly accurate solar wavelength measurements at the level of 1 m/s on an absolute scale. The frequency comb was developed in a collaboration between the Kiepenheuer-Institute for Solar Physics, Freiburg, Germany and the Max Planck Institute for Quantum Optics, Garching, Germany. The efforts culminated in the new prototype instrument LARS (Lars is an Absolute Reference Spectrograph) for solar precision spectroscopy, which has been in preliminary scientific operation since 2013. The instrument is based on the high-resolution echelle spectrograph of the VTT, for which feed optics based on single-mode optical fibres were developed for this project. The setup routinely achieves an absolute calibration accuracy of 60 cm/s and a repeatability of 2.5 cm/s. An unprecedented repeatability of only 0.32 cm/s could be demonstrated with a differential calibration scheme. In combination with the high spectral resolving power of the spectrograph of 7×10^5 and virtually absent internal scattered light, LARS provides a spectral purity and fidelity that previously was the domain of Fourier-transform spectrometers only. The instrument therefore provides unique capabilities for

  16. Measures of Diagnostic Accuracy: Basic Definitions

    PubMed Central

    Šimundić, Ana-Maria

    2009-01-01

    Diagnostic accuracy relates to the ability of a test to discriminate between the target condition and health. This discriminative potential can be quantified by measures of diagnostic accuracy such as sensitivity and specificity, predictive values, likelihood ratios, the area under the ROC curve, Youden's index and the diagnostic odds ratio. Different measures of diagnostic accuracy relate to different aspects of the diagnostic procedure: while some measures are used to assess the discriminative property of the test, others are used to assess its predictive ability. Measures of diagnostic accuracy are not fixed indicators of test performance; some are very sensitive to disease prevalence, while others are sensitive to the spectrum and definition of the disease. Furthermore, measures of diagnostic accuracy are extremely sensitive to the design of the study. Studies not meeting strict methodological standards usually over- or under-estimate the indicators of test performance and limit the applicability of the results of the study. The STARD initiative was a very important step toward improving the quality of reporting of studies of diagnostic accuracy. The STARD statement should be included in the Instructions to Authors by scientific journals, and authors should be encouraged to use the checklist whenever reporting their studies on diagnostic accuracy. Such efforts could make a substantial difference in the quality of reporting of studies of diagnostic accuracy and serve to provide the best possible evidence for patient care. This brief review outlines some basic definitions and characteristics of the measures of diagnostic accuracy.
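
    Most of the measures listed in this review follow directly from the four cells of a 2x2 confusion table. The sketch below (Python, with hypothetical counts) shows the usual definitions; the review itself discusses how several of these quantities depend on disease prevalence, spectrum and study design.

        # Hypothetical counts from a diagnostic accuracy study.
        TP, FP, FN, TN = 90, 20, 10, 180

        sensitivity = TP / (TP + FN)               # true positive rate
        specificity = TN / (TN + FP)               # true negative rate
        ppv = TP / (TP + FP)                       # positive predictive value
        npv = TN / (TN + FN)                       # negative predictive value
        lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
        youden_j = sensitivity + specificity - 1   # Youden's index
        dor = lr_pos / lr_neg                      # diagnostic odds ratio

        print(f"Se={sensitivity:.2f} Sp={specificity:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
        print(f"LR+={lr_pos:.2f} LR-={lr_neg:.2f} J={youden_j:.2f} DOR={dor:.1f}")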

  17. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy, composition precision). The spatial accuracy of this microscope is evaluated in this paper for the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, which is composed of an oxide layer of high electrical permittivity (high-k dielectric constant) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret the inaccuracy in the depth reconstruction of these devices in APT.

  18. Decade-Spanning High-Precision Terahertz Frequency Comb

    NASA Astrophysics Data System (ADS)

    Finneran, Ian A.; Good, Jacob T.; Holland, Daniel B.; Carroll, P. Brandon; Allodi, Marco A.; Blake, Geoffrey A.

    2015-04-01

    The generation and detection of a decade-spanning terahertz (THz) frequency comb is reported using two Ti:sapphire femtosecond laser oscillators and asynchronous optical sampling THz time-domain spectroscopy. The comb extends from 0.15 to 2.4 THz, with a tooth spacing of 80 MHz, a linewidth of 3.7 kHz, and a fractional precision of 1.8 ×10-9 . With time-domain detection of the comb, we measure three transitions of water vapor at 10 mTorr between 1-2 THz with an average Doppler-limited fractional accuracy of 6.1 ×10-8 . Significant improvements in bandwidth, resolution, and sensitivity are possible with existing technologies.
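
    For orientation, the quoted fractional precision is of the order of the comb-tooth linewidth divided by a tooth frequency near 2 THz (associating these two numbers is our reading of the abstract, not an explicit statement in it):

        \frac{\delta\nu}{\nu} \approx \frac{3.7\ \mathrm{kHz}}{2\ \mathrm{THz}} \approx 1.9\times10^{-9}.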

  19. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  20. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  1. Quantum limits of thermometry

    SciTech Connect

    Stace, Thomas M.

    2010-07-15

    The precision of typical thermometers consisting of N particles scales as ~1/√N. For high-precision thermometry and thermometric standards, this presents an important theoretical noise floor. Here it is demonstrated that thermometry may be mapped onto the problem of phase estimation, and using techniques from optimal phase estimation, it follows that the scaling of the precision of a thermometer may in principle be improved to ~1/N, representing a Heisenberg limit to thermometry.

  2. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with an accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with industry-standard High Pressure Ionization Chamber detector hardware. A new highly sensitive readout electronic circuit was designed to measure charge from the virtually suspended ionization chamber ion collecting electrode. A new signal processing technique and dedicated data acquisition were tested together with the new readout. The designed system enabled data collection in a remote Linux-operated computer workstation, which was connected to the detectors using a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include: extremely low leakage current in the input circuit, true charge-integrating mode operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  3. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
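
    One commonly cited form of such a scoring rule, in which a correct response earns the time remaining before the deadline and an incorrect response forfeits it, is the signed residual time rule (stated here as an illustrative assumption rather than a quotation from the article):

        S_{pi} = (2X_{pi} - 1)\,(d - T_{pi}),

    where X_{pi} in {0, 1} is the accuracy of person p on item i, T_{pi} the response time, and d the time limit.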

  4. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
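
    The precision penalty from within-cluster correlation mentioned above is commonly summarized by the design-effect approximation from survey sampling (stated here for orientation rather than taken from the article):

        \mathrm{deff} \approx 1 + (\bar{m} - 1)\,\rho,

    where \bar{m} is the average number of secondary sampling units per cluster and \rho is the intracluster correlation of classification error; the variance of an accuracy estimate under clustering is roughly deff times the variance under simple random sampling of the same size.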

  5. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described which enables real-time accountability over the applied displacement feedback system for precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  6. Precision In Situ Field Geologic Contact Mapping by MERA, Columbia Hills, Mars

    NASA Astrophysics Data System (ADS)

    Crumpler, L. S.

    2006-12-01

    The positions of identified lithologic contacts, outcrops, traverse landforms, and data derived from in situ measurements of outcrop materials by the Athena instrument suite have been determined by stereo-ranging and rover tracking along the traverse by MERA (Spirit) within the Columbia Hills. High-precision geologic maps of several sites and moderate-precision transect maps between sites have been constructed from these data, showing the geology of Spirit's path through the Columbia Hills. The overall accuracy of contact locations with respect to global position reflects the overall accuracy of knowledge about the rover location. But measurements of contacts from multiple (as many as five) positions agree remarkably well and are well within the standards and limitations acceptable for terrestrial field geologic contact mapping precision. Orthographic maps of the results along the traverse also agree well with features in narrow-angle MOC images crossed during the traverse. Some site-to-site variations in lithology and chemistry within the Columbia Hills reflect possible variations in surficial materials. But other differences between outcrops could be a result of variations in alteration of a limited range of protoliths draped as either distal crater ejecta or volcanic air fall materials over a Columbia Hills substrate. Large-scale changes in lithology along the traverse, and particularly abrupt discontinuities coincident with through-going linear trends, are evidence for possible structural (faulting) control on exposures that expose fundamental differences in basement or substrate materials. The geological complexity of the Columbia Hills appears comparable to that of some ancient continental basement terrains.

  7. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  8. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high-precision results; however, real-life situations do not always let us collect 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, spanning two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking into account the seasonal effects, and the typical one-term accuracy formulation is expanded to a two-term one.

  9. Design for H type co-planar precision stage based on closed air bearing guideway with vacuum attraction force

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Shi, Zhaoyao; Lin, Jiachun; Zhang, Hua

    2011-12-01

    The accuracy of a traditional two-dimensional precision stage is limited not only by the accuracy of each guideway but also by the configuration of the stage. It is not easy to calculate and compensate the total accuracy of the stage because of the complicated influence of the different positions of the slides. An air bearing guideway with vacuum attraction force has been designed with a closed slide structure to enhance stiffness and avoid the deformation caused by the weight of the slide and workpieces. An H-style two-dimensional ultra-precision stage with a co-planar structure has been developed based on the air bearing guideways to avoid cross-coupling between the axes. Driven by linear motors, the position of the workpiece is encoded by length scales with a resolution of 50 nm and a thermal expansion of 0.6 μm/m/°C (0 °C to 30 °C). The travel span of the stage is 320 mm × 320 mm, over which each axis has a positioning accuracy of ±1 μm, a repeatability of ±0.3 μm and a straightness of ±0.5 μm. The stage can be applied in precision manufacturing and measurement.

  10. Precise Orbit Determination of GPS Satellites Using Phase Observables

    NASA Astrophysics Data System (ADS)

    Jee, Myung-Kook; Choi, Kyu-Hong; Park, Pil-Ho

    1997-12-01

    The accuracy of a user position determined by GPS is heavily dependent upon the accuracy of the satellite positions that are transmitted to GPS users in the radio signals. The real-time satellite position information directly obtained from broadcast ephemerides has an accuracy of 3 x 10 meters, which is very unsatisfactory for measuring a 100 km baseline to an accuracy of less than a few millimeters. There are at present seven orbit analysis centers worldwide capable of generating precise GPS ephemerides, and their orbit quality is of the order of about 10 cm. Therefore, a precise orbit model and phase processing technique were reviewed and, consequently, precise GPS ephemerides were produced after processing the phase observables of 28 global GPS stations for 1 day. Six initial orbit parameters and 2 solar radiation pressure coefficients were estimated using a batch least-squares algorithm, and the final results were compared with the orbit of the IGS, the International GPS Service for Geodynamics.
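
    The batch least-squares step mentioned above has the familiar normal-equation form (a generic statement of batch least squares, not the authors' specific implementation):

        \hat{\mathbf{x}} = \left(\mathbf{H}^{\mathsf{T}}\mathbf{W}\mathbf{H}\right)^{-1}\mathbf{H}^{\mathsf{T}}\mathbf{W}\,\mathbf{y},

    where y is the vector of phase observation residuals, H the matrix of partial derivatives of the observations with respect to the eight estimated parameters (six initial orbit parameters and two solar radiation pressure coefficients), and W the observation weight matrix; in practice the solution is iterated about an updated reference orbit.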

  11. Resist development modeling for OPC accuracy improvement

    NASA Astrophysics Data System (ADS)

    Fan, Yongfa; Zavyalova, Lena; Zhang, Yunqiang; Zhang, Charlie; Lucas, Kevin; Falch, Brad; Croffie, Ebo; Li, Jianliang; Melvin, Lawrence; Ward, Brian

    2009-03-01

    A precise lithographic model has always been a critical component for the technique of Optical Proximity Correction (OPC) since it was introduced a decade ago [1]. As semiconductor manufacturing moves to 32nm and 22nm technology nodes with 193nm wafer immersion lithography, the demand for more accurate models is unprecedented to predict complex imaging phenomena at high numerical aperture (NA) with aggressive illumination conditions necessary for these nodes. An OPC model may comprise all the physical processing components from mask e-beam writing steps to final CDSEM measurement of the feature dimensions. In order to provide a precise model, it is desired that every component involved in the processing physics be accurately modeled using minimum metrology data. In the past years, much attention has been paid to studying mask 3-D effects, mask writing limitations, laser spectrum profile, lens pupil polarization/apodization, source shape characterization, stage vibration, and so on. However, relatively fewer studies have been devoted to modeling of the development process of resist film though it is an essential processing step that cannot be neglected. Instead, threshold models are commonly used to approximate resist development behavior. While resist models capable of simulating development path are widely used in many commercial lithography simulators, the lack of this component in current OPC modeling lies in the fact that direct adoption of those development models into OPC modeling compromises its capability of full chip simulation. In this work, we have successfully incorporated a photoresist development model into production OPC modeling software without sacrificing its full chip capability. The resist film development behavior is simulated in the model to incorporate observed complex resist phenomena such as surface inhibition, developer mass transport, HMDS poisoning, development contrast, etc. The necessary parameters are calibrated using metrology data

  12. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base unit, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  13. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  14. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  15. High Precision Fe Isotope Analysis in low Concentration Samples by High Resolution MC-ICPMS

    NASA Astrophysics Data System (ADS)

    Chung, C.; Wu, J.; You, C.

    2009-12-01

    Iron availability has been shown to be the main limiting factor for phytoplankton growth in the ocean. However, due to limitations of analytical technique, the database of dissolved Fe concentrations and isotope ratio distributions in the ocean is still very limited. In particular, the iron sources to the ocean remain uncertain. Aeolian dust from the continents is considered the primary source, and diagenetic dissolution at the continental margins is also proposed to contribute a significant portion of the iron content of the surface seawater. The field of Fe isotope geochemistry has seen important developments in methodology and scope since the advent of Multi-Collector Inductively Coupled Plasma Mass Spectrometry (MC-ICPMS). Although increasing the number of replicates in high-resolution MC-ICPMS reduces the uncertainty related to instability in instrumental mass bias and counting statistics, many other parameters, including mass fractionation during column separation, matrix effects in ICPMS analysis and the presence of isobaric interferences, can affect the precision and accuracy of Fe isotopic analyses. In this study, a high-precision analytical method of Fe isotope measurement for low-concentration samples was developed using HR-MC-ICPMS. Several parameters that may affect the accuracy and precision of the 56Fe/54Fe result, such as background, instrumental mass discrimination, isobaric interferences, type of introduction system and acid molarity, were identified and evaluated. External precisions better than 0.04‰ for δ56Fe can be achieved using only 10 ng of iron sample with an APEX and X-cone as the introduction system. A significant improvement in terms of sample size was made. This method can be applied to very low concentration samples such as coral and seawater.
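
    For reference, the δ56Fe values discussed here follow standard delta notation relative to a reference material (commonly the IRMM-014 standard for Fe, although the abstract does not name one):

        \delta^{56}\mathrm{Fe} = \left[\frac{\left(^{56}\mathrm{Fe}/^{54}\mathrm{Fe}\right)_{\mathrm{sample}}}{\left(^{56}\mathrm{Fe}/^{54}\mathrm{Fe}\right)_{\mathrm{standard}}} - 1\right] \times 1000\ \text{‰}.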

  16. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  17. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  18. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  19. Diameter measurement by laser at the submicron accuracy level

    NASA Astrophysics Data System (ADS)

    Mainsah, E.; Wong, Cheuk-Mun G.; Stout, Kenneth J.

    1993-09-01

    One important consequence of the "Quality Revolution" that is currently taking place in all sectors of advanced manufacturing industry is the requirement for more systematic and precise measurement. This is a prerequisite for controlling tolerances on manufactured components and for ensuring that products leaving the factory meet the required specifications. The dramatic increase in computer power, coupled with the demands of space-age nanotechnology and customer sophistication, has meant that instrumentation is being constantly pushed to the limits in terms of accuracy, tolerance and speed. Diameter measurements are carried out on a daily basis in many sectors of manufacturing industry. Due to the emphasis on factors such as speed, accuracy and repeatability, the current trend is to move away from conventional measurement techniques (metre rule, measuring tape, Vernier callipers) towards non-contact techniques. One such technique involves the use of the laser. This paper discusses the design of a laser tracer data initiation, capture and processing unit that permits diameter measurements to be made on-line and has the capability of carrying out up to 500 measurements per second. The system is non-contact, with a measurement range of 2.0000 mm and a resolution of 0.5 μm. It is demonstrated that by using two of these devices diameters of up to 220.000 mm can be measured. This is done by incorporating a translational table that provides the

  20. Accuracy of different impression materials in parallel and nonparallel implants

    PubMed Central

    Vojdani, Mahroo; Torabi, Kianoosh; Ansarifard, Elham

    2015-01-01

    Background: A precise impression is mandatory to obtain passive fit in implant-supported prostheses. The aim of this study was to compare the accuracy of three impression materials in both parallel and nonparallel implant positions. Materials and Methods: In this experimental study, two partially dentate maxillary acrylic models with four implant analogues in the canine and lateral incisor areas were used. One model simulated the parallel condition and the other the nonparallel one, in which implants were tilted 30° buccally and 20° in either the mesial or distal direction. Thirty stone casts were made from each model using polyether (Impregum), addition silicone (Monopren) and vinyl siloxanether (Identium), with the open tray technique. The distortion values in three dimensions (X, Y and Z-axis) were measured by a coordinate measuring machine. Two-way analysis of variance (ANOVA), one-way ANOVA and Tukey tests were used for data analysis (α = 0.05). Results: Under the parallel condition, all the materials showed comparable, accurate casts (P = 0.74). In the presence of angulated implants, while Monopren showed more accurate results compared to Impregum (P = 0.01), Identium yielded almost similar results to those produced by Impregum (P = 0.27) and Monopren (P = 0.26). Conclusion: Within the limitations of this study, in parallel conditions, the type of impression material does not affect the accuracy of the implant impressions; however, in nonparallel conditions, polyvinyl siloxane is shown to be a better choice, followed by vinyl siloxanether and polyether respectively. PMID:26288620

  1. Precision Metrology Using Weak Measurements

    NASA Astrophysics Data System (ADS)

    Zhang, Lijian; Datta, Animesh; Walmsley, Ian A.

    2015-05-01

    Weak values and measurements have been proposed as a means to achieve dramatic enhancements in metrology based on the greatly increased range of possible measurement outcomes. Unfortunately, the very large values of measurement outcomes occur with highly suppressed probabilities. This raises three vital questions in weak-measurement-based metrology. Namely, (Q1) Does postselection enhance the measurement precision? (Q2) Does weak measurement offer better precision than strong measurement? (Q3) Is it possible to beat the standard quantum limit or to achieve the Heisenberg limit with weak measurement using only classical resources? We analyze these questions for two prototypical, and generic, measurement protocols and show that while the answers to the first two questions are negative for both protocols, the answer to the last is affirmative for measurements with phase-space interactions, and negative for configuration space interactions. Our results, particularly the ability of weak measurements to perform at par with strong measurements in some cases, are instructive for the design of weak-measurement-based protocols for quantum metrology.

  2. Active transport improves the precision of linear long distance molecular signalling

    NASA Astrophysics Data System (ADS)

    Godec, Aljaž; Metzler, Ralf

    2016-09-01

    Molecular signalling in living cells occurs at low copy numbers and is thereby inherently limited by the noise imposed by thermal diffusion. The precision at which biochemical receptors can count signalling molecules is intimately related to the noise correlation time. In addition to passive thermal diffusion, messenger RNA and vesicle-engulfed signalling molecules can transiently bind to molecular motors and are actively transported across biological cells. Active transport is most beneficial when trafficking occurs over large distances, for instance up to the order of 1 metre in neurons. Here we explain how intermittent active transport allows for faster equilibration upon a change in concentration triggered by biochemical stimuli. Moreover, we show how intermittent active excursions induce qualitative changes in the noise in effectively one-dimensional systems such as dendrites. Thereby they allow for significantly improved signalling precision in the sense of a smaller relative deviation in the concentration read-out by the receptor. On the basis of linear response theory we derive the exact mean field precision limit for counting actively transported molecules. We explain how intermittent active excursions disrupt the recurrence in the molecular motion, thereby facilitating improved signalling accuracy. Our results provide a deeper understanding of how recurrence affects molecular signalling precision in biological cells and novel medical-diagnostic devices.

  3. Developing and implementing a high precision setup system

    NASA Astrophysics Data System (ADS)

    Peng, Lee-Cheng

    the treatment planning system (TPS) has limited adaptive treatments. A reliable and accurate dosimetric simulation of uncorrected errors, using the TPS and in-house software, has been developed. In SRT, the calculated dose deviation is compared to the original treatment dose with the dose-volume histogram to investigate the dose effect of rotational errors. In summary, this work performed a quality assessment to investigate the overall accuracy of current setup systems. To reach the ideal HPRT, a reliable dosimetric simulation, an effective daily QA program and precise setup systems were developed and validated.

  4. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  5. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  6. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    The prototype precision bolometer calibration bridge is a manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  7. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  8. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  9. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has been given multiple definitions, often resulting in students learning principles that are actually false, is presented, using the term "stereogenic atom" introduced by Mislow and Siegel as an example. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  10. High Precision Astrometry

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2012-10-01

    This program uses the enhanced astrometric precision enabled by spatial scanning to calibrate remaining obstacles to reaching <40 microarcsecond astrometry (<1 millipixel) with WFC3/UVIS by 1) improving geometric distortion, 2) calibrating the effect of breathing on astrometry, 3) calibrating the effect of CTE on astrometry, 4) characterizing the boundaries and orientations of the WFC3 lithograph cells.

  11. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  12. Programming supramolecular biohybrids as precision therapeutics.

    PubMed

    Ng, David Yuen Wah; Wu, Yuzhou; Kuan, Seah Ling; Weil, Tanja

    2014-12-16

    CONSPECTUS: Chemical programming of macromolecular structures to instill a set of defined chemical properties designed to behave in a sequential and precise manner is a characteristic vision for creating next generation nanomaterials. In this context, biopolymers such as proteins and nucleic acids provide an attractive platform for the integration of complex chemical design due to their sequence specificity and geometric definition, which allows accurate translation of chemical functionalities to biological activity. Coupled with the advent of amino acid specific modification techniques, "programmable" areas of a protein chain become exclusively available for any synthetic customization. We envision that chemically reprogrammed hybrid proteins will bridge the vital link to overcome the limitations of synthetic and biological materials, providing a unique strategy for tailoring precision therapeutics. In this Account, we present our work toward the chemical design of protein-derived hybrid polymers and their supramolecular responsiveness, while summarizing their impact and the advancement in biomedicine. Proteins, in their native form, represent the central framework of all biological processes and are an unrivaled class of macromolecular drugs with immense specificity. Nonetheless, the route of administration of protein therapeutics is often vastly different from Nature's biosynthesis. Therefore, it is imperative to chemically reprogram these biopolymers to direct their entry and activity toward the designated target. As a consequence of the innate structural regularity of proteins, we show that supramolecular interactions facilitated by stimulus responsive chemistry can be intricately designed as a powerful tool to customize their functions, stability, activity profiles, and transportation capabilities. From another perspective, a protein in its denatured, unfolded form serves as a monodispersed, biodegradable polymer scaffold decorated with functional side

  13. Method for improving terahertz band absorption spectrum measurement accuracy using noncontact sample thickness measurement.

    PubMed

    Li, Zhi; Zhang, Zhaohui; Zhao, Xiaoyan; Su, Haixia; Yan, Fang; Zhang, Han

    2012-07-10

    The terahertz absorption spectrum has a complex nonlinear relationship with sample thickness, which is normally measured mechanically with limited accuracy. As a result, the terahertz absorption spectrum is usually determined incorrectly. In this paper, an iterative algorithm is proposed to accurately determine sample thickness. This algorithm is independent of the initial value used and results in convergent calculations. Precision in sample thickness can be improved to 0.1 μm. A more precise absorption spectrum can then be extracted. By comparing the proposed method with the traditional method based on mechanical thickness measurements, quantitative analysis experiments on a three-component amino acid mixture show that the global error decreased from 0.0338 to 0.0301.
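
    The iterative algorithm itself is not given in the abstract. As a minimal illustration of why the refined thickness matters (a generic Beer-Lambert propagation with made-up numbers, not the authors' method), the sketch below shows how an error in the assumed thickness translates directly into an error in the extracted absorption coefficient:

        import numpy as np

        # Hypothetical sample: a 500 um pellet with a true absorption of 20 /cm.
        d_true = 500e-6        # true thickness (m)
        alpha_true = 2000.0    # true absorption coefficient (1/m)

        # Field transmission for power attenuation exp(-alpha * d); Fresnel losses ignored.
        T = np.exp(-alpha_true * d_true / 2.0)

        for d_err in (10e-6, 1e-6, 0.1e-6):   # mechanical vs. refined thickness uncertainty
            alpha_est = -2.0 * np.log(T) / (d_true + d_err)
            rel_err = abs(alpha_est - alpha_true) / alpha_true
            print(f"thickness error {d_err*1e6:5.1f} um -> alpha error {rel_err:.2%}")

    With a 10 μm mechanical uncertainty the extracted coefficient is off by about 2%, while the 0.1 μm precision quoted above brings this below 0.02%.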

  14. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  15. Precision Falling Body Experiment

    ERIC Educational Resources Information Center

    Blackburn, James A.; Koenig, R.

    1976-01-01

    Described is a simple apparatus to determine acceleration due to gravity. It utilizes direct contact switches in lieu of conventional photocells to time the fall of a ball bearing. Accuracies to better than one part in a thousand were obtained. (SL)
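
    As a minimal worked example of the data reduction implied by such an apparatus (release from rest between two contact switches; the numbers are illustrative, not from the article), g follows from d = g t²/2:

        # Free fall from rest between two switches: d = g * t**2 / 2  =>  g = 2 * d / t**2
        d = 1.000      # drop distance (m), hypothetical
        t = 0.4516     # measured fall time (s), hypothetical

        g = 2.0 * d / t**2
        print(f"g = {g:.3f} m/s^2")

        # A ~0.2 ms timing error gives a relative error 2*dt/t of about 1e-3,
        # consistent with the part-in-a-thousand accuracy quoted above.
        dt = 0.0002
        print(f"relative uncertainty ~ {2 * dt / t:.1e}")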

  16. Advances in Swept-Wavelength Interferometry for Precision Measurements

    NASA Astrophysics Data System (ADS)

    Moore, Eric D.

    2011-12-01

    Originally developed for radar applications in the 1950s, swept-wavelength interferometry (SWI) at optical wavelengths has been an active area of research for the past thirty years, with applications in fields ranging from fiber optic telecommunications to biomedical imaging. It now forms the basis of several measurement techniques, including optical frequency domain reflectometry (OFDR), swept-source optical coherence tomography (SS-OCT), and frequency-modulated continuous-wave (FMCW) lidar. In this thesis, I present several novel contributions to the field of SWI that include improvements and extensions to the state of the art in SWI for performing precision measurements. The first is a method for accurately monitoring the instantaneous frequency of the tunable source to accommodate nonlinearities in the source tuning characteristics. This work extends the commonly used method incorporating an auxiliary interferometer to the increasingly relevant cases of long interferometer path mismatches and high-speed wavelength tuning. The second contribution enables precision absolute range measurements to within a small fraction of the transform-limited range resolution of the SWI system. This is accomplished through the use of digital filtering in the time domain and phase slope estimation in the frequency domain. Measurements of optical group delay with attosecond-level precision are experimentally demonstrated and applied to measurements of group refractive index and physical thickness. The accuracy of the group refractive index measurement is shown to be on the order of 10⁻⁶, while measurements of absolute thicknesses of macroscopic samples are accomplished with accuracy on the order of 10 nm. Furthermore, sub-nanometer uncertainty for relative thickness measurements can be achieved. For the case of crystalline silicon wafers, the achievable uncertainty is on the same order as the Si-Si bond length, opening the door to potential thickness profiling with single atomic

  17. Manufacturing Precise, Lightweight Paraboloidal Mirrors

    NASA Technical Reports Server (NTRS)

    Hermann, Frederick Thomas

    2006-01-01

    A process for fabricating a precise, diffraction-limited, ultra-lightweight, composite-material (matrix/fiber) paraboloidal telescope mirror has been devised. Unlike the traditional process of fabrication of heavier glass-based mirrors, this process involves a minimum of manual steps and subjective judgment. Instead, this process involves objectively controllable, repeatable steps; hence, this process is better suited for mass production. Other processes that have been investigated for fabrication of precise composite-material lightweight mirrors have resulted in print-through of fiber patterns onto reflecting surfaces, and have not provided adequate structural support for maintenance of stable, diffraction-limited surface figures. In contrast, this process does not result in print-through of the fiber pattern onto the reflecting surface and does provide a lightweight, rigid structure capable of maintaining a diffraction-limited surface figure in the face of changing temperature, humidity, and air pressure. The process consists mainly of the following steps: 1. A precise glass mandrel is fabricated by conventional optical grinding and polishing. 2. The mandrel is coated with a release agent and covered with layers of a carbon-fiber composite material. 3. The outer surface of the outer layer of the carbon-fiber composite material is coated with a surfactant chosen to provide for the proper flow of an epoxy resin to be applied subsequently. 4. The mandrel as thus covered is mounted on a temperature-controlled spin table. 5. The table is heated to a suitable temperature and spun at a suitable speed as the epoxy resin is poured onto the coated carbon-fiber composite material. 6. The surface figure of the optic is monitored and adjusted by use of traditional Ronchi, Foucault, and interferometric optical measurement techniques while the speed of rotation and the temperature are adjusted to obtain the desired figure. The proper selection of surfactant, speed of rotation
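
    The spin-table step relies on the fact that the free surface of a rotating liquid settles into a paraboloid. The relation below is standard rotating-fluid physics rather than anything stated in the abstract, and the numbers are purely illustrative:

        import math

        G = 9.81  # m/s^2

        def spin_rate_for_focal_length(f_m):
            """Angular speed at which a spinning liquid surface z = omega^2 r^2 / (2 g)
            forms a paraboloid of focal length f = g / (2 omega^2)."""
            omega = math.sqrt(G / (2.0 * f_m))
            return omega, omega * 60.0 / (2.0 * math.pi)   # rad/s and rpm

        # Example: a mirror with a 1.0 m focal length needs roughly 21 rpm.
        omega, rpm = spin_rate_for_focal_length(1.0)
        print(f"omega = {omega:.2f} rad/s ({rpm:.1f} rpm)")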

  18. Precision positioning system based on intelligent Fuzzy-PID control

    NASA Astrophysics Data System (ADS)

    Liu, Zhen; Zhang, Liqiong; Li, Yan

    2010-08-01

    To break through the limitations of the static and dynamic characteristics of conventional step-motor-driven open-loop positioning devices, a two-dimensional precision positioning system with a travel range of 100 mm × 100 mm has been developed. This paper presents its structure, control principle and performance experiments. This system, equipped with cross roller guides working as linear guiding elements, is driven by step motors through ball screw transmission. A three-axis dual-frequency laser interferometric measurement system is established for real-time measurement and feedback of the system's movements in three degrees of freedom (DOF), and an intelligent Fuzzy-PID controller is implemented for the system's motion control. In the controller, the PID module calculates the output for the motor drivers and its initial parameters are tuned through an expansion of the critical proportioning method; the Fuzzy module optimizes the PID parameters to fulfill the specific requirements of different movement stages. A dead zone control mechanism is developed in this controller to minimize oscillations around the target position. Experimental results indicate that the system with the Fuzzy-PID controller shows a faster response than that with an ordinary PID controller. Moreover, with this controller implemented, the developed precision positioning system achieves better repeatability (+/-2 μm) and accuracy (+/-2.5 μm) within the full range than the open-loop system using step motors.
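
    The controller equations are not given in the abstract. The following is a minimal sketch of a discrete PID step with the dead-zone logic described above (gains, sample time and dead-zone width are assumed values; the fuzzy gain-scheduling layer is omitted):

        class DeadZonePID:
            """Discrete PID that freezes its output once the position error falls
            inside a small dead zone, suppressing oscillation around the target."""

            def __init__(self, kp, ki, kd, dt, dead_zone):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.dt = dt
                self.dead_zone = dead_zone
                self.integral = 0.0
                self.prev_error = 0.0

            def step(self, target, measured):
                error = target - measured
                if abs(error) < self.dead_zone:   # inside dead zone: hold output
                    return 0.0
                self.integral += error * self.dt
                derivative = (error - self.prev_error) / self.dt
                self.prev_error = error
                return self.kp * error + self.ki * self.integral + self.kd * derivative

        # Example: 1 kHz loop, positions in mm, +/-2 um dead zone.
        pid = DeadZonePID(kp=1.2, ki=0.4, kd=0.05, dt=1e-3, dead_zone=2e-3)
        drive = pid.step(target=10.0, measured=9.95)

    In a full implementation the fuzzy module would retune kp, ki and kd on-line between the approach and settling phases, and the dead zone would prevent hunting once the interferometer feedback reports the stage at the target.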

  19. Precision Tiltmeter as a Reference for Slope MeasuringInstruments

    SciTech Connect

    Kirschman, Jonathan L.; Domning, Edward E.; Morrison, Gregory Y.; Smith, Brian V.; Yashchuk, Valeriy V.

    2007-08-01

    The next generation of synchrotrons and free electron lasers requires extremely high-performance x-ray optical systems for proper focusing. The necessary optics cannot be fabricated without the use of precise optical metrology instrumentation. In particular, the Long Trace Profiler (LTP), based on the pencil-beam interferometer, is a valuable tool for low-spatial-frequency slope measurement of x-ray optics. The limitations of such a device are set by the amount of systematic error and noise. A significant improvement of LTP performance was the addition of an optical reference channel, which made it possible to partially account for systematic errors associated with wiggling and wobbling of the LTP carriage. However, the optical reference is affected by changing optical path length, non-homogeneous optics, and air turbulence. In the present work, we experimentally investigate the questions related to the use of a precision tiltmeter as a reference channel. The dependence of the tiltmeter performance on horizontal acceleration, temperature drift, motion regime, and the kinematic scheme of the translation stage has been investigated. It is shown that with an appropriate experimental arrangement, the tiltmeter provides a slope reference for the LTP system with accuracy at the level of 0.1 μrad (rms).

  20. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
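
    As a sketch of the error-ellipsoid construction described above (the covariance values are invented, not the TerraSAR-X results), the semi-axes and orientation of the confidence ellipsoid follow from an eigendecomposition of the 3 × 3 variance-covariance matrix:

        import numpy as np
        from scipy.stats import chi2

        def error_ellipsoid(cov, confidence=0.99):
            """Semi-axis lengths and axis directions of the 3D confidence ellipsoid."""
            eigval, eigvec = np.linalg.eigh(cov)          # ascending eigenvalues
            scale = np.sqrt(chi2.ppf(confidence, df=3))   # ~3.37 for 99%, 3 DOF
            return scale * np.sqrt(eigval), eigvec        # semi-axes, columns = directions

        # Hypothetical covariance (m^2): cm-level in azimuth/range, m-level in cross-range.
        cov = np.diag([0.02**2, 0.03**2, 1.5**2])
        semi_axes, directions = error_ellipsoid(cov)
        print(semi_axes)   # strongly elongated, "cigar-shaped" ellipsoid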

  1. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  2. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps with increasing accuracy. Spherical surfaces are finished in a last step with large tools to smooth the surface. The required surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate with zonal processes. This work investigates the formation of surface errors from grinding to polishing by analyzing the surfaces at each machining step using non-contact interferometric methods. The errors on the surface can be distinguished as described in DIN 4760, whereby 2nd to 3rd order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed in such a way that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps like grinding and main polishing. Additionally, it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools that have a lateral size below the wavelength of the error structure. The presented considerations may be used to develop proposals to handle surface errors.

  3. Precision atomic beam density characterization by diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Oxley, Paul; Wihbey, Joseph

    2016-09-01

    We provide experimental and theoretical details of a simple technique to determine absolute line-of-sight integrated atomic beam densities based on resonant laser absorption. In our experiments, a thermal lithium beam is chopped on and off while the frequency of a laser crossing the beam at right angles is scanned slowly across the resonance transition. A lock-in amplifier detects the laser absorption signal at the chop frequency from which the atomic density is determined. The accuracy of our experimental method is confirmed using the related technique of wavelength modulation spectroscopy. For beams which absorb of order 1% of the incident laser light, our measurements allow the beam density to be determined to an accuracy better than 5% and with a precision of 3% on a time scale of order 1 s. Fractional absorptions of order 10⁻⁵ are detectable on a one-minute time scale when we employ a double laser beam technique which limits laser intensity noise. For a lithium beam with a thickness of 9 mm, we have measured atomic densities as low as 5 × 10⁴ atoms cm⁻³. The simplicity of our technique and the details we provide should allow our method to be easily implemented in most atomic or molecular beam apparatuses.
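
    A minimal sketch of the underlying relation (the standard Beer-Lambert form; the paper's lock-in detection and wavelength-modulation cross-check are not modeled, and the cross-section value below is only illustrative) converts a measured fractional absorption into a line-of-sight integrated density:

        import numpy as np

        def column_density(fractional_absorption, cross_section_cm2):
            """Line-of-sight integrated density n*L (cm^-2) from I/I0 = exp(-sigma*n*L)."""
            return -np.log(1.0 - fractional_absorption) / cross_section_cm2

        # Hypothetical values: 1% absorption, effective peak cross-section 2e-11 cm^2.
        sigma = 2e-11                      # cm^2 (depends on transition and Doppler width)
        nL = column_density(0.01, sigma)   # ~5e8 cm^-2
        print(f"mean density ~ {nL / 0.9:.2e} atoms/cm^3 for a 9 mm thick beam")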

  4. High precision predictions for exclusive VH production at the LHC

    DOE PAGES

    Li, Ye; Liu, Xiaohui

    2014-06-04

    We present a resummation-improved prediction for pp → VH + 0 jets at the Large Hadron Collider. We focus on highly-boosted final states in the presence of a jet veto to suppress the $t\bar{t}$ background. In this case, conventional fixed-order calculations are plagued by the existence of large Sudakov logarithms $\alpha_s^n \log^m(p_T^{\mathrm{veto}}/Q)$ for $Q \sim m_V + m_H$ which lead to unreliable predictions as well as large theoretical uncertainties, and thus limit the accuracy when comparing experimental measurements to the Standard Model. In this work, we show that the resummation of Sudakov logarithms beyond the next-to-next-to-leading-log accuracy, combined with the next-to-next-to-leading-order calculation, reduces the scale uncertainty and stabilizes the perturbative expansion in the region where the vector bosons carry large transverse momentum. Thus, our result improves the precision with which Higgs properties can be determined from LHC measurements using boosted Higgs techniques.

  5. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
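
    As an illustrative sketch of a single damped least-squares correction step (the generic DLS formula; the paper's identified kinematic model, weight matrix and recursive LQR variant are not reproduced), the additive joint correction that produces a desired end-effector pose correction dx is:

        import numpy as np

        def dls_joint_correction(jacobian, dx, damping=0.01):
            """Damped least-squares correction dq = J^T (J J^T + lambda^2 I)^-1 dx.
            Stays well-behaved near singular configurations, unlike the plain
            pseudo-inverse, at the cost of a small bias controlled by `damping`."""
            J = np.asarray(jacobian)
            JJt = J @ J.T + damping**2 * np.eye(J.shape[0])
            return J.T @ np.linalg.solve(JJt, dx)

        # Example with a hypothetical 6x6 Jacobian and a small pose error
        # (metres for position, radians for orientation).
        J = np.eye(6) + 0.01 * np.random.default_rng(0).standard_normal((6, 6))
        dx = np.array([1e-3, -2e-3, 0.5e-3, 0.0, 0.0, 1e-4])
        dq = dls_joint_correction(J, dx)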

  6. Fourier transform ion cyclotron resonance versus time of flight for precision mass measurements

    SciTech Connect

    Kouzes, R.T.

    1993-02-01

    Both Fourier Transform Ion Cyclotron Resonance and ICR Time-of-Flight mass spectroscopy (FTICR-MS and ICR-TOF-MS, respectively) have been applied to precision atomic mass measurements. This paper reviews the status of these approaches and compares their limitations. Comparisons are made of FTICR-MS and ICR-TOF-MS for application to precision atomic mass measurements of stable and unstable nuclei, where the relevant scale is an accuracy of 1 keV and where half-lives are longer than 10 milliseconds (optimistically). The atomic mass table is built up from mass chains, and ICR-MS brings a method of producing new types of mass chains to the mass measurement arena.

  7. Synthesis of a combined system for precise stabilization of the Spektr-UF observatory: II

    NASA Astrophysics Data System (ADS)

    Bychkov, I. V.; Voronov, V. A.; Druzhinin, E. I.; Kozlov, R. I.; Ul'yanov, S. A.; Belyaev, B. B.; Telepnev, P. P.; Ul'yashin, A. I.

    2014-03-01

    The paper presents the second part of the results of exploratory studies for the development of a combined system for high-precision stabilization of the optical telescope of the planned Spektr-UF international observatory [1]. A new modification of the rigorous method for the synthesis of nonlinear discrete-continuous stabilization systems with uncertainties is described, which is based on the minimization of the guaranteed accuracy estimate calculated using vector Lyapunov functions. Using this method, the feedback parameters in the mode of precise inertial stabilization of the optical telescope axis are synthesized taking into account structural nonrigidity, quantization of signals in time and level, orientation-sensor errors, and the errors and limits of the control torques of the flywheel actuators. The results of numerical experiments that demonstrate the quality of the synthesized system are presented.

  8. The Precise Location of the Soft Gamma Repeater SGR 1627-41 with Chandra

    NASA Technical Reports Server (NTRS)

    Wachter, S.; Kouveliotou, C.; Patel, S. K.; Tennant, A. F.; Woods, P. M.; Eichler, D.; Lyubarsky, Y.; Bouchet, P.

    2003-01-01

    We report the precise localization of the Soft Gamma Repeater SGR 1627-41 with the Chandra X-ray Observatory. The best position for SGR 1627-41 was determined to be RA=16:35:51.844, DEC=-47:35:23.31 (J2000) with an accuracy of 0.6 arcsec. We present the results of our search for an IR counterpart to SGR 1627-41 and compare our results to the existing detections and limits of other magnetar infrared and optical observations in the literature. We also present new observations of SGR 1806-20 obtained during the recent reactivation of the source. In addition, we have determined a precise location for archival Chandra observations and reanalyzed archival IR data in the search for a counterpart.

  9. Comb-calibrated laser ranging for three-dimensional surface profiling with micrometer-level precision at a distance.

    PubMed

    Baumann, E; Giorgetta, F R; Deschênes, J-D; Swann, W C; Coddington, I; Newbury, N R

    2014-10-20

    Non-contact surface mapping at a distance is of interest in diverse applications including industrial metrology, manufacturing, forensics, and artifact documentation and preservation. Frequency modulated continuous wave (FMCW) laser detection and ranging (LADAR) is a promising approach since it offers shot-noise limited precision/accuracy, high resolution and high sensitivity. We demonstrate a scanning imaging system based on a frequency-comb calibrated FMCW LADAR and real-time digital signal processing. This system can obtain three-dimensional images of a diffusely scattering surface at stand-off distances up to 10.5 m with sub-micrometer accuracy and with a precision below 10 µm, limited by fundamental speckle noise. Because of its shot-noise limited sensitivity, this comb-calibrated FMCW LADAR has a large dynamic range, which enables precise mapping of scenes with vastly differing reflectivities such as metal, dirt or vegetation. The current system is implemented with fiber-optic components, but the basic system architecture is compatible with future optically integrated, on-chip systems. PMID:25401525
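
    A minimal sketch of the core FMCW ranging relation (a textbook formula; the comb calibration, real-time processing and speckle limits discussed above are not modeled): for a linear optical frequency sweep of bandwidth B over time T, the stand-off distance follows from the measured beat frequency.

        C = 299_792_458.0  # speed of light (m/s)

        def fmcw_range(beat_hz, sweep_bandwidth_hz, sweep_time_s):
            """FMCW range R = c * f_beat / (2 * kappa), with chirp rate kappa = B / T."""
            kappa = sweep_bandwidth_hz / sweep_time_s
            return C * beat_hz / (2.0 * kappa)

        # Hypothetical sweep of 1 THz in 1 ms: a 70 MHz beat corresponds to ~10.5 m,
        # and the transform-limited range resolution c / (2B) is ~150 um.
        print(f"{fmcw_range(70e6, 1e12, 1e-3):.3f} m")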

  10. Limb volume measurements: comparison of accuracy and decisive parameters of the most used present methods.

    PubMed

    Chromy, Adam; Zalud, Ludek; Dobsak, Petr; Suskevic, Igor; Mrkvicova, Veronika

    2015-01-01

    Limb volume measurements are used for evaluating growth of muscle mass and the effectiveness of strength training. Besides sport sciences, they are used, e.g., for the detection of oedemas, lymphedemas or carcinomas, or for examinations of muscle atrophy. There are several commonly used methods, but a clear comparison showing their advantages and limits is lacking; the accuracy of each method has only been roughly estimated. The aim of this paper is to determine and experimentally verify their accuracy and compare them with each other. The Water Displacement method (WD), three methods based on circumferential measures (the Frustum Sign Model (FSM), Disc Model (DM) and Partial Frustum Model (PFM)) and two 3D-scan-based methods, Computed Tomography (CT) and Magnetic Resonance Imaging (MRI), were compared. Precise reference cylinders and the limbs of two human subjects were measured 10 times by each method. The person-dependence of the methods was also tested by having 3 different people measure the same object 10 times. Accuracies: WD 0.3%; FSM 2-8% depending on the person; DM and PFM 1-8%; MRI 2% (hand) or 8% (finger); CT 0.5% (hand) or 2% (finger). Measurement times: FSM 1 min, CT 7 min, WD, DM and PFM 15 min, MRI 19 min. WD was found to be the best method for most uses, with the best accuracy. CT offers almost the same accuracy and allows measurements of specific regions (e.g. particular muscles), as does MRI, whose accuracy is worse, though it is not harmful. The Frustum Sign Model is usable for very fast estimation of limb volume, but with lower accuracy; the Disc Model and Partial Frustum Model are useful in cases when Water Displacement cannot be used. PMID:26618096
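
    To make the circumference-based models concrete, the sketch below sums segment volumes from circumferences measured at equal spacing along the limb, using generic truncated-cone and disc formulas (the paper's exact FSM and PFM definitions may differ; the measurements are hypothetical):

        import math

        def frustum_volume(circumferences, segment_height):
            """Each segment is a truncated cone: V = h (C1^2 + C1*C2 + C2^2) / (12*pi)."""
            return sum(segment_height * (c1*c1 + c1*c2 + c2*c2) / (12.0 * math.pi)
                       for c1, c2 in zip(circumferences, circumferences[1:]))

        def disc_volume(circumferences, segment_height):
            """Each segment is a cylinder: V = h * C^2 / (4*pi)."""
            return sum(segment_height * c * c / (4.0 * math.pi)
                       for c in circumferences[:-1])

        # Hypothetical forearm circumferences (cm) every 4 cm; volumes in cm^3.
        circs = [26.0, 25.1, 24.0, 22.6, 20.9, 18.8, 17.5]
        print(frustum_volume(circs, 4.0), disc_volume(circs, 4.0))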

  11. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  12. Construction and precision evaluation of the GPS virtual reference station network in North Taiwan

    NASA Astrophysics Data System (ADS)

    Yeh, T.; Lee, Z.; Chang, M.; Chen, C.

    2006-12-01

    Conventional single-reference-station positioning is affected by systematic errors such as ionospheric and tropospheric delay, so the rover must be located within 10 km of the reference station in order to achieve centimeter-level accuracy. Medium-range real-time kinematic positioning has been proven feasible and can be used for high-precision applications. However, the longer the baseline, the longer it takes to resolve the integer ambiguities, because systematic errors cannot be eliminated effectively by double-differencing. Recently, network approaches have been proposed to overcome the limitations of single-reference-station positioning. Real-time modeling of systematic errors can be achieved with the use of a GPS network. To expand the effective range and decrease the density of reference stations, the Land Survey Bureau, Ministry of the Interior in Taiwan has set up a national GPS network. In order to obtain high-precision positioning and provide multi-purpose services, a GPS network of 27 stations has already been constructed in North Taiwan. Users can download the corrections from the data center via wireless internet and obtain centimeter-level positioning accuracy. The service is very useful for surveyors, and high-precision coordinates can be obtained in real time.
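
    To illustrate why baseline length matters, the sketch below forms a between-receiver, between-satellite double difference (standard GNSS bookkeeping with hypothetical observation values): receiver and satellite clock errors cancel exactly, while atmospheric errors cancel only to the extent that they are common to both receivers, which is why single-baseline accuracy degrades beyond about 10 km.

        def double_difference(obs_rover, obs_base, sat_i, sat_j):
            """DD = (phi_rover^i - phi_base^i) - (phi_rover^j - phi_base^j)."""
            sd_i = obs_rover[sat_i] - obs_base[sat_i]   # single difference, satellite i
            sd_j = obs_rover[sat_j] - obs_base[sat_j]   # single difference, satellite j
            return sd_i - sd_j

        # Hypothetical carrier-phase ranges (metres) for two satellites.
        rover = {"G05": 21_512_345.112, "G12": 20_998_771.430}
        base  = {"G05": 21_512_401.870, "G12": 20_998_830.215}
        print(double_difference(rover, base, "G05", "G12"))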

  13. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

    The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as the type of soil block used (monolithic or reconstructed), drainage (by gravity or vacuum, or a maintained water table), or weighing versus non-weighing designs. In 2004 experimental investigations were set up to assess the impact of different farming systems on the groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low-input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: the lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m² surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracer experiments have been carried out to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters. The results show that such effects can be neglected. Precisely weighing load cells are used to constantly determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent

  14. A passion for precision

    ScienceCinema

    None

    2016-07-12

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  15. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  16. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  17. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High- Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  18. Precision orbit determination of altimetric satellites

    NASA Astrophysics Data System (ADS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-11-01

    The ability to determine accurate global sea level variations is important to both detection and understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements are only recently possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well defined terrestrial reference frame. SLR tracking in support of precision orbit determination of altimetric satellites is significant. Recent examples are the use of SLR as the primary tracking systems for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long term absolute sea level measurements. The U.S. Navy's Geosat mission is equipped with only Doppler beacons and lacks laser retroreflectors. However, its orbits, even those computed using the available full 40-station Tranet tracking network, show significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  19. Precision tracking control of dual-stage actuation system for optical manufacturing

    NASA Astrophysics Data System (ADS)

    Dong, W.; Tang, J.

    2009-03-01

    Actuators with high linear motion speed, high positioning resolution and long motion stroke are needed in many precision machining systems. In some current systems, voice coil motors (VCMs) are implemented for servo control. While the voice coil motors may provide long motion stroke needed in many applications, the main obstacle that hinders the improvement of the machining accuracy and efficiency is its limited bandwidth. To fundamentally solve this issue, we propose to develop a dual-stage actuation system that consists of a voice coil motor that covers the coarse motion and a piezoelectric stack actuator that induces the fine motion to enhance the positioning accuracy. A flexure hinge-based mechanism is developed to connect these two actuators together. A series of numerical and experimental studies are carried out to facilitate the system design and preliminary control development.
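
    The abstract stops short of the control law. A common allocation for such dual-stage systems, shown here as a hedged sketch with assumed bandwidths and stroke limits, is to low-pass filter the position command for the long-stroke VCM and let the short-stroke piezo stack make up the high-frequency residual:

        import numpy as np

        def split_command(reference, dt, vcm_bandwidth_hz, piezo_stroke):
            """Split a position reference into a coarse (VCM) and a fine (piezo) command.
            The VCM follows a first-order low-pass filtered reference; the piezo takes
            the residual, saturated to its limited stroke."""
            alpha = dt / (dt + 1.0 / (2.0 * np.pi * vcm_bandwidth_hz))
            coarse = np.empty_like(reference)
            acc = reference[0]
            for k, r in enumerate(reference):
                acc += alpha * (r - acc)          # discrete first-order low-pass
                coarse[k] = acc
            fine = np.clip(reference - coarse, -piezo_stroke, piezo_stroke)
            return coarse, fine

        # Hypothetical 1 kHz trajectory (mm), 20 Hz VCM bandwidth, +/-15 um piezo stroke.
        t = np.arange(0.0, 0.5, 1e-3)
        ref = 5.0 * (1.0 - np.exp(-t / 0.05))
        coarse_cmd, fine_cmd = split_command(ref, 1e-3, 20.0, 15e-3)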

  20. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  1. Tightly coupled integration of ionosphere-constrained precise point positioning and inertial navigation systems.

    PubMed

    Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald

    2015-01-01

    The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system and together form a new navigation system with a higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which the IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data of both airborne and vehicle experiments with a geodetic GPS receiver and tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS. PMID:25763647
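
    For reference, conventional LC-PPP forms the ionosphere-free combination from dual-frequency observations (the standard GPS L1/L2 relation below; the IC-PPP approach of the paper instead processes the raw observations with an ionospheric constraint):

        F1 = 1575.42e6   # GPS L1 frequency (Hz)
        F2 = 1227.60e6   # GPS L2 frequency (Hz)

        def ionosphere_free(p1, p2):
            """P_IF = (f1^2 * P1 - f2^2 * P2) / (f1^2 - f2^2).
            Removes the first-order ionospheric delay but amplifies observation noise
            by roughly a factor of three and discards the ionospheric information
            that IC-PPP exploits."""
            return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

        # Hypothetical L1/L2 pseudoranges (m) differing by a few metres of ionospheric delay.
        print(ionosphere_free(22_000_003.2, 22_000_005.3))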

  2. The Coyote Universe II: Cosmological Models and Precision Emulation of the Nonlinear Matter Power Spectrum

    SciTech Connect

    Heitmann, Katrin; Habib, Salman; Higdon, David; Williams, Brian J; White, Martin; Wagner, Christian

    2008-01-01

    The power spectrum of density fluctuations is a foundational source of cosmological information. Precision cosmological probes targeted primarily at investigations of dark energy require accurate theoretical determinations of the power spectrum in the nonlinear regime. To exploit the observational power of future cosmological surveys, accuracy demands on the theory are at the one percent level or better. Numerical simulations are currently the only way to produce sufficiently error-controlled predictions for the power spectrum. The very high computational cost of (precision) N-body simulations is a major obstacle to obtaining predictions in the nonlinear regime, while scanning over cosmological parameters. Near-future observations, however, are likely to provide a meaningful constraint only on constant dark energy equation of state 'wCDM' cosmologies. In this paper we demonstrate that a limited set of only 37 cosmological models -- the 'Coyote Universe' suite -- can be used to predict the nonlinear matter power spectrum at the required accuracy over a prior parameter range set by cosmic microwave background observations. This paper is the second in a series of three, with the final aim to provide a high-accuracy prediction scheme for the nonlinear matter power spectrum for wCDM cosmologies.

  3. Attentional priority determines working memory precision.

    PubMed

    Klyszejko, Zuzanna; Rahmati, Masih; Curtis, Clayton E

    2014-12-01

    Visual working memory is a system used to hold information actively in mind for a limited time. The number of items and the precision with which we can store information has limits that define its capacity. How much control do we have over the precision with which we store information when faced with these severe capacity limitations? Here, we tested the hypothesis that rank-ordered attentional priority determines the precision of multiple working memory representations. We conducted two psychophysical experiments that manipulated the priority of multiple items in a two-alternative forced choice task (2AFC) with distance discrimination. In Experiment 1, we varied the probabilities with which memorized items were likely to be tested. To generalize the effects of priority beyond simple cueing, in Experiment 2, we manipulated priority by varying monetary incentives contingent upon successful memory for items tested. Moreover, we illustrate our hypothesis using a simple model that distributed attentional resources across items with rank-ordered priorities. Indeed, we found evidence in both experiments that priority affects the precision of working memory in a monotonic fashion. Our results demonstrate that representations of priority may provide a mechanism by which resources can be allocated to increase the precision with which we encode and briefly store information.

  4. Attentional priority determines working memory precision.

    PubMed

    Klyszejko, Zuzanna; Rahmati, Masih; Curtis, Clayton E

    2014-12-01

    Visual working memory is a system used to hold information actively in mind for a limited time. The number of items and the precision with which we can store information has limits that define its capacity. How much control do we have over the precision with which we store information when faced with these severe capacity limitations? Here, we tested the hypothesis that rank-ordered attentional priority determines the precision of multiple working memory representations. We conducted two psychophysical experiments that manipulated the priority of multiple items in a two-alternative forced choice task (2AFC) with distance discrimination. In Experiment 1, we varied the probabilities with which memorized items were likely to be tested. To generalize the effects of priority beyond simple cueing, in Experiment 2, we manipulated priority by varying monetary incentives contingent upon successful memory for items tested. Moreover, we illustrate our hypothesis using a simple model that distributed attentional resources across items with rank-ordered priorities. Indeed, we found evidence in both experiments that priority affects the precision of working memory in a monotonic fashion. Our results demonstrate that representations of priority may provide a mechanism by which resources can be allocated to increase the precision with which we encode and briefly store information. PMID:25240420

  5. Precise clock synchronization protocol

    NASA Astrophysics Data System (ADS)

    Luit, E. J.; Martin, J. M. M.

    1993-12-01

    A distributed clock synchronization protocol is presented which achieves a very high precision without the need for very frequent resynchronizations. The protocol tolerates failures of the clocks: clocks may be too slow or too fast, exhibit omission failures and report inconsistent values. Synchronization takes place in synchronization rounds as in many other synchronization protocols. At the end of each round, clock times are exchanged between the clocks. Each clock applies a convergence function (CF) to the values obtained. This function estimates the difference between its clock and an average clock and corrects its clock accordingly. Clocks are corrected for drift relative to this average clock during the next synchronization round. The protocol is based on the assumption that clock reading errors are small with respect to the required precision of synchronization. It is shown that the CF resynchronizes the clocks with high precision even when relatively large clock drifts are possible. It is also shown that the drift-corrected clocks remain synchronized until the end of the next synchronization round. The stability of the protocol is proven.
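
    A minimal sketch of one possible convergence function in the spirit described, assuming each clock trims the k most extreme readings (to tolerate clocks that are too fast, too slow or inconsistent) and corrects toward the mean of the remainder; the trimming depth and the correction rule are assumptions for illustration, not the paper's exact CF.

      def convergence_correction(own_time, reported_times, k=1):
          """Estimate this clock's offset from an 'average clock'.

          reported_times: readings received from the other clocks at the end of
          the synchronization round.  The k smallest and k largest readings are
          dropped so that up to k faulty clocks cannot drag the average; the
          difference between the trimmed mean and our own reading is returned
          as the correction to apply."""
          readings = sorted(list(reported_times) + [own_time])
          if len(readings) > 2 * k:
              readings = readings[k:len(readings) - k]   # drop k extremes per side
          average_clock = sum(readings) / len(readings)
          return average_clock - own_time

      # Five clocks, one of them wildly fast:
      print(convergence_correction(100.002, [100.001, 99.999, 100.000, 107.0]))  # -> small negative correction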

  6. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-07-01

    The Large Electron-Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while SUSY provides an excellent candidate for dark matter. In addition, SUSY removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  7. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-09-01

    The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  8. Standardization of radon measurements. 2. Accuracy and proficiency testing

    SciTech Connect

    Matuszek, J.M.

    1990-01-01

    The accuracy of in situ environmental radon measurement techniques is reviewed and new data for charcoal canister, alpha-track (track-etch) and electret detectors are presented. Deficiencies reported at the 1987 meeting in Wurenlingen, Federal Republic of Germany, for measurements using charcoal detectors are confirmed by the new results. Accuracy and precision of the alpha-track measurements were better than in 1987. Electret detectors appear to provide a convenient, accurate, and precise system for the measurement of radon concentration. The need for a comprehensive, blind proficiency-testing program is discussed.

  9. Accuracy of Information Processing under Focused Attention.

    ERIC Educational Resources Information Center

    Bastick, Tony

    This paper reports the results of an experiment on the accuracy of information processing during attention focused arousal under two conditions: single estimation and double estimation. The attention of 187 college students was focused by a task requiring high level competition for a monetary prize ($10) under severely limited time conditions. The…

  10. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
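
    The scheme described is high-precision quadrature (tanh-sinh) carried out at very large working precision and distributed across processors. As a single-core illustration (not the authors' code), the mpmath library's quad routine, which defaults to tanh-sinh on finite intervals, shows how the attainable digits track the working precision even for an integrand with endpoint singularities.

      from mpmath import mp, quad, log, pi

      # Tanh-sinh quadrature copes with the logarithmic singularities at both
      # endpoints; raising mp.dps raises the number of correct digits.
      mp.dps = 60                                       # 60-digit working precision
      val = quad(lambda x: log(x) * log(1 - x), [0, 1])
      exact = 2 - pi**2 / 6                             # known closed form of this integral
      print(val)
      print(abs(val - exact))                           # error near the working precision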

  11. A Reusable Design for Precision Lunar Landing Systems

    NASA Technical Reports Server (NTRS)

    Fuhrman, Linda; Brand, Timothy; Fill, Tom; Norris, Lee; Paschall, Steve

    2005-01-01

    The top-level architecture to accomplish NASA's Vision for Space Exploration is to use Lunar missions and systems not just as an end in themselves, but also as testbeds for the more ambitious goals of Human Mars Exploration (HME). This approach means that Lunar missions and systems are most likely going to be targeted for (Lunar) polar missions, and also for long-duration (months) surface stays. This overarching theme creates basic top-level requirements for any next-generation lander system: 1) Long duration stays: a) Multiple landers in close proximity; b) Pinpoint landings for "surface rendezvous"; c) Autonomous landing of pre-positioned assets; and d) Autonomous Hazard Detection and Avoidance. 2) Polar and deep-crater landings (dark); 3) Common/extensible systems for Moon and Mars, crew and cargo. These requirements pose challenging technology and capability needs. Compare and contrast: 4) Apollo: a) 1 km landing accuracy; b) Lunar near-side (well imaged and direct-to-Earth com. possible); c) Lunar equatorial (landing trajectories offer best navigation support from Earth); d) Limited lighting conditions; e) Significant ground-in-the-loop operations; 5) Lunar Access: a) 10-100m landing precision; b) "Anywhere" access includes polar (potentially poor nav. support from Earth) and far side (poor gravity and imaging; no direct-to-Earth com); c) "Anytime" access includes any lighting condition (including dark); d) Full autonomous landing capability; e) Extensible design for tele-operation or operator-in-the-loop; and f) Minimal ground support to reduce operations costs. The Lunar Access program objectives, therefore, are to: a) Develop a baseline Lunar Precision Landing System (PLS) design to enable pinpoint "anywhere, anytime" landings; b) landing precision 10m-100m; c) Any LAT, LON; and d) Any lighting condition. This paper will characterize basic features of the next generation Lunar landing system, including trajectory types, sensor suite options and a reference

  12. Precision experiments in electroweak interactions

    SciTech Connect

    Swartz, M.L.

    1990-03-01

    The electroweak theory of Glashow, Weinberg, and Salam (GWS) has become one of the twin pillars upon which our understanding of all particle physics phenomena rests. It is a brilliant achievement that qualitatively and quantitatively describes all of the vast quantity of experimental data that have been accumulated over some forty years. Note that the word quantitatively must be qualified. The low energy limiting cases of the GWS theory, Quantum Electrodynamics and the V-A Theory of Weak Interactions, have withstood rigorous testing. The high energy synthesis of these ideas, the GWS theory, has not yet been subjected to comparably precise scrutiny. The recent operation of a new generation of proton-antiproton (pp̄) and electron-positron (e+e-) colliders has made it possible to produce and study large samples of the electroweak gauge bosons W± and Z0. We expect that these facilities will enable very precise tests of the GWS theory to be performed in the near future. In keeping with the theme of this Institute, Physics at the 100 GeV Mass Scale, these lectures will explore the current status and the near-future prospects of these experiments.

  13. A lane-level LBS system for vehicle network with high-precision BDS/GPS positioning.

    PubMed

    Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo

    2015-01-01

    In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem. PMID:25755665

  14. A lane-level LBS system for vehicle network with high-precision BDS/GPS positioning.

    PubMed

    Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo

    2015-01-01

    In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem.

  15. A Lane-Level LBS System for Vehicle Network with High-Precision BDS/GPS Positioning

    PubMed Central

    Guo, Chi; Guo, Wenfei; Cao, Guangyi; Dong, Hongbo

    2015-01-01

    In recent years, research on vehicle network location service has begun to focus on its intelligence and precision. The accuracy of space-time information has become a core factor for vehicle network systems in a mobile environment. However, difficulties persist in vehicle satellite positioning since deficiencies in the provision of high-quality space-time references greatly limit the development and application of vehicle networks. In this paper, we propose a high-precision-based vehicle network location service to solve this problem. The major components of this study include the following: (1) application of wide-area precise positioning technology to the vehicle network system. An adaptive correction message broadcast protocol is designed to satisfy the requirements for large-scale target precise positioning in the mobile Internet environment; (2) development of a concurrence service system with a flexible virtual expansion architecture to guarantee reliable data interaction between vehicles and the background; (3) verification of the positioning precision and service quality in the urban environment. Based on this high-precision positioning service platform, a lane-level location service is designed to solve a typical traffic safety problem. PMID:25755665

  16. Performance of Airborne Precision Spacing Under Realistic Wind Conditions

    NASA Technical Reports Server (NTRS)

    Wieland, Frederick; Santos, Michel; Krueger, William; Houston, Vincent E.

    2011-01-01

    With the expected worldwide increase of air traffic during the coming decade, both the Federal Aviation Administration's (FAA's) Next Generation Air Transportation System (NextGen) and Eurocontrol's Single European Sky ATM Research (SESAR) program have, as part of their plans, air traffic management solutions that can increase performance without requiring time-consuming and expensive infrastructure changes. One such solution involves the ability of both controllers and flight crews to deliver aircraft to the runway with greater accuracy than is possible today. Previous research has shown that time-based spacing techniques, wherein the controller assigns a time spacing to each pair of arriving aircraft, are one way to achieve this goal by providing greater runway delivery accuracy that produces a concomitant increase in system-wide performance. The research described herein focuses on a specific application of time-based spacing, called Airborne Precision Spacing (APS), which has evolved over the past ten years. This research furthers APS understanding by studying its performance with realistic wind conditions obtained from atmospheric sounding data and with realistic wind forecasts obtained from the Rapid Update Cycle (RUC) short-range weather forecast. In addition, this study investigates APS performance with limited surveillance range, as provided by the Automatic Dependent Surveillance-Broadcast (ADS-B) system, and with an algorithm designed to improve APS performance when an ADS-B signal is unavailable. The results presented herein quantify the runway threshold delivery accuracy of APS under these conditions, and also quantify resulting workload metrics such as the number of speed changes required to maintain spacing.

  17. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  18. Precise Orbit Determination of BeiDou Navigation Satellite System

    NASA Astrophysics Data System (ADS)

    He, Lina; Ge, Maorong; Wang, Jiexian; Wickert, Jens; Schuh, Harald

    2013-04-01

    China has been developing its own independent satellite navigation system for decades. Now the COMPASS system, also known as BeiDou, is emerging and gaining more and more interest and attention in the worldwide GNSS communities. The current regional BeiDou system is ready for its operational service around the end of 2012, with a constellation including five Geostationary Earth Orbit (GEO), five Inclined Geosynchronous Orbit (IGSO) and four Medium Earth Orbit (MEO) satellites in operation. Besides the open service with positioning accuracy of around 10 m, which is free to civilian users, both precise relative positioning and precise point positioning are demonstrated as well. In order to enhance the BeiDou precise positioning service, Precise Orbit Determination (POD), which is essential to any satellite navigation system, has been investigated and studied thoroughly. To further improve the orbits of different types of satellites, we study the impact of network coverage on POD data products by comparing results from tracking networks over the Chinese territory, the Asian-Pacific region, Asia, and at global scale. Furthermore, we concentrate on the improvement that including MEOs brings to the orbit quality of GEOs and IGSOs. POD with and without MEOs is undertaken and the results are analyzed. Finally, integer ambiguity resolution, which brings substantial improvement to orbits and positions with GPS data, is also carried out and its effect on POD data products is assessed and discussed in detail. Seven weeks of BeiDou data from a ground tracking network deployed by Wuhan University are employed in this study. The test constellation includes four GEO, five IGSO and two MEO satellites in operation. The three-day solution approach is employed to enhance solution strength given the limited coverage of the tracking network and the small movement of most of the satellites. A number of tracking scenarios and processing schemas are identified and processed and overlapping orbit

  19. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
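
    The phase II refinement described above amounts to a regression (double-sampling) estimator: ground truth from the small subsample corrects the photo-interpreted estimate from the large sample. A minimal sketch follows; the variable names and toy numbers are illustrative only, not the AVRI data.

      import numpy as np

      def double_sampling_regression_estimate(x_phase1, x_phase2, y_phase2):
          """Two-phase (double) sampling regression estimator.

          x_phase1: cheap photo-interpreted error indicator on the large phase I sample
          x_phase2: the same indicator on the phase II subsample that was ground-visited
          y_phase2: ground-truth error indicator on the phase II subsample
          Returns the regression-adjusted estimate of the mean of y."""
          x1 = np.asarray(x_phase1, float)
          x2 = np.asarray(x_phase2, float)
          y2 = np.asarray(y_phase2, float)
          b = np.cov(x2, y2, ddof=1)[0, 1] / np.var(x2, ddof=1)   # slope fitted on phase II
          return y2.mean() + b * (x1.mean() - x2.mean())

      # Phase I photo calls (0/1 = correct/error) and the ground-checked subsample:
      print(double_sampling_regression_estimate([0, 1, 0, 0, 1, 1, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0]))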

  20. Precision orbit determination software validation experiment

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Eanes, R. J.; Marsh, J. G.; Williamson, R. G.; Martin, T. V.

    1980-01-01

    This paper presents the results of an experiment which was designed to ascertain the level of agreement between GEODYN and UTOPIA, two completely independent computer programs used for precision orbit determination, and to identify the sources which limit the agreement. For a limited set of models and a seven-day data set arc length, the altitude components of the ephemeris obtained by the two programs agree at the sub-centimeter level throughout the arc.

  1. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity.

  2. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  3. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e- and pp̄ colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  4. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  5. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  6. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  7. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  8. Ultrahigh accuracy imaging modality for super-localization microscopy.

    PubMed

    Chao, Jerry; Ram, Sripad; Ward, E Sally; Ober, Raimund J

    2013-04-01

    Super-localization microscopy encompasses techniques that depend on the accurate localization of individual molecules from generally low-light images. The obtainable localization accuracies, however, are ultimately limited by the image detector's pixelation and noise. We present the ultrahigh accuracy imaging modality (UAIM), which allows users to obtain accuracies approaching the accuracy that is achievable only in the absence of detector pixelation and noise, and which we found can experimentally provide a >200% accuracy improvement over conventional low-light imaging. PMID:23455923

  9. Precision and power grip priming by observed grasping.

    PubMed

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-11-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power device/left-hand). The observed hand was making either accurate or inaccurate precision or power grasps and participants signalled the accuracy of the observed grip by making one or other response depending on instructions. Responses were made faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand was performing the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of the automatic imitation coding of the observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.

  10. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  11. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  12. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10-12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  13. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e+e- linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  14. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
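
    A toy conjugate-normal illustration of the effect described (not the study's tree-mortality models): with a known observation noise, an informative prior tightens the posterior standard deviation relative to a vague prior, while accuracy depends on how well the prior mean is placed.

      import numpy as np

      def normal_posterior(data, prior_mean, prior_sd, obs_sd):
          """Posterior of a normal mean with known obs_sd and a normal prior."""
          data = np.asarray(data, float)
          prior_prec = 1.0 / prior_sd**2
          like_prec = len(data) / obs_sd**2
          post_var = 1.0 / (prior_prec + like_prec)
          post_mean = post_var * (prior_prec * prior_mean + like_prec * data.mean())
          return post_mean, np.sqrt(post_var)

      rng = np.random.default_rng(0)
      y = rng.normal(0.05, 0.02, size=10)   # e.g. ten observed annual mortality rates
      print(normal_posterior(y, prior_mean=0.05, prior_sd=0.01, obs_sd=0.02))   # informative prior: smaller posterior sd
      print(normal_posterior(y, prior_mean=0.0,  prior_sd=10.0, obs_sd=0.02))   # vague prior: larger posterior sd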

  15. Accuracy of laser beam center and width calculations.

    PubMed

    Mana, G; Massa, E; Rovera, A

    2001-03-20

    The application of lasers in high-precision measurements and the demand for accuracy make the plane-wave model of laser beams unsatisfactory. Measurements of the variance of the transverse components of the photon impulse are essential for wavelength determination. Accuracy evaluation of the relevant calculations is thus an integral part of the assessment of the wavelength of stabilized-laser radiation. We present a propagation-of-error analysis on variance calculations when digitized intensity profiles are obtained by means of silicon video cameras. Image clipping criteria are obtained that maximize the accuracy of the computed result.
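
    A minimal numpy sketch of the quantities at issue, the centroid and second-moment widths of a digitized intensity profile, with a simple relative threshold standing in for the image clipping criteria analysed in the paper; the threshold value is purely illustrative.

      import numpy as np

      def beam_center_and_width(image, clip_fraction=0.01):
          """Centroid and second-moment widths of a 2-D intensity profile.

          Pixels below clip_fraction of the peak are zeroed, a crude stand-in
          for the clipping criteria discussed in the paper."""
          img = np.asarray(image, float)
          img = np.where(img >= clip_fraction * img.max(), img, 0.0)
          total = img.sum()
          y, x = np.indices(img.shape)
          cx, cy = (x * img).sum() / total, (y * img).sum() / total
          var_x = ((x - cx) ** 2 * img).sum() / total
          var_y = ((y - cy) ** 2 * img).sum() / total
          return (cx, cy), (np.sqrt(var_x), np.sqrt(var_y))

      # Synthetic Gaussian spot, centroid near (30.2, 33.7), widths near 5 pixels:
      yy, xx = np.mgrid[0:64, 0:64]
      gauss = np.exp(-((xx - 30.2) ** 2 + (yy - 33.7) ** 2) / (2 * 5.0 ** 2))
      print(beam_center_and_width(gauss))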

  16. Precision measurement of refractive index of air based on laser synthetic wavelength interferometry with Edlén equation estimation.

    PubMed

    Yan, Liping; Chen, Benyong; Zhang, Enzheng; Zhang, Shihua; Yang, Ye

    2015-08-01

    A novel method for the precision measurement of the refractive index of air (n_air), based on combining laser synthetic wavelength interferometry with an Edlén equation estimation, is proposed. First, n_air_e is calculated from the modified Edlén equation according to environmental parameters measured by low-precision sensors with an uncertainty of 10^-6. Second, a unique integral fringe number N corresponding to n_air is determined based on the calculated n_air_e. Then, a fractional fringe ε corresponding to n_air can be obtained with high accuracy according to the principle of fringe subdivision of laser synthetic wavelength interferometry. Finally, highly accurate measurement of n_air is achieved from the determined fringes N and ε. The merit of the proposed method is that it not only solves the problem of the measurement accuracy of n_air being limited by the accuracies of environmental sensors, but also avoids the complicated vacuum pumping used to measure the integral fringe N in conventional laser interferometry. To verify the feasibility of the proposed method, comparison experiments against Edlén equations over short and long time scales were performed. Experimental results show that the measurement accuracy of n_air is better than 2.5 × 10^-8 in short-time tests and 6.2 × 10^-8 in long-time tests. PMID:26329237
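
    A schematic of the integer/fractional fringe bookkeeping the abstract describes, assuming for illustration a geometry in which the total fringe count equals (n_air - 1) times the path length divided by the synthetic wavelength; the geometry factor, symbol names and numbers below are assumptions, not the authors' setup.

      def refractive_index_of_air(n_edlen, frac_fringe, path_length_m, synth_wavelength_m):
          """Combine a coarse Edlén estimate with a measured fractional fringe.

          Assumes (illustratively) that the total fringe count is
              x = (n_air - 1) * L / lambda_synth,
          so the coarse Edlén value fixes the integer part N while the
          interferometer supplies the precise fractional part eps."""
          x_estimate = (n_edlen - 1.0) * path_length_m / synth_wavelength_m
          N = round(x_estimate - frac_fringe)   # integer fringe from the coarse estimate
          return 1.0 + (N + frac_fringe) * synth_wavelength_m / path_length_m

      # Illustrative numbers: coarse Edlén value, measured fraction, 3 m path, 100 um synthetic wavelength.
      # The refined value stays within the 1e-6 uncertainty of the coarse estimate.
      print(refractive_index_of_air(1.000271800, 0.162, 3.0, 100e-6))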

  17. Phase space correlation to improve detection accuracy.

    PubMed

    Carroll, T L; Rachford, F J

    2009-09-01

    The standard method used for detecting signals in radar or sonar is cross correlation. The accuracy of the detection with cross correlation is limited by the bandwidth of the signals. We show that by calculating the cross correlation based on points that are nearby in phase space rather than points that are simultaneous in time, the detection accuracy is improved. The phase space correlation technique works for some standard radar signals, but it is especially well suited to chaotic signals because trajectories that are adjacent in phase space move apart from each other at an exponential rate.
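
    A loose numpy sketch of the idea, not the authors' radar or sonar processing chain: the signals are delay-embedded, each received sample is paired with the reference sample whose phase-space point lies nearest, and the correlation is taken over those pairs instead of over simultaneous samples. Embedding dimension and delay are arbitrary choices here.

      import numpy as np

      def delay_embed(x, dim=3, tau=1):
          """Stack delayed copies of x into phase-space (delay-embedding) vectors."""
          n = len(x) - (dim - 1) * tau
          return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

      def phase_space_correlation(reference, received, dim=3, tau=1):
          """Correlate each received sample with the reference sample whose
          phase-space point is nearest, rather than the simultaneous sample."""
          ref = np.asarray(reference, float)
          rec = np.asarray(received, float)
          ref_emb = delay_embed(ref, dim, tau)
          rec_emb = delay_embed(rec, dim, tau)
          matched_ref = []
          for v in rec_emb:
              idx = np.argmin(np.linalg.norm(ref_emb - v, axis=1))   # nearest neighbour in phase space
              matched_ref.append(ref[idx])
          return np.corrcoef(matched_ref, rec[: len(rec_emb)])[0, 1]

      # A time-shifted copy of the reference still correlates strongly in phase space:
      t = np.linspace(0, 20, 400)
      print(phase_space_correlation(np.sin(t), np.sin(t + 0.3)))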

  18. Optical Frequency Stabilization and Optical Phase Locked Loops: Golden Threads of Precision Measurement

    SciTech Connect

    Taubman, Matthew S.

    2013-07-01

    Stabilization of lasers through locking to optical cavities, atomic transitions, and molecular transitions has enabled the field of precision optical measurement since shortly after the invention of the laser. Recent advances in the field have produced an optical clock that is orders of magnitude more stable than those of just a few years prior. Phase locking of one laser to another, or to a frequency offset from another, formed the basis for linking stable lasers across the optical spectrum, such frequency chains exhibiting progressively finer precision through the years. Phase locking between the modes within a femtosecond pulsed laser has yielded the optical frequency comb, one of the most beautiful and useful instruments of our time. This talk gives an overview of these topics, from early work through to the latest 1E-16 thermal noise-limited precision recently attained for a stable laser, and the ongoing quest for ever finer precision and accuracy. The issues of understanding and measuring line widths and shapes are also studied in some depth, highlighting implications for servo design for sub-Hz line widths.

  19. Development of a precision large deployable antenna

    NASA Astrophysics Data System (ADS)

    Iwata, Yoji; Yamamoto, Kazuo; Noda, Takahiko; Tamai, Yasuo; Ebisui, Takashi; Miura, Koryo; Takano, Tadashi

    This paper describes the results of a study of a precision large deployable antenna for the space VLBI satellite 'MUSES-B'. An antenna with high gain and pointing accuracy is required for the mission objective. The frequency bands required are 22, 5 and 1.6 GHz. The required aperture diameter of the reflector is 10 meters. A displaced axis Cassegrain antenna is adopted with a mesh reflector formed in a tension truss concept. Analysis shows the possibility to achieve aperture efficiency of 60 percent at 22.15 GHz and surface accuracy of 0.5 mm rms. A one-fourth scale model of the reflector has been assembled in order to verify the design and clarify problems in manufacturing and assembly processes.

  20. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, the present study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes using the centroid method, the proposed microscope incorporates a high-speed rotating optical diffuser, with which the variation of the image centroid position is reduced and consequently the focusing response is improved. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents another solution for both existing and emerging industrial applications of automated vision inspection.

  1. Visual inspection reliability for precision manufactured parts

    SciTech Connect

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Although visual inspection has been extensively researched since the early 20th century, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  2. Precision laser spectroscopy using acousto-optic modulators

    SciTech Connect

    Van Wijngaarden, W.A.

    1996-12-31

    This paper reports on a new spectroscopic method that uses a frequency-modulated laser to excite an atomic beam. It has an especially promising future given the rapid technological advances in developing new relatively inexpensive acousto-optic and electro-optic modulators. Most significantly, this new method is free of various systematic effects that have limited the accuracy of past experiments. This chapter is organized as follows. Section II briefly reviews some of the advances made in optical spectroscopy during the last few decades. Principally, it discusses the use of Fabry-Perot etalons in conjunction with laser atomic beam spectroscopy. Interferometers have been extensively employed by numerous groups to determine many different kinds of frequency shifts. Section III describes three possible experimental arrangements using optically modulated laser beams to make frequency measurements. The advantages and limitations of these approaches are illustrated in Section IV by three specific examples of experiments that determined isotope shifts and hyperfine structure. Section V discusses some precision Stark shift measurements for optical transitions. It concludes with a summary of polarizability data having uncertainties of less than 0.5%. Sections IV and V also compare the results obtained using a variety of competing spectroscopic techniques. Finally, Section VI gives concluding remarks. 96 refs., 15 figs., 6 tabs.

  3. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  4. High-Precision Distribution of Highly Stable Optical Pulse Trains with 8.8 × 10−19 instability

    PubMed Central

    Ning, B.; Zhang, S. Y.; Hou, D.; Wu, J. T.; Li, Z. B.; Zhao, J. Y.

    2014-01-01

    The high-precision distribution of optical pulse trains via fibre links has had a considerable impact in many fields. In most published work, the accuracy is still fundamentally limited by unavoidable noise sources, such as thermal and shot noise from conventional photodiodes and thermal noise from mixers. Here, we demonstrate a new high-precision timing distribution system that uses a highly precise phase detector to markedly reduce the effect of these limitations. Instead of using photodiodes and microwave mixers, we use several fibre Sagnac-loop-based optical-microwave phase detectors (OM-PDs) to achieve optical-electrical conversion and phase measurements, thereby suppressing the sources of noise and achieving ultra-high accuracy. The results of a distribution experiment using a 10-km fibre link indicate that our system exhibits a residual instability of 2.0 × 10−15 at 1 s and 8.8 × 10−19 at 40,000 s and an integrated timing jitter as low as 3.8 fs in a bandwidth of 1 Hz to 100 kHz. This low instability and timing jitter make it possible for our system to be used in the distribution of optical-clock signals or in applications that require extremely accurate frequency/time synchronisation. PMID:24870442

  5. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron accuracy sensors and actuators, worked in tandem with a lower precision remote controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the difference between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  6. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  7. Precise Selenodetic Coordinate System on Artificial Light Refers

    NASA Astrophysics Data System (ADS)

    Bagrov, Alexander; Pichkhadze, Konstantin M.; Sysoev, Valentin

    that coordinates of the beacon will be determined with an accuracy no worse than 6 meters on the lunar surface. Much higher accuracy can be achieved if the orbital probe uses a precise angular measurer such as an optical interferometer. The limiting accuracy of the proposed method is far beyond any reasonable requirement, since it may reach the sub-millimeter level. Theoretical analysis shows that to achieve 1-meter accuracy of coordinate measurement over the lunar globe it will be enough to disperse some 60 light beacons over its surface. The light beacon designed by Lavochkin Association is autonomous and will work for at least 10 years, so the coordinate frame of any other lunar mission could use the established selenodetic coordinates during this period. The same approach may be used to establish a Martian coordinate system.

  8. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...

  9. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...

  10. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely due to the fact that it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature and pressure regulated chamber, locked to a Zeeman stabilized HeNe laser, we measure changes in frequency of our diode laser to ~1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: J. Cariou, and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). J. Coker et al., J. Opt. Soc. Am. B {28}, 2934 (2011). J. Verges et al., Physica Scripta {25}, 338 (1982). Ph. Courteille et al., Appl. Phys. B {59}, 187 (1994). T.J. Scholl et al., J. Opt. Soc. Am. B {22}, 1128 (2005).

  11. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    The paper shall review the history and development of real time DGPS services prior to then describing the design of a high accuracy GPS commercial augmentation system and service currently delivering over a wide area to users of precise positioning products. The infrastructure and system shall be explained in relation to the need for high accuracy and high integrity of positioning for users. A comparison of the different techniques for the delivery of data shall be provided to outline the technical approach taken. Examples of the performance of the real time system shall be shown in various regions and modes to outline the current achievable accuracies. Having described and established the current GPS based situation, a review of the potential of the Galileo system shall be presented. Following brief contextual information relating to the Galileo project, core system and services, the paper will identify possible key applications and the main user communities for sub decimetre level precise positioning. The paper will address the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future and will discuss the implications for precise positioning performance. An outline of the proposed architecture shall be described and associated with pointers towards a successful implementation. Central to this discussion will be an assessment of the likely evolution of system infrastructure and user equipment implementation, prospects for new applications and their effect upon the business case for precise positioning services.

  12. The Impact of Ionospheric Disturbances on High Accuracy Positioning in Brazil

    NASA Astrophysics Data System (ADS)

    Yang, L.; Park, J.; Susnik, A.; Aquino, M. H.; Dodson, A.

    2013-12-01

    High positioning accuracy is a key requirement to a number of applications with a high economic impact, such as precision agriculture, surveying, geodesy, land management, off-shore operations. Global Navigation Satellite Systems (GNSS) carrier phase measurement based techniques, such as Real Time Kinematic (RTK), Network-RTK (NRTK) and Precise Point Positioning (PPP), have played an important role in providing centimetre-level positioning accuracy, and become the core of the above applications. However these techniques are especially sensitive to ionospheric perturbations, in particular scintillation. Brazil sits in one of the most affected regions of the Earth and can be regarded as a test-bed for scenarios of the severe ionospheric condition. Over the Brazilian territory, the ionosphere behaves in a considerably unpredictable way and scintillation activity is very prominent, occurring especially after sunset hours. NRTK services may not be able to provide satisfactory accuracy, or even continuous positioning during strong scintillation periods. CALIBRA (Countering GNSS high Accuracy applications Limitations due to Ionospheric disturbances in BRAzil) started in late 2012 and is a project funded by the GSA (European GNSS Agency) and the European Commission under the Framework Program 7 to deliver improvements on carrier phase based high accuracy algorithms and their implementation in GNSS receivers, aiming to counter the adverse ionospheric effects over Brazil. As the first stage of this project, the ionospheric disturbances, which affect the applications of RTK, NRTK or PPP, are characterized. Typical problems include degraded positioning accuracy, difficulties in ambiguity fixing, NRTK network interpolation errors, long PPP convergence time etc. It will identify how GNSS observables and existing algorithms are degraded by ionosphere related phenomena, evaluating the impact on positioning techniques in terms of accuracy, integrity and availability. Through the

  13. Limits of Astrometric and Photometric Precision on KBOs

    NASA Astrophysics Data System (ADS)

    Dunham, Emilie; Kosiarek, Molly; Markatou, Evangelia Anna; Wang, Amanda

    2014-09-01

    We present photometric and astrometric measurements of the Kuiper Belt Objects (KBOs) Haumea and Makemake obtained between 2013 June 5 and 2013 July 31 with the 14-inch Wallace Astrophysical Observatory (WAO) telescopes. Using photometry, we determined that Haumea and Makemake have R magnitudes 17.225 ± 0.347 and 16.850 ± 0.107, respectively. We obtained rotational light curves for Haumea and Makemake over eight separate nights. Astrometry yielded mean residuals with respect to the JPL ephemeris of -0.0095 ± 0.027'' with an rms residual of 0.480'' in R.A. and 0.261 ± 0.019'' with an rms residual of 0.335'' in decl. for Makemake, and 0.219 ± 0.090'' in R.A. with an rms residual of 0.748'', and 0.223 ± 0.068'' in decl. with an rms residual of 0.571'' for Haumea. Additionally, we calculated that observing Haumea with two 14-inch telescopes and Makemake with four 14-inch telescopes could resolve their periodicity. With improved observing techniques and modern CCD cameras, it is possible to utilize small telescopes in universities around the world to observe large KBOs.
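
    For readers reproducing numbers of this kind, the short sketch below (our own illustration, not the authors' code) shows the standard way a mean offset with its uncertainty and an rms residual are computed from per-frame residuals against an ephemeris.

        import numpy as np

        def residual_stats(observed_arcsec, ephemeris_arcsec):
            """Mean offset, standard error of that mean, and rms of the residuals."""
            r = np.asarray(observed_arcsec) - np.asarray(ephemeris_arcsec)
            mean = r.mean()
            sem = r.std(ddof=1) / np.sqrt(r.size)   # quoted as "mean +/- sem"
            rms = np.sqrt(np.mean(r**2))            # scatter of individual frames
            return mean, sem, rms

        print(residual_stats([0.41, -0.35, 0.52, -0.48], [0.0, 0.0, 0.0, 0.0]))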

  14. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to the MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673
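
    The "bootstrap estimates of absolute co-registration error" can be illustrated with the generic resampling sketch below; the per-marker offsets and the confidence-interval choice are illustrative assumptions, not the authors' pipeline.

        import numpy as np

        def bootstrap_mean_error(marker_offsets_mm, n_boot=2000, seed=0):
            """Bootstrap the mean co-registration error from per-marker offsets (mm)."""
            rng = np.random.default_rng(seed)
            offsets = np.asarray(marker_offsets_mm, dtype=float)
            means = np.array([rng.choice(offsets, offsets.size, replace=True).mean()
                              for _ in range(n_boot)])
            return means.mean(), np.percentile(means, [2.5, 97.5])

        print(bootstrap_mean_error([0.8, 1.1, 0.9, 1.3, 0.7, 1.0]))   # on the order of 1 mm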

  15. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec(2), the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10(-10) W/m(2); the ultimate nighttime sensitivity is approximately 3 x 10(-14) W/m(2). Experimental tests were performed to evaluate both the dynamic characteristics of the system and its sensitivity. Dynamic performance of the system was obtained using a small rocket covered with retroreflective material launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft. This aircraft was tracked out to a maximum range of 15 km, which confirmed the daytime sensitivity of the system measured by other means. The system has also been used to passively track stars and the Echo I satellite. In a passive tracking experiment on a +7.5 magnitude star, the signal-to-noise ratio indicates that it should be possible to track a +12.5 magnitude star.
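
    Since the abstract states that the tracking error scales directly with angular acceleration, the quoted pair (0.1 mrad at 0.6 rad/sec(2)) fixes the proportionality constant; the few lines below simply extrapolate that relation to other accelerations, as an arithmetic illustration rather than data from the paper.

        k = 0.1e-3 / 0.6                       # ~1.7e-4 s^2: lag error per unit angular acceleration
        for accel in (0.1, 0.6, 1.0, 2.0):     # angular acceleration, rad/s^2
            print(f"{accel:4.1f} rad/s^2 -> {k * accel * 1e3:.3f} mrad")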

  16. High precision anatomy for MEG☆

    PubMed Central

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to the MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  17. Validation of accuracy of liver model with temperature-dependent thermal conductivity by comparing the simulation and in vitro RF ablation experiment.

    PubMed

    Watanabe, Hiroki; Yamazaki, Nozomu; Isobe, Yosuke; Lu, XiaoWei; Kobayashi, Yo; Miyashita, Tomoyuki; Ohdaira, Takeshi; Hashizume, Makoto; Fujie, Masakatsu G

    2012-01-01

    Radiofrequency (RF) ablation is increasingly used to treat cancer because it is minimally invasive. However, it is difficult for operators to precisely control the formation of coagulation zones because of the inadequacies of imaging modalities. To overcome this limitation, we previously proposed a model-based robotic ablation system that can create a coagulation zone of the required size and shape based on the dimensions of the tumor. At the heart of such a robotic system is a precise temperature distribution simulator for RF ablation. In this article, we evaluated the simulation accuracy of two numerical simulation liver models, one using a constant thermal conductivity value and the other using temperature-dependent thermal conductivity values, compared with temperatures obtained in in vitro experiments. The liver model that reflected the temperature dependence of thermal conductivity did not result in a large increase in simulation accuracy compared with the temperature-independent model in the temperature range achieved during clinical RF ablation.
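
    To make the comparison concrete, the sketch below contrasts a constant-conductivity liver model with a linearly temperature-dependent one in a simple 1D explicit finite-difference conduction solve; the geometry, boundary conditions, tissue constants and the 0.12%/°C slope are illustrative assumptions, not the authors' model.

        import numpy as np

        def simulate_1d(k_of_T, n=101, length=0.05, t_end=60.0, dt=0.01,
                        rho=1060.0, c=3600.0, T0=37.0, T_probe=90.0):
            """Explicit 1D conduction; a fixed 90 degC node stands in for the electrode."""
            dx = length / (n - 1)
            T = np.full(n, T0)
            for _ in range(int(t_end / dt)):
                T[0], T[-1] = T_probe, T0
                k = k_of_T(T)
                k_face = 0.5 * (k[:-1] + k[1:])          # conductivity on cell faces
                flux = k_face * np.diff(T) / dx
                T[1:-1] += dt / (rho * c) * np.diff(flux) / dx
            return T

        T_const = simulate_1d(lambda T: np.full_like(T, 0.51))
        T_var = simulate_1d(lambda T: 0.51 * (1.0 + 0.0012 * (T - 37.0)))
        print(np.max(np.abs(T_var - T_const)), "degC maximum difference")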

  18. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide-ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how the extent of trap coverage and trap spacing affect the precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on the quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km² and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km² study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information
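
    The trade-off between trap spacing and detections can be seen directly from the half-normal detection function that SCR models such as those in secr commonly use; the Python sketch below (with illustrative g0 and sigma values, not the study's estimates) shows how expected detections fall as snare spacing grows.

        import numpy as np

        def p_detect(d_km, g0=0.05, sigma_km=2.0):
            """Half-normal SCR detection probability at distance d from an activity centre."""
            return g0 * np.exp(-d_km**2 / (2.0 * sigma_km**2))

        # Expected detections per occasion of a bear centred midway between two snares.
        for spacing_km in (2.0, 6.0):          # intensive vs. extensive spacing
            print(spacing_km, round(2 * p_detect(spacing_km / 2.0), 3))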

  19. High accuracy wavelength calibration for a scanning visible spectrometer

    SciTech Connect

    Scotti, Filippo; Bell, Ronald E.

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ~0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  20. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve a high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
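
    The idea of fitting all spectrometer parameters from multiple calibration lines can be sketched as a small nonlinear least-squares problem; the sine-drive model below and every numeric value in it are illustrative assumptions, not the calibration actually implemented in these papers.

        import numpy as np
        from scipy.optimize import least_squares

        lam = np.array([4046.56, 4358.33, 5460.74, 5769.60, 6438.47])   # lamp lines, Angstrom

        true = dict(theta0=0.30, k=2.0e-6, d_eff=12000.0)   # offset (rad), rad/count, Angstrom
        counts = (np.arcsin(lam / true["d_eff"]) - true["theta0"]) / true["k"]

        def model(p, c):
            theta0, k, d_eff = p                 # parameters of the assumed sine-drive relation
            return d_eff * np.sin(theta0 + k * c)

        rng = np.random.default_rng(1)
        observed = lam + rng.normal(0.0, 0.02, lam.size)     # simulated line-centre noise
        fit = least_squares(lambda p: model(p, counts) - observed,
                            x0=[0.25, 1.8e-6, 11500.0])
        print(fit.x)                          # recovered spectrometer parameters
        print(model(fit.x, counts) - lam)     # wavelength residuals, Angstrom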

  1. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  2. New methods for precision Møller polarimetry

    SciTech Connect

    D. Gaskell; D.G. Meekins; C. Yan

    2007-07-01

    Precision electron beam polarimetry is becoming increasingly important as parity violation experiments attempt to probe the frontiers of the standard model. In the few GeV regime, Møller polarimetry is well suited to high-precision measurements; however, it is generally limited to use at relatively low beam currents (< 10 μA). We present a novel technique that will enable precision Møller polarimetry at very large currents, up to 100 μA.

  3. Precision deburring using NC and robot equipment. Final report

    SciTech Connect

    Gillespie, L.K.

    1980-05-01

    Deburring precision miniature components is often time consuming and inconsistent. Although robots are available for deburring parts, they are not precise enough for precision miniature parts. Numerical control (NC) machining can provide edge break consistencies to meet requirements such as a 76.2-μm maximum edge break (chamfer). Although NC machining has a number of technical limitations that prohibit its use on many geometries, it can be an effective approach to features that are particularly difficult to deburr.

  4. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. The time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions either in the horizontal plane or along the vertical axis. A second stationary GPS antenna at a distance of several meters was simultaneously collecting high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
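
    Comparing the high-rate GPS solution against the commanded table motion and forming an error spectrum can be sketched as below; the sinusoidal "earthquake", the noise level and the Welch settings are placeholders, not the apparatus's real stored profiles.

        import numpy as np
        from scipy.signal import welch

        fs = 5.0                                              # 5 Hz GPS solution rate
        t = np.arange(0.0, 600.0, 1.0 / fs)
        truth = 0.02 * np.sin(2 * np.pi * 0.5 * t)            # commanded table motion (m)
        gps = truth + 0.003 * np.random.default_rng(2).standard_normal(t.size)

        error = gps - truth                                   # GPS minus ground truth
        freq, psd = welch(error, fs=fs, nperseg=512)          # GPS error spectrum
        print(f"error rms = {error.std() * 1000.0:.1f} mm")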

  5. A Precision Variable, Double Prism Attenuator for CO(2) Lasers.

    PubMed

    Oseki, T; Saito, S

    1971-01-01

    A precision, double prism attenuator for CO(2) lasers, calibrated by its gap capacitance, was constructed to evaluate its possible use as a standard for attenuation measurements. It was found that the accuracy was about 0.1 dB with a dynamic range of about 40 dB.

  6. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  7. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.

  8. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees the manufacturing of devices with critical dimensions of < 20 nm. Overlay specifications of single-digit nanometers call for nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material in critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® Lithography Roadmap on CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long term CTE behavior of ZERODUR® components in order to optimize the dimensional stability of precision positioning devices. CTE data of several low thermal expansion materials are compared regarding their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of ±10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
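
    The quoted CTE tolerance of ±10 ppb/K translates directly into length stability via dL = alpha · dT · L; the arithmetic below, with an assumed 300 mm component, illustrates why that tolerance is compatible with nanometer-level positioning.

        alpha = 10e-9            # CTE tolerance band quoted above, 1/K
        length = 0.3             # 300 mm component, illustrative
        for dT in (0.01, 0.1, 1.0):                  # temperature excursion, K
            print(f"dT = {dT:5.2f} K -> dL <= {alpha * dT * length * 1e9:.2f} nm")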

  9. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  10. Application of Vehicle Dynamic Modeling in Uavs for Precise Determination of Exterior Orientation

    NASA Astrophysics Data System (ADS)

    Khaghani, M.; Skaloud, J.

    2016-06-01

    Advances in unmanned aerial vehicle (UAV) and especially micro aerial vehicle (MAV) technology, together with the increasing quality and decreasing price of imaging devices, have resulted in growing use of MAVs in photogrammetry. The practicality of MAV mapping is greatly enhanced by the ability to determine the parameters of exterior orientation (EO) with sufficient accuracy, in both the absolute and relative senses (change of attitude between successive images). While differential carrier phase GNSS satisfies cm-level positioning accuracy, precise attitude determination is essential for both direct sensor orientation (DiSO) and integrated sensor orientation (ISO) in corridor mapping or in block configuration imaging over surfaces with low texture. The limited cost, size, and weight of MAVs restrict the quality of onboard navigation sensors and put emphasis on exploiting the full capacity of the available resources. Typically short flying times (10-30 minutes) also limit the possibility of estimating and/or correcting factors such as sensor misalignment and poor attitude initialization of the inertial navigation system (INS). This research aims at increasing the accuracy of attitude determination in both the absolute and relative senses with no extra sensors onboard. In comparison to the classical INS/GNSS setup, a novel approach to integrated state estimation is presented here, in which a vehicle dynamic model (VDM) is used as the main process model. Such a system benefits from the information available from the autopilot and from the physical properties of the platform, and consequently enhances the determination of the trajectory and of the parameters of exterior orientation. The navigation system employs a differential carrier phase GNSS receiver and a micro electro-mechanical system (MEMS) grade inertial measurement unit (IMU), together with MAV control input from the autopilot. Monte-Carlo simulation has been performed on trajectories for typical corridor mapping and block imaging. Results reveal
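
    The notion of using a vehicle dynamic model as the process model can be caricatured by a single prediction step like the one below; the point-mass thrust/drag model and all constants are rough assumptions for illustration only, not the VDM developed in this research.

        import numpy as np

        def vdm_predict(state, throttle, dt, mass=2.5, k_thrust=12.0, k_drag=0.08):
            """One prediction step driven by a point-mass vehicle dynamic model.

            state = [x, y, z, vx, vy, vz] in a local frame; thrust (from the autopilot
            throttle command) acts along the velocity direction and quadratic drag
            opposes it; lift and gravity are assumed to cancel in level flight.
            """
            pos, vel = state[:3], state[3:]
            speed = np.linalg.norm(vel) + 1e-9
            accel = (k_thrust * throttle - k_drag * speed**2) / mass * (vel / speed)
            return np.concatenate([pos + vel * dt, vel + accel * dt])

        state = np.array([0.0, 0.0, 100.0, 15.0, 0.0, 0.0])   # cruising at 15 m/s
        print(vdm_predict(state, throttle=0.5, dt=0.1))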

  11. A Precise Position and Attitude Determination System for Lightweight Unmanned Aerial Vehicles

    NASA Astrophysics Data System (ADS)

    Eling, C.; Klingbeil, L.; Wieland, M.; Kuhlmann, H.

    2013-08-01

    In many unmanned aerial vehicle (UAV) applications a direct georeferencing is required. The reason can be that the UAV flies autonomously and must be navigated precisely, or that the UAV performs a remote sensing operation, where the position of the camera has to be known at the moment of the recording. In our application, a project called Mapping on Demand, we are motivated by both of these reasons. The goal of this project is to develop a lightweight autonomously flying UAV that is able to identify and measure inaccessible three-dimensional objects by use of visual information. Due to payload and space limitations, precise position and attitude determination of micro- and mini-sized UAVs is very challenging. The limitations do not only affect the onboard computing capacity, but they are also noticeable when choosing the georeferencing sensors. In this article, we will present a newly developed onboard direct georeferencing system which is real-time capable, applicable to lightweight UAVs and provides very precise results (position accuracy σ < 5 cm and attitude accuracy σ < 0.5 deg). In this system GPS, inertial sensors, magnetic field sensors, a barometer as well as stereo video cameras are used as georeferencing sensors. We will describe the hardware development and will go into details of the implemented software. In this context, especially the RTK-GPS software and the concept of attitude determination by use of inertial sensors, magnetic field sensors as well as an onboard GPS baseline will be highlighted. Finally, results of first field tests as well as an outlook on further developments will conclude this contribution.
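
    One element of the attitude concept, deriving heading from an onboard GPS baseline, reduces to the geometry sketched below; the baseline components are made-up values, not results from the described system.

        import numpy as np

        def heading_from_baseline(d_north_m, d_east_m):
            """Heading (deg) of the antenna baseline vector in the local north-east frame."""
            return np.degrees(np.arctan2(d_east_m, d_north_m)) % 360.0

        print(heading_from_baseline(0.87, 0.50))   # ~30 deg for a ~1 m baseline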

  12. Apparatus for precision micromachining with lasers

    DOEpatents

    Chang, Jim J.; Dragon, Ernest P.; Warner, Bruce E.

    1998-01-01

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone.

  13. Apparatus for precision micromachining with lasers

    DOEpatents

    Chang, J.J.; Dragon, E.P.; Warner, B.E.

    1998-04-28

    A new material processing apparatus using a short-pulsed, high-repetition-rate visible laser for precision micromachining utilizes a near diffraction limited laser, a high-speed precision two-axis tilt-mirror for steering the laser beam, an optical system for either focusing or imaging the laser beam on the part, and a part holder that may consist of a cover plate and a back plate. The system is generally useful for precision drilling, cutting, milling and polishing of metals and ceramics, and has broad application in manufacturing precision components. Precision machining has been demonstrated through percussion drilling and trepanning using this system. With a 30 W copper vapor laser running at multi-kHz pulse repetition frequency, straight parallel holes with size varying from 500 microns to less than 25 microns and with aspect ratios up to 1:40 have been consistently drilled with good surface finish on a variety of metals. Micromilling and microdrilling on ceramics using a 250 W copper vapor laser have also been demonstrated with good results. Materialographic sections of machined parts show little (submicron scale) recast layer and heat affected zone. 1 fig.

  14. Precision of a radial basis function neural network tracking method

    NASA Technical Reports Server (NTRS)

    Hanan, J.; Zhou, H.; Chao, T. H.

    2003-01-01

    The precision of a radial basis function (RBF) neural network based tracking method has been assessed against real targets. Precision was assessed against traditional frame-by-frame measurements from the recorded data set. The results show the potential limit for the technique and reveal intricacies associated with empirical data not necessarily observed in simulations.
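
    For orientation, a Gaussian RBF network of the kind assessed here maps an input to a weighted sum of radial basis responses; the sketch below shows that evaluation with made-up centres, widths and weights, not the trained tracker.

        import numpy as np

        def rbf_predict(x, centers, widths, weights):
            """Evaluate a Gaussian radial basis function network at input x."""
            phi = np.exp(-np.sum((centers - x)**2, axis=1) / (2.0 * widths**2))
            return phi @ weights

        centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
        widths = np.array([0.5, 0.5, 0.5])
        weights = np.array([0.2, 0.5, 0.3])
        print(rbf_predict(np.array([0.4, 0.2]), centers, widths, weights))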

  15. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometric precision for space debris, based on mathematical morphology operators. Variable structural elements along multiple directions are adopted for image transformation, and then all the resultant images are stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the astrometric accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both the objects and the stars is distinctly improved. Our technique will contribute significantly to optical data reduction and high-order precision astrometry for space debris.
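
    A common way to realise "variable structural elements along multiple directions" followed by stacking is a union of directional grey-scale openings, as in the sketch below; the four orientations, the element length and the use of scipy are our illustrative choices, not necessarily the authors' exact operator.

        import numpy as np
        from scipy.ndimage import grey_opening

        def directional_openings(image, length=7):
            """Grey-scale openings with linear elements along four directions,
            stacked by taking the per-pixel maximum of the results."""
            footprints = [
                np.ones((1, length), bool),              # horizontal
                np.ones((length, 1), bool),              # vertical
                np.eye(length, dtype=bool),              # diagonal
                np.fliplr(np.eye(length, dtype=bool)),   # anti-diagonal
            ]
            return np.max([grey_opening(image, footprint=fp) for fp in footprints], axis=0)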

  16. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  17. Precision positioning of earth orbiting remote sensing systems

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.; Yunck, T. P.; Wu, S. C.

    1987-01-01

    Decimeter tracking accuracy is sought for a number of precise earth sensing satellites to be flown in the 1990's. This accuracy can be achieved with techniques which use the Global Positioning System (GPS) in a differential mode. A precisely located global network of GPS ground receivers and a receiver aboard the user satellite are needed, and all techniques simultaneously estimate the user and GPS satellite states. Three basic navigation approaches include classical dynamic, wholly nondynamic, and reduced dynamic or hybrid formulations. The first two are simply special cases of the third, which promises to deliver subdecimeter accuracy for dynamically unpredictable vehicles down to the lowest orbit altitudes. The potential of these techniques for tracking and gravity field recovery will be demonstrated on NASA's Topex satellite beginning in 1991. Applications to the Shuttle, Space Station, and dedicated remote sensing platforms are being pursued.

  18. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are currently far from individually precise, partly due to the rarity and heterogeneity of this disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific bases of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  19. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  20. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, extending the IPJR paradigm to building 3D structures at micron precision, are also summarized.