Science.gov

Sample records for accuracy precision selectivity

  1. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories vary with the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF, and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method; only at the lowest angles of incidence did the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a measure of precision by means of a confidence interval for the specific measurement. PMID:27044032
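    The ellipse method referred to above infers the angle of incidence from the aspect ratio of the elliptical primary defect. A minimal sketch of that geometric idealization (sin θ = minor/major axis; the function name and the assumption of an ideal elliptical defect are ours, and real defects deviate from it, which is one source of the systematic errors quantified in the study):

```python
import math

def ellipse_incidence_angle(minor_axis_mm: float, major_axis_mm: float) -> float:
    """Estimate a bullet's angle of incidence (degrees from the target surface)
    from the axes of the elliptical primary bullet defect.

    Idealization: a circular cross-section striking a flat surface at angle
    theta leaves an ellipse with sin(theta) = minor/major.
    """
    if not 0 < minor_axis_mm <= major_axis_mm:
        raise ValueError("expect 0 < minor axis <= major axis")
    return math.degrees(math.asin(minor_axis_mm / major_axis_mm))

# A perpendicular shot (circular defect) gives 90 degrees;
# a 2:1 ellipse corresponds to a 30-degree angle of incidence.
print(ellipse_incidence_angle(9.0, 9.0), ellipse_incidence_angle(4.5, 9.0))
```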

  2. Clock accuracy and precision evolve as a consequence of selection for adult emergence in a narrow window of time in fruit flies Drosophila melanogaster.

    PubMed

    Kannan, Nisha N; Vaze, Koustubh M; Sharma, Vijay Kumar

    2012-10-15

    Although circadian clocks are believed to have evolved under the action of periodic selection pressures (selection on phasing) present in the geophysical environment, there is very little rigorous and systematic empirical evidence to support this. In the present study, we examined the effect of selection for adult emergence in a narrow window of time on the circadian rhythms of fruit flies Drosophila melanogaster. Selection was imposed in every generation by choosing flies that emerged during a 1 h window of time close to the emergence peak of baseline/control flies under 12 h:12 h light:dark cycles. To study the effect of selection on circadian clocks we estimated several quantifiable features that reflect inter- and intra-individual variance in adult emergence and locomotor activity rhythms. The results showed that with increasing generations, incidence of adult emergence and activity of adult flies during the 1 h selection window increased gradually in the selected populations. Flies from the selected populations were more homogenous in their clock period, were more coherent in their phase of entrainment, and displayed enhanced accuracy and precision in their emergence and activity rhythms compared with controls. These results thus suggest that circadian clocks in D. melanogaster evolve enhanced accuracy and precision when subjected to selection for emergence in a narrow window of time.

  3. Accuracy and precision of manual baseline determination.

    PubMed

    Jirasek, A; Schulze, G; Yu, M M L; Blades, M W; Turner, R F B

    2004-12-01

    Vibrational spectra often require baseline removal before further data analysis can be performed. Manual (i.e., user) baseline determination and removal is a common technique used to perform this operation. Currently, little data exist detailing the accuracy and precision that can be expected from manual baseline removal techniques. This study addresses that lack of data. One hundred spectra of varying signal-to-noise ratio (SNR), signal-to-baseline ratio (SBR), baseline slope, and spectral congestion were constructed, and baselines were subtracted by 16 volunteers categorized as either experienced or inexperienced in baseline determination. In total, 285 baseline determinations were performed. The general level of accuracy and precision that can be expected for manually determined baselines from spectra of varying SNR, SBR, baseline slope, and spectral congestion is established. Furthermore, the effects of user experience on the accuracy and precision of baseline determination are estimated, and the interactions between the above factors in affecting accuracy and precision are highlighted. Where possible, the functional relationships between accuracy, precision, and the given spectral characteristic are detailed. The results provide users of manual baseline determination with useful guidelines for establishing limits of accuracy and precision, as well as highlighting conditions that confound manual baseline determination.

  4. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J.; Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity-modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  5. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

    Self-monitoring of blood glucose began only about fifty years ago. Until then, metabolic control was evaluated by means of qualitative urine testing, often of poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late seventies meters were launched on the market. Initially such devices were intended for medical staff, but improvements in ease of use made them increasingly suitable for patients, and they are now a necessary tool for self-monitoring of blood glucose. Technological advances first enabled photometric measurement and, more recently, electrochemical measurement. In the nineties, improvements were made mainly in miniaturising meters, reducing reaction and reading times, and simplifying blood sampling and the application of capillary blood. Although accuracy and precision were central concerns from the beginning of self-monitoring of blood glucose, recommendations from diabetology societies appeared only in the late eighties. The French drug agency, AFSSAPS, now requires meters to be evaluated before they are launched on the market. According to recent publications, very few meters meet the reliability criteria set by diabetology societies in the late nineties. Finally, because devices in hospitals may be handled by many people, the use of meters as a possible source of nosocomial infection has recently been questioned and is subject to very strict guidelines published by AFSSAPS.

  6. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  7. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

    A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 μin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 μin. for the 0.120 in. standard, 3.83 μin. for the 0.100 in. standard, and 4.2 μin. for the 0.050 in. standard.

  8. Precision and accuracy in diffusion tensor magnetic resonance imaging.

    PubMed

    Jones, Derek K

    2010-04-01

    This article reviews some of the key factors influencing the accuracy and precision of quantitative metrics derived from diffusion magnetic resonance imaging data. It focuses on the study pipeline from the choice of imaging protocol, through preprocessing and model fitting, up to the point of extracting quantitative estimates for subsequent analysis. The aim is to provide newcomers to the field with sufficient knowledge of how their decisions at each stage of this process might impact precision and accuracy, so that they can design their study or approach and use diffusion tensor magnetic resonance imaging in the clinic. More specifically, emphasis is placed on improving accuracy and precision. I illustrate how careful choices along the way can substantially affect the sample size needed to make an inference from the data.

  9. Accuracy and precision of temporal artery thermometers in febrile patients.

    PubMed

    Wolfson, Margaret; Granstrom, Patsy; Pomarico, Bernie; Reimanis, Cathryn

    2013-01-01

    The noninvasive temporal artery thermometer offers a way to measure temperature when oral assessment is contraindicated, uncomfortable, or difficult to obtain. In this study, the accuracy and precision of the temporal artery thermometer exceeded levels recommended by experts for use in acute care clinical practice.

  10. Accuracy-precision trade-off in visual orientation constancy.

    PubMed

    De Vrijer, M; Medendorp, W P; Van Gisbergen, J A M

    2009-02-09

    Using the subjective visual vertical (SVV) task, previous investigations of the maintenance of visual orientation constancy during lateral tilt have found two opposite bias effects in different tilt ranges. The SVV typically shows accurate performance near upright but severe undercompensation at tilts beyond 60 deg (A-effect), frequently with slight overcompensation (E-effect) in between. Here we investigate whether a Bayesian spatial-perception model can account for this error pattern. The model interprets A- and E-effects as the drawback of a computational strategy geared toward maintaining visual stability with optimal precision at small tilt angles. In this study, we test whether these systematic errors can be seen as the consequence of a precision-accuracy trade-off when combining a veridical but noisy signal about eye orientation in space with the visual signal. To do so, we used a psychometric approach to assess both the precision and accuracy of the SVV in eight subjects laterally tilted at 9 tilt angles (-120 degrees to 120 degrees). Results show that SVV accuracy and precision worsened with tilt angle, according to a pattern that could be fitted quite adequately by the Bayesian model. We conclude that spatial vision essentially follows the rules of Bayes' optimal observer theory.
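    The precision-accuracy trade-off in the Bayesian account can be illustrated with a toy Gaussian-fusion estimator: a veridical but noisy tilt signal is combined with a prior centred on upright, so the posterior estimate is biased toward zero tilt (an A-effect-like undercompensation) while its variance stays smaller than that of the raw signal. This is a generic sketch, not the authors' fitted model; all parameter values below are made up.

```python
def posterior_tilt(measured_tilt_deg, sigma_meas, sigma_prior):
    """MAP estimate of tilt combining a noisy measurement
    (likelihood N(measured, sigma_meas^2)) with a prior N(0, sigma_prior^2)
    that favors the upright position.
    Returns (estimate, posterior standard deviation)."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)  # weight on the data
    est = w * measured_tilt_deg                            # shrunk toward upright
    sd = (sigma_prior**2 * sigma_meas**2 / (sigma_prior**2 + sigma_meas**2)) ** 0.5
    return est, sd

# With sigma_meas = 10 deg and sigma_prior = 30 deg, a true 120-deg tilt is
# estimated as 108 deg: a 12-deg undercompensation (accuracy cost), while the
# posterior sd (~9.5 deg) is below the raw measurement noise (precision gain).
print(posterior_tilt(120, 10, 30))
```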

  11. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway, for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that a column has been dedicated to them in TPT (Larry Weinstein's "Fermi Questions"). For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call it "The Plus or Minus Game." The game combines estimation with the principles of precision and uncertainty in a competitive and fun way.

  12. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  13. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvement. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity-resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. Defining a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing of the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
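    Step 2 of the error-budget approach described above, mapping ranging error to position error through DOP, can be sketched from the standard GPS geometry matrix. The satellite geometry below is invented for illustration; in practice the rows of G come from actual receiver-to-satellite line-of-sight vectors.

```python
import math
import numpy as np

def dops(unit_vectors):
    """Compute PDOP/HDOP/VDOP from receiver-to-satellite unit vectors.
    G has one row per satellite: [ux, uy, uz, 1] (the 1 is the clock term).
    One-sigma position error ~= DOP * one-sigma ranging error."""
    G = np.hstack([np.asarray(unit_vectors, float), np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(G.T @ G)          # cofactor (covariance shape) matrix
    pdop = math.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = math.sqrt(Q[0, 0] + Q[1, 1])
    vdop = math.sqrt(Q[2, 2])
    return pdop, hdop, vdop

# Four satellites: three spaced 120 deg apart in azimuth at 30 deg elevation,
# plus one at zenith.
sats = [(math.cos(math.radians(az)) * math.cos(math.radians(30)),
         math.sin(math.radians(az)) * math.cos(math.radians(30)),
         math.sin(math.radians(30))) for az in (0, 120, 240)] + [(0.0, 0.0, 1.0)]
pdop, hdop, vdop = dops(sats)
ranging_error_m = 0.5     # illustrative residual range error with precise products
print(pdop * ranging_error_m)   # one-sigma position error in metres
```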

  14. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

    Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development that promise satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth-order reflected light from a grating of lines. Normal-incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectra matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports on work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  15. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering. PMID:21125324

  16. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
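    The core computation behind these accuracy results, ridge-regression (RR-BLUP-style) prediction of breeding values from markers followed by correlating predicted with true values, can be sketched on simulated data. All sizes, the heritability, and the shrinkage parameter below are arbitrary choices for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_markers, n_qtl = 200, 50, 100, 10

# Simulate biallelic marker genotypes (0/1/2) and sparse QTL effects.
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
beta_true = np.zeros(n_markers)
beta_true[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(0, 1, n_qtl)
g = X @ beta_true                              # true breeding values
y = g + rng.normal(0, g.std(), size=g.shape)   # phenotypes, heritability ~ 0.5

Xtr, Xte = X[:n_train], X[n_train:]
ytr, g_te = y[:n_train], g[n_train:]

# Ridge-regression estimate of marker effects (lam chosen arbitrarily);
# genotypes and phenotypes are centred on training means.
mu = Xtr.mean(axis=0)
lam = 10.0
A = (Xtr - mu).T @ (Xtr - mu) + lam * np.eye(n_markers)
beta_hat = np.linalg.solve(A, (Xtr - mu).T @ (ytr - ytr.mean()))

# "Accuracy of genomic selection": correlation between predicted and true
# breeding values in the test individuals.
accuracy = np.corrcoef((Xte - mu) @ beta_hat, g_te)[0, 1]
print(round(accuracy, 3))
```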

  18. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
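    The idea of running a model in reduced precision can be illustrated with the single-tier Lorenz '96 system integrated in float64 versus float16. This is a generic sketch, not the authors' three-tier system; the step size, forcing, and integration length are arbitrary, and NumPy's float16 arithmetic (computed per-operation and rounded back to 16 bits) stands in for true half-precision hardware.

```python
import numpy as np

def lorenz96_step(x, dt=0.01, F=8.0):
    """One forward-Euler step of the one-tier Lorenz '96 model,
    dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F,
    carried out in the dtype of x (constants are cast so float16 stays float16)."""
    dtype = x.dtype
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + np.array(F, dtype)
    return x + np.array(dt, dtype) * dxdt

x0 = np.linspace(-1, 1, 40)
x64 = x0.astype(np.float64)
x16 = x0.astype(np.float16)
for _ in range(100):
    x64 = lorenz96_step(x64)
    x16 = lorenz96_step(x16)

# Rounding error accumulates (and, being chaotic, eventually decorrelates the
# trajectories), but over a short integration the half-precision run stays
# close to the double-precision one.
print(float(np.max(np.abs(x64 - x16.astype(np.float64)))))
```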

  19. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive alignment capability. Depending on the optical system, focal plane flatness of less than 25 μm (0.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of integrating the detector into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced in the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry, and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the equipment's accuracy, repeatability, and the precision of the measurements.

  20. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

    In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have improved significantly. For precise orbit determination, 20 cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1 cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  1. [Accuracy and precision in the evaluation of computer assisted surgical systems. A definition].

    PubMed

    Strauss, G; Hofer, M; Korb, W; Trantakis, C; Winkler, D; Burgert, O; Schulz, T; Dietz, A; Meixensberger, J; Koulechov, K

    2006-02-01

    Accuracy is the outstanding criterion for navigation systems. Surgeons have noticed a great discrepancy between the values from the literature and system specifications on the one hand and intraoperative accuracy on the other. A unitary understanding of the term accuracy does not exist in clinical practice; furthermore, the terms precision and accuracy are sometimes incorrectly equated in the literature. On top of this, clinical accuracy differs from mechanical (technical) accuracy. From a clinical point of view, we had to deal with remarkably many different terms, all describing accuracy. The goals of this study were: 1. defining "accuracy" and related terms; 2. differentiating between "precision" and "accuracy"; 3. deriving the term "surgical accuracy"; 4. recommending the use of the term "surgical accuracy" for a navigation system. To a great extent, definitions were applied from the International Organization for Standardization (ISO) and the norms of the Deutsches Institut für Normung e.V. (DIN, the German Institute for Standardization). For defining surgical accuracy, the terms reference value, expectation, accuracy, and precision are of major interest. Surgical accuracy should indicate the maximum deviation between test results and the reference value (true value), A(max), and additionally indicate the precision, P(surg). As a basis for measurements, a standardized technical model was used; coordinates of the model were acquired by CT. To determine statistically and clinically relevant results for head surgery, 50 measurements at distances of 50, 75, 100 and 150 mm from the centre of the registration geometry are adequate. In the future, we recommend labeling a system's overall performance with the following specifications: maximum accuracy deviation A(max), precision P, and information on the measurement method. This could be displayed on a seal of quality.
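    Under the definitions proposed above, a system label would pair the maximum deviation A(max) with a precision value P. A minimal sketch of how the two quantities could be computed from repeated localizations of a known reference (the function name, sample values, and the choice of sample standard deviation for P are our illustrative assumptions):

```python
import statistics

def surgical_accuracy_label(measured_mm, reference_mm):
    """A_max: maximum absolute deviation of the test results from the
    reference (true) value -- the accuracy term.
    P: precision, here the sample standard deviation of the test
    results around their own mean."""
    a_max = max(abs(m - reference_mm) for m in measured_mm)
    p = statistics.stdev(measured_mm)
    return a_max, p

# Five repeated localizations of a 10.0 mm reference distance:
a_max, p = surgical_accuracy_label([10.2, 9.9, 10.1, 10.4, 9.8], 10.0)
print(round(a_max, 3), round(p, 3))
```

Note that the two numbers answer different questions: A(max) bounds how far any single result strays from the truth, while P describes only the scatter of results about their mean.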

  2. Accuracy and precision in measurements of biomass oxidative ratios

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.

    2005-12-01

    One fundamental property of the Earth system is the oxidative ratio (OR) of the terrestrial biosphere: the moles of CO2 fixed per mole of O2 released via photosynthesis. This is also an essential, poorly constrained parameter in the calculation of the sizes of the terrestrial and oceanic carbon sinks via atmospheric O2 and CO2 measurements. We are pursuing a number of techniques to accurately measure natural variations in above- and below-ground OR. For aboveground biomass, OR can be calculated directly from percent C, H, N, and O data measured via elemental analysis; however, the precision of this technique is a function of four measurements, resulting in increased data variability. It is also possible to measure OR via bomb calorimetry and percent C, using relationships between the heat of combustion of a sample and its OR. These measurements hold the potential to generate more precise data, as the error depends on only two measurements instead of four. We present data comparing these two OR measurement techniques.
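    The elemental-analysis route can be sketched from combustion stoichiometry. This is our illustrative convention, not the authors' exact formulation: nitrogen is neglected, and we report mol O2 per mol CO2 for C_x H_y O_z, where combustion consumes O2 = x + y/4 − z/2 and releases CO2 = x.

```python
# Sketch: oxidative ratio from elemental-analysis data (percent C, H, O by
# mass).  Simplifying assumptions: nitrogen is ignored, and OR is taken as
# mol O2 per mol CO2 from the combustion stoichiometry of C_x H_y O_z.
M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol

def oxidative_ratio(pct_C, pct_H, pct_O):
    x, y, z = pct_C / M_C, pct_H / M_H, pct_O / M_O   # molar amounts per 100 g
    return (x + y / 4 - z / 2) / x

# Glucose (C6H12O6: 40.0% C, 6.7% H, 53.3% O) gives OR ~= 1, as expected,
# since respiring glucose consumes one O2 per CO2 released.
print(round(oxidative_ratio(40.0, 6.7, 53.3), 3))
```

The propagation point made in the abstract is visible here: the result depends on several measured percentages at once, so its precision is limited by the combined variability of those measurements.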

  3. Gamma-Ray Peak Integration: Accuracy and Precision

    SciTech Connect

    Richard M. Lindstrom

    2000-11-12

    The accuracy of singlet gamma-ray peak areas obtained by a peak analysis program is immaterial. If the same algorithm is used for sample measurement as for calibration and if the peak shapes are similar, then biases in the integration method cancel. Reproducibility is the only important issue. Even the uncertainty of the areas computed by the program is of minor importance, because the true standard uncertainty can be assessed experimentally by repeated measurements of the same source. Reproducible peak integration was important in a recent standard reference material certification task. The primary tool used for spectrum analysis was SUM, a National Institute of Standards and Technology interactive program to sum peaks and subtract a linear background, using the same channels to integrate all 20 spectra. For comparison, this work examines other peak integration programs. Unlike some published comparisons of peak-integration performance in which synthetic spectra were used, this experiment used spectra collected for a real (though exacting) analytical project, analyzed by conventional software used in routine ways. Because both components of the 559- to 564-keV doublet are from ⁷⁶As, they were integrated together with SUM. The other programs, however, deconvoluted the peaks. A sensitive test of the fitting algorithm is the ratio of the reported peak areas. In almost all cases, this ratio was much more variable than expected from the uncertainties reported by the program. Other comparisons to be reported indicate that peak integration is still an imperfect tool in the analysis of gamma-ray spectra.
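    The ratio test described here amounts to comparing the observed scatter of the doublet area ratio against first-order error propagation of the program-reported uncertainties. A sketch with invented areas and uncertainties:

```python
import statistics

# Sketch of the consistency test described above: is the observed
# scatter of a doublet area ratio compatible with the uncertainties the
# fitting program reports?  All areas and uncertainties are invented.
area_1 = [10120, 10080, 10210, 9985, 10150, 10040]    # counts, peak 1
area_2 = [5030, 5105, 4990, 5060, 4975, 5020]         # counts, peak 2
rel_unc_1 = 0.010     # reported relative uncertainty, peak 1
rel_unc_2 = 0.015     # reported relative uncertainty, peak 2

ratios = [a1 / a2 for a1, a2 in zip(area_1, area_2)]
observed_rel_sd = statistics.stdev(ratios) / statistics.mean(ratios)

# First-order error propagation for a ratio R = A1/A2:
#   (sigma_R / R)^2 = (sigma_1 / A1)^2 + (sigma_2 / A2)^2
expected_rel_sd = (rel_unc_1 ** 2 + rel_unc_2 ** 2) ** 0.5

print(f"observed scatter {observed_rel_sd:.2%} vs "
      f"propagated expectation {expected_rel_sd:.2%}")
```

    Observed scatter well in excess of the propagated expectation, as reported in the entry, would indicate that the programs understate their fitting uncertainties.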

  4. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

    We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines and outline its precision and the accuracy limiting factors.

  5. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  6. Accuracy and precision of silicon based impression media for quantitative areal texture analysis.

    PubMed

    Goodall, Robert H; Darras, Laurent P; Purnell, Mark A

    2015-05-20

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  8. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

    Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Because of the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach to background characterization in microprobe analysis. However, this approach precludes an accurate assessment of background curvature (usually exponential). Moreover, background interferences, if present, can dramatically affect the results when underestimated or ignored. Acquiring a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that acquires up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. Overall efficiency is improved because no systematic WDS scan needs to be acquired to check for possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could produce errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
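    The statistical exclusion of erroneous background points can be sketched as an iterative fit-and-reject loop; the exponential model, the 2.5-sigma rejection rule and the data below are all invented for illustration and are not the MPB implementation itself:

```python
import math
import statistics

# Sketch of the idea behind multi-point background selection: fit a
# smooth (here exponential) background to many off-peak measurements
# and statistically exclude points that disagree with the fit, such as
# one sitting on an unrecognised interference.  The model, threshold
# and data are all invented.
positions = [-60, -50, -40, -30, -20, 20, 30, 40, 50, 60]    # offsets
counts = [410, 382, 355, 330, 308, 262, 245, 228, 215, 460]  # last = interference

def fit_exponential(xs, ys):
    """Log-linear least squares for y = a * exp(b * x)."""
    n = len(xs)
    ly = [math.log(y) for y in ys]
    mx, my = sum(xs) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ly))
         / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - b * mx), b

kept = list(zip(positions, counts))
while True:
    a, b = fit_exponential([x for x, _ in kept], [y for _, y in kept])
    resid = [y - a * math.exp(b * x) for x, y in kept]
    worst = max(range(len(kept)), key=lambda i: abs(resid[i]))
    if abs(resid[worst]) < 2.5 * statistics.stdev(resid) or len(kept) <= 4:
        break
    kept.pop(worst)        # exclude the outlying "background" point

excluded = len(positions) - len(kept)
print(f"{excluded} point(s) excluded; "
      f"fitted background at offset 0: {a:.0f} counts")
```

    In this toy run the interference-contaminated point is rejected and the remaining points constrain both the background level and its curvature, which a two-point linear interpolation could not.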

  9. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time needed to achieve geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in a position accuracy increase (mostly in the less favorable East direction) and a large reduction of convergence

  10. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.
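    The two quantities the students measured can be computed from a pair of equatorial coordinates with a small-angle approximation; the coordinates below are invented and are not Albireo's catalogue values:

```python
import math

# Sketch of how separation and position angle are derived from two
# stars' equatorial coordinates (small-angle approximation).  The
# coordinates below are invented and are not Albireo's catalogue values.
ra_a, dec_a = 292.6804, 27.9597     # degrees, primary (hypothetical)
ra_b, dec_b = 292.6886, 27.9653     # degrees, secondary (hypothetical)

# An RA difference must be scaled by cos(dec) to give a true angular offset.
d_ra = (ra_b - ra_a) * math.cos(math.radians((dec_a + dec_b) / 2.0))
d_dec = dec_b - dec_a

separation_arcsec = math.hypot(d_ra, d_dec) * 3600.0
# Position angle is measured from north (dec axis) through east (RA axis).
position_angle_deg = math.degrees(math.atan2(d_ra, d_dec)) % 360.0

print(f"rho = {separation_arcsec:.1f} arcsec, "
      f"PA = {position_angle_deg:.1f} deg")
```

    Comparing repeated values of rho and PA from each telescope against a published standard is exactly the precision-versus-accuracy comparison described in the entry.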

  11. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on computing the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacement in the range of human chest wall displacement during quiet breathing. OEP performance was investigated using a fully programmable chest wall simulator (CWS). The CWS was programmed to move its eight shafts 10 times in the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (i.e., 0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed with the aim to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes; (ii) evaluate the OEP volume measurement error due to the measurement accuracy of linear displacements. OEP showed an accuracy better than 0.08 mm in all trials, considering the whole 2 m³ calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume), considering all settings. PMID:26736504

  12. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  13. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures are suitable as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, the isocenter deviation was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows statistical errors to be accounted for in the treatment planning volume margins.

  14. Accuracy and precision of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

    Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion/optimisation approach, a forward model is used to simulate the water reflectance spectra from a set of parameters, and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval depends on an efficient search of the solution space and the reliability of the similarity measure. In this paper, Particle Swarm Optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depend on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurement were modelled and applied to the simulated reflectance spectra. The use of band-specific, empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality in freshwater impoundments.
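    The inversion/optimisation loop can be sketched with a deliberately toy forward model standing in for a radiative transfer code such as Hydrolight; everything here (model form, bands, constants) is invented, and only the PSO velocity-update rule is standard:

```python
import math
import random

random.seed(1)

# Toy illustration of the inversion/optimisation loop: a particle swarm
# searches for the forward-model parameters whose simulated spectrum
# best matches an "observed" one.
BANDS = [0.44, 0.49, 0.56, 0.62, 0.68]            # wavelengths, um

def forward(chl, tsm):
    """Fake reflectance spectrum for two hydrosol-like parameters."""
    return [0.02 * tsm * math.exp(-chl * lam) for lam in BANDS]

observed = forward(3.0, 5.0)                      # "truth" to be retrieved

def cost(p):
    """Similarity measure: sum of squared residuals."""
    return sum((m - o) ** 2 for m, o in zip(forward(*p), observed))

# Standard PSO velocity update:
#   v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
n_particles, w, c1, c2 = 20, 0.7, 1.5, 1.5
x = [[random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)]
     for _ in range(n_particles)]
v = [[0.0, 0.0] for _ in range(n_particles)]
pbest = [p[:] for p in x]
gbest = min(pbest, key=cost)

for _ in range(200):
    for i in range(n_particles):
        for d in range(2):
            v[i][d] = (w * v[i][d]
                       + c1 * random.random() * (pbest[i][d] - x[i][d])
                       + c2 * random.random() * (gbest[d] - x[i][d]))
            x[i][d] += v[i][d]
        if cost(x[i]) < cost(pbest[i]):
            pbest[i] = x[i][:]
    gbest = min(pbest, key=cost)

print(f"retrieved parameters: chl={gbest[0]:.3f}, tsm={gbest[1]:.3f}")
```

    The study's question of accuracy and precision then becomes how far the retrieved parameters drift from truth once realistic sensor noise and correction errors are added to the observed spectrum before inversion.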

  15. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.
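    The abstract does not reproduce the paper's combinatorial rule; a common convention for merging independent error sources into a single predicted accuracy is root-sum-square, sketched here with invented contributions:

```python
import math

# Root-sum-square combination of independent error sources, a common
# convention in machine-tool error budgeting.  The source names and
# values are invented; the paper's own combinatorial rule may differ.
error_sources_um = {
    "scale calibration":   0.05,
    "Abbe offset":         0.08,
    "thermal drift":       0.10,
    "probe repeatability": 0.04,
}

rss = math.sqrt(sum(e ** 2 for e in error_sources_um.values()))
worst_case = sum(error_sources_um.values())

print(f"RSS estimate: {rss:.3f} um, worst case: {worst_case:.2f} um")
```

    The RSS figure assumes the sources are independent and roughly Gaussian; the arithmetic sum is the pessimistic bound, and a measured performance falling between the two is a typical validation outcome.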

  16. Measuring changes in Plasmodium falciparum transmission: precision, accuracy and costs of metrics.

    PubMed

    Tusting, Lucy S; Bousema, Teun; Smith, David L; Drakeley, Chris

    2014-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review 11 metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs and presenting an overall critique. We also review the nonlinear scaling relationships between five metrics of malaria transmission: the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our chapter highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, seroconversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection.

  17. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Surface and subsurface information from spiral volumetric CT was used to make external and internal measurements and to build finite element models (FEMs). SXCT and OSS were used to measure the lower limb residuum geometry of 13 below-knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and the corresponding plaster casts, and distance measurements were taken to determine the precision and accuracy of each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish the internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  18. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the ⁷⁵Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.
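    The "when counting statistics are not limiting" caveat follows from Poisson statistics: a net peak area of N counts carries a relative standard uncertainty of 1/sqrt(N). A quick sketch (count values illustrative):

```python
import math

# Counting-statistics sketch: a Poisson peak area of N counts has a
# relative standard uncertainty of 1/sqrt(N), so ~1 % precision needs
# on the order of 10,000 net counts.
for counts in (1_000, 10_000, 100_000):
    rel_sd = 1.0 / math.sqrt(counts)
    print(f"{counts:>7} counts -> {rel_sd:.2%} relative standard uncertainty")
```

    Only once the counting contribution is pushed well below 1% do the systematic sources discussed in the entry dominate the overall error budget.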

  19. Assessing genomic selection prediction accuracy in a dynamic barley breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection is a method to improve quantitative traits in crops and livestock by estimating breeding values of selection candidates using phenotype and genome-wide marker data sets. Prediction accuracy has been evaluated through simulation and cross-validation, however validation based on prog...

  20. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand King et al (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to +/- 3m using a 3 MHz radar system (which has a quarter wavelength of 14m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
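    The figures quoted above can be checked in a few lines, together with the range precision implied by the recording system; the 10 ns digitisation interval used here is an assumed value, not one taken from the presentation:

```python
# Quick check of the figures quoted above: wavelength of a 3 MHz radar
# in ice, its quarter-wave two-target resolution criterion, and the
# depth step implied by an assumed 10 ns digitisation interval.
C_VACUUM = 299.79e6          # m/s
N_ICE = 1.78                 # radio-frequency refractive index of ice
v_ice = C_VACUUM / N_ICE     # ~168 m/us propagation speed in ice

frequency = 3e6              # Hz, as in the 3 MHz system discussed
wavelength = v_ice / frequency
quarter_wave = wavelength / 4.0        # two-target resolution criterion

dt = 10e-9                   # s, assumed digitisation interval
depth_per_sample = v_ice * dt / 2.0    # two-way time to one-way depth

print(f"lambda/4 in ice: {quarter_wave:.1f} m")
print(f"depth per sample: {depth_per_sample:.2f} m")
```

    The sub-metre depth step is what makes a +/- 3 m range determination to a single strong reflector plausible even though two comparable reflectors closer than ~14 m could not be resolved.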

  1. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter for assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan may be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when it is properly applied, but a common problem is the camera lens orientation while the picture is taken. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that using 2 rulers placed in parallel below and above the wound for calibration increases the precision of area measurement on average 3.8-fold in comparison to measurement with one ruler used for calibration. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry with 2 rulers than with the Visitrak device, the Silhouette Mobile device or the AreaMe software-based method. Conclusion Calibration using 2 rulers in digital planimetry remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
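    The geometric origin of the bias that calibration must address can be shown with a toy computation: a flat wound imaged with the lens axis tilted by an angle theta from the surface normal appears foreshortened by roughly cos(theta) (flat surface and small field of view assumed; the wound area is invented):

```python
import math

# Toy computation of the foreshortening bias a tilted camera introduces
# in planimetry (flat wound, small field of view assumed): the
# projected area shrinks by cos(theta), which a single in-plane ruler
# cannot detect.
true_area_cm2 = 12.0
for tilt_deg in (0, 5, 10, 20, 30):
    apparent = true_area_cm2 * math.cos(math.radians(tilt_deg))
    error_pct = 100.0 * (apparent - true_area_cm2) / true_area_cm2
    print(f"tilt {tilt_deg:2d} deg: apparent area {apparent:5.2f} cm2 "
          f"({error_pct:+.1f} %)")
```

    A single ruler lying in the wound plane is foreshortened by the same factor along only one direction, so it cannot fully reveal the tilt; two rulers at different heights give the extra constraint the proposed calibration exploits.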

  2. Selection of Wavelengths for Optimum Precision in Simultaneous Spectrophotometric Determinations.

    ERIC Educational Resources Information Center

    DiTusa, Michael R.; Schilt, Alfred A.

    1985-01-01

    Although many textbooks include a description of simultaneous determinations employing absorption spectrophotometry and treat the mathematics necessary for analytical quantitations, treatment of analytical wavelength selection has been mostly qualitative. Therefore, a general method for selecting wavelengths for optimum precision in simultaneous…

  3. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point-count) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of the number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with few sample units of large area. However, scenarios with few sample units of large area provided more precise abundance estimates than those with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequence, this consideration is often an afterthought that occurs during the data analysis process.
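    Separating accuracy (bias) from precision (spread) by simulation can be sketched with a toy clustered-count model; the naive estimator and all settings below are invented for illustration and will not reproduce the paper's N-mixture results:

```python
import math
import random
import statistics

random.seed(7)

# Simulation sketch: repeat a survey design many times and summarise
# the estimator's bias (accuracy) and standard deviation (precision).
# Designs compared: many small units vs few large units, same total area.
TRUE_DENSITY = 4.0        # organisms per unit area
DETECTION = 0.6           # assumed per-individual detection probability

def poisson_draw(lam):
    """Poisson sample via Knuth's product-of-uniforms method."""
    limit = math.exp(-lam)
    k, p = 0, random.random()
    while p > limit:
        p *= random.random()
        k += 1
    return k

def survey(n_units, unit_area):
    """Estimate density from one simulated survey."""
    total = 0
    for _ in range(n_units):
        # spatial clustering: gamma-distributed density multiplier, mean 1
        local = TRUE_DENSITY * random.gammavariate(2.0, 0.5)
        total += poisson_draw(local * unit_area * DETECTION)
    return total / (n_units * unit_area * DETECTION)

def bias_and_sd(n_units, unit_area, reps=2000):
    estimates = [survey(n_units, unit_area) for _ in range(reps)]
    return (statistics.mean(estimates) - TRUE_DENSITY,
            statistics.stdev(estimates))

many_small = bias_and_sd(n_units=100, unit_area=1.0)
few_large = bias_and_sd(n_units=4, unit_area=25.0)
print("100 units x  1.0 area: bias %+.3f, sd %.3f" % many_small)
print("  4 units x 25.0 area: bias %+.3f, sd %.3f" % few_large)
```

    With spatial clustering, fewer independent units means fewer draws of the local density, so the few-large design here shows a wider spread; the paper's N-mixture setting behaves differently precisely because detection itself must be estimated.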

  4. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler.

    PubMed

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-10-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
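    Step (2), adjustment of accuracy based on a calibration curve, can be sketched as an ordinary least-squares fit of delivered versus commanded volume, inverted to correct future commands; the gravimetric volumes below are invented, and the real adjustment is written into an EVOware liquid class:

```python
# Sketch of the calibration-curve step: fit delivered volume against
# commanded volume by least squares, then invert the fit to correct
# future commands.  The volumes below are invented gravimetric results.
target_ul = [10.0, 50.0, 100.0, 300.0, 600.0, 900.0]       # commanded
delivered_ul = [9.2, 47.5, 95.8, 291.0, 584.5, 877.0]      # measured

n = len(target_ul)
mean_x = sum(target_ul) / n
mean_y = sum(delivered_ul) / n
slope = (sum(x * y for x, y in zip(target_ul, delivered_ul))
         - n * mean_x * mean_y) / (sum(x * x for x in target_ul)
                                   - n * mean_x ** 2)
intercept = mean_y - slope * mean_x

def corrected_command(desired_ul):
    """Volume to command so the fitted delivered volume equals desired_ul."""
    return (desired_ul - intercept) / slope

print(f"delivered = {slope:.4f} * commanded + ({intercept:.2f}) uL")
print(f"to deliver 200 uL, command {corrected_command(200.0):.1f} uL")
```

    Step (3), confirmation, then amounts to re-running the gravimetric measurement with corrected commands and checking that accuracy and precision fall within specification.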

  5. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  7. Accuracy, precision, usability, and cost of free chlorine residual testing methods.

    PubMed

    Murray, Anna; Lantagne, Daniele

    2015-03-01

    Chlorine is the most widely used disinfectant worldwide, partially because residual protection is maintained after treatment. This residual is measured using colorimetric test kits varying in accuracy, precision, training required, and cost. Seven commercially available colorimeters, color wheel and test tube comparator kits, pool test kits, and test strips were evaluated for use in low-resource settings by: (1) measuring in quintuplicate 11 samples from 0.0-4.0 mg/L free chlorine residual in laboratory and natural light settings to determine accuracy and precision; (2) conducting volunteer testing where participants used and evaluated each test kit; and (3) comparing costs. Laboratory accuracy ranged from 5.1-40.5% measurement error, with colorimeters the most accurate and test strip methods the least. Variation between laboratory and natural light readings occurred with one test strip method. Volunteer participants found test strip methods easiest and color wheel methods most difficult, and were most confident in the colorimeter and least confident in test strip methods. Costs range from 3.50-444 USD for 100 tests. Application of a decision matrix found colorimeters and test tube comparator kits were most appropriate for use in low-resource settings; it is recommended users apply the decision matrix themselves, as the appropriate kit might vary by context.

  8. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.

  9. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    PubMed

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K(D)) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K(D) value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53:125-138, 2012). In this study, we demonstrate that classical line shape analysis, applied to a single set of (1)H-(15)N 2D HSQC NMR spectra acquired using the precise protein-ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k(off)). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k(off) ~ 3,000 s(-1) in this work, the accuracy of classical line shape analysis was determined to be better than 5% by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k(off) from line shape analysis of NMR spectra was determined to be 13%, in agreement with the theoretical precision of 12% from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k(off) values over a wide range, from 100 to 15,000 s(-1). The validity of line shape analysis for k(off) values approaching intermediate exchange (~100 s(-1)) may be facilitated by more accurate K(D) measurements
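    For orientation, the fast-exchange behavior described above is commonly summarized by the textbook relation R_ex ≈ p_free·p_bound·Δω²/k_ex for the exchange contribution to line broadening. The sketch below evaluates only this approximate relation; it is not the authors' GAMMA-based line shape analysis, and the population, shift difference, and rate are illustrative:

```python
from math import pi

def exchange_broadening_hz(p_bound, delta_nu_hz, k_ex):
    """Extra linewidth (Hz) in the fast-exchange limit:
    R_ex = p_free * p_bound * (2*pi*delta_nu)**2 / k_ex, width = R_ex / pi."""
    p_free = 1.0 - p_bound
    delta_omega = 2.0 * pi * delta_nu_hz       # shift difference, rad/s
    r_ex = p_free * p_bound * delta_omega ** 2 / k_ex
    return r_ex / pi

# 50% bound, 100 Hz shift difference, k_ex = 3,000 s^-1 (the regime above)
lw = exchange_broadening_hz(p_bound=0.5, delta_nu_hz=100.0, k_ex=3000.0)
```

    Faster exchange (larger k_ex) narrows the line, which is why fitted linewidths constrain the exchange kinetics.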

  10. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic action; therefore, the aim of this study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared with central nasopharyngeal measurement. Methods In this prospective observational study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences (SPSS), version 19. Results There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. The tympanic method (right and left) is recommended for assessing body temperature in intensive care units because of its high accuracy and acceptable precision. PMID:27621673
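    The kappa agreement reported above can be reproduced with a few lines; this sketch computes Cohen's kappa for two raters, using invented febrile/afebrile labels purely for illustration:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

ref  = ["febrile", "febrile", "afebrile", "afebrile", "afebrile", "febrile"]
test = ["febrile", "febrile", "afebrile", "afebrile", "febrile",  "febrile"]
kappa = cohens_kappa(ref, test)
```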

  12. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, the emergent behavior manifested by the complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although assessing the precision and accuracy of ecorestoration field data is conceptually the same as for laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess the precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  13. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  14. Accuracy, precision, and method detection limits of quantitative PCR for airborne bacteria and fungi.

    PubMed

    Hospodsky, Denina; Yamamoto, Naomichi; Peccia, Jordan

    2010-11-01

    Real-time quantitative PCR (qPCR) for rapid and specific enumeration of microbial agents is finding increased use in aerosol science. The goal of this study was to determine qPCR accuracy, precision, and method detection limits (MDLs) within the context of indoor and ambient aerosol samples. Escherichia coli and Bacillus atrophaeus vegetative bacterial cells and Aspergillus fumigatus fungal spores loaded onto aerosol filters were considered. Efficiencies associated with recovery of DNA from aerosol filters were low, and excluding these efficiencies in quantitative analysis led to underestimating the true aerosol concentration by 10 to 24 times. Precision near detection limits ranged from a 28% to 79% coefficient of variation (COV) for the three test organisms, and the majority of this variation was due to instrument repeatability. Depending on the organism and sampling filter material, precision results suggest that qPCR is useful for determining dissimilarity between two samples only if the true differences are greater than 1.3 to 3.2 times (95% confidence level at n = 7 replicates). For MDLs, qPCR was able to produce a positive response with 99% confidence from the DNA of five B. atrophaeus cells and less than one A. fumigatus spore. Overall MDL values that included sample processing efficiencies ranged from 2,000 to 3,000 B. atrophaeus cells per filter and 10 to 25 A. fumigatus spores per filter. Applying the concepts of accuracy, precision, and MDL to qPCR aerosol measurements demonstrates that sample processing efficiencies must be accounted for in order to accurately estimate bioaerosol exposure, provides guidance on the necessary statistical rigor required to understand significant differences among separate aerosol samples, and prevents undetected (i.e., nonquantifiable) values for true aerosol concentrations that may be significant.
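    The correction the study argues for is a simple division by the recovery efficiency; with filter-recovery efficiencies in the 4-10% range, omitting it reproduces the reported 10- to 24-fold underestimate. A sketch with illustrative numbers:

```python
def corrected_concentration(measured_copies, extraction_efficiency):
    """Divide the qPCR-measured quantity by the DNA recovery efficiency."""
    if not 0.0 < extraction_efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    return measured_copies / extraction_efficiency

# A 5% filter-recovery efficiency implies a 20-fold correction
true_copies = corrected_concentration(1.0e4, 0.05)
underestimate_factor = true_copies / 1.0e4
```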

  15. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). The occurrence of imidacloprid in 343 samples of oranges, tangerines, date plums, and watermelons from the Valencian Community (Spain) was investigated. Nine additional pesticides were chosen because they are recommended for orchard treatment together with imidacloprid. Mulliken population analysis was applied to describe the charge distribution in imidacloprid. Partitioned energy terms and virial ratios were calculated for selected interacting molecules. A new technique based on comparing the decomposed total energy terms at various configurations is demonstrated, and the interaction ability could be established correctly in the case studied. An attempt is also made to address accuracy and precision, quantities that are well established in experimental measurement. When a precise theoretical description is achieved for the contributing monomers and for the interacting complex, some properties of the latter system can be predicted to good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.

  16. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry-based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.
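    A minimal sketch of the linear Cox-OR relation mentioned above, assuming the standard definitions Cox = (2·O − H + 3·N)/C (molar amounts, e.g. from elemental analysis) and OR = 1 − Cox/4; the glucose example is illustrative:

```python
def cox(c_mol, h_mol, n_mol, o_mol):
    """Carbon oxidation state from molar elemental composition (CHNO)."""
    return (2 * o_mol - h_mol + 3 * n_mol) / c_mol

def oxidative_ratio(cox_value):
    """OR = moles O2 consumed per mole CO2 released on full oxidation."""
    return 1.0 - cox_value / 4.0

# Glucose, C6H12O6: Cox = 0, so one mole of O2 per mole of CO2 on combustion
glucose_cox = cox(6, 12, 0, 6)
glucose_or = oxidative_ratio(glucose_cox)
```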

  17. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised interest in precise and accurate pH measurements in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. Until now, however, the method had only been tested in oceanic systems with a relatively stable, high salinity and a small pH range. This paper describes the first application of such a pH measurement system under the conditions of the Baltic Sea, which is characterized by wide salinity and pH ranges. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as the indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges of the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes in the Baltic Sea. Given these encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, the spectrophotometric method is also recommended for pH monitoring and trend detection in the Baltic Sea.
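    For orientation, spectrophotometric pH with m-cresol purple is conventionally computed from the absorbance ratio R = A578/A434 in the Clayton-Byrne form pH = pK(ind) + log10((R − e1)/(e2 − R·e3)). The indicator constants and pK below are illustrative placeholders; real values depend on temperature, salinity, and dye purity:

```python
from math import log10

def spectrophotometric_ph(a578, a434, pk_ind, e1=0.0069, e2=2.222, e3=0.133):
    """pH from an m-cresol purple absorbance ratio (Clayton-Byrne form)."""
    r = a578 / a434
    return pk_ind + log10((r - e1) / (e2 - r * e3))

# Illustrative absorbances and indicator pK
ph = spectrophotometric_ph(a578=0.55, a434=0.60, pk_ind=8.09)
```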

  18. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented to provide a revised methodology with the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal, and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.
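    The standard (forward) form of the Ergun equation, which the authors rearranged to extract void fraction and particle density from packed-bed pressure drops, is sketched below; inputs are SI and the example numbers are illustrative, not from the paper's database:

```python
def ergun_pressure_gradient(u, d_p, eps, mu, rho):
    """Packed-bed pressure drop per unit length (Pa/m), Ergun equation:
    viscous (Blake-Kozeny) term plus inertial (Burke-Plummer) term."""
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * u / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * rho * (1.0 - eps) * u ** 2 / (eps ** 3 * d_p)
    return viscous + inertial

# Air at ambient conditions through 500 µm particles at 40% voidage
dp_dl = ergun_pressure_gradient(u=0.1, d_p=500e-6, eps=0.4, mu=1.8e-5, rho=1.2)
```

    Because the void fraction enters as eps**3, small errors in eps move the predicted pressure drop strongly, which is why it is a sensitive fitting target.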

  19. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min-1 (100 g)-1 while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min-1 (100 g)-1 in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min-1 (100 g)-1 and R2 = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min-1 (100 g)-1, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
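    The Bland-Altman comparison used above reduces to a mean difference (bias) and 95% limits of agreement; the paired HABF values in this sketch are invented for illustration:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

dce_ct      = [50.0, 37.1, 99.8, 52.3, 41.0]   # ml min^-1 (100 g)^-1
microsphere = [51.9, 40.7, 99.7, 50.8, 39.5]
bias, (lo, hi) = bland_altman(dce_ct, microsphere)
```

    A bias indistinguishable from zero, as the study reports, means neither method systematically over- or under-reads the other.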

  20. Accuracy and precision of integumental linear dimensions in a three-dimensional facial imaging system

    PubMed Central

    Kim, Soo-Hwan; Jung, Woo-Young; Seo, Yu-Jin; Kim, Kyung-A; Park, Ki-Ho

    2015-01-01

    Objective A recently developed facial scanning method uses three-dimensional (3D) surface imaging with a light-emitting diode. Such scanning enables surface data to be captured in high-resolution color and at relatively fast speeds. The purpose of this study was to evaluate the accuracy and precision of 3D images obtained using the Morpheus 3D® scanner (Morpheus Co., Seoul, Korea). Methods The sample comprised 30 subjects aged 24-34 years (mean 29.0 ± 2.5 years). To test the correlation between direct and 3D image measurements, 21 landmarks were labeled on the face of each subject. Sixteen direct measurements were obtained twice using digital calipers; the same measurements were then made on two sets of 3D facial images. The mean values of measurements obtained from both methods were compared. To investigate the precision, a comparison was made between two sets of measurements taken with each method. Results When comparing the variables from both methods, five of the 16 possible anthropometric variables were found to be significantly different. However, in 12 of the 16 cases, the mean difference was under 1 mm. The average value of the differences for all variables was 0.75 mm. Precision was high in both methods, with error magnitudes under 0.5 mm. Conclusions 3D scanning images have high levels of precision and fairly good congruence with traditional anthropometry methods, with mean differences of less than 1 mm. 3D surface imaging using the Morpheus 3D® scanner is therefore a clinically acceptable method of recording facial integumental data. PMID:26023538

  1. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if remaining unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal increases and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards.

  3. Selective Influences of Precision and Power Grips on Speech Categorization.

    PubMed

    Tiainen, Mikko; Tiippana, Kaisa; Vainio, Martti; Peromaa, Tarja; Komeilipoor, Naeem; Vainio, Lari

    2016-01-01

    Recent studies have shown that articulatory gestures are systematically associated with specific manual grip actions. Here we show that executing such actions can influence performance on a speech-categorization task. Participants watched and/or listened to speech stimuli while executing either a power or a precision grip. Grip performance influenced the syllable categorization by increasing the proportion of responses of the syllable congruent with the executed grip (power grip-[ke] and precision grip-[te]). Two follow-up experiments indicated that the effect was based on action-induced bias in selecting the syllable. PMID:26978074

  4. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation, and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias is canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the centimeter level.

  5. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

The calibration and traceability of high-resolution frequency-modulated continuous-wave (FMCW) ladar sources is a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of the molecular frequency references HCN (C-band) or CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST together with the measurement errors yields an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to compare the relative precision and accuracy. The comparisons show <10 ppm agreement, which was within the combined estimated uncertainties of the FMCW system and interferometer. PMID:26193146

  6. Accuracy improvement of protrusion angle of carbon nanotube tips by precision multiaxis nanomanipulator

    SciTech Connect

    Young Song, Won; Young Jung, Ki; O, Beom-Hoan; Park, Byong Chon

    2005-02-01

In order to manufacture a carbon nanotube (CNT) tip in which the attachment angle and position of the CNT are precisely adjusted, a nanomanipulator was installed inside a scanning electron microscope (SEM). A CNT tip, an atomic force microscopy (AFM) probe to which a nanotube is attached, is known to be the most appropriate probe for measuring high-aspect-ratio features. The developed nanomanipulator has two modules, each with three degrees of rectilinear freedom and one of rotational freedom at an accuracy of tens of nanometers, enabling the manufacture of more accurate CNT tips. The present study produced a CNT tip with an attachment-angle error of less than 10° through three-dimensional manipulation of a multiwalled carbon nanotube and an AFM probe inside the SEM.

  7. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In addition, the minor isotope 238Pu was quantified despite the presence of a significant quantity of 238U in the samples.
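The quoted accuracy figure is a relative deviation of a measured isotope ratio from a bracketing standard; a minimal sketch follows (the certified and measured ratio values below are invented, not the paper's data):

```python
certified_ratio = 0.24100   # hypothetical certified 240Pu/239Pu of the bracketing standard
measured_ratio = 0.24269    # hypothetical sample measurement

# Absolute accuracy as a percentage: relative deviation from the certified value.
accuracy_pct = 100.0 * abs(measured_ratio / certified_ratio - 1.0)
```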

  9. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or with severely broken and missing teeth.

  10. Accuracy, Precision, and Reliability of Chemical Measurements in Natural Products Research

    PubMed Central

    Betz, Joseph M.; Brown, Paula N.; Roman, Mark C.

    2010-01-01

    Natural products chemistry is the discipline that lies at the heart of modern pharmacognosy. The field encompasses qualitative and quantitative analytical tools that range from spectroscopy and spectrometry to chromatography. Among other things, modern research on crude botanicals is engaged in the discovery of the phytochemical constituents necessary for therapeutic efficacy, including the synergistic effects of components of complex mixtures in the botanical matrix. In the phytomedicine field, these botanicals and their contained mixtures are considered the active pharmaceutical ingredient (API), and pharmacognosists are increasingly called upon to supplement their molecular discovery work by assisting in the development and utilization of analytical tools for assessing the quality and safety of these products. Unlike single-chemical entity APIs, botanical raw materials and their derived products are highly variable because their chemistry and morphology depend on the genotypic and phenotypic variation, geographical origin and weather exposure, harvesting practices, and processing conditions of the source material. Unless controlled, this inherent variability in the raw material stream can result in inconsistent finished products that are under-potent, over-potent, and/or contaminated. Over the decades, natural products chemists have routinely developed quantitative analytical methods for phytochemicals of interest. Quantitative methods for the determination of product quality bear the weight of regulatory scrutiny. These methods must be accurate, precise, and reproducible. Accordingly, this review discusses the principles of accuracy (relationship between experimental and true value), precision (distribution of data values), and reliability in the quantitation of phytochemicals in natural products. PMID:20884340
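As a minimal numeric illustration of the accuracy/precision distinction drawn in this review (the replicate values and reference concentration below are made up):

```python
import statistics

true_value = 10.0                             # assumed reference concentration, mg/g
replicates = [10.4, 10.1, 10.3, 10.2, 10.5]   # hypothetical replicate assay results

mean = statistics.mean(replicates)
accuracy_bias = mean - true_value             # accuracy: closeness of mean to true value
precision_sd = statistics.stdev(replicates)   # precision: spread of the replicates
recovery_pct = 100.0 * mean / true_value      # common way to report analytical accuracy
```

Here the method is precise (SD ≈ 0.16 mg/g) but biased high (103% recovery), showing why both properties must be validated separately.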

  11. Transfer accuracy and precision scoring in planar bone cutting validated with ex vivo data.

    PubMed

    Milano, Federico Edgardo; Ritacco, Lucas Eduardo; Farfalli, Germán Luis; Bahamonde, Luis Alberto; Aponte-Tinao, Luis Alberto; Risk, Marcelo

    2015-05-01

The use of interactive surgical scenarios for virtual preoperative planning of osteotomies has increased in the last 5 years. As several authors have reported, this technology has been used in tumor resection osteotomies, knee osteotomies, and spine surgery with good results. A digital three-dimensional preoperative plan makes it possible to quantitatively evaluate the transfer process from the virtual plan to the anatomy of the patient. We introduce an exact definition of the accuracy and precision of this transfer process for planar bone cutting. We present a method to compute these properties from ex vivo data. We also propose a clinical score to assess the goodness of a cut. A computer simulation is used to characterize the definitions and the data generated by the measurement method. The definitions and method are evaluated on 17 ex vivo planar cuts from tumor resection osteotomies. The results show that the proposed method and definitions are highly correlated with a previous definition of accuracy based on ISO 1101. The score is also evaluated by showing that it distinguishes among different transfer techniques based on the location and shape of its distribution. The introduced definitions produce acceptable results in cases where the ISO-based definition produces counterintuitive results.

  12. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

This study aimed to create an evidence base for the detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) of five published algorithms for the detection of hoof-on and hoof-off, using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration, with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot we derived a combination of method A for hoof-on and method E for hoof-off, resulting in a bias of -6.2% of stance in the front limbs, and method B for the hind limbs, with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground, and the results emphasise the need for different algorithms for front limbs versus hind limbs in trot.
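The bias/SD comparison against the force-plate reference can be sketched like this (the stance durations below are fabricated, not the study's measurements):

```python
import statistics

reference_ms = [620, 640, 615, 655, 630]   # force-plate stance durations (ms)
algorithm_ms = [645, 668, 640, 688, 660]   # hypothetical motion-capture detections (ms)

# Per-stride error as a percentage of the reference stance duration.
errors_pct = [100.0 * (a - r) / r for a, r in zip(algorithm_ms, reference_ms)]

bias = statistics.mean(errors_pct)   # accuracy: systematic over/underestimation
sd = statistics.stdev(errors_pct)    # precision: stride-to-stride scatter
```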

  13. Selective assemblies of giant tetrahedra via precisely controlled positional interactions

    NASA Astrophysics Data System (ADS)

    Huang, Mingjun; Hsu, Chih-Hao; Wang, Jing; Mei, Shan; Dong, Xuehui; Li, Yiwen; Li, Mingxuan; Liu, Hao; Zhang, Wei; Aida, Takuzo; Zhang, Wen-Bin; Yue, Kan; Cheng, Stephen Z. D.

    2015-04-01

    Self-assembly of rigid building blocks with explicit shape and symmetry is substantially influenced by the geometric factors and remains largely unexplored. We report the selective assembly behaviors of a class of precisely defined, nanosized giant tetrahedra constructed by placing different polyhedral oligomeric silsesquioxane (POSS) molecular nanoparticles at the vertices of a rigid tetrahedral framework. Designed symmetry breaking of these giant tetrahedra introduces precise positional interactions and results in diverse selectively assembled, highly ordered supramolecular lattices including a Frank-Kasper A15 phase, which resembles the essential structural features of certain metal alloys but at a larger length scale. These results demonstrate the power of persistent molecular geometry with balanced enthalpy and entropy in creating thermodynamically stable supramolecular lattices with properties distinct from those of other self-assembling soft materials.

  14. Systematic accuracy and precision analysis of video motion capturing systems--exemplified on the Vicon-460 system.

    PubMed

    Windolf, Markus; Götzen, Nils; Morlock, Michael

    2008-08-28

With rising demand for highly accurate acquisition of small motions, the use of video-based motion capturing is becoming more and more popular. However, the performance of these systems strongly depends on a variety of influencing factors. A method was developed to systematically assess the accuracy and precision of motion capturing systems with regard to influential system parameters. A calibration and measurement robot was designed to perform a repeatable dynamic calibration and to determine the resultant system accuracy and precision in a control volume for small motion magnitudes (180 x 180 x 150 mm³). The procedure was exemplified on the Vicon-460 system. The following parameters were analyzed: camera setup, calibration volume, marker size and lens filter application. Equipped with four cameras, the Vicon-460 system provided an overall accuracy of 63 ± 5 µm and overall precision (noise level) of 15 µm for the most favorable parameter setting. Arbitrary changes in camera arrangement revealed variations in mean accuracy between 76 and 129 µm. The noise level normal to the cameras' projection plane was found to be higher than in the other coordinate directions. Measurements including regions unaffected by the dynamic calibration showed considerably lower accuracy (221 ± 79 µm). Larger marker diameters led to higher accuracy and precision. Accuracy dropped significantly when an optical lens filter was used. This study revealed significant influence of the system environment on the performance of video-based motion capturing systems. With careful configuration, optical motion capturing provides a powerful measurement opportunity for the majority of biomechanical applications.

  15. Improving accuracy and precision in biological applications of fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Wei

The quantitative understanding of cellular and molecular responses in living cells is important for many reasons, including identifying potential molecular targets for treatments of diseases like cancer. Fluorescence lifetime imaging microscopy (FLIM) can quantitatively measure these responses in living cells by producing spatially resolved images of fluorophore lifetime, and has advantages over intensity-based measurements. However, in live-cell microscopy applications using high-intensity light sources such as lasers, maintaining biological viability remains critical. Although high-speed, time-gated FLIM significantly reduces the light delivered to live cells, making measurements at low light levels remains a challenge affecting quantitative FLIM results. We can significantly improve both accuracy and precision in gated FLIM applications. We use fluorescence resonance energy transfer (FRET) with fluorescent proteins to detect molecular interactions in living cells: the use of FLIM, better fluorophores, and temperature/CO2 controls can improve live-cell FRET results with higher consistency, better statistics, and less non-specific FRET (for negative control comparisons, p-value = 0.93 (physiological) vs. 9.43E-05 (non-physiological)). Several lifetime determination methods are investigated to optimize gating schemes. We demonstrate a reduction in relative standard deviation (RSD) from 52.57% to 18.93% with optimized gating in an example under typical experimental conditions. We develop two novel total variation (TV) image denoising algorithms, FWTV (f-weighted TV) and UWTV (u-weighted TV), that can achieve significant improvements for real imaging systems. With live-cell images, they improve the precision of local lifetime determination without significantly altering the global mean lifetime values (<5% lifetime changes). Finally, by combining optimal gating and TV denoising, even low-light excitation can achieve precision better than that obtained with high-light excitation.
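The RSD metric used above is simply the standard deviation normalized by the mean. A sketch with invented lifetime estimates (note the denoised set keeps the same mean, echoing the <5% mean-lifetime change reported):

```python
import statistics

def relative_std_pct(values):
    """Relative standard deviation (RSD) in percent: 100 * SD / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical per-pixel lifetime estimates (ns) before and after denoising.
raw_lifetimes = [2.1, 3.4, 1.6, 2.9, 2.0]
denoised_lifetimes = [2.3, 2.6, 2.2, 2.5, 2.4]
```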

  16. Parallaxes and Proper Motions of QSOs: A Test of Astrometric Precision and Accuracy

    NASA Astrophysics Data System (ADS)

    Harris, Hugh C.; Dahn, Conard C.; Zacharias, Norbert; Canzian, Blaise; Guetter, Harry H.; Levine, Stephen E.; Luginbuhl, Christian B.; Monet, Alice K. B.; Monet, David G.; Pier, Jeffrey R.; Stone, Ronald C.; Subasavage, John P.; Tilleman, Trudy; Walker, Richard L.; Johnston, Kenneth J.

    2016-11-01

    Optical astrometry of 12 fields containing quasi-stellar objects (QSOs) is presented. The targets are radio sources in the International Celestial Reference Frame with accurate radio positions that also have optical counterparts. The data are used to test several quantities: the internal precision of the relative optical astrometry, the relative parallaxes and proper motions, the procedures to correct from relative to absolute parallax and proper motion, the accuracy of the absolute parallaxes and proper motions, and the stability of the optical photocenters for these optically variable QSOs. For these 12 fields, the mean error in absolute parallax is 0.38 mas and the mean error in each coordinate of absolute proper motion is 1.1 mas yr‑1. The results yield a mean absolute parallax of ‑0.03 ± 0.11 mas. For 11 targets, we find no significant systematic motions of the photocenters at the level of 1–2 mas over the 10 years of this study; for one BL Lac object, we find a possible motion of 4 mas correlated with its brightness.
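The quoted ±0.11 mas uncertainty on the mean parallax is consistent with the standard error of the mean for 12 fields, each measured to ~0.38 mas. A quick check, assuming equal and independent per-field errors (our assumption, not stated in the abstract):

```python
import math

per_field_error_mas = 0.38   # mean error in absolute parallax per field (from above)
n_fields = 12

# Standard error of the unweighted mean over independent fields.
sem_mas = per_field_error_mas / math.sqrt(n_fields)
```

This gives ≈0.11 mas, matching the stated uncertainty of the mean absolute parallax.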

  17. AMES Stereo Pipeline Derived DEM Accuracy Experiment Using LROC-NAC Stereopairs and Weighted Spatial Dependence Simulation for Lunar Site Selection

    NASA Astrophysics Data System (ADS)

    Laura, J. R.; Miller, D.; Paul, M. V.

    2012-03-01

We present an accuracy assessment of AMES Stereo Pipeline derived DEMs for lunar site selection using weighted spatial dependence simulation, together with a call for AMES-derived DEMs from outside groups to facilitate a statistical precision analysis.

  18. The Signatures of Selection for Translational Accuracy in Plant Genes

    PubMed Central

    Porceddu, Andrea; Zenoni, Sara; Camiolo, Salvatore

    2013-01-01

    Little is known about the natural selection of synonymous codons within the coding sequences of plant genes. We analyzed the distribution of synonymous codons within plant coding sequences and found that preferred codons tend to encode the more conserved and functionally important residues of plant proteins. This was consistent among several synonymous codon families and applied to genes with different expression profiles and functions. Most of the randomly chosen alternative sets of codons scored weaker associations than the actual sets of preferred codons, suggesting that codon position within plant genes and codon usage bias have coevolved to maximize translational accuracy. All these findings are consistent with the mistranslation-induced protein misfolding theory, which predicts the natural selection of highly preferred codons more frequently at sites where translation errors could compromise protein folding or functionality. Our results will provide an important insight in future studies of protein folding, molecular evolution, and transgene design for optimal expression. PMID:23695187

  19. Optically-Selected Cluster Catalogs As a Precision Cosmology Tool

    SciTech Connect

    Rozo, Eduardo; Wechsler, Risa H.; Koester, Benjamin P.; Evrard, August E.; McKay, Timothy A.; /Michigan U.

    2007-03-26

We introduce a framework for describing the halo selection function of optical cluster finders. We treat the problem as separable into a term that describes the intrinsic galaxy content of a halo (the Halo Occupation Distribution, or HOD) and a term that captures the effects of projection and selection by the particular cluster finding algorithm. Using mock galaxy catalogs tuned to reproduce the luminosity-dependent correlation function and the empirical color-density relation measured in the SDSS, we characterize the maxBCG algorithm applied by Koester et al. to the SDSS galaxy catalog. We define and calibrate measures of completeness and purity for this algorithm, and demonstrate successful recovery of the underlying cosmology and HOD when applied to the mock catalogs. We identify principal components (combinations of cosmology and HOD parameters) that are recovered by survey counts as a function of richness, and demonstrate that percent-level accuracies are possible in the first two components if the selection function can be understood to ~15% accuracy.
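Completeness and purity, as used above, can be written down in a few lines; the matched-catalog counts here are toy numbers, not from the maxBCG analysis:

```python
n_true_halos = 200          # halos above the mass threshold in the mock
n_detected_clusters = 190   # clusters returned by the finder
n_matched = 180             # detections uniquely matched to a true halo

completeness = n_matched / n_true_halos      # fraction of halos recovered
purity = n_matched / n_detected_clusters     # fraction of detections that are real
```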

  20. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.

  1. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P radial orbit accuracy expectation and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0 cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  2. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

A systematic review of the literature was conducted to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity among the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement among the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence for these data is not strong. PMID:27563194

  5. Evaluation of Accuracy in Kinematic GPS Analyses Using a Precision Roving Antenna Platform

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sweeney, A.; Fujimoto, H.; Osaki, H.; Kawai, E.; Ichikawa, R.; Kondo, T.; Osada, Y.; Chadwell, C. D.

    2002-12-01

Most tectonic plate boundaries and seismogenic zones of interplate earthquakes lie beneath the ocean, and our knowledge of interplate coupling and of the generation processes of these earthquakes remains limited. Seafloor geodesy will consequently play a very important role in improving our understanding of the physical processes near plate boundaries. Seafloor positioning using a GPS/Acoustic technique is one potential method for detecting displacement of the ocean bottom. The accuracy of the technique depends on two parts: acoustic ranging in seawater, and kinematic GPS (KGPS) analysis. The accuracy of KGPS was evaluated in the following way: 1) Static test: First, we carried out an experiment to confirm the capability of the KGPS analysis using GIPSY/OASIS-II for a long baseline of about 310 km. We used two GPS stations on land, one as a reference station in Sendai, and the other in Tokyo as a rover, whose coordinates can vary from epoch to epoch. This baseline length is required for our project because the farthest seafloor transponder array is 280 km east of the nearest coastal GPS station. The 1 cm stability of the KGPS solution was achieved in the horizontal components of the 310-km baseline over the course of one day. The vertical component showed fluctuation, probably due to parameters unmodeled in the analysis such as multipath and/or tropospheric delay. 2) Sea surface experiment: During cruise KT01-11 of the R/V Tansei-maru, Ocean Research Institute (ORI), University of Tokyo, around the Japan Trench in late July 2001, we deployed three precision acoustic transponders on both the Pacific plate (280 km from the coast, depth around 5450 m) and the landward slope (110 km from the coast, depth around 1600 m). We used a surface buoy with 3 GPS antennas, a motion sensor, a hydrophone, and a computer for data acquisition and control to make combined GPS/Acoustic observations. The buoy was towed about 80 m away from the R/V to reduce the impact of ship

  6. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques were used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance were used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, a Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest that analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.
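The three performance measures can be computed directly from a pair of daily records; the discharge values here are invented for illustration:

```python
import statistics

baseline = [12.0, 11.5, 10.8, 11.2, 12.4]   # baseline daily discharge (m^3/s)
estimate = [11.6, 11.9, 10.2, 11.5, 12.0]   # hypothetical estimated record

daily_errors = [e - b for e, b in zip(estimate, baseline)]

# Measure 1: error in average discharge over the ice-affected period.
avg_discharge_error = statistics.mean(estimate) - statistics.mean(baseline)
# Measures 2 and 3: mean and standard deviation of the daily errors.
mean_error = statistics.mean(daily_errors)
sd_error = statistics.stdev(daily_errors)
```

Note that the first two measures coincide for records of equal length; the SD of the daily errors adds information about day-to-day scatter.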

  7. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting sub-Millimeter Limb Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As the first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters such as pressure variability, solar events, and etc. on mesospheric O3 will also be investigated. Ozone, is known to be important due to the stratospheric O3 layer protection of life on Earth by absorbing harmful UV radiations. However, O3 chemistry can be studied purely in the mesosphere without distraction of heterogeneous situation and dynamical variations due to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km, to about 80 percent at 70 km. Although despite the apparent simplicity of this situation, significant disagreements exist between the predictions from the existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a few to several tens percent of precision from upper troposphere to the mesosphere. SMILES was developed by the Japanese Aerospace eXploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT) located at the Japanese Experiment Module (JEM) on the International Space Station (ISS). SMILES has successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range of 38S to 65N from October 2009 to April 2010. 
A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  8. Improvement of olfactometric measurement accuracy and repeatability by optimization of panel selection procedures.

    PubMed

    Capelli, L; Sironi, S; Del Rosso, R; Céntola, P; Bonati, S

    2010-01-01

    The EN 13725:2003 standard, which standardizes the determination of odour concentration by dynamic olfactometry, fixes the limits for panel selection in terms of the individual threshold towards a reference gas (n-butanol in nitrogen) and of the standard deviation of the responses. Nonetheless, laboratories have some degrees of freedom in developing their own procedures for panel selection and evaluation. Most Italian olfactometric laboratories use a similar procedure for panel selection, based on the repeated analysis of samples of n-butanol at a concentration of 60 ppm. The first part of this study demonstrates that this procedure may give rise to a sort of "smartening" of the assessors, meaning that they become able to guess the right answers in order to maintain their qualification as panel members, independently of their real olfactory perception. For this reason, the panel selection procedure was revised with the aim of making it less repetitive, thereby preventing panel members from being able to guess the answers needed to comply with the selection criteria. The selection of new panel members and the screening of the active ones according to the revised procedure proved it to be more selective than the "standard" one. Finally, the results of tests with n-butanol conducted after the introduction of the revised procedure for panel selection and regular verification showed an effective improvement of the laboratory's measurement performance in terms of accuracy and precision.
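    A panel-member screening of the kind described above checks each candidate's repeated n-butanol thresholds against a geometric-mean window and a spread limit. A minimal sketch; the numeric limits used here (geometric mean between 20 and 80 ppb, antilog of the log-threshold standard deviation below 2.3) are assumptions for illustration, and the normative values should be taken from the standard itself.

```python
import math

# EN 13725-style qualification check on a candidate's repeated individual
# threshold estimates (ITEs) for n-butanol. The acceptance window (20-80 ppb)
# and maximum spread factor (2.3) are assumed illustrative limits.
def qualifies(thresholds_ppb, lo=20.0, hi=80.0, max_spread=2.3):
    logs = [math.log10(t) for t in thresholds_ppb]
    n = len(logs)
    mean_log = sum(logs) / n
    geo_mean = 10 ** mean_log                     # geometric-mean threshold
    sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))
    spread = 10 ** sd_log                         # antilog of the log10 SD
    return lo <= geo_mean <= hi and spread < max_spread

# A candidate with consistent responses near 40 ppb passes ...
print(qualifies([35, 42, 38, 45, 40]))   # True
# ... while an erratic candidate is rejected on the spread criterion.
print(qualifies([5, 400, 30, 900, 12]))
```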

  10. Selective Effect of Physical Fatigue on Motor Imagery Accuracy

    PubMed Central

    Di Rienzo, Franck; Collet, Christian; Hoyek, Nady; Guillot, Aymeric

    2012-01-01

    While the use of motor imagery (the mental representation of an action without overt execution) during actual training sessions is usually recommended, experimental studies examining the effect of physical fatigue on subsequent motor imagery performance are sparse and have yielded divergent findings. Here, we investigated whether physical fatigue occurring during an intense sport training session affected motor imagery ability. Twelve swimmers (nine males, mean age 15.5 years) completed a 45 min physically fatiguing protocol in which they swam at 70% to 100% of their maximal aerobic speed. We tested motor imagery ability immediately before and after the fatiguing protocol. Participants randomly imagined performing a swim turn using internal and external visual imagery. Self-report ratings, imagery times and electrodermal responses, an index of alertness from the autonomic nervous system, were the dependent variables. Self-report ratings indicated that participants did not encounter difficulty when performing motor imagery after fatigue. However, motor imagery times were significantly shortened during the posttest compared to both pretest and actual turn times, thus indicating reduced timing accuracy. Looking at the selective effect of physical fatigue on external visual imagery did not reveal any difference before and after fatigue, whereas significantly shorter imagined times and electrodermal responses (respectively 15% and 48% decrease, p<0.001) were observed during the posttest for internal visual imagery. A significant correlation (r = 0.64; p<0.05) was observed between motor imagery vividness (estimated through an imagery questionnaire) and autonomic responses during motor imagery after fatigue. These data suggest that, unlike local muscle fatigue, physical fatigue occurring during intense sport training sessions is likely to affect motor imagery accuracy. These results might be explained by the updating of the internal representation of the motor sequence, due to temporary

  11. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy and high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and three spectral micro-windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Centers for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm
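    The column-averaged dry-air mole fraction referred to above is the ratio of the retrieved CH4 column to the dry-air column, with the latter derived from surface pressure minus the water-vapour contribution. A minimal sketch of that arithmetic; the input column amounts and surface pressure are illustrative assumptions, not values from the paper.

```python
# XCH4 = (CH4 vertical column) / (dry-air vertical column).
# The dry-air column follows from surface pressure via hydrostatic balance,
# with the water-vapour column removed. Inputs below are illustrative.
G = 9.80665          # m s^-2, standard gravity
M_DRY = 0.0289644    # kg mol^-1, molar mass of dry air
M_H2O = 0.0180153    # kg mol^-1, molar mass of water
AVOGADRO = 6.02214076e23

def xch4_ppb(ch4_column, h2o_column, surface_pressure_pa):
    """Columns in molecules m^-2; returns XCH4 in ppb."""
    total_air = surface_pressure_pa / (G * M_DRY) * AVOGADRO  # molec m^-2
    # Subtract the water-vapour contribution to get the dry-air column:
    # P_s/g = M_dry*n_dry + M_H2O*n_H2O  =>  n_dry = n_total - n_H2O*M_H2O/M_dry
    dry_air = total_air - h2o_column * M_H2O / M_DRY
    return ch4_column / dry_air * 1e9

print(xch4_ppb(ch4_column=3.8e19 * 1e4,       # ~3.8e19 molec cm^-2
               h2o_column=5.0e22 * 1e4,       # ~5.0e22 molec cm^-2
               surface_pressure_pa=95000.0))  # mountain-site-like pressure
```

With these illustrative inputs the result lands near typical present-day XCH4 values of order 1800-1900 ppb.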

  12. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks, in addition to long-term trend analysis. Such retrievals complement the high-accuracy and high-precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON), with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and three spectral micro-windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from the National Centers for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC

  13. Accuracy of Genomic Selection in a Rice Synthetic Population Developed for Recurrent Selection Breeding.

    PubMed

    Grenier, Cécile; Cao, Tuong-Vi; Ospina, Yolima; Quintero, Constanza; Châtel, Marc Henri; Tohme, Joe; Courtois, Brigitte; Ahmadi, Nourollah

    2015-01-01

    Genomic selection (GS) is a promising strategy for enhancing genetic gain. We investigated the accuracy of genomic estimated breeding values (GEBV) in four inter-related synthetic populations that underwent several cycles of recurrent selection in an upland rice-breeding program. A total of 343 S2:4 lines extracted from those populations were phenotyped for flowering time, plant height, grain yield and panicle weight, and genotyped with an average density of one marker per 44.8 kb. The relative effect of the linkage disequilibrium (LD) and minor allele frequency (MAF) thresholds for selecting markers, the relative size of the training population (TP) and of the validation population (VP), the selected trait and the genomic prediction models (frequentist and Bayesian) on the accuracy of GEBVs was investigated in 540 cross validation experiments with 100 replicates. The effect of kinship between the training and validation populations was tested in an additional set of 840 cross validation experiments with a single genomic prediction model. LD was high (average r2 = 0.59 at 25 kb) and decreased slowly, the distribution of allele frequencies at individual loci was markedly skewed toward unbalanced frequencies (MAF average value 15.2% and median 9.6%), and differentiation between the four synthetic populations was low (FST ≤0.06). The accuracy of GEBV across all cross validation experiments ranged from 0.12 to 0.54 with an average of 0.30. Significant differences in accuracy were observed among the different levels of each factor investigated. Phenotypic traits had the biggest effect, and the size of the incidence matrix had the smallest. Significant first-degree interactions were observed for GEBV accuracy between traits and all the other factors studied, and between prediction models and LD, MAF and composition of the TP. The potential of GS to accelerate genetic gain and breeding options to increase the accuracy of predictions are discussed. PMID:26313446
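    The cross-validation scheme described above (train on a TP, predict GEBVs in a VP, score by correlation) can be sketched with a ridge regression of phenotypes on marker genotypes, an RR-BLUP-like model. All marker data, effect sizes, and the train/validation split below are simulated assumptions; none of the numbers come from the rice study.

```python
import numpy as np

# Minimal genomic-prediction sketch: ridge regression of phenotype on
# 0/1/2 marker genotypes; accuracy = correlation between predicted GEBV
# and observed phenotype in a held-out validation population (VP).
rng = np.random.default_rng(0)
n_lines, n_markers = 343, 2000
X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)
true_effects = rng.normal(0.0, 0.05, n_markers)
y = X @ true_effects + rng.normal(0.0, 1.0, n_lines)   # simulated phenotype

train = rng.permutation(n_lines)[: int(0.8 * n_lines)] # 80% training pop.
valid = np.setdiff1d(np.arange(n_lines), train)        # 20% validation pop.

Xc = X - X[train].mean(axis=0)                         # centre on the TP
lam = 100.0                                            # ridge penalty
A = Xc[train].T @ Xc[train] + lam * np.eye(n_markers)
beta = np.linalg.solve(A, Xc[train].T @ (y[train] - y[train].mean()))

gebv = Xc[valid] @ beta + y[train].mean()
accuracy = np.corrcoef(gebv, y[valid])[0, 1]
print(f"GEBV accuracy in VP: {accuracy:.2f}")
```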

  16. Accuracy and precision of total mixed rations fed on commercial dairy farms.

    PubMed

    Sova, A D; LeBlanc, S J; McBride, B W; DeVries, T J

    2014-01-01

    Despite the significant time and effort spent formulating total mixed rations (TMR), it is evident that the ration delivered by the producer and that consumed by the cow may not accurately reflect that originally formulated. The objectives of this study were to (1) determine how TMR fed agrees with or differs from TMR formulation (accuracy), (2) determine daily variability in physical and chemical characteristics of TMR delivered (precision), and (3) investigate the relationship between daily variability in ration characteristics and group-average measures of productivity [dry matter intake (DMI), milk yield, milk components, efficiency, and feed sorting] on commercial dairy farms. Twenty-two commercial freestall herds were visited for 7 consecutive days in both summer and winter months. Fresh and refusal feed samples were collected daily to assess particle size distribution, dry matter, and chemical composition. Milk test data, including yield, fat, and protein, were collected from a coinciding Dairy Herd Improvement test. Multivariable mixed-effect regression models were used to analyze associations between productivity measures and daily ration variability, measured as coefficient of variation (CV) over 7 d. The average TMR delivered [crude protein = 16.5%, net energy for lactation (NEL) = 1.7 Mcal/kg, nonfiber carbohydrates = 41.3%, total digestible nutrients = 73.3%, neutral detergent fiber = 31.3%, acid detergent fiber = 20.5%, Ca = 0.92%, P = 0.42%, Mg = 0.35%, K = 1.45%, Na = 0.41%] exceeded TMR formulation for NEL (+0.05 Mcal/kg), nonfiber carbohydrates (+1.2%), acid detergent fiber (+0.7%), Ca (+0.08%), P (+0.02%), Mg (+0.02%), and K (+0.04%) and underfed crude protein (-0.4%), neutral detergent fiber (-0.6%), and Na (-0.1%). Dietary measures with high day-to-day CV were average feed refusal rate (CV = 74%), percent long particles (CV = 16%), percent medium particles (CV = 7.7%), percent short particles (CV = 6.1%), percent fine particles (CV = 13%), Ca (CV = 7
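    The precision metric used above, day-to-day variability as a coefficient of variation over a 7-day window, is a one-liner. The daily particle-size values below are illustrative, not data from the study.

```python
import numpy as np

# Day-to-day ration variability expressed as a coefficient of variation
# (CV = sample standard deviation / mean), computed over a 7-day window.
def cv_percent(daily_values):
    x = np.asarray(daily_values, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Hypothetical percent-long-particles measurements on 7 consecutive days.
long_particles_pct = [18.2, 15.1, 21.4, 16.8, 14.9, 19.5, 17.3]
print(f"7-day CV of long particles: {cv_percent(long_particles_pct):.1f}%")
```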

  17. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combined with the advantages of the PPLTP and the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, along with our optimized noise-reduction analysis method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.
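    Slope-measuring profilers of the LTP/NOM type return surface slope versus position; the height profile follows by numerical integration of the slope. A minimal sketch with a synthetic sinusoidal test surface (an assumption used only to exercise the arithmetic, not a surface from the paper):

```python
import numpy as np

# An LTP/NOM-type profiler measures slope(x); reconstruct height(x) by
# cumulative trapezoidal integration. The 50 nm sinusoidal test surface
# over a 0.5 m mirror is an illustrative assumption.
x = np.linspace(0.0, 0.5, 501)                 # positions along mirror (m)
height_true = 50e-9 * np.sin(2 * np.pi * x)    # synthetic surface (m)
slope = np.gradient(height_true, x)            # what the profiler measures (rad)

# Trapezoidal integration of the measured slopes back to a height profile.
height = np.concatenate(
    ([0.0], np.cumsum((slope[1:] + slope[:-1]) / 2.0 * np.diff(x)))
)
rms_err = np.sqrt(np.mean((height - height_true) ** 2))
print(f"reconstruction rms error: {rms_err * 1e9:.4f} nm")
```

In practice the achievable accuracy is set by the angular noise and systematic errors of the profiler, which is what the alignment and noise-reduction methods above address.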

  18. Simulations of thermally transferred OSL signals in quartz: Accuracy and precision of the protocols for equivalent dose evaluation

    NASA Astrophysics Data System (ADS)

    Pagonis, Vasilis; Adamiec, Grzegorz; Athanassas, C.; Chen, Reuven; Baker, Atlee; Larsen, Meredith; Thompson, Zachary

    2011-06-01

    Thermally-transferred optically stimulated luminescence (TT-OSL) signals in sedimentary quartz have been the subject of several recent studies, due to the potential shown by these signals to increase the range of luminescence dating by an order of magnitude. Based on these signals, a single aliquot protocol termed the ReSAR protocol has been developed and tested experimentally. This paper presents extensive numerical simulations of this ReSAR protocol. The purpose of the simulations is to investigate several aspects of the ReSAR protocol which are believed to cause difficulties during application of the protocol. Furthermore, several modified versions of the ReSAR protocol are simulated, and their relative accuracy and precision are compared. The simulations are carried out using a recently published kinetic model for quartz, consisting of 11 energy levels. One hundred random variants of the natural samples were generated by keeping the transition probabilities between energy levels fixed, while allowing simultaneous random variations of the concentrations of the 11 energy levels. The relative intrinsic accuracy and precision of the protocols are simulated by calculating the equivalent dose (ED) within the model, for a given natural burial dose of the sample. The complete sequence of steps undertaken in several versions of the dating protocols is simulated. The relative intrinsic precision of these techniques is estimated by fitting Gaussian probability functions to the resulting simulated distribution of ED values. New simulations are presented for commonly used OSL sensitivity tests, consisting of successive cycles of sample irradiation with the same dose, followed by measurements of the sensitivity corrected L/T signals. We investigate several experimental factors which may be affecting both the intrinsic precision and intrinsic accuracy of the ReSAR protocol. 
The results of the simulation show that the four different published versions of the ReSAR protocol can
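    The intrinsic-precision estimate described above, fitting a Gaussian to the distribution of recovered equivalent-dose (ED) values, can be sketched as follows. The 100 Gy burial dose and 5% scatter of the synthetic ED distribution are illustrative assumptions, not outputs of the quartz model.

```python
import numpy as np
from scipy.stats import norm

# Fit a Gaussian to a simulated distribution of equivalent-dose (ED)
# values to characterise protocol accuracy (mean offset from the known
# burial dose) and precision (relative spread). Synthetic data only.
rng = np.random.default_rng(42)
burial_dose = 100.0                                   # Gy, "true" dose
ed_values = rng.normal(burial_dose, 5.0, size=100)    # simulated ED outputs

mu, sigma = norm.fit(ed_values)                       # ML Gaussian fit
accuracy_pct = 100.0 * (mu - burial_dose) / burial_dose  # systematic error
precision_pct = 100.0 * sigma / mu                       # relative spread
print(f"accuracy: {accuracy_pct:+.1f}%, precision: {precision_pct:.1f}%")
```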

  19. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket, and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement at levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with the height-adjustable, rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane; and d) improved measurement of the bedding dip and direction needed to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field, allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.
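    The geometric correction that a well-orientated Jacob's staff implements can be illustrated with the simplest case: a vertical traverse through uniformly dipping beds, where true stratigraphic thickness is the apparent (vertical) thickness times the cosine of the dip. The values are illustrative.

```python
import math

# True stratigraphic thickness from an apparent thickness measured along
# a vertical traverse through beds dipping at `dip_deg` degrees:
#   true = apparent * cos(dip).
# A correctly orientated Jacob's staff measures true thickness directly;
# this correction shows the error a mis-orientated rod would introduce.
def true_thickness(apparent_m, dip_deg):
    return apparent_m * math.cos(math.radians(dip_deg))

# 12 m measured vertically through beds dipping at 30 degrees:
print(f"{true_thickness(12.0, 30.0):.2f} m")
```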

  20. Note: electronic circuit for two-way time transfer via a single coaxial cable with picosecond accuracy and precision.

    PubMed

    Prochazka, Ivan; Kodet, Jan; Panek, Petr

    2012-11-01

    We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The design of the circuit enables time tagging of pulses of interest to be carried out in parallel with the comparison of the time scales of the two timing devices. The key timing parameters of the circuit are: temperature dependence of the delay below 100 fs/K, timing stability (time deviation) better than 8 fs for averaging times from minutes to hours, sub-picosecond time transfer precision, and time transfer accuracy of a few picoseconds.
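    The arithmetic underlying two-way time transfer is worth spelling out: with a reciprocal path, the two measured one-way intervals separate cleanly into clock offset and path delay. The nanosecond/picosecond figures below are illustrative, not measurements from the paper.

```python
# Two-way time transfer: station A time-tags B's pulse and vice versa.
# If t_ab = (arrival at B on B's clock) - (departure from A on A's clock)
# and t_ba is the reverse, then for a reciprocal (equal-delay) path:
#   clock_offset (B - A) = (t_ab - t_ba) / 2
#   path_delay           = (t_ab + t_ba) / 2
def two_way(t_ab, t_ba):
    offset = (t_ab - t_ba) / 2.0
    delay = (t_ab + t_ba) / 2.0
    return offset, delay

# Illustrative case: path delay 50.000 ns, B's clock 120 ps behind A's.
offset, delay = two_way(t_ab=49.880e-9, t_ba=50.120e-9)
print(offset, delay)  # -120 ps offset, 50 ns delay
```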

  1. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

    Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans were made, with double examinations in each position and gradual migration of the implants. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotations. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832
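    The study design above, double examinations at each staged migration step, supports a common way of summarising precision and accuracy: precision as the spread of differences between repeated measurements, accuracy as the mean deviation from the known (staged) migration. A sketch with illustrative millimetre values, not data from the study:

```python
import numpy as np

# Double examinations at six staged migration positions. Precision is
# summarised as the standard deviation of the paired differences;
# accuracy (bias) as the mean deviation of the averaged measurement
# from the known staged migration. Illustrative values only.
exam1 = np.array([0.52, 1.01, 1.49, 2.03, 2.51, 3.00])   # mm, first scan
exam2 = np.array([0.49, 1.03, 1.52, 1.98, 2.53, 2.99])   # mm, repeat scan
staged = np.array([0.50, 1.00, 1.50, 2.00, 2.50, 3.00])  # mm, true migration

precision_sd = np.std(exam1 - exam2, ddof=1)             # repeatability
accuracy_bias = np.mean((exam1 + exam2) / 2.0 - staged)  # systematic error
print(f"precision (SD of differences): {precision_sd:.3f} mm, "
      f"bias: {accuracy_bias:+.3f} mm")
```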

  3. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  4. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology were acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
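
For readers unfamiliar with the Bland-Altman method named above, a minimal sketch of the computation follows; the paired measurements are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical paired linear measurements (mm) of the same distances
# taken on plaster models vs. PUT models.
plaster = np.array([35.2, 41.8, 28.6, 52.1, 33.4, 46.0])
put = np.array([35.4, 42.1, 28.5, 52.4, 33.7, 46.3])

# Bland-Altman statistics: bias (mean difference between methods)
# and the 95% limits of agreement (bias +/- 1.96 SD of differences).
diff = put - plaster
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd
```

A Bland-Altman plot is then simply `diff` plotted against the pairwise means, with horizontal lines at `bias`, `loa_low`, and `loa_high`.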

  5. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. 
Roadblocks to this vision of integration and interoperability include ethical, legal

  6. Effects of machining accuracy on frequency response properties of thick-screen frequency selective surface

    NASA Astrophysics Data System (ADS)

    Fang, Chunyi; Gao, Jinsong; Xin, Chen

    2012-10-01

    Electromagnetic theory shows that a thick-screen frequency selective surface (FSS) has many advantages in its frequency response characteristics. In addition, it can be used to make a stealth radome. Therefore, we investigate in detail how machining accuracy affects the frequency response properties of the FSS in the gigahertz range. Specifically, by applying the least squares method to machining data, the effects of different machining precision in the samples can be calculated, thus obtaining frequency response curves which were verified by near-field testing in a microwave anechoic chamber. The results show that decreasing roughness and flatness variation leads to an increase in the bandwidth and that an increase in spacing error leads to the center frequency drifting lower. Finally, an increase in aperture error leads to an increase in bandwidth. Therefore, the conclusion is that machining accuracy should be controlled and that a spacing error less than 0.05 mm is required in order to avoid unwanted center frequency drift and a transmittance decrease.

  7. Precise and Continuous Time and Frequency Synchronisation at the 5×10−19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385
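
Fiber-link stability figures of this kind are conventionally reported as an (overlapping) Allan deviation of fractional-frequency data. The sketch below shows that estimator on synthetic white frequency noise; nothing in it comes from the paper's link, and the noise level is invented.

```python
import numpy as np

def allan_deviation(y, m):
    """Overlapping Allan deviation of fractional-frequency samples y
    for an averaging factor of m samples."""
    # Overlapping m-sample averages, one starting at every index.
    avg = np.convolve(y, np.ones(m) / m, mode="valid")
    # Differences between averages spaced m samples apart.
    d = avg[m:] - avg[:-m]
    return np.sqrt(0.5 * np.mean(d ** 2))

rng = np.random.default_rng(2)
y = rng.normal(0, 1e-14, 10000)  # synthetic white-frequency-noise record

adev_1 = allan_deviation(y, 1)
adev_100 = allan_deviation(y, 100)
# For white frequency noise the Allan deviation falls as 1/sqrt(m),
# so averaging 100 samples should reduce it by roughly a factor of 10.
```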

  8. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
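
The "regressed out" step described above amounts to removing the component of the gaze trace that is linearly predictable from pupil size. A toy sketch with simulated data follows; the coupling coefficient and noise level are invented, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated fixation: pupil slowly contracts, and a fraction of that
# change leaks into the estimated gaze position as a spurious drift,
# even though the eye is actually still.
pupil = np.linspace(5.0, 4.0, 200)  # pupil diameter (mm)
gaze = 0.8 * (pupil - pupil.mean()) + rng.normal(0, 0.02, 200)

# Regress gaze position on pupil size and keep the residuals,
# removing the pupil-size artifact from the gaze estimate.
slope, intercept = np.polyfit(pupil, gaze, 1)
corrected = gaze - (slope * pupil + intercept)
```

After correction, the residual trace reflects only the measurement noise, which is what allows much finer eye-position differences to be resolved.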

  9. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924

  10. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.
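
The rotation/translation comparison described above presumes a rigid-body fit between the two marker sets. A common way to recover that transform is the Kabsch (SVD) algorithm; the sketch below uses invented marker coordinates and a simulated 0.5° repositioning, not the study's data.

```python
import numpy as np

# Hypothetical 3D marker coordinates (mm) in a reference study.
ref = np.array([[0., 0., 0.], [60., 0., 0.], [0., 80., 0.], [0., 0., 50.],
                [60., 80., 0.], [60., 0., 50.], [0., 80., 50.], [60., 80., 50.]])

# The same markers after a small simulated rigid repositioning.
angle = np.deg2rad(0.5)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.],
                   [np.sin(angle),  np.cos(angle), 0.],
                   [0., 0., 1.]])
moved = ref @ R_true.T + np.array([0.4, -0.2, 0.1])

# Kabsch algorithm: recover the rigid transform, then report residuals.
pc, qc = ref - ref.mean(0), moved - moved.mean(0)
U, _, Vt = np.linalg.svd(pc.T @ qc)
d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
t = moved.mean(0) - R @ ref.mean(0)
residuals = np.linalg.norm(ref @ R.T + t - moved, axis=1)
```

The residuals play the role of the sub-millimetre fiducial differences quoted above: they measure how well a single rigid transform explains the marker motion.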

  11. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  12. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  13. Selecting fillers on emotional appearance improves lineup identification accuracy.

    PubMed

    Flowe, Heather D; Klatt, Thimna; Colloff, Melissa F

    2014-12-01

    Mock witnesses sometimes report using criminal stereotypes to identify a face from a lineup, a tendency known as criminal face bias. Faces are perceived as criminal-looking if they appear angry. We tested whether matching the emotional appearance of the fillers to an angry suspect can reduce criminal face bias. In Study 1, mock witnesses (n = 226) viewed lineups in which the suspect had an angry, happy, or neutral expression, and we varied whether the fillers matched the expression. An additional group of participants (n = 59) rated the faces on criminal and emotional appearance. As predicted, mock witnesses tended to identify suspects who appeared angrier and more criminal-looking than the fillers. This tendency was reduced when the lineup fillers matched the emotional appearance of the suspect. Study 2 extended the results, testing whether the emotional appearance of the suspect and fillers affects recognition memory. Participants (n = 1,983) studied faces and took a lineup test in which the emotional appearance of the target and fillers was varied between subjects. Discrimination accuracy was enhanced when the fillers matched an angry target's emotional appearance. We conclude that lineup member emotional appearance plays a critical role in the psychology of lineup identification. The fillers should match an angry suspect's emotional appearance to improve lineup identification accuracy.

  14. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
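
The Model II (random-effects) one-way ANOVA referenced above separates within-session from between-session variance. A minimal sketch with invented dose-ratio data follows (the values are not the accreditation results quoted in the abstract):

```python
import numpy as np

# Hypothetical accreditation test data: reported/delivered dose ratios.
# One row per test session (group); columns are replicate dosimeters.
data = np.array([
    [1.02, 0.98, 1.01, 0.99, 1.03],
    [1.05, 1.04, 1.06, 1.02, 1.05],
    [0.97, 0.99, 0.96, 0.98, 1.00],
])
k, n = data.shape

# One-way ANOVA mean squares.
grand = data.mean()
ms_between = n * ((data.mean(axis=1) - grand) ** 2).sum() / (k - 1)
ms_within = data.var(axis=1, ddof=1).mean()

# Model II variance components (between-group term floored at zero).
var_within = ms_within
var_between = max((ms_between - ms_within) / n, 0.0)
total_sd = np.sqrt(var_within + var_between)

# Expanded uncertainty at ~95% confidence (coverage factor 2).
expanded_uncertainty = 2 * total_sd
```

Tracking `total_sd` across successive accreditation rounds is what enables the long-term Statistical Process Control comparison the abstract mentions.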

  15. A method of determining accuracy and precision for dosimeter systems using accreditation data.

    PubMed

    Cummings, Frederick; Flood, John R

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively. PMID:21068596

  16. TanDEM-X IDEM precision and accuracy assessment based on a large assembly of differential GNSS measurements in Kruger National Park, South Africa

    NASA Astrophysics Data System (ADS)

    Baade, J.; Schmullius, C.

    2016-09-01

    High resolution Digital Elevation Models (DEM) represent fundamental data for a wide range of Earth surface process studies. Over the past years, the German TanDEM-X mission acquired data for a new, truly global Digital Elevation Model with unprecedented geometric resolution, precision and accuracy. First TanDEM Intermediate Digital Elevation Models (i.e. IDEM) with a geometric resolution from 0.4 to 3 arcsec have been made available for scientific purposes in November 2014. This includes four 1° × 1° tiles covering the Kruger National Park in South Africa. Here, we document the results of a local scale IDEM height accuracy validation exercise utilizing over 10,000 RTK-GNSS-based ground survey points from fourteen sites characterized by mainly pristine Savanna vegetation. The vertical precision of the ground checkpoints is 0.02 m (1σ). Selected precursor data sets (SRTMGL1, SRTM41, ASTER-GDEM2) are included in the analysis to facilitate the comparison. Although IDEM represents an intermediate product on the way to the new global TanDEM-X DEM, expected to be released in late 2016, it allows first insight into the properties of the forthcoming product. Remarkably, the TanDEM-X tiles include a number of auxiliary files providing detailed information pertinent to a user-based quality assessment. We present examples for the utilization of this information in the framework of a local scale study including the identification of height readings contaminated by water. Furthermore, this study provides evidence for the high precision and accuracy of IDEM height readings and the sensitivity to canopy cover. For open terrain, the 0.4 arcsec resolution edition (IDEM04) yields an average bias of 0.20 ± 0.05 m (95% confidence interval, CI95), an RMSE of 1.03 m and an absolute vertical height error (LE90) of 1.5 [1.4, 1.7] m (CI95). The corresponding values for the lower resolution IDEM editions are about the same and provide evidence for the high quality of the IDEM products.
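
The bias / RMSE / LE90 triad used in DEM validation can be computed directly from checkpoint height differences. The sketch below runs on synthetic, normally distributed differences whose parameters are merely chosen to resemble the reported magnitudes; they are simulated, not IDEM data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic height differences (m): DEM reading minus GNSS checkpoint.
dh = rng.normal(0.20, 0.55, 5000)

# Systematic error (bias) and overall root-mean-square error.
bias = dh.mean()
rmse = np.sqrt((dh ** 2).mean())

# LE90: the absolute vertical error not exceeded by 90% of checkpoints.
le90 = np.quantile(np.abs(dh), 0.90)
```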

  17. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebrae and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392

  18. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
Given the

  20. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration

  1. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

    Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, Cochrane Library of Clinical Trials, Scopus, and Web of Science from inception to June 2014 assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of included studies were calculated using a random-effects model. The inter-study heterogeneity was also assessed using an I² statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min⁻¹ and 29.1%, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I² = 98.3%). In the sub-analysis regarding the device, electrical cardiometry showed the smallest bias (−0.03 l min⁻¹) and lowest percentage error (23.6%). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on weighted mean difference between reference and tested methods. Although the pooled bias was small, the mean pooled percentage error was in the gray zone of clinical applicability. In the sub-group analysis, electrical cardiometry was the device that provided the most accurate measurement. However, a high heterogeneity between studies was found, likely due to a wide range of study characteristics. PMID:26315477
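
Random-effects pooling with an I² heterogeneity statistic, as reported above, follows a standard recipe (commonly the DerSimonian-Laird estimator). A sketch with invented per-study biases and standard errors, not the 20 studies analysed in the review:

```python
import numpy as np

# Hypothetical per-study bias estimates (l/min) and standard errors.
effects = np.array([0.05, 0.21, -0.10, 0.18, 0.30])
se = np.array([0.08, 0.05, 0.06, 0.10, 0.07])

# Fixed-effect quantities needed by DerSimonian-Laird.
w = 1.0 / se ** 2
fixed = (w * effects).sum() / w.sum()
Q = (w * (effects - fixed) ** 2).sum()
df = len(effects) - 1

# Between-study variance (tau^2) and I^2 heterogeneity statistic (%).
tau2 = max((Q - df) / (w.sum() - (w ** 2).sum() / w.sum()), 0.0)
i2 = max((Q - df) / Q, 0.0) * 100

# Random-effects pooled bias: inverse-variance weights inflated by tau^2.
w_re = 1.0 / (se ** 2 + tau2)
pooled = (w_re * effects).sum() / w_re.sum()
```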

  2. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per-mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to mimic samples of three different ages, to be run as internal

  3. Cascade impactor (CI) mensuration--an assessment of the accuracy and precision of commercially available optical measurement systems.

    PubMed

    Chambers, Frank; Ali, Aziz; Mitchell, Jolyon; Shelton, Christopher; Nichols, Steve

    2010-03-01

    Multi-stage cascade impactors (CIs) are the preferred measurement technique for characterizing the aerodynamic particle size distribution of an inhalable aerosol. Stage mensuration is the recommended pharmacopeial method for monitoring CI "fitness for purpose" within a GxP environment. The Impactor Sub-Team of the European Pharmaceutical Aerosol Group has undertaken an inter-laboratory study to assess both the precision and the accuracy of a range of makes and models of instruments currently used for optical inspection of impactor stages. Measurement of two Andersen 8-stage 'non-viable' cascade impactor "reference" stages that were representative of jet sizes for this instrument type (stages 2 and 7) confirmed that all instruments evaluated were capable of reproducible jet measurement, with the overall capability being within the current pharmacopeial stage specifications for both stages. In the assessment of absolute accuracy, small but consistent differences (ca. 0.6% of the certified value) were observed between 'dots' and 'spots' of a calibrated chromium-plated reticule, most likely the result of the treatment of partially lit pixels along the circumference of this calibration standard. Measurements of three certified ring gauges, the smallest having a nominal diameter of 1.0 mm, were consistent with the observation that treatment of partially illuminated pixels at the periphery of the projected image can result in undersizing. However, the bias was less than 1% of the certified diameter. The optical inspection instruments evaluated are fully capable of confirming cascade impactor suitability in accordance with pharmacopeial practice.

  4. High precision pulsed selective laser sintering of metallic powders

    NASA Astrophysics Data System (ADS)

    Fischer, Pascal; Romano, Valerio; Blatter, Andreas; Weber, Heinz P.

    2005-06-01

    The generative process of selective laser sintering of powders such as titanium, platinum alloys and steel can be significantly improved by using pulsed rather than continuous-wave (cw) radiation. With an appropriate energy deposition in the metallic powder layer, the material properties of the selectively laser-sintered parts can be tailored locally to the requirements of the finished work piece. By adapting the laser parameters of a Q-switched Nd:YAG laser, notably pulse duration and local intensity, the degree of porosity, the density and even the crystalline microstructure can be controlled. Pulsed interaction minimizes the average power needed for consolidation of the metallic powder and leads to lower residual thermal stresses. With laser post-processing, the surface can achieve bulk-like density. Furthermore, we present the possibility of forming metallic glass components by sintering amorphous metallic powders.

  5. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
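
    The root-sum-of-squares combination described above can be sketched as follows; the component values shown are illustrative only and are not the report's full list of accuracy terms:

```python
import math

def overall_accuracy(components):
    """Combine independent accuracy components by root-sum-of-squares,
    as done for the overall water-level measurement accuracy."""
    return math.sqrt(sum(c * c for c in components))

# Hypothetical per-phase accuracy values, in feet (illustrative only):
print(round(overall_accuracy([0.078, 0.15, 0.30]), 2))  # → 0.34
```

    Root-sum-of-squares is the standard way to combine error terms that are assumed independent, since independent variances add.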

  6. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: To quantify the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue-equivalent inserts of different sizes. Insert materials were selected to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons in the phantom. To quantify the performance of the ideal pCT scanner, we studied the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1% for most examined tissues at beam energies below 300 MeV and imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  7. An evaluation of the accuracy and precision of a stand-alone submersible continuous ruminal pH measurement system.

    PubMed

    Penner, G B; Beauchemin, K A; Mutsvangwa, T

    2006-06-01

    The objectives of this study were 1) to develop and evaluate the accuracy and precision of a new stand-alone submersible continuous ruminal pH measurement system called the Lethbridge Research Centre ruminal pH measurement system (LRCpH; Experiment 1); 2) to establish the accuracy and precision of a well-documented, previously used continuous indwelling ruminal pH system (CIpH) to ensure that the new system (LRCpH) was as accurate and precise as the previous system (CIpH; Experiment 2); and 3) to determine the required frequency for pH electrode standardization by comparing baseline millivolt readings of pH electrodes in pH buffers 4 and 7 after 0, 24, 48, and 72 h of ruminal incubation (Experiment 3). In Experiment 1, 6 pregnant Holstein heifers, 3 lactating, primiparous Holstein cows, and 2 Black Angus heifers were used. All experimental animals were fitted with permanent ruminal cannulas. In Experiment 2, the 3 cannulated, lactating, primiparous Holstein cows were used. In both experiments, ruminal pH was determined continuously using indwelling pH electrodes. Mean pH values were then compared with ruminal pH values obtained using spot samples of ruminal fluid (MANpH) taken at the same time. A correlation coefficient accounting for repeated measures was calculated, and the results were used to calculate the concordance correlation to examine the relationships between the LRCpH-derived values and MANpH, and between the CIpH-derived values and MANpH. In Experiment 3, the 6 pregnant Holstein heifers were used along with 6 new submersible pH electrodes. In Experiments 1 and 2, the comparison of the LRCpH output (1- and 5-min averages) to MANpH had higher correlation coefficients after accounting for repeated measures (0.98 and 0.97 for 1- and 5-min averages, respectively) and concordance correlation coefficients (0.96 and 0.97 for 1- and 5-min averages, respectively) than the comparison of CIpH to MANpH (0.88 and 0.87, correlation coefficient and concordance
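
    The concordance correlation used above to compare the indwelling and spot-sample pH readings is conventionally Lin's concordance correlation coefficient. A minimal sketch of the standard population formula, not the authors' exact repeated-measures implementation:

```python
import statistics

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient for paired readings:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2),
    using population (1/n) variances and covariance."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    n = len(x)
    sx = sum((a - mx) ** 2 for a in x) / n
    sy = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)
```

    Unlike the ordinary correlation coefficient, the CCC penalizes any systematic offset between the two methods, which is why it is preferred for method-agreement studies such as this one.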

  8. The effects of selective and divided attention on sensory precision and integration.

    PubMed

    Odegaard, Brian; Wozny, David R; Shams, Ladan

    2016-02-12

    In our daily lives, our capacity to selectively attend to stimuli within or across sensory modalities enables enhanced perception of the surrounding world. While previous research on selective attention has studied this phenomenon extensively, two important questions still remain unanswered: (1) how selective attention to a single modality impacts sensory integration processes, and (2) by what mechanism selective attention improves perception. We explored how selective attention impacts performance in both a spatial task and a temporal numerosity judgment task, and employed a Bayesian Causal Inference model to investigate the computational mechanism(s) impacted by selective attention. We report three findings: (1) in the spatial domain, selective attention improves precision of the visual sensory representations (which were relatively precise), but not the auditory sensory representations (which were fairly noisy); (2) in the temporal domain, selective attention improves the sensory precision in both modalities (both of which were fairly reliable to begin with); (3) in both tasks, selective attention did not exert a significant influence over the tendency to integrate sensory stimuli. Therefore, it may be postulated that a sensory modality must possess a certain inherent degree of encoding precision in order to benefit from selective attention. It also appears that in certain basic perceptual tasks, the tendency to integrate crossmodal signals does not depend significantly on selective attention. We conclude with a discussion of how these results relate to recent theoretical considerations of selective attention.

  9. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide, at both the commercial and the academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established in order to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is considered. This work builds on the research described above, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORSs, especially for monitoring purposes. Two different types of CORS network are available in Italy today: the first is the so-called "regional network" and the second is the "national network", where the mean inter-station distances are about 25/30 and 50/70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers imposes some limits, due especially to restricted baseline lengths and to the need to correctly fix the phase ambiguity both for the network and for the rover. These factors play a crucial role in reaching a positioning with a good level of accuracy (centimetric or better) in a short time and with a high reliability. The goal of this work is to investigate the

  10. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion for obtaining accurate detector time lags. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer-sized uranium particles (1-3.5 μm). Linear regression slope (LRS) (one isotope plotted against the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using the LRS calculation with time lag and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
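
    The linear regression slope (LRS) calculation mentioned above amounts to fitting one isotope's intensity against the other's across the whole transient, so that the slope estimates the ratio independently of the signal's shape. A minimal sketch with a synthetic transient; the Gaussian shape and the ratio value are illustrative assumptions, not the study's data:

```python
import numpy as np

def lrs_ratio(minor, major):
    """LRS estimate of an isotope ratio: the slope of a first-order
    least-squares fit of minor-isotope intensity vs major-isotope intensity."""
    slope, _intercept = np.polyfit(np.asarray(major, float), np.asarray(minor, float), 1)
    return slope

# Synthetic transient: a Gaussian-shaped 238U signal and a strictly
# proportional 235U signal (ratio 0.00725 chosen for illustration).
t = np.linspace(0.0, 4.0, 200)
u238 = np.exp(-((t - 2.0) ** 2))
u235 = 0.00725 * u238
print(round(lrs_ratio(u235, u238), 5))  # → 0.00725
```

    Because every point of the transient contributes to one fit, the LRS approach is less sensitive to the detector time-lag drift that distorts point-by-point ratios on short signals.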

  11. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  13. Does feature selection improve classification accuracy? Impact of sample size and feature selection on classification using anatomical magnetic resonance images.

    PubMed

    Chu, Carlton; Hsu, Ai-Ling; Chou, Kun-Hsien; Bandettini, Peter; Lin, Chingpo

    2012-03-01

    There are growing numbers of studies using machine learning approaches to characterize patterns of anatomical difference discernible from neuroimaging data. The high dimensionality of image data often raises the concern that feature selection is needed to obtain optimal accuracy. Among previous studies, mostly using fixed sample sizes, some show greater predictive accuracies with feature selection, whereas others do not. In this study, we compared four common feature selection methods: (1) pre-selected regions of interest (ROIs) based on prior knowledge; (2) univariate t-test filtering; (3) recursive feature elimination (RFE); and (4) t-test filtering constrained by ROIs. The predictive accuracies achieved from different sample sizes, with and without feature selection, were compared statistically. To demonstrate the effect, we used grey matter segmented from T1-weighted anatomical scans collected by the Alzheimer's Disease Neuroimaging Initiative (ADNI) as the input features to a linear support vector machine classifier. The objective was to characterize the patterns of difference between Alzheimer's disease (AD) patients and cognitively normal subjects, and also between mild cognitive impairment (MCI) patients and normal subjects. In addition, we compared the classification accuracies between MCI patients who converted to AD and MCI patients who did not convert within a period of 12 months. Predictive accuracies from the two data-driven feature selection methods (t-test filtering and RFE) were no better than those achieved using whole-brain data. We showed that the most accurate characterizations were achieved by using prior knowledge of where to expect neurodegeneration (hippocampus and parahippocampal gyrus). Therefore, feature selection does improve the classification accuracies, but it depends on the method adopted. In general, larger sample sizes yielded higher accuracies with less advantage obtained by using

  14. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

    Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following an 89.7 nCi (100 μg) intravenous (iv) dose of (14)C-SCH 900518 2 h post 200 mg oral administration of nonradiolabeled SCH 900518 to six healthy male subjects has been described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative plasma (14)C-SCH 900518 concentration determination. Calibration standards and quality controls were included for every batch of sample analysis by AMS to ensure acceptable quality of the assay. Plasma (14)C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from isotopic ratios from AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range from 0.0159 to 9.07 dpm/mL for plasma (14)C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of iv and oral doses after the plasma concentrations were plotted vs the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%).
The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these
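
    Deriving concentrations from a calibration regression rather than from raw isotopic ratios, as described above, can be sketched as follows; the standard levels and instrument responses here are hypothetical numbers, not the study's data:

```python
import numpy as np

# Hypothetical calibration standards: nominal concentration (dpm/mL)
# and the corresponding measured AMS response (illustrative values).
nominal = np.array([0.05, 0.5, 1.0, 3.0, 9.0])
response = np.array([0.052, 0.49, 1.02, 2.97, 9.05])

# Fit a linear calibration function: response = slope * concentration + intercept
slope, intercept = np.polyfit(nominal, response, 1)

def back_calculate(measured):
    """Invert the calibration regression to get a concentration
    from a measured AMS response."""
    return (measured - intercept) / slope
```

    Running quality-control samples through the same back-calculation is what allows the CV and accuracy figures quoted in the abstract to be reported against nominal values.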

  15. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensor-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374

  16. Accuracy-rate tradeoffs: how do enzymes meet demands of selectivity and catalytic efficiency?

    PubMed

    Tawfik, Dan S

    2014-08-01

    I discuss some physico-chemical and evolutionary aspects of enzyme accuracy (selectivity, specificity) and speed (turnover rate, processivity). Accuracy can be a beneficial side-product of active sites being refined to proficiently convert a given substrate into one product. However, exclusion of undesirable, non-cognate substrates is also an explicitly evolved trait that may come with a cost. I define two schematic mechanisms. Ground-state discrimination applies to enzymes where selectivity is achieved primarily at the level of substrate binding. Exemplified by DNA methyltransferases and the ribosome, ground-state discrimination imposes strong accuracy-rate tradeoffs. Alternatively, transition-state discrimination applies to relatively small substrates where substrate binding and chemistry are efficiently coupled, and evokes weaker tradeoffs. Overall, the mechanistic, structural and evolutionary basis of enzymatic accuracy-rate tradeoffs merits deeper understanding.

  17. High-precision robotic microcontact printing (R-μCP) utilizing a vision guided selectively compliant articulated robotic arm.

    PubMed

    McNulty, Jason D; Klann, Tyler; Sha, Jin; Salick, Max; Knight, Gavin T; Turng, Lih-Sheng; Ashton, Randolph S

    2014-06-01

    Increased realization of the spatial heterogeneity found within in vivo tissue microenvironments has prompted the desire to engineer similar complexities into in vitro culture substrates. Microcontact printing (μCP) is a versatile technique for engineering such complexities onto cell culture substrates because it permits microscale control of the relative positioning of molecules and cells over large surface areas. However, challenges associated with precisely aligning and superimposing multiple μCP steps severely limit the extent of substrate modification that can be achieved using this method. Thus, we investigated the feasibility of using a vision guided selectively compliant articulated robotic arm (SCARA) for μCP applications. SCARAs are routinely used to perform high precision, repetitive tasks in manufacturing, and even low-end models are capable of achieving microscale precision. Here, we present customization of a SCARA to execute robotic-μCP (R-μCP) onto gold-coated microscope coverslips. The system not only possesses the ability to align multiple polydimethylsiloxane (PDMS) stamps but also has the capability to do so even after the substrates have been removed, reacted to graft polymer brushes, and replaced back into the system. Plus, non-biased computerized analysis shows that the system performs such sequential patterning with <10 μm precision and accuracy, which is equivalent to the repeatability specifications of the employed SCARA model. R-μCP should facilitate the engineering of in vivo-like complexities onto culture substrates and their integration with microfluidic devices. PMID:24759945

  19. Effects of implant angulation, material selection, and impression technique on impression accuracy: a preliminary laboratory study.

    PubMed

    Rutkunas, Vygandas; Sveikata, Kestutis; Savickas, Raimondas

    2012-01-01

    The aim of this preliminary laboratory study was to evaluate the effects of 5- and 25-degree implant angulations in simulated clinical casts on an impression's accuracy when using different impression materials and tray selections. A convenience sample of each implant angulation group was selected for both open and closed trays in combination with one polyether and two polyvinyl siloxane impression materials. The influence of material and technique appeared to be significant for both 5- and 25-degree angulations (P < .05), and increased angulation tended to decrease impression accuracy. The open-tray technique was more accurate with highly nonaxially oriented implants for the small sample size investigated.

  20. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  1. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  2. The use of vector bootstrapping to improve variable selection precision in Lasso models.

    PubMed

    Laurin, Charles; Boomsma, Dorret; Lubke, Gitta

    2016-08-01

    The Lasso is a shrinkage regression method that is widely used for variable selection in statistical genetics. Commonly, K-fold cross-validation is used to fit a Lasso model. This is sometimes followed by using bootstrap confidence intervals to improve precision in the resulting variable selections. Nesting cross-validation within bootstrapping could provide further improvements in precision, but this has not been investigated systematically. We performed simulation studies of Lasso variable selection precision (VSP) with and without nesting cross-validation within bootstrapping. Data were simulated to represent genomic data under a polygenic model as well as under a model with effect sizes representative of typical GWAS results. We compared these approaches to each other as well as to software defaults for the Lasso. Nested cross-validation had the most precise variable selection at small effect sizes. At larger effect sizes, there was no advantage to nesting. We illustrated the nested approach with empirical data comprising SNPs and SNP-SNP interactions from the most significant SNPs in a GWAS of borderline personality symptoms. In the empirical example, we found that the default Lasso selected low-reliability SNPs and interactions which were excluded by bootstrapping. PMID:27248122
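
The bootstrap layer of the procedure can be sketched in a few lines. The snippet below is a minimal pure-Python illustration: a toy coordinate-descent Lasso with a fixed penalty standing in for the cross-validated tuning the authors nest inside each resample, applied to simulated data. All function names and the data are ours, not the study's.

```python
import random

def soft_threshold(rho, lam):
    """Soft-thresholding operator at the core of Lasso coordinate descent."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Tiny cyclic coordinate-descent Lasso (mean-squared-error loss)."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual correlation, excluding predictor j's contribution
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / z
    return beta

def bootstrap_selection_frequency(X, y, lam, n_boot=50, seed=0):
    """Fraction of bootstrap resamples in which each variable enters the model."""
    rng = random.Random(seed)
    n, p = len(X), len(X[0])
    counts = [0] * p
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        beta = lasso_cd([X[i] for i in idx], [y[i] for i in idx], lam)
        for j, b in enumerate(beta):
            if abs(b) > 1e-8:
                counts[j] += 1
    return [c / n_boot for c in counts]

# Toy data: only the first of two predictors affects the outcome.
data_rng = random.Random(1)
X = [[data_rng.gauss(0, 1), data_rng.gauss(0, 1)] for _ in range(40)]
y = [2.0 * x0 + 0.5 * data_rng.gauss(0, 1) for x0, _ in X]
freq = bootstrap_selection_frequency(X, y, lam=0.3)
```

Variables selected in only a small fraction of resamples (here the noise predictor) are the ones a bootstrap filter would exclude.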

  3. International normalised ratio (INR) measured on the CoaguChek S and XS compared with the laboratory for determination of precision and accuracy.

    PubMed

    Christensen, Thomas D; Larsen, Torben B; Jensen, Claus; Maegaard, Marianne; Sørensen, Benny

    2009-03-01

    Oral anticoagulation therapy is monitored by the use of the international normalised ratio (INR). Patients performing self-management estimate INR using a coagulometer, but studies have been partly flawed regarding the estimated precision and accuracy. The objective was to estimate the imprecision and accuracy of two different coagulometers (CoaguChek S and XS). Twenty-four patients treated with coumarin were prospectively followed for six weeks. INRs were analyzed weekly in duplicate on both coagulometers and compared with results from the hospital laboratory. Statistical analysis included Bland-Altman plots, 95% limits of agreement, the coefficient of variation (CV), and an analysis of variance using a mixed effect model. Comparing 141 duplicate measurements (a total of 564 measurements) of INR, we found that the CoaguChek S and CoaguChek XS had a precision (CV) of 3.4% and 2.3%, respectively. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers, and regarding diagnostic accuracy the CoaguChek S and CoaguChek XS deviated more than 15% from the laboratory measurements in 40% and 43% of the measurements, respectively. In conclusion, the precision of the coagulometers was found to be good, but only the CoaguChek XS had a precision within the predefined limit of 3%. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers than in the laboratory. A large proportion of the coagulometer measurements deviated more than 15% from the laboratory measurements. Whether this will have a clinical impact awaits further studies.
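
The precision and agreement statistics used in such studies are straightforward to compute. The sketch below shows one common estimator of the within-subject CV for a duplicate design, and the Bland-Altman bias and 95% limits of agreement, on hypothetical INR values rather than the study's data.

```python
import math
from statistics import mean, stdev

def duplicate_cv(pairs):
    """Within-subject coefficient of variation from duplicate measurements.

    Each pair contributes a variance estimate (a - b)^2 / 2; the CV is the
    root mean square of the per-pair CVs (a common duplicate-design estimator).
    """
    ratios = [((a - b) ** 2 / 2) / (((a + b) / 2) ** 2) for a, b in pairs]
    return math.sqrt(sum(ratios) / len(ratios))

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative INR values (hypothetical, not the study's data).
pairs = [(2.1, 2.0), (3.0, 3.1), (2.5, 2.4), (1.9, 2.0)]
cv = duplicate_cv(pairs)
coagulometer = [2.05, 3.05, 2.45, 1.95, 2.60]
laboratory = [2.10, 3.00, 2.50, 2.05, 2.65]
bias, (lower, upper) = bland_altman_limits(coagulometer, laboratory)
```

A negative bias here corresponds to the paper's finding that the coagulometers tended to read lower than the laboratory.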

  4. Accuracy and responses of genomic selection on key traits in apple breeding

    PubMed Central

    Muranty, Hélène; Troggio, Michela; Sadok, Inès Ben; Rifaï, Mehdi Al; Auwerkerken, Annemarie; Banchi, Elisa; Velasco, Riccardo; Stevanato, Piergiorgio; van de Weg, W Eric; Di Guardo, Mario; Kumar, Satish; Laurens, François; Bink, Marco C A M

    2015-01-01

    The application of genomic selection in fruit tree crops is expected to enhance breeding efficiency by increasing prediction accuracy, increasing selection intensity and decreasing generation interval. The objectives of this study were to assess the accuracy of prediction and selection response in commercial apple breeding programmes for key traits. The training population comprised 977 individuals derived from 20 pedigreed full-sib families. Historic phenotypic data were available on 10 traits related to productivity and fruit external appearance and genotypic data for 7829 SNPs obtained with an Illumina 20K SNP array. From these data, a genome-wide prediction model was built and subsequently used to calculate genomic breeding values of five application full-sib families. The application families had genotypes at 364 SNPs from a dedicated 512 SNP array, and these genotypic data were extended to the high-density level by imputation. These five families were phenotyped for 1 year and their phenotypes were compared to the predicted breeding values. Accuracy of genomic prediction across the 10 traits reached a maximum value of 0.5 and had a median value of 0.19. The accuracies were strongly affected by the phenotypic distribution and heritability of traits. In the largest family, significant selection response was observed for traits with high heritability and symmetric phenotypic distribution. Traits that showed non-significant response often had reduced and skewed phenotypic variation or low heritability. Among the five application families the accuracies were uncorrelated to the degree of relatedness to the training population. The results underline the potential of genomic prediction to accelerate breeding progress in outbred fruit tree crops that still need to overcome long generation intervals and extensive phenotyping costs. PMID:26744627

  5. Estimation of accuracies and expected genetic change from selection for selection indexes that use multiple-trait predictions of breeding values.

    PubMed

    Barwick, S A; Tier, B; Swan, A A; Henzell, A L

    2013-10-01

    Procedures are described for estimating selection index accuracies for individual animals and expected genetic change from selection for the general case where indexes of EBVs predict an aggregate breeding objective of traits that may or may not have been measured. Index accuracies for the breeding objective are shown to take an important general form, being able to be expressed as the product of the accuracy of the index function of true breeding values and the accuracy with which that function predicts the breeding objective. When the accuracies of the individual EBVs of the index are known, prediction error variances (PEVs) and covariances (PECs) for the EBVs within animal are able to be well approximated, and index accuracies and expected genetic change from selection estimated with high accuracy. The procedures are suited to routine use in estimating index accuracies in genetic evaluation, and for providing important information, without additional modelling, on the directions in which a population will move under selection.
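
The product form of the index accuracy can be illustrated numerically: under a toy model in which both the index I and the breeding objective H equal an underlying index function f of true breeding values plus independent errors, corr(I, H) factors into corr(I, f) x corr(f, H). The simulation below is our illustration of that identity, not the authors' procedure.

```python
import math
import random
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

rng = random.Random(42)
n = 20000
f = [rng.gauss(0, 1) for _ in range(n)]             # index function of true breeding values
index = [fi + rng.gauss(0, 0.6) for fi in f]        # EBV index = f plus prediction error
objective = [fi + rng.gauss(0, 0.8) for fi in f]    # breeding objective = f plus independent error

r_index_f = pearson(index, f)           # accuracy of the index for f
r_f_objective = pearson(f, objective)   # accuracy with which f predicts the objective
r_index_objective = pearson(index, objective)
```

Up to sampling noise, `r_index_objective` matches the product `r_index_f * r_f_objective`, which is the decomposition the abstract describes.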

  6. Improved localization accuracy in double-helix point spread function super-resolution fluorescence microscopy using selective-plane illumination

    NASA Astrophysics Data System (ADS)

    Yu, Jie; Cao, Bo; Li, Heng; Yu, Bin; Chen, Danni; Niu, Hanben

    2014-09-01

    Recently, three-dimensional (3D) super-resolution imaging of cellular structures in thick samples has been enabled by wide-field super-resolution fluorescence microscopy based on the double-helix point spread function (DH-PSF). However, when the sample is epi-illuminated, background fluorescence from molecules excited out of focus reduces the signal-to-noise ratio (SNR) of the in-focus image. In this paper, we resort to a selective-plane illumination strategy, which has been used for tissue-level imaging and single-molecule tracking, to eliminate the out-of-focus background and to improve the SNR and localization accuracy of standard DH-PSF super-resolution imaging in thick samples. We present a novel super-resolution microscopy that combines selective-plane illumination and the DH-PSF. The setup utilizes a well-defined laser light sheet whose theoretical thickness is 1.7 μm (FWHM) at a 640 nm excitation wavelength. The image SNR of DH-PSF microscopy under selective-plane illumination and epi-illumination is compared. As expected, the SNR of DH-PSF microscopy based on selective-plane illumination is increased remarkably, so the 3D localization precision of the DH-PSF would be improved significantly. We demonstrate its capabilities by 3D localization of single fluorescent particles. These features will provide high compatibility with thick samples for future biomedical applications.

  7. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On (GFO) spacecraft's primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging (SLR) data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite laser ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  8. Hyperspectral band selection based on a variable precision neighborhood rough set.

    PubMed

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu

    2016-01-20

    Band selection is a well-known approach for reducing dimensionality in hyperspectral images. We propose a band-selection method based on the variable precision neighborhood rough set theory to select informative bands from hyperspectral images. A decision-making information system was established by hyperspectral data derived from soybean samples between 400 and 1000 nm wavelengths. The dependency was used to evaluate band significance. The optimal band subset was selected by a forward greedy search algorithm. After adjusting appropriate threshold values, stable optimized results were obtained. To assess the effectiveness of the proposed band-selection technique, two classification models were constructed. The experimental results showed that admitting inclusion errors could improve classification performance, including band selection and generalization ability. PMID:26835918
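
A much-simplified sketch of this approach is possible in pure Python: a neighborhood-based dependency measure that admits an inclusion error beta, driven by a forward greedy search. The thresholds, toy data, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def dependency(X, y, bands, delta=0.15, beta=0.1):
    """Variable-precision neighborhood dependency of the labels on a band subset.

    A sample belongs to the positive region when at least (1 - beta) of its
    delta-neighbours (Chebyshev distance over the selected bands) share its
    class; beta is the admitted inclusion error.
    """
    n = len(X)
    positive = 0
    for i in range(n):
        nbrs = [j for j in range(n)
                if max(abs(X[i][b] - X[j][b]) for b in bands) <= delta]
        same = sum(1 for j in nbrs if y[j] == y[i])
        if same / len(nbrs) >= 1 - beta:
            positive += 1
    return positive / n

def greedy_band_selection(X, y, n_bands, k):
    """Forward greedy search: repeatedly add the band that most raises dependency."""
    selected = []
    for _ in range(k):
        best = max((b for b in range(n_bands) if b not in selected),
                   key=lambda b: dependency(X, y, selected + [b]))
        selected.append(best)
    return selected

# Toy "spectra": band 1 separates the two classes, bands 0 and 2 are noise.
rng = random.Random(0)
X = ([[rng.random(), rng.uniform(0.0, 0.3), rng.random()] for _ in range(15)] +
     [[rng.random(), rng.uniform(0.7, 1.0), rng.random()] for _ in range(15)])
y = [0] * 15 + [1] * 15
chosen = greedy_band_selection(X, y, n_bands=3, k=1)
```

Raising `beta` admits more inclusion errors, which is the mechanism the authors report can improve classification performance.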

  9. The influence of feature selection methods on accuracy, stability and interpretability of molecular signatures.

    PubMed

    Haury, Anne-Claire; Gestraud, Pierre; Vert, Jean-Philippe

    2011-01-01

    Biomarker discovery from high-dimensional data is a crucial problem with enormous applications in biology and medicine. It is also extremely challenging from a statistical viewpoint, but surprisingly few studies have investigated the relative strengths and weaknesses of the plethora of existing feature selection methods. In this study we compare 32 feature selection methods on 4 public gene expression datasets for breast cancer prognosis, in terms of predictive performance, stability and functional interpretability of the signatures they produce. We observe that the feature selection method has a significant influence on the accuracy, stability and interpretability of signatures. Surprisingly, complex wrapper and embedded methods generally do not outperform simple univariate feature selection methods, and ensemble feature selection has generally no positive effect. Overall a simple Student's t-test seems to provide the best results.
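
As a flavour of the simplest method in such comparisons, the sketch below ranks features by a univariate Welch t statistic on a toy expression matrix. The data and helper names are ours, and a real signature study would add cross-validation and multiple-testing control.

```python
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic between two groups of expression values."""
    return ((mean(a) - mean(b)) /
            math.sqrt(variance(a) / len(a) + variance(b) / len(b)))

def rank_features(X, y, top_k):
    """Univariate filter: rank features by |t| between classes 0 and 1."""
    g0 = [row for row, label in zip(X, y) if label == 0]
    g1 = [row for row, label in zip(X, y) if label == 1]
    scores = [abs(welch_t([r[j] for r in g0], [r[j] for r in g1]))
              for j in range(len(X[0]))]
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:top_k]

# Toy expression matrix: feature 2 is shifted in class 1, the rest are noise.
rng = random.Random(3)
X = [[rng.gauss(3.0 if (j == 2 and i >= 10) else 0.0, 1.0) for j in range(4)]
     for i in range(20)]
y = [0] * 10 + [1] * 10
signature = rank_features(X, y, top_k=1)
```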

  10. Assessing the accuracy of selectivity as a basis for solvent screening in extractive distillation processes

    SciTech Connect

    Momoh, S.O.

    1991-01-01

    An important parameter for consideration in the screening of solvents for an extractive distillation process is selectivity at infinite dilution. The higher the selectivity, the better the solvent. This paper assesses the accuracy of using selectivity as a basis for solvent screening in extractive distillation processes. Three types of binary mixtures that are usually separated by an extractive distillation process are chosen for investigation. Having determined the optimum solvent feed rate to be two times the feed rate of the binary mixture, the total annual costs of extractive distillation processes for each of the chosen mixtures and for various solvents are carried out. The solvents are ranked on the basis of the total annual cost (obtained by design and costing equations) for the extractive distillation processes, and this ranking order is compared with that of selectivity at infinite dilution as determined by the UNIFAC method. This matching of selectivity with total annual cost does not produce a very good correlation.

  11. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.
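
The simple non-iterative approach can be illustrated with a total-peak-area style integration: gross counts in the peak window minus a linear background interpolated from flanking channels. This is a generic sketch of that family of methods, not the specific algorithms evaluated in the paper.

```python
import math

def net_peak_area(counts, lo, hi, bg_width=3):
    """Non-iterative photopeak integration: gross counts in [lo, hi] minus a
    background estimated by averaging bg_width channels on each flank."""
    gross = sum(counts[lo:hi + 1])
    left = sum(counts[lo - bg_width:lo]) / bg_width
    right = sum(counts[hi + 1:hi + 1 + bg_width]) / bg_width
    return gross - (left + right) / 2 * (hi - lo + 1)

# Synthetic spectrum: flat 100-count background plus a Gaussian photopeak
# of area 5000 centred on channel 50 with sigma = 2 channels.
counts = [100 + 5000 / (2 * math.sqrt(2 * math.pi)) *
          math.exp(-(k - 50) ** 2 / 8) for k in range(100)]
area = net_peak_area(counts, lo=44, hi=56)
```

On this noiseless spectrum the recovered area is within about 0.5% of the true 5000 counts; overlapping photopeaks (the multiplets discussed above) are exactly where such window methods break down and iterative fitting takes over.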

  12. Precision of high-resolution multibeam echo sounding coupled with high-accuracy positioning in a shallow water coastal environment

    NASA Astrophysics Data System (ADS)

    Ernstsen, Verner B.; Noormets, Riko; Hebbeln, Dierk; Bartholomä, Alex; Flemming, Burg W.

    2006-09-01

    Over 4 years, repetitive bathymetric measurements of a shipwreck in the Grådyb tidal inlet channel in the Danish Wadden Sea were carried out using a state-of-the-art high-resolution multibeam echosounder (MBES) coupled with a real-time long range kinematic (LRK™) global positioning system. Seven measurements during a single survey in 2003 (n=7) revealed a horizontal and vertical precision of the MBES system of ±20 and ±2 cm, respectively, at a 95% confidence level. By contrast, four annual surveys from 2002 to 2005 (n=4) yielded a horizontal and vertical precision (at 95% confidence level) of only ±30 and ±8 cm, respectively. This difference in precision can be explained by three main factors: (1) the dismounting of the system between the annual surveys, (2) rougher sea conditions during the survey in 2004 and (3) the limited number of annual surveys. In general, the precision achieved here did not correspond to the full potential of the MBES system, as this could certainly have been improved by an increase in coverage density (soundings/m²), achievable by reducing the survey speed of the vessel. Nevertheless, precision was higher than that reported to date for earlier offshore test surveys using comparable equipment.

  13. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
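
The two headline metrics used here, accuracy as RMSE against a reference and precision as the spread of repeated measurements, generalize to any tracking setup. A minimal sketch, with hypothetical marker positions rather than the study's recordings:

```python
import math
from statistics import mean, pstdev

def accuracy_rmse(measured, reference):
    """Accuracy as the root-mean-square error against reference positions (mm)."""
    return math.sqrt(mean([(m - r) ** 2 for m, r in zip(measured, reference)]))

def precision_sd(measured):
    """Precision as the standard deviation of repeated measurements (mm)."""
    return pstdev(measured)

# Hypothetical repeated measurements of a marker whose true position is 10.0 mm.
tracked = [10.1, 9.9, 10.2]
rmse = accuracy_rmse(tracked, [10.0] * 3)
sd = precision_sd(tracked)
```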

  14. Improved Accuracy and Precision in LA-ICP-MS U-Th/Pb Dating of Zircon through the Reduction of Crystallinity Related Bias

    NASA Astrophysics Data System (ADS)

    Matthews, W.; McDonald, A.; Hamilton, B.; Guest, B.

    2015-12-01

    The accuracy of zircon U-Th/Pb ages generated by LA-ICP-MS is limited by systematic bias resulting from differences in crystallinity of the primary reference and that of the unknowns being analyzed. In general, the use of a highly crystalline primary reference will tend to bias analyses of materials of lesser crystallinity toward older ages. When dating igneous rocks, bias can be minimized by matching the crystallinity of the primary reference to that of the unknowns. However, the crystallinity of the unknowns is often not well constrained prior to ablation, as it is a function of U and Th concentration, crystallization age, and thermal history. Likewise, selecting an appropriate primary reference is impossible when dating detrital rocks where zircons with differing ages, protoliths, and thermal histories are analyzed in the same session. We investigate the causes of systematic bias using Raman spectroscopy and measurements of the ablated pit geometry. The crystallinity of five zircon reference materials with ages between 28.2 Ma and 2674 Ma was estimated using Raman spectroscopy. Zircon references varied from being highly crystalline to highly metamict, with individual reference materials plotting as distinct clusters in peak wavelength versus Full-Width Half-Maximum (FWHM) space. A strong positive correlation (R²=0.69) was found between the FWHM for the band at ~1000 cm⁻¹ in the Raman spectrum of the zircon and its ablation rate, suggesting the degree of crystallinity is a primary control on ablation rate in zircons. A moderate positive correlation (R²=0.37) was found between ablation rate and the difference between the age determined by LA-ICP-MS and the accepted ID-TIMS age (ΔAge). We use the measured, intra-sessional relationship between ablation rate and ΔAge of secondary references to reduce systematic bias. Rapid, high-precision measurement of ablated pit geometries using an optical profilometer and custom MATLAB algorithm facilitates the implementation
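
The intra-sessional correction described above amounts to regressing ΔAge of secondary references on ablation rate and subtracting the fitted bias from each unknown. The sketch below illustrates this with ordinary least squares on hypothetical session data; the variable names and numbers are ours.

```python
def linear_fit(x, y):
    """Ordinary least squares y = a + b*x, returning (a, b, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

def bias_corrected_age(measured_age, ablation_rate, a, b):
    """Subtract the rate-predicted age bias (delta-age) from a measured age."""
    return measured_age - (a + b * ablation_rate)

# Hypothetical secondary-reference session data: ablation rate versus
# delta-age (Ma); a perfect linear trend is used purely for illustration.
rates = [5.0, 6.0, 7.0, 8.0]
delta_ages = [1.0, 2.0, 3.0, 4.0]
a, b, r2 = linear_fit(rates, delta_ages)
corrected = bias_corrected_age(100.0, 6.0, a, b)
```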

  15. Selection and use of TLDs for high-precision NERVA shielding measurements

    NASA Technical Reports Server (NTRS)

    Woodsum, H. C.

    1972-01-01

    An experimental evaluation of thermoluminescent dosimeters was performed in order to select high precision dosimeters for a study whose purpose is to measure gamma streaming through the coolant passages of a simulated flight type internal NERVA reactor shield. Based on this study, the CaF2 chip TLDs are the most reproducible dosimeters with reproducibility generally within a few percent, but none of the TLDs tested met the reproducibility criterion of plus or minus 2%.

  16. Decision precision or holistic heuristic?: Insights on on-site selection of student nurses and midwives.

    PubMed

    Macduff, Colin; Stephen, Audrey; Taylor, Ruth

    2016-01-01

    Concerns about quality of care delivery in the UK have led to more scrutiny of criteria and methods for the selection of student nurses. However few substantive research studies of on-site selection processes exist. This study elicited and interpreted perspectives on interviewing processes and related decision making involved in on-site selection of student nurses and midwives. Individual and focus group interviews were undertaken with 36 lecturers, 5 clinical staff and 72 students from seven Scottish universities. Enquiry focused primarily on interviewing of candidates on-site. Qualitative content analysis was used as a primary strategy, followed by in-depth thematic analysis. Students had very mixed experiences of interview processes. Staff typically took into account a range of candidate attributes that they valued in order to achieve holistic assessments. These included: interpersonal skills, team working, confidence, problem-solving, aptitude for caring, motivations, and commitment. Staff had mixed views of the validity and reliability of interview processes. A holistic heuristic for overall decision making predominated over belief in the precision of, and evidence base for, particular attribute measurement processes. While the development of measurement tools for particular attributes continues apace, tension between holism and precision is likely to persist within on-site selection procedures.

  18. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that with careful tuning of the experimental conditions a reproducibility of 3–4% in the ratios of equivalent intensities obtained in different measurements can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  19. Exploiting microRNA Specificity and Selectivity: Paving a Sustainable Path Towards Precision Medicine.

    PubMed

    Santulli, Gaetano

    2015-01-01

    In his State of the Union address before both chambers of the US Congress, President Barack Obama called for increased investment in US infrastructure and research and announced the launch of a new Precision Medicine Initiative, aiming to accelerate biomedical discovery. Due to their well-established selectivity and specificity, microRNAs can represent a useful tool, both in diagnosis and therapy, in forging the path towards the achievement of precision medicine. This introductory chapter represents a guide for the Reader in examining the functional roles of microRNAs in the most diverse aspects of clinical practice, which will be explored in this third volume of the microRNA trilogy. PMID:26663175

  1. Comparative accuracy of the Albedo, transmission and absorption for selected radiative transfer approximations

    NASA Technical Reports Server (NTRS)

    King, M. D.; HARSHVARDHAN

    1986-01-01

    Illustrations of both the relative and absolute accuracy of eight different radiative transfer approximations as a function of optical thickness, solar zenith angle and single scattering albedo are given. Computational results for the plane albedo, total transmission and fractional absorption were obtained for plane-parallel atmospheres composed of cloud particles. These computations, which were obtained using the doubling method, are compared with comparable results obtained using selected radiative transfer approximations. Comparisons were made between asymptotic theory for thick layers and the following widely used two-stream approximations: Coakley-Chylek's models 1 and 2, Meador-Weaver, Eddington, delta-Eddington, PIFM and delta-discrete ordinates.

  2. Predicted accuracy of and response to genomic selection for new traits in dairy cattle.

    PubMed

    Calus, M P L; de Haas, Y; Pszczola, M; Veerkamp, R F

    2013-02-01

    Genomic selection relaxes the requirement of traditional selection tools to have phenotypic measurements on close relatives of all selection candidates. This opens up possibilities to select for traits that are difficult or expensive to measure. The objectives of this paper were to predict accuracy of and response to genomic selection for a new trait, considering that only a cow reference population of moderate size was available for the new trait, and that selection simultaneously targeted an index and this new trait. Accuracy for and response to selection were deterministically evaluated for three different breeding goals. Single trait selection for the new trait based only on a limited cow reference population of up to 10 000 cows, showed that maximum genetic responses of 0.20 and 0.28 genetic standard deviation (s.d.) per year can be achieved for traits with a heritability of 0.05 and 0.30, respectively. Adding information from the index based on a reference population of 5000 bulls, and assuming a genetic correlation of 0.5, increased genetic response for both heritability levels by up to 0.14 genetic s.d. per year. The scenario with simultaneous selection for the new trait and the index, yielded a substantially lower response for the new trait, especially when the genetic correlation with the index was negative. Despite the lower response for the index, whenever the new trait had considerable economic value, including the cow reference population considerably improved the genetic response for the new trait. For scenarios with a zero or negative genetic correlation with the index and equal economic value for the index and the new trait, a reference population of 2000 cows increased genetic response for the new trait with at least 0.10 and 0.20 genetic s.d. per year, for heritability levels of 0.05 and 0.30, respectively. We conclude that for new traits with a very small or positive genetic correlation with the index, and a high positive economic value
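
The way reference-population size and heritability drive prediction accuracy can be sketched with the widely used deterministic approximation of Daetwyler and colleagues, r = sqrt(N h² / (N h² + Mₑ)). This is a generic formula, not the paper's own deterministic machinery, and the effective number of chromosome segments Mₑ below is an assumed illustrative value.

```python
import math

def genomic_accuracy(n_records, h2, m_eff):
    """Deterministic genomic prediction accuracy, Daetwyler-style:
    r = sqrt(N*h2 / (N*h2 + Me))."""
    return math.sqrt(n_records * h2 / (n_records * h2 + m_eff))

# Illustrative values: cow reference populations for a low- and a
# moderate-heritability trait, with an assumed Me of 1000 segments.
low_h2 = genomic_accuracy(10000, 0.05, 1000)    # h2 = 0.05
high_h2 = genomic_accuracy(10000, 0.30, 1000)   # h2 = 0.30
small_pop = genomic_accuracy(2000, 0.30, 1000)  # smaller cow reference
```

The qualitative pattern matches the abstract: accuracy, and hence response, rises with both heritability and the size of the cow reference population.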

  3. Individual variation in exploratory behaviour improves speed and accuracy of collective nest selection by Argentine ants

    PubMed Central

    Hui, Ashley; Pinter-Wollman, Noa

    2014-01-01

    Collective behaviours are influenced by the behavioural composition of the group. For example, a collective behaviour may emerge from the average behaviour of the group's constituents, or be driven by a few key individuals that catalyse the behaviour of others in the group. When ant colonies collectively relocate to a new nest site, there is an inherent trade-off between the speed and accuracy of their decision of where to move due to the time it takes to gather information. Thus, variation among workers in exploratory behaviour, which allows gathering information about potential new nest sites, may impact the ability of a colony to move quickly into a suitable new nest. The invasive Argentine ant, Linepithema humile, expands its range locally through the dispersal and establishment of propagules: groups of ants and queens. We examine whether the success of these groups in rapidly finding a suitable nest site is affected by their behavioural composition. We compared nest choice speed and accuracy among groups of all-exploratory, all-nonexploratory and half-exploratory–half-nonexploratory individuals. We show that exploratory individuals improve both the speed and accuracy of collective nest choice, and that exploratory individuals have additive, not synergistic, effects on nest site selection. By integrating an examination of behaviour into the study of invasive species we shed light on the mechanisms that impact the progression of invasion. PMID:25018558

  4. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision, to better than 0.1% on single zircons, of the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ash-bed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded eluate from the ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is its relatively coarse spatial resolution compared to in situ techniques, which averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation. The same grains are then removed from the grain mount, fragmented, and

  5. [Analysis on the accuracy of simple selection method of Fengshi (GB 31)].

    PubMed

    Li, Zhixing; Zhang, Haihua; Li, Suhe

    2015-12-01

    To explore the accuracy of the simple selection method of Fengshi (GB 31). Through the study of ancient and modern data, the analysis and integration of acupuncture books, the comparison of the locations of Fengshi (GB 31) by doctors from all dynasties and the integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is confirmed to be the same as the traditional method. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. Also, it is proposed that Fengshi (GB 31) should be located through the integration of the simple method and body surface anatomical marks.

  6. Precise Muscle Selection Using Dynamic Polyelectromyography for Treatment of Post-stroke Dystonia: A Case Report

    PubMed Central

    2016-01-01

    Dystonia has a wide range of causes, but treatment of dystonia is limited to minimizing the symptoms, as there is yet no successful treatment for its cause. One of the optimal treatment methods for dystonia is chemodenervation using botulinum toxin type A (BTX-A), alcohol injection, etc., but its success depends on how precisely the dystonic muscle is selected. Here, we report a successful experience with a 49-year-old post-stroke female patient who showed paroxysmal repetitive contractions involving the right leg, which may be dystonic in nature. BTX-A and alcohol were injected into the muscles that were identified by dynamic polyelectromyography. After injection, the dystonic muscle spasm, cramping pain, and the range of motion of the affected lower limb improved markedly, and she was able to walk independently indoors. In such a case, dynamic polyelectromyography may be a useful method for selecting the dominant dystonic muscles. PMID:27446795

  7. Maskless deposition technique for the physical vapor deposition of thin film and multilayer coatings with subnanometer precision and accuracy

    DOEpatents

    Vernon, Stephen P.; Ceglio, Natale M.

    2000-01-01

    The invention is a method for the production of axially symmetric, graded and ungraded thickness thin film and multilayer coatings that avoids the use of apertures or masks to tailor the deposition profile. A motional averaging scheme permits the deposition of uniform thickness coatings independent of the substrate radius. Coating uniformity results from an exact cancellation of substrate radius dependent terms, which occurs when the substrate moves at constant velocity. If the substrate is allowed to accelerate over the source, arbitrary coating profiles can be generated through appropriate selection and control of the substrate center of mass equation of motion. The radial symmetry of the coating profile is an artifact produced by orbiting the substrate about its center of mass; other distributions are obtained by selecting another rotation axis. Consequently there is a direct mapping between the coating thickness and substrate equation of motion which can be used to tailor the coating profile without the use of masks and apertures.
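
The mapping between coating thickness and substrate motion described above can be illustrated with a toy one-dimensional model in which deposited thickness at a point scales with dwell time over the source, so a target thickness profile fixes the required speed profile (constant speed yields a uniform coating). All numbers below are illustrative, not taken from the patent:

```python
def required_speed(target_thickness, flux=1.0):
    """Toy model: deposited thickness at a point scales with dwell time,
    i.e. with flux / speed, so the speed must vary inversely with the
    desired thickness profile (constant speed -> uniform coating)."""
    return [flux / d for d in target_thickness]

# A graded profile needs the substrate to slow down where the coating
# must be thicker, i.e. speed falls as 1/d:
profile = [1.0, 2.0, 4.0]          # desired relative thicknesses
speeds = required_speed(profile)   # [1.0, 0.5, 0.25]
```

A uniform profile (all entries equal) gives a constant speed, matching the abstract's observation that coating uniformity results when the substrate moves at constant velocity.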

  8. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at http://croc.gsfc.nasa.gov/shadoz. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, the Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  9. Group-Item and Directed Scanning: Examining Preschoolers' Accuracy and Efficiency in Two Augmentative Communication Symbol Selection Methods

    ERIC Educational Resources Information Center

    White, Aubrey Randall; Carney, Edward; Reichle, Joe

    2010-01-01

    Purpose: The current investigation compared directed scanning and group-item scanning among typically developing 4-year-old children. Of specific interest were their accuracy, selection speed, and efficiency of cursor movement in selecting colored line drawn symbols representing object vocabulary. Method: Twelve 4-year-olds made selections in both…

  10. The science of and advanced technology for cost-effective manufacture of high precision engineering products. Volume 4. Thermal effects on the accuracy of numerically controlled machine tool

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Barash, M. M.; Liu, C. R.

    1985-10-01

    Thermal effects on the accuracy of numerically controlled machine tools are especially important in the context of unmanned manufacture or under conditions of precision metal cutting. Removal of the operator from the direct control of the metal cutting process has created problems in terms of maintaining accuracy. The objective of this research is to study thermal effects on the accuracy of numerically controlled machine tools. The initial part of the research report is concerned with the analysis of a hypothetical machine. The thermal characteristics of this machine are studied. Numerical methods for evaluating the errors exhibited by the slides of the machine are proposed, and the possibility of predicting thermally induced errors by the use of regression equations is investigated. A method for computing the workspace error is also presented. The final part is concerned with the actual measurement of errors on a modern CNC machining center. Thermal influence on the errors is the main focus of the experimental work. Thermal influences on the errors of machine tools are predictable. Techniques for determining thermal effects on machine tools at a design stage are also presented. Keywords: error models and prediction; metrology; automation.
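
The regression-equation approach to predicting thermally induced errors can be sketched as an ordinary least-squares fit of measured displacement against temperature-sensor readings. The sensor layout, coefficients, and noise-free synthetic data below are illustrations, not values from the report:

```python
import numpy as np

# Synthetic illustration: thermal error = a0 + a1*T1 + a2*T2, with
# made-up coefficients; a real model would be fit to measured data.
rng = np.random.default_rng(42)
T = rng.uniform(20.0, 45.0, size=(50, 2))       # two temperature sensors (degC)
true_coef = np.array([5.0, 0.8, 0.3])           # [offset um, um/degC, um/degC]
err = true_coef[0] + T @ true_coef[1:]          # thermal error (um), noise-free

A = np.column_stack([np.ones(len(T)), T])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, err, rcond=None)  # fitted regression coefficients

# Predicted thermal error for a new machine state (t1, t2 in degC):
predict = lambda t1, t2: coef[0] + coef[1] * t1 + coef[2] * t2
```

In practice the fitted equation is used to compensate axis positions in real time from on-machine temperature probes.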

  11. Development of new methods in modern selective organic synthesis: preparation of functionalized molecules with atomic precision

    NASA Astrophysics Data System (ADS)

    Ananikov, V. P.; Khemchyan, L. L.; Ivanova, Yu V.; Bukhtiyarov, V. I.; Sorokin, A. M.; Prosvirin, I. P.; Vatsadze, S. Z.; Medved'ko, A. V.; Nuriev, V. N.; Dilman, A. D.; Levin, V. V.; Koptyug, I. V.; Kovtunov, K. V.; Zhivonitko, V. V.; Likholobov, V. A.; Romanenko, A. V.; Simonov, P. A.; Nenajdenko, V. G.; Shmatova, O. I.; Muzalevskiy, V. M.; Nechaev, M. S.; Asachenko, A. F.; Morozov, O. S.; Dzhevakov, P. B.; Osipov, S. N.; Vorobyeva, D. V.; Topchiy, M. A.; Zotova, M. A.; Ponomarenko, S. A.; Borshchev, O. V.; Luponosov, Yu N.; Rempel, A. A.; Valeeva, A. A.; Stakheev, A. Yu; Turova, O. V.; Mashkovsky, I. S.; Sysolyatin, S. V.; Malykhin, V. V.; Bukhtiyarova, G. A.; Terent'ev, A. O.; Krylov, I. B.

    2014-10-01

    The challenges of modern society and the growing demand of high-technology sectors of industrial production bring about a new phase in the development of organic synthesis. A cutting edge of modern synthetic methods is the introduction of functional groups and more complex structural units into organic molecules with unprecedented control over the course of the chemical transformation. Analysis of the state-of-the-art achievements in selective organic synthesis indicates the appearance of a new trend: the synthesis of organic molecules, biologically active compounds, pharmaceutical substances and smart materials with absolute selectivity. The most advanced approaches to organic synthesis anticipated in the near future can be defined as 'atomic precision' in chemical reactions. The present review considers selective methods of organic synthesis suitable for the transformation of complex functionalized molecules under mild conditions. Selected key trends in modern organic synthesis are considered, including the preparation of organofluorine compounds, catalytic cross-coupling and oxidative cross-coupling reactions, atom-economic addition reactions, metathesis processes, oxidation and reduction reactions, synthesis of heterocyclic compounds, design of new homogeneous and heterogeneous catalytic systems, application of photocatalysis, scaling up of synthetic procedures to industrial level and development of new approaches to the investigation of mechanisms of catalytic reactions. The bibliography includes 840 references.

  12. Measuring the bias, precision, accuracy, and validity of self-reported height and weight in assessing overweight and obesity status among adolescents using a surveillance system

    PubMed Central

    2015-01-01

    Background Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. Methods A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000–2002 and 2004–2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin’s concordance correlation coefficient was used. BMI was estimated for self-reported and objective measures. The prevalence of students’ weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations were assessed by sex and grade. Results Students in 8th and 11th grade overestimated their height by 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight by 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m2 (White boys) to -0.7 kg/m2 (African-American girls). The sensitivity of self-reported measures to classify weight
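
Applying a correction equation of the kind developed in this study amounts to a linear transform of each self-reported measure before computing BMI. The intercepts and slopes below are hypothetical placeholders, not the fitted SPAN values:

```python
def correct(self_reported: float, intercept: float, slope: float) -> float:
    """Apply a linear correction equation: objective ~ intercept + slope * self-report."""
    return intercept + slope * self_reported

def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index in kg/m^2."""
    h_m = height_cm / 100.0
    return weight_kg / (h_m * h_m)

# Hypothetical coefficients (not from the paper): shrink over-reported
# height and inflate under-reported weight before computing BMI.
height = correct(175.0, intercept=2.0, slope=0.98)    # corrected height, cm
weight = correct(60.0, intercept=0.5, slope=1.005)    # corrected weight, kg
corrected_bmi = bmi(weight, height)
```

In the paper, separate coefficients are fit per sex and race/ethnicity subpopulation, since the direction and size of self-report bias differ across groups.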

  13. Balancing accuracy and efficiency in selecting vibrational configuration interaction basis states using vibrational perturbation theory

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-08-01

    This work describes the benchmarking of a vibrational configuration interaction (VCI) algorithm that combines the favourable computational scaling of VPT2 with the algorithmic robustness of VCI, in which VCI basis states are selected according to the magnitude of their contribution to the VPT2 energy, for the ground state and fundamental excited states. Particularly novel aspects of this work include: expanding the potential to 6th order in normal mode coordinates, using a double-iterative procedure in which configuration selection and VCI wavefunction updates are performed iteratively (micro-iterations) over a range of screening threshold values (macro-iterations), and characterisation of computational resource requirements as a function of molecular size. Computational costs may be further reduced by a priori truncation of the VCI wavefunction according to maximum extent of mode coupling, along with discarding negligible force constants and VCI matrix elements, and formulating the wavefunction in a harmonic oscillator product basis to enable efficient evaluation of VCI matrix elements. Combining these strategies, we define a series of screening procedures that scale as O(Nmode^6)-O(Nmode^9) in run time and O(Nmode^6)-O(Nmode^7) in memory, depending on the desired level of accuracy. Our open-source code is freely available for download from http://www.sourceforge.net/projects/pyvci-vpt2.
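
The configuration-selection scheme described above can be sketched schematically: basis states are retained when the magnitude of their VPT2 energy contribution passes a screening threshold, and macro-iterations sweep the threshold downward. The contribution values below are placeholders, and a real implementation updates the VCI wavefunction between sweeps (the micro-iterations), which this sketch omits:

```python
def select_basis(contributions, threshold):
    """Keep configurations whose |VPT2 energy contribution| meets the screen."""
    return {i for i, c in enumerate(contributions) if abs(c) >= threshold}

def macro_iterate(contributions, thresholds):
    """Loosen the screen over a decreasing sequence of thresholds
    (macro-iterations); each pass can only grow the selected set."""
    selected = set()
    for t in sorted(thresholds, reverse=True):
        selected |= select_basis(contributions, t)
    return selected

contribs = [0.50, -0.04, 0.002, -0.30, 0.0001]   # placeholder magnitudes (a.u.)
space = macro_iterate(contribs, thresholds=[0.1, 0.01, 0.001])
```

The trade-off in the abstract follows directly: a tighter final threshold keeps fewer states (cheaper, less accurate), a looser one keeps more (costlier, more accurate).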

  14. An experimental analysis of accuracy and precision of a high-speed strain-gage system based on the direct-resistance method

    NASA Astrophysics Data System (ADS)

    Cappa, P.; del Prete, Z.

    1992-03-01

    An experimental study on the relative merits of using a high-speed digital-acquisition system to measure the strain-gage resistance directly, rather than using a conventional Wheatstone bridge, is carried out. Strain gages with nominal resistances of 120 ohm and 1 kohm were simulated with precision resistors, and the output signals were acquired over periods of 48 and 144 hours; furthermore, the effects of statistical filtering on metrological performance were evaluated. The results show that the implementation of statistical filtering yields a considerable improvement in gathering strain-gage-resistance readings. On the other hand, such a procedure obviously causes a loss of performance with regard to the acquisition rate, and therefore to the dynamic data-collecting capabilities. In any case, the intrinsic resolution of the 12-bit A/D converter utilized in the present experimental analysis limits measurement accuracy to the range of hundreds of microstrain (µm/m).
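
The statistical filtering evaluated above, reporting the mean of a block of raw readings so that uncorrelated noise shrinks roughly as the square root of the block size, can be demonstrated with a seeded simulation. This is an illustration, not the authors' acquisition system; the 0.01-ohm noise level is arbitrary:

```python
import random
import statistics

def filtered_reading(samples):
    """Statistical filtering: report the mean of a block of raw readings."""
    return statistics.fmean(samples)

random.seed(1)
true_r = 120.0                                  # ohms, nominal gage resistance
raw = [true_r + random.gauss(0.0, 0.01) for _ in range(10_000)]

# Block-average in groups of 100: noise should drop by about sqrt(100) = 10x,
# at the cost of a 100x lower effective acquisition rate.
blocks = [filtered_reading(raw[i:i + 100]) for i in range(0, len(raw), 100)]
sd_raw = statistics.stdev(raw)
sd_filtered = statistics.stdev(blocks)
```

This trade of acquisition rate for resolution is exactly the loss of dynamic data-collecting capability noted in the abstract.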

  15. High-precision, high-accuracy ultralong-range swept-source optical coherence tomography using vertical cavity surface emitting laser light source.

    PubMed

    Grulkowski, Ireneusz; Liu, Jonathan J; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Jiang, James; Fujimoto, James G; Cable, Alex E

    2013-03-01

    We demonstrate ultralong-range swept-source optical coherence tomography (OCT) imaging using vertical cavity surface emitting laser technology. The ability to adjust laser parameters and high-speed acquisition enables imaging ranges from a few centimeters up to meters using the same instrument. We discuss the challenges of long-range OCT imaging. In vivo human-eye imaging and optical component characterization are presented. The precision and accuracy of OCT-based measurements are assessed and are important for ocular biometry and reproducible intraocular distance measurement before cataract surgery. Additionally, meter-range measurement of fiber length and multicentimeter-range imaging are reported. 3D visualization supports a class of industrial imaging applications of OCT.

  16. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

    Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ± 0.3‰ for chalcopyrite and pyrrhotite, and ± 0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ± 0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (± 1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain to grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite correlate with orientations in which the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented, thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical
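
The δ34S values discussed above follow standard delta notation: the per-mil deviation of a measured 34S/32S ratio from the VCDT reference ratio. A minimal sketch, using the conventional VCDT value of 0.0441626:

```python
VCDT_34_32 = 0.0441626   # conventional 34S/32S ratio of the VCDT standard

def delta34s(r_sample: float, r_standard: float = VCDT_34_32) -> float:
    """delta-34S in per mil: 1000 * (R_sample / R_standard - 1)."""
    return 1000.0 * (r_sample / r_standard - 1.0)

# A sample ratio 0.1% above the standard gives delta-34S of +1 per mil:
d = delta34s(VCDT_34_32 * 1.001)
```

The quoted precisions (e.g. ± 0.3‰, 2SD) are then simply twice the standard deviation of repeated δ34S determinations on a homogeneous standard.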

  17. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of the analyte element (Hg) and the internal standard (Tl), used for mass discrimination correction purposes, and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN nor with CVG was any evidence for mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation that affected the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
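
Sample-standard bracketing (SSB), one of the correction approaches evaluated above, expresses the sample's measured ratio relative to the mean of standard measurements made immediately before and after it, so that slow instrumental drift common to both largely cancels. A minimal sketch with made-up ratios:

```python
def ssb_delta(r_sample: float, r_std_before: float, r_std_after: float) -> float:
    """Sample-standard bracketing: express the sample ratio relative to the
    mean of the two bracketing standard measurements, in per mil."""
    r_std = 0.5 * (r_std_before + r_std_after)
    return 1000.0 * (r_sample / r_std - 1.0)

# Drift that scales sample and standards alike cancels: a sample 1 per mil
# above the standard reads the same whatever the common drift factor.
d = ssb_delta(1.0020, 1.0000, 1.0030)   # standards drifted; sample in between
```

The double correction mentioned in the abstract applies an internal-standard (Tl) mass-bias law first and then this same bracketing step to remove residual drift.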

  18. Increased prediction accuracy in wheat breeding trials using a marker × environment interaction genomic selection model.

    PubMed

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P; Autrique, Enrique; de los Campos, Gustavo

    2015-04-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT's research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis are
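
The decomposition at the heart of the M×E model, marker effects split into a main effect shared across environments plus environment-specific deviations, can be sketched with a toy example (genotypes and effect sizes are made-up numbers, not CIMMYT data):

```python
import numpy as np

# Toy M-by-E decomposition: the effect of marker k in environment j is
# b_main[k] + b_env[k, j]; genomic value of line i in environment j is
# u[i, j] = sum_k X[i, k] * (b_main[k] + b_env[k, j]).
X = np.array([[0, 1, 2],
              [2, 0, 1]], dtype=float)   # 2 lines x 3 markers (allele counts)
b_main = np.array([0.5, -0.2, 0.1])      # effects stable across environments
b_env = np.array([[0.1, -0.1],           # marker x environment deviations
                  [0.0, 0.2],
                  [-0.05, 0.05]])

u = X @ (b_main[:, None] + b_env)        # 2 lines x 2 environments
```

Markers with large main effects and small deviations are the "stable" variants the abstract refers to; markers dominated by `b_env` drive G×E.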

  19. Increased Prediction Accuracy in Wheat Breeding Trials Using a Marker × Environment Interaction Genomic Selection Model

    PubMed Central

    Lopez-Cruz, Marco; Crossa, Jose; Bonnett, David; Dreisigacker, Susanne; Poland, Jesse; Jannink, Jean-Luc; Singh, Ravi P.; Autrique, Enrique; de los Campos, Gustavo

    2015-01-01

    Genomic selection (GS) models use genome-wide genetic information to predict genetic values of candidates of selection. Originally, these models were developed without considering genotype × environment interaction (G×E). Several authors have proposed extensions of the single-environment GS model that accommodate G×E using either covariance functions or environmental covariates. In this study, we model G×E using a marker × environment interaction (M×E) GS model; the approach is conceptually simple and can be implemented with existing GS software. We discuss how the model can be implemented by using an explicit regression of phenotypes on markers or using covariance structures (a genomic best linear unbiased prediction-type model). We used the M×E model to analyze three CIMMYT wheat data sets (W1, W2, and W3), where more than 1000 lines were genotyped using genotyping-by-sequencing and evaluated at CIMMYT’s research station in Ciudad Obregon, Mexico, under simulated environmental conditions that covered different irrigation levels, sowing dates and planting systems. We compared the M×E model with a stratified (i.e., within-environment) analysis and with a standard (across-environment) GS model that assumes that effects are constant across environments (i.e., ignoring G×E). The prediction accuracy of the M×E model was substantially greater than that of an across-environment analysis that ignores G×E. Depending on the prediction problem, the M×E model had either similar or greater levels of prediction accuracy than the stratified analyses. The M×E model decomposes marker effects and genomic values into components that are stable across environments (main effects) and others that are environment-specific (interactions). Therefore, in principle, the interaction model could shed light on which variants have effects that are stable across environments and which ones are responsible for G×E. The data set and the scripts required to reproduce the analysis

  20. Accuracy of genomic selection for age at puberty in a multi-breed population of tropically adapted beef cattle.

    PubMed

    Farah, M M; Swan, A A; Fortes, M R S; Fonseca, R; Moore, S S; Kelly, M J

    2016-02-01

    Genomic selection is becoming a standard tool in livestock breeding programs, particularly for traits that are hard to measure. Accuracy of genomic selection can be improved by increasing the quantity and quality of data and potentially by improving analytical methods. Adding genotypes and phenotypes from additional breeds or crosses often improves the accuracy of genomic predictions but requires specific methodology. A model was developed to incorporate breed composition estimated from genotypes into genomic selection models. This method was applied to age at puberty data in female beef cattle (as estimated from age at first observation of a corpus luteum) from a mix of Brahman and Tropical Composite beef cattle. In this dataset, the new model incorporating breed composition did not increase the accuracy of genomic selection. However, the breeding values exhibited slightly less bias (as assessed by deviation of regression of phenotype on genomic breeding values from the expected value of 1). Adding additional Brahman animals to the Tropical Composite analysis increased the accuracy of genomic predictions and did not affect the accuracy of the Brahman predictions. PMID:26490440

  1. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  2. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods.

    PubMed

    He, Bin; Frey, Eric C

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed (111)In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
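
The random VOI perturbation described above can be sketched schematically: each control point is moved to a neighbouring voxel, either without bias or with a directional bias standing in for erosion or dilation. This is an illustrative simplification (the bias here is toward lower or higher voxel indices, not toward the organ centroid), not the authors' code:

```python
import random

def perturb_point(point, bias=None, rng=random):
    """Move a 3D control point (in voxel coordinates) to one of its
    nearest/next-nearest voxels. bias=None gives unbiased random
    perturbation; 'in' / 'out' bias every component toward lower /
    higher indices, a schematic stand-in for erosion / dilation."""
    step = [rng.choice((-1, 0, 1)) for _ in point]
    if bias == "in":
        step = [-abs(s) for s in step]
    elif bias == "out":
        step = [abs(s) for s in step]
    return tuple(p + s for p, s in zip(point, step))

random.seed(0)
voi = [(10, 12, 8), (11, 12, 8), (11, 13, 8)]   # made-up contour points
eroded = [perturb_point(p, bias="in") for p in voi]
dilated = [perturb_point(p, bias="out") for p in voi]
```

Repeating such perturbations over many Monte Carlo realizations and re-estimating organ activity each time yields the error distributions the study reports.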

  3. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients of the new method ranging from 0.743 to 0.945 and Levene's test showing significantly lower variances in data generated by the new method both verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416

  5. The effects of relatedness and GxE interaction on prediction accuracies in genomic selection: a study in cassava

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Prior to implementation of genomic selection, an evaluation of the potential accuracy of prediction can be obtained by cross validation. In this procedure, a population with both phenotypes and genotypes is split into training and validation sets. The prediction model is fitted using the training se...

  6. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
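
A toy model of the scheme above: a substrate sweeps at constant speed across a deposition source whose flux is modulated in time, so the deposited thickness profile along the sweep direction tracks the power recipe. Geometry, plume shape, and the recipe below are invented for illustration, not taken from the patent.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 201)   # substrate positions (cm)
t = np.linspace(0.0, 10.0, 201)   # sweep time (s); position of source axis = speed * t
dt = t[1] - t[0]
speed = 1.0                       # cm/s

def plume(dx):
    """Flux seen at lateral offset dx from the source axis (arbitrary units)."""
    return np.exp(-0.5 * (dx / 1.0) ** 2)

# The time-varying power recipe applied during the sweep.
power = 1.0 + 0.5 * np.sin(2 * np.pi * t / 10.0)

# Each position accumulates flux from every instant of the sweep, weighted by
# the instantaneous power and the plume falloff from the source axis.
thickness = np.array([np.sum(power * plume(xi - speed * t)) * dt for xi in x])
```

Interior positions that pass the source while power is high end up thicker than those that pass while power is low, which is the mechanism the recipe exploits to grade the coating.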

  7. A precisely substituted benzopyran targets androgen refractory prostate cancer cells through selective modulation of estrogen receptors.

    PubMed

    Kumar, Rajeev; Verma, Vikas; Sharma, Vikas; Jain, Ashish; Singh, Vishal; Sarswat, Amit; Maikhuri, Jagdamba P; Sharma, Vishnu L; Gupta, Gopal

    2015-03-15

    Dietary consumption of phytoestrogens like genistein has been linked with lower incidence of prostate cancer. The estradiol-like benzopyran core of genistein confers estrogen receptor-β (ER-β) selectivity that imparts weak anti-proliferative activity against prostate cancer cells. DL-2-[4-(2-piperidinoethoxy)phenyl]-3-phenyl-2H-1-benzopyran (BP), a SERM designed with benzopyran core, targeted androgen independent prostate cancer (PC-3) cells 14-times more potently than genistein, ~25% more efficiently than tamoxifen and 6.5-times more actively than ICI-182780, without forfeiting significant specificity in comparison to genistein. BP increased apoptosis (annexin-V and TUNEL labeling), arrested cell cycle, and significantly increased caspase-3 activity along with mRNA expressions of estrogen receptor (ER)-β and FasL (qPCR) in PC-3 cells. In classical ERE-luc reporter assay BP behaved as a potent ER-α antagonist and ER-β agonist. Accordingly, it decreased expression of ER-α target PS2 (P<0.01) and increased expression of ER-β target TNF-α (P<0.05) genes in PC-3. ER-β deficient PC-3 (siRNA-transfected) was resistant to apoptotic and anti-proliferative actions of SERMs, including stimulation of FasL expression by BP. BP significantly inhibited phosphorylation of Akt and ERK-1/2, JNK and p38 in PC-3 (immunoblotting), and thus adopted a multi-pathway mechanism to exert a more potent anti-proliferative activity against prostate cancer cells than natural and synthetic SERMs. Its precise ER-subtype specific activity presents a unique lead structure for further optimization.

  8. Using measurements of muscle color, pH, and electrical impedance to augment the current USDA beef quality grading standards and improve the accuracy and precision of sorting carcasses into palatability groups.

    PubMed

    Wulf, D M; Page, J K

    2000-10-01

    This research was conducted to determine whether objective measures of muscle color, muscle pH, and(or) electrical impedance are useful in segregating palatable beef from unpalatable beef, and to determine whether the current USDA quality grading standards for beef carcasses could be revised to improve their effectiveness at distinguishing palatable from unpalatable beef. One hundred beef carcasses were selected from packing plants in Texas, Illinois, and Ohio to represent the full range of muscle color observed in the U.S. beef carcass population. Steaks from these 100 carcasses were used to determine shear force on eight cooked beef muscles and taste panel ratings on three cooked beef muscles. It was discovered that the darkest-colored 20 to 25% of the beef carcasses sampled were less palatable and considerably less consistent than the other 75 to 80% sampled. Marbling score, by itself, explained 12% of the variation in beef palatability; hump height, by itself, explained 8% of the variation in beef palatability; measures of muscle color or pH, by themselves, explained 15 to 23% of the variation in beef palatability. When combined together, marbling score, hump height, and some measure of muscle color or pH explained 36 to 46% of the variation in beef palatability. Alternative quality grading systems were proposed to improve the accuracy and precision of sorting carcasses into palatability groups. The two proposed grading systems decreased palatability variation by 29% and 39%, respectively, within the Choice grade and decreased palatability variation by 37% and 12%, respectively, within the Select grade, when compared with current USDA standards. The percentage of unpalatable Choice carcasses was reduced from 14% under the current USDA grading standards to 4% and 1%, respectively, for the two proposed systems. The percentage of unpalatable Select carcasses was reduced from 36% under the current USDA standards to 7% and 29%, respectively, for the proposed systems.

  9. A precisely substituted benzopyran targets androgen refractory prostate cancer cells through selective modulation of estrogen receptors

    SciTech Connect

    Kumar, Rajeev; Verma, Vikas; Sharma, Vikas; Jain, Ashish; Singh, Vishal; Sarswat, Amit; Maikhuri, Jagdamba P.; Sharma, Vishnu L.; Gupta, Gopal

    2015-03-15

    Dietary consumption of phytoestrogens like genistein has been linked with lower incidence of prostate cancer. The estradiol-like benzopyran core of genistein confers estrogen receptor-β (ER-β) selectivity that imparts weak anti-proliferative activity against prostate cancer cells. DL-2-[4-(2-piperidinoethoxy)phenyl]-3-phenyl-2H-1-benzopyran (BP), a SERM designed with benzopyran core, targeted androgen independent prostate cancer (PC-3) cells 14-times more potently than genistein, ~ 25% more efficiently than tamoxifen and 6.5-times more actively than ICI-182780, without forfeiting significant specificity in comparison to genistein. BP increased apoptosis (annexin-V and TUNEL labeling), arrested cell cycle, and significantly increased caspase-3 activity along with mRNA expressions of estrogen receptor (ER)-β and FasL (qPCR) in PC-3 cells. In classical ERE-luc reporter assay BP behaved as a potent ER-α antagonist and ER-β agonist. Accordingly, it decreased expression of ER-α target PS2 (P < 0.01) and increased expression of ER-β target TNF-α (P < 0.05) genes in PC-3. ER-β deficient PC-3 (siRNA-transfected) was resistant to apoptotic and anti-proliferative actions of SERMs, including stimulation of FasL expression by BP. BP significantly inhibited phosphorylation of Akt and ERK-1/2, JNK and p38 in PC-3 (immunoblotting), and thus adopted a multi-pathway mechanism to exert a more potent anti-proliferative activity against prostate cancer cells than natural and synthetic SERMs. Its precise ER-subtype specific activity presents a unique lead structure for further optimization. - Highlights: • BP with benzopyran core of genistein was identified for ER-β selective action. • BP was 14-times more potent than genistein in targeting prostate cancer cells. • It behaved as a potent ER-β agonist and ER-α antagonist in gene reporter assays. • BP's anti-proliferative action was inhibited significantly in ER-β deficient cells. • BP — a unique lead structure

  10. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  11. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
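
The two summary statistics used above can be sketched directly: per-condition precision as the percent standard deviation of repeated readings, and the RMS percent difference between the OSL and ion-chamber CTDI100 values. The numbers below are invented for illustration, not the study's data.

```python
import numpy as np

# Hypothetical paired CTDI100 values (mGy): OSL vs. ion-chamber reference.
osl = np.array([14.1, 13.2, 6.9, 20.5])
chamber = np.array([15.8, 14.0, 7.5, 21.0])

pct_diff = 100.0 * (osl - chamber) / chamber
rms_pct_diff = float(np.sqrt(np.mean(pct_diff ** 2)))   # RMS percent difference

# Precision: percent standard deviation of five repeated OSL readings.
repeats = np.array([13.9, 14.3, 14.0, 14.2, 13.8])
precision_pct = 100.0 * repeats.std(ddof=1) / repeats.mean()
```

Note that the RMS percent difference captures both systematic underestimation and scatter, which is why it exceeds the repeat-measurement precision.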

  12. Screening Accuracy of Level 2 Autism Spectrum Disorder Rating Scales: A Review of Selected Instruments

    ERIC Educational Resources Information Center

    Norris, Megan; Lecavalier, Luc

    2010-01-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be…

  13. Genomic selection accuracy for grain quality traits in biparental wheat populations

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection (GS) is a promising tool for plant and animal breeding that uses genome wide molecular marker data to capture small and large effect quantitative trait loci and predict the genetic value of selection candidates. Genomic selection has been shown previously to have higher prediction ...

  14. Accuracy of initial codon selection by aminoacyl-tRNAs on the mRNA-programmed bacterial ribosome

    PubMed Central

    Zhang, Jingji; Ieong, Ka-Weng; Johansson, Magnus; Ehrenberg, Måns

    2015-01-01

    We used a cell-free system with pure Escherichia coli components to study initial codon selection of aminoacyl-tRNAs in ternary complex with elongation factor Tu and GTP on messenger RNA-programmed ribosomes. We took advantage of the universal rate-accuracy trade-off for all enzymatic selections to determine how the efficiency of initial codon readings decreased linearly toward zero as the accuracy of discrimination against near-cognate and wobble codon readings increased toward the maximal asymptote, the d value. We report data on the rate-accuracy variation for 7 cognate, 7 wobble, and 56 near-cognate codon readings comprising about 15% of the genetic code. Their d values varied about 400-fold in the 200–80,000 range depending on type of mismatch, mismatch position in the codon, and tRNA isoacceptor type. We identified error hot spots (d = 200) for U:G misreading in second and U:U or G:A misreading in third codon position by His-tRNAHis and, as also seen in vivo, Glu-tRNAGlu. We suggest that the proofreading mechanism has evolved to attenuate error hot spots in initial selection such as those found here. PMID:26195797
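
The linear rate-accuracy trade-off exploited above can be written down directly: the efficiency of a codon reading falls linearly toward zero as the accuracy A of discrimination approaches its maximal asymptote d. The functional form and the example d values follow the abstract; the chosen accuracy value is an arbitrary placeholder.

```python
def relative_efficiency(A, d):
    """Efficiency relative to its maximum for accuracy A with asymptote d."""
    if not (1.0 <= A < d):
        raise ValueError("accuracy must satisfy 1 <= A < d")
    return 1.0 - A / d

# Operating at the same accuracy costs far more efficiency at an error
# hot spot (d = 200) than for a high-fidelity reading (d = 80000):
hot = relative_efficiency(A=150.0, d=200.0)      # only 25% of maximal efficiency
cold = relative_efficiency(A=150.0, d=80000.0)   # nearly maximal efficiency
```

This is why a roughly 400-fold spread in d values (200 to 80,000) matters: low-d readings cannot be made accurate without a severe efficiency penalty, motivating the suggested role of proofreading.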

  15. Feature Selection Has a Large Impact on One-Class Classification Accuracy for MicroRNAs in Plants

    PubMed Central

    Yousef, Malik; Saçar Demirci, Müşerref Duygu; Khalifa, Waleed; Allmer, Jens

    2016-01-01

    MicroRNAs (miRNAs) are short RNA sequences involved in posttranscriptional gene regulation. Their experimental analysis is complicated and, therefore, needs to be supplemented with computational miRNA detection. Currently computational miRNA detection is mainly performed using machine learning and in particular two-class classification. For machine learning, the miRNAs need to be parametrized and more than 700 features have been described. Positive training examples for machine learning are readily available, but negative data is hard to come by. Therefore, it seems prerogative to use one-class classification instead of two-class classification. Previously, we were able to almost reach two-class classification accuracy using one-class classifiers. In this work, we employ feature selection procedures in conjunction with one-class classification and show that there is up to 36% difference in accuracy among these feature selection methods. The best feature set allowed the training of a one-class classifier which achieved an average accuracy of ~95.6% thereby outperforming previous two-class-based plant miRNA detection approaches by about 0.5%. We believe that this can be improved upon in the future by rigorous filtering of the positive training examples and by improving current feature clustering algorithms to better target pre-miRNA feature selection. PMID:27190509
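
The pipeline above can be illustrated with a toy sketch: select a feature subset using only the positive class, train a one-class model on positives, then score held-out positives and negatives. The variance-based selector and the distance-threshold classifier below are simple stand-ins for the paper's feature-selection methods and one-class classifiers, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
pos = rng.normal(0.0, 1.0, size=(300, 50))        # "pre-miRNA" feature vectors
pos[:, :5] = rng.normal(5.0, 2.0, size=(300, 5))  # 5 informative features
neg = rng.normal(0.0, 1.0, size=(100, 50))        # negatives lack the signal

# One-class setting: feature selection may only use the positive class.
keep = np.argsort(pos.var(axis=0))[-5:]           # keep highest-variance features

mu = pos[:, keep].mean(axis=0)
d_train = np.linalg.norm(pos[:, keep] - mu, axis=1)
thresh = np.quantile(d_train, 0.95)               # accept ~95% of training data

def predict_positive(X):
    """True where a sample falls inside the learned one-class region."""
    return np.linalg.norm(X[:, keep] - mu, axis=1) <= thresh

sensitivity = predict_positive(pos).mean()        # positives accepted
specificity = 1.0 - predict_positive(neg).mean()  # negatives rejected
```

The paper's point maps onto this sketch directly: which features end up in `keep` largely determines how well the one-class boundary separates unseen negatives.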

  16. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  17. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
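
The blank correction implied by these blank estimates can be sketched by mass balance: the measured material is a mass-weighted mixture of the true sample carbon and the extraneous (modern plus fossil) blank carbon, so the sample's true fraction modern can be recovered by inverting the mixture. The function name and the numbers below are illustrative, not the paper's notation or data.

```python
def blank_corrected_fm(fm_meas, m_meas, fm_blank, m_blank):
    """Invert fm_meas*m_meas = fm_true*(m_meas - m_blank) + fm_blank*m_blank."""
    m_true = m_meas - m_blank
    return (fm_meas * m_meas - fm_blank * m_blank) / m_true

# e.g. a 30 ug C measurement with a 1.5 ug C blank of fraction modern 0.5:
fm = blank_corrected_fm(fm_meas=0.80, m_meas=30.0, fm_blank=0.5, m_blank=1.5)
```

Because the blank mass is fixed while the sample mass varies, the correction (and its uncertainty) grows rapidly for the small samples the abstract highlights.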

  18. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy

    PubMed Central

    Akintola, Abimbola A.; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C.; van Heemst, Diana

    2016-01-01

    Background: The Equivital (EQ02) is a multi-parameter telemetric device offering both real-time and/or retrospective, synchronized monitoring of ECG, HR, and HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore, simultaneously for 24 h, the Holter and EQ02 monitors. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93–56.45%) and within (range 0.75–9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were respectively 0.724, 0.955, and 0.997 for datasets containing all data and data with < 50 or < 20% artifacts respectively. For datasets containing respectively all data, data with < 50, or < 20% artifacts, bias estimated by Bland-Altman analysis was −2.8, −1.0, and −0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, Pearson correlation was 0.786 for HRV measured as standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed
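
Two of the agreement metrics above are easy to make concrete: Bland-Altman bias (the mean difference between paired device readings, with 1.96-SD limits of agreement) and MARD (the mean absolute relative difference of the test device against the reference). The heart-rate values below are invented for illustration, not the study's data.

```python
import numpy as np

holter = np.array([62.0, 71.0, 80.0, 95.0, 110.0])   # reference HR (bpm)
eq02 = np.array([60.0, 70.0, 82.0, 94.0, 108.0])     # test-device HR (bpm)

diff = eq02 - holter
bias = float(np.mean(diff))                          # Bland-Altman bias
loa_half_width = 1.96 * float(np.std(diff, ddof=1))  # limits-of-agreement half-width
mard = 100.0 * float(np.mean(np.abs(diff) / holter)) # MARD (%)
```

A negative bias, as reported in the abstract, means the test device reads lower than the Holter on average; MARD additionally penalizes scatter in either direction.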

  19. On selecting a prior for the precision parameter of Dirichlet process mixture models

    USGS Publications Warehouse

    Dorazio, R.M.

    2009-01-01

    In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
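
The sensitivity described above comes from the fact that the precision parameter (written alpha here) controls the prior expected number of clusters among n observations, via the standard Dirichlet-process identity E[K] = sum over i = 1..n of alpha / (alpha + i - 1). A short sketch, with n chosen arbitrarily:

```python
def expected_clusters(alpha, n):
    """Prior expected number of clusters under a DP with precision alpha."""
    return sum(alpha / (alpha + i) for i in range(n))

# For n = 50 observations, a small alpha favors few clusters a priori,
# a larger alpha favors many; a prior on alpha implies a prior on K.
low = expected_clusters(0.5, 50)
high = expected_clusters(5.0, 50)
```

Because this mapping is so steep in alpha, a seemingly vague prior on alpha can be highly informative about the level of clustering, which is the motivation for constructing the prior carefully.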

  20. Selectivity of conditioned fear of touch is modulated by somatosensory precision.

    PubMed

    Harvie, Daniel S; Meulders, Ann; Reid, Emily; Camfferman, Danny; Brinkworth, Russell S A; Moseley, G Lorimer

    2016-06-01

    Learning to initiate defenses in response to specific signals of danger is adaptive. Some chronic pain conditions, however, are characterized by widespread anxiety, avoidance, and pain consistent with a loss of defensive response specificity. Response specificity depends on ability to discriminate between safe and threatening stimuli; therefore, specificity might depend on sensory precision. This would help explain the high prevalence of chronic pain in body areas of low tactile acuity, such as the lower back, and clarify why improving sensory precision may reduce chronic pain. We compared the acquisition and generalization of fear of pain-associated vibrotactile stimuli delivered to either the hand (high tactile acuity) or the back (low tactile acuity). During acquisition, tactile stimulation at one location (CS+) predicted the noxious electrocutaneous stimulation (US), while tactile stimulation at another location (CS-) did not. Responses to three stimuli with decreasing spatial proximity to the CS+ (generalizing stimuli; GS1-3) were tested. Differential learning and generalization were compared between groups. The main outcome of fear-potentiated startle responses showed differential learning only in the hand group. Self-reported fear and expectancy confirmed differential learning and limited generalization in the hand group, and suggested undifferentiated fear and expectancy in the back group. Differences in generalization could not be inferred from the startle data. Specificity of fear responses appears to be affected by somatosensory precision. This has implications for our understanding of the role of sensory imprecision in the development of chronic pain. PMID:26950514

  1. Precision Metabolic Engineering: the Design of Responsive, Selective, and Controllable Metabolic Systems

    PubMed Central

    McNerney, Monica P.; Watstein, Daniel M.; Styczynski, Mark P.

    2015-01-01

    Metabolic engineering is generally focused on static optimization of cells to maximize production of a desired product, though recently dynamic metabolic engineering has explored how metabolic programs can be varied over time to improve titer. However, these are not the only types of applications where metabolic engineering could make a significant impact. Here, we discuss a new conceptual framework, termed “precision metabolic engineering,” involving the design and engineering of systems that make different products in response to different signals. Rather than focusing on maximizing titer, these types of applications typically have three hallmarks: sensing signals that determine the desired metabolic target, completely directing metabolic flux in response to those signals, and producing sharp responses at specific signal thresholds. In this review, we will first discuss and provide examples of precision metabolic engineering. We will then discuss each of these hallmarks and identify which existing metabolic engineering methods can be applied to accomplish those tasks, as well as some of their shortcomings. Ultimately, precise control of metabolic systems has the potential to enable a host of new metabolic engineering and synthetic biology applications for any problem where flexibility of response to an external signal could be useful. PMID:26189665

  3. The effect of tray selection on the accuracy of elastomeric impression materials.

    PubMed

    Gordon, G E; Johnson, G H; Drennon, D G

    1990-01-01

    This study evaluated the accuracy of reproduction of stone casts made from impressions using different tray and impression materials. The tray materials used were an acrylic resin, a thermoplastic, and a plastic. The impression materials used were an addition silicone, a polyether, and a polysulfide. Impressions were made of a stainless steel master die that simulated crown preparations for a fixed partial denture and an acrylic resin model with cross-arch and anteroposterior landmarks in stainless steel that typify clinical intra-arch distances. Impressions of the fixed partial denture simulation were made with all three impression materials and all three tray types. Impressions of the cross-arch and anteroposterior landmarks were made by using all three tray types with only the addition reaction silicone impression material. Impressions were poured at 1 hour with a type IV dental stone. Data were analyzed by using ANOVA with a sample size of five. Results indicated that custom-made trays of acrylic resin and the thermoplastic material performed similarly regarding die accuracy and produced clinically acceptable casts. The stock plastic tray consistently produced casts with greater dimensional change than the two custom trays. PMID:2404101
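
    The study's statistical comparison (ANOVA with a sample size of five per group) can be sketched as follows; the dimensional-change values are hypothetical, invented for illustration, not the study's data.

```python
from scipy import stats

# hypothetical percent dimensional change of dies, five casts per tray type
acrylic_resin = [0.08, 0.10, 0.07, 0.09, 0.11]
thermoplastic = [0.09, 0.11, 0.08, 0.10, 0.12]
stock_plastic = [0.21, 0.24, 0.19, 0.23, 0.25]

# one-way ANOVA across the three tray types
f_stat, p_value = stats.f_oneway(acrylic_resin, thermoplastic, stock_plastic)
```

With the illustrative values above, the stock plastic group differs clearly from the two custom-tray groups, so the ANOVA rejects equality of means.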

  4. Impact of marker ascertainment bias on genomic selection accuracy and estimates of genetic diversity

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genome-wide molecular markers are readily being applied to evaluate genetic diversity in germplasm collections and for making genomic selections in breeding programs. To accurately predict phenotypes and assay genetic diversity, molecular markers should assay a representative sample of the polymorp...

  5. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Sequence-based genotyping methods can generate de novo large numbers of markers. However, without a reference genome, these markers are unordered and typically have a lar...

  6. Imputation of unordered markers and the impact on genomic selection accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic selection, a breeding method that promises to accelerate rates of genetic gain, requires dense, genome-wide marker data. Genotyping-by-sequencing can generate a large number of de novo markers. However, without a reference genome, these markers are unordered and typically have a large propo...

  7. Bayesian approach increases accuracy when selecting cowpea genotypes with high adaptability and phenotypic stability.

    PubMed

    Barroso, L M A; Teodoro, P E; Nascimento, M; Torres, F E; Dos Santos, A; Corrêa, A M; Sagrilo, E; Corrêa, C C G; Silva, F A; Ceccon, G

    2016-01-01

    This study aimed to verify that a Bayesian approach could be used for the selection of upright cowpea genotypes with high adaptability and phenotypic stability, and the study also evaluated the efficiency of using informative and minimally informative a priori distributions. Six trials were conducted in randomized blocks, and the grain yield of 17 upright cowpea genotypes was assessed. To represent the minimally informative a priori distributions, a probability distribution with high variance was used, and a meta-analysis concept was adopted to represent the informative a priori distributions. Bayes factors were used to conduct comparisons between the a priori distributions. The Bayesian approach was effective for selection of upright cowpea genotypes with high adaptability and phenotypic stability using the Eberhart and Russell method. Bayes factors indicated that the use of informative a priori distributions provided more accurate results than minimally informative a priori distributions. PMID:26985961
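
    The comparison of informative and minimally informative a priori distributions via Bayes factors can be sketched for a single genotype under a normal model with known error variance; the yields, prior settings, and reduction to the sample mean are illustrative assumptions, not the study's actual analysis.

```python
import math
from statistics import mean

def log_marginal(ybar, n, sigma, prior_mean, prior_sd):
    """Log marginal density of the sample mean under a normal likelihood
    with known error SD `sigma` and a normal prior on the genotype mean."""
    s2 = sigma ** 2 / n + prior_sd ** 2
    return -0.5 * math.log(2 * math.pi * s2) - (ybar - prior_mean) ** 2 / (2 * s2)

# hypothetical plot-mean grain yields (t/ha) for one genotype
y = [2.1, 1.9, 2.3, 2.0, 1.8]
ybar, n, sigma = mean(y), len(y), 0.5  # sigma: assumed known error SD

informative = log_marginal(ybar, n, sigma, prior_mean=2.0, prior_sd=0.2)
minimally = log_marginal(ybar, n, sigma, prior_mean=0.0, prior_sd=10.0)
bayes_factor = math.exp(informative - minimally)  # > 1 favours the informative prior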

  9. Accuracy of travel time distribution (TTD) models as affected by TTD complexity, observation errors, and model and tracer selection

    USGS Publications Warehouse

    Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.

    2014-01-01

    Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
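
    A minimal sketch of calibrating an analytical TTD to tracer data: assume an exponential TTD, a hypothetical linearly rising tracer input history, and a single observed well concentration, then grid-search the mean age that best reproduces the observation. All numbers are invented; the study's models and tracers are considerably more elaborate.

```python
import numpy as np

# hypothetical tracer input history: concentration in recharge as a function
# of groundwater age (tracer introduced roughly 59 years before sampling)
ages = np.arange(0.0, 400.0)                     # years
c_of_age = np.maximum(5.9 - 0.1 * ages, 0.0)

def predicted_conc(mean_age):
    """Well concentration under an exponential TTD with the given mean age."""
    g = np.exp(-ages / mean_age)
    g /= g.sum()                                 # discretized, normalized TTD
    return float(np.sum(c_of_age * g))

obs = 2.0                                        # hypothetical observed concentration
grid = np.linspace(1.0, 100.0, 500)
best_T = grid[int(np.argmin([abs(predicted_conc(T) - obs) for T in grid]))]
```

Because the predicted concentration decreases monotonically with mean age for a rising input, the grid search recovers an interior best-fit mean age.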

  10. Precision improvement for the analysis of flavonoids in selected Thai plants by capillary zone electrophoresis.

    PubMed

    Suntornsuk, Leena; Anurukvorakun, Oraphan

    2005-02-01

    A capillary zone electrophoresis (CZE) method for the analyses of kaempferol in Centella asiatica and Rosa hybrids and rutin in Chromolaena odorata was developed. The optimization was performed on analyses of flavonoids (e.g., rutin, kaempferol, quercetin, myricetin, and apigenin) and organic carboxylic acids (e.g., ethacrynic acid and xanthene-9-carboxylic acid) by investigation of the effects of types and amounts of organic modifiers, background electrolyte concentrations, temperature, and voltage. Baseline separation (R(s) = 2.83) of the compounds was achieved within 10 min in 20 mM NaH2PO4 - Na2HPO4 (pH 8.0) containing 10% v/v ACN and 6% v/v MeOH using a voltage of 25 kV, a temperature of 30 degrees C, and a detection wavelength set at 220 nm. The application of the corrected migration time (t(c)), using ethacrynic acid as the single marker, was efficient to improve the precision of flavonoid identification (% relative standard deviation (RSD) = 0.65%). The method linearity was excellent (r2 > 0.999) over 50-150 microg/mL. Precision (%RSD < 1.66%) and recoveries were good (> 96% and %RSDs < 1.70%) with detection and quantitation limits of 2.23 and 7.14 microg/mL, respectively. Kaempferol in C. asiatica and R. hybrids was 0.014 g/100 g (%RSD = 0.59%) and 0.044 g/100 g (%RSD = 1.04%), respectively, and rutin in C. odorata was 0.088 g/100 g (%RSD = 0.06%).
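
    The corrected-migration-time idea (a single internal marker compensating run-to-run drift) can be sketched as a simple rescaling; the replicate times below are hypothetical, and this relative-time correction is one common variant rather than necessarily the paper's exact formula.

```python
import statistics

# hypothetical raw migration times (min) for a flavonoid peak and the
# ethacrynic acid marker over six replicate runs with correlated drift
t_analyte = [8.41, 8.55, 8.30, 8.62, 8.47, 8.36]
t_marker = [6.02, 6.12, 5.94, 6.17, 6.06, 5.98]

def rsd(values):
    """Percent relative standard deviation."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

t_ref = statistics.mean(t_marker)  # nominal marker migration time
# rescale each run so the marker lands on its nominal time
t_corrected = [ta * t_ref / tm for ta, tm in zip(t_analyte, t_marker)]
```

When analyte and marker drift proportionally, the corrected times have a far smaller %RSD than the raw times, which is the point of the single-marker correction.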

  11. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many
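
    The accuracy (mean offset) and precision (spread of offsets) reported above can be computed from targeting-error vectors; this sketch uses synthetic 2-D errors and a per-axis Shapiro-Wilk test as one way to assess normality. All values are simulated, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic 2-D targeting errors (mm): achieved minus intended focus location
errors = rng.normal(loc=[0.3, 0.1], scale=0.2, size=(40, 2))

radial = np.linalg.norm(errors, axis=1)   # offset magnitude per sonication
accuracy = float(radial.mean())           # mean offset from the target
precision = float(radial.std(ddof=1))     # spread of the offsets

# per-axis Shapiro-Wilk normality check on the error components
p_lr = stats.shapiro(errors[:, 0]).pvalue
p_si = stats.shapiro(errors[:, 1]).pvalue
```

Separating radial accuracy from per-axis distributions also makes a systematic left/right versus superior/inferior difference, as reported above, easy to test.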

  12. Site-selective substitutional doping with atomic precision on stepped Al (111) surface by single-atom manipulation.

    PubMed

    Chen, Chang; Zhang, Jinhu; Dong, Guofeng; Shao, Hezhu; Ning, Bo-Yuan; Zhao, Li; Ning, Xi-Jing; Zhuang, Jun

    2014-01-01

    In fabrication of nano- and quantum devices, it is sometimes critical to position individual dopants at certain sites precisely to obtain the specific or enhanced functionalities. With first-principles simulations, we propose a method for substitutional doping of individual atom at a certain position on a stepped metal surface by single-atom manipulation. A selected atom at the step of Al (111) surface could be extracted vertically with an Al trimer-apex tip, and then the dopant atom will be positioned to this site. The details of the entire process including potential energy curves are given, which suggests the reliability of the proposed single-atom doping method.

  13. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
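
    A Monte Carlo power analysis of the kind described can be sketched by simulating noisy annual abundance estimates and testing for a log-linear decline; the starting abundance, observation CV, and trend test are hypothetical choices, not the study's bull trout data or methods.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def decline_power(n_years, total_decline, cv, n_sims=2000, alpha=0.05):
    """Monte Carlo power to detect a population decline from annual
    abundance estimates with lognormal sampling error (SD of log ~ cv)."""
    years = np.arange(n_years, dtype=float)
    true_n = 1000.0 * (1.0 - total_decline) ** (years / (n_years - 1))
    t_crit = stats.t.ppf(alpha, df=n_years - 2)   # one-sided test for decline
    hits = 0
    for _ in range(n_sims):
        est = true_n * rng.lognormal(0.0, cv, n_years)
        slope, intercept = np.polyfit(years, np.log(est), 1)
        resid = np.log(est) - (slope * years + intercept)
        se = np.sqrt(resid.var(ddof=2) / ((years - years.mean()) ** 2).sum())
        if slope / se < t_crit:
            hits += 1
    return hits / n_sims

p5 = decline_power(5, 0.25, 0.3)    # 25% decline over 5 years
p15 = decline_power(15, 0.25, 0.3)  # 25% decline over 15 years
```

Even this toy version reproduces the qualitative finding: power for a 25% decline is low over 5 years and improves, but remains modest, over 15 years.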

  14. Precision autophagy: Will the next wave of selective autophagy markers and specific autophagy inhibitors feed clinical pipelines?

    PubMed

    Lebovitz, Chandra B; DeVorkin, Lindsay; Bosc, Damien; Rothe, Katharina; Singh, Jagbir; Bally, Marcel; Jiang, Xiaoyan; Young, Robert N; Lum, Julian J; Gorski, Sharon M

    2015-01-01

    Research presented at the Vancouver Autophagy Symposium (VAS) 2014 suggests that autophagy's influence on health and disease depends on tight regulation and precision targeting of substrates. Discussions recognized a pressing need for robust biomarkers that accurately assess the clinical utility of modulating autophagy in disease contexts. Biomarker discovery could flow from investigations of context-dependent triggers, sensors, and adaptors that tailor the autophagy machinery to achieve target specificity. In his keynote address, Dr. Vojo Deretic (University of New Mexico) described the discovery of a cargo receptor family that utilizes peptide motif-based cargo recognition, a mechanism that may be more precise than generic substrate tagging. The keynote by Dr. Alec Kimmelman (Harvard Medical School) emphasized that unbiased screens for novel selective autophagy factors may accelerate the development of autophagy-based therapies. Using a quantitative proteomics screen for de novo identification of autophagosome substrates in pancreatic cancer, Kimmelman's group discovered a new type of selective autophagy that regulates bioavailable iron. Additional presentations revealed novel autophagy regulators and receptors in metabolic diseases, proteinopathies, and cancer, and outlined the development of specific autophagy inhibitors and treatment regimens that combine autophagy modulation with anticancer therapies. VAS 2014 stimulated interdisciplinary discussions focused on the development of biomarkers, drugs, and preclinical models to facilitate clinical translation of key autophagy discoveries.

  15. Direction-Selective Circuits Shape Noise to Ensure a Precise Population Code.

    PubMed

    Zylberberg, Joel; Cafaro, Jon; Turner, Maxwell H; Shea-Brown, Eric; Rieke, Fred

    2016-01-20

    Neural responses are noisy, and circuit structure can correlate this noise across neurons. Theoretical studies show that noise correlations can have diverse effects on population coding, but these studies rarely explore stimulus dependence of noise correlations. Here, we show that noise correlations in responses of ON-OFF direction-selective retinal ganglion cells are strongly stimulus dependent, and we uncover the circuit mechanisms producing this stimulus dependence. A population model based on these mechanistic studies shows that stimulus-dependent noise correlations improve the encoding of motion direction 2-fold compared to independent noise. This work demonstrates a mechanism by which a neural circuit effectively shapes its signal and noise in concert, minimizing corruption of signal by noise. Finally, we generalize our findings beyond direction coding in the retina and show that stimulus-dependent correlations will generally enhance information coding in populations of diversely tuned neurons. PMID:26796691
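
    The benefit of correlations for oppositely tuned cells can be illustrated with linear Fisher information, I = f'^T C^{-1} f'; the tuning slopes and correlation value below are hypothetical, chosen to echo the reported roughly 2-fold improvement.

```python
import numpy as np

# hypothetical ON-OFF pair with opposite tuning slopes at the stimulus
fprime = np.array([1.0, -1.0])
rho = 0.6                          # hypothetical positive noise correlation
cov_corr = np.array([[1.0, rho], [rho, 1.0]])
cov_ind = np.eye(2)                # same variances, correlations removed

def linear_fisher(fp, cov):
    """I = f'^T C^{-1} f', the precision of a locally optimal linear decoder."""
    return float(fp @ np.linalg.solve(cov, fp))

I_corr = linear_fisher(fprime, cov_corr)   # analytically 2 / (1 - rho)
I_ind = linear_fisher(fprime, cov_ind)     # analytically 2
```

For this pair, positive correlations are shared noise that the opposite tuning cancels, so I_corr = 2/(1 - rho) exceeds the independent-noise value.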

  17. Positive fluorescent selection permits precise, rapid, and in-depth overexpression analysis in plant protoplasts.

    PubMed

    Bargmann, Bastiaan O R; Birnbaum, Kenneth D

    2009-03-01

    Transient genetic modification of plant protoplasts is a straightforward and rapid technique for the study of numerous aspects of plant biology. Recent studies in metazoan systems have utilized cell-based assays to interrogate signal transduction pathways using high-throughput methods. Plant biologists could benefit from new tools that expand the use of cell culture for large-scale analysis of gene function. We have developed a system that employs fluorescent positive selection in combination with flow cytometric analysis and fluorescence-activated cell sorting to isolate responses in the transformed protoplasts exclusively. The system overcomes the drawback that transfected protoplast suspensions are often a heterogeneous mix of cells that have and have not been successfully transformed. This Gateway-compatible system enables high-throughput screening of genetic circuitry using overexpression. The incorporation of a red fluorescent protein selection marker enables combined utilization with widely available green fluorescent protein (GFP) tools. For instance, such a dual labeling approach allows cytometric analysis of GFP reporter gene activation expressly in the transformed cells or fluorescence-activated cell sorting-mediated isolation and downstream examination of overexpression effects in a specific GFP-marked cell population. Here, as an example, novel uses of this system are applied to the study of auxin signaling, exploiting the red fluorescent protein/GFP dual labeling capability. In response to manipulation of the auxin response network through overexpression of dominant negative auxin signaling components, we quantify effects on auxin-responsive DR5::GFP reporter gene activation as well as profile genome-wide transcriptional changes specifically in cells expressing a root epidermal marker.

  18. Improved precision and accuracy for high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometric exact mass measurement of small molecules from the simultaneous and controlled introduction of internal calibrants via a second electrospray nebuliser.

    PubMed

    Herniman, Julie M; Bristow, Tony W T; O'Connor, Gavin; Jarvis, Jackie; Langley, G John

    2004-01-01

    The use of a second electrospray nebuliser has proved to be highly successful for exact mass measurement during high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometry (HPLC/FTICRMS). Much improved accuracy and precision of mass measurement were afforded by the introduction of the internal calibration solution, thus overcoming space charge issues due to the lack of control over relative ion abundances of the species eluting from the HPLC column. Further, the suppression of ionisation observed with a T-piece method is avoided; this simple system has significant benefits over more elaborate approaches, and its data compare very favourably with theirs. The technique is robust, flexible and transferable and can be used in conjunction with HPLC, infusion or flow injection analysis (FIA) to provide constant internal calibration signals to allow routine, accurate and precise mass measurements to be recorded.
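
    Internal calibration of this kind can be sketched as a single-point lock-mass correction, which cancels a multiplicative calibration drift exactly; the calibrant and analyte masses below are illustrative (leucine enkephalin is a common lock mass, not necessarily the calibrant used here).

```python
def lock_mass_correct(measured_mz, lock_measured, lock_exact):
    """Single-point internal recalibration: rescale the m/z axis so the
    co-sprayed calibrant falls at its exact mass."""
    return measured_mz * (lock_exact / lock_measured)

lock_exact = 556.2771              # e.g. leucine enkephalin [M+H]+ (illustrative)
drift = 1 + 3e-6                   # a hypothetical 3 ppm calibration drift
analyte_true = 285.1234            # hypothetical analyte exact mass

corrected = lock_mass_correct(analyte_true * drift, lock_exact * drift, lock_exact)
ppm_error = (corrected - analyte_true) / analyte_true * 1e6
```

For a purely multiplicative drift the residual error is at the floating-point level; real calibration functions are more complex, which is why controlled, continuous introduction of the calibrant matters.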

  19. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988--1990; Water-resources investigations report 93-4025

    SciTech Connect

    Boucher, M.S.

    1994-05-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the US Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or they were taken continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988--90.

  20. The Role of Some Selected Psychological and Personality Traits of the Rater in the Accuracy of Self- and Peer-Assessment

    ERIC Educational Resources Information Center

    AlFallay, Ibrahim

    2004-01-01

    This paper investigates the role of some selected psychological and personality traits of learners of English as a foreign language in the accuracy of self- and peer-assessments. The selected traits were motivation types, self-esteem, anxiety, motivational intensity, and achievement. 78 students of English as a foreign language participated in…

  1. Accuracy and precision of reconstruction of complex refractive index in near-field single-distance propagation-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Gureyev, Timur; Mohammadi, Sara; Nesterets, Yakov; Dullin, Christian; Tromba, Giuliana

    2013-10-01

    We investigate the quantitative accuracy and noise sensitivity of reconstruction of the 3D distribution of complex refractive index, n(r)=1-δ(r)+iβ(r), in samples containing materials with different refractive indices using propagation-based phase-contrast computed tomography (PB-CT). Our present study is limited to the case of parallel-beam geometry with monochromatic synchrotron radiation, but can be readily extended to cone-beam CT and partially coherent polychromatic X-rays at least in the case of weakly absorbing samples. We demonstrate that, except for regions near the interfaces between distinct materials, the distribution of imaginary part of the refractive index, β(r), can be accurately reconstructed from a single projection image per view angle using phase retrieval based on the so-called homogeneous version of the Transport of Intensity equation (TIE-Hom) in combination with conventional CT reconstruction. In contrast, the accuracy of reconstruction of δ(r) depends strongly on the choice of the "regularization" parameter in TIE-Hom. We demonstrate by means of an instructive example that for some multi-material samples, a direct application of the TIE-Hom method in PB-CT produces qualitatively incorrect results for δ(r), which can be rectified either by collecting additional projection images at each view angle, or by utilising suitable a priori information about the sample. As a separate observation, we also show that, in agreement with previous reports, it is possible to significantly improve signal-to-noise ratio by increasing the sample-to-detector distance in combination with TIE-Hom phase retrieval in PB-CT compared to conventional ("contact") CT, with the maximum achievable gain of the order of 0.3 δ/β. This can lead to improved image quality and/or reduction of the X-ray dose delivered to patients in medical imaging.
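
    The TIE-Hom (Paganin-type) retrieval named above amounts to a low-pass filter in Fourier space followed by a logarithm; a minimal sketch for a homogeneous object follows, with all material parameters and geometry hypothetical. The sanity check uses a uniform slab, for which the filter is the identity at zero frequency.

```python
import numpy as np

def tie_hom_retrieve(intensity, pixel, dist, delta, beta, wavelength):
    """Single-distance TIE-Hom retrieval of projected thickness for a
    homogeneous object; all lengths in metres, intensity normalized to I/I0."""
    mu = 4.0 * np.pi * beta / wavelength          # linear attenuation coefficient
    kx = 2.0 * np.pi * np.fft.fftfreq(intensity.shape[1], d=pixel)
    ky = 2.0 * np.pi * np.fft.fftfreq(intensity.shape[0], d=pixel)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    lowpass = 1.0 / (1.0 + dist * delta / mu * k2)
    smoothed = np.fft.ifft2(np.fft.fft2(intensity) * lowpass).real
    return -np.log(np.clip(smoothed, 1e-12, None)) / mu

# sanity check on a uniform 1 mm slab of a hypothetical material
mu = 4.0 * np.pi * 1e-9 / 1e-10                   # beta = 1e-9, wavelength = 0.1 nm
flat = np.full((64, 64), np.exp(-mu * 1e-3))
thickness = tie_hom_retrieve(flat, 1e-5, 1.0, 1e-7, 1e-9, 1e-10)
```

The strength of the low-pass filter is set by the ratio dist·δ/μ, which is the "regularization" choice the abstract identifies as critical for multi-material samples.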

  2. The accuracy and precision of a micro computer tomography volumetric measurement technique for the analysis of in-vitro tested total disc replacements.

    PubMed

    Vicars, R; Fisher, J; Hall, R M

    2009-04-01

    Total disc replacements (TDRs) in the spine have been clinically successful in the short term, but there are concerns over long-term failure due to wear, as seen in other joint replacements. Simulators have been used to investigate the wear of TDRs, but only gravimetric measurements have been used to assess material loss. Micro computer tomography (microCT) has been used for volumetric measurement of explanted components but has yet to be used for in-vitro studies, where the wear is typically less than 20 mm3 per 10^6 cycles. The aim of this study was to compare microCT volume measurements with gravimetric measurements and to assess whether microCT can quantify wear volumes of in-vitro tested TDRs. microCT measurements of TDR polyethylene cores were undertaken and the results compared with gravimetric assessments. The effects of repositioning, integration time, and scan resolution were investigated. The best volume measurement resolution was found to be ±3 mm3, at least three orders of magnitude greater than those determined for gravimetric measurements. In conclusion, the microCT measurement technique is suitable for quantifying in-vitro TDR polyethylene wear volumes and can provide qualitative data (e.g. wear location), and also further quantitative data (e.g. height loss), assisting comparisons with in-vivo and ex-vivo data. It is best used alongside gravimetric measurements to maintain the high level of precision that these measurements provide.
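
    The voxel-counting step behind a microCT wear-volume estimate is straightforward: segment the component, count voxels, multiply by the voxel volume, and difference the pre- and post-test scans. The voxel size and masks below are hypothetical placeholders for real segmented scans.

```python
import numpy as np

VOXEL_MM = 0.034  # hypothetical isotropic microCT voxel size, mm

def component_volume(mask, voxel_mm=VOXEL_MM):
    """Volume of a segmented component: voxel count times voxel volume."""
    return float(mask.sum()) * voxel_mm ** 3

pre_test = np.ones((50, 50, 50), dtype=bool)   # core before wear testing
post_test = pre_test.copy()
post_test[:, :, :2] = False                    # hypothetical worn-away layer
wear_mm3 = component_volume(pre_test) - component_volume(post_test)
```

Because the wear volume is a small difference of two large volumes, segmentation and resolution errors dominate, which is why the abstract pairs microCT with gravimetric measurements.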

  3. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  4. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  5. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station-to-station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  6. Precision measurements of photoabsorption cross sections of Ar, Kr, Xe, and selected molecules at 58.4, 73.6, and 74.4 nm

    NASA Technical Reports Server (NTRS)

    Samson, James A. R.; Yin, Lifeng

    1989-01-01

    Absolute absorption cross sections have been measured for the rare gases at 58.43, 73.59, and 74.37 nm with an accuracy of ±0.8 percent. For the molecules H2, N2, O2, CO, N2O, CO2, and CH4, precision measurements were made at 58.43 nm with an accuracy of ±0.8 percent. Molecular absorption cross sections are also reported at 73.59 and 74.37 nm. However, in the vicinity of these wavelengths most molecules exhibit considerable structure, and cross sections measured at these wavelengths may depend on the widths and the amounts of self-reversal of these resonance lines. A detailed discussion is given of the systematic errors encountered with the double-ion chamber used in the cross-sectional measurements. Details are also given of precision pressure measurements.
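
The double-ion-chamber technique mentioned above determines absolute cross sections from the ratio of the ion currents collected on two successive plates of equal length, so the incident photon flux cancels. A minimal sketch of that relation (the function name and the ideal-gas density calculation are illustrative, not taken from the paper):

```python
import math

def cross_section(i1, i2, pressure_pa, temp_k, plate_length_m):
    """Absorption cross section (m^2) from a double ion chamber.

    i1, i2 are the ion currents collected on two successive plates of
    equal length; their ratio is independent of the incident photon
    flux, so only relative current measurements are needed.
    """
    k_b = 1.380649e-23                       # Boltzmann constant, J/K
    n = pressure_pa / (k_b * temp_k)         # ideal-gas number density, m^-3
    return math.log(i1 / i2) / (n * plate_length_m)
```

Because only the current ratio i1/i2 enters, the accuracy budget is dominated by the pressure and path-length measurements, which is why the abstract emphasizes precision pressure metrology.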

  7. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentrations estimates while retaining the greatest assay sensitivity.

  8. Mutations in a conserved region of RNA polymerase II influence the accuracy of mRNA start site selection.

    PubMed Central

    Hekmatpanah, D S; Young, R A

    1991-01-01

    A sensitive phenotypic assay has been used to identify mutations affecting transcription initiation in the genes encoding the two large subunits of Saccharomyces cerevisiae RNA polymerase II (RPB1 and RPB2). The rpb1 and rpb2 mutations alter the ratio of transcripts initiated at two adjacent start sites of a delta-insertion promoter. Of a large number of rpb1 and rpb2 mutations screened, only a few affect transcription initiation patterns at delta-insertion promoters, and these mutations are in close proximity to each other within both RPB1 and RPB2. The two rpb1 mutations alter amino acid residues within homology block G, a region conserved in the large subunits of all RNA polymerases. The three strong rpb2 mutations alter adjacent amino acids. At a wild-type promoter, the rpb1 mutations affect the accuracy of mRNA start site selection by producing a small but detectable increase in the 5'-end heterogeneity of transcripts. These RNA polymerase II mutations implicate specific portions of the enzyme in aspects of transcription initiation. PMID:1922077

  9. The relative accuracy of standard estimators for macrofaunal abundance and species richness derived from selected intertidal transect designs used to sample exposed sandy beaches

    NASA Astrophysics Data System (ADS)

    Schoeman, D. S.; Wheeler, M.; Wait, M.

    2003-10-01

    In order to ensure that patterns detected in field samples reflect real ecological processes rather than methodological idiosyncrasies, it is important that researchers attempt to understand the consequences of the sampling and analytical designs that they select. This is especially true for sandy beach ecology, which has lagged somewhat behind ecological studies of other intertidal habitats. This paper investigates the performance of routine estimators of macrofaunal abundance and species richness, which are variables that have been widely used to infer predictable patterns of biodiversity across a gradient of beach types. To do this, a total of six shore-normal strip transects were sampled on three exposed, oceanic sandy beaches in the Eastern Cape, South Africa. These transects comprised contiguous quadrats arranged linearly between the spring high and low water marks. Using simple Monte Carlo simulation techniques, data collected from the strip transects were used to assess the accuracy of parameter estimates from different sampling strategies relative to their true values (macrofaunal abundance ranged 595-1369 individuals transect⁻¹; species richness ranged 12-21 species transect⁻¹). Results indicated that estimates from the various transect methods performed in a similar manner both within beaches and among beaches. Estimates for macrofaunal abundance tended to be negatively biased, especially at levels of sampling effort most commonly reported in the literature, and accuracy decreased with decreasing sampling effort. By the same token, estimates for species richness were always negatively biased and were also characterised by low precision. Furthermore, triplicate transects comprising a sampled area in the region of 4 m² (as has been previously recommended) are expected to miss more than 30% of the species that occur on the transect. Surprisingly, for both macrofaunal abundance and species richness, estimates based on data from transects sampling quadrats
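
The Monte Carlo assessment described above can be sketched as repeated subsampling of a fully censused strip transect, with each estimate compared against the known transect total (the data structure and function below are hypothetical, not the authors' code):

```python
import random

def simulate_bias(quadrats, n_sampled, n_trials=1000, seed=1):
    """Estimate the bias of abundance and species-richness estimators
    when only n_sampled of the transect's quadrats are examined.

    quadrats: one dict per quadrat mapping species -> count; the full
    strip transect is taken as the 'true' value, as in the study.
    """
    rng = random.Random(seed)
    true_abund = sum(sum(q.values()) for q in quadrats)
    true_rich = len({sp for q in quadrats for sp in q})
    scale = len(quadrats) / n_sampled   # scale counts up to the whole transect
    abund_est, rich_est = [], []
    for _ in range(n_trials):
        sample = rng.sample(quadrats, n_sampled)
        abund_est.append(scale * sum(sum(q.values()) for q in sample))
        rich_est.append(len({sp for q in sample for sp in q}))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(abund_est) - true_abund, mean(rich_est) - true_rich
```

Richness estimated this way is necessarily negatively biased (a subsample can never contain more species than the whole transect), which matches the pattern the abstract reports.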

  10. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  11. A quantitative method for evaluating numerical simulation accuracy of time-transient Lamb wave propagation with its applications to selecting appropriate element size and time step.

    PubMed

    Wan, Xiang; Xu, Guanghua; Zhang, Qing; Tse, Peter W; Tan, Haihui

    2016-01-01

    Lamb wave technique has been widely used in non-destructive evaluation (NDE) and structural health monitoring (SHM). However, due to its multi-mode characteristics and dispersive nature, Lamb wave propagation behavior is much more complex than that of bulk waves. Numerous numerical simulations of Lamb wave propagation have been conducted to study its physical principles. However, few quantitative studies on evaluating the accuracy of these numerical simulations have been reported. In this paper, a method based on cross-correlation analysis for quantitatively evaluating the simulation accuracy of time-transient Lamb wave propagation is proposed. Two kinds of error, affecting the position and shape accuracies respectively, are first identified. Consequently, two quantitative indices, i.e., the GVE (group velocity error) and MACCC (maximum absolute value of cross correlation coefficient), derived from cross-correlation analysis between a simulated signal and a reference waveform, are proposed to assess the position and shape errors of the simulated signal. In this way, the simulation accuracy in position and shape is quantitatively evaluated. In order to apply the proposed method to select an appropriate element size and time step, a specialized 2D-FEM program combined with the proposed method is developed. Then, the proper element size considering different element types and the proper time step considering different time integration schemes are selected. These results prove that the proposed method is feasible and effective, and can be used as an efficient tool for quantitatively evaluating and verifying the simulation accuracy of time-transient Lamb wave propagation. PMID:26315506
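
A rough sketch of the two indices, assuming a simulated signal and a reference waveform sampled at interval dt and a known reference arrival time t_ref (the exact definitions in the paper may differ; this illustrates the cross-correlation idea only):

```python
import numpy as np

def lamb_sim_indices(sim, ref, dt, t_ref):
    """Quantify simulation accuracy of a Lamb-wave signal.

    Returns (maccc, gve): MACCC is the maximum absolute normalized
    cross-correlation coefficient (shape accuracy; 1.0 means identical
    shape), and GVE is the relative group-velocity error implied by the
    time lag at which the correlation peaks (position accuracy).
    """
    s = (sim - sim.mean()) / (np.std(sim) * len(sim))
    r = (ref - ref.mean()) / np.std(ref)
    cc = np.correlate(s, r, mode="full")       # normalized cross-correlation
    k = int(np.argmax(np.abs(cc)))
    maccc = float(abs(cc[k]))
    lag = (k - (len(ref) - 1)) * dt            # sim arrives `lag` later than ref
    gve = t_ref / (t_ref + lag) - 1.0          # v = d/t over the same distance
    return maccc, gve
```

A MACCC near 1 with a nonzero lag indicates a well-shaped but mispositioned wave packet, which is exactly the separation of shape and position error the paper proposes.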

  12. Acceptability, Precision and Accuracy of 3D Photonic Scanning for Measurement of Body Shape in a Multi-Ethnic Sample of Children Aged 5-11 Years: The SLIC Study

    PubMed Central

    Wells, Jonathan C. K.; Stocks, Janet; Bonner, Rachel; Raywood, Emma; Legg, Sarah; Lee, Simon; Treleaven, Philip; Lum, Sooky

    2015-01-01

    Background Information on body size and shape is used to interpret many aspects of physiology, including nutritional status, cardio-metabolic risk and lung function. Such data have traditionally been obtained through manual anthropometry, which becomes time-consuming when many measurements are required. 3D photonic scanning (3D-PS) of body surface topography represents an alternative digital technique, previously applied successfully in large studies of adults. The acceptability, precision and accuracy of 3D-PS in young children have not been assessed. Methods We attempted to obtain data on girth, width and depth of the chest and waist, and girth of the knee and calf, manually and by 3D-PS in a multi-ethnic sample of 1484 children aged 5–11 years. The rate of 3D-PS success, and reasons for failure, were documented. Precision and accuracy of 3D-PS were assessed relative to manual measurements using the methods of Bland and Altman. Results Manual measurements were successful in all cases. Although 97.4% of children agreed to undergo 3D-PS, successful scans were only obtained in 70.7% of these. Unsuccessful scans were primarily due to body movement, or inability of the software to extract shape outputs. The odds of scan failure, and the underlying reason, differed by age, size and ethnicity. 3D-PS measurements tended to be greater than those obtained manually (p<0.05); however, ranking consistency was high (r²>0.90 for most outcomes). Conclusions 3D-PS is acceptable in children aged ≥5 years, though with current hardware/software, and body movement artefacts, approximately one third of scans may be unsuccessful. The technique had poorer technical success than manual measurements, and had poorer precision when the measurements were viable. Compared to manual measurements, 3D-PS showed modest average biases but acceptable limits of agreement for large surveys, and little evidence that bias varied substantially with size. Most of the issues we identified could be
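
Precision and accuracy relative to manual anthropometry were assessed with the Bland-Altman method, which reduces to the mean difference (bias) and its 95% limits of agreement. A minimal sketch:

```python
import statistics

def bland_altman(manual, digital):
    """Bland-Altman agreement between two measurement methods.

    Returns (bias, lower_loa, upper_loa): the mean difference
    (digital minus manual) and the 95% limits of agreement,
    bias +/- 1.96 standard deviations of the differences.
    """
    diffs = [d - m for m, d in zip(manual, digital)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)                 # sample SD of differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Wide limits of agreement with a small bias correspond to the abstract's finding of modest average biases but poorer precision for 3D-PS.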

  13. Canopy Temperature and Vegetation Indices from High-Throughput Phenotyping Improve Accuracy of Pedigree and Genomic Selection for Grain Yield in Wheat

    PubMed Central

    Rutkoski, Jessica; Poland, Jesse; Mondal, Suchismita; Autrique, Enrique; Pérez, Lorena González; Crossa, José; Reynolds, Matthew; Singh, Ravi

    2016-01-01

    Genomic selection can be applied prior to phenotyping, enabling shorter breeding cycles and greater rates of genetic gain relative to phenotypic selection. Traits measured using high-throughput phenotyping based on proximal or remote sensing could be useful for improving pedigree and genomic prediction model accuracies for traits not yet possible to phenotype directly. We tested if using aerial measurements of canopy temperature, and green and red normalized difference vegetation index as secondary traits in pedigree and genomic best linear unbiased prediction models could increase accuracy for grain yield in wheat, Triticum aestivum L., using 557 lines in five environments. Secondary traits on training and test sets, and grain yield on the training set were modeled as multivariate, and compared to univariate models with grain yield on the training set only. Cross validation accuracies were estimated within and across-environment, with and without replication, and with and without correcting for days to heading. We observed that, within environment, with unreplicated secondary trait data, and without correcting for days to heading, secondary traits increased accuracies for grain yield by 56% in pedigree, and 70% in genomic prediction models, on average. Secondary traits increased accuracy slightly more when replicated, and considerably less when models corrected for days to heading. In across-environment prediction, trends were similar but less consistent. These results show that secondary traits measured in high-throughput could be used in pedigree and genomic prediction to improve accuracy. This approach could improve selection in wheat during early stages if validated in early-generation breeding plots. PMID:27402362

  14. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
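
A single isothermal expansion illustrates the ideal-gas ratio principle the system relies on (a sketch of the general technique, not necessarily the exact KCP procedure):

```python
def unknown_volume(v_ref, p_initial, p_final):
    """Unknown volume from a single isothermal gas expansion.

    Helium at p_initial fills the known reference volume v_ref; it is
    then expanded into the evacuated unknown volume and the combined
    pressure p_final is read. At constant temperature,
    p_initial * v_ref = p_final * (v_ref + v_unknown).
    """
    return v_ref * (p_initial / p_final - 1.0)
```

Because the result depends only on a pressure ratio and the reference volume, overall accuracy is set by the pressure gauges and temperature control, which is why the system operates in a temperature- and computer-controlled environment.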

  15. New multi-station and multi-decadal trend data on precipitable water. Recipe to match FTIR retrievals from NDACC long-time records to radio sondes within 1 mm accuracy/precision

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Borsdorff, T.; Rettinger, M.; Camy-Peyret, C.; Demoulin, P.; Duchatelet, P.; Mahieu, E.

    2009-04-01

    We present an original optimum strategy for retrieval of precipitable water from routine ground-based mid-infrared FTS measurements performed at a number of globally distributed stations within the NDACC network. The strategy utilizes FTIR retrievals which are set up to match standard radio sonde operations. Thereby, an unprecedented accuracy and precision for measurements of precipitable water can be demonstrated: the correlation between Zugspitze FTIR water vapor columns from a 3 months measurement campaign with total columns derived from coincident radio sondes shows a regression coefficient of R = 0.988, a bias of 0.05 mm, a standard deviation of 0.28 mm, an intercept of 0.01 mm, and a slope of 1.01. This appears to be even better than what can be achieved with state-of-the-art microwave techniques, see e.g., Morland et al. (2006, Fig. 9 therein). Our approach is based upon a careful selection of spectral micro windows, comprising a set of both weak and strong water vapor absorption lines between 839.4 - 840.6 cm⁻¹, 849.0 - 850.2 cm⁻¹, and 852.0 - 853.1 cm⁻¹, which is not contaminated by interfering absorptions of any other trace gases. From existing spectroscopic line lists, a careful selection of the best available parameter set was performed, leading to nearly perfect spectral fits without significant forward model parameter errors. To set up the FTIR water vapor profile inversion, a set of FTIR measurements and coincident radio sondes has been utilized. To eliminate/minimize mismatch in time and space, the Tobin "best estimate of the state of the atmosphere" principle has been applied to the radio sondes. This concept uses pairs of radio sondes launched with a 1-hour separation, and derives the gradient from the two radio sonde measurements, in order to construct a virtual PTU profile for a certain time and location. Coincident FTIR measurements of water vapor columns (two hour mean values) have then been matched to the water columns obtained by
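
The quoted validation statistics (regression coefficient, bias, standard deviation, slope, intercept) can all be derived from paired column measurements; an illustrative sketch, taking the sonde columns as the reference x-values:

```python
import statistics

def validation_stats(sonde, ftir):
    """Compare paired column measurements from two techniques.

    Returns (r, bias, sd_diff, slope, intercept): the correlation
    coefficient, mean and standard deviation of the FTIR-minus-sonde
    differences, and the least-squares slope and intercept of FTIR
    regressed on the sonde reference.
    """
    mx, my = statistics.mean(sonde), statistics.mean(ftir)
    sxx = sum((x - mx) ** 2 for x in sonde)
    syy = sum((y - my) ** 2 for y in ftir)
    sxy = sum((x - mx) * (y - my) for x, y in zip(sonde, ftir))
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / (sxx * syy) ** 0.5
    diffs = [y - x for x, y in zip(sonde, ftir)]
    return r, statistics.mean(diffs), statistics.stdev(diffs), slope, intercept
```

A slope near 1 with a small intercept and bias, as reported for the Zugspitze campaign, indicates agreement across the full range of column amounts rather than only on average.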

  16. Dejittered spike-conditioned stimulus waveforms yield improved estimates of neuronal feature selectivity and spike-timing precision of sensory interneurons.

    PubMed

    Aldworth, Zane N; Miller, John P; Gedeon, Tomás; Cummins, Graham I; Dimitrov, Alexander G

    2005-06-01

    What is the meaning associated with a single action potential in a neural spike train? The answer depends on the way the question is formulated. One general approach toward formulating this question involves estimating the average stimulus waveform preceding spikes in a spike train. Many different algorithms have been used to obtain such estimates, ranging from spike-triggered averaging of stimuli to correlation-based extraction of "stimulus-reconstruction" kernels or spatiotemporal receptive fields. We demonstrate that all of these approaches miscalculate the stimulus feature selectivity of a neuron. Their errors arise from the manner in which the stimulus waveforms are aligned to one another during the calculations. Specifically, the waveform segments are locked to the precise time of spike occurrence, ignoring the intrinsic "jitter" in the stimulus-to-spike latency. We present an algorithm that takes this jitter into account. "Dejittered" estimates of the feature selectivity of a neuron are more accurate (i.e., provide a better estimate of the mean waveform eliciting a spike) and more precise (i.e., have smaller variance around that waveform) than estimates obtained using standard techniques. Moreover, this approach yields an explicit measure of spike-timing precision. We applied this technique to study feature selectivity and spike-timing precision in two types of sensory interneurons in the cricket cercal system. The dejittered estimates of the mean stimulus waveforms preceding spikes were up to three times larger than estimates based on the standard techniques used in previous studies and had power that extended into higher-frequency ranges. Spike timing precision was approximately 5 ms.
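
A simplified sketch of the dejittering idea: iteratively re-align each spike-conditioned stimulus window, within a jitter bound, to the current average before recomputing it. The names and the dot-product alignment criterion below are illustrative; the published algorithm differs in detail:

```python
import numpy as np

def dejittered_sta(stimulus, spike_idx, win, max_jitter, n_iter=5):
    """Spike-triggered average with latency-jitter correction.

    The standard STA averages the `win`-sample stimulus segment
    preceding each spike. Here each segment is additionally shifted by
    up to +/- max_jitter samples to best match the current average, and
    the average is recomputed; iterating sharpens features blurred by
    stimulus-to-spike latency jitter. Spike indices must lie at least
    win + max_jitter samples from either end of the stimulus.
    """
    shifts = np.zeros(len(spike_idx), dtype=int)

    def windows():
        return np.array([stimulus[i + s - win:i + s]
                         for i, s in zip(spike_idx, shifts)])

    sta = windows().mean(axis=0)
    for _ in range(n_iter):
        for j, i in enumerate(spike_idx):
            # pick the shift whose window correlates best with the STA
            corr = [np.dot(stimulus[i + s - win:i + s], sta)
                    for s in range(-max_jitter, max_jitter + 1)]
            shifts[j] = int(np.argmax(corr)) - max_jitter
        sta = windows().mean(axis=0)
    return sta
```

Calling the function with n_iter=0 reproduces the standard (jitter-blurred) STA, so the two estimates can be compared directly, mirroring the paper's comparison of dejittered and standard feature estimates.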

  17. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. And in order to solve the problem that the accuracy of the whole data set may be low while a useful part is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a measure nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric to measure accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
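
Evaluating the precision and recall of a query result against verified ground truth reduces to set comparisons; a minimal sketch (the interface is hypothetical, not the paper's framework):

```python
def relative_accuracy(returned, correct):
    """Precision and recall of a query result set.

    returned: records actually returned by the query over the stored
    (possibly dirty) data; correct: records the query should return
    over the true data. Precision is the fraction of returned records
    that are correct; recall is the fraction of correct records that
    were returned.
    """
    returned, correct = set(returned), set(correct)
    hits = len(returned & correct)
    precision = hits / len(returned) if returned else 1.0
    recall = hits / len(correct) if correct else 1.0
    return precision, recall
```

High precision with low recall (or vice versa) on a query can coexist with low accuracy of the whole data set, which is the motivation for evaluating accuracy relative to queries.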

  18. Accuracy and cut-off point selection in three-class classification problems using a generalization of the Youden index.

    PubMed

    Nakas, Christos T; Alonzo, Todd A; Yiannoutsos, Constantin T

    2010-12-10

    We study properties of the index J(3), defined as the accuracy, or the maximum correct classification, for a given three-class classification problem. Specifically, using J(3) one can assess the discrimination between the three distributions and obtain an optimal pair of cut-off points c(1)
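
The abstract is truncated here, but the empirical idea behind J(3) can be sketched as a search over ordered cut-off pairs that maximizes the mean correct-classification rate across the three classes (an illustrative, nonparametric formulation; the paper's estimators may differ):

```python
def best_cutoffs(x1, x2, x3):
    """Empirical search for cut-off points c1 < c2 that maximize
    correct three-class classification: class 1 if x <= c1, class 2 if
    c1 < x <= c2, class 3 if x > c2.

    Returns (j, c1, c2), where j is the mean of the three per-class
    correct-classification rates at the optimum.
    """
    cands = sorted(set(x1) | set(x2) | set(x3))
    best = (0.0, None, None)
    for i, c1 in enumerate(cands):
        for c2 in cands[i:]:
            j = (sum(x <= c1 for x in x1) / len(x1)
                 + sum(c1 < x <= c2 for x in x2) / len(x2)
                 + sum(x > c2 for x in x3) / len(x3)) / 3
            if j > best[0]:
                best = (j, c1, c2)
    return best
```

Perfectly separated classes give j = 1, while complete overlap drives j toward the chance level, analogous to the two-class Youden index.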

  19. ZCURVE 3.0: identify prokaryotic genes with higher accuracy as well as automatically and accurately select essential genes.

    PubMed

    Hua, Zhi-Gang; Lin, Yan; Yuan, Ya-Zhou; Yang, De-Chang; Wei, Wen; Guo, Feng-Biao

    2015-07-01

    In 2003, we developed an ab initio program, ZCURVE 1.0, to find genes in bacterial and archaeal genomes. In this work, we present the updated version (i.e. ZCURVE 3.0). Using 422 prokaryotic genomes, the average accuracy was 93.7% with the updated version, compared with 88.7% with the original version. Such results also demonstrate that ZCURVE 3.0 is comparable with Glimmer 3.02 and may provide complementary predictions to it. In fact, the joint application of the two programs generated better results by correctly finding more annotated genes while also containing fewer false-positive predictions. As the exclusive function, ZCURVE 3.0 contains one post-processing program that can identify essential genes with high accuracy (generally >90%). We hope ZCURVE 3.0 will receive wide use with the web-based running mode. The updated ZCURVE can be freely accessed from http://cefg.uestc.edu.cn/zcurve/ or http://tubic.tju.edu.cn/zcurveb/ without any restrictions. PMID:25977299

  1. An Analysis of the Selected Materials Used in Step Measurements During Pre-Fits of Thermal Protection System Tiles and the Accuracy of Measurements Made Using These Selected Materials

    NASA Technical Reports Server (NTRS)

    Kranz, David William

    2010-01-01

    The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles and to compare and contrast the accuracy of measurements made using these selected materials. The reasoning for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest accuracy of measurement in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences between tile heights from one to another are very critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for experience level in accuracy yielded no evidence of difference. Lastly, the use of the Trammel tool over the Shim Pack yielded a variable difference for those tests.

  2. Use of Selected Goodness-of-Fit Statistics to Assess the Accuracy of a Model of Henry Hagg Lake, Oregon

    NASA Astrophysics Data System (ADS)

    Rounds, S. A.; Sullivan, A. B.

    2004-12-01

    Assessing a model's ability to reproduce field data is a critical step in the modeling process. For any model, some method of determining goodness-of-fit to measured data is needed to aid in calibration and to evaluate model performance. Visualizations and graphical comparisons of model output are an excellent way to begin that assessment. At some point, however, model performance must be quantified. Goodness-of-fit statistics, including the mean error (ME), mean absolute error (MAE), root mean square error, and coefficient of determination, typically are used to measure model accuracy. Statistical tools such as the sign test or Wilcoxon test can be used to test for model bias. The runs test can detect phase errors in simulated time series. Each statistic is useful, but each has its limitations. None provides a complete quantification of model accuracy. In this study, a suite of goodness-of-fit statistics was applied to a model of Henry Hagg Lake in northwest Oregon. Hagg Lake is a man-made reservoir on Scoggins Creek, a tributary to the Tualatin River. Located on the west side of the Portland metropolitan area, the Tualatin Basin is home to more than 450,000 people. Stored water in Hagg Lake helps to meet the agricultural and municipal water needs of that population. Future water demands have caused water managers to plan for a potential expansion of Hagg Lake, doubling its storage to roughly 115,000 acre-feet. A model of the lake was constructed to evaluate the lake's water quality and estimate how that quality might change after raising the dam. The laterally averaged, two-dimensional, U.S. Army Corps of Engineers model CE-QUAL-W2 was used to construct the Hagg Lake model. Calibrated for the years 2000 and 2001 and confirmed with data from 2002 and 2003, modeled parameters included water temperature, ammonia, nitrate, phosphorus, algae, zooplankton, and dissolved oxygen. Several goodness-of-fit statistics were used to quantify model accuracy and bias. Model
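
The first few statistics named above can be computed directly from paired observed and modeled series; a minimal sketch:

```python
import math

def fit_stats(observed, modeled):
    """Goodness-of-fit statistics for model calibration.

    Returns (me, mae, rmse): mean error (signed bias), mean absolute
    error, and root mean square error of modeled values against field
    observations.
    """
    diffs = [m - o for o, m in zip(observed, modeled)]
    n = len(diffs)
    me = sum(diffs) / n                              # bias: sign matters
    mae = sum(abs(d) for d in diffs) / n             # typical error magnitude
    rmse = math.sqrt(sum(d * d for d in diffs) / n)  # penalizes large misses
    return me, mae, rmse
```

As the abstract notes, each statistic has blind spots: a mean error of zero can hide large compensating errors (visible in MAE/RMSE), and none of the three detects the phase errors that the runs test targets.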

  3. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  4. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  5. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices equipped on the ground and ceiling of the treatment room. This system can determine the positioning error in the form of three translations and three rotations according to the registration of two X-ray images acquired online and the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. The phantom was set up on the treatment table with correct position and various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences in the results between the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O x-ray image-guided positioning system and the OBI CBCT system showed a general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09mm, 0.03±0.25mm, and 0.04±0.31mm, respectively. The maximum difference was only 0.51mm in all the directions and the angular discrepancy was 0.3±0.5° between the two systems. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal. There is a general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve as good accuracy as CBCT and can be used in clinical IGRT applications.

  6. Bias and precision of selected analytes reported by the National Atmospheric Deposition Program and National Trends Network, 1984

    USGS Publications Warehouse

    Brooks, M.H.; Schroder, L.J.; Willoughby, T.C.

    1987-01-01

The U.S. Geological Survey operated a blind audit sample program during 1984 to test the effects of the sample handling and shipping procedures used by the National Atmospheric Deposition Program and National Trends Network on the quality of wet deposition data produced by the combined networks. Blind audit samples, which were dilutions of standard reference water samples, were submitted by network site operators to the central analytical laboratory disguised as actual wet deposition samples. Results from the analyses of blind audit samples were used to estimate the analyte bias associated with all network wet deposition samples analyzed in 1984 and to estimate analyte precision. Concentration differences between the blind audit samples submitted to the central analytical laboratory and separate analyses of aliquots of those samples that had not undergone network handling and shipping were used to calculate the analyte masses that apparently were added to each blind audit sample by routine network handling and shipping procedures. These calculated masses indicated statistically significant biases for magnesium, sodium, potassium, chloride, and sulfate. Median calculated masses were 41.4 micrograms (ug) for calcium, 14.9 ug for magnesium, 23.3 ug for sodium, 0.7 ug for potassium, 16.5 ug for chloride, and 55.3 ug for sulfate. Analyte precision was estimated using two different sets of replicate measures performed by the central analytical laboratory. Estimated standard deviations were similar to those previously reported. (Author's abstract)
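The bias computation described above, a paired concentration difference converted to an added analyte mass, can be sketched as follows; the concentrations and sample volume are hypothetical, not network data:

```python
# Sketch: estimating analyte mass apparently added during handling/shipping.
# Concentrations (ug/L) and sample volume (L) below are hypothetical.
from statistics import median

def added_masses(shipped_ug_per_l, unshipped_ug_per_l, volume_l):
    """Mass (ug) gained by each sample: concentration difference x volume."""
    return [(s - u) * volume_l
            for s, u in zip(shipped_ug_per_l, unshipped_ug_per_l)]

shipped   = [105.0, 98.5, 112.0, 101.2]   # aliquots after network handling
unshipped = [100.0, 97.0, 108.5, 100.0]   # pristine aliquots
masses = added_masses(shipped, unshipped, volume_l=0.25)
bias = median(masses)                      # median added mass, as in the study
```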

  7. Accuracy of genomic prediction for BCWD resistance in rainbow trout using different genotyping platforms and genomic selection models

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In this study, we aimed to (1) predict genomic estimated breeding value (GEBV) for bacterial cold water disease (BCWD) resistance by genotyping training (n=583) and validation samples (n=53) with two genotyping platforms (24K RAD-SNP and 49K SNP) and using different genomic selection (GS) models (Ba...

  8. Performance, Accuracy, Data Delivery, and Feedback Methods in Order Selection: A Comparison of Voice, Handheld, and Paper Technologies

    ERIC Educational Resources Information Center

    Ludwig, Timothy D.; Goomas, David T.

    2007-01-01

A field study was conducted in auto-parts after-market distribution centers where selectors used handheld computers to receive instructions and feedback about their product selection process. A wireless voice-interaction technology was then implemented in a multiple-baseline fashion across three departments of a warehouse (N = 14) and was associated…

  9. Improving accuracy of overhanging structures for selective laser melting through reliability characterization of single track formation on thick powder beds

    NASA Astrophysics Data System (ADS)

    Mohanty, Sankhya; Hattel, Jesper H.

    2016-04-01

Repeatability and reproducibility of parts produced by selective laser melting remain a standing issue and, coupled with a lack of standardized quality control, present a major hindrance to the maturing of selective laser melting as an industrial-scale process. Consequently, numerical process modelling has been adopted to improve the predictability of the outputs of the selective laser melting process. Establishing the reliability of the process, however, is still a challenge, especially for components with overhanging structures. In this paper, a systematic approach towards establishing the reliability of overhanging-structure production by selective laser melting is adopted. A calibrated, fast, multiscale thermal model is used to simulate single track formation on a thick powder bed. Single tracks are manufactured on a thick powder bed using the same processing parameters but at different locations in the powder bed and in different laser scanning directions. The difference in melt track widths and depths captures the effect of changes in incident beam power distribution due to location and processing direction. The experimental results are used in combination with the numerical model and subjected to uncertainty and reliability analysis. Cumulative probability distribution functions obtained for melt track widths and depths are found to be coherent with the observed experimental values. The technique is subsequently extended to reliability characterization of single layers produced on a thick powder bed without support structures, by determining cumulative probability distribution functions for average layer thickness, sample density and thermal homogeneity.

  10. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas, and the resulting flow values are accumulated to compute streamflows in stream reaches, which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period, which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
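The random-subsampling scheme described above, 30 random subsets of the 75-gage network per censoring level, can be sketched as follows; the function name is illustrative, and AFINCH itself is not reproduced:

```python
# Sketch of the random-subsampling (bootstrap-style censoring) scheme:
# build 30 random subsets of a gage network at a given censoring level.
import random

def censored_subsets(gages, censor_fraction, n_subsets=30, seed=0):
    """Random gage subsets after removing ~censor_fraction of the network."""
    rng = random.Random(seed)
    keep = round(len(gages) * (1.0 - censor_fraction))
    return [sorted(rng.sample(gages, keep)) for _ in range(n_subsets)]

# A 75-gage network censored at the 20-percent level.
gages = list(range(75))
subsets = censored_subsets(gages, censor_fraction=0.20)
```

Each subset would then be fed to the model, and the spread of the resulting streamflow estimates summarizes the loss of accuracy and precision at that censoring level.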

  11. Variable selection procedures before partial least squares regression enhance the accuracy of milk fatty acid composition predicted by mid-infrared spectroscopy.

    PubMed

    Gottardo, P; Penasa, M; Lopez-Villalobos, N; De Marchi, M

    2016-10-01

Mid-infrared spectroscopy is a high-throughput technique that allows the prediction of milk quality traits on a large scale. The accuracy of prediction achievable using partial least squares (PLS) regression is usually high for fatty acids (FA) that are more abundant in milk, whereas it decreases for FA that are present in low concentrations. Two variable selection methods, uninformative variable elimination or a genetic algorithm combined with PLS regression, were used in the present study to investigate their effect on the accuracy of prediction equations for milk FA profile, expressed either as a concentration on total identified FA or as a concentration in milk. For FA expressed on total identified FA, the coefficient of determination of cross-validation from PLS alone was low (0.25) for the prediction of polyunsaturated FA and medium (0.70) for saturated FA. The coefficient of determination increased to 0.54 and 0.95 for polyunsaturated and saturated FA, respectively, when FA were expressed on a milk basis and PLS alone was used. Applying either algorithm before PLS regression improved the accuracy of prediction, especially for FA that are usually difficult to predict; the improvement relative to PLS regression alone ranged from 9 to 80%. In general, FA were better predicted when their concentrations were expressed on a milk basis. These results might favor the use of prediction equations in the dairy industry for genetic purposes and payment systems. PMID:27522434

  12. Assessment of exercise-induced minor muscle lesions: the accuracy of Cyriax's diagnosis by selective tension paradigm.

    PubMed

    Franklin, M E; Conner-Kerr, T; Chamness, M; Chenier, T C; Kelly, R R; Hodge, T

    1996-09-01

The Cyriax selective tension assessment paradigm is commonly used by clinicians for the diagnosis of soft tissue lesions; however, studies have not demonstrated that it is a valid method. The purpose of this study was to examine the construct validity of the active motion, passive motion, resisted movement, and palpation components of the Cyriax selective tension diagnosis paradigm in subjects with an exercise-induced minor hamstring muscle lesion. Nine female subjects with a mean age of 23.6 years (SD = 4.7) and a mass of 57.3 kg (SD = 10.7) performed two sets of 20 maximal eccentric isokinetic knee flexor contractions designed to induce a minor muscle lesion of the hamstrings. Active range of motion, passive range of motion, knee extension end-feel pain relative to resistance sequence, knee flexor isometric strength, pain perception during knee flexor resisted movement testing, and palpation pain of the hamstrings were assessed at 0.5, 2, 12, 24, 48, and 72 hours postexercise and compared with Cyriax's hypothesized selective tension paradigm results. Consistent with Cyriax's paradigm, passive range of motion remained unchanged, and perceived pain of the hamstrings increased with resistance testing at 12, 24, 48, and 72 hours postexercise when compared with baseline. In addition, palpation pain of the hamstrings was significantly elevated at 48 and 72 hours after exercise (p < 0.05). In contrast to Cyriax's paradigm, active range of motion was significantly reduced over time (p < 0.05), with the least amount of motion compared to baseline (85%) occurring at 48 hours postexercise. Further, resisted movement testing revealed significant knee flexor isometric strength reductions over time (p < 0.05), with the greatest reductions (33%) occurring at 48 hours postexercise. According to Cyriax, when a minor muscle lesion is tested, it should be strong and painful; however, none of the postexercise time frames exhibited results that were strong and painful. This study

  13. Precise sensing and selection of molecules by the interface between the metal nanocluster and the oxide support

    NASA Astrophysics Data System (ADS)

    Khubezhov, S. A.; Silaev, I. V.; Tvauri, I. V.; Grigorkina, G. S.; Demeev, Z. S.; Ramonova, A. G.; Kuznetsov, D. V.; Radchenko, T. I.; Magkoev, T. T., Jr.; Ogura, S.; Sekiba, D.; Fukutani, K.; Magkoev, T. T.

    2016-03-01

Molecules adsorbed on a surface dramatically change the physics and chemistry of the substrate, thus offering an opportunity for precise sensing of the molecules by monitoring the transformation of the state of the adsorbent. Combining different types of substrates to form an interface generally produces a synergistic effect that dramatically enhances sensitivity. In this regard, the aim of the present work is to demonstrate how the metal/oxide interface, formed in vacuo (10⁻¹⁰ Torr) by deposition of metal nanoclusters on an oxide support, affects the behavior of NO and CO molecules adsorbed on the surface. Coadsorption of NO and CO molecules on Ni clusters deposited on an MgO(111) film formed on a Mo(110) crystal has been studied by reflection-absorption infrared spectroscopy (RAIRS) and temperature-programmed desorption (TPD). The observed high sensitivity of the metal/oxide interface to molecular behaviour suggests an opportunity for the design of new molecular sensors based on metal/oxide nanostructures.

  14. High-precision instrument for measuring the surface tension, viscosity and surface viscoelasticity of liquids using ripplon surface laser-light scattering with tunable wavelength selection.

    PubMed

    Nishimura, Yu; Hasegawa, Akinori; Nagasaka, Yuji

    2014-04-01

Here we describe our new high-precision instrument that simultaneously measures the surface tension, viscosity, and surface viscoelasticity of liquids. The instrument works on the ripplon surface laser-light scattering principle and operates with an automatically tunable selection of ripplon wavelength from 4 to 1500 μm, which corresponds to a frequency range of observable surface phenomena from approximately 400 Hz to 3 MHz in the case of water. The instrument uses a heterodyne technique in which a reference laser beam intersects at an arbitrarily adjustable angle with a vertically directed probing beam. For the determination of the wavelength of the selected ripplons, we substituted the interference fringe spacing, measured using a high-resolution beam profiler. To extract reliable surface tension and viscosity data from the experimentally obtained spectrum shape for a selected ripplon wavelength, we developed an algorithm that calculates the exact solution of the dispersion equation. The uncertainties of the surface tension and viscosity measurements were confirmed through measurements of seven pure Newtonian liquids at 25 °C with selected ripplon wavelengths from 40 μm to 467 μm. To verify the genuine capability of the tunable ripplon wavelength selection, we measured the surface elasticity of soluble surface molecular layers spread on pentanoic acid solutions.

  15. High-precision instrument for measuring the surface tension, viscosity and surface viscoelasticity of liquids using ripplon surface laser-light scattering with tunable wavelength selection

    SciTech Connect

    Nishimura, Yu; Hasegawa, Akinori; Nagasaka, Yuji

    2014-04-15

Here we describe our new high-precision instrument that simultaneously measures the surface tension, viscosity, and surface viscoelasticity of liquids. The instrument works on the ripplon surface laser-light scattering principle and operates with an automatically tunable selection of ripplon wavelength from 4 to 1500 μm, which corresponds to a frequency range of observable surface phenomena from approximately 400 Hz to 3 MHz in the case of water. The instrument uses a heterodyne technique in which a reference laser beam intersects at an arbitrarily adjustable angle with a vertically directed probing beam. For the determination of the wavelength of the selected ripplons, we substituted the interference fringe spacing, measured using a high-resolution beam profiler. To extract reliable surface tension and viscosity data from the experimentally obtained spectrum shape for a selected ripplon wavelength, we developed an algorithm that calculates the exact solution of the dispersion equation. The uncertainties of the surface tension and viscosity measurements were confirmed through measurements of seven pure Newtonian liquids at 25 °C with selected ripplon wavelengths from 40 μm to 467 μm. To verify the genuine capability of the tunable ripplon wavelength selection, we measured the surface elasticity of soluble surface molecular layers spread on pentanoic acid solutions.
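In the inviscid deep-water limit, the capillary-gravity dispersion relation ω² = gk + (σ/ρ)k³ connects a ripplon's frequency and wavelength to the surface tension σ. The sketch below inverts this simplified limit; the instrument above solves the full viscoelastic dispersion equation, which this does not reproduce:

```python
# Minimal sketch: inviscid deep-water capillary-gravity dispersion,
# w^2 = g*k + (sigma/rho)*k^3, inverted to get surface tension from a
# measured ripplon frequency and wavelength. The full viscoelastic
# dispersion equation used by the instrument is not reproduced here.
import math

def surface_tension(freq_hz, wavelength_m, rho=997.0, g=9.81):
    k = 2.0 * math.pi / wavelength_m      # ripplon wavenumber (rad/m)
    w = 2.0 * math.pi * freq_hz           # angular frequency (rad/s)
    return (w * w - g * k) * rho / k**3

# Water-like round-trip check at a 100-um ripplon wavelength, sigma = 0.072 N/m.
k = 2.0 * math.pi / 100e-6
w = math.sqrt(9.81 * k + 0.072 * k**3 / 997.0)
sigma = surface_tension(w / (2.0 * math.pi), 100e-6)
```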

  16. Precise and selective sensing of DNA-DNA hybridization by graphene/Si-nanowires diode-type biosensors

    NASA Astrophysics Data System (ADS)

    Kim, Jungkil; Park, Shin-Young; Kim, Sung; Lee, Dae Hun; Kim, Ju Hwan; Kim, Jong Min; Kang, Hee; Han, Joong-Soo; Park, Jun Woo; Lee, Hosun; Choi, Suk-Ho

    2016-08-01

Single-Si-nanowire (NW)-based DNA sensors have been developed recently, but their sensitivity is very limited because of high noise signals originating from the small source-drain current of a single Si NW. Here, we demonstrate that chemical-vapor-deposition-grown large-scale graphene/surface-modified vertical-Si-NW-array junctions can be utilized as diode-type biosensors for highly sensitive and selective detection of specific oligonucleotides. For this, a twenty-seven-base synthetic oligonucleotide, a fragment of the human DENND2D promoter sequence, is first decorated as a probe on the surface of the vertical Si-NW arrays, and the complementary oligonucleotide is then hybridized to the probe. This hybridization gives rise to a doping effect on the surface of the Si NWs, resulting in an increase of the current in the biosensor. The current of the biosensor increases by 19 to 120% as the concentration of the target DNA varies from 0.1 to 500 nM. In contrast, no such biosensing occurs with oligonucleotides of incompatible or mismatched sequences. Similar results are observed from photoluminescence microscopic images and spectra. The biosensors show very uniform current changes, with standard deviations ranging from ~1 to ~10% over ten repeated endurance tests. These results are very promising for applications in accurate, selective, and stable biosensing.

  17. Precise and selective sensing of DNA-DNA hybridization by graphene/Si-nanowires diode-type biosensors.

    PubMed

    Kim, Jungkil; Park, Shin-Young; Kim, Sung; Lee, Dae Hun; Kim, Ju Hwan; Kim, Jong Min; Kang, Hee; Han, Joong-Soo; Park, Jun Woo; Lee, Hosun; Choi, Suk-Ho

    2016-01-01

Single-Si-nanowire (NW)-based DNA sensors have been developed recently, but their sensitivity is very limited because of high noise signals originating from the small source-drain current of a single Si NW. Here, we demonstrate that chemical-vapor-deposition-grown large-scale graphene/surface-modified vertical-Si-NW-array junctions can be utilized as diode-type biosensors for highly sensitive and selective detection of specific oligonucleotides. For this, a twenty-seven-base synthetic oligonucleotide, a fragment of the human DENND2D promoter sequence, is first decorated as a probe on the surface of the vertical Si-NW arrays, and the complementary oligonucleotide is then hybridized to the probe. This hybridization gives rise to a doping effect on the surface of the Si NWs, resulting in an increase of the current in the biosensor. The current of the biosensor increases by 19 to 120% as the concentration of the target DNA varies from 0.1 to 500 nM. In contrast, no such biosensing occurs with oligonucleotides of incompatible or mismatched sequences. Similar results are observed from photoluminescence microscopic images and spectra. The biosensors show very uniform current changes, with standard deviations ranging from ~1 to ~10% over ten repeated endurance tests. These results are very promising for applications in accurate, selective, and stable biosensing. PMID:27534818

  18. Precise and selective sensing of DNA-DNA hybridization by graphene/Si-nanowires diode-type biosensors

    PubMed Central

    Kim, Jungkil; Park, Shin-Young; Kim, Sung; Lee, Dae Hun; Kim, Ju Hwan; Kim, Jong Min; Kang, Hee; Han, Joong-Soo; Park, Jun Woo; Lee, Hosun; Choi, Suk-Ho

    2016-01-01

Single-Si-nanowire (NW)-based DNA sensors have been developed recently, but their sensitivity is very limited because of high noise signals originating from the small source-drain current of a single Si NW. Here, we demonstrate that chemical-vapor-deposition-grown large-scale graphene/surface-modified vertical-Si-NW-array junctions can be utilized as diode-type biosensors for highly sensitive and selective detection of specific oligonucleotides. For this, a twenty-seven-base synthetic oligonucleotide, a fragment of the human DENND2D promoter sequence, is first decorated as a probe on the surface of the vertical Si-NW arrays, and the complementary oligonucleotide is then hybridized to the probe. This hybridization gives rise to a doping effect on the surface of the Si NWs, resulting in an increase of the current in the biosensor. The current of the biosensor increases by 19 to 120% as the concentration of the target DNA varies from 0.1 to 500 nM. In contrast, no such biosensing occurs with oligonucleotides of incompatible or mismatched sequences. Similar results are observed from photoluminescence microscopic images and spectra. The biosensors show very uniform current changes, with standard deviations ranging from ~1 to ~10% over ten repeated endurance tests. These results are very promising for applications in accurate, selective, and stable biosensing. PMID:27534818

  19. Influence of Raw Image Preprocessing and Other Selected Processes on Accuracy of Close-Range Photogrammetric Systems According to Vdi 2634

    NASA Astrophysics Data System (ADS)

    Reznicek, J.; Luhmann, T.; Jepping, C.

    2016-06-01

This paper examines the influence of raw image preprocessing and other selected processes on the accuracy of close-range photogrammetric measurement. The examined processes and features include: raw image preprocessing, sensor unflatness, distance-dependent lens distortion, extension of the input observations (image measurements) by incorporating all RGB colour channels, ellipse centre eccentricity, and target detection. The examination of each effect is carried out experimentally by performing the validation procedure proposed in the German VDI guideline 2634/1. The validation procedure is based on performing standard photogrammetric measurements of highly accurate calibrated measuring lines (multi-scale bars) with known lengths (typical uncertainty = 5 μm at 2 sigma). The comparison of the measured lengths with the known values gives the maximum length measurement error (LME), which characterizes the accuracy of the validated photogrammetric system. For higher reliability, the VDI test field was photographed ten times independently with the same configuration and camera settings. The images were acquired with the metric ALPA 12WA camera. The tests are performed on all ten measurements, which also makes it possible to measure the repeatability of the estimated parameters. The influences are examined by comparing the quality characteristics of the reference and tested settings.

  20. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the "foot" and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 μm rms. We have also added a "fail-safe chirp" system to avoid Stimulated Brillouin Scattering (SBS) in optical components during high-energy shots.

  1. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

Currently, the performance of overlay metrology is evaluated mainly on the basis of random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5 nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1 nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to a metrology inaccuracy of ~10 nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than that of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.
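A toy illustration of why mark asymmetry biases a position measurement: adding a one-sided tail to a symmetric intensity profile shifts a simple centroid-based position estimate even though the nominal mark center is unchanged. This is a deliberately simplified estimator, not the imaging or DBO algorithms discussed above:

```python
# Toy illustration: mark asymmetry shifts a centroid-based position estimate.
def centroid(xs, intensity):
    """Intensity-weighted mean position of a 1-D mark profile."""
    total = sum(intensity)
    return sum(x * i for x, i in zip(xs, intensity)) / total

xs = list(range(-5, 6))                  # pixel positions; mark centered at 0
symmetric  = [0, 0, 1, 4, 8, 10, 8, 4, 1, 0, 0]   # ideal symmetric mark
asymmetric = [0, 0, 1, 4, 8, 10, 8, 4, 3, 2, 1]   # one-sided tail (asymmetry)
shift = centroid(xs, asymmetric) - centroid(xs, symmetric)
```

The nonzero `shift` is the kind of systematic error contribution the abstract argues must be budgeted alongside precision and TIS variability.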

  2. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land-coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives, so ALOS carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and with the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  3. Precise GPS ephemerides from DMA and NGS tested by time transfer

    NASA Technical Reports Server (NTRS)

    Lewandowski, Wlodzimierz W.; Petit, Gerard; Thomas, Claudine

    1992-01-01

    It was shown that the use of the Defense Mapping Agency's (DMA) precise ephemerides brings a significant improvement to the accuracy of GPS time transfer. At present a new set of precise ephemerides produced by the National Geodetic Survey (NGS) has been made available to the timing community. This study demonstrates that both types of precise ephemerides improve long-distance GPS time transfer and remove the effects of Selective Availability (SA) degradation of broadcast ephemerides. The issue of overcoming SA is also discussed in terms of the routine availability of precise ephemerides.

  4. Ground control requirements for precision processing of ERTS images

    USGS Publications Warehouse

    Burger, Thomas C.

    1973-01-01

With the successful flight of the ERTS-1 satellite, orbital-height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to keep positional error below 100 meters, control points for the precision processing must be carefully selected and clearly definable on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.

  5. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-01-01

The aim of this study was to evaluate the suitability of statistics as measures of the degree of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes. PMID:27173351

  6. Measurements of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes.

    PubMed

    Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G

    2016-05-09

The aim of this study was to evaluate the suitability of statistics as measures of the degree of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) for evaluating experimental precision in trials with cowpea genotypes.
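Selective accuracy, the precision statistic these two records rely on, can be computed directly from the F-test value for genotypes; the sketch below assumes the commonly used formulation SA = √(1 − 1/F):

```python
# Sketch: selective accuracy from the genotype F-test value, assuming the
# common formulation SA = sqrt(1 - 1/F) for F >= 1.
import math

def selective_accuracy(f_genotype):
    if f_genotype < 1.0:
        return 0.0  # no detectable genetic signal above residual variance
    return math.sqrt(1.0 - 1.0 / f_genotype)

sa = selective_accuracy(4.0)  # F = 4 -> SA = sqrt(0.75), high precision
```

This is why selective accuracy and the F-test value behave as directly related statistics in the correlations reported above.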

  7. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273

  8. Genomic Selection and Association Mapping in Rice (Oryza sativa): Effect of Trait Genetic Architecture, Training Population Composition, Marker Number and Statistical Model on Accuracy of Rice Genomic Selection in Elite, Tropical Rice Breeding Lines

    PubMed Central

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-01-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline. PMID:25689273

  9. Genomic selection and association mapping in rice (Oryza sativa): effect of trait genetic architecture, training population composition, marker number and statistical model on accuracy of rice genomic selection in elite, tropical rice breeding lines.

    PubMed

    Spindel, Jennifer; Begum, Hasina; Akdemir, Deniz; Virk, Parminder; Collard, Bertrand; Redoña, Edilberto; Atlin, Gary; Jannink, Jean-Luc; McCouch, Susan R.

    2015-02-01

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its efficacy for breeding inbred lines of rice. We performed a genome-wide association study (GWAS) in conjunction with five-fold GS cross-validation on a population of 363 elite breeding lines from the International Rice Research Institute's (IRRI) irrigated rice breeding program and herein report the GS results. The population was genotyped with 73,147 markers using genotyping-by-sequencing. The training population, statistical method used to build the GS model, number of markers, and trait were varied to determine their effect on prediction accuracy. For all three traits, genomic prediction models outperformed prediction based on pedigree records alone. Prediction accuracies ranged from 0.31 and 0.34 for grain yield and plant height to 0.63 for flowering time. Analyses using subsets of the full marker set suggest that using one marker every 0.2 cM is sufficient for genomic selection in this collection of rice breeding materials. RR-BLUP was the best performing statistical method for grain yield where no large effect QTL were detected by GWAS, while for flowering time, where a single very large effect QTL was detected, the non-GS multiple linear regression method outperformed GS models. For plant height, in which four mid-sized QTL were identified by GWAS, random forest produced the most consistently accurate GS models. Our results suggest that GS, informed by GWAS interpretations of genetic architecture and population structure, could become an effective tool for increasing the efficiency of rice breeding as the costs of genotyping continue to decline.
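
    The five-fold cross-validation scheme described in this abstract can be sketched as follows. This is a minimal illustration on synthetic marker data, not the authors' pipeline: the marker matrix, number of causal loci, effect sizes, and ridge penalty `lam` are all invented for the example. It relies only on the standard fact that RR-BLUP is equivalent to ridge regression on marker genotypes, with prediction accuracy taken as the correlation between predicted and observed phenotypes in the held-out fold.

```python
import numpy as np

def ridge_predict(X_train, y_train, X_test, lam=1.0):
    # RR-BLUP-style ridge solution: beta = (X'X + lam*I)^-1 X'y
    p = X_train.shape[1]
    beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                           X_train.T @ y_train)
    return X_test @ beta

def kfold_accuracy(X, y, k=5, lam=1.0, seed=0):
    # Prediction accuracy = mean Pearson correlation between predicted
    # and observed phenotypes over k held-out folds
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    accs = []
    for fold in np.array_split(idx, k):
        mask = np.ones(len(y), dtype=bool)
        mask[fold] = False
        pred = ridge_predict(X[mask], y[mask], X[~mask], lam)
        accs.append(np.corrcoef(pred, y[~mask])[0, 1])
    return float(np.mean(accs))

# Synthetic "marker" matrix: 200 lines x 500 markers coded -1/0/1,
# with 20 causal markers and additive effects (all invented)
rng = np.random.default_rng(1)
X = rng.choice([-1.0, 0.0, 1.0], size=(200, 500))
beta_true = np.zeros(500)
beta_true[:20] = rng.normal(0.0, 1.0, 20)
y = X @ beta_true + rng.normal(0.0, 2.0, 200)

acc = kfold_accuracy(X, y, k=5, lam=50.0)
```

    In practice the penalty would be set from estimated variance components rather than fixed by hand, and real pipelines compare this baseline against Bayesian and machine-learning models as the abstract describes.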

  10. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and maintains its adjustment even under rough handling.

  11. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, comprising two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and maintains its adjustment even under rough handling.

  12. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on research in the area of precise ephemerides for GPS satellites, the following observations can be made about the status of, and future work needed on, orbit accuracy. Several aspects must be addressed in determining precise orbits: force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to force modeling are addressed here.

  13. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves are commonly generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, an SRM can also be run as an unknown (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value, giving a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, it is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric, and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of more than two years (30+ analyses by each technique). We show that both digestion

  14. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 μm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  15. Ultra precision machining

    NASA Astrophysics Data System (ADS)

    Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas

    1990-05-01

    A number of fields require, or can take advantage of, very high precision in machining. For example, further development of high-energy lasers and x-ray astronomy depends critically on the manufacture of lightweight reflecting metal optical components. To fabricate these optical components with machine tools, they will be made of metal with a mirror-quality surface finish, meaning dimensional tolerances on the order of 0.02 microns and a surface roughness of 0.07. These accuracy targets fall in the category of ultra-precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques; they require single-crystal diamond tools, special attention to vibration isolation, isolation of the machine metrology, and on-line correction of imperfections in the motion of the machine carriages.

  16. The accuracy of selected land use and land cover maps at scales of 1:250,000 and 1:100,000

    USGS Publications Warehouse

    Fitzpatrick-Lins, Katherine

    1980-01-01

    Land use and land cover maps produced by the U.S. Geological Survey are found to meet or exceed the established standard of accuracy. When analyzed using a point sampling technique and binomial probability theory, several maps, illustrative of those produced for different parts of the country, were found to meet or exceed accuracies of 85 percent. Those maps tested were Tampa, Fla., Portland, Me., Charleston, W. Va., and Greeley, Colo., published at a scale of 1:250,000, and Atlanta, Ga., and Seattle and Tacoma, Wash., published at a scale of 1:100,000. For each map, the values were determined by calculating the ratio of the total number of points correctly interpreted to the total number of points sampled. Six of the seven maps tested have accuracies of 85 percent or better at the 95-percent lower confidence limit. When the sample data for predominant categories (those sampled with a significant number of points) were grouped together for all maps, accuracies of those predominant categories met the 85-percent accuracy criterion, with one exception. One category, Residential, had less than 85-percent accuracy at the 95-percent lower confidence limit. Nearly all residential land sampled was mapped correctly, but some areas of other land uses were mapped incorrectly as Residential.
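
    The accuracy test described here (a point sample compared against the 85-percent criterion at the 95-percent lower confidence limit) can be sketched as follows. The point counts are hypothetical, and the one-sided normal approximation to the binomial is one common choice for this kind of map-accuracy test, not necessarily the exact procedure used in the study.

```python
import math

def accuracy_lower_bound(correct, sampled, z=1.645):
    # One-sided 95% lower confidence limit for a binomial proportion,
    # using the normal approximation (z = 1.645 for 95% one-sided)
    p = correct / sampled
    se = math.sqrt(p * (1.0 - p) / sampled)
    return p - z * se

# Hypothetical point sample: 170 of 190 points correctly interpreted
lower = accuracy_lower_bound(170, 190)
meets_standard = lower >= 0.85   # the 85-percent accuracy criterion
```

    Here the observed proportion is about 0.895, the lower confidence limit is about 0.858, and the hypothetical map would pass the criterion.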

  17. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  18. Precision orbit determination for Topex

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Schutz, B. E.; Ries, J. C.; Shum, C. K.

    1990-01-01

    The ability of radar altimeters to measure the distance from a satellite to the ocean surface with a precision on the order of 2 cm imposes unique requirements on orbit determination accuracy. The orbit accuracy requirements will be especially demanding for the joint NASA/CNES Ocean Topography Experiment (Topex/Poseidon). For this mission, a radial orbit accuracy of 13 centimeters will be required over a mission period of three to five years, an order of magnitude improvement over the accuracy achieved on any previous satellite mission. This investigation considers the factors that limit orbit accuracy for the Topex mission. Particular error sources considered include the geopotential, radiation pressure, and the atmospheric drag model.

  19. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, that typically span O (10 ) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q ,χBH)=(7 ,±0.4 ),(7 ,±0.6 ) , and (5 ,-0.9 ). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH≥0.5 or χBH≤-0.3 , with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit

  20. The Role of Selected Lexical Factors on Confrontation Naming Accuracy, Speed, and Fluency in Adults Who Do and Do Not Stutter

    ERIC Educational Resources Information Center

    Newman, Rochelle S.; Ratner, Nan Bernstein

    2007-01-01

    Purpose: The purpose of this study was to investigate whether lexical access in adults who stutter (AWS) differs from that in people who do not stutter. Specifically, the authors examined the role of 3 lexical factors on naming speed, accuracy, and fluency: word frequency, neighborhood density, and neighborhood frequency. If stuttering results…

  1. Nickel solution prepared for precision electroforming

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Lightweight, precision optical reflectors are made by electroforming nickel onto masters. Steps for the plating bath preparation, process control testing, and bath composition adjustments are prescribed to avoid internal stresses and maintain dimensional accuracy of the electrodeposited metal.

  2. Perspective on precision machining, polishing, and optical requirements

    SciTech Connect

    Sanger, G.M.

    1981-08-18

    While precision machining has long been applied to the manufacture of optical components, the process has generally been restricted to producing only the accurate shapes required. The purpose of this paper is to show that optical components must be considered from an optical (functional) point of view and that the manufacturing process must be selected on that basis. To fill out this perspective, simple examples of how optical components are specified with respect to form and finish are given, a comparison between optical polishing and precision machining is made, and some thoughts on which technique to select for a specific application are presented. A short discussion of future trends in accuracy, materials, and tools is included.

  3. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate quantum-electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser-spectroscopic measurements in atomic helium. In this paper, we present a review of measurements of transitions between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and, when compared with equally accurate theoretical calculations, to determine the fine-structure constant. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, the nuclear structure of the He isotopes, and in particular the nuclear charge radius, is determined using hyperfine-structure and isotope-shift measurements.

  4. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems

    NASA Technical Reports Server (NTRS)

    Li, Q.; Zamorano, L.; Jiang, Z.; Gong, J. X.; Pandya, A.; Perez, R.; Diaz, F.

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures. Copyright 1999 Wiley-Liss, Inc.

  5. Effect of optical digitizer selection on the application accuracy of a surgical localization system-a quantitative comparison between the OPTOTRAK and flashpoint tracking systems.

    PubMed

    Li, Q; Zamorano, L; Jiang, Z; Gong, J X; Pandya, A; Perez, R; Diaz, F

    1999-01-01

    Application accuracy is a crucial factor for stereotactic surgical localization systems, in which space digitization camera systems are one of the most critical components. In this study we compared the effect of the OPTOTRAK 3020 space digitization system and the FlashPoint Model 3000 and 5000 3D digitizer systems on the application accuracy for interactive localization of intracranial lesions. A phantom was mounted with several implantable frameless markers which were randomly distributed on its surface. The target point was digitized and the coordinates were recorded and compared with reference points. The differences from the reference points represented the deviation from the "true point." The root mean square (RMS) was calculated to show the differences, and a paired t-test was used to analyze the results. The results with the phantom showed that, for 1-mm sections of CT scans, the RMS was 0.76 +/- 0.54 mm for the OPTOTRAK system, 1.23 +/- 0.53 mm for the FlashPoint Model 3000 3D digitizer system, and 1.00 +/- 0.42 mm for the FlashPoint Model 5000 system. These preliminary results showed that there is no significant difference between the three tracking systems, and, from the quality point of view, they can all be used for image-guided surgery procedures. PMID:10631374
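
    The RMS figure of merit used in these digitizer comparisons can be computed as sketched below. The phantom coordinates are hypothetical; the only assumption is that each digitized target point is compared with its known reference ("true") position.

```python
import math

def rms_deviation(measured, reference):
    # Root-mean-square of the Euclidean distances between digitized
    # target points and their reference ("true") positions
    dists = [math.dist(m, r) for m, r in zip(measured, reference)]
    return math.sqrt(sum(d * d for d in dists) / len(dists))

# Hypothetical phantom marker coordinates (mm)
ref = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
meas = [(0.3, -0.2, 0.1), (10.1, 0.2, -0.3), (-0.2, 9.8, 0.2)]

rms = rms_deviation(meas, ref)
```

    In a study like the one above, this value would be computed per tracking system and the paired deviations compared with a t-test.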

  6. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial?

    PubMed Central

    El-Amrawy, Fatema

    2015-01-01

    Objectives The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to establish the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen currently available wearable devices compared with direct observation of step counts and heart rate monitoring. Methods Each participant in this study used three trackers at a time, running the three corresponding applications on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps, and each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy and Jawbone UP the lowest precision. However, the Xiaomi Mi Band offered the best overall package for its price. Conclusions The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
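
    The two metrics reported in this study, accuracy as the mean recorded count relative to the true step count and precision as the coefficient of variation of repeated trials, can be sketched as follows. The trial counts are hypothetical, invented for the example.

```python
import statistics

def tracker_stats(counts, true_steps):
    # Accuracy: mean recorded count relative to the true step count (%)
    # Precision: coefficient of variation of the repeated trials (%)
    mean = statistics.mean(counts)
    accuracy = 100.0 * (1.0 - abs(mean - true_steps) / true_steps)
    cv = 100.0 * statistics.stdev(counts) / mean
    return accuracy, cv

# Hypothetical repeated 500-step trials for a single tracker
counts = [480, 505, 492, 515, 498]
acc, cv = tracker_stats(counts, 500)
```

    A tracker whose mean is close to the true count can still be imprecise if individual trials scatter widely, which is why the study reports both numbers.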

  7. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K), an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of +/- 0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
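
    A Clausius-Clapeyron fit of the kind described can be sketched as follows. The data points are hypothetical, chosen only to be roughly consistent with the reported figures (0.0403 Torr at 87.3 K and a latent heat near 82.7 cal/g); the gas constant and the molar mass of O3 are the only physical inputs.

```python
import math

# Hypothetical (T in K, P in Torr) vapor-pressure points for liquid ozone,
# invented to lie near the reported values; not the actual measurements
data = [(85.0, 0.0217), (87.3, 0.0403), (90.0, 0.0801), (95.0, 0.2578)]

# Least-squares fit of ln P = A - B/T (integrated Clausius-Clapeyron form)
xs = [1.0 / t for t, _ in data]
ys = [math.log(p) for _, p in data]
n = len(data)
xbar = sum(xs) / n
ybar = sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
B = -slope                  # slope of ln P vs 1/T, in K
A = ybar + B * xbar

# Latent heat of vaporization per gram: L = B * R / M, with
# R = 1.987 cal/(mol K) and M(O3) = 48 g/mol
L = B * 1.987 / 48.0
```

    With these invented points the fitted latent heat comes out near the reported 82.7 cal/g, illustrating how the quoted value follows from the slope of ln P against 1/T.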

  8. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.

  9. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but it has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and the underlying assumptions de-emphasized. The limitations of seismic tomography include its limited ability to image the three-dimensional structure of the mantle in detail or to determine anomaly strengths with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have been highlighted in recent years by multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone; the repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and their locations is typically overestimated; these methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge of the size, distribution, and location of protoliths, of the metasomatism of magma sources, of the nature of partial melting and melt extraction, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more when presenting interpretations so that the viability of the models can be assessed more reliably.

  10. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured across all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to the total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variation was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant, while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with the percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
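
    The reader-bias adjustment described, a simple linear regression relating each reader's visual scores to the unbiased grid readings, can be sketched as follows. The paired assessments are hypothetical; the fit maps a reader's visual estimate to a bias-corrected value.

```python
def fit_line(x, y):
    # Ordinary least squares for y = a + b*x
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)
    a = ybar - b * xbar
    return a, b

# Hypothetical paired assessments (% leaf area injured): one reader's
# visual estimates vs. the unbiased transparent-grid readings
visual = [5.0, 12.0, 20.0, 35.0, 50.0, 70.0]
grid = [4.0, 10.0, 18.0, 30.0, 44.0, 62.0]

a, b = fit_line(visual, grid)
adjusted = [a + b * v for v in visual]   # bias-corrected visual scores
```

    A slope below 1 in this invented example corresponds to a reader who systematically overestimates injury, the kind of bias the study corrected per reader.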

  11. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link petrology-microtextures-microchemistry-isotope record but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is > 10 times self-assessed uncertainty. Systematic errors may have analytical and petrological reasons. Metrological traceability is not yet implemented (substantial advance may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of the atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. 
Finally, sample size reduction is often associated with reducing clean-up time to increase sample/blank ratio; this may be self-defeating, as "dry blanks" [4] do not represent either the isotopic composition or the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. Petrological non-ideality of natural standards does not allow a metrological uncertainty budget. Collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067 [2] Mattinson JM, 2010. Chem Geol 275, 186-198 [3] Turrin B et al, 2010. G-cubed, 11, Q0AA09 [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596 [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80 [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296 [7] Morgan LE et al, in press. G-cubed, 2011GC003719

  12. Quality, precision and accuracy of the Maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
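    The controversy comes down to the linear transfer function that converts cup rotation rate into wind speed. A minimal sketch of how two such functions diverge; the slope and offset values below are illustrative, not the actual Maximum No. 40 coefficients.

```python
# Sketch of how two linear anemometer transfer functions diverge.
# Coefficients are illustrative only, not the actual Maximum No. 40 values.

def wind_speed(freq_hz, slope, offset):
    """Convert cup rotation frequency (Hz) to wind speed (m/s)."""
    return slope * freq_hz + offset

# Two hypothetical transfer functions from different calibration methods.
tf_a = dict(slope=0.765, offset=0.35)   # e.g., wind-tunnel derived
tf_b = dict(slope=0.800, offset=0.30)   # e.g., a data-logger default

for f in (2.0, 5.0, 10.0):
    va = wind_speed(f, **tf_a)
    vb = wind_speed(f, **tf_b)
    print(f"{f:5.1f} Hz: {va:.2f} vs {vb:.2f} m/s ({100 * (vb - va) / va:+.1f}%)")
```

    A few percent difference in slope propagates directly into the reported wind speed, which is why the choice of transfer function matters to users.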

  13. Global positioning system measurements for crustal deformation: precision and accuracy.

    PubMed

    Prescott, W H; Davis, J L; Svarc, J L

    1989-06-16

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million. PMID:17820661

  14. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
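    The mixed-precision principle can be shown in miniature with its classic analogue, iterative refinement on a linear system (not SDC itself): the repeated expensive solves run in single precision, while cheap double-precision residual corrections recover full accuracy. A hedged sketch with a synthetic well-conditioned system:

```python
# Mixed-precision iterative refinement (an analogue of the mixed-precision
# SDC idea, not SDC itself): factor/solve in float32, correct with float64
# residuals. The test matrix is synthetic and well-conditioned.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50)) + 50 * np.eye(50)   # well-conditioned
x_true = rng.standard_normal(50)
b = A @ x_true

A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
for _ in range(3):
    r = b - A @ x                                     # residual in float64
    dx = np.linalg.solve(A32, r.astype(np.float32))   # cheap low-precision fix
    x += dx.astype(np.float64)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error:", err)
```

    A handful of cheap single-precision sweeps reaches double-precision accuracy, which is the same trade SDC exploits when some correction sweeps use reduced-precision function values.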

  15. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controller's display has a significant impact on the delivery accuracy at the meter fix. The 10-second rounded and 1-minute rounded DCT resolutions resulted in more accurate delivery than the 1-minute truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
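    The display-resolution effect has a simple arithmetic core: truncation always understates the delay by up to a full minute, while rounding bounds the display error at half the resolution. A toy sketch (function names are mine, not from the study):

```python
# Illustrative comparison of delay countdown timer (DCT) display resolutions:
# truncation to the minute vs rounding to the minute or to 10 seconds.

def truncate_to_minute(delay_s):
    return (delay_s // 60) * 60

def round_to_minute(delay_s):
    return round(delay_s / 60) * 60

def round_to_10s(delay_s):
    return round(delay_s / 10) * 10

delay = 95  # seconds of required delay (example value)
print(truncate_to_minute(delay))  # understates the delay by 35 s
print(round_to_minute(delay))     # off by 25 s
print(round_to_10s(delay))        # off by at most 5 s
```

    The finer the displayed resolution, the tighter the bound on the error a controller works against, consistent with the delivery-accuracy results above.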

  16. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended-mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  17. Validation of selected analytical methods using accuracy profiles to assess the impact of a Tobacco Heating System on indoor air quality.

    PubMed

    Mottier, Nicolas; Tharin, Manuel; Cluse, Camille; Crudo, Jean-René; Lueso, María Gómez; Goujon-Ginglinger, Catherine G; Jaquier, Anne; Mitova, Maya I; Rouget, Emmanuel G R; Schaller, Mathieu; Solioz, Jennifer

    2016-09-01

    Studies in environmentally controlled rooms have been used over the years to assess the impact of environmental tobacco smoke on indoor air quality. As new tobacco products are developed, it is important to determine their impact on air quality when used indoors. Before such an assessment can take place it is essential that the analytical methods used to assess indoor air quality are validated and shown to be fit for their intended purpose. Consequently, for this assessment, an environmentally controlled room was built and seven analytical methods, representing eighteen analytes, were validated. The validations were carried out with smoking machines using a matrix-based approach applying the accuracy profile procedure. The performances of the methods were compared for all three matrices under investigation: background air samples, the environmental aerosol of Tobacco Heating System THS 2.2, a heat-not-burn tobacco product developed by Philip Morris International, and the environmental tobacco smoke of a cigarette. The environmental aerosol generated by the THS 2.2 device did not have any appreciable impact on the performances of the methods. The comparison between the background and THS 2.2 environmental aerosol samples generated by smoking machines showed that only five compounds were higher when THS 2.2 was used in the environmentally controlled room. Regarding environmental tobacco smoke from cigarettes, the yields of all analytes were clearly above those obtained with the other two air sample types. PMID:27343591

  19. Precision injection molding of freeform optics

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong

    2016-08-01

    Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in the fields of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. These applications require high optical quality, with high form accuracy and low residual stresses, which challenges both optical tool-insert machining and the precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on precision injection molding. The challenges and future development trends are also discussed.

  20. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  1. Precision evaluation of calibration factor of a superconducting gravimeter using an absolute gravimeter

    NASA Astrophysics Data System (ADS)

    Feng, Jin-yang; Wu, Shu-qing; Li, Chun-jian; Su, Duo-wu; Xu, Jin-yi; Yu, Mei

    2016-01-01

    The precision of the calibration factor of a superconducting gravimeter (SG) determined using an absolute gravimeter (AG) is analyzed based on linear least-squares fitting and error propagation theory, and the factors affecting the accuracy are discussed. Accuracy can be improved by choosing an observation period in which the solid tide changes significantly or by increasing the calibration time. A simulation was carried out based on synthetic gravity tides calculated with T-soft for the observed site from Aug. 14th to Sept. 2nd, 2014. The results indicate that the precision achievable using half a day's observation data is better than 0.28% and that the precision increases exponentially with increasing peak-to-peak gravity change. Comparison of results obtained from the same observation time indicates that properly selected observation data are more beneficial for improving precision. Finally, the calibration experiment on the SG iGrav-012 is introduced and its calibration factor is determined for the first time using the AG FG5X-249. With 2.5 days' data properly selected from a solid-tide period with large tidal amplitude, the determined calibration factor of iGrav-012 is (-92.54423+/-0.13616) μGal/V (1 μGal = 10^-8 m/s^2), with a relative accuracy of about 0.15%.
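    The calibration reduces to a linear least-squares fit of AG gravity readings against SG feedback voltage. A minimal sketch with synthetic data; the tidal amplitude, noise levels, and "true" factor below are invented for illustration and are not the paper's values.

```python
# Sketch of SG calibration-factor estimation: fit AG gravity (uGal) against
# SG feedback voltage (V) by linear least squares. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 200)                      # days
tide = 80.0 * np.sin(2 * np.pi * t / 0.5175)        # synthetic tidal signal, uGal
true_factor = -92.5                                 # uGal/V (illustrative)
volts = tide / true_factor + 0.01 * rng.standard_normal(t.size)
gravity = tide + 1.0 * rng.standard_normal(t.size)  # AG readings with noise, uGal

# Fit gravity = factor * volts + offset.
A = np.vstack([volts, np.ones_like(volts)]).T
(factor, offset), res, *_ = np.linalg.lstsq(A, gravity, rcond=None)
print(f"estimated factor: {factor:.2f} uGal/V")
```

    The fit precision scales with the peak-to-peak tidal signal relative to the noise, which is why choosing a period of large tidal amplitude improves the calibration.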

  2. High-accuracy EUV reflectometer

    NASA Astrophysics Data System (ADS)

    Hinze, U.; Fokoua, M.; Chichkov, B.

    2007-03-01

    Developers and users of EUV-optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1% an EUV-source is required which provides excellent long-time stability, namely power stability, spatial stability and spectral stability. Naturally, it should be free of debris. An EUV-source particularly suitable for this task is an advanced electron-based EUV-tube. This EUV source provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-time stability of this tool. We design and set up different reflectometers using EUV-tubes for the precise characterisation of EUV-optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors and masks. Reflectivity measurements from grazing incidence to near normal incidence as well as transmission studies were realised at a precision of down to 0.1%. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.

  3. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, and vertical precision 3 cm.

  4. Quantification of long chain polyunsaturated fatty acids by gas chromatography. Evaluation of factors affecting accuracy.

    PubMed

    Schreiner, Matthias

    2005-11-18

    The accurate and reproducible analysis of long-chain polyunsaturated fatty acids (PUFA) is of growing importance. Especially for labeling purposes, clear guidelines are needed in order to achieve optimum accuracy. Since calibration standards cannot be used for method validation due to the instability of PUFAs, there is no direct way to check for the absence of systematic errors. In this study the sources of error that weaken the accuracy were evaluated using theoretical considerations and calibration standards with corrected composition. It was demonstrated that the key role for optimum accuracy lies in the optimization of the split injection system. Even when following the instructions outlined in the official methods of the American Oil Chemists' Society (AOCS), systematic errors of more than 7% can arise. Clear guidelines regarding system calibration and selection of appropriate internal standards (IS) can improve precision and accuracy significantly.
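    The arithmetic behind internal-standard quantification shows how a biased response factor propagates directly into the reported mass. A hedged sketch; peak areas, masses, and response factors below are illustrative, not values from the study.

```python
# Hedged sketch of internal-standard (IS) quantification in GC fatty acid
# analysis: analyte mass from the peak-area ratio, corrected by an empirical
# relative response factor (RRF). All numbers are illustrative.

def analyte_mass(area_analyte, area_is, mass_is, rrf):
    """mass = (A_analyte / A_IS) * m_IS / RRF."""
    return (area_analyte / area_is) * mass_is / rrf

# A miscalibrated injection system shows up as a biased RRF: the same peak
# areas then translate into a systematically wrong mass.
mass_true = analyte_mass(area_analyte=5400.0, area_is=6000.0, mass_is=1.00, rrf=0.90)
mass_biased = analyte_mass(5400.0, 6000.0, 1.00, rrf=0.97)  # ~7% RRF error
print(mass_true, mass_biased, f"{100 * (mass_biased / mass_true - 1):+.1f}%")
```

    A 7% error in the effective response factor produces roughly a 7% systematic error in the reported analyte mass, which no amount of replication will reveal.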

  5. Assessing the impact of end-member selection on the accuracy of satellite-based spatial variability models for actual evapotranspiration estimation

    NASA Astrophysics Data System (ADS)

    Long, Di; Singh, Vijay P.

    2013-05-01

    This study examines the impact of end-member (i.e., hot and cold extremes) selection on the performance and mechanisms of error propagation in satellite-based spatial variability models for estimating actual evapotranspiration, using the triangle, surface energy balance algorithm for land (SEBAL), and mapping evapotranspiration with high resolution and internalized calibration (METRIC) models. These models were applied to the soil moisture-atmosphere coupling experiment site in central Iowa on two Landsat Thematic Mapper/Enhanced Thematic Mapper Plus acquisition dates in 2002. Evaporative fraction (EF, defined as the ratio of latent heat flux to available energy) estimates from the three models at field and watershed scales were examined using varying end-members. Results show that the end-members fundamentally determine the magnitudes of EF retrievals at both field and watershed scales. The hot and cold extremes exercise a similar impact on the discrepancy between the EF estimates and the ground-based measurements, i.e., given a hot (cold) extreme, the EF estimates tend to increase with increasing temperature of cold (hot) extreme, and decrease with decreasing temperature of cold (hot) extreme. The coefficient of determination between the EF estimates and the ground-based measurements depends principally on the capability of remotely sensed surface temperature (Ts) to capture EF (i.e., depending on the correlation between Ts and EF measurements), being slightly influenced by the end-members. Varying the end-members does not substantially affect the standard deviation and skewness of the EF frequency distributions from the same model at the watershed scale. However, different models generate markedly different EF frequency distributions due to differing model physics, especially the limiting edges of EF defined in the remotely sensed vegetation fraction (fc) and Ts space. In general, the end-members cannot be properly determined because (1) they do not
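    The end-member idea these models share can be sketched as a linear interpolation of EF between the hot (dry, EF near 0) and cold (wet, EF near 1) extremes of surface temperature. This is a hedged simplification; the actual triangle, SEBAL, and METRIC formulations differ in detail, and the temperatures below are invented.

```python
# Hedged sketch of the shared end-member idea: evaporative fraction (EF)
# interpolated between a hot (EF ~ 0) and a cold (EF ~ 1) extreme of
# remotely sensed surface temperature Ts. Values are illustrative.

def ef_from_endmembers(ts, ts_hot, ts_cold):
    """Linear EF scaling between the cold (wet) and hot (dry) extremes."""
    ef = (ts_hot - ts) / (ts_hot - ts_cold)
    return min(1.0, max(0.0, ef))   # clamp to the physical range [0, 1]

ts_hot, ts_cold = 318.0, 296.0      # K, hypothetical scene extremes
for ts in (300.0, 307.0, 315.0):
    print(ts, ef_from_endmembers(ts, ts_hot, ts_cold))
```

    Shifting either extreme shifts every EF retrieval in the scene, which is why end-member selection fundamentally determines the magnitudes of the estimates.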

  6. GPS vertical axis performance enhancement for helicopter precision landing approach

    NASA Technical Reports Server (NTRS)

    Denaro, Robert P.; Beser, Jacques

    1986-01-01

    Several areas were investigated for improving vertical accuracy for a rotorcraft using the differential Global Positioning System (GPS) during a landing approach. Continuous deltaranging was studied, and the potential improvement achieved by estimating acceleration was evaluated by comparing the performance of several filters on a constant-acceleration turn and a rough landing profile: a position-velocity (PV) filter, a position-velocity-constant acceleration (PVAC) filter, and a position-velocity-turning acceleration (PVAT) filter. In overall statistics, the PVAC filter was found to be most efficient, with the more complex PVAT performing equally well. Vertical performance was not significantly different among the filters. Satellite selection algorithms based on vertical errors only (vertical dilution of precision or VDOP) and even-weighted cross-track and vertical errors (XVDOP) were tested. The inclusion of an altimeter was studied by modifying the PVAC filter to include a baro bias estimate. Improved vertical accuracy during degraded DOP conditions resulted. Flight test results for raw differential measurements, excluding filter effects, indicated that differential operation significantly improved overall navigation accuracy. A landing glidepath steering algorithm was devised which exploits the flexibility of GPS in determining precise relative position. A method for propagating the steering command over the GPS update interval was implemented.

  7. Precision linear ramp function generator

    DOEpatents

    Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.

    1986-01-01

    A ramp function generator is provided which produces a precise linear ramp function that is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  8. Precision linear ramp function generator

    DOEpatents

    Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.

    1984-08-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  9. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.
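    The delta values mentioned above follow the standard isotope-ratio convention: the per-mil deviation of a sample ratio from a reference standard. A minimal sketch; the reference ratio used below is an approximate, commonly cited value for VPDB and the sample ratio is invented.

```python
# Sketch of delta notation for isotope ratios: parts-per-thousand deviation
# of a sample ratio from a reference standard ratio.

def delta_permil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

R_VPDB = 0.011180      # approximate 13C/12C of the VPDB reference (illustrative)
r_sample = 0.011090    # hypothetical sample ratio
print(f"d13C = {delta_permil(r_sample, R_VPDB):.2f} permil")
```

    Adjusting a mixture's isotope ratio to bracket sample delta values means tuning r_sample in this sense so calibration gases sit close to the unknowns being measured.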

  10. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  11. Dynamic Precision for Electron Repulsion Integral Evaluation on Graphical Processing Units (GPUs).

    PubMed

    Luehr, Nathan; Ufimtsev, Ivan S; Martínez, Todd J

    2011-04-12

    It has recently been demonstrated that novel streaming architectures found in consumer video gaming hardware such as graphical processing units (GPUs) are well-suited to a broad range of computations including electronic structure theory (quantum chemistry). Although recent GPUs have developed robust support for double precision arithmetic, they continue to provide 2-8× more hardware units for single precision. In order to maximize performance on GPU architectures, we present a technique of dynamically selecting double or single precision evaluation for electron repulsion integrals (ERIs) in Hartree-Fock and density functional self-consistent field (SCF) calculations. We show that precision error can be effectively controlled by evaluating only the largest integrals in double precision. By dynamically scaling the precision cutoff over the course of the SCF procedure, we arrive at a scheme that minimizes the number of double precision integral evaluations for any desired accuracy. This dynamic precision scheme is shown to be effective for an array of molecules ranging in size from 20 to nearly 2000 atoms. PMID:26606344
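    The core idea, evaluating only the largest contributions in double precision, can be shown with a toy summation. This is a hedged numpy sketch of the magnitude-threshold principle, not the actual GPU ERI implementation; the data and cutoff are invented.

```python
# Toy sketch of magnitude-based dynamic precision (not the actual GPU ERI
# code): contributions above a cutoff are accumulated in float64, the many
# small ones in float32, and the two partial sums are combined.
import numpy as np

rng = np.random.default_rng(1)
vals = rng.exponential(1e-4, 100_000)    # mostly tiny "integral" magnitudes
vals[:50] = rng.uniform(0.1, 1.0, 50)    # a few large contributions

def mixed_sum(vals, cutoff):
    big = vals[vals >= cutoff]
    small = vals[vals < cutoff].astype(np.float32)
    return big.sum(dtype=np.float64) + np.float64(small.sum(dtype=np.float32))

exact = vals.sum(dtype=np.float64)
approx = mixed_sum(vals, cutoff=1e-2)
rel_err = abs(approx - exact) / exact
print(f"relative error: {rel_err:.2e}")
```

    Because the large terms dominate the total, keeping only them in double precision controls the overall error while the bulk of the work runs in the cheaper format.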

  12. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.

  13. Genomic selection & association mapping in rice: effect of trait genetic architecture, training population composition, marker number & statistical model on accuracy of rice genomic selection in elite, tropical rice breeding

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Genomic Selection (GS) is a new breeding method in which genome-wide markers are used to predict the breeding value of individuals in a breeding population. GS has been shown to improve breeding efficiency in dairy cattle and several crop plant species, and here we evaluate for the first time its ef...

  14. Precision Manipulation with Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghzarian, Hrand

    2005-01-01

    This work addresses several challenges of cooperative transport and precision manipulation. Precision manipulation requires a rigid grasp, which places a hard constraint on the relative rover formation that must be accommodated, even though the rovers cannot directly observe their relative poses. Additionally, rovers must jointly select appropriate actions based on all available sensor information. Lastly, rovers cannot act on independent sensor information, but must fuse information to move jointly; the methods for fusing information must be determined.

  15. Quality and accuracy assessment of nutrition information on the Web for cancer prevention.

    PubMed

    Shahar, Suzana; Shirley, Ng; Noah, Shahrul A

    2013-01-01

    This study aimed to assess the quality and accuracy of nutrition information about cancer prevention available on the Web. The keywords 'nutrition + diet + cancer + prevention' were submitted to the Google search engine. Out of 400 websites evaluated, 100 met the inclusion and exclusion criteria and were selected as the sample for the assessment of quality and accuracy. Overall, 54% of the studied websites had low quality, 48 and 57% had no author's name or information, respectively, 100% were not updated within 1 month during the study period and 86% did not have the Health on the Net seal. When the websites were assessed for readability using the Flesch Reading Ease test, nearly 44% of the websites were categorised as 'quite difficult'. With regard to accuracy, 91% of the websites did not precisely follow the latest WCRF/AICR 2007 recommendation. The quality scores correlated significantly with the accuracy scores (r = 0.250, p < 0.05). Professional websites (n = 22) had the highest mean quality scores, whereas government websites (n = 2) had the highest mean accuracy scores. The quality of the websites selected in this study was not satisfactory, and there is great concern about the accuracy of the information being disseminated. PMID:22957981

  16. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrade ('resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these 'flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the 'landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  17. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  18. Accuracy in parameter estimation in cluster randomized designs.

    PubMed

    Pornprasertmanit, Sunthud; Schneider, W Joel

    2014-09-01

    When planning to conduct a study, not only is it important to select a sample size that will ensure adequate statistical power, often it is important to select a sample size that results in accurate effect size estimates. In cluster-randomized designs (CRD), such planning presents special challenges. In CRD studies, instead of assigning individual objects to treatment conditions, objects are grouped in clusters, and these clusters are then assigned to different treatment conditions. Sample size in CRD studies is a function of 2 components: the number of clusters and the cluster size. Planning to conduct a CRD study is difficult because 2 distinct sample size combinations might be associated with similar costs but can result in dramatically different levels of statistical power and accuracy in effect size estimation. Thus, we present a method that assists researchers in finding the least expensive sample size combination that still results in adequate accuracy in effect size estimation. Alternatively, if researchers have a fixed budget, they can select the sample size combination that results in the most precise estimate of effect size. A free computer program that automates these procedures is available. PMID:25046449
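    The planning problem described above can be sketched as a search over (clusters per arm, cluster size) combinations. This is a hedged simplification, not the paper's exact method: it uses the familiar design-effect approximation var = 2(1 + (n - 1)ρ)/(Jn) for a standardized mean difference and an invented linear cost model.

```python
# Hedged sketch of AIPE-style planning for a cluster-randomized design:
# find the cheapest (clusters-per-arm J, cluster-size n) combination whose
# expected confidence-interval half-width for a standardized mean difference
# stays below a target. Variance and cost models are simplifications.
import math

def ci_halfwidth(J, n, rho, z=1.96):
    deff = 1.0 + (n - 1) * rho          # design effect
    return z * math.sqrt(2.0 * deff / (J * n))

def cheapest_design(rho, target_halfwidth, cost_cluster=300.0, cost_subject=20.0):
    best = None
    for J in range(4, 201):             # clusters per arm
        for n in range(2, 101):         # members per cluster
            if ci_halfwidth(J, n, rho) <= target_halfwidth:
                cost = 2 * J * (cost_cluster + n * cost_subject)
                if best is None or cost < best[0]:
                    best = (cost, J, n)
    return best

cost, J, n = cheapest_design(rho=0.05, target_halfwidth=0.25)
print(f"{J} clusters/arm of size {n}, cost {cost:.0f}")
```

    The search makes the trade-off concrete: very different (J, n) pairs satisfy the same precision target at very different costs, which is exactly why two designs of similar cost can differ dramatically in accuracy.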

  19. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    A precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.

  20. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  1. High precision triangular waveform generator

    DOEpatents

    Mueller, Theodore R.

    1983-01-01

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  2. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters to precision repeatability of performance. All lamps are not equal: there is a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past, the definition of a precision lamp was given in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of precision performance lamps must include the factors of a precision lamp; what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered, not individually but collectively. A precision performance lamp carries a statistical bias for each of these factors, taken individually and as a whole. When these are properly considered, the results can be dramatic for the system design engineer, the system production manager, and the system end-user. For the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, and (6) better product performance. The factors mentioned above are described along with their interdependent relationships, and it is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids in system design and manufacturing to build and market more market-acceptable products.

  3. A Precise Lunar Photometric Function

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.

    1996-03-01

    The Clementine multispectral dataset will enable compositional mapping of the entire lunar surface at a resolution of ~100-200 m, but a highly accurate photometric normalization is needed to achieve challenging scientific objectives such as mapping petrographic or elemental compositions. The goal of this work is to normalize the Clementine data to an accuracy of 1% for the UVVIS images (0.415, 0.75, 0.9, 0.95, and 1.0 micrometers) and 2% for NIR images (1.1, 1.25, 1.5, 2.0, 2.6, and 2.78 micrometers), consistent with radiometric calibration goals. The data will be normalized to R30, the reflectance expected at an incidence angle (i) and phase angle (alpha) of 30 degrees and emission angle (e) of 0 degrees, matching the photometric geometry of lunar samples measured at the reflectance laboratory (RELAB) at Brown University. The focus here is on the precision of the normalization, not the putative physical significance of the photometric function parameters. The 2% precision achieved is significantly better than the ~10% precision of a previous normalization.

  4. Precision volume measuring system

    SciTech Connect

    Klevgard, P.A.

    1984-11-01

    An engineering study was undertaken to calibrate and certify a precision volume measurement system that uses the ideal gas law and precise pressure measurements (of low-pressure helium) to ratio a known to an unknown volume. The constant-temperature, computer-controlled system was tested for thermodynamic instabilities, for precision (0.01%), and for bias (0.01%). Ratio scaling was used to optimize the quartz crystal pressure transducer calibration.

  5. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated by an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bits due to systematic errors.
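    The gap between an encoder's nominal resolution and its delivered accuracy, as reported in this abstract, can be expressed in equivalent bits. A small illustrative helper (not from the paper; the conversion simply asks which bit count has a least-significant step equal to the observed angular error):

    ```python
    import math

    def effective_bits(angular_error_deg):
        # Equivalent number of encoder bits whose least-significant step
        # matches a given angular error over a full 360-degree circle.
        return math.log2(360.0 / angular_error_deg)

    # A 22-bit encoder has a nominal step of 360/2**22 deg (~0.31 arcsec).
    # If systematic errors grow the effective step to 360/2**18 deg
    # (~4.9 arcsec), the delivered accuracy is only about 18 bits.
    bits_nominal = effective_bits(360.0 / 2**22)
    bits_actual = effective_bits(360.0 / 2**18)
    ```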

  6. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  7. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided with the precision measuring means secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate with the first plate being positioned between the second plate and the third plate. A fourth plate is secured to the third plate with the second plate being positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the positions of the first plate, the second plate, the third plate, and the fourth plate relative to each other.

  8. Liquid chromatography-high resolution/ high accuracy (tandem) mass spectrometry-based identification of in vivo generated metabolites of the selective androgen receptor modulator ACP-105 for doping control purposes.

    PubMed

    Thevis, Mario; Thomas, Andreas; Piper, Thomas; Krug, Oliver; Delahaut, Philippe; Schänzer, Wilhelm

    2014-01-01

    Selective androgen receptor modulators (SARMs) represent an emerging class of therapeutics which have been prohibited in sport as anabolic agents according to the regulations of the World Anti-Doping Agency (WADA) since 2008. Within the past three years, numerous adverse analytical findings with SARMs in routine doping control samples have been reported despite the missing clinical approval of these substances. Hence, preventive doping research concerning the metabolism and elimination of new therapeutic entities of the class of SARMs is vital for efficient and timely sports drug testing programs, as banned compounds are most efficiently screened when viable targets (for example, characteristic metabolites) are identified. In the present study, the metabolism of ACP-105, a novel SARM drug candidate, was studied in vivo in rats. Following oral administration, urine samples were collected over a period of seven days and analyzed for metabolic products by liquid chromatography-high resolution/high accuracy (tandem) mass spectrometry. Samples were subjected to enzymatic hydrolysis prior to liquid-liquid extraction, and a total of seven major phase-I metabolites were detected, three of which were attributed to monohydroxylated and four to bishydroxylated ACP-105. The hydroxylation sites were assigned by means of diagnostic product ions and the respective dissociation pathways of the analytes following positive or negative ionization and collisional activation, as well as by selective chemical derivatization. The identified metabolites were used as target compounds to investigate their traceability in rat elimination urine samples, and monohydroxylated and bishydroxylated species were detectable for up to four and six days post-administration, respectively.

  9. System and method for high precision isotope ratio destructive analysis

    SciTech Connect

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).
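    Precision quoted as "approximately 1% or better ... (relative standard deviation)" can be checked directly from replicate ratio measurements. A minimal sketch, with invented replicate values for illustration:

    ```python
    import statistics

    def rsd_percent(values):
        # Relative standard deviation (%RSD): sample standard deviation
        # expressed as a percentage of the mean.
        return 100.0 * statistics.stdev(values) / statistics.fmean(values)

    # Hypothetical replicate isotope-ratio measurements:
    ratios = [0.00720, 0.00718, 0.00722]
    precision_pct = rsd_percent(ratios)   # well under the 1% criterion
    ```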

  10. Precision Teaching: An Introduction.

    ERIC Educational Resources Information Center

    West, Richard P.; And Others

    1990-01-01

    Precision teaching is introduced as a method of helping students develop fluency or automaticity in the performance of academic skills. Precision teaching involves being aware of the relationship between teaching and learning, measuring student performance regularly and frequently, and analyzing the measurements to develop instructional and…

  11. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  12. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  13. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in the classification of pixels, resulting in significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  14. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298

  15. Precision and Power Grip Priming by Observed Grasping

    ERIC Educational Resources Information Center

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-01-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power…

  16. Precision spectroscopy with a frequency-comb-calibrated solar spectrograph

    NASA Astrophysics Data System (ADS)

    Doerr, H.-P.

    2015-06-01

    precision spectroscopy of the Sun and laboratory light sources. The first scientific observations aimed at measuring the accurate wavelengths of selected solar Fraunhofer lines to characterise the so-called convective blueshift and its centre-to-limb variation. The convective blueshifts were derived with respect to laboratory wavelengths obtained from spectral lamps measured with the same instrument. The measurements agree with previous studies but provide much higher accuracy. The data are only partially compatible with recently published numerical simulations. Further measurements were carried out to provide the absolute wavelengths of telluric O2 lines that are commonly used for wavelength calibration. With an accuracy of 1 m/s, these new measurements are two orders of magnitude better than existing data.

  17. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  18. Accuracy of optical dental digitizers: an in vitro study.

    PubMed

    Vandeweghe, Stefan; Vervack, Valentin; Vanhove, Christian; Dierens, Melissa; Jimbo, Ryo; De Bruyn, Hugo

    2015-01-01

    The aim of this study was to evaluate the accuracy, in terms of trueness and precision, of optical dental scanners. An experimental acrylic resin cast was created and digitized using a microcomputed tomography (microCT) scanner, which served as the reference model. Five polyether impressions were made of the acrylic resin cast to create five stone casts. Each dental digitizer (Imetric, Lava ST, Smart Optics, KaVo Everest) made five scans of the acrylic resin cast and one scan of every stone cast. The scans were superimposed and compared using metrology software. Deviations were calculated between the datasets obtained from the dental digitizers and the microCT scanner (= trueness) and between datasets from the same dental digitizer (= precision). With the exception of the Smart Optics scanner, there were no significant differences in trueness for the acrylic resin cast. For the stone casts, however, the Lava ST performed better than Imetric, which did better than the KaVo scanner. The Smart Optics scanner demonstrated the highest deviation. All digitizers demonstrated a significantly higher trueness for the acrylic resin cast compared to the plaster cast, except the Lava ST. The Lava ST was significantly more precise compared to the other scanners. Imetric and Smart Optics also demonstrated a higher level of precision compared to the KaVo scanner. All digitizers demonstrated some degree of error. Stone cast copies are less accurate because of difficulties with scanning the rougher surface or dimensional deformations caused during the production process. For complex, large-span reconstructions, a highly accurate scanner should be selected. PMID:25734714
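    The trueness/precision distinction used in this abstract can be summarized numerically. A minimal sketch, assuming each scan has been reduced to a single mean deviation (in mm) against the microCT-style reference; the values are invented:

    ```python
    import statistics
    from itertools import combinations

    def trueness(scan_deviations):
        # Trueness: average absolute deviation of each scan from the
        # reference model (how close the scans are to "true").
        return statistics.fmean(abs(d) for d in scan_deviations)

    def precision(scan_deviations):
        # Precision: average absolute difference between repeated scans
        # from the same digitizer (how well the scans agree).
        pairs = [abs(a - b) for a, b in combinations(scan_deviations, 2)]
        return statistics.fmean(pairs)

    # Hypothetical mean deviations (mm) of five repeat scans vs the reference:
    scans = [0.02, -0.01, 0.03, 0.00, 0.01]
    t = trueness(scans)    # deviation from the reference
    p = precision(scans)   # repeatability among the scans
    ```

    A digitizer can be precise but not true (tightly clustered scans, all offset from the reference), which is why both measures are reported separately.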

  19. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  20. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy, or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, composed of an oxide layer of high electrical permittivity (high-k dielectric) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  1. Overview of the national precision database for ozone

    SciTech Connect

    Mikel, D.K.

    1999-07-01

    One of the most important ambient air monitoring quality assurance indicators is the precision test. Code of Federal Regulations Title 40, Section 58 (40 CFR 58) Appendix A1 states that all automated analyzers must have precision tests performed at least once every two weeks. Precision tests can be the best indicator of data quality for the following reasons: Precision tests are performed once every two weeks, giving approximately 24 to 26 tests per year per instrument, whereas accuracy tests (audits) usually occur only 1--2 times per year. Precision tests and the subsequent statistical tests can be used to calculate the bias in a set of data. Precision tests are used to calculate 95% confidence (probability) limits for the data set. This is important because the confidence of any data point can be determined; if the authors examine any exceedances or near-exceedances of the ozone NAAQS, the confidence limits must be examined as well. Precision tests are performed by the monitoring staff, and the precision standards are certified against the internal agency primary standards. Precision data are submitted by all state and local agencies that are required to submit criteria pollutant data to the Aerometric and Information Retrieval System (AIRS) database. This subset of the AIRS database is named the Precision and Accuracy Retrieval System (PARS). In essence, the precision test is an internal test performed by the agency collecting and reporting the data.
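    The bias and 95% probability-limit calculations referred to above follow a common pattern. The sketch below is in the spirit of the 40 CFR 58 Appendix A statistics, not a verbatim implementation (the check values are invented, and the real PARS statistics aggregate over instruments and calendar quarters):

    ```python
    import statistics

    def precision_check_stats(indicated, actual, z=1.96):
        # Percent difference for each biweekly precision check:
        d = [100.0 * (i - a) / a for i, a in zip(indicated, actual)]
        bias = statistics.fmean(d)      # average percent difference
        s = statistics.stdev(d)
        # Approximate 95% probability limits for a single check:
        return bias, (bias - z * s, bias + z * s)

    # Hypothetical ozone precision checks (ppm): analyzer reading vs
    # certified precision-standard concentration.
    indicated = [0.101, 0.099, 0.102, 0.098]
    actual = [0.100, 0.100, 0.100, 0.100]
    bias, limits = precision_check_stats(indicated, actual)
    ```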

  2. Improve the ZY-3 Height Accuracy Using Icesat/glas Laser Altimeter Data

    NASA Astrophysics Data System (ADS)

    Li, Guoyuan; Tang, Xinming; Gao, Xiaoming; Zhang, Chongyang; Li, Tao

    2016-06-01

    ZY-3 is the first Chinese civilian high-resolution stereo mapping satellite, launched on 9 January 2012. The aim of the ZY-3 satellite is to obtain high-resolution stereo images and support 1:50000-scale national surveying and mapping. Although ZY-3 has very high accuracy for direct geo-location without GCPs (Ground Control Points), some GCPs are still indispensable for highly precise stereo mapping. GLAS (the Geoscience Laser Altimeter System) was carried on ICESat (Ice, Cloud and land Elevation Satellite), the first laser altimetry satellite for earth observation. GLAS played an important role in monitoring polar ice sheets and measuring land topography and vegetation canopy heights after its launch in 2003. Although the GLAS mission ended in 2009, the derived elevation dataset can still be used after selection by some criteria. In this paper, ICESat/GLAS laser altimeter data are used as height reference data to improve the ZY-3 height accuracy. A selection method is proposed to obtain high-precision GLAS elevation data. Two strategies to improve the ZY-3 height accuracy are introduced. One is conventional bundle adjustment based on the RFM and a bias-compensation model, in which the GLAS footprint data are treated as height control. The second is to correct the DSM (Digital Surface Model) directly by simple block adjustment, where the DSM is derived from the ZY-3 stereo imagery after free adjustment and dense image matching. The experimental results demonstrate that the height accuracy of ZY-3 without other GCPs can be improved to 3.0 meters after adding GLAS elevation data. Moreover, a comparison of the accuracy and efficiency of the two strategies is presented.

  3. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.
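    The balanced-bridge idea above can be illustrated with a generic divider model (a sketch, not the patented circuit): each arm is represented by an impedance magnitude, the bridge is balanced when the two dividers match, and a liquid-level change that alters one arm's impedance produces a directly measurable output voltage.

    ```python
    def bridge_output(v_in, z1, z2, z3, z4):
        # Output across the bridge diagonal: the difference between the two
        # divider midpoint voltages. Balanced when z1/z2 == z3/z4 -> zero out.
        return v_in * (z2 / (z1 + z2) - z4 / (z3 + z4))

    # Hypothetical values: the sensing line's impedance drops as rising
    # liquid (higher dielectric than air) increases its capacitance,
    # unbalancing the bridge in proportion to the level change.
    balanced = bridge_output(1.0, 100.0, 100.0, 100.0, 100.0)
    level_raised = bridge_output(1.0, 100.0, 80.0, 100.0, 100.0)
    ```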

  4. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  5. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with industry-standard high-pressure ionization chamber detector hardware. A new highly sensitive readout circuit was designed to measure charge from the virtually suspended ion-collecting electrode of the ionization chamber. A new signal-processing technique and dedicated data acquisition were tested together with the new readout. The system enabled data collection in a remote Linux-operated workstation connected to the detectors over a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to changes in radiation. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  6. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described which enables real-time accountability of the displacement feedback applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  7. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in achieving surface accuracy for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelengths of the radiated signal, taking into account such deforming influences as thermal transients and thermal gradients caused by solar heating.

  8. Influence of camera calibration conditions on the accuracy of 3D reconstruction.

    PubMed

    Poulin-Girard, Anne-Sophie; Thibault, Simon; Laurendeau, Denis

    2016-02-01

    For stereoscopic systems designed for metrology applications, the accuracy of camera calibration dictates the precision of the 3D reconstruction. In this paper, the impact of various calibration conditions on the reconstruction quality is studied using a virtual camera calibration technique and the design file of a commercially available lens. This technique enables the study of the statistical behavior of the reconstruction task in selected calibration conditions. The data show that the mean reprojection error should not always be used to evaluate the performance of the calibration process and that a low quality of feature detection does not always lead to a high mean reconstruction error.

  9. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are requested for high-precision results; however, real-life situations do not always let us collect 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, usually regional studies have been reported. In this study, we adopt a global approach and study various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, over two years in succession, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking the seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.
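    The idea of an accuracy formula with more than one term can be sketched as a least-squares fit in which one term decays with session length and a second term captures a floor (into which a seasonal bias would be folded). This is an illustrative model with synthetic data, not the authors' formulation:

    ```python
    import math

    def fit_accuracy_model(session_hours, accuracies):
        # Ordinary least squares for accuracy(T) = a / sqrt(T) + b,
        # linear in the basis [1/sqrt(T), 1].
        x = [1.0 / math.sqrt(t) for t in session_hours]
        n = len(x)
        sx, sy = sum(x), sum(accuracies)
        sxx = sum(v * v for v in x)
        sxy = sum(v * y for v, y in zip(x, accuracies))
        a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        b = (sy - a * sx) / n
        return a, b

    # Synthetic check: accuracies (mm) generated from a = 20, b = 2,
    # so the fit should recover both terms.
    sessions = [2, 4, 8, 12, 24]
    acc = [20.0 / math.sqrt(t) + 2.0 for t in sessions]
    a, b = fit_accuracy_model(sessions, acc)
    ```

    A one-term formula (b = 0) forces all of the session-length dependence into the decaying term; the second term is what lets seasonal or annual systematics show up as a floor that does not average down with longer sessions.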

  10. Precise Orbit Determination of GPS Satellites Using Phase Observables

    NASA Astrophysics Data System (ADS)

    Jee, Myung-Kook; Choi, Kyu-Hong; Park, Pil-Ho

    1997-12-01

    The accuracy of user position by GPS is heavily dependent upon the accuracy of the satellite positions, which are usually transmitted to GPS users in radio signals. The real-time satellite position information directly obtained from broadcast ephemerides has an accuracy of 3 x 10 meters, which is very unsatisfactory for measuring a 100 km baseline to an accuracy of less than a few millimeters. There are at present seven orbit analysis centers worldwide capable of generating precise GPS ephemerides, and their orbit quality is of the order of about 10 cm. Therefore, a precise orbit model and phase-processing technique were reviewed, and precise GPS ephemerides were produced by processing the phase observables of 28 global GPS stations for 1 day. The initial 6 orbit parameters and 2 solar radiation coefficients were estimated using a batch least-squares algorithm, and the final results were compared with the orbit of IGS, the International GPS Service for Geodynamics.

  11. The Future of Precision Medicine in Oncology.

    PubMed

    Millner, Lori M; Strotman, Lindsay N

    2016-09-01

    Precision medicine in oncology focuses on identifying which therapies are most effective for each patient based on genetic characterization of the cancer. Traditional chemotherapy is cytotoxic and destroys all cells that are rapidly dividing. The foundation of precision medicine is targeted therapies and selecting patients who will benefit most from these therapies. One of the newest aspects of precision medicine is liquid biopsy. A liquid biopsy includes analysis of circulating tumor cells, cell-free nucleic acid, or exosomes obtained from a peripheral blood draw. These can be studied individually or in combination and collected serially, providing real-time information as a patient's cancer changes. PMID:27514468

  12. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√(N_sim) rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√(N_sim) limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
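    As a baseline for the convergence behaviour discussed above, here is a minimal sketch of estimating a precision matrix by inverting the sample covariance; the tridiagonal "sparse" precision matrix and its dimensions are assumptions for illustration, not the paper's cosmological case:

```python
import numpy as np

rng = np.random.default_rng(1)

# True sparse precision matrix: tridiagonal, i.e. only nearest-neighbour
# couplings (a toy stand-in for an approximately sparse cosmological matrix).
p = 10
prec_true = 2.0 * np.eye(p)
i = np.arange(p - 1)
prec_true[i, i + 1] = prec_true[i + 1, i] = -0.5
cov_true = np.linalg.inv(prec_true)

def sample_precision(n_sim):
    # Invert the sample covariance of n_sim Gaussian draws -- the baseline
    # estimator whose error shrinks roughly as 1/sqrt(n_sim).
    x = rng.multivariate_normal(np.zeros(p), cov_true, size=n_sim)
    return np.linalg.inv(np.cov(x, rowvar=False))

errs = [np.linalg.norm(sample_precision(n) - prec_true, "fro")
        for n in (100, 400, 1600)]
print([round(e, 2) for e in errs])  # error shrinks as n_sim grows
```

    Exploiting sparsity, as the abstract describes, would shrink the small off-diagonal noise terms toward zero instead of keeping the full dense inverse.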

  13. Accuracy of prediction of genomic breeding values for residual feed intake and carcass and meat quality traits in Bos taurus, Bos indicus, and composite beef cattle.

    PubMed

    Bolormaa, S; Pryce, J E; Kemper, K; Savin, K; Hayes, B J; Barendse, W; Zhang, Y; Reich, C M; Mason, B A; Bunch, R J; Harrison, B E; Reverter, A; Herd, R M; Tier, B; Graser, H-U; Goddard, M E

    2013-07-01

    The aim of this study was to assess the accuracy of genomic predictions for 19 traits including feed efficiency, growth, and carcass and meat quality traits in beef cattle. The 10,181 cattle in our study had real or imputed genotypes for 729,068 SNP although not all cattle were measured for all traits. Animals included Bos taurus, Brahman, composite, and crossbred animals. Genomic EBV (GEBV) were calculated using 2 methods of genomic prediction [BayesR and genomic BLUP (GBLUP)] either using a common training dataset for all breeds or using a training dataset comprising only animals of the same breed. Accuracies of GEBV were assessed using 5-fold cross-validation. The accuracy of genomic prediction varied by trait and by method. Traits with a large number of recorded and genotyped animals and with high heritability gave the greatest accuracy of GEBV. Using GBLUP, the average accuracy was 0.27 across traits and breeds, but the accuracies between breeds and between traits varied widely. When the training population was restricted to animals from the same breed as the validation population, GBLUP accuracies declined by an average of 0.04. The greatest decline in accuracy was found for the 4 composite breeds. The BayesR accuracies were greater by an average of 0.03 than GBLUP accuracies, particularly for traits with known genes of moderate to large effect mutations segregating. The accuracies of 0.43 to 0.48 for IGF-I traits were among the greatest in the study. Although accuracies are low compared with those observed in dairy cattle, genomic selection would still be beneficial for traits that are hard to improve by conventional selection, such as tenderness and residual feed intake. BayesR identified many of the same quantitative trait loci as a genomewide association study but appeared to map them more precisely. All traits appear to be highly polygenic with thousands of SNP independently associated with each trait. PMID:23658330

  15. Delta-Beta Coupled Oscillations Underlie Temporal Prediction Accuracy.

    PubMed

    Arnal, Luc H; Doelling, Keith B; Poeppel, David

    2015-09-01

    The ability to generate temporal predictions is fundamental for adaptive behavior. Precise timing at the time-scale of seconds is critical, for instance to predict trajectories or to select relevant information. What mechanisms form the basis for such accurate timing? Recent evidence suggests that (1) temporal predictions adjust sensory selection by controlling neural oscillations in time and (2) the motor system plays an active role in inferring "when" events will happen. We hypothesized that oscillations in the delta and beta bands are instrumental in predicting the occurrence of auditory targets. Participants listened to brief rhythmic tone sequences and detected target delays while undergoing magnetoencephalography recording. Prior to target occurrence, we found that coupled delta (1-3 Hz) and beta (18-22 Hz) oscillations temporally align with upcoming targets and bias decisions towards correct responses, suggesting that delta-beta coupled oscillations underpin prediction accuracy. Subsequent to target occurrence, subjects update their decisions using the magnitude of the alpha-band (10-14 Hz) response as internal evidence of target timing. These data support a model in which the orchestration of oscillatory dynamics between sensory and motor systems is exploited to accurately select sensory information in time. PMID:24846147
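    The delta-beta coupling measure can be sketched on a synthetic signal; the signal model, the brick-wall band-pass filters, and the mean-vector-length modulation index below are illustrative choices, not the authors' analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(3)

fs = 500.0
t = np.arange(0, 10, 1 / fs)

# Synthetic signal (an assumption for illustration, not MEG data): the
# amplitude of a 20 Hz beta oscillation is locked to the phase of a
# 2 Hz delta oscillation.
delta_phase = 2 * np.pi * 2.0 * t
sig = (np.cos(delta_phase)
       + (1 + np.cos(delta_phase)) * np.sin(2 * np.pi * 20.0 * t)
       + 0.1 * rng.normal(size=t.size))

def analytic(x):
    # Analytic signal via the FFT-based Hilbert transform.
    X = np.fft.fft(x)
    h = np.zeros(x.size)
    h[0] = 1.0
    h[1:x.size // 2] = 2.0
    h[x.size // 2] = 1.0  # even-length signal
    return np.fft.ifft(X * h)

def bandpass(x, lo, hi):
    # Crude brick-wall band-pass filter in the frequency domain.
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, x.size)

phase = np.angle(analytic(bandpass(sig, 1.0, 3.0)))   # delta phase
amp = np.abs(analytic(bandpass(sig, 18.0, 22.0)))     # beta amplitude
# Mean-vector-length modulation index: near zero without coupling,
# clearly positive when beta amplitude follows delta phase.
mi = np.abs(np.mean(amp * np.exp(1j * phase)))
print(round(float(mi), 2))
```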

  16. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base unit, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  17. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  18. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular, ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at the highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  19. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  20. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  1. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  2. Realization and performance of cryogenic selection mechanisms

    NASA Astrophysics Data System (ADS)

    Aitink-Kroes, Gabby; Bettonvil, Felix; Kragt, Jan; Elswijk, Eddy; Tromp, Niels

    2014-07-01

    Within infrared large-wavelength-bandwidth instruments, the use of mechanisms for the selection of observation modes, filters, dispersing elements, pinholes or slits is inevitable. The cryogenic operating environment poses several challenges to these mechanisms, such as differential thermal shrinkage, changes in the physical properties of materials, limited use of lubrication, high feature density, and limited space. MATISSE, the mid-infrared interferometric spectrograph and imager for ESO's VLT Interferometer (VLTI) at Paranal in Chile, coherently combines the light from 4 telescopes. Within the Cold Optics Bench (COB) of MATISSE, two concepts of selection mechanisms, based on the same design principles, can be distinguished: linear selection mechanisms (sliders) and rotating selection mechanisms (wheels). Both sliders and wheels are used at a temperature of 38 Kelvin. The selection mechanisms have to provide high accuracy and repeatability. The sliders/wheels have integrated tracks that run on small, accurately located, spring-loaded precision bearings. Special indents are used for selection of the slider/wheel position. For maximum accuracy/repeatability, the guiding/selection system is separated from the actuation, in this case a cryogenic actuator inside the cryostat. The paper discusses the detailed design of the mechanisms and the final realization for the MATISSE COB. Limited lifetime and performance tests determine the accuracy, warm and cold, and the reliability/wear during the life of the instrument. The test results and further improvements to the mechanisms are discussed.

  3. Precision Casting via Advanced Simulation and Manufacturing

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A two-year program was conducted to develop and commercially implement selected casting manufacturing technologies to enable significant reductions in the costs of castings, increase the complexity and dimensional accuracy of castings, and reduce the development times for delivery of high quality castings. The industry-led R&D project was cost shared with NASA's Aerospace Industry Technology Program (AITP). The Rocketdyne Division of Boeing North American, Inc. served as the team lead with participation from Lockheed Martin, Ford Motor Company, Howmet Corporation, PCC Airfoils, General Electric, UES, Inc., University of Alabama, Auburn University, Robinson, Inc., Aracor, and NASA-LeRC. The technical effort was organized into four distinct tasks; the accomplishments are reported herein. Task 1.0 developed advanced simulation technology for core molding. Ford headed up this task. On this program, a specialized core machine was designed and built. Task 2.0 focused on intelligent process control for precision core molding. Howmet led this effort. The primary focus of these experimental efforts was to characterize the process parameters that have a strong impact on dimensional control issues of injection molded cores during their fabrication. Task 3.0 developed and applied rapid prototyping to produce near net shape castings. Rocketdyne was responsible for this task. CAD files were generated using reverse engineering, rapid prototype patterns were fabricated using SLS and SLA, and castings were produced and evaluated. Task 4.0 was aimed at technology transfer. Rocketdyne coordinated this task. Casting related technology, explored and evaluated in the first three tasks of this program, was implemented into manufacturing processes.

  4. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.
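    The setup described above, prototype images degraded by an explicit defect model feeding a trainable classifier, can be mimicked in toy form; the nearest-centroid classifier and additive-noise defect model below are illustrative stand-ins, not the three classifiers used in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two high-quality "prototype images" plus an explicit defect model
# (here simply additive pixel noise) define a stochastic source of
# labeled simulated samples.
d = 64
proto = rng.normal(size=(2, d))

def draw(n):
    # Balanced labeled sample of n simulated degraded images.
    labels = rng.permutation(np.repeat(np.arange(2), n // 2))
    return proto[labels] + 0.8 * rng.normal(size=(labels.size, d)), labels

# A nearest-centroid classifier trained on ever more simulated data:
# accuracy climbs toward its asymptote as the training set grows.
X_test, y_test = draw(2000)
accs = []
for n_train in (10, 100, 1000):
    X_tr, y_tr = draw(n_train)
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    pred = np.argmin(((X_test[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    accs.append(float((pred == y_test).mean()))
print(accs)
```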

  5. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  6. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    Prototype precision bolometer calibration bridge is manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  7. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  8. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  9. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has given multiple definitions often resulting in students learning principles that are actually false is presented with an example of the new term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  10. High Precision Astrometry

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2012-10-01

    This program uses the enhanced astrometric precision enabled by spatial scanning to calibrate the remaining obstacles to reaching <40 microarcsecond astrometry (<1 millipixel) with WFC3/UVIS by (1) improving geometric distortion, (2) calibrating the effect of breathing on astrometry, (3) calibrating the effect of CTE on astrometry, and (4) characterizing the boundaries and orientations of the WFC3 lithograph cells.

  11. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  12. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  13. Surface errors in the course of machining precision optics

    NASA Astrophysics Data System (ADS)

    Biskup, H.; Haberl, A.; Rascher, R.

    2015-08-01

    Precision optical components are usually machined by grinding and polishing in several steps with increasing accuracy. Spherical surfaces will be finished in a last step with large tools to smooth the surface. The requested surface accuracy of non-spherical surfaces can only be achieved with tools in point contact with the surface. So-called mid-spatial-frequency errors (MSFE) can accumulate with zonal processes. This work examines the formation of surface errors from grinding to polishing by analyzing the surfaces at each machining step with non-contact interferometric methods. The errors on the surface can be distinguished as described in DIN 4760, whereby 2nd- to 3rd-order errors are the so-called MSFE. By appropriate filtering of the measured data, error frequencies can be suppressed in such a manner that only defined spatial frequencies are shown in the surface plot. It can be observed that some frequencies may already be formed in the early machining steps like grinding and main-polishing. Additionally it is known that MSFE can be produced by the process itself and by other side effects. Besides a description of surface errors based on the limits of measurement technologies, different formation mechanisms for selected spatial frequencies are presented. A correction may only be possible with tools that have a lateral size below the wavelength of the error structure. The presented considerations may be used to develop proposals to handle surface errors.

  14. Precision Falling Body Experiment

    ERIC Educational Resources Information Center

    Blackburn, James A.; Koenig, R.

    1976-01-01

    Described is a simple apparatus to determine acceleration due to gravity. It utilizes direct contact switches in lieu of conventional photocells to time the fall of a ball bearing. Accuracies to better than one part in a thousand were obtained. (SL)

  15. Precision of archerfish C-starts is fully temperature compensated.

    PubMed

    Krupczynski, Philipp; Schuster, Stefan

    2013-09-15

    Hunting archerfish precisely adapt their predictive C-starts to the initial movement of dislodged prey so that turn angle and initial speed are matched to the place and time of the later point of catch. The high accuracy and the known target point of the starts allow a sensitive, straightforward assay of how temperature affects the underlying circuits. Furthermore, archerfish face rapid temperature fluctuations in their mangrove biotopes that could compromise performance. Here, we show that after a brief acclimation period the function of the C-starts was fully maintained over a range of operating temperatures: (i) full responsiveness was maintained at all temperatures, (ii) at all temperatures the fish selected accurate turns and were able to do so over the full angular range, (iii) at all temperatures the speed attained immediately after the end of the C-start was matched, with equal accuracy, to 'virtual speed', i.e. the ratio of the remaining distance to the future landing point and the remaining time. While precision was fully temperature compensated, C-start latency was not and increased by about 4 ms per 1°C of cooling. Also, kinematic aspects of the C-start were only partly temperature compensated. Above 26°C, the durations of the two major phases of the C-start were temperature compensated. At lower temperatures, however, durations increased similarly to latency. Given the accessibility of the underlying networks, the archerfish predictive start should be an excellent model to assay the degree of plasticity and functional stability of C-start motor patterns. PMID:23737557

  16. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
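    The damped least-squares correction of joint commands can be sketched as follows; the 2-link planar arm, its dimensions, and the damping factor are illustrative assumptions, not a calibrated industrial robot:

```python
import numpy as np

# Damped least-squares (DLS) correction of joint commands: given the pose
# error dx of the end-effector, compute an additive joint update that stays
# bounded even near singular configurations (damping regularizes J J^T).
def dls_correction(J, dx, damping=0.01):
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(m), dx)

# Toy 2-link planar arm: forward kinematics and Jacobian.
def fk(q, l1=1.0, l2=1.0):
    # End-effector position for joint angles q = (q1, q2).
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=1.0, l2=1.0):
    # Jacobian of fk with respect to the joint angles.
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# Iterate additive DLS corrections until the end-effector reaches the target,
# without ever forming an explicit inverse-kinematics solution.
q = np.array([0.3, 0.8])
target = np.array([1.2, 0.9])
for _ in range(50):
    q = q + dls_correction(jac(q), target - fk(q))
print(fk(q))  # close to the commanded target
```

    A weight matrix, as discussed in the abstract, would enter by replacing the identity in the damped term to trade positioning against orientation accuracy.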

  17. Precise measurements of hyperfine components in the spectrum of molecular iodine

    SciTech Connect

    Sansonetti, C.J.

    1996-05-01

    Absolute wave numbers with a typical uncertainty of 1 MHz (95% confidence) were measured for 102 hyperfine-structure components of ¹²⁷I₂. The data cover the range 560-656 nm, with no gaps over 50 cm⁻¹. The spectra were observed using Doppler-free frequency modulation spectroscopy with a tunable cw laser. The laser was locked to selected iodine components and its wave number measured with a high-precision Fabry-Perot wavemeter. Accuracy is confirmed by the good agreement of 9 of the lines with previous results from other laboratories. These measurements provide a well-distributed set of precise reference lines for this spectral region.

  18. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass flowmeters was reviewed to determine what flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flow rates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with a densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular-momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex-precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.
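    The microprocessor compensation amounts to simple arithmetic: volumetric flow from the turbine signal, corrected for viscosity, then multiplied by density. The function below is a hedged sketch; the k-factor, reference viscosity, and linear correction gain are invented for illustration, not values from the study:

```python
# Illustrative compensation arithmetic: a turbine meter reports a pulse
# frequency proportional to volumetric flow, a densi-viscometer supplies
# density and viscosity, and a microprocessor applies a viscosity-dependent
# calibration before converting to mass flow. All coefficients are
# hypothetical assumptions.
def mass_flow(pulse_hz, density, viscosity,
              k_factor=1200.0, visc_ref=1.2e-3, visc_gain=0.02):
    # Volumetric flow from turbine frequency (k_factor = pulses per litre).
    q_vol = pulse_hz / k_factor                      # litres/s
    # Hypothetical linear viscosity correction about a reference point.
    corr = 1.0 + visc_gain * (viscosity - visc_ref) / visc_ref
    return q_vol * corr * density                    # kg/s (density in kg/L)

print(mass_flow(pulse_hz=600.0, density=0.80, viscosity=1.5e-3))
```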

  19. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ˜ 66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
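    The error-ellipsoid description above follows directly from the variance-covariance matrix: the eigenvectors give the axis directions and the square roots of the eigenvalues give the semi-axis lengths. A minimal sketch with illustrative numbers (centimetre-level azimuth and range precision, metre-level cross-range precision; not the paper's values):

```python
import numpy as np

# Assumed 3-D positioning variance-covariance matrix (azimuth, range,
# cross-range variances in m^2); diagonal here for simplicity.
cov = np.diag([0.02**2, 0.06**2, 4.0**2])

# Eigendecomposition: eigenvectors = ellipsoid axis directions,
# sqrt(eigenvalues) = 1-sigma semi-axis lengths.
eigval, eigvec = np.linalg.eigh(cov)
semi_axes = np.sqrt(eigval)
print(semi_axes)          # strongly elongated ("cigar-shaped") ellipsoid

ratios = semi_axes / semi_axes.min()
print(np.round(ratios))   # axis-length ratios, cf. the 1/3/213 in the text
```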

  20. High Precision and Real Time Tracking of Low Earth Orbiters With GPS: Case Studies With TOPEX/POSEIDON and EUVE

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P.; Bertiger, Willy I.; Gold, Kenn; Guinn, Joseph; Reichert, Angie; Watkins, Michael

    1995-01-01

TOPEX/POSEIDON carries a dual-frequency, 6-channel GPS receiver, while EUVE has a 12-channel, single-frequency receiver. Flying at an altitude of 1334 km, TOPEX/POSEIDON performs precise ocean altimetry, which demands the highest possible accuracy in determining the radial orbit component in post-processing. Radial RMS accuracies of about 2 cm were realized using reduced dynamic tracking techniques. In this approach, orbit errors due to force model errors are substantially reduced by exploiting the geometric strength of GPS to solve for a set of stochastic forces. On EUVE, the emphasis was on evaluating real-time positioning techniques with a single-frequency receiver. Real-time 3D accuracies of 15 m in the presence of Selective Availability were demonstrated, validated by comparison with a post-processed differential GPS truth orbit believed accurate to about 1 m.

  1. Precise tracking of remote sensing satellites with the Global Positioning System

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P.; Wu, Sien-Chong; Wu, Jiun-Tsong; Thornton, Catherine L.

    1990-01-01

    The Global Positioning System (GPS) can be applied in a number of ways to track remote sensing satellites at altitudes below 3000 km with accuracies of better than 10 cm. All techniques use a precise global network of GPS ground receivers operating in concert with a receiver aboard the user satellite, and all estimate the user orbit, GPS orbits, and selected ground locations simultaneously. The GPS orbit solutions are always dynamic, relying on the laws of motion, while the user orbit solution can range from purely dynamic to purely kinematic (geometric). Two variations show considerable promise. The first one features an optimal synthesis of dynamics and kinematics in the user solution, while the second introduces a novel gravity model adjustment technique to exploit data from repeat ground tracks. These techniques, to be demonstrated on the Topex/Poseidon mission in 1992, will offer subdecimeter tracking accuracy for dynamically unpredictable satellites down to the lowest orbital altitudes.

  2. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.
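A time history file made of straight-line segments can be evaluated by simple piecewise-linear interpolation. The sketch below uses hypothetical epochs and offsets, not SAO data:

```python
# Hypothetical time-history file: (epoch in hours, station clock minus UTC
# in microseconds) at the breakpoints of the straight-line segments.
segments = [(0.0, 2.0), (24.0, 3.5), (48.0, 3.1)]

def clock_offset(t, history):
    """Linearly interpolate station-clock-minus-UTC within a segment."""
    for (t0, o0), (t1, o1) in zip(history, history[1:]):
        if t0 <= t <= t1:
            return o0 + (o1 - o0) * (t - t0) / (t1 - t0)
    raise ValueError("epoch outside the time history file")

offset = clock_offset(12.0, segments)  # halfway along the first segment
```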

  3. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as type of soil block used (monolithic or reconstructed), drainage (by gravity or vacuum, or a maintained water table), or weighing versus non-weighing designs. In 2004, experimental investigations were set up to assess the impact of different farming systems on groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low-input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: The lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to preserve the natural soil structure and pore system. Tracer experiments were performed to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters; the results show that such effects can be neglected. Precision load cells are used to continuously record the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent.
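The quoted equivalence between 0.05 kg and 0.05 mm of water follows directly from the density of water acting over the 1 m2 lysimeter surface; a one-line sanity check:

```python
def water_equivalent_mm(delta_mass_kg, area_m2=1.0, rho_water=1000.0):
    """Water depth (mm) corresponding to a lysimeter mass change:
    1 kg of water spread over 1 m^2 forms a 1 mm layer."""
    return delta_mass_kg / (rho_water * area_m2) * 1000.0

depth = water_equivalent_mm(0.05)  # the stated 0.05 kg resolution
```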

  4. A passion for precision

    ScienceCinema

    None

    2016-07-12

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  5. A passion for precision

    SciTech Connect

    2010-05-19

For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser(s): L. Alvarez-Gaume / PH-TH. Note: Tea & coffee will be served at 16:00.

  6. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  7. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, which presents guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, that is, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  8. Precision orbit determination of altimetric satellites

    NASA Astrophysics Data System (ADS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-11-01

The ability to determine accurate global sea level variations is important to both detection and understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements are only recently possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well defined terrestrial reference frame. SLR tracking plays a significant role in precision orbit determination of altimetric satellites. Recent examples are the use of SLR as the primary tracking system for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long term absolute sea level measurements. The U.S. Navy's Geosat mission is equipped with only Doppler beacons and lacks laser retroreflectors. Geosat orbits, even those computed using the full 40-station Tranet tracking network, exhibit significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  9. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  10. Precise clock synchronization protocol

    NASA Astrophysics Data System (ADS)

    Luit, E. J.; Martin, J. M. M.

    1993-12-01

    A distributed clock synchronization protocol is presented which achieves a very high precision without the need for very frequent resynchronizations. The protocol tolerates failures of the clocks: clocks may be too slow or too fast, exhibit omission failures and report inconsistent values. Synchronization takes place in synchronization rounds as in many other synchronization protocols. At the end of each round, clock times are exchanged between the clocks. Each clock applies a convergence function (CF) to the values obtained. This function estimates the difference between its clock and an average clock and corrects its clock accordingly. Clocks are corrected for drift relative to this average clock during the next synchronization round. The protocol is based on the assumption that clock reading errors are small with respect to the required precision of synchronization. It is shown that the CF resynchronizes the clocks with high precision even when relatively large clock drifts are possible. It is also shown that the drift-corrected clocks remain synchronized until the end of the next synchronization round. The stability of the protocol is proven.
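The abstract does not specify which convergence function (CF) the protocol uses. As an illustrative stand-in, the classic fault-tolerant midpoint CF (in the style of Lundelius-Lynch averaging, not necessarily the authors' CF) discards the f most extreme readings on each side before estimating the common clock:

```python
def fault_tolerant_midpoint(readings, f):
    """Estimate a common clock value from exchanged readings while
    tolerating up to f faulty clocks: drop the f lowest and f highest
    readings, then take the midpoint of the extremes that remain."""
    if len(readings) <= 2 * f:
        raise ValueError("need more than 2f readings")
    trimmed = sorted(readings)[f:len(readings) - f]
    return (trimmed[0] + trimmed[-1]) / 2.0

# One clock's correction = CF estimate minus its own reading.
readings = [10.02, 9.98, 10.01, 10.00, 99.9]  # one wildly faulty clock
estimate = fault_tolerant_midpoint(readings, f=1)
```

The drift correction described above would then slew the clock toward this estimate over the next synchronization round rather than stepping it at once.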

  11. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-07-01

    The Large Electron-Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while SUSY provides an excellent candidate for dark matter. In addition, SUSY removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  12. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-09-01

The Large Electron-Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results were the precise measurements of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  13. Standardization of radon measurements. 2. Accuracy and proficiency testing

    SciTech Connect

    Matuszek, J.M.

    1990-01-01

The accuracy of in situ environmental radon measurement techniques is reviewed and new data for charcoal canister, alpha-track (track-etch) and electret detectors are presented. Deficiencies reported at the 1987 meeting in Wurenlingen, Federal Republic of Germany, for measurements using charcoal detectors are confirmed by the new results. Accuracy and precision of the alpha-track measurements were better than in 1987. Electret detectors appear to provide a convenient, accurate, and precise system for the measurement of radon concentration. The need for a comprehensive, blind proficiency-testing program is discussed.

  14. Precision segmented reflector figure control system architecture.

    NASA Astrophysics Data System (ADS)

    Mettler, E.; Eldred, D.; Briggs, C.; Kiceniuk, T.; Agronin, M.

    1989-09-01

This paper describes an advanced technology figure control system for a generic class of large space-based segmented reflector telescopes. The selection of sensing, actuation, and mechanism approaches is driven primarily by the high precision and very low mass and power goals for the reflector system.

  15. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
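The workhorse for integrals with endpoint singularities in this line of work is tanh-sinh (double-exponential) quadrature, whose nodes crowd doubly exponentially toward the endpoints; the abstract does not name the rule, so treat this as an illustrative sketch. It runs at double precision only, whereas the scheme described above works to thousands of digits and in parallel:

```python
import math

def tanh_sinh(f, h=2.0 ** -5):
    """Tanh-sinh quadrature of f over (-1, 1):
    nodes  x_k = tanh((pi/2) * sinh(k*h)),
    weights w_k = h * (pi/2) * cosh(k*h) / cosh((pi/2)*sinh(k*h))**2.
    Endpoint singularities are damped by the doubly exponential weights."""
    total = 0.0
    k = 0
    while True:
        t = k * h
        u = (math.pi / 2.0) * math.sinh(t)
        if u > 17.0:  # tanh(u) rounds to 1.0 in doubles; tail is negligible
            break
        x = math.tanh(u)
        w = h * (math.pi / 2.0) * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x) if k == 0 else w * (f(x) + f(-x))
        k += 1
    return total

# The singular integral of -ln(s) over (0, 1), mapped onto (-1, 1);
# its exact value is 1, despite the infinite integrand at the endpoint.
approx = tanh_sinh(lambda x: -0.5 * math.log((x + 1.0) / 2.0))
```

Halving h roughly doubles the number of correct digits, which is what makes the rule attractive for extreme-precision work: the node/weight recurrence is cheap even at thousands of digits.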

  16. Using Genetic Distance to Infer the Accuracy of Genomic Prediction.

    PubMed

    Scutari, Marco; Mackay, Ian; Balding, David

    2016-09-01

The prediction of phenotypic traits using high-density genomic data has many applications such as the selection of plants and animals of commercial interest, and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either FST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics. PMID:27589268
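The reported approximately linear decay can be summarized by regressing predictive correlation on FST across resampled training/target splits. The numbers below are invented for illustration, not the paper's data:

```python
import numpy as np

# Hypothetical (FST, predictive correlation) pairs, one per resampled
# training/target split.
fst = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
corr = np.array([0.62, 0.55, 0.49, 0.41, 0.35])

# Linear decay model: corr ~ slope * FST + intercept. The slope estimates
# how fast predictive accuracy is lost per unit of genetic distance,
# which feeds directly into deciding when to recalibrate the model.
slope, intercept = np.polyfit(fst, corr, 1)
```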

  17. Using Genetic Distance to Infer the Accuracy of Genomic Prediction

    PubMed Central

    Scutari, Marco; Mackay, Ian

    2016-01-01

The prediction of phenotypic traits using high-density genomic data has many applications such as the selection of plants and animals of commercial interest, and it is expected to play an increasing role in medical diagnostics. Statistical models used for this task are usually tested using cross-validation, which implicitly assumes that new individuals (whose phenotypes we would like to predict) originate from the same population the genomic prediction model is trained on. In this paper we propose an approach based on clustering and resampling to investigate the effect of increasing genetic distance between training and target populations when predicting quantitative traits. This is important for plant and animal genetics, where genomic selection programs rely on the precision of predictions in future rounds of breeding. Therefore, estimating how quickly predictive accuracy decays is important in deciding which training population to use and how often the model has to be recalibrated. We find that the correlation between true and predicted values decays approximately linearly with respect to either FST or mean kinship between the training and the target populations. We illustrate this relationship using simulations and a collection of data sets from mice, wheat and human genetics. PMID:27589268

  18. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  19. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
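The phase-II refinement described above is the standard regression estimator for two-phase (double) sampling. The sketch below borrows the abstract's sample sizes (160 phase I clusters, 80 ground-truthed) but all data values are simulated, not the AVRI measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Phase I: photo-interpreted error proportion at 160 clusters (cheap).
photo_all = rng.uniform(0.2, 0.5, size=160)

# Phase II: ground truth collected at a subsample of 80 clusters (expensive);
# simulated as photo value plus bias and noise.
sub = rng.choice(160, size=80, replace=False)
photo_sub = photo_all[sub]
ground_sub = photo_sub + rng.normal(0.05, 0.03, size=80)

# Two-phase regression estimator: adjust the phase-II mean by the regression
# of ground truth on photo interpretation, evaluated at the phase-I mean.
b = np.polyfit(photo_sub, ground_sub, 1)[0]
error_est = ground_sub.mean() + b * (photo_all.mean() - photo_sub.mean())
```

The regression adjustment exploits the cheap phase-I sample to shrink the variance of the expensive phase-II estimate, which is how the project met its ±10 percent precision target at modest ground-survey cost.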

  20. Taking the Measure of the Universe : Precision Astrometry with SIM PlanetQuest

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.; Shao, Michael; Tanner, Angelle M.; Allen, Ronald J.; Beichman, Charles A.; Boboltz, David; Catanzarite, Joseph H.; Chaboyer, Brian C.; Ciardi, David R.; Edberg, Stephen J.; Fey, Alan L.; Fischer, Debra A.; Gelino, Christopher R.; Gould, Andrew P.; Grillmair, Carl; Henry, Todd J.; Johnston, Kathryn V.; Johnston, Kenneth J.; Jones, Dayton L.; Kulkarni, Shrinivas R.; Law, Nicholas M.; Majewski, Steven R.; Makarov, Valeri V.; Marcy, Geoffrey W.; Meier, David L.

    2008-01-01

Precision astrometry at microarcsecond accuracy has application to a wide range of astrophysical problems. This paper is a study of the science questions that can be addressed using an instrument with flexible scheduling that delivers parallaxes at about 4 microarcseconds (μas) on targets as faint as V = 20, and differential accuracy of 0.6 μas on bright targets. The science topics are drawn primarily from the Team Key Projects, selected in 2000, for the Space Interferometry Mission PlanetQuest (SIM PlanetQuest). We use the capabilities of this mission to illustrate the importance of the next level of astrometric precision in modern astrophysics. SIM PlanetQuest is currently in the detailed design phase, having completed in 2005 all of the enabling technologies needed for the flight instrument. It will be the first space-based long baseline Michelson interferometer designed for precision astrometry. SIM will contribute strongly to many astronomical fields including stellar and galactic astrophysics, planetary systems around nearby stars, and the study of quasar and AGN nuclei. Using differential astrometry SIM will search for planets with masses as small as an Earth orbiting in the 'habitable zone' around the nearest stars, and could discover many dozen if Earth-like planets are common. It will characterize the multiple-planet systems that are now known to exist, and it will be able to search for terrestrial planets around all of the candidate target stars in the Terrestrial Planet Finder and Darwin mission lists. It will be capable of detecting planets around young stars, thereby providing insights into how planetary systems are born and how they evolve with time. Precision astrometry allows the measurement of accurate dynamical masses for stars in binary systems. SIM will observe significant numbers of very high- and low-mass stars, providing stellar masses to 1%, the accuracy needed to challenge physical models. Using precision proper motion

  1. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity.

  2. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  3. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

Recent electroweak precision measurements from e+e- and p-pbar colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  4. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  5. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  6. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  7. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  8. Development and simulation of microfluidic Wheatstone bridge for high-precision sensor

    NASA Astrophysics Data System (ADS)

    Shipulya, N. D.; Konakov, S. A.; Krzhizhanovskaya, V. V.

    2016-08-01

    In this work we present the results of analytical modeling and 3D computer simulation of microfluidic Wheatstone bridge, which is used for high-accuracy measurements and precision instruments. We propose and simulate a new method of a bridge balancing process by changing the microchannel geometry. This process is based on the “etching in microchannel” technology we developed earlier (doi:10.1088/1742-6596/681/1/012035). Our method ensures a precise control of the flow rate and flow direction in the bridge microchannel. The advantage of our approach is the ability to work without any control valves and other active electronic systems, which are usually used for bridge balancing. The geometrical configuration of microchannels was selected based on the analytical estimations. A detailed 3D numerical model was based on Navier-Stokes equations for a laminar fluid flow at low Reynolds numbers. We investigated the behavior of the Wheatstone bridge under different process conditions; found a relation between the channel resistance and flow rate through the bridge; and calculated the pressure drop across the system under different total flow rates and viscosities. Finally, we describe a high-precision microfluidic pressure sensor that employs the Wheatstone bridge and discuss other applications in complex precision microfluidic systems.
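
The hydraulic analogue of the electrical bridge balance can be sketched numerically. The snippet below is a minimal illustration, not the paper's simulation: the resistance values and inlet pressure are invented, and each channel is reduced to a lumped hydraulic resistance (Q = ΔP/R, valid for laminar flow at low Reynolds numbers).

```python
import numpy as np

def bridge_flow(R1, R2, R3, R4, R5, P_in=1.0):
    """Flow through the bridge channel (R5) of a hydraulic Wheatstone
    bridge with arm resistances R1..R4, inlet pressure P_in, outlet at 0.
    Each channel obeys the laminar lumped relation Q = dP / R."""
    # Unknowns: pressures PA, PB at the two arm midpoints.
    # Mass balance at A: (P_in - PA)/R1 - PA/R2 - (PA - PB)/R5 = 0
    # Mass balance at B: (P_in - PB)/R3 - PB/R4 + (PA - PB)/R5 = 0
    A = np.array([[1/R1 + 1/R2 + 1/R5, -1/R5],
                  [-1/R5, 1/R3 + 1/R4 + 1/R5]])
    b = np.array([P_in / R1, P_in / R3])
    PA, PB = np.linalg.solve(A, b)
    return (PA - PB) / R5

# Balanced (R1/R2 == R3/R4): no flow through the bridge channel.
print(bridge_flow(1.0, 2.0, 2.0, 4.0, 1.0))   # ≈ 0
# "Etching" one arm (here lowering R4) unbalances the bridge.
print(bridge_flow(1.0, 2.0, 2.0, 3.0, 1.0))   # nonzero
```

Balancing by etching then corresponds to changing one arm's geometry until the bridge-channel flow crosses zero.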

  9. Precision and power grip priming by observed grasping.

    PubMed

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-11-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right or left hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power device/left-hand). The observed hand was making either accurate or inaccurate precision or power grasps, and participants signalled the accuracy of the observed grip by making one or other response depending on instructions. Responses were made faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand was performing the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of automatic imitation coding of observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.

  10. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material prevents the shock wave in the barrel from predetonating the HE pellet before the flyer arrives. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  11. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material prevents the shock wave in the barrel from predetonating the HE pellet before the flyer arrives. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  12. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  13. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e+e− linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  14. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
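
The precision effect reported here follows directly from conjugate Bayesian updating: posterior precision is the sum of prior and data precisions, so an informative prior always tightens the posterior, while accuracy depends on where the prior is centred. A minimal normal-normal sketch (all numbers invented, not the paper's tree-mortality data):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma, n = 0.10, 0.05, 20           # invented "mortality rate" example
y = rng.normal(true_mu, sigma, n)

def posterior(mu0, tau0, y, sigma):
    """Conjugate normal-normal update: prior N(mu0, tau0^2) on the mean,
    observations N(mu, sigma^2) with sigma known. Precisions add."""
    prec = 1 / tau0**2 + len(y) / sigma**2
    mean = (mu0 / tau0**2 + y.sum() / sigma**2) / prec
    return mean, prec ** -0.5                # posterior mean and sd

vague_mean, vague_sd = posterior(0.0, 10.0, y, sigma)    # essentially flat prior
info_mean, info_sd = posterior(0.12, 0.02, y, sigma)     # informative prior
print(vague_sd, info_sd)   # the informative prior gives the smaller sd
```

Whether the informative prior also improves accuracy depends on how far its centre (here 0.12) sits from the truth, which is exactly the trade-off the study evaluates empirically.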

  15. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS, has been previously used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. References: O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.

  16. Accuracy of laser beam center and width calculations.

    PubMed

    Mana, G; Massa, E; Rovera, A

    2001-03-20

    The application of lasers in high-precision measurements and the demand for accuracy make the plane-wave model of laser beams unsatisfactory. Measurements of the variance of the transverse components of the photon impulse are essential for wavelength determination. Accuracy evaluation of the relevant calculations is thus an integral part of the assessment of the wavelength of stabilized-laser radiation. We present a propagation-of-error analysis on variance calculations when digitized intensity profiles are obtained by means of silicon video cameras. Image clipping criteria are obtained that maximize the accuracy of the computed result.
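
The variance calculation in question is the second moment of the digitized intensity profile. A minimal sketch, assuming a background-subtracted camera image (the paper's clipping criteria are not reproduced here); the 4σ width convention follows the common ISO 11146 second-moment definition:

```python
import numpy as np

def beam_moments(I):
    """Centroid and second-moment (D4-sigma style) widths of a 2-D
    intensity profile I[y, x], as used for beam width estimation."""
    I = np.clip(np.asarray(I, float), 0, None)  # negative noise pixels bias the variance
    y, x = np.indices(I.shape)
    P = I.sum()
    cx, cy = (I * x).sum() / P, (I * y).sum() / P
    vx = (I * (x - cx) ** 2).sum() / P          # variance of the transverse profile
    vy = (I * (y - cy) ** 2).sum() / P
    return cx, cy, 4 * np.sqrt(vx), 4 * np.sqrt(vy)

# Synthetic Gaussian beam: the 4-sigma widths should recover 4 * 15 = 60 px.
yy, xx = np.indices((200, 200))
I = np.exp(-((xx - 120) ** 2 + (yy - 90) ** 2) / (2 * 15.0 ** 2))
cx, cy, dx, dy = beam_moments(I)
print(cx, cy, dx, dy)   # ≈ 120, 90, 60, 60
```

Because the second moment weights pixels by distance squared, the result is dominated by where the image frame is clipped relative to the beam tails, which is why the clipping criteria the paper derives matter.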

  17. Development of a precision large deployable antenna

    NASA Astrophysics Data System (ADS)

    Iwata, Yoji; Yamamoto, Kazuo; Noda, Takahiko; Tamai, Yasuo; Ebisui, Takashi; Miura, Koryo; Takano, Tadashi

    This paper describes the results of a study of a precision large deployable antenna for the space VLBI satellite 'MUSES-B'. An antenna with high gain and pointing accuracy is required for the mission objective. The frequency bands required are 22, 5 and 1.6 GHz. The required aperture diameter of the reflector is 10 meters. A displaced-axis Cassegrain antenna is adopted, with a mesh reflector formed on a tension truss concept. Analysis shows that an aperture efficiency of 60 percent at 22.15 GHz and a surface accuracy of 0.5 mm rms are achievable. A one-fourth scale model of the reflector has been assembled in order to verify the design and to clarify problems in the manufacturing and assembly processes.

  18. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, the present study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes using the centroid method, the proposed microscope comprises a high-speed rotating optical diffuser, which reduces the variation of the image centroid position and consequently improves the focusing response. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents an alternative solution for both existing and emerging industrial applications of automated vision inspection.

  19. Visual inspection reliability for precision manufactured parts

    SciTech Connect

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  20. Precision Electroforming For Optical Disk Manufacturing

    NASA Astrophysics Data System (ADS)

    Rodia, Carl M.

    1985-04-01

    Precision electroforming in the replication of optical discs is discussed, with an overview of electroforming technology capabilities, limitations, and tolerance criteria. The use of expendable and reusable mandrels is treated, along with techniques for resist master preparation and processing. A review of applications and common reasons for success and failure is offered. Problems such as tensile/compressive stress, roughness and flatness are discussed. Advice is given on approaches, classic and novel, for remedying and avoiding specific problems. An abridged process description of optical memory disk mold electroforming is presented, from resist master through metallization and electroforming. Emphasis is placed on methods of achieving accuracy and quality assurance.

  1. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50GHz. Our ultimate goal is to extend the observing frequency limit up to 115GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are now targeting the effects of wind as the next, currently uncompensated, source of error.
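
Solving simultaneously for gravitational, thermal and wind effects amounts to a joint least-squares fit of pointing residuals against one regressor per physical cause, so that correlated effects are not mis-attributed to each other. A schematic sketch with synthetic data and invented coefficients (not the actual GBT model terms):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
el = rng.uniform(0.2, 1.4, n)        # elevation (rad)
dT = rng.normal(0.0, 1.5, n)         # structural temperature gradient (K)
wind = rng.normal(0.0, 4.0, n)       # wind speed component (m/s)

# Synthetic pointing residuals: gravity + thermal + wind terms plus noise.
resid = 8.0 * np.cos(el) + 1.2 * dT + 0.3 * wind + rng.normal(0, 0.5, n)

# Design matrix with one column per physical effect, fitted jointly.
X = np.column_stack([np.cos(el), dT, wind])
coef, *_ = np.linalg.lstsq(X, resid, rcond=None)
print(coef)   # ≈ [8.0, 1.2, 0.3]
```

Fitting the terms one at a time would fold any elevation-temperature correlation in the data into the wrong coefficient; the joint solve avoids that, which is the point of the approach described above.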

  2. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  3. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower-precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the difference between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  4. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  5. Precision laser cutting

    SciTech Connect

    Kautz, D.D.; Anglin, C.D.; Ramos, T.J.

    1990-01-19

    Many materials that are otherwise difficult to fabricate can be cut precisely with lasers. This presentation discusses the advantages and limitations of laser cutting for refractory metals, ceramics, and composites. Cutting in these materials was performed with a 400-W, pulsed Nd:YAG laser. Important cutting parameters such as beam power, pulse waveforms, cutting gases, travel speed, and laser coupling are outlined. The effects of process parameters on cut quality are evaluated. Three variables are used to determine the cut quality: kerf width, slag adherence, and metallurgical characteristics of recast layers and heat-affected zones around the cuts. Results indicate that ductile materials with good coupling characteristics (such as stainless steel alloys and tantalum) cut well. Materials lacking one or both of these properties (such as tungsten and ceramics) are difficult to cut without proper part design, stress relief, or coupling aids. 3 refs., 2 figs., 1 tab.

  6. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...

  7. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...
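
The precision definition quoted in the two CFR excerpts above is easy to operationalize. A small sketch, assuming "standard deviation(s)" means the sample standard deviation and using invented analyzer readings:

```python
import statistics

def analyzer_precision(responses):
    """Precision as defined in 40 CFR 91.314: 2.5 times the standard
    deviation of 10 repetitive responses to a reference gas (taken here
    as the sample standard deviation)."""
    if len(responses) != 10:
        raise ValueError("the definition specifies 10 repetitive responses")
    return 2.5 * statistics.stdev(responses)

# Invented repeated span-gas readings from one analyzer:
readings = [100.1, 99.8, 100.0, 100.2, 99.9, 100.1, 100.0, 99.7, 100.3, 99.9]
print(analyzer_precision(readings))   # ≈ 0.456
```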

  8. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely due to the fact that it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature- and pressure-regulated chamber, locked to a Zeeman-stabilized HeNe laser, we measure changes in frequency of our diode laser to ~1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short- and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: J. Cariou and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). J. Coker et al., J. Opt. Soc. Am. B {28}, 2934 (2011). J. Verges et al., Physica Scripta {25}, 338 (1982). Ph. Courteille et al., Appl. Phys. B {59}, 187 (1994). T.J. Scholl et al., J. Opt. Soc. Am. B {22}, 1128 (2005).

  9. Balancing Precision and Risk: Should Multiple Detection Methods Be Analyzed Separately in N-Mixture Models?

    PubMed Central

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against

  10. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    USGS Publications Warehouse

    Graves, Tabitha A.; Royle, J. Andrew; Kendall, Katherine C.; Beier, Paul; Stetz, Jeffrey B.; Macleod, Amy C.

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against

  11. Balancing precision and risk: should multiple detection methods be analyzed separately in N-mixture models?

    PubMed

    Graves, Tabitha A; Royle, J Andrew; Kendall, Katherine C; Beier, Paul; Stetz, Jeffrey B; Macleod, Amy C

    2012-01-01

    Using multiple detection methods can increase the number, kind, and distribution of individuals sampled, which may increase accuracy and precision and reduce cost of population abundance estimates. However, when variables influencing abundance are of interest, if individuals detected via different methods are influenced by the landscape differently, separate analysis of multiple detection methods may be more appropriate. We evaluated the effects of combining two detection methods on the identification of variables important to local abundance using detections of grizzly bears with hair traps (systematic) and bear rubs (opportunistic). We used hierarchical abundance models (N-mixture models) with separate model components for each detection method. If both methods sample the same population, the use of either data set alone should (1) lead to the selection of the same variables as important and (2) provide similar estimates of relative local abundance. We hypothesized that the inclusion of 2 detection methods versus either method alone should (3) yield more support for variables identified in single method analyses (i.e. fewer variables and models with greater weight), and (4) improve precision of covariate estimates for variables selected in both separate and combined analyses because sample size is larger. As expected, joint analysis of both methods increased precision as well as certainty in variable and model selection. However, the single-method analyses identified different variables and the resulting predicted abundances had different spatial distributions. We recommend comparing single-method and jointly modeled results to identify the presence of individual heterogeneity between detection methods in N-mixture models, along with consideration of detection probabilities, correlations among variables, and tolerance to risk of failing to identify variables important to a subset of the population. The benefits of increased precision should be weighed against
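
The joint model described in these records marginalizes a shared latent abundance over both detection methods. A bare-bones sketch of that likelihood (site counts and parameters invented; a real analysis would also model covariates on abundance):

```python
import math

def nmix_loglik(y1, y2, lam, p1, p2, n_max=60):
    """Joint log-likelihood of a two-method N-mixture model: at each
    site i the latent abundance N_i ~ Poisson(lam) is shared by both
    methods, whose counts are Binomial(N_i, p1) and Binomial(N_i, p2).
    N_i is marginalised out by summing up to n_max."""
    def pois(k):                       # Poisson pmf
        return math.exp(-lam) * lam**k / math.factorial(k)
    def binom(k, n, p):                # Binomial pmf (0 when k > n)
        return math.comb(n, k) * p**k * (1 - p)**(n - k) if k <= n else 0.0
    ll = 0.0
    for a, b in zip(y1, y2):
        site = sum(pois(n) * binom(a, n, p1) * binom(b, n, p2)
                   for n in range(n_max + 1))
        ll += math.log(site)
    return ll

# Hypothetical hair-trap and bear-rub counts at five sites:
y_hair, y_rub = [3, 2, 4, 1, 3], [1, 2, 1, 0, 2]
print(nmix_loglik(y_hair, y_rub, lam=4.0, p1=0.6, p2=0.3))
```

Fitting each method separately corresponds to dropping one of the binomial factors inside the sum, which is the modeling choice whose risks and benefits the paper weighs.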

  12. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    This paper reviews the history and development of real-time DGPS services before describing the design of a high-accuracy commercial GPS augmentation system and service currently delivering precise positioning products to users over a wide area. The infrastructure and system are explained in relation to users' need for high-accuracy, high-integrity positioning. A comparison of the different data-delivery techniques outlines the technical approach taken, and examples of the real-time system's performance in various regions and modes illustrate the currently achievable accuracies. Having established the current GPS-based situation, the paper reviews the potential of the Galileo system. Following brief contextual information on the Galileo project, core system and services, it identifies possible key applications and the main user communities for sub-decimetre precise positioning. The paper addresses the Galileo and modernized GPS signals in space that are relevant to future commercial precise positioning and discusses the implications for positioning performance. An outline of the proposed architecture is described, together with pointers towards a successful implementation. Central to this discussion is an assessment of the likely evolution of system infrastructure and user equipment, prospects for new applications, and their effect upon the business case for precise positioning services.

  13. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within-session and between-session head movements. Systematic errors in matching to the MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were <1.5 mm between sessions. We corroborated these scalp-based estimates by looking at the MEG data recorded over a 6-month period. We found that the between-session sensor variability of the subject's evoked response was of the order of the within-session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor-level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of co-registration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  14. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec(2), the measured tracking error was about 0.1 mrad. The basic components of this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10(-10) W/m(2); the ultimate nighttime sensitivity is approximately 3 x 10(-14) W/m(2). Experimental tests were performed to evaluate both the dynamic characteristics of the system and the system sensitivity. Dynamic performance was evaluated using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity was checked using an efficient retroreflector mounted on a light aircraft; this aircraft was tracked out to a maximum range of 15 km, which confirmed the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite. In passive tracking of a +7.5 magnitude star, the signal-to-noise ratio indicates that it should be possible to track a +12.5 magnitude star.
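    The quoted error behavior can be summarized in a two-parameter model: a noise floor for low-acceleration targets plus a term linear in angular acceleration. The constants are read off the abstract; the linear model itself is our simplification.

    ```python
    # Back-of-the-envelope model of the reported tracker errors
    # (constants from the abstract; linear model is a simplification).
    FLOOR_RAD = 25e-6            # ~25 microrad RMS floor, low-acceleration targets
    K = 0.1e-3 / 0.6             # rad of error per (rad/s^2): 0.1 mrad at 0.6 rad/s^2

    def tracking_error(alpha):
        """Predicted pointing error (rad) at angular acceleration alpha (rad/s^2)."""
        return max(FLOOR_RAD, K * alpha)

    err_at_launch = tracking_error(0.6)   # should reproduce the quoted 0.1 mrad
    ```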

  16. High accuracy fuel flowmeter

    NASA Technical Reports Server (NTRS)

    1986-01-01

    All three flowmeter concepts (vortex, dual turbine, and angular momentum) were subjected to experimental and analytical investigation to determine their potential prototype performance. The three concepts were subjected to a comprehensive rating: eight parameters of performance were evaluated on a zero-to-ten scale, weighted, and summed. The relative ratings of the vortex, dual turbine, and angular momentum flowmeters were 0.71, 1.00, and 0.95, respectively. The dual turbine flowmeter concept was selected as the primary candidate and the angular momentum flowmeter as the secondary candidate for prototype development and evaluation.

  17. High precision Woelter optic calibration facility

    SciTech Connect

    Morales, R.I.; Remington, B.A.; Schwinn, T.

    1994-05-02

    We have developed an off-line facility for very precise characterization of the reflectance and spatial resolution of the grazing incidence Woelter Type 1 x-ray optics used at Nova. The primary component of the facility is a high brightness, "point" x-ray source consisting of a focussed DC electron beam incident onto a precision manipulated target/pinhole array. The data are recorded with a selection of detectors. For imaging measurements we use direct exposure x-ray film modules or an x-ray CCD camera. For energy-resolved reflectance measurements, we use lithium-drifted silicon detectors and a proportional counter. An in situ laser alignment system allows precise location and rapid periodic alignment verification of the x-ray point source, the statically mounted Woelter optic, and the chosen detector.

  18. Manufacturing Precise, Lightweight Paraboloidal Mirrors

    NASA Technical Reports Server (NTRS)

    Hermann, Frederick Thomas

    2006-01-01

    A process for fabricating a precise, diffraction-limited, ultra-lightweight, composite-material (matrix/fiber) paraboloidal telescope mirror has been devised. Unlike the traditional process of fabrication of heavier glass-based mirrors, this process involves a minimum of manual steps and subjective judgment. Instead, this process involves objectively controllable, repeatable steps; hence, this process is better suited for mass production. Other processes that have been investigated for fabrication of precise composite-material lightweight mirrors have resulted in print-through of fiber patterns onto reflecting surfaces, and have not provided adequate structural support for maintenance of stable, diffraction-limited surface figures. In contrast, this process does not result in print-through of the fiber pattern onto the reflecting surface and does provide a lightweight, rigid structure capable of maintaining a diffraction-limited surface figure in the face of changing temperature, humidity, and air pressure. The process consists mainly of the following steps: 1. A precise glass mandrel is fabricated by conventional optical grinding and polishing. 2. The mandrel is coated with a release agent and covered with layers of a carbon-fiber composite material. 3. The outer surface of the outer layer of the carbon-fiber composite material is coated with a surfactant chosen to provide for the proper flow of an epoxy resin to be applied subsequently. 4. The mandrel as thus covered is mounted on a temperature-controlled spin table. 5. The table is heated to a suitable temperature and spun at a suitable speed as the epoxy resin is poured onto the coated carbon-fiber composite material. 6. The surface figure of the optic is monitored and adjusted by use of traditional Ronchi, Foucault, and interferometric optical measurement techniques while the speed of rotation and the temperature are adjusted to obtain the desired figure. 
The proper selection of surfactant, speed of rotation

  19. High accuracy wavelength calibration for a scanning visible spectrometer

    SciTech Connect

    Scotti, Filippo; Bell, Ronald E.

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ~0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
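    A toy version of the calibration fit: with a sine drive, the grating equation makes wavelength nearly linear in the drive position, so spectrometer parameters (just offset and dispersion here; the real fit has more) can be estimated by least squares against known lamp lines. The drive counts and line wavelengths below are invented for illustration.

    ```python
    # Least-squares fit of a two-parameter wavelength calibration,
    # lambda = c1 * counts + c0, to known calibration lines
    # (illustrative values, not the paper's data or full parameter set).
    import numpy as np

    drive_counts = np.array([1000.0, 2000.0, 3000.0, 4000.0])       # motor positions
    lamp_lines   = np.array([4046.56, 4358.33, 4670.10, 4981.87])   # known lines (Angstrom)

    c1, c0 = np.polyfit(drive_counts, lamp_lines, 1)   # dispersion, offset
    predicted = c1 * drive_counts + c0
    rms_residual = float(np.sqrt(np.mean((predicted - lamp_lines) ** 2)))
    ```

    In practice the residual RMS over many lines and grating positions is the figure compared against the quoted ~0.25 Å and ~0.005 Å accuracies.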

  1. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries as yet unexplained by current physical models. In the coming decades, even more ambitious scientific endeavors will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to premature conclusions about current cosmological theories. It can also be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. 
This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis

  2. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. Analysis of the time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions in either the horizontal plane or the vertical axis. A second, stationary GPS antenna at a distance of several meters simultaneously collected high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.
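    The ground-truth comparison can be sketched numerically: subtract the commanded table motion from the GPS position series and examine the residual error spectrum. The data below are synthetic 5 Hz samples with hypothetical amplitudes; the real apparatus replays recorded earthquake profiles.

    ```python
    # Synthetic sketch of the "ground truth" comparison for GPS seismology.
    import numpy as np

    fs = 5.0                                    # 5 Hz GPS solutions
    t = np.arange(0, 600, 1 / fs)               # 10 minutes of data
    truth = 0.02 * np.sin(2 * np.pi * 0.1 * t)  # commanded 2 cm, 0.1 Hz table motion
    rng = np.random.default_rng(0)
    gps = truth + 0.003 * rng.standard_normal(t.size)  # hypothetical 3 mm noise

    residual = gps - truth                      # GPS error time series
    spectrum = np.abs(np.fft.rfft(residual)) ** 2 / (fs * t.size)  # error spectrum
    freqs = np.fft.rfftfreq(t.size, d=1 / fs)
    rms = float(np.sqrt(np.mean(residual ** 2)))
    ```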

  3. A Precision Variable, Double Prism Attenuator for CO(2) Lasers.

    PubMed

    Oseki, T; Saito, S

    1971-01-01

    A precision, double prism attenuator for CO(2) lasers, calibrated by its gap capacitance, was constructed to evaluate its possible use as a standard for attenuation measurements. It was found that the accuracy was about 0.1 dB with a dynamic range of about 40 dB.

  4. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  5. EOS mapping accuracy study

    NASA Technical Reports Server (NTRS)

    Forrest, R. B.; Eppes, T. A.; Ouellette, R. J.

    1973-01-01

    Studies were performed to evaluate various image positioning methods for possible use in the earth observatory satellite (EOS) program and other earth resource imaging satellite programs. The primary goal is the generation of geometrically corrected and registered images, positioned with respect to the earth's surface. The EOS sensors which were considered were the thematic mapper, the return beam vidicon camera, and the high resolution pointable imager. The image positioning methods evaluated consisted of various combinations of satellite data and ground control points. It was concluded that EOS attitude control system design must be considered as a part of the image positioning problem for EOS, along with image sensor design and ground image processing system design. Study results show that, with suitable efficiency for ground control point selection and matching activities during data processing, extensive reliance should be placed on use of ground control points for positioning the images obtained from EOS and similar programs.

  6. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.
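    Clock stability of the kind compared here (rubidium gas cells versus hydrogen masers) is conventionally quoted as Allan deviation. A minimal non-overlapping estimator from fractional-frequency samples might look like the following (noise level and sample counts are illustrative):

    ```python
    # Minimal non-overlapping Allan deviation estimator (illustrative).
    import numpy as np

    def allan_deviation(y, tau0, m):
        """Allan deviation at averaging time m*tau0 from fractional-frequency
        samples y taken every tau0 seconds (non-overlapping estimator)."""
        n = len(y) // m
        avg = np.mean(np.reshape(y[:n * m], (n, m)), axis=1)  # m-sample averages
        return float(np.sqrt(0.5 * np.mean(np.diff(avg) ** 2)))

    rng = np.random.default_rng(1)
    white_fm = 1e-12 * rng.standard_normal(100000)  # white frequency noise
    adev_1 = allan_deviation(white_fm, 1.0, 1)      # tau = 1 s
    adev_100 = allan_deviation(white_fm, 1.0, 100)  # tau = 100 s
    ```

    For white frequency noise the Allan deviation falls as 1/sqrt(tau), so the 100 s value should be about ten times smaller than the 1 s value.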

  7. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees manufacturing of devices with critical dimensions of <20 nm. Overlay specifications in the single-digit nanometer range demand nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), at the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods for measuring and characterizing the CTE behavior of ZERODUR® in order to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® lithography roadmap for CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long-term CTE behavior of ZERODUR® components, to optimize the dimensional stability of precision positioning devices. CTE data of several low-thermal-expansion materials are compared with regard to their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of ±10 ppb/K over the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
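    The scale of the claim is easy to check with dL = L·CTE·ΔT: at the quoted ±10 ppb/K tolerance, a 100 mm component held to within 0.1 K moves by only a tenth of a nanometer. The component length and temperature excursion are our illustrative choices, not values from the paper.

    ```python
    # Worst-case thermal expansion at the quoted CTE tolerance
    # (component length and temperature excursion are illustrative).
    cte = 10e-9          # +/-10 ppb/K tolerance, per kelvin
    length_m = 0.1       # 100 mm positioning component
    delta_t = 0.1        # 0.1 K temperature excursion

    delta_l_m = length_m * cte * delta_t   # dL = L * CTE * dT
    delta_l_nm = delta_l_m * 1e9           # -> 0.1 nm
    ```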

  8. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  9. Environment Assisted Precision Magnetometry

    NASA Astrophysics Data System (ADS)

    Cappellaro, P.; Goldstein, G.; Maze, J. R.; Jiang, L.; Hodges, J. S.; Sorensen, A. S.; Lukin, M. D.

    2010-03-01

    We describe a method to enhance the sensitivity of magnetometry and achieve nearly Heisenberg-limited precision measurement using a novel class of entangled states. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas strongly coupled to the sensor qubit; it does not depend on the exact values of the couplings (allowing the use of disordered systems) and is resilient to decoherence. As a specific example we consider electronic spins in the solid state, where the ancillary system is associated with the surrounding spin bath. The conventional approach has been to consider these spins only as a source of decoherence and to adopt decoupling schemes to mitigate their effects. Here we describe novel control techniques that transform the environment spins into a resource used to amplify the sensor spin's response to weak external perturbations, while maintaining the beneficial effects of dynamical decoupling sequences. We discuss specific applications to improve magnetic sensing with diamond nanocrystals, using one nitrogen-vacancy center spin coupled to nitrogen electronic spins.
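    The scaling argument behind "nearly Heisenberg-limited" can be stated in two lines: N independent probes improve the minimum detectable field by 1/sqrt(N) over a single probe, while a sensor strongly coupled to N ancillas can approach a 1/N scaling. This is the idealized textbook scaling, ignoring decoherence and the exact coupling values.

    ```python
    # Idealized sensitivity scalings (normalized units; no decoherence).
    import math

    def delta_b_sql(n, t):
        """Standard quantum limit: n independent probes, interrogation time t."""
        return 1.0 / (math.sqrt(n) * t)

    def delta_b_heisenberg(n, t):
        """Entanglement-enhanced (Heisenberg-like) scaling with n ancillas."""
        return 1.0 / (n * t)

    # With 100 ancillas the Heisenberg-like scheme gains a further factor sqrt(100) = 10
    gain = delta_b_sql(100, 1.0) / delta_b_heisenberg(100, 1.0)
    ```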

  10. Precision gravimetric survey at the conditions of urban agglomerations

    NASA Astrophysics Data System (ADS)

    Sokolova, Tatiana; Lygin, Ivan; Fadeev, Alexander

    2014-05-01

    internal convergence is independent of the transportation mode. In practice, measurements differ only in processing time and the appropriate number of readings. Importantly, the internal convergence is an individual attribute of the particular device; for the investigated gravimeters it varies from ±3 up to ±8 μGal. The stability of the gravimeter base locations also varied: the most stable basis (minimum microseisms) in this experiment was a concrete pedestal, the least stable a point on the 28th floor. There is no direct dependence of the measurement variance on the external noise level; moreover, the dispersion between different gravimeters is minimal at the point with the highest microseisms. Conclusions: the quality of measurements with the modern high-precision Scintrex CG-5 Autograv gravimeters is determined by the stability of the particular device, its standard deviation, and the degree of nonlinearity of its drift. Although these parameters of the tested gravimeters generally corresponded to the factory specifications, for surveys requiring an accuracy of ±2-5 μGal the best gravimeters should be selected. A practical gravimetric survey with such accuracy allowed reliable determination of the position of technical communication boxes and an underground walkway in the urban area, indicated by gravity minima with amplitudes of 6-8 μGal and widths of 1-15 meters. The holes' parameters obtained as the result of interpretation are well aligned with a priori data.

  11. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer-derived densities, taking into account ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE-derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer-derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities, in particular catalog maintenance. Generally, density is the largest error source in satellite drag calculations, and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models, and accelerometer-derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy, and accelerometer-derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available from Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. 
The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer
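    The linear weighted blending used to stitch overlapping solutions can be sketched as a cross-fade across the overlap region. The density levels and overlap length below are synthetic; the real solutions are 14-hour arcs.

    ```python
    # Sketch of linear weighted blending of two overlapping 1-D solutions.
    import numpy as np

    def blend(old, new, n_overlap):
        """Stitch two solutions whose last/first n_overlap samples overlap,
        ramping the weight linearly from the old solution to the new one."""
        w = np.linspace(0.0, 1.0, n_overlap)
        mixed = (1 - w) * old[-n_overlap:] + w * new[:n_overlap]
        return np.concatenate([old[:-n_overlap], mixed, new[n_overlap:]])

    old = np.full(10, 4.0e-12)   # kg/m^3, made-up density level of arc 1
    new = np.full(10, 6.0e-12)   # made-up density level of arc 2
    stitched = blend(old, new, 4)
    ```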

  12. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method based on mathematical morphology operators was developed to improve the astrometric precision for space debris. Variable structuring elements along multiple directions are adopted for image transformation, and all the resultant images are stacked to obtain the final result. To investigate its efficiency, trial observations were made of Global Positioning System satellites, and the improvement in astrometric accuracy was assessed by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both the objects and the reference stars is distinctly improved. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
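    The core idea, grey-scale openings with linear structuring elements along several directions followed by stacking, can be sketched with scipy. The paper's operator details and parameters differ; the image here is synthetic.

    ```python
    # Directional morphological openings, stacked by pixelwise maximum
    # (illustrative sketch; not the paper's exact operator or parameters).
    import numpy as np
    from scipy import ndimage

    img = np.zeros((32, 32))
    img[16, 8:24] = 1.0            # a horizontal streak (trailed object)
    img[4, 4] = 1.0                # an isolated noise spike

    footprints = [
        np.ones((1, 5), bool),               # 0-degree linear element
        np.ones((5, 1), bool),               # 90-degree linear element
        np.eye(5, dtype=bool),               # 45-degree linear element
        np.fliplr(np.eye(5, dtype=bool)),    # 135-degree linear element
    ]
    opened = [ndimage.grey_opening(img, footprint=f) for f in footprints]
    stacked = np.max(opened, axis=0)   # streak survives, isolated spike is removed
    ```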

  13. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  14. Analysis of precision in tumor tracking based on optical positioning system during radiotherapy.

    PubMed

    Zhou, Han; Shen, Junshu; Li, Bing; Chen, Junting; Zhu, Xixu; Ge, Yun; Wang, Yongjian

    2016-03-19

    Tumor tracking is performed during patient set-up and monitoring of respiratory motion in radiotherapy. In the clinical setting, several types of equipment are used for set-up, such as the Electronic Portal Imaging Device (EPID) and Cone Beam CT (CBCT). Technically, an optical positioning system tracks the offset between infrared-reflective balls placed on the body and the machine isocenter. Our objective is to compare the clinical positioning error of patient setup between CBCT and the Optical Positioning System (OPS), and to evaluate the traditional positioning systems and OPS based on our proposed approach to patient positioning. In our experiments, a phantom was used, and we measured its setup errors in three directions. Specifically, the deviations in the left-to-right (LR), anterior-to-posterior (AP) and inferior-to-superior (IS) directions were measured by vernier caliper on graph paper using a Varian linear accelerator. We then verified the accuracy of OPS based on this experimental study. To verify the accuracy of the phantom experiment, 40 patients were selected for our radiotherapy experiment, and to illustrate the precision of the optical positioning system, we designed clinical trials using EPID. From our radiotherapy procedure, we conclude that OPS has higher precision than conventional positioning methods, and is a comparatively fast and efficient positioning method with respect to the CBCT guidance system. PMID:27257880
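    The per-axis comparison reduces to summarizing setup deviations as a systematic component (mean) and a random component (standard deviation) in the LR, AP and IS directions. The numbers below are invented for illustration, not the study's data.

    ```python
    # Per-axis setup-error summary: systematic (mean) and random (SD) components
    # (hypothetical per-patient deviations, e.g. OPS minus CBCT, in mm).
    import statistics

    deviations_mm = {
        "LR": [0.4, -0.2, 0.1, 0.3, -0.1],
        "AP": [0.2, 0.0, -0.3, 0.1, 0.2],
        "IS": [-0.1, 0.2, 0.4, 0.0, -0.2],
    }
    summary = {axis: (statistics.mean(v), statistics.stdev(v))
               for axis, v in deviations_mm.items()}
    ```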

  15. Precision positioning of earth orbiting remote sensing systems

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.; Yunck, T. P.; Wu, S. C.

    1987-01-01

    Decimeter tracking accuracy is sought for a number of precise earth sensing satellites to be flown in the 1990's. This accuracy can be achieved with techniques which use the Global Positioning System (GPS) in a differential mode. A precisely located global network of GPS ground receivers and a receiver aboard the user satellite are needed, and all techniques simultaneously estimate the user and GPS satellite states. Three basic navigation approaches include classical dynamic, wholly nondynamic, and reduced dynamic or hybrid formulations. The first two are simply special cases of the third, which promises to deliver subdecimeter accuracy for dynamically unpredictable vehicles down to the lowest orbit altitudes. The potential of these techniques for tracking and gravity field recovery will be demonstrated on NASA's Topex satellite beginning in 1991. Applications to the Shuttle, Space Station, and dedicated remote sensing platforms are being pursued.

  16. High-precision triangular-waveform generator

    DOEpatents

    Mueller, T.R.

    1981-11-14

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching the current at the current-source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
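The behaviour described, an integrator ramped by switched constant current sources (dV/dt = I/C) with independently selectable slopes, can be sketched numerically; the parameter names and values below are illustrative:

```python
def triangle_wave(v_low, v_high, rate_up, rate_down, dt, n_steps):
    """Numerically integrate a switched constant slope -- the analogue of
    switching constant current sources into an integrator (dV/dt = I/C) --
    to produce a triangular waveform with independently programmable
    ascending and descending ramp rates."""
    v, slope, samples = v_low, rate_up, []
    for _ in range(n_steps):
        samples.append(v)
        v += slope * dt
        if v >= v_high:
            v, slope = v_high, -rate_down  # reached top: start descending ramp
        elif v <= v_low:
            v, slope = v_low, rate_up      # reached bottom: start ascending ramp
    return samples

wave = triangle_wave(v_low=0.0, v_high=1.0, rate_up=1.0, rate_down=2.0,
                     dt=0.01, n_steps=500)
```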

  17. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are currently far from individually precise, partly due to the rarity and heterogeneity of the disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  18. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  19. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments, motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding, are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work to extend the IPJR paradigm to building 3D structures at micron precision are also summarized.

  20. METHOD 530 DETERMINATION OF SELECT SEMIVOLATILE ORGANIC CHEMICALS IN DRINKING WATER BY SOLID PHASE EXTRACTION AND GAS CHROMATOGRAPHY/ MASS SPECTROMETRY (GC/MS)

    EPA Science Inventory

    1.1. This is a gas chromatography/mass spectrometry (GC/MS) method for the determination of selected semivolatile organic compounds in drinking waters. Accuracy and precision data have been generated in reagent water, and in finished ground and surface waters for the compounds li...

  1. Precision mass measurements of highly charged ions

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, A. A.; Bale, J. C.; Brunner, T.; Chaudhuri, A.; Chowdhury, U.; Ettenauer, S.; Frekers, D.; Gallant, A. T.; Grossheim, A.; Lennarz, A.; Mane, E.; MacDonald, T. D.; Schultz, B. E.; Simon, M. C.; Simon, V. V.; Dilling, J.

    2012-10-01

    The reputation of Penning trap mass spectrometry for accuracy and precision was established with singly charged ions (SCI); however, the achievable precision and resolving power can be extended by using highly charged ions (HCI). The TITAN facility has demonstrated these enhancements for long-lived (T1/2>=50 ms) isobars and low-lying isomers, including ^71Ge^21+, ^74Rb^8+, ^78Rb^8+, and ^98Rb^15+. The Q-value of ^71Ge enters into the neutrino cross section, and the use of HCI reduced the resolving power required to distinguish the isobars from 3 x 10^5 to 20. The precision achieved in the measurement of ^74Rb^8+, a superallowed β-emitter and candidate to test the CVC hypothesis, rivaled earlier measurements with SCI in a fraction of the time. The 111.19(22) keV isomeric state in ^78Rb was resolved from the ground state. Mass measurements of neutron-rich Rb and Sr isotopes near A = 100 aid in determining the r-process pathway. Advanced ion manipulation techniques and recent results will be presented.
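The resolving power quoted for the singly charged case follows from R = m/Δm, where Δm for A = 71 isobars is set by the decay Q-value; a back-of-envelope check (the Q-value of about 232.5 keV is an assumption for illustration, not taken from the abstract):

```python
AMU_KEV = 931494.10          # 1 atomic mass unit in keV/c^2
mass_kev = 71 * AMU_KEV      # approximate mass of the A = 71 isobars
q_value_kev = 232.5          # assumed 71Ge electron-capture Q-value (keV)

# Resolving power R = m / Δm needed to separate 71Ge from 71Ga:
resolving_power = mass_kev / q_value_kev
print(f"{resolving_power:.2e}")  # on the order of 3e5, as quoted for SCI
```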

  2. High precision redundant robotic manipulator

    DOEpatents

    Young, K.K.D.

    1998-09-22

    A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space is disclosed. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each embodiment utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven-degree-of-freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven-degree-of-freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, which, together with properly designed servo controllers, provides an end-point repeatability of less than 10 microns. 3 figs.

  3. High precision redundant robotic manipulator

    DOEpatents

    Young, Kar-Keung David

    1998-01-01

    A high precision redundant robotic manipulator for overcoming constraints imposed by obstacles or by a highly congested work space. One embodiment of the manipulator has four degrees of freedom and another embodiment has seven degrees of freedom. Each embodiment utilizes a first selective compliant assembly robot arm (SCARA) configuration to provide high stiffness in the vertical plane and a second SCARA configuration to provide high stiffness in the horizontal plane. The seven-degree-of-freedom embodiment also utilizes kinematic redundancy to provide the capability of avoiding obstacles that lie between the base of the manipulator and the end effector or link of the manipulator. These additional three degrees of freedom are added at the wrist link of the manipulator to provide pitch, yaw and roll. The seven-degree-of-freedom embodiment uses one revolute joint per degree of freedom. For each of the revolute joints, a harmonic gear coupled to an electric motor is introduced, which, together with properly designed servo controllers, provides an end-point repeatability of less than 10 microns.

  4. Accuracy Evaluation of Electron-Probe Microanalysis as Applied to Semiconductors and Silicates

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2003-01-01

    An evaluation of precision and accuracy will be presented for representative semiconductor and silicate compositions. The accuracy of electron-probe analysis depends on high precision measurements and instrumental calibration, as well as on correction algorithms and fundamental parameter data sets. A critical assessment of correction algorithms and mass absorption coefficient data sets can be made using the alpha factor technique. Alpha factor analysis can be used to identify systematic errors in data sets and in the microprobe standards used for calibration.

  5. The application of unmanned aerial vehicle to precision agriculture: Chlorophyll, nitrogen, and evapotranspiration estimation

    NASA Astrophysics Data System (ADS)

    Elarab, Manal

    Precision agriculture (PA) is an integration of a set of technologies aiming to improve productivity and profitability while sustaining the quality of the surrounding environment. It is a process that relies heavily on high-resolution information to enable greater precision in the management of inputs to production. This dissertation explored the use of multispectral high-resolution aerial imagery acquired by an unmanned aerial system (UAS) platform to serve precision agriculture applications. The UAS acquired imagery in the visual, near-infrared and thermal-infrared spectra with a resolution of less than a meter (15-60 cm). This research focused on developing two models to estimate cm-scale chlorophyll content and leaf nitrogen. To achieve the estimations, a well-established machine learning algorithm (relevance vector machine) was used. The two models were trained on a dataset of in situ leaf chlorophyll and leaf nitrogen measurements, and the machine learning algorithm selected the most appropriate bands and indices for building regressions with the highest prediction accuracy. In addition, this research explored the use of the high-resolution imagery to estimate crop evapotranspiration (ET) at 15 cm resolution. A comparison was also made between the high-resolution ET and Landsat-derived ET over two different crop covers (field crops and vineyards) to assess the advantages of UAS-based high-resolution ET. This research aimed to bridge the information embedded in the high-resolution imagery with ground crop parameters to provide site-specific information to assist farmers adopting precision agriculture. The framework of this dissertation consisted of three components that provide tools to support precision agriculture operational decisions. In general, the results for each of the methods developed were satisfactory, relevant, and encouraging.
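As a concrete example of the kind of spectral index such a band-selection procedure can draw on, the widely used Normalized Difference Vegetation Index combines the red and near-infrared reflectances; this is a generic illustration, not the dissertation's actual feature set:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    Higher values generally indicate denser, greener vegetation."""
    return (nir - red) / (nir + red)

# Hypothetical reflectances for a healthy crop pixel.
print(round(ndvi(nir=0.5, red=0.1), 3))  # → 0.667
```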

  6. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  7. Enabling Dark Energy and Beyond Science with Precise Absolute Photometry

    NASA Astrophysics Data System (ADS)

    Deustua, Susana E.; Hines, D. C.; Bohlin, R.; Gordon, K. D.

    2014-01-01

    We have obtained WFC3/IR observations of 15 carefully selected stars with the immediate objective of establishing their Absolute Physical Flux (ABF), and an ultimate goal of achieving the sub-1% absolute photometric accuracies required by Dark Energy science with JWST and other facilities. Even with the best data available, the current determination of ABFs is plagued by reliance on the Vega photometric system, which is known to be problematic primarily because Vega is a pole-on rapid rotator with an infrared excess from its circumstellar disk, which makes it difficult to model. Vega is also far too bright for large-aperture telescopes. In an effort to remedy these difficulties, teams from the National Institute of Standards and Technology (NIST), the University of New Mexico, Johns Hopkins University and STScI have begun to develop a catalog of stars whose spectral energy distributions are tied directly to NIST (diode) standards with very precisely determined physical characteristics. A key element in this pursuit has been the effort at STScI to measure the spectra of many of these objects with STIS. We discuss our program to extend this effort into the near-IR, which is crucial to reliably extending the SEDs to longer wavelengths, including the mid-IR.

  8. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    52 s for the cadaveric head and in an average time of 270 s for the larger FOV patient images. Conclusions: Appropriate selection of convergence and multiscale parameters in Demons registration was shown to reduce computational expense without sacrificing registration performance. For intraoperative CBCT imaging with deformable registration, the ability to perform accurate registration within the stringent time requirements of the operating environment could offer a useful clinical tool allowing integration of preoperative information while accurately reflecting changes in the patient anatomy. Similarly for CBCT-guided radiation therapy, fast accurate deformable registration could further augment high-precision treatment strategies.

  9. Atmospheric effects and ultimate ranging accuracy for lunar laser ranging

    NASA Astrophysics Data System (ADS)

    Currie, Douglas G.; Prochazka, Ivan

    2014-10-01

    The deployment of next-generation lunar laser retroreflectors is planned in the near future. With proper robotic deployment, these will support single-shot, single-photoelectron ranging accuracy at the 100 micron level or better. Technologies are available to support this accuracy at advanced ground stations; however, the major question is the ultimate limit imposed on ranging accuracy by changing timing delays caused by turbulence and horizontal gradients in the Earth's atmosphere. In particular, there are questions about the delay and temporal broadening of a very narrow laser pulse. Theoretical and experimental results will be discussed that address estimates of the magnitudes of these effects and the issue of precision vs. accuracy.

  10. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. © 1990.

  11. A paradigm shift in patterning foundation from frequency multiplication to edge-placement accuracy: a novel processing solution by selective etching and alternating-material self-aligned multiple patterning

    NASA Astrophysics Data System (ADS)

    Han, Ting; Liu, Hongyi; Chen, Yijian

    2016-03-01

    Overlay errors, cut/block and line/space critical-dimension (CD) variations are the major sources of the edge-placement errors (EPE) in the cut/block patterning processes of complementary lithography when IC technology is scaled down to sub-10nm half pitch (HP). In this paper, we propose and discuss a modular technology to reduce the EPE effect by combining selective etching and alternating-material (dual-material) self-aligned multiple patterning (altSAMP) processes. Preliminary results of altSAMP process development and material screening experiment are reported and possible material candidates are suggested. A geometrical cut-process yield model considering the joint effect of overlay errors, cut-hole and line CD variations is developed to analyze its patterning performance. In addition to the contributions from the above three process variations, the impacts of key control parameters (such as cut-hole overhang and etching selectivity) on the patterning yield are examined. It is shown that the optimized altSAMP patterning process significantly improves the patterning yield compared with conventional SAMP processes, especially when the half pitch of device patterns is driven down to 7 nm and below.

  12. Few-Nucleon Charge Radii and a Precision Isotope Shift Measurement in Helium

    NASA Astrophysics Data System (ADS)

    Hassan Rezaeian, Nima; Shiner, David

    2015-10-01

    Recent improvements in atomic theory and experiment provide a valuable method to precisely determine few-nucleon charge radii, complementing the more direct scattering approaches and providing sensitive tests of few-body nuclear theory. Some puzzles with respect to this method exist, particularly between the muonic and electronic measurements of the proton radius, known as the proton puzzle. Perhaps this puzzle will also appear in nuclear size measurements in helium. Muonic helium measurements are ongoing, while our new electronic results are discussed here. We precisely measured the isotope shift of the 2^3S - 2^3P transitions in 3He and 4He. The result is almost an order of magnitude more accurate than previously measured values. To achieve this accuracy, we implemented various experimental techniques. We used a tunable laser frequency discriminator and an electro-optic modulation technique to precisely control the frequency and intensity. We select and stabilize the intensity of the required sideband and eliminate unused sidebands. The technique uses a MEMS fiber switch (ts = 10 ms) and several temperature-stabilized narrow-band (3 GHz) fiber gratings. A beam with both species of helium is achieved using a custom fiber laser for simultaneous optical pumping. A servo-controlled retro-reflected laser beam eliminates Doppler effects. Careful detection design and software are essential for unbiased data collection. Our new results will be compared to previous measurements.

  13. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 µF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  14. Positional Accuracy of GPS Satellite Almanac

    NASA Astrophysics Data System (ADS)

    Ma, Lihua; Zhou, Shangli

    2014-12-01

    How to accelerate signal acquisition and shorten start-up time are key problems in the Global Positioning System (GPS). The GPS satellite almanac plays an important role in the signal reception period. Almanac accuracy directly affects the speed of GPS signal acquisition, the start time of the receiver, and even, to some extent, the system performance. Using precise ephemeris products released by the International GNSS Service (IGS), the authors analyse the GPS satellite almanac from the first day to the third day of the 1805th GPS week (August 11 to 13, 2014 in the Gregorian calendar). The results show that the mean of the position errors in the three-dimensional coordinate system varies from about 1 kilometer to 3 kilometers, which can satisfy the needs of common users.
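The comparison described, almanac-propagated satellite positions against IGS precise-ephemeris positions, reduces at each epoch to a 3D distance; a minimal sketch with hypothetical ECEF coordinates:

```python
import math

def almanac_error_km(almanac_xyz, precise_xyz):
    """3D distance (km) between an almanac-derived satellite position and
    the corresponding precise-ephemeris position, both in ECEF km."""
    return math.dist(almanac_xyz, precise_xyz)

# Hypothetical ECEF positions (km) for one satellite at one epoch.
err = almanac_error_km((15300.2, 12100.5, 18000.1),
                       (15301.0, 12099.3, 18001.6))
print(round(err, 3))  # → 2.081 (km), consistent with the 1-3 km finding
```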

  15. Measuring and balancing dynamic unbalance of precision centrifuge

    NASA Astrophysics Data System (ADS)

    Yang, Yafei; Huo, Xin

    2008-10-01

    A precision centrifuge is used to test and calibrate accelerometer model parameters. Its dynamic unbalance perturbs the centrifuge and degrades the accuracy of accelerometer testing and calibration. By analyzing the causes of dynamic unbalance, the influences of static unbalance and couple unbalance on a precision centrifuge are developed. Measuring and balancing the static unbalance is considered the key to resolving the dynamic unbalance problem of a precision centrifuge with a disk structure. Measurement methods and formulas for calculating the static unbalance amount are given, and a balancing principle and method are provided. The correctness and effectiveness of this method were confirmed by experiments on a device under tuning, thereby providing an accurate and efficient method for measuring and balancing the dynamic unbalance of this precision centrifuge.

  16. High-precision thermal and electrical characterization of thermoelectric modules

    SciTech Connect

    Kolodner, Paul

    2014-05-15

    This paper describes an apparatus for performing high-precision electrical and thermal characterization of thermoelectric modules (TEMs). The apparatus is calibrated for operation between 20 °C and 80 °C and is normally used for measurements of heat currents in the range 0–10 W. Precision thermometry based on miniature thermistor probes enables an absolute temperature accuracy of better than 0.010 °C. The use of vacuum isolation, thermal guarding, and radiation shielding, augmented by a careful accounting of stray heat leaks and uncertainties, allows the heat current through the TEM under test to be determined with a precision of a few mW. The fractional precision of all measured parameters is approximately 0.1%.

  17. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available from the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity and to estimate the ejection time of meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and temporal resolution simultaneously and computational problems in finding meteor positions, are illustrated.

  18. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  19. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This document is an experimental version of a programed text on measurement and precision. Part I contains 24 frames dealing with precision and significant figures encountered in various mathematical computations and measurements. Part II begins with a brief section on experimental data, covering such points as (1) establishing the zero point, (2)…

  20. More Questions on Precision Teaching.

    ERIC Educational Resources Information Center

    Raybould, E. C.; Solity, J. E.

    1988-01-01

    Precision teaching can accelerate basic skills progress of special needs children. Issues discussed include using probes as performance tests, charting daily progress, using the charted data to modify teaching methods, determining appropriate age levels, assessing the number of students to be precision taught, and carefully allocating time. (JDD)

  1. Precision Teaching: Discoveries and Effects.

    ERIC Educational Resources Information Center

    Lindsley, Ogden R.

    1992-01-01

    This paper defines precision teaching; describes its monitoring methods by displaying a standard celeration chart and explaining charting conventions; points out precision teaching's roots in laboratory free-operant conditioning; discusses its learning tactics and performance principles; and describes its effectiveness in producing learning gains.…

  2. Sources, Sinks, and Model Accuracy

    EPA Science Inventory

    Spatial demographic models are a necessary tool for understanding how to manage landscapes sustainably for animal populations. These models, therefore, must offer precise and testable predictions about animal population dynamics and how animal demographic parameters respond to ...

  3. HDAC 3-selective inhibitor RGFP966 demonstrates anti-inflammatory properties in RAW 264.7 macrophages and mouse precision-cut lung slices by attenuating NF-κB p65 transcriptional activity.

    PubMed

    Leus, Niek G J; van der Wouden, Petra E; van den Bosch, Thea; Hooghiemstra, Wouter T R; Ourailidou, Maria E; Kistemaker, Loes E M; Bischoff, Rainer; Gosens, Reinoud; Haisma, Hidde J; Dekker, Frank J

    2016-05-15

    The increasing number of patients suffering from chronic obstructive pulmonary disease (COPD) represents a major and increasing health problem. Therefore, novel therapeutic approaches are needed. Class I HDACs 1, 2 and 3 play key roles in the regulation of inflammatory gene expression with a particular pro-inflammatory role for HDAC 3. HDAC 3 has been reported to be an important player in inflammation by deacetylating NF-κB p65, which has been implicated in the pathology of COPD. Here, we applied the pharmacological HDAC 3-selective inhibitor RGFP966, which attenuated pro-inflammatory gene expression in models for inflammatory lung diseases. Consistent with this, a robust decrease of the transcriptional activity of NF-κB p65 was observed. HDAC 3 inhibition affected neither the acetylation status of NF-κB p65 nor histone H3 or histone H4. This indicates that HDAC 3 inhibition does not inhibit NF-κB p65 transcriptional activity by affecting its deacetylation but rather by inhibiting enzymatic activity of HDAC 3. Taken together, our findings indicate that pharmacological HDAC 3-selective inhibition by inhibitors such as RGFP966 may provide a novel and effective approach toward development of therapeutics for inflammatory lung diseases. PMID:26993378

  4. HDAC 3-selective inhibitor RGFP966 demonstrates anti-inflammatory properties in RAW 264.7 macrophages and mouse precision-cut lung slices by attenuating NF-κB p65 transcriptional activity

    PubMed Central

    Leus, Niek G.J.; van der Wouden, Petra E.; van den Bosch, Thea; Hooghiemstra, Wouter T.R.; Ourailidou, Maria E.; Kistemaker, Loes E.M.; Bischoff, Rainer; Gosens, Reinoud; Haisma, Hidde J.; Dekker, Frank J.

    2016-01-01

    The increasing number of patients suffering from chronic obstructive pulmonary disease (COPD) represents a major and increasing health problem. Therefore, novel therapeutic approaches are needed. Class I HDACs 1, 2 and 3 play key roles in the regulation of inflammatory gene expression with a particular pro-inflammatory role for HDAC 3. HDAC 3 has been reported to be an important player in inflammation by deacetylating NF-κB p65, which has been implicated in the pathology of COPD. Here, we applied the pharmacological HDAC 3-selective inhibitor RGFP966, which attenuated pro-inflammatory gene expression in models for inflammatory lung diseases. Consistent with this, a robust decrease of the transcriptional activity of NF-κB p65 was observed. HDAC 3 inhibition affected neither the acetylation status of NF-κB p65 nor histone H3 or histone H4. This indicates that HDAC 3 inhibition does not inhibit NF-κB p65 transcriptional activity by affecting its deacetylation but rather by inhibiting enzymatic activity of HDAC 3. Taken together, our findings indicate that pharmacological HDAC 3-selective inhibition by inhibitors such as RGFP966 may provide a novel and effective approach toward development of therapeutics for inflammatory lung diseases. PMID:26993378

  5. High accuracy wall thickness loss monitoring

    NASA Astrophysics Data System (ADS)

    Gajdacsi, Attila; Cegla, Frederic

    2014-02-01

    Ultrasonic inspection of wall thickness in pipes is a standard technique applied widely in the petrochemical industry. The potential precision of repeat measurements with permanently installed ultrasonic sensors, however, significantly surpasses that of handheld sensors, as uncertainties associated with coupling fluids and positional offsets are eliminated. With permanently installed sensors, the precise evaluation of very small wall-loss rates becomes feasible in a matter of hours. The improved accuracy and speed of wall-loss rate measurements can be used to evaluate and develop more effective mitigation strategies. This paper presents an overview of factors causing variability in the ultrasonic measurements, which are then systematically addressed, and an experimental setup with the best achievable stability based on these considerations is presented. In the experimental setup, galvanic corrosion is used to induce predictable and very small wall thickness loss. Furthermore, it is shown that the experimental measurements can be used to assess the reduction in wall loss produced by the injection of corrosion inhibitor. The measurements show an estimated standard deviation of about 20 nm, which in turn allows us to evaluate the effect and behaviour of corrosion inhibitors within less than an hour.
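Given a series of such repeat thickness measurements, the wall-loss rate is simply the least-squares slope of thickness against time; a minimal sketch (the sample data are illustrative, not the paper's):

```python
def wall_loss_rate(times_h, thickness_um):
    """Least-squares slope of thickness (um) versus time (h), i.e. the
    wall-loss rate in um/h; negative values indicate thinning."""
    n = len(times_h)
    t_mean = sum(times_h) / n
    x_mean = sum(thickness_um) / n
    num = sum((t - t_mean) * (x - x_mean)
              for t, x in zip(times_h, thickness_um))
    den = sum((t - t_mean) ** 2 for t in times_h)
    return num / den

# Hypothetical hourly readings showing a slow 0.1 um/h thickness loss.
rate = wall_loss_rate([0, 1, 2, 3], [6000.0, 5999.9, 5999.8, 5999.7])
```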

  6. Measuring the Accuracy of Diagnostic Systems

    NASA Astrophysics Data System (ADS)

    Swets, John A.

    1988-06-01

    Diagnostic systems of several kinds are used to distinguish between two classes of events, essentially ``signals'' and ``noise.'' For them, analysis in terms of the ``relative operating characteristic'' of signal detection theory provides a precise and valid measure of diagnostic accuracy. It is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performances of diverse systems on a common, easily interpreted scale. Representative values of this measure are reported here for systems in medical imaging, materials testing, weather forecasting, information retrieval, polygraph lie detection, and aptitude testing. Though the measure itself is sound, the values obtained from tests of diagnostic systems often require qualification because the test data on which they are based are of unsure quality. A common set of problems in testing is faced in all fields. How well these problems are handled, or can be handled, in a given field determines the degree of confidence that can be placed in a measured value of accuracy. Some fields fare much better than others.
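
    The area under the relative (receiver) operating characteristic can be computed nonparametrically as the probability that a randomly chosen signal score exceeds a randomly chosen noise score; no decision threshold enters the computation, which is the bias-free property described above. A small illustration with synthetic Gaussian scores (the class distributions are assumed for the example, not taken from the article):

```python
import random

random.seed(1)

# Synthetic diagnostic scores for the two event classes (illustrative only):
# "noise" ~ N(0, 1) and "signal" ~ N(1, 1), i.e. a separation of d' = 1.
noise  = [random.gauss(0.0, 1.0) for _ in range(1000)]
signal = [random.gauss(1.0, 1.0) for _ in range(1000)]

def auc(neg, pos):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random signal score exceeds a random noise score.
    Unlike percent correct, no decision criterion enters the computation."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

a = auc(noise, signal)
# Theory for equal-variance Gaussians: AUC = Phi(d'/sqrt(2)), about 0.76.
print(f"AUC = {a:.3f}")
```

    An uninformative system scores 0.5 on this scale and a perfect one scores 1.0, which is the common, easily interpreted scale the article refers to.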

  7. Accuracy requirements in radiotherapy treatment planning.

    PubMed

    Buzdar, Saeed Ahmad; Afzal, Muhammad; Nazir, Aalia; Gadhi, Muhammad Asghar

    2013-06-01

    Radiation therapy attempts to deliver ionizing radiation to the tumour and can improve the survival chances and/or quality of life of patients. There are chances of errors and uncertainties in the entire process of radiotherapy that may affect the accuracy and precision of treatment management and decrease the degree of conformity. All expected inaccuracies, such as radiation dose determination, volume calculation, complete evaluation of the full extent of the tumour, biological behaviour of specific tumour types, organ motion during radiotherapy, imaging, biological/molecular uncertainties, sub-clinical disease, microscopic spread of the disease, uncertainty in normal tissue responses and radiation morbidity, need sound appreciation. Conformity can be increased by reducing such inaccuracies. With the yearly increase in computing speed and advances in other technologies, the future will provide the opportunity to optimize a greater number of variables and reduce the errors in the treatment planning process. In the multi-disciplinary task of radiotherapy, efforts are needed to overcome errors and uncertainty not only by physicists but also by radiologists, pathologists and oncologists, to reduce molecular and biological uncertainties. Radiation therapy physics is advancing towards an optimal goal: to improve accuracy where necessary and to reduce uncertainty where possible.

  8. GEODETIC ACCURACY OF LANDSAT 4 MULTISPECTRAL SCANNER AND THEMATIC MAPPER DATA.

    USGS Publications Warehouse

    Thormodsgard, June M.; DeVries, D.J.; ,

    1985-01-01

    EROS Data Center is evaluating the geodetic accuracy of Landsat-4 data from both the Multispectral Scanner (MSS) and Thematic Mapper (TM) processing systems. Geodetic accuracy is a measure of the precision of Landsat data registration to the Earth's figure. This paper describes a geodetic accuracy assessment of several MSS and TM scenes, based on the geodetic referencing information supplied on a standard Landsat 4 computer compatible tape.

  9. When Does Choice of Accuracy Measure Alter Imputation Accuracy Assessments?

    PubMed

    Ramnarine, Shelina; Zhang, Juan; Chen, Li-Shiun; Culverhouse, Robert; Duan, Weimin; Hancock, Dana B; Hartz, Sarah M; Johnson, Eric O; Olfson, Emily; Schwantes-An, Tae-Hwi; Saccone, Nancy L

    2015-01-01

    Imputation, the process of inferring genotypes for untyped variants, is used to identify and refine genetic association findings. Inaccuracies in imputed data can distort the observed association between variants and a disease. Many statistics are used to assess accuracy; some compare imputed to genotyped data and others are calculated without reference to true genotypes. Prior work has shown that the Imputation Quality Score (IQS), which is based on Cohen's kappa statistic and compares imputed genotype probabilities to true genotypes, appropriately adjusts for chance agreement; however, it is not commonly used. To identify differences in accuracy assessment, we compared IQS with concordance rate, squared correlation, and accuracy measures built into imputation programs. Genotypes from the 1000 Genomes reference populations (AFR N = 246 and EUR N = 379) were masked to match the typed single nucleotide polymorphism (SNP) coverage of several SNP arrays and were imputed with BEAGLE 3.3.2 and IMPUTE2 in regions associated with smoking behaviors. Additional masking and imputation were conducted for sequenced subjects from the Collaborative Genetic Study of Nicotine Dependence and the Genetic Study of Nicotine Dependence in African Americans (N = 1,481 African Americans and N = 1,480 European Americans). Our results offer further evidence that concordance rate inflates accuracy estimates, particularly for rare and low frequency variants. For common variants, squared correlation, BEAGLE R2, IMPUTE2 INFO, and IQS produce similar assessments of imputation accuracy. However, for rare and low frequency variants, compared to IQS, the other statistics tend to be more liberal in their assessment of accuracy. IQS is important to consider when evaluating imputation accuracy, particularly for rare and low frequency variants. PMID:26458263
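
    Why the concordance rate inflates accuracy for rare variants, while a kappa-based statistic such as IQS does not, can be seen in a toy example. The genotype counts below are invented for illustration, and IQS proper is computed from genotype probabilities; this sketch applies Cohen's kappa to hard best-guess calls:

```python
from collections import Counter

# Invented best-guess genotype calls at a rare variant (0/1/2 = count of
# the minor allele). The "imputation" simply calls everyone homozygous
# reference -- a common failure mode for rare variants.
true_geno    = [0] * 98 + [1, 1]
imputed_geno = [0] * 100

def concordance(a, b):
    """Fraction of matching calls (the concordance rate)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohen_kappa(a, b):
    """Chance-corrected agreement; the IQS statistic builds on this idea,
    applied to imputed genotype probabilities rather than hard calls."""
    n = len(a)
    po = concordance(a, b)                        # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[g] * cb[g] for g in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1.0 - pe)                 # agreement beyond chance

print(concordance(true_geno, imputed_geno))  # 0.98 -- looks highly accurate
print(cohen_kappa(true_geno, imputed_geno))  # 0.0 -- no better than chance
```

    The 98% concordance here is achievable without using any genotype information at all, which is exactly the chance agreement that kappa subtracts out.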

  10. Precise Point Positioning Based on BDS and GPS Observations

    NASA Astrophysics Data System (ADS)

    Gao, ZhouZheng; Zhang, Hongping; Shen, Wenbin

    2014-05-01

    The BeiDou Navigation Satellite System (BDS) attained the ability to provide initial navigation and precise positioning services for the Asian-Pacific region at the end of 2012, with a constellation of 5 Geostationary Earth Orbit (GEO), 5 Inclined Geosynchronous Orbit (IGSO) and 4 Medium Earth Orbit (MEO) satellites. By 2020 it will consist of 5 GEO, 3 IGSO and 27 MEO satellites and provide a global navigation service similar to GPS and GLONASS. GPS precise point positioning (PPP) is a powerful tool for crustal deformation monitoring, GPS meteorology, orbit determination of low Earth orbit satellites, high-accuracy kinematic positioning, and similar applications. However, its accuracy and convergence time are influenced by the quality of the pseudo-range observations and the observing geometry between the user and the Global Navigation Satellite System (GNSS) satellites. Usually it takes more than 30 minutes, and sometimes hours, to obtain centimeter-level position accuracy for PPP when using GPS dual-frequency observations only. In recent years, much research has been done to address this problem. One approach is to smooth the pseudo-range with carrier-phase observations to improve pseudo-range accuracy, which can improve the initial PPP position accuracy and shorten the PPP convergence time. Another scheme is to improve the position dilution of precision (PDOP) with multi-GNSS observations. Now that BDS can serve the whole Asian-Pacific region, it is possible to use GPS and BDS together for precise positioning. In addition, studies of GNSS PDOP distribution show that BDS can improve PDOP appreciably. It is therefore necessary to study PPP performance using both GPS and BDS observations, especially in the Asian-Pacific region. In this paper, we focus on the influence of BDS on GPS PPP in three respects: BDS PPP accuracy, PDOP improvement, and the convergence time of PPP based on combined GPS and BDS observations. 
Here, the GPS and BDS two-constellation data are collected from
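
    The PDOP improvement from adding a second constellation can be sketched numerically: build the unit line-of-sight geometry matrix and take the position block of the inverted normal matrix. The satellite azimuths and elevations below are invented for illustration, not real GPS or BDS ephemerides:

```python
import numpy as np

def pdop(az_el_deg):
    """Position dilution of precision from satellite azimuth/elevation
    (degrees): form the unit line-of-sight geometry matrix G (east, north,
    up, clock) and take sqrt(trace) of the position block of (G^T G)^-1."""
    az, el = np.radians(np.asarray(az_el_deg, dtype=float)).T
    G = np.column_stack([
        np.cos(el) * np.sin(az),   # east component of line of sight
        np.cos(el) * np.cos(az),   # north component
        np.sin(el),                # up component
        np.ones_like(el),          # receiver clock term
    ])
    Q = np.linalg.inv(G.T @ G)
    return float(np.sqrt(np.trace(Q[:3, :3])))

# Hypothetical sky geometries (azimuth, elevation in degrees), invented
# for illustration only.
gps_only = [(0, 60), (90, 40), (180, 30), (270, 50), (45, 20)]
gps_bds  = gps_only + [(135, 70), (225, 35), (315, 25), (60, 45)]

print(f"PDOP, GPS only: {pdop(gps_only):.2f}")
print(f"PDOP, GPS+BDS:  {pdop(gps_bds):.2f}")
```

    Because each extra satellite adds a positive-semidefinite term to G^T G, the combined constellation can never worsen the PDOP, and in practice it improves it noticeably.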

  11. State of the Field: Extreme Precision Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fischer, Debra A.; Anglada-Escude, Guillem; Arriagada, Pamela; Baluev, Roman V.; Bean, Jacob L.; Bouchy, Francois; Buchhave, Lars A.; Carroll, Thorsten; Chakraborty, Abhijit; Crepp, Justin R.; Dawson, Rebekah I.; Diddams, Scott A.; Dumusque, Xavier; Eastman, Jason D.; Endl, Michael; Figueira, Pedro; Ford, Eric B.; Foreman-Mackey, Daniel; Fournier, Paul; Fűrész, Gabor; Gaudi, B. Scott; Gregory, Philip C.; Grundahl, Frank; Hatzes, Artie P.; Hébrard, Guillaume; Herrero, Enrique; Hogg, David W.; Howard, Andrew W.; Johnson, John A.; Jorden, Paul; Jurgenson, Colby A.; Latham, David W.; Laughlin, Greg; Loredo, Thomas J.; Lovis, Christophe; Mahadevan, Suvrath; McCracken, Tyler M.; Pepe, Francesco; Perez, Mario; Phillips, David F.; Plavchan, Peter P.; Prato, Lisa; Quirrenbach, Andreas; Reiners, Ansgar; Robertson, Paul; Santos, Nuno C.; Sawyer, David; Segransan, Damien; Sozzetti, Alessandro; Steinmetz, Tilo; Szentgyorgyi, Andrew; Udry, Stéphane; Valenti, Jeff A.; Wang, Sharon X.; Wittenmyer, Robert A.; Wright, Jason T.

    2016-06-01

    The Second Workshop on Extreme Precision Radial Velocities defined circa 2015 the state of the art Doppler precision and identified the critical path challenges for reaching 10 cm s-1 measurement precision. The presentations and discussion of key issues for instrumentation and data analysis and the workshop recommendations for achieving this bold precision are summarized here. Beginning with the High Accuracy Radial Velocity Planet Searcher spectrograph, technological advances for precision radial velocity (RV) measurements have focused on building extremely stable instruments. To reach still higher precision, future spectrometers will need to improve upon the state of the art, producing even higher fidelity spectra. This should be possible with improved environmental control, greater stability in the illumination of the spectrometer optics, better detectors, more precise wavelength calibration, and broader bandwidth spectra. Key data analysis challenges for the precision RV community include distinguishing center of mass (COM) Keplerian motion from photospheric velocities (time correlated noise) and the proper treatment of telluric contamination. Success here is coupled to the instrument design, but also requires the implementation of robust statistical and modeling techniques. COM velocities produce Doppler shifts that affect every line identically, while photospheric velocities produce line profile asymmetries with wavelength and temporal dependencies that are different from Keplerian signals. Exoplanets are an important subfield of astronomy and there has been an impressive rate of discovery over the past two decades. However, higher precision RV measurements are required to serve as a discovery technique for potentially habitable worlds, to confirm and characterize detections from transit missions, and to provide mass measurements for other space-based missions. 
The future of exoplanet science has very different trajectories depending on the precision that can

  13. Visual information throughout a reach determines endpoint precision.

    PubMed

    Ma-Wyatt, Anna; McKee, Suzanne P

    2007-05-01

    People make rapid, goal-directed movements to interact with their environment. Because these movements have consequences, it is important to be able to control them with a high level of precision and accuracy. Our hypothesis is that vision guides rapid hand movements, thereby enhancing their accuracy and precision. To test this idea, we asked observers to point to a briefly presented target (110 ms). We measured the impact of visual information on endpoint precision by using a shutter to close off view of the hand 50, 110 and 250 ms into the reach. We found that precision was degraded if the view of the hand was restricted at any time during the reach, despite the fact that the target disappeared long before the reach was completed. We therefore conclude that vision keeps the hand on the planned trajectory. We then investigated the effects of a perturbation of target position during the reach. For these experiments, the target remained visible until the reach was completed. The target position was shifted at 110, 180 or 250 ms into the reach. Early shifts in target position were easily compensated for, but late shifts led to a shift in the mean position of the endpoints; observers pointed to the center of the two locations, as a kind of best bet on the position of the target. Visual information is used to guide the hand throughout a reach and has a significant impact on endpoint precision.

  14. Precision active pharmaceutical ingredients are the goal.

    PubMed

    Miller, Andrew D

    2016-07-01

    Understanding and exploiting molecular mechanisms in biology is central to chemical biology. Chemical biology studies of biological macromolecules are now in a perfect continuum with molecular-level and nanomolecular-level mechanistic studies involving whole organisms. The potential opportunity presented by such studies is the design and creation of genuine precision active pharmaceutical ingredients (APIs; including DNA, siRNA, and smaller-molecule bioactives) that demonstrate exceptional levels of disease target specificity and selectivity. This article covers the best of my personal and collaborative academic research work using an organic chemistry and chemical biology approach towards understanding biological molecular recognition processes, work that appears to be leading to the generation of novel precision APIs with genuine potential for the treatment of major chronic diseases of global prevalence. PMID:27476703

  15. Interference examiner for certification of precision autocollimators

    SciTech Connect

    Martynov, V.T.; Brda, V.A.; Likhttsinder, B.A.; Shestopalov, Y.N.

    1985-05-01

    Regular polygonal prisms together with an autocollimator are usually employed as reference standards in the study and certification of angle-measuring instruments and apparatus; the prisms, in turn, must be certified with high accuracy. The interference examiner employs an optical system similar to that of the examiner in the new State primary standard of the plane-angle unit. The examiner is based on a two-beam Michelson interferometer. Instead of a separate scale, the interference examiner uses two vertical marks applied directly to the end reflectors. A comparison was made of the rotation angles of the mirror in the range 0–1° as reproduced by the examiner, α_T (trigonometric method), and as measured by a UDP-025 precision angle-measuring instrument, α_g (goniometric method). Processing of the obtained measurement results showed that the difference between α_T and α_g does not exceed the measurement error of the UDP-025.

  16. High precision fabrication of antennas and sensors

    NASA Astrophysics Data System (ADS)

    Balčytis, A.; Seniutinas, G.; Urbonas, D.; Gabalis, M.; Vaškevičius, K.; Petruškevičius, R.; Molis, G.; Valušis, G.; Juodkazis, S.

    2015-02-01

    Electron and ion beam lithographies were used to fabricate and/or functionalize large-scale (millimetre-footprint) micro-optical elements: coupled waveguide-resonator structures on silicon-on-insulator (SOI) and THz antennas on low-temperature-grown LT-GaAs. Waveguide elements on SOI were made without stitching errors using a fixed-beam moving-stage approach. THz antennas were created using a three-step lithography process: gold THz antennas defined by standard mask-projection lithography were annealed to make an ohmic contact on LT-GaAs, and post-processing with a Ga-ion beam was then used to define nano-gaps and interdigitated contacts for better charge collection. These approaches show the possibility of fabricating large-footprint patterns with nanoscale precision features and overlay accuracy. Emerging 3D nanofabrication trends are discussed.

  17. Image-guided precision manipulation of cells and nanoparticles in microfluidics

    NASA Astrophysics Data System (ADS)

    Cummins, Zachary

    Manipulation of single cells and particles is important to biology and nanotechnology. Our electrokinetic (EK) tweezers manipulate objects in simple microfluidic devices using gentle fluid and electric forces under vision-based feedback control. In this dissertation, I detail a user-friendly implementation of EK tweezers that allows users to select, position, and assemble cells and nanoparticles. This EK system was used to measure attachment forces between living breast cancer cells, trap single quantum dots with 45 nm accuracy, build nanophotonic circuits, and scan optical properties of nanowires. With a novel multi-layer microfluidic device, EK was also used to guide single microspheres along complex 3D trajectories. The schemes, software, and methods developed here can be used in many settings to precisely manipulate most visible objects, assemble objects into useful structures, and improve the function of lab-on-a-chip microfluidic systems.

  18. Evaluating the Accuracy of dem Generation Algorithms from Uav Imagery

    NASA Astrophysics Data System (ADS)

    Ruiz, J. J.; Diaz-Mas, L.; Perez, F.; Viguria, A.

    2013-08-01

    In this work we evaluated how the use of different positioning systems affects the accuracy of Digital Elevation Models (DEMs) generated from aerial imagery obtained with Unmanned Aerial Vehicles (UAVs). In this domain, state-of-the-art DEM generation algorithms suffer from the typical errors of GPS/INS devices in the position measurements associated with each picture obtained. The deviations of these measurements from real-world positions are on the order of meters. The experiments were carried out using a small quadrotor in the indoor testbed at the Center for Advanced Aerospace Technologies (CATEC). This testbed houses a system that is able to track small markers mounted on the UAV and placed around the scenario with millimeter precision. This provides very precise position measurements, to which we can add random noise to simulate errors in different GPS receivers. The results showed that final DEM accuracy clearly depends on the positioning information.
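
    The noise-injection step described above is commonly modelled as a zero-mean Gaussian perturbation of the millimetre-precision tracker positions. A minimal sketch (the noise levels and positions are illustrative assumptions, not the values used in the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative "ground truth" camera positions in metres, standing in for
# the millimetre-precision tracker measurements (values are invented).
true_positions = rng.uniform(0.0, 10.0, size=(50, 3))

def simulate_gps(positions, sigma_m):
    """Perturb exact positions with zero-mean Gaussian noise to emulate a
    GPS/INS receiver of the given 1-sigma accuracy. This is a simple i.i.d.
    model; real receiver errors are correlated in time, which it ignores."""
    return positions + rng.normal(0.0, sigma_m, positions.shape)

for sigma in (0.05, 0.5, 3.0):    # survey-grade to consumer-grade receivers
    noisy = simulate_gps(true_positions, sigma)
    rms = float(np.sqrt(np.mean(np.sum((noisy - true_positions) ** 2, axis=1))))
    print(f"sigma = {sigma:4.2f} m -> 3D RMS error = {rms:.2f} m")
```

    Sweeping the noise level in this way lets the same DEM pipeline be evaluated across the whole range from survey-grade to consumer-grade positioning.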

  19. Precision Measurements at the ILC

    SciTech Connect

    Nelson, T.K.; /SLAC

    2006-12-06

    With relatively low backgrounds and a well-determined initial state, the proposed International Linear Collider (ILC) would provide a precision complement to the LHC experiments at the energy frontier. Completely and precisely exploring the discoveries of the LHC with such a machine will be critical in understanding the nature of those discoveries and what, if any, new physics they represent. The unique ability to form a complete picture of the Higgs sector is a prime example of the probative power of the ILC and represents a new era in precision physics.

  20. Ultra-rare Disease and Genomics-Driven Precision Medicine

    PubMed Central

    Lee, Sangmoon

    2016-01-01

    Since next-generation sequencing (NGS) technique was adopted into clinical practices, revolutionary advances in diagnosing rare genetic diseases have been achieved through translating genomic medicine into precision or personalized management. Indeed, several successful cases of molecular diagnosis and treatment with personalized or targeted therapies of rare genetic diseases have been reported. Still, there are several obstacles to be overcome for wider application of NGS-based precision medicine, including high sequencing cost, incomplete variant sensitivity and accuracy, practical complexities, and a shortage of available treatment options. PMID:27445646