Science.gov

Sample records for accuracy precision robustness

  1. Accuracy and precision of manual baseline determination.

    PubMed

    Jirasek, A; Schulze, G; Yu, M M L; Blades, M W; Turner, R F B

    2004-12-01

Vibrational spectra often require baseline removal before further data analysis can be performed. Manual (i.e., user-performed) baseline determination and removal is a common technique used for this operation. Currently, little data exists detailing the accuracy and precision that can be expected of manual baseline removal techniques. This study addresses that gap. One hundred spectra of varying signal-to-noise ratio (SNR), signal-to-baseline ratio (SBR), baseline slope, and spectral congestion were constructed, and baselines were subtracted by 16 volunteers categorized as either experienced or inexperienced in baseline determination. In total, 285 baseline determinations were performed. The general level of accuracy and precision that can be expected for manually determined baselines from spectra of varying SNR, SBR, baseline slope, and spectral congestion is established. Furthermore, the effects of user experience on the accuracy and precision of baseline determination are estimated, and the interactions between the above factors in affecting accuracy and precision are highlighted. Where possible, the functional relationships between accuracy, precision, and the given spectral characteristic are detailed. The results provide users of manual baseline determination with useful guidelines for establishing limits of accuracy and precision, as well as highlighting conditions that confound both.
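The task this record studies, removing a baseline before analysis, has a common automated analogue: fitting a low-order polynomial through anchor regions judged to contain only baseline, then subtracting it. A minimal sketch of that idea (the synthetic spectrum, anchor indices, and polynomial model are illustrative assumptions, not the study's protocol):

```python
import numpy as np

def subtract_baseline(x, y, anchor_idx, degree=2):
    """Fit a polynomial through anchor points assumed to contain only
    baseline (no peaks) and subtract it from the spectrum."""
    coeffs = np.polyfit(x[anchor_idx], y[anchor_idx], degree)
    return y - np.polyval(coeffs, x)

# Synthetic spectrum: one Gaussian peak riding on a sloped baseline
x = np.linspace(0.0, 100.0, 501)
baseline = 0.05 * x + 2.0
peak = 10.0 * np.exp(-((x - 50.0) ** 2) / (2.0 * 3.0 ** 2))
spectrum = baseline + peak

anchors = np.r_[0:100, 400:501]  # regions well away from the peak
corrected = subtract_baseline(x, spectrum, anchors)
```

As the study suggests, the hard part in practice is choosing the anchor regions: SNR, SBR, slope and spectral congestion all make that judgment, manual or automated, less reliable.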

  2. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is achieved with the probing method; only at the lowest angles of incidence did either the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to quantify precision by means of a confidence interval for the specific measurement. PMID:27044032

  3. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity-modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  4. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

Self-monitoring of blood glucose began only about fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late seventies meters were launched on the market. Initially such devices were intended for medical staff, but improvements in ease of use made them increasingly suitable for patients, and they are now a necessary tool for self-monitoring of blood glucose. Advancing technology first enabled photometric measurements and, more recently, electrochemical ones. In the nineties, improvements were made mainly in meter miniaturisation, reduction of reaction and reading times, and simplification of blood sampling and capillary blood application. Although concern for accuracy and precision was at the heart of self-monitoring of blood glucose from the beginning, recommendations from diabetology societies appeared only in the late eighties. The French drug agency AFSSAPS now requires meters to be evaluated before any market launch. According to recent publications, very few meters meet the reliability criteria established by diabetology societies in the late nineties. Finally, because devices may be handled by numerous persons in hospitals, meter use as a possible source of nosocomial infections has recently been questioned and is subject to very strict guidelines published by AFSSAPS.

  5. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
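The error-budget step of scaling ranging error to position error via DOP can be sketched with a toy geometry; the four line-of-sight directions and the 0.5 m ranging error below are hypothetical values for illustration, not the study's data:

```python
import numpy as np

def dop_cofactor(los):
    """DOP cofactor matrix from receiver-to-satellite unit line-of-sight
    vectors (rows of `los`); columns of G are x, y, z and receiver clock."""
    G = np.hstack([los, np.ones((los.shape[0], 1))])
    return np.linalg.inv(G.T @ G)

# Four hypothetical satellite directions, normalized to unit vectors
dirs = np.array([[0.0, 0.0, 1.0],
                 [1.0, 0.0, 1.0],
                 [-1.0, 1.0, 1.0],
                 [-1.0, -1.0, 1.0]])
los = dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

Q = dop_cofactor(los)
pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])  # position dilution of precision

uere = 0.5                 # assumed 1-sigma ranging error in metres
pos_error = pdop * uere    # rule-of-thumb 1-sigma position error
```

This epoch-by-epoch rule of thumb is what the abstract notes breaks down for PPP, where filtering across epochs means the DOP scaling itself must be propagated through the filter.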

  6. Robust methods for assessing the accuracy of linear interpolated DEM

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shi, Wenzhong; Liu, Eryong

    2015-02-01

    Methods for assessing the accuracy of a digital elevation model (DEM) with emphasis on robust methods have been studied in this paper. Based on the squared DEM residual population generated by the bi-linear interpolation method, three average-error statistics including (a) mean, (b) median, and (c) M-estimator are thoroughly investigated for measuring the interpolated DEM accuracy. Correspondingly, their confidence intervals are also constructed for each average error statistic to further evaluate the DEM quality. The first method mainly utilizes the student distribution while the second and third are derived from the robust theories. These innovative robust methods possess the capability of counteracting the outlier effects or even the skew distributed residuals in DEM accuracy assessment. Experimental studies using Monte Carlo simulation have commendably investigated the asymptotic convergence behavior of confidence intervals constructed by these three methods with the increase of sample size. It is demonstrated that the robust methods can produce more reliable DEM accuracy assessment results compared with those by the classical t-distribution-based method. Consequently, these proposed robust methods are strongly recommended for assessing DEM accuracy, particularly for those cases where the DEM residual population is evidently non-normal or heavily contaminated with outliers.
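The three average-error statistics this record compares can be computed on a contaminated residual sample; the Huber M-estimator below, obtained by iteratively reweighted averaging with a MAD-based scale, is one standard choice of M-estimator (the simulated residuals are illustrative, not the paper's data):

```python
import numpy as np

def huber_location(r, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means;
    k = 1.345 is the usual tuning constant, scale taken from the MAD."""
    mu = np.median(r)
    scale = np.median(np.abs(r - mu)) / 0.6745  # MAD-based robust scale
    for _ in range(max_iter):
        z = (r - mu) / scale
        w = np.minimum(1.0, k / np.maximum(np.abs(z), 1e-12))
        mu_new = np.sum(w * r) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, 1000)  # well-behaved DEM residuals
residuals[:50] += 20.0                  # contaminate 5% with gross outliers

mean_err = residuals.mean()        # pulled far from zero by the outliers
median_err = np.median(residuals)  # robust
huber_err = huber_location(residuals)  # robust, more efficient than median
```

The contaminated mean illustrates exactly the failure mode the paper's robust methods are designed to counteract.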

  7. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 µin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 µin. for the 0.120 in. standard, 3.83 µin. for the 0.100 in. standard, and 4.2 µin. for the 0.050 in. standard.

  8. Precision and accuracy in diffusion tensor magnetic resonance imaging.

    PubMed

    Jones, Derek K

    2010-04-01

This article reviews some of the key factors influencing the accuracy and precision of quantitative metrics derived from diffusion magnetic resonance imaging data. It focuses on the study pipeline, beginning with the choice of imaging protocol, through preprocessing and model fitting, up to the point of extracting quantitative estimates for subsequent analysis. The aim is to give newcomers to the field sufficient knowledge of how their decisions at each stage of this process might impact precision and accuracy, so that they can design their study or approach and use diffusion tensor magnetic resonance imaging in the clinic. More specifically, emphasis is placed on improving accuracy and precision. I illustrate how careful choices along the way can substantially affect the sample size needed to make an inference from the data.

  9. Accuracy and precision of temporal artery thermometers in febrile patients.

    PubMed

    Wolfson, Margaret; Granstrom, Patsy; Pomarico, Bernie; Reimanis, Cathryn

    2013-01-01

    The noninvasive temporal artery thermometer offers a way to measure temperature when oral assessment is contraindicated, uncomfortable, or difficult to obtain. In this study, the accuracy and precision of the temporal artery thermometer exceeded levels recommended by experts for use in acute care clinical practice.

  10. Accuracy-precision trade-off in visual orientation constancy.

    PubMed

    De Vrijer, M; Medendorp, W P; Van Gisbergen, J A M

    2009-02-09

    Using the subjective visual vertical task (SVV), previous investigations on the maintenance of visual orientation constancy during lateral tilt have found two opposite bias effects in different tilt ranges. The SVV typically shows accurate performance near upright but severe undercompensation at tilts beyond 60 deg (A-effect), frequently with slight overcompensation responses (E-effect) in between. Here we investigate whether a Bayesian spatial-perception model can account for this error pattern. The model interprets A- and E-effects as the drawback of a computational strategy, geared at maintaining visual stability with optimal precision at small tilt angles. In this study, we test whether these systematic errors can be seen as the consequence of a precision-accuracy trade-off when combining a veridical but noisy signal about eye orientation in space with the visual signal. To do so, we used a psychometric approach to assess both precision and accuracy of the SVV in eight subjects laterally tilted at 9 different tilt angles (-120 degrees to 120 degrees). Results show that SVV accuracy and precision worsened with tilt angle, according to a pattern that could be fitted quite adequately by the Bayesian model. We conclude that spatial vision essentially follows the rules of Bayes' optimal observer theory.
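The precision-accuracy trade-off in such a Bayesian observer is often illustrated with the simplest Gaussian case: a noisy tilt signal combined with a prior centred on upright yields a biased but more precise estimate. A minimal sketch (the Gaussian forms and the specific noise values are assumptions for illustration, not the authors' fitted model):

```python
import numpy as np

def combine_gaussians(mu_obs, sd_obs, mu_prior, sd_prior):
    """Posterior mean and SD for a Gaussian observation combined with a
    Gaussian prior: a precision-weighted average."""
    w_obs, w_prior = 1.0 / sd_obs**2, 1.0 / sd_prior**2
    mu = (w_obs * mu_obs + w_prior * mu_prior) / (w_obs + w_prior)
    sd = np.sqrt(1.0 / (w_obs + w_prior))
    return mu, sd

# Body tilted 90 deg: noisy tilt signal vs. a prior centred on upright
mu_post, sd_post = combine_gaussians(mu_obs=90.0, sd_obs=20.0,
                                     mu_prior=0.0, sd_prior=10.0)
# mu_post is pulled strongly toward upright (undercompensation, as in the
# A-effect), while sd_post is smaller than either input SD alone.
```

The bias toward upright grows with sensory noise, which is why accuracy worsens at large tilts even as the strategy keeps precision high near upright.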

  11. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  12. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  13. Scatterometry measurement precision and accuracy below 70 nm

    NASA Astrophysics Data System (ADS)

    Sendelbach, Matthew; Archie, Charles N.

    2003-05-01

Scatterometry is a contender for various measurement applications where structure widths and heights can be significantly smaller than 70 nm within one or two ITRS generations. For example, feedforward process control in post-lithography transistor gate formation is being actively pursued by a number of RIE tool manufacturers. Several commercial forms of scatterometry are available or under development which promise to provide satisfactory performance in this regime. Scatterometry, as commercially practiced today, involves analyzing the zeroth-order reflected light from a grating of lines. Normal-incidence spectroscopic reflectometry, 2-theta fixed-wavelength ellipsometry, and spectroscopic ellipsometry are among the optical techniques, while library-based spectral matching and real-time regression are among the analysis techniques. All these commercial forms will find accurate and precise measurement a challenge when the material constituting the critical structure approaches a very small volume. Equally challenging is executing an evaluation methodology that first determines the true properties (critical dimensions and materials) of semiconductor wafer artifacts and then compares the measurement performance of several scatterometers. How well do scatterometers track process-induced changes in bottom CD and sidewall profile? This paper introduces a general 3D metrology assessment methodology and reports upon work involving sub-70 nm structures and several scatterometers. The methodology combines results from multiple metrologies (CD-SEM, CD-AFM, TEM, and XSEM) to form a Reference Measurement System (RMS). The methodology determines how well the scatterometry measurement tracks critical structure changes even in the presence of other noncritical changes that take place at the same time; these are key components of accuracy. Because the assessment rewards scatterometers that measure with good precision (reproducibility) and good accuracy, the most precise

  14. Robust alignment of prostate histology slices with quantified accuracy

    NASA Astrophysics Data System (ADS)

    Hughes, Cecilia; Rouviere, Olivier; Mege Lechevallier, Florence; Souchon, Rémi; Prost, Rémy

    2012-02-01

Prostate cancer is the most common malignancy among men yet no current imaging technique is capable of detecting the tumours with precision. To evaluate each technique, the histology data must be precisely mapped to the imaged data. As it cannot be assumed that the histology slices are cut along the same plane as the imaged data is acquired, the registration is a 3D problem. This requires the prior accurate alignment of the histology slices. We propose a protocol to create in a rapid and standardised manner internal fiducial markers in fresh prostate specimens and an algorithm by which these markers can then be automatically detected and classified enabling the automatic rigid alignment of each slice. The protocol and algorithm were tested on 10 prostate specimens, with 19.2 histology slices on average per specimen. On average 90.9% of the fiducial markers created were visible in the slices, of which 96.1% were automatically correctly detected and classified. The average accuracy of the alignment was 0.19 ± 0.15 mm at the fiducial markers. The algorithm took 5.46 min on average per specimen. The proposed protocol and algorithm were also tested using simulated images and a beef liver sample. The simulated images showed that the algorithm has no associated residual error and justified the choice of a rigid registration. In the beef liver images, the average accuracy of the alignment was 0.11 ± 0.09 mm at the fiducial markers and 0.63 ± 0.47 mm at a validation marker approximately 20 mm from the fiducial markers.
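Rigid alignment of consecutive slices from matched fiducial markers can be sketched with the standard least-squares (Kabsch/Procrustes) solution; the marker coordinates and transform below are hypothetical, and this is a generic sketch rather than the authors' specific algorithm:

```python
import numpy as np

def rigid_align_2d(src, dst):
    """Least-squares rigid (rotation + translation) alignment of matched
    2D point sets (Kabsch/Procrustes solution)."""
    sc, dc = src - src.mean(axis=0), dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(sc.T @ dc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical fiducial marker coordinates (mm) in one slice
markers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 15.0], [12.0, 9.0]])

# The same markers in the next slice: rotated 7 deg and translated
th = np.deg2rad(7.0)
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
moved = markers @ R_true.T + np.array([2.0, -1.0])

R, t = rigid_align_2d(markers, moved)
aligned = markers @ R.T + t  # recovers the slice-to-slice pose exactly
```

With noiseless correspondences the recovery is exact, consistent with the paper's observation that the simulated images showed no residual error for a rigid model.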

  15. Measuring changes in Plasmodium falciparum transmission: precision, accuracy and costs of metrics.

    PubMed

    Tusting, Lucy S; Bousema, Teun; Smith, David L; Drakeley, Chris

    2014-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review 11 metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs and presenting an overall critique. We also review the nonlinear scaling relationships between five metrics of malaria transmission: the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our chapter highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, seroconversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection.

  16. Robust and precise baseline determination of distributed spacecraft in LEO

    NASA Astrophysics Data System (ADS)

    Allende-Alba, Gerardo; Montenbruck, Oliver

    2016-01-01

Recent experience with prominent formation flying missions in Low Earth Orbit (LEO), such as GRACE and TanDEM-X, has shown the feasibility of precise relative navigation at millimeter and sub-millimeter levels using GPS carrier phase measurements with fixed integer ambiguities. However, the robustness and availability of the solutions provided by current algorithms may be highly dependent on the mission profile. The main challenges in the LEO scenario are the resulting short continuous carrier-phase tracking arcs along with rapidly changing ionospheric conditions, which, particularly for long baselines, increase the difficulty of correct integer ambiguity resolution. To reduce the impact of these factors, the present study proposes a strategy based on reduced-dynamics filtering of dual-frequency GPS measurements for precise baseline determination, along with a dedicated scheme for integer ambiguity resolution consisting of a hybrid sequential/batch algorithm based on the maximum a posteriori and integer aperture estimators. The algorithms have been tested using flight data from the GRACE, TanDEM-X and Swarm missions in order to assess their robustness to different formation and baseline configurations. Results with the GRACE mission show an average 0.7 mm consistency with the K/Ka-band ranging measurements over a period of more than two years in a baseline configuration of 220 km. Results with TanDEM-X data show an average 3.8 mm consistency of kinematic and reduced-dynamic solutions in the along-track component over a period of 40 days in baseline configurations of 500 m and 75 km. Data from the Swarm A and Swarm C spacecraft are largely affected by atmospheric scintillation and contain half-cycle ambiguities. The results obtained under such conditions show an overall consistency between kinematic and reduced-dynamic solutions of 1.7 cm in the along-track component over a period of 30 days in a variable baseline of approximately 60

  17. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

In 2001 and 2002, three more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20-cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1-cm precision can now be achieved regularly every week, making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  18. Robustness and Accuracy in Sea Urchin Developmental Gene Regulatory Networks

    PubMed Central

    Ben-Tabou de-Leon, Smadar

    2016-01-01

Developmental gene regulatory networks robustly control the timely activation of regulatory and differentiation genes. The structure of these networks underlies their capacity to buffer intrinsic and extrinsic noise and maintain embryonic morphology. Here I illustrate how the use of specific architectures by the sea urchin developmental regulatory networks enables the robust control of cell fate decisions. The Wnt-βcatenin signaling pathway patterns the primary embryonic axis, while the BMP signaling pathway patterns the secondary embryonic axis, in the sea urchin embryo and across Bilateria. Interestingly, in the sea urchin, in both cases the signaling pathway that defines the axis directly controls the expression of a set of downstream regulatory genes. I propose that this direct activation of a set of regulatory genes enables a uniform regulatory response and a clear-cut cell fate decision in the endoderm and in the dorsal ectoderm. The specification of the mesodermal pigment cell lineage is activated by Delta signaling that initiates a triple positive feedback loop that locks down the pigment specification state. I propose that the use of compound positive feedback circuitry gives the endodermal cells enough time to turn off mesodermal genes and ensures a correct mesoderm vs. endoderm fate decision. Thus, I argue that understanding the control properties of repeatedly used regulatory architectures illuminates their role in embryogenesis and provides possible explanations for their resistance to evolutionary change. PMID:26913048

  19. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.

  20. Robust control of an active precision truss structure

    NASA Technical Reports Server (NTRS)

    Chu, C. C.; Smith, R. S.; Fanson, J. L.

    1990-01-01

    A description is given of the efforts in control of an active precision truss structure experiment. The control objective is to provide vibration suppression to selected modes of the structure subject to a bandlimited disturbance and modeling errors. Based on performance requirements and an uncertainty description, several control laws using the H-infinity optimization method are synthesized. The controllers are implemented on the experimental facility. Preliminary experimental results are presented.

  1. Robust adhesive precision bonding in automated assembly cells

    NASA Astrophysics Data System (ADS)

    Müller, Tobias; Haag, Sebastian; Bastuck, Thomas; Gisler, Thomas; Moser, Hansruedi; Uusimaa, Petteri; Axt, Christoph; Brecher, Christian

    2014-03-01

Diode lasers are gaining importance, reaching higher output powers along with improved beam parameter product (BPP). The assembly of micro-optics for diode laser systems carries the highest requirements for assembly precision. Assembly costs for micro-optics are driven by submicron alignment requirements and the corresponding challenges of adhesive bonding. For micro-optic assembly at the highest precision level, a major challenge of adhesive bonding is that the process is irreversible; the first bonding attempt must therefore succeed. Today's UV-curing adhesives exhibit shrinkage effects that are critical for the submicron tolerances of, e.g., FACs. The impact of shrinkage can be tackled by a suitable bonding-area design, such as minimal adhesive gaps and a shrinkage offset value adapted to the specific assembly parameters. Compensating shrinkage is difficult, as the shrinkage of UV-curing adhesives is not constant between two different lots and, as first test results indicate, varies over the storage period even under ideal circumstances. An up-to-date characterization of the adhesive therefore appears necessary for maximum precision in optics assembly, to reach the highest output yields, minimal tolerances, and ideal beam-shaping results. A measurement setup has therefore been built to precisely determine the current level of shrinkage. The goal is to provide this information to the operator or assembly cell so that the compensation offset can be adjusted on a daily basis. This is expected to improve beam-shaping results and enable first-time-right production.

  2. [Accuracy and precision in the evaluation of computer assisted surgical systems. A definition].

    PubMed

    Strauss, G; Hofer, M; Korb, W; Trantakis, C; Winkler, D; Burgert, O; Schulz, T; Dietz, A; Meixensberger, J; Koulechov, K

    2006-02-01

Accuracy represents the outstanding criterion for navigation systems. Surgeons have noticed a great discrepancy between values from the literature and system specifications on the one hand, and intraoperative accuracy on the other. A unitary understanding of the term accuracy does not exist in clinical practice. Furthermore, the terms precision and accuracy are often incorrectly equated in the literature. On top of this, clinical accuracy differs from mechanical (technical) accuracy. From a clinical point of view, we had to deal with remarkably many different terms all describing accuracy. This study has the goals of: 1. defining "accuracy" and related terms, 2. differentiating between "precision" and "accuracy", 3. deriving the term "surgical accuracy", and 4. recommending use of the term "surgical accuracy" for a navigation system. To a great extent, definitions were applied from the International Organization for Standardization (ISO) and the norm from the Deutsches Institut für Normung e.V. (DIN, the German Institute for Standardization). For defining surgical accuracy, the terms reference value, expectation, accuracy and precision are of major interest. Surgical accuracy should indicate the maximum deviation between test results and the reference value (true value), A(max), and additionally indicate the precision, P(surg). As a basis for measurements, a standardized technical model was used. Coordinates of the model were acquired by CT. To obtain statistically and clinically relevant results for head surgery, 50 measurements at distances of 50, 75, 100 and 150 mm from the centre of the registration geometry are adequate. In the future, we recommend labeling a system's overall performance with the following specifications: maximum accuracy deviation A(max), precision P, and information on the measurement method. This could be displayed on a seal of quality.
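The two quantities the record recommends reporting can be computed directly from a series of evaluation measurements. A minimal sketch, taking A(max) as the maximum absolute deviation from the reference value and P as one standard deviation (the measurement data are hypothetical):

```python
import numpy as np

def surgical_accuracy(measured, reference):
    """Summarise an evaluation series as the maximum absolute deviation
    from the reference value, A_max, and the precision P (1 SD)."""
    dev = np.asarray(measured) - reference
    return np.max(np.abs(dev)), np.std(dev, ddof=1)

# 50 hypothetical localisation errors (mm): systematic offset plus noise
rng = np.random.default_rng(1)
measurements = 1.2 + rng.normal(0.0, 0.4, 50)

a_max, p = surgical_accuracy(measurements, reference=0.0)
# A_max bounds the worst-case deviation; P alone would hide the offset.
```

The example shows why reporting both matters: a system can be precise (small P) while still carrying a large systematic deviation that only A(max) reveals.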

  3. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

    Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent from whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states i.e. decoding from different states is less state dependent in the adaptive threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526
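The mechanism can be illustrated with a toy simulation. The sketch below is not the authors' model: it is a minimal leaky integrate-and-fire neuron whose spike threshold relaxes toward a baseline, is raised after each spike, and, in the adaptive variant, also tracks the subthreshold membrane potential. All parameters and the noisy input are invented.

```python
import numpy as np

def simulate(adaptive=True, dt=0.1, t_end=200.0, seed=1):
    """Toy leaky integrate-and-fire neuron with an optionally adaptive threshold."""
    rng = np.random.default_rng(seed)
    v, theta = 0.0, 1.0              # membrane potential, spike threshold
    tau_v, tau_theta = 10.0, 50.0    # time constants (arbitrary units)
    spikes = []
    for i in range(int(t_end / dt)):
        drive = 1.5 + 0.5 * rng.standard_normal()   # noisy input current
        v += dt / tau_v * (drive - v)
        # the adaptive variant lets the threshold track the membrane state
        target = 1.0 + 0.3 * v if adaptive else 1.0
        theta += dt / tau_theta * (target - theta)
        if v >= theta:
            spikes.append(i * dt)
            v = 0.0                  # reset after the spike
            theta += 0.5             # classic post-spike threshold adaptation
    return spikes

print(len(simulate(adaptive=True)), len(simulate(adaptive=False)))
```

With the threshold tied to the recent membrane state, spiking depends less on slow offsets of the input and more on fast fluctuations, which is the intuition behind the state-robust encoding described above.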

  4. Accuracy and precision in measurements of biomass oxidative ratios

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.

    2005-12-01

    One fundamental property of the Earth system is the oxidative ratio (OR) of the terrestrial biosphere, or the mols CO2 fixed per mols O2 released via photosynthesis. This is also an essential, poorly constrained parameter in the calculation of the size of the terrestrial and oceanic carbon sinks via atmospheric O2 and CO2 measurements. We are pursuing a number of techniques to accurately measure natural variations in above- and below-ground OR. For aboveground biomass, OR can be calculated directly from percent C, H, N, and O data measured via elemental analysis; however, the precision of this technique is a function of 4 measurements, resulting in increased data variability. It is also possible to measure OR via bomb calorimetry and percent C, using relationships between the heat of combustion of a sample and its OR. These measurements hold the potential for generation of more precise data, as error depends only on 2 measurements instead of 4. We present data comparing these two OR measurement techniques.
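For the elemental-analysis route, the calculation can be sketched as follows. This assumes simple complete-oxidation stoichiometry, CcHhOo + (c + h/4 - o/2) O2 -> c CO2 + (h/2) H2O, so the O2:CO2 exchange ratio is 1 + h/(4c) - o/(2c) on a molar basis; nitrogen is ignored here, although the abstract notes the real method also uses %N. The example composition is invented, roughly resembling cellulose.

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def oxidative_ratio(pct_c, pct_h, pct_o):
    """O2:CO2 exchange ratio from mass-percent C, H, O (nitrogen ignored)."""
    c = pct_c / ATOMIC_MASS["C"]      # mol per 100 g of sample
    h = pct_h / ATOMIC_MASS["H"]
    o = pct_o / ATOMIC_MASS["O"]
    return 1.0 + h / (4.0 * c) - o / (2.0 * c)

# Invented composition roughly like cellulose (C6H10O5), for which the
# ratio should come out near 1.0
print(round(oxidative_ratio(44.4, 6.2, 49.4), 3))
```

Because the result propagates errors from several measured percentages, its precision degrades with each additional input, which is the variability issue the abstract raises for the four-measurement elemental-analysis technique.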

  5. Accuracy of GIPSY PPP from version 6.2: a robust method to remove outliers

    NASA Astrophysics Data System (ADS)

    Hayal, Adem G.; Sanli, D. Ugur

    2014-05-01

    In this paper we assess the accuracy of GIPSY PPP from the latest version, version 6.2. As the research community prepares for real-time PPP, it is worth revisiting the accuracy of static GPS from the latest version of this well-established research software, the first of its kind. Although the results do not differ significantly from the previous version, 6.1.1, we still observe a slight improvement in the vertical component due to the enhanced second-order ionospheric modeling introduced with the latest version. In this study, however, we turned our attention to outlier detection. Outliers usually occur among solutions from shorter observation sessions and degrade the quality of the accuracy modeling. In our previous analysis from version 6.1.1, we argued that eliminating outliers with the traditional method was cumbersome, since repeated trials were needed and subjectivity that could affect the statistical significance of the solutions might have crept into the results (Hayal and Sanli, 2013). Here we overcome this problem using a robust outlier elimination method. The median is perhaps the simplest of the robust outlier detection methods in terms of applicability, and arguably the most efficient, having the highest possible breakdown point. In our analysis we used a slightly modified version of the median method, as introduced in Tut et al. (2013). We were thus able to remove suspected outliers in a single run; with the traditional methods, removing them from the solutions produced by the latest version of the software would have been more problematic. References: Hayal, A. G., Sanli, D. U., Accuracy of GIPSY PPP from version 6, GNSS Precise Point Positioning Workshop: Reaching Full Potential, Vol. 1, pp. 41-42 (2013); Tut, İ., Sanli, D. U., Erdogan, B., Hekimoglu, S., Efficiency of BERNESE single baseline rapid static positioning solutions with SEARCH strategy, Survey Review, Vol. 45, Issue 331.
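A median-based eliminator of the general kind described can be sketched as follows. This is not the Tut et al. variant; the cutoff k and the sample values are invented, and the scaled median absolute deviation (MAD) stands in for whatever spread estimate the authors used.

```python
import numpy as np

def remove_outliers_mad(x, k=3.0):
    """Single-pass robust outlier removal: flag values farther than k
    scaled MADs from the median (breakdown point ~50%)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # ~sigma for normal data
    keep = np.abs(x - med) <= k * mad
    return x[keep], x[~keep]

# Invented height solutions (m) from short observation sessions
heights = np.array([101.2, 101.4, 101.3, 101.5, 97.8, 101.3, 105.9, 101.4])
clean, outliers = remove_outliers_mad(heights)
print(outliers)   # both gross errors are caught in one run
```

Because the median and MAD are barely affected by the outliers themselves, the suspect values can be flagged in a single pass, avoiding the repeated trial-and-error elimination the abstract describes for the traditional approach.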

  6. Highly precise and robust packaging of optical components

    NASA Astrophysics Data System (ADS)

    Leers, Michael; Winzen, Matthias; Liermann, Erik; Faidel, Heinrich; Westphalen, Thomas; Miesner, Jörn; Luttmann, Jörg; Hoffmann, Dieter

    2012-03-01

    In this paper we present the development of a compact, thermo-optically stable mounting technique, resistant to vibration and mechanical shock, based on soldering of optical components. Based on this technique, a new generation of laser sources for aerospace applications is designed. In these laser systems, soldering replaces the glued and bolted connections between optical component, mount and base plate. The main challenges are alignment precision in the arc-second range and long-term stability of every single part in the laser system. At the Fraunhofer Institute for Laser Technology ILT, a soldering and mounting technique has been developed for high-precision packaging. The specified environmental boundary conditions (e.g., a temperature range of -40 °C to +50 °C) and the required degrees of freedom for the alignment of the components have been taken into account for this technique. In general, the advantage of soldering compared to gluing is the absence of outgassing; in addition, no flux is needed in our special process. The joining process allows multiple alignments by remelting the solder. Alignment is done in the liquid phase of the solder by a six-axis manipulator with a step width in the nm range and a tilt in the arc-second range. In a next step, the optical components have to pass the environmental tests. The total misalignment of a component relative to its adapter after the thermal cycle tests is less than 10 arc seconds. The mechanical stability tests regarding shear, vibration and shock behavior are well within the requirements.

  7. Gamma-Ray Peak Integration: Accuracy and Precision

    SciTech Connect

    Richard M. Lindstrom

    2000-11-12

    The accuracy of singlet gamma-ray peak areas obtained by a peak analysis program is immaterial: if the same algorithm is used for sample measurement as for calibration, and if the peak shapes are similar, then biases in the integration method cancel. Reproducibility is the only important issue. Even the uncertainty of the areas computed by the program is trivial, because the true standard uncertainty can be assessed experimentally by repeated measurements of the same source. Reproducible peak integration was important in a recent standard reference material certification task. The primary tool used for spectrum analysis was SUM, a National Institute of Standards and Technology interactive program that sums peak channels and subtracts a linear background, using the same channels to integrate all 20 spectra. For comparison, this work examines other peak integration programs. Unlike some published comparisons of peak-fitting performance in which synthetic spectra were used, this experiment used spectra collected for a real (though exacting) analytical project, analyzed by conventional software used in routine ways. Because both components of the 559- to 564-keV doublet are from {sup 76}As, they were integrated together with SUM; the other programs, however, deconvoluted the peaks. A sensitive test of a fitting algorithm is the ratio of the reported peak areas. In almost all cases, this ratio was much more variable than expected from the uncertainties reported by the program. Other comparisons to be reported indicate that peak integration is still an imperfect tool in the analysis of gamma-ray spectra.
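A SUM-like integration (sum the channels under the peak, subtract a linear background estimated from flanking channels) can be sketched as below. This is an illustration of the approach, not the NIST program itself; the synthetic spectrum, channel limits and background-window width are invented.

```python
import numpy as np

def sum_peak(counts, lo, hi, nbg=3):
    """Net peak area over channels lo..hi, subtracting a linear background
    estimated from nbg channels on either side of the peak."""
    counts = np.asarray(counts, dtype=float)
    left = counts[lo - nbg:lo].mean()
    right = counts[hi + 1:hi + 1 + nbg].mean()
    nchan = hi - lo + 1
    gross = counts[lo:hi + 1].sum()
    net = gross - 0.5 * (left + right) * nchan   # trapezoidal background
    # counting statistics: var(gross) plus variance of the background estimate
    var_net = gross + (nchan / 2.0) ** 2 * (left + right) / nbg
    return net, np.sqrt(var_net)

rng = np.random.default_rng(2)
chan = np.arange(200)
true_area = 500 * 3.0 * np.sqrt(2 * np.pi)   # area of the simulated peak
spectrum = rng.poisson(100 + 500 * np.exp(-0.5 * ((chan - 100) / 3.0) ** 2))
area, sigma = sum_peak(spectrum, 90, 110)
print(f"net area = {area:.0f} +/- {sigma:.0f} (true ~ {true_area:.0f})")
```

Because the same channel limits would be applied to every spectrum, any bias in this integration is constant and cancels between sample and calibration, which is the argument the abstract makes for preferring reproducibility over absolute accuracy.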

  8. Accuracy improvement techniques in Precise Point Positioning method using multiple GNSS constellations

    NASA Astrophysics Data System (ADS)

    Psychas, Dimitrios Vasileios; Delikaraoglou, Demitris

    2016-04-01

    The future Global Navigation Satellite Systems (GNSS), including modernized GPS, GLONASS, Galileo and BeiDou, offer three or more signal carriers for civilian use and many more redundant observables. The additional frequencies can significantly improve the capabilities of the traditional geodetic techniques based on GPS signals at two frequencies, especially with regard to the availability, accuracy, interoperability and integrity of high-precision GNSS applications. Furthermore, highly redundant measurements allow robust simultaneous estimation of static or mobile user states, including additional parameters such as real-time tropospheric biases, and more reliable ambiguity resolution estimates. This paper presents an investigation and analysis of accuracy improvement techniques in the Precise Point Positioning (PPP) method using signals from the fully operational (GPS and GLONASS) as well as the emerging (Galileo and BeiDou) GNSS systems. The main aim was to determine the improvement in both the positioning accuracy achieved and the convergence time required to reach geodetic-level (10 cm or less) accuracy. To this end, freely available observation data from the recent Multi-GNSS Experiment (MGEX) of the International GNSS Service, as well as the open source program RTKLIB, were used. Following a brief background of the PPP technique and the scope of MGEX, the paper outlines the various observational scenarios that were used to test various data processing aspects of PPP solutions with multi-frequency, multi-constellation GNSS systems. Results from the processing of multi-GNSS observation data from selected permanent MGEX stations are presented, and useful conclusions and recommendations for further research are drawn. As shown, data fusion from the GPS, GLONASS, Galileo and BeiDou systems is becoming increasingly significant, resulting in an increase in position accuracy (mostly in the less favorable East direction) and a large reduction of convergence
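Convergence time, one of the two quantities examined above, can be computed from a position-error series as sketched below. The function name, the simulated error decay and the epoch spacing are invented; only the 10 cm threshold follows the geodetic-level criterion mentioned in the abstract.

```python
import numpy as np

def convergence_time(errors_m, epochs_s, threshold=0.10):
    """First epoch after which the position error stays below `threshold`
    (meters) for the rest of the session; returns None if never reached."""
    below = errors_m < threshold
    for i in range(len(below)):
        if below[i:].all():
            return epochs_s[i]
    return None

# Simulated PPP error decay: starts near 1 m, converges toward ~3 cm
epochs = np.arange(0, 3600, 30)            # 30 s epochs, 1 h session
errors = 0.03 + 1.0 * np.exp(-epochs / 600.0)
print(convergence_time(errors, epochs))    # epoch (s) of 10 cm convergence
```

Requiring the error to stay below the threshold, rather than merely touch it once, avoids counting a transient dip before an ambiguity re-convergence as "converged".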

  9. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

    We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines, and we outline its precision and the factors limiting its accuracy.

  10. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  11. Accuracy and precision of silicon based impression media for quantitative areal texture analysis.

    PubMed

    Goodall, Robert H; Darras, Laurent P; Purnell, Mark A

    2015-05-20

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  12. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  13. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.

  14. Evaluation of optoelectronic Plethysmography accuracy and precision in recording displacements during quiet breathing simulation.

    PubMed

    Massaroni, C; Schena, E; Saccomandi, P; Morrone, M; Sterzi, S; Silvestri, S

    2015-08-01

    Opto-electronic Plethysmography (OEP) is a motion analysis system used to measure chest wall kinematics and to indirectly evaluate respiratory volumes during breathing. Its working principle is based on computing the displacements of markers placed on the chest wall. This work aims at evaluating the accuracy and precision of OEP in measuring displacements in the range of human chest wall displacement during quiet breathing. OEP performance was investigated using a fully programmable chest wall simulator (CWS). The CWS was programmed to move its eight shafts 10 times over the range of physiological displacement (i.e., between 1 mm and 8 mm) at three different frequencies (0.17 Hz, 0.25 Hz, 0.33 Hz). Experiments were performed to: (i) evaluate OEP accuracy and precision error in recording displacement in the overall calibrated volume and in three sub-volumes, and (ii) evaluate the volume measurement accuracy implied by OEP's accuracy in measuring linear displacements. OEP showed an accuracy better than 0.08 mm in all trials over the whole 2 m(3) calibrated volume. The mean measurement discrepancy was 0.017 mm. The precision error, expressed as the ratio between measurement uncertainty and the displacement recorded by OEP, was always lower than 0.55%. Volume overestimation due to OEP linear measurement accuracy was always < 12 mL (< 3.2% of total volume) across all settings. PMID:26736504

  15. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival, and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking, and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet; it confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women, but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.

  16. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  17. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

    Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures serve as quality assurance routines that allow verification of the equipment's geometrical accuracy and precision. In this work, mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The largest contribution to mechanical variation comes from table rotation, so it is important to correct such variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows statistical errors to be accounted for in the treatment planning volume margins.

  18. Increasing average period lengths by switching of robust chaos maps in finite precision

    NASA Astrophysics Data System (ADS)

    Nagaraj, N.; Shastry, M. C.; Vaidya, P. G.

    2008-12-01

    Grebogi, Ott and Yorke (Phys. Rev. A 38, 1988) have investigated the effect of finite precision on the average period length of chaotic maps. They showed that the average length of periodic orbits (T) of a dynamical system scales as a function of computer precision (ε) and the correlation dimension (d) of the chaotic attractor: T ~ ε^(-d/2). In this work, we are concerned with increasing the average period length, which is desirable for chaotic cryptography applications. Our experiments reveal that random and chaotic switching of deterministic chaotic dynamical systems yields a higher average length of periodic orbits than simple sequential switching or the absence of switching. To illustrate the application of switching, a novel generalization of the Logistic map that exhibits Robust Chaos (absence of attracting periodic orbits) is first introduced. We then propose a pseudo-random number generator based on chaotic switching between Robust Chaos maps, which is found to successfully pass stringent statistical tests of randomness.
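The finite-precision effect can be probed numerically. The sketch below measures orbit lengths of unit-interval maps iterated on a 2^16-point grid, comparing single maps against a composite that alternates two maps each step. It only illustrates the measurement: the stand-in logistic and sine maps are not the authors' Robust Chaos family, and the fixed alternation is not their random or chaotic switching scheme.

```python
import math
import random

def orbit_length(step, x0, bits=16):
    """Iterate a unit-interval map on a grid of 2**bits points until a state
    repeats; return the number of distinct states visited (transient + cycle)."""
    scale = 2 ** bits
    seen = set()
    x, n = x0, 0
    while x not in seen:
        seen.add(x)
        x = round(step(x / scale) * scale) % scale
        n += 1
    return n

logistic = lambda x: 4.0 * x * (1.0 - x)
sine = lambda x: math.sin(math.pi * x)
switched = lambda x: sine(logistic(x))   # alternate both maps in one composite step

random.seed(3)
starts = [random.randrange(1, 2 ** 16) for _ in range(50)]
for name, f in [("logistic", logistic), ("sine", sine), ("switched", switched)]:
    avg = sum(orbit_length(f, x0) for x0 in starts) / len(starts)
    print(name, round(avg, 1))
```

Averaging over many starting states is what the Grebogi-Ott-Yorke scaling refers to; with these particular stand-in maps the ordering of the averages need not match the paper's result for its Robust Chaos maps.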

  20. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
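The kind of precision experiment described can be reproduced in miniature. The sketch below uses the standard single-tier Lorenz '96 model, not the paper's three-tier extension, with invented parameters: the same initial state is integrated in float64, float32 and float16, and the reduced-precision forecasts are scored against the float64 run.

```python
import numpy as np

def l96_step(x, dt=0.01, F=8.0):
    """One Euler step of the single-tier Lorenz '96 model, in x's own dtype."""
    dtype = x.dtype
    dxdt = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + dtype.type(F)
    return (x + dtype.type(dt) * dxdt).astype(dtype)

def forecast(x0, steps, dtype):
    x = x0.astype(dtype)
    for _ in range(steps):
        x = l96_step(x)
    return x.astype(np.float64)

rng = np.random.default_rng(4)
x0 = 8.0 + rng.standard_normal(40)        # 40 variables near the fixed point
truth = forecast(x0, 500, np.float64)     # reference run in double precision
for dtype in (np.float32, np.float16):
    err = np.sqrt(np.mean((forecast(x0, 500, dtype) - truth) ** 2))
    print(dtype.__name__, f"RMSE = {err:.3g}")
```

Because the system is chaotic, rounding errors grow until they saturate; the interesting question, as above, is whether the precision saved on less important variables buys more skill when reinvested in resolution.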

  1. Comparison between predicted and actual accuracies for an Ultra-Precision CNC measuring machine

    SciTech Connect

    Thompson, D.C.; Fix, B.L.

    1995-05-30

    At the 1989 CIRP annual meeting, we reported on the design of a specialized, ultra-precision CNC measuring machine, and on the error budget that was developed to guide the design process. In our paper we proposed a combinatorial rule for merging estimated and/or calculated values for all known sources of error, to yield a single overall predicted accuracy for the machine. In this paper we compare our original predictions with measured performance of the completed instrument.
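The abstract does not spell out the combinatorial rule, so the sketch below uses one common choice for merging independent error sources into a single predicted accuracy: combination in quadrature (root-sum-square). The source names and magnitudes are invented for illustration.

```python
import math

# Invented, independent 1-sigma error sources for a measuring machine (um)
error_sources_um = {
    "scale calibration": 0.05,
    "Abbe offset": 0.08,
    "thermal drift": 0.10,
    "probe repeatability": 0.04,
}

# Root-sum-square combination: appropriate when sources are independent;
# correlated or systematic terms would instead add linearly
predicted = math.sqrt(sum(e ** 2 for e in error_sources_um.values()))
print(f"predicted accuracy = {predicted:.3f} um")
```

A budget like this is what gets compared against the measured performance of the completed instrument; a large gap in either direction suggests a missing source or an overly conservative estimate.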

  2. Precision and accuracy of 3D lower extremity residua measurement systems

    NASA Astrophysics Data System (ADS)

    Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.

    1996-04-01

    Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.

  3. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

    In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the {sup 75}Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  4. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

    Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive requirements on alignment capability. Depending on the optical system, focal plane flatness of less than 25 μm (.001") is required over the transition from ambient to cooled operating temperatures, and this flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of integrating the detector into the focal plane module and housing assemblies, the methodology used to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing measurement transfers to the housing. Proper material selection requires materials with matched coefficients of thermal expansion, which minimize both the physical shift over temperature and the stress induced in the detector. When required, co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the equipment's accuracy, repeatability and the precision of the measurements.

  5. Integrative fitting of absorption line profiles with high accuracy, robustness, and speed

    NASA Astrophysics Data System (ADS)

    Skrotzki, Julian; Habig, Jan Christoph; Ebert, Volker

    2014-08-01

    The principle of the integrative evaluation of absorption line profiles relies on the numeric integration of absorption line signals to retrieve absorber concentrations, e.g., of trace gases. Thus, it is a fast and robust technique. However, previous implementations of the integrative evaluation principle showed shortcomings in terms of accuracy and the lack of a fit quality indicator. This has motivated the development of an advanced integrative (AI) fitting algorithm. The AI fitting algorithm retains the advantages of previous integrative implementations—robustness and speed—and is able to achieve high accuracy by introduction of a novel iterative fitting process. A comparison of the AI fitting algorithm with the widely used Levenberg-Marquardt (LM) fitting algorithm indicates that the AI algorithm has advantages in terms of robustness due to its independence from appropriately chosen start values for the initialization of the fitting process. In addition, the AI fitting algorithm shows speed advantages typically resulting in a factor of three to four shorter computational times on a standard personal computer. The LM algorithm on the other hand retains advantages in terms of a much higher flexibility, as the AI fitting algorithm is restricted to the evaluation of single absorption lines with precomputed line width. Comparing both fitting algorithms for the specific application of in situ laser hygrometry at 1,370 nm using direct tunable diode laser absorption spectroscopy (TDLAS) suggests that the accuracy of the AI algorithm is equivalent to that of the LM algorithm. For example, a signal-to-noise ratio of 80 and better typically yields a deviation of <1 % between both fitting algorithms. The properties of the AI fitting algorithm make it an interesting alternative if robustness and speed are crucial in an application and if the restriction to a single absorption line is possible. These conditions are fulfilled for the 1,370 nm TDLAS hygrometry at the
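
    The integrative principle described above can be sketched in a few lines: for an isolated line of known shape, the integrated signal is proportional to absorber density, and no start values are needed. The Lorentzian profile, frequency axis, and noise level below are invented for illustration:

```python
import numpy as np

# Integrative evaluation sketch: retrieve the line area (proportional to
# absorber concentration) by direct numeric integration. Values invented.
def lorentzian(x, x0, gamma, area):
    return area * (gamma / np.pi) / ((x - x0) ** 2 + gamma ** 2)

x = np.linspace(-20.0, 20.0, 4001)        # relative frequency axis
true_area = 0.8                           # proportional to concentration
rng = np.random.default_rng(0)
signal = lorentzian(x, 0.0, 0.5, true_area) + rng.normal(0, 1e-3, x.size)

# trapezoidal integration -- the "integrative" step, no fit required
dx = x[1] - x[0]
retrieved_area = float(np.sum(0.5 * (signal[1:] + signal[:-1])) * dx)
```

    The small deficit from truncating the Lorentzian wings at the window edge is one reason a practical implementation corrects the raw integral using the precomputed line width, as the AI algorithm above does.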

  6. Multiple ping sonar accuracy improvement using robust motion estimation and ping fusion.

    PubMed

    Yu, Lian; Neretti, Nicola; Intrator, Nathan

    2006-04-01

    Noise degrades the accuracy of sonar systems. We demonstrate a practical method for increasing the effective signal-to-noise ratio (SNR) by fusing time delay information from a burst of multiple sonar pings. This approach is useful when there is no relative motion between the sonar and the target during the burst; otherwise, the relative motion degrades the fusion and has to be addressed before fusion can be used. In this paper, we present a robust motion estimation algorithm that uses information from multiple receivers to estimate the relative motion between pings in the burst. We then compensate for the motion and show that fusing information from the burst of motion-compensated pings improves both the resilience to noise and sonar accuracy, consequently increasing the operating range of the sonar system.
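
    Under the static-target assumption above (motion already estimated and compensated), ping fusion can be sketched as a robust combination of per-ping time-delay estimates from cross-correlation. The pulse, record length, and noise level are invented:

```python
import numpy as np

# Burst-fusion sketch: estimate the echo delay of each ping by
# cross-correlation, then fuse with a robust (median) combination.
# Assumes a static target, i.e. motion already compensated.
rng = np.random.default_rng(1)
ping = rng.standard_normal(64)          # transmitted pulse (pseudo-noise)
true_delay = 300                        # samples
n_samples = 1024

def estimate_delay(noise_sd):
    echo = np.zeros(n_samples)
    echo[true_delay:true_delay + ping.size] = ping
    echo += rng.normal(0, noise_sd, n_samples)
    corr = np.correlate(echo, ping, mode="valid")
    return int(np.argmax(corr))

# Fuse a burst of 15 noisy pings:
estimates = [estimate_delay(noise_sd=1.0) for _ in range(15)]
fused_delay = int(np.median(estimates))
```

    An alternative with the same flavour is to sum the per-ping correlograms before taking the argmax, which raises the effective SNR by roughly the square root of the number of pings.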

  7. Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kohen, Hamid

    1997-01-01

    This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.
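
    The robustness idea in the abstract (optimize against perturbations, not just the nominal case) can be shown with a toy GA in which fitness is the average error over random perturbations. The one-dimensional error surface, population sizes, and mutation scale are all invented stand-ins for the flight-path problem:

```python
import random

random.seed(3)

def landing_error(x, perturb):
    # toy stand-in for the entry-dynamics simulation; optimum at x = 2
    return (x - 2.0 + perturb) ** 2

def expected_error(x, n=50):
    # Monte Carlo average over atmosphere/entry-condition perturbations
    return sum(landing_error(x, random.gauss(0, 0.5)) for _ in range(n)) / n

pop = [random.uniform(-5.0, 5.0) for _ in range(30)]
for _ in range(40):                       # generations
    pop.sort(key=expected_error)          # rank by robust (average) fitness
    parents = pop[:10]                    # truncation selection
    pop = parents + [random.choice(parents) + random.gauss(0, 0.3)
                     for _ in range(20)]  # mutation-only offspring
best = min(pop, key=expected_error)       # converges near x = 2
```

    Because fitness is averaged over perturbations, the GA favours solutions that fail gracefully rather than ones that are optimal only under nominal conditions.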

  8. Accuracy and precision of ice stream bed topography derived from ground-based radar surveys

    NASA Astrophysics Data System (ADS)

    King, Edward

    2016-04-01

    There is some confusion within the glaciological community as to the accuracy of the basal topography derived from radar measurements. A number of texts and papers state that basal topography cannot be determined to better than one quarter of the wavelength of the radar system. On the other hand King et al (Nature Geoscience, 2009) claimed that features of the bed topography beneath Rutford Ice Stream, Antarctica can be distinguished to +/- 3m using a 3 MHz radar system (which has a quarter wavelength of 14m in ice). These statements of accuracy are mutually exclusive. I will show in this presentation that the measurement of ice thickness is a radar range determination to a single strongly-reflective target. This measurement has much higher accuracy than the resolution of two targets of similar reflection strength, which is governed by the quarter-wave criterion. The rise time of the source signal and the sensitivity and digitisation interval of the recording system are the controlling criteria on radar range accuracy. A dataset from Pine Island Glacier, West Antarctica will be used to illustrate these points, as well as the repeatability or precision of radar range measurements, and the influence of gridding parameters and positioning accuracy on the final DEM product.
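
    The distinction drawn above is easy to put in numbers. With the abstract's 3 MHz system and a radio-wave speed in ice of about 1.68e8 m/s, the quarter-wavelength (two-target resolution) limit is 14 m, while single-target range precision is set by timing; the 10 ns digitisation interval below is an assumed value for illustration:

```python
V_ICE = 1.68e8        # radio-wave speed in ice, m/s
FREQ = 3e6            # radar centre frequency from the abstract, Hz

wavelength = V_ICE / FREQ          # 56 m
quarter_wave = wavelength / 4      # 14 m: two-target resolution limit

# Single-target range precision is governed by timing, not wavelength.
# Assume a 10 ns digitisation interval (illustrative):
DT = 10e-9
range_step = V_ICE * DT / 2        # ~0.84 m per sample (two-way travel)
```

    Sub-metre timing resolution is consistent with the ±3 m bed-topography claim, even though 14 m would be the limit for separating two reflectors of similar strength.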

  9. Wound Area Measurement with Digital Planimetry: Improved Accuracy and Precision with Calibration Based on 2 Rulers

    PubMed Central

    Foltynski, Piotr

    2015-01-01

    Introduction In the treatment of chronic wounds, the change in wound surface area over time is a useful parameter for assessing the applied therapy plan. The more precise the method of wound area measurement, the earlier an inappropriate treatment plan can be identified and changed. Digital planimetry may be used in wound area measurement and therapy assessment when it is properly applied, but a common problem is the camera lens orientation while taking the picture. The camera lens axis should be perpendicular to the wound plane; if it is not, the measured area differs from the true area. Results The current study shows that using 2 rulers placed in parallel below and above the wound for calibration increases the precision of area measurement on average 3.8-fold compared with measurement calibrated with one ruler. The proposed calibration procedure also increases the accuracy of area measurement 4-fold. It was also shown that wound area range and camera type do not influence the precision of area measurement with digital planimetry based on two-ruler calibration; however, measurements based on a smartphone camera were significantly less accurate than those based on D-SLR or compact cameras. Area measurement on a flat surface was more precise with digital planimetry with 2 rulers than with the Visitrak device, the Silhouette Mobile device, or the AreaMe software-based method. Conclusion Calibration with 2 rulers in digital planimetry remarkably increases the precision and accuracy of measurement and should therefore be recommended instead of calibration based on a single ruler. PMID:26252747
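
    The two-ruler correction above can be sketched as simple scale averaging: under camera tilt the pixel scale varies across the scene, and averaging the scales recovered from a ruler below and a ruler above the wound cancels that variation to first order. All pixel counts are invented:

```python
# Two-ruler calibration sketch. A tilted camera makes the near edge of the
# scene span more pixels per cm than the far edge; averaging both ruler
# scales approximates the scale at the wound itself. Values invented.
ruler_cm = 5.0
px_bottom = 260.0   # pixels spanned by the 5 cm ruler below the wound (near)
px_top = 240.0      # pixels spanned by the 5 cm ruler above the wound (far)

scale_two_rulers = ruler_cm / ((px_bottom + px_top) / 2)   # cm per pixel
scale_one_ruler = ruler_cm / px_bottom                     # bottom ruler only

wound_area_px = 10_000.0
area_two = wound_area_px * scale_two_rulers ** 2   # ~4.0 cm^2
area_one = wound_area_px * scale_one_ruler ** 2    # underestimates the area
```

    In this toy geometry the true mid-height scale is exactly the average of the two ruler scales, so the single-ruler result is biased low while the two-ruler result is not.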

  10. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
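
    A fractional-factorial layout of the kind mentioned can be generated in a few lines. The sketch below builds a 2^(5-1) design for five two-level factors using the defining relation E = ABCD; the factor names are hypothetical, and the split-plot structure from the abstract is not modelled:

```python
from itertools import product

# 2**(5-1) fractional factorial via the generator E = ABCD.
# Factor names are hypothetical incubation-time variables.
factors = ["coat", "block", "primary", "secondary", "substrate"]
runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    e = a * b * c * d              # generator: E = ABCD
    runs.append(dict(zip(factors, (a, b, c, d, e))))
# 16 runs instead of the 32 of the full 2**5 design
```

    Sixteen runs replace the 32 of the full factorial while keeping main effects unconfounded with two-factor interactions (resolution V), which is what makes such designs attractive for quick robot-side optimization.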

  11. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

    Sampling by spatially replicated counts (point counts) is an increasingly popular method of estimating the population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the roles of the number of samples, sample-unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with few sample units of large area. However, scenarios with few sample units of large area provided more precise abundance estimates than those derived from scenarios with many sample units of small area. It is important to consider the accuracy and precision of abundance estimates during the sample design process, with study goals and objectives fully recognized; too often, and with consequence, such consideration is instead an afterthought that occurs during the data analysis process.

  12. Automated Gravimetric Calibration to Optimize the Accuracy and Precision of TECAN Freedom EVO Liquid Handler.

    PubMed

    Bessemans, Laurent; Jully, Vanessa; de Raikem, Caroline; Albanese, Mathieu; Moniotte, Nicolas; Silversmet, Pascal; Lemoine, Dominique

    2016-10-01

    High-throughput screening technologies are increasingly integrated into the formulation development process of biopharmaceuticals. The performance of liquid handling systems is dependent on the ability to deliver accurate and precise volumes of specific reagents to ensure process quality. We have developed an automated gravimetric calibration procedure to adjust the accuracy and evaluate the precision of the TECAN Freedom EVO liquid handling system. Volumes from 3 to 900 µL using calibrated syringes and fixed tips were evaluated with various solutions, including aluminum hydroxide and phosphate adjuvants, β-casein, sucrose, sodium chloride, and phosphate-buffered saline. The methodology to set up liquid class pipetting parameters for each solution was to split the process in three steps: (1) screening of predefined liquid class, including different pipetting parameters; (2) adjustment of accuracy parameters based on a calibration curve; and (3) confirmation of the adjustment. The run of appropriate pipetting scripts, data acquisition, and reports until the creation of a new liquid class in EVOware was fully automated. The calibration and confirmation of the robotic system was simple, efficient, and precise and could accelerate data acquisition for a wide range of biopharmaceutical applications. PMID:26905719
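
    Step (2) of the methodology, adjusting accuracy from a calibration curve, amounts to fitting a line between the commanded volume and the gravimetrically measured volume and then inverting it. The volumes below are invented, and the balance reading is assumed to be already density-corrected:

```python
# Calibration-curve sketch: least-squares line through commanded vs
# measured volumes, inverted to find the command for a target volume.
# All data points are invented.
commanded = [10.0, 50.0, 100.0, 300.0, 600.0]    # uL requested
measured = [9.2, 47.5, 96.0, 291.0, 584.0]       # uL delivered (gravimetric)

n = len(commanded)
mx = sum(commanded) / n
my = sum(measured) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(commanded, measured))
         / sum((x - mx) ** 2 for x in commanded))
intercept = my - slope * mx

def command_for(target_ul):
    # invert the fitted line: which command delivers the target volume?
    return (target_ul - intercept) / slope
```

    Here command_for(100.0) returns roughly 104 µL, the adjusted command expected to deliver a true 100 µL; the confirmation run in step (3) would then verify the adjustment.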

  13. The tradeoff between accuracy and precision in latent variable models of mediation processes

    PubMed Central

    Ledgerwood, Alison; Shrout, Patrick E.

    2016-01-01

    Social psychologists place high importance on understanding mechanisms, and frequently employ mediation analyses to shed light on the process underlying an effect. Such analyses can be conducted using observed variables (e.g., a typical regression approach) or latent variables (e.g., a SEM approach), and choosing between these methods can be a more complex and consequential decision than researchers often realize. The present paper adds to the literature on mediation by examining the relative tradeoff between accuracy and precision in latent versus observed variable modeling. Whereas past work has shown that latent variable models tend to produce more accurate estimates, we demonstrate that observed variable models tend to produce more precise estimates, and examine this relative tradeoff both theoretically and empirically in a typical three-variable mediation model across varying levels of effect size and reliability. We discuss implications for social psychologists seeking to uncover mediating variables, and recommend practical approaches for maximizing both accuracy and precision in mediation analyses. PMID:21806305

  15. Accuracy, precision, usability, and cost of free chlorine residual testing methods.

    PubMed

    Murray, Anna; Lantagne, Daniele

    2015-03-01

    Chlorine is the most widely used disinfectant worldwide, partially because residual protection is maintained after treatment. This residual is measured using colorimetric test kits varying in accuracy, precision, training required, and cost. Seven commercially available colorimeters, color wheel and test tube comparator kits, pool test kits, and test strips were evaluated for use in low-resource settings by: (1) measuring in quintuplicate 11 samples from 0.0-4.0 mg/L free chlorine residual in laboratory and natural light settings to determine accuracy and precision; (2) conducting volunteer testing where participants used and evaluated each test kit; and (3) comparing costs. Laboratory accuracy ranged from 5.1-40.5% measurement error, with colorimeters the most accurate and test strip methods the least. Variation between laboratory and natural light readings occurred with one test strip method. Volunteer participants found test strip methods easiest and color wheel methods most difficult, and were most confident in the colorimeter and least confident in test strip methods. Costs range from 3.50-444 USD for 100 tests. Application of a decision matrix found colorimeters and test tube comparator kits were most appropriate for use in low-resource settings; it is recommended users apply the decision matrix themselves, as the appropriate kit might vary by context.
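
    The "measurement error" figures above are mean absolute percent errors of replicate readings against a known reference concentration. A minimal sketch with invented replicate readings:

```python
# Mean absolute percent error against a reference concentration.
# Replicate readings below are invented for illustration.
def percent_error(readings, reference):
    return 100.0 * sum(abs(r - reference) for r in readings) / (len(readings) * reference)

colorimeter = [1.95, 2.02, 2.05, 1.98, 2.04]   # mg/L, reference 2.0 mg/L
test_strip = [1.5, 2.5, 3.0, 1.8, 2.2]

err_colorimeter = percent_error(colorimeter, 2.0)   # ~2 %
err_strip = percent_error(test_strip, 2.0)          # ~24 %
```

    The invented colorimeter replicates come out near 2% error and the strips near 24%, mirroring the ordering (though not the exact values) reported above.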

  16. Accuracy and precision of stream reach water surface slopes estimated in the field and from maps

    USGS Publications Warehouse

    Isaak, D.J.; Hubert, W.A.; Krueger, K.L.

    1999-01-01

    The accuracy and precision of five tools used to measure stream water surface slope (WSS) were evaluated. Water surface slopes estimated in the field with a clinometer or from topographic maps used in conjunction with a map wheel or geographic information system (GIS) were significantly higher than WSS estimated in the field with a surveying level (biases of 34, 41, and 53%, respectively). Accuracy of WSS estimates obtained with an Abney level did not differ from surveying level estimates, but conclusions regarding the accuracy of Abney levels and clinometers were weakened by intratool variability. The surveying level estimated WSS most precisely (coefficient of variation [CV] = 0.26%), followed by the GIS (CV = 1.87%), map wheel (CV = 6.18%), Abney level (CV = 13.68%), and clinometer (CV = 21.57%). Estimates of WSS measured in the field with an Abney level and estimated for the same reaches with a GIS used in conjunction with 1:24,000-scale topographic maps were significantly correlated (r = 0.86), but there was a tendency for the GIS to overestimate WSS. Detailed accounts of the methods used to measure WSS and recommendations regarding the measurement of WSS are provided.
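
    The precision figures quoted are coefficients of variation (CV = standard deviation / mean). A sketch with invented replicate slope readings for the most and least precise tools:

```python
import statistics

# Coefficient of variation as used for the precision figures above.
def cv_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# invented replicate water-surface-slope readings (m/m) for one reach
surveying_level = [0.0201, 0.0200, 0.0200, 0.0199, 0.0200]
clinometer = [0.016, 0.024, 0.019, 0.026, 0.015]

cv_level = cv_percent(surveying_level)   # well under 1 %
cv_clino = cv_percent(clinometer)        # tens of percent
```

    Both tool types are unbiased in this toy (same mean), which is the accuracy/precision distinction the abstract trades on: a clinometer can be imprecise without being consistently wrong, and vice versa.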

  17. High-accuracy and robust localization of large control markers for geometric camera calibration.

    PubMed

    Douxchamps, Damien; Chihara, Kunihiro

    2009-02-01

    Accurate measurement of the position of features in an image is subject to a fundamental compromise: The features must be both small, to limit the effect of nonlinear distortions, and large, to limit the effect of noise and discretization. This constrains both the accuracy and the robustness of image measurements, which play an important role in geometric camera calibration as well as in all subsequent measurements based on that calibration. In this paper, we present a new geometric camera calibration technique that exploits the complete camera model during the localization of control markers, thereby abolishing the marker size compromise. Large markers allow a dense pattern to be used instead of a simple disc, resulting in a significant increase in accuracy and robustness. When highly planar markers are used, geometric camera calibration based on synthetic images leads to true errors of 0.002 pixels, even in the presence of artifacts such as noise, illumination gradients, compression, blurring, and limited dynamic range. The camera parameters are also accurately recovered, even for complex camera models.

  18. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    PubMed

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53:125-138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of 1H-15N 2D HSQC NMR spectra, acquired using the precise protein-ligand chemical shift titration methods we developed, produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ~ 3,000 s(-1) in this work, the accuracy of classical line shape analysis was determined to be better than 5% by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13%, in agreement with the theoretical precision of 12% from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s(-1). The validity of line shape analysis for k_off values approaching intermediate exchange (~100 s(-1)) may be facilitated by more accurate K_D measurements.
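
    For context on the fast-exchange regime discussed above: the observed peak sits at the population-weighted shift, and exchange contributes a broadening R2,ex = pA*pB*(delta omega)^2/kex. The populations and shift difference below are invented; the off-rate is chosen near the paper's ~3,000 s(-1) scale:

```python
import math

# Two-site fast exchange sketch: free (A) <-> bound (B). Values invented.
pA, pB = 0.7, 0.3            # free / bound populations
d_nu = 120.0                 # shift difference between states, Hz
koff = 3000.0                # near the k_off scale reported above, 1/s
kex = koff / pA              # two-site relation: kex = k_off / p_free

obs_shift = pA * 0.0 + pB * d_nu                    # Hz, relative to free peak
r2_ex = pA * pB * (2 * math.pi * d_nu) ** 2 / kex   # exchange broadening, 1/s
extra_linewidth = r2_ex / math.pi                   # added FWHM, Hz
```

    The extra linewidth (here roughly 9 Hz) is what classical line shape analysis fits to extract k_off; as kex grows the broadening shrinks, which is why precision degrades at very fast exchange.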

  19. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of this study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to central nasopharyngeal measurement. Methods In this prospective observational study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-tests demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. The tympanic method (right and left) is recommended for assessing body temperature in intensive care units because of its high accuracy and acceptable precision.

  1. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

    Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, the emergent behavior of the complex, adaptive, nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, and QA/QC is an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although assessing the precision and accuracy of ecorestoration field data is conceptually the same as for laboratory data, the manner in which these data quality attributes are assessed differs. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess the precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  2. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

    We tested the precision and accuracy of the Trimble GeoXT global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media
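
    One way to read the finding that averaging more position fixes did not improve precision: DGPS error is dominated by slowly varying, correlated biases (atmosphere, multipath) that averaging cannot remove, unlike independent per-fix receiver noise. A toy simulation of that error structure (all scales invented):

```python
import math
import random

random.seed(7)

def horizontal_error(n_fixes, bias_sd=1.0, noise_sd=0.3):
    # one correlated bias shared by every fix in the occupation,
    # plus independent per-fix receiver noise that averaging does remove
    bx = random.gauss(0, bias_sd)
    by = random.gauss(0, bias_sd)
    mx = bx + sum(random.gauss(0, noise_sd) for _ in range(n_fixes)) / n_fixes
    my = by + sum(random.gauss(0, noise_sd) for _ in range(n_fixes)) / n_fixes
    return math.hypot(mx, my)

trials = 2000
err_1_fix = sum(horizontal_error(1) for _ in range(trials)) / trials
err_50_fix = sum(horizontal_error(50) for _ in range(trials)) / trials
# both averages sit near the ~1.3 m bias floor: more fixes barely help
```

    With bias_sd set to zero the usual 1/sqrt(n) improvement from averaging reappears, which is the behaviour a fix-averaging workflow implicitly assumes.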

  3. Accuracy, precision, and method detection limits of quantitative PCR for airborne bacteria and fungi.

    PubMed

    Hospodsky, Denina; Yamamoto, Naomichi; Peccia, Jordan

    2010-11-01

    Real-time quantitative PCR (qPCR) for rapid and specific enumeration of microbial agents is finding increased use in aerosol science. The goal of this study was to determine qPCR accuracy, precision, and method detection limits (MDLs) within the context of indoor and ambient aerosol samples. Escherichia coli and Bacillus atrophaeus vegetative bacterial cells and Aspergillus fumigatus fungal spores loaded onto aerosol filters were considered. Efficiencies associated with recovery of DNA from aerosol filters were low, and excluding these efficiencies in quantitative analysis led to underestimating the true aerosol concentration by 10 to 24 times. Precision near detection limits ranged from a 28% to 79% coefficient of variation (COV) for the three test organisms, and the majority of this variation was due to instrument repeatability. Depending on the organism and sampling filter material, precision results suggest that qPCR is useful for determining dissimilarity between two samples only if the true differences are greater than 1.3 to 3.2 times (95% confidence level at n = 7 replicates). For MDLs, qPCR was able to produce a positive response with 99% confidence from the DNA of five B. atrophaeus cells and less than one A. fumigatus spore. Overall MDL values that included sample processing efficiencies ranged from 2,000 to 3,000 B. atrophaeus cells per filter and 10 to 25 A. fumigatus spores per filter. Applying the concepts of accuracy, precision, and MDL to qPCR aerosol measurements demonstrates that sample processing efficiencies must be accounted for in order to accurately estimate bioaerosol exposure, provides guidance on the necessary statistical rigor required to understand significant differences among separate aerosol samples, and prevents undetected (i.e., nonquantifiable) values for true aerosol concentrations that may be significant.
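
    The headline accuracy correction above is just a division by recovery efficiency, but skipping it is what produces the 10- to 24-fold underestimates quoted. A minimal sketch; the efficiency used is an illustrative point inside that range:

```python
# Recovery correction sketch: divide the raw qPCR estimate by the
# DNA-recovery efficiency of the filter extraction. The 10- to 24-fold
# figure is from the abstract; 1/15 is an illustrative value within it.
raw_copies = 1.0e4            # genome copies detected per filter
recovery_efficiency = 1 / 15  # fraction of target DNA surviving processing
corrected_copies = raw_copies / recovery_efficiency
```

    Without the division, the exposure estimate here would be low by a factor of 15, which is exactly the kind of systematic underestimate the abstract warns about.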

  4. A robust and high precision optimal explicit guidance scheme for solid motor propelled launch vehicles with thrust and drag uncertainty

    NASA Astrophysics Data System (ADS)

    Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.

    2016-10-01

    A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. This was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal accuracy window for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. In this paper, the guidance law has been successfully validated through nonlinear six-degree-of-freedom simulation studies, with an inner-loop autopilot also designed, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to be robust for perturbed cases as well.

  5. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

    To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary, however test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and, 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strip easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
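The accuracy and precision evaluation in step 1 of the abstract reduces to a percent error against the reference concentration and a percent relative standard deviation over quintuplicate readings. The sketch below mirrors that general approach; the readings are invented, not the study's data:

```python
from statistics import mean, stdev

def accuracy_precision(readings, reference):
    """Percent error (accuracy) and percent relative standard deviation
    (precision) for replicate chlorine measurements against a reference."""
    m = mean(readings)
    error_pct = 100.0 * abs(m - reference) / reference
    rsd_pct = 100.0 * stdev(readings) / m
    return error_pct, rsd_pct

# hypothetical quintuplicate readings of a 0.5% chlorine solution
err, rsd = accuracy_precision([0.52, 0.49, 0.51, 0.50, 0.53], 0.50)
```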

  8. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

    In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study, the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plums, and watermelons from the Valencian Community (Spain) has been investigated. Nine additional pesticides were chosen because they have been recommended for orchard treatment together with imidacloprid. Mulliken population analysis has been applied to present the charge distribution in imidacloprid. Partitioned energy terms and virial ratios have been calculated for certain interacting molecules. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work. The interaction ability could be established correctly in the studied case. An attempt is also made in this work to address accuracy and precision. These quantities are well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and for the interacting complex, some properties of the latter system can be predicted to good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.

  9. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry- based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.
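The linear relation between biomass carbon oxidation state (Cox) and oxidative ratio (OR) stated in the abstract follows from combustion stoichiometry. The sketch below uses the standard stoichiometric definitions, assuming the N-free form of OR and nitrogen at the -3 oxidation state in Cox:

```python
def cox(c, h, o, n=0.0):
    """Carbon oxidation state from elemental stoichiometry (moles per
    formula unit CcHhOoNn), assuming N at the -3 oxidation state."""
    return (2.0 * o - h + 3.0 * n) / c

def oxidative_ratio(c, h, o):
    """Moles of O2 exchanged per mole of CO2 on full oxidation of CcHhOo
    (N-free form). Linearly related to Cox: OR = 1 - Cox/4."""
    return 1.0 + h / (4.0 * c) - o / (2.0 * c)

# Endmembers quoted in the abstract: CH4 has Cox = -4, CO2 has Cox = +4.
# Glucose (C6H12O6) sits at Cox = 0, OR = 1.
```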

  10. Precision and accuracy of spectrophotometric pH measurements at environmental conditions in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Hammer, Karoline; Schneider, Bernd; Kuliński, Karol; Schulz-Bull, Detlef E.

    2014-06-01

    The increasing uptake of anthropogenic CO2 by the oceans has raised interest in precise and accurate pH measurements in order to assess the impact on the marine CO2 system. Spectrophotometric pH measurements were refined during the last decade, yielding a precision and accuracy that cannot be achieved with the conventional potentiometric method. Until now, however, the method had only been tested in oceanic systems with a relatively stable, high salinity and a small pH range. This paper describes the first application of such a pH measurement system under the conditions of the Baltic Sea, which is characterized by wide salinity and pH ranges. The performance of the spectrophotometric system at pH values as low as 7.0 (“total” scale) and salinities between 0 and 35 was examined using TRIS-buffer solutions, certified reference materials, and tests of consistency with measurements of other parameters of the marine CO2 system. Using m-cresol purple as the indicator dye and a spectrophotometric measurement system designed at Scripps Institution of Oceanography (B. Carter, A. Dickson), a precision better than ±0.001 and an accuracy between ±0.01 and ±0.02 were achieved within the observed pH and salinity ranges of the Baltic Sea. The influence of the indicator dye on the pH of the sample was determined theoretically and is presented as a pH correction term for the different alkalinity regimes of the Baltic Sea. Because of these encouraging tests, the ease of operation, and the fact that the measurements refer to the internationally accepted “total” pH scale, the spectrophotometric method is also recommended for pH monitoring and trend detection in the Baltic Sea.
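Spectrophotometric pH with m-cresol purple follows the generic two-wavelength indicator equation. The sketch below uses commonly cited literature values for the molar-absorptivity ratios as an assumption; in practice pK2 depends on temperature and salinity, which is exactly why the wide Baltic ranges required the validation described above:

```python
from math import log10

def ph_total(R, pK2, e1=0.00691, e2=2.2220, e3=0.1331):
    """pH on the total scale from the m-cresol purple absorbance ratio
    R = A578/A434. e1, e2, e3 are molar-absorptivity ratios (commonly
    cited approximate values); pK2 must come from a T,S parameterization."""
    return pK2 + log10((R - e1) / (e2 - R * e3))

# A larger absorbance ratio R means a more basic sample.
val = ph_total(1.0, 8.0)
```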

  11. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

    Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented to provide a revised methodology with the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. Better, faster, cheaper characterization of multiphase systems results. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal, and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design.
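Extracting void fraction from a measured pressure drop, as the abstract describes, can be sketched with the classical Ergun equation (the paper's "novel form" is not given, so this uses the textbook form and invented operating values). Since pressure drop decreases monotonically with void fraction, a bisection inverts it cleanly:

```python
def ergun_dp_per_l(U, eps, dp, mu=1.8e-5, rho=1.2):
    """Classical Ergun pressure drop per unit bed length [Pa/m]:
    viscous (Blake-Kozeny) plus inertial (Burke-Plummer) terms.
    U: superficial velocity [m/s], eps: void fraction, dp: particle
    diameter [m]; default fluid properties approximate ambient air."""
    visc = 150.0 * mu * U * (1 - eps) ** 2 / (eps ** 3 * dp ** 2)
    inert = 1.75 * rho * U ** 2 * (1 - eps) / (eps ** 3 * dp)
    return visc + inert

def void_fraction(dp_per_l, U, d_p, lo=0.3, hi=0.7):
    """Invert the Ergun equation for void fraction by bisection
    (pressure drop is monotone decreasing in eps on this interval)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if ergun_dp_per_l(U, mid, d_p) > dp_per_l:
            lo = mid    # predicted drop too high -> bed is more open
        else:
            hi = mid
    return 0.5 * (lo + hi)
```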

  12. Hepatic perfusion in a tumor model using DCE-CT: an accuracy and precision study

    NASA Astrophysics Data System (ADS)

    Stewart, Errol E.; Chen, Xiaogang; Hadway, Jennifer; Lee, Ting-Yim

    2008-08-01

    In the current study we investigate the accuracy and precision of hepatic perfusion measurements based on the Johnson and Wilson model with the adiabatic approximation. VX2 carcinoma cells were implanted into the livers of New Zealand white rabbits. Simultaneous dynamic contrast-enhanced computed tomography (DCE-CT) and radiolabeled microsphere studies were performed under steady-state normo-, hyper- and hypo-capnia. The hepatic arterial blood flows (HABF) obtained using both techniques were compared with ANOVA. The precision was assessed by the coefficient of variation (CV). Under normo-capnia the microsphere HABF were 51.9 ± 4.2, 40.7 ± 4.9 and 99.7 ± 6.0 ml min⁻¹ (100 g)⁻¹ while DCE-CT HABF were 50.0 ± 5.7, 37.1 ± 4.5 and 99.8 ± 6.8 ml min⁻¹ (100 g)⁻¹ in normal tissue, tumor core and rim, respectively. There were no significant differences between HABF measurements obtained with both techniques (P > 0.05). Furthermore, a strong correlation was observed between HABF values from both techniques: slope of 0.92 ± 0.05, intercept of 4.62 ± 2.69 ml min⁻¹ (100 g)⁻¹ and R² = 0.81 ± 0.05 (P < 0.05). The Bland-Altman plot comparing DCE-CT and microsphere HABF measurements gives a mean difference of -0.13 ml min⁻¹ (100 g)⁻¹, which is not significantly different from zero. DCE-CT HABF is precise, with CV of 5.7, 24.9 and 1.4% in the normal tissue, tumor core and rim, respectively. Non-invasive measurement of HABF with DCE-CT is accurate and precise. DCE-CT can be an important extension of CT to assess hepatic function besides morphology in liver diseases.
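The Bland-Altman agreement analysis used above reduces to the mean of the paired differences (bias) and its 95% limits of agreement. A generic sketch follows; the paired HABF values in the test are invented, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods applied to
    the same subjects: returns the bias (mean paired difference) and the
    95% limits of agreement (bias +/- 1.96 * SD of differences)."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)
    s = stdev(diffs)
    return bias, (bias - 1.96 * s, bias + 1.96 * s)
```

A bias indistinguishable from zero, as reported for DCE-CT versus microspheres, indicates no systematic offset between the methods.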

  13. Accuracy and precision of integumental linear dimensions in a three-dimensional facial imaging system

    PubMed Central

    Kim, Soo-Hwan; Jung, Woo-Young; Seo, Yu-Jin; Kim, Kyung-A; Park, Ki-Ho

    2015-01-01

    Objective A recently developed facial scanning method uses three-dimensional (3D) surface imaging with a light-emitting diode. Such scanning enables surface data to be captured in high-resolution color and at relatively fast speeds. The purpose of this study was to evaluate the accuracy and precision of 3D images obtained using the Morpheus 3D® scanner (Morpheus Co., Seoul, Korea). Methods The sample comprised 30 subjects aged 24-34 years (mean 29.0 ± 2.5 years). To test the correlation between direct and 3D image measurements, 21 landmarks were labeled on the face of each subject. Sixteen direct measurements were obtained twice using digital calipers; the same measurements were then made on two sets of 3D facial images. The mean values of measurements obtained from both methods were compared. To investigate the precision, a comparison was made between two sets of measurements taken with each method. Results When comparing the variables from both methods, five of the 16 possible anthropometric variables were found to be significantly different. However, in 12 of the 16 cases, the mean difference was under 1 mm. The average value of the differences for all variables was 0.75 mm. Precision was high in both methods, with error magnitudes under 0.5 mm. Conclusions 3D scanning images have high levels of precision and fairly good congruence with traditional anthropometry methods, with mean differences of less than 1 mm. 3D surface imaging using the Morpheus 3D® scanner is therefore a clinically acceptable method of recording facial integumental data. PMID:26023538

  14. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

    It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that - if remaining unaddressed - limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 ion Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to be able to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup exhibits a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes the more significant the larger the relative contribution of secondary electrons to the m/z 47 signal is and the higher the flux rate of CO2 into the ion source is set. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards.
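In its simplest reading, the proposed correction subtracts a secondary-electron contribution from each raw m/z 47 reading, scaled from the simultaneously registered m/z 49 beam. The sketch below is a deliberately simplified rendering; the scaling factor k is hypothetical and would be determined empirically on the instrument:

```python
def correct_m47(raw47, raw49, k):
    """Subtract a secondary-electron background from raw m/z 47 beam
    intensities, scaled (by an empirical, here hypothetical, factor k)
    from the corresponding m/z 49 intensities. Simplified sketch of the
    correction idea described in the abstract, not the authors' code."""
    return [i47 - k * i49 for i47, i49 in zip(raw47, raw49)]

# Applied identically to sample and reference gas, the correction removes
# the sensitivity to small m/z 44 pressure imbalances between the two.
```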

  16. Automated optogenetic feedback control for precise and robust regulation of gene expression and cell growth.

    PubMed

    Milias-Argeitis, Andreas; Rullan, Marc; Aoki, Stephanie K; Buchmann, Peter; Khammash, Mustafa

    2016-01-01

    Dynamic control of gene expression can have far-reaching implications for biotechnological applications and biological discovery. Thanks to the advantages of light, optogenetics has emerged as an ideal technology for this task. Current state-of-the-art methods for optical expression control fail to combine precision with repeatability and cannot withstand changing operating culture conditions. Here, we present a novel fully automatic experimental platform for the robust and precise long-term optogenetic regulation of protein production in liquid Escherichia coli cultures. Using a computer-controlled light-responsive two-component system, we accurately track prescribed dynamic green fluorescent protein expression profiles through the application of feedback control, and show that the system adapts to global perturbations such as nutrient and temperature changes. We demonstrate the efficacy and potential utility of our approach by placing a key metabolic enzyme under optogenetic control, thus enabling dynamic regulation of the culture growth rate with potential applications in bacterial physiology studies and biotechnology. PMID:27562138
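The closed-loop principle behind the platform can be shown with a toy on/off analogue: first-order production/dilution dynamics with the light input chosen each step by comparing the measured output to the setpoint. All parameters are invented; the real platform uses model-based feedback control, not this bang-bang rule:

```python
def simulate_light_feedback(setpoint, steps=2000, dt=0.01, k=2.0, d=1.0):
    """Toy closed-loop light-driven expression control:
    dx/dt = k*u - d*x, with the light u switched on iff the measured
    output x is below the setpoint. Parameters are hypothetical."""
    x = 0.0
    for _ in range(steps):
        u = 1.0 if x < setpoint else 0.0   # light on only below target
        x += dt * (k * u - d * x)
    return x
```

Because the control law reacts to the measured output rather than a fixed light schedule, the loop also compensates (within actuator limits) for drifts in k or d, mirroring the robustness to nutrient and temperature perturbations described above.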

  18. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

    The characterization of the ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation, and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that the RMSs (root mean squares) are larger, while the STDs (standard deviations) are better than 0.11 m. When satellite differencing is used, the hardware delay bias is canceled. The interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.

  19. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

    The calibration and traceability of high-resolution frequency modulated continuous wave (FMCW) ladar sources is a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of molecular frequency references HCN (C-band) or CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST and measurement errors provide an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to make comparisons of the relative precision and accuracy. The comparisons performed show <10  ppm agreement, which was within the combined estimated uncertainties of the FMCW system and interferometer. PMID:26193146
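The role of chirp-rate calibration can be illustrated with the basic FMCW beat-frequency relation R = c·f_b/(2κ): a fractional error in the chirp rate κ maps one-to-one into a fractional range error, which is why the molecular (HCN/CO) references set the few-ppm uncertainty floor. This is the standard FMCW identity, not code from the paper:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(f_beat_hz, chirp_rate_hz_per_s):
    """Target range from an FMCW beat frequency: R = c * f_b / (2 * kappa).
    kappa (the optical chirp rate) is the quantity calibrated against the
    molecular frequency references; its relative uncertainty propagates
    directly into the relative range uncertainty."""
    return C * f_beat_hz / (2.0 * chirp_rate_hz_per_s)
```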

  20. Accuracy improvement of protrusion angle of carbon nanotube tips by precision multiaxis nanomanipulator

    SciTech Connect

    Young Song, Won; Young Jung, Ki; O, Beom-Hoan; Park, Byong Chon

    2005-02-01

    In order to manufacture a carbon nanotube (CNT) tip in which the attachment angle and position of the CNT are precisely adjusted, a nanomanipulator was installed inside a scanning electron microscope (SEM). A CNT tip, an atomic force microscopy (AFM) probe to which a nanotube is attached, is known to be the most appropriate probe for measuring high-aspect-ratio shapes. The developed nanomanipulator has two sets of modules, each with three translational degrees of freedom and one rotational degree of freedom at an accuracy of tens of nanometers, enabling the manufacture of more accurate CNT tips. The present study produced a CNT tip with an attachment-angle error of less than 10° through three-dimensional manipulation of a multiwalled carbon nanotube and an AFM probe inside the SEM.

  1. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e., not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. The minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.
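Bracketing with an isotopic standard, as described above, typically means dividing the measured sample ratio by the instrumental bias inferred from standards run before and after it. The sketch below shows that generic mass-spectrometry practice, not necessarily the authors' exact algorithm:

```python
def bracket_correct(r_sample, r_std_before, r_std_after, r_std_certified):
    """Sample-standard bracketing: estimate the instrumental bias as the
    mean of the bracketing standard measurements relative to the certified
    value, then divide it out of the measured sample ratio."""
    bias = 0.5 * (r_std_before + r_std_after) / r_std_certified
    return r_sample / bias
```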

  3. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

    We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or with severely broken and missing teeth.

  4. Accuracy, Precision, and Reliability of Chemical Measurements in Natural Products Research

    PubMed Central

    Betz, Joseph M.; Brown, Paula N.; Roman, Mark C.

    2010-01-01

    Natural products chemistry is the discipline that lies at the heart of modern pharmacognosy. The field encompasses qualitative and quantitative analytical tools that range from spectroscopy and spectrometry to chromatography. Among other things, modern research on crude botanicals is engaged in the discovery of the phytochemical constituents necessary for therapeutic efficacy, including the synergistic effects of components of complex mixtures in the botanical matrix. In the phytomedicine field, these botanicals and their contained mixtures are considered the active pharmaceutical ingredient (API), and pharmacognosists are increasingly called upon to supplement their molecular discovery work by assisting in the development and utilization of analytical tools for assessing the quality and safety of these products. Unlike single-chemical entity APIs, botanical raw materials and their derived products are highly variable because their chemistry and morphology depend on the genotypic and phenotypic variation, geographical origin and weather exposure, harvesting practices, and processing conditions of the source material. Unless controlled, this inherent variability in the raw material stream can result in inconsistent finished products that are under-potent, over-potent, and/or contaminated. Over the decades, natural products chemists have routinely developed quantitative analytical methods for phytochemicals of interest. Quantitative methods for the determination of product quality bear the weight of regulatory scrutiny. These methods must be accurate, precise, and reproducible. Accordingly, this review discusses the principles of accuracy (relationship between experimental and true value), precision (distribution of data values), and reliability in the quantitation of phytochemicals in natural products. PMID:20884340

  5. Transfer accuracy and precision scoring in planar bone cutting validated with ex vivo data.

    PubMed

    Milano, Federico Edgardo; Ritacco, Lucas Eduardo; Farfalli, Germán Luis; Bahamonde, Luis Alberto; Aponte-Tinao, Luis Alberto; Risk, Marcelo

    2015-05-01

The use of interactive surgical scenarios for virtual preoperative planning of osteotomies has increased in the last 5 years. As reported by several authors, this technology has been used in tumor resection osteotomies, knee osteotomies, and spine surgery with good results. A digital three-dimensional preoperative plan makes it possible to quantitatively evaluate the transfer process from the virtual plan to the anatomy of the patient. We introduce an exact definition of the accuracy and precision of this transfer process for planar bone cutting. We present a method to compute these properties from ex vivo data. We also propose a clinical score to assess the goodness of a cut. A computer simulation is used to characterize the definitions and the data generated by the measurement method. The definitions and method are evaluated in 17 ex vivo planar cuts of tumor resection osteotomies. The results show that the proposed method and definitions are highly correlated with a previous definition of accuracy based on ISO 1101. The score is also evaluated by showing that it distinguishes among different transfer techniques based on its distribution location and shape. The introduced definitions produce acceptable results in cases where the ISO-based definition produces counterintuitive results.

  6. Balancing accuracy, robustness, and efficiency in simulations of coupled magma/mantle dynamics

    NASA Astrophysics Data System (ADS)

    Katz, R. F.

    2011-12-01

    Magmatism plays a central role in many Earth-science problems, and is particularly important for the chemical evolution of the mantle. The standard theory for coupled magma/mantle dynamics is fundamentally multi-physical, comprising mass and force balance for two phases, plus conservation of energy and composition in a two-component (minimum) thermochemical system. The tight coupling of these various aspects of the physics makes obtaining numerical solutions a significant challenge. Previous authors have advanced by making drastic simplifications, but these have limited applicability. Here I discuss progress, enabled by advanced numerical software libraries, in obtaining numerical solutions to the full system of governing equations. The goals in developing the code are as usual: accuracy of solutions, robustness of the simulation to non-linearities, and efficiency of code execution. I use the cutting-edge example of magma genesis and migration in a heterogeneous mantle to elucidate these issues. I describe the approximations employed and their consequences, as a means to frame the question of where and how to make improvements. I conclude that the capabilities needed to advance multi-physics simulation are, in part, distinct from those of problems with weaker coupling, or fewer coupled equations. Chief among these distinct requirements is the need to dynamically adjust the solution algorithm to maintain robustness in the face of coupled nonlinearities that would otherwise inhibit convergence. This may mean introducing Picard iteration rather than full coupling, switching between semi-implicit and explicit time-stepping, or adaptively increasing the strength of preconditioners. All of these can be accomplished by the user with, for example, PETSc. Formalising this adaptivity should be a goal for future development of software packages that seek to enable multi-physics simulation.
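The Picard strategy mentioned above can be illustrated on a toy problem. This is a generic fixed-point iteration for a coupled pair of nonlinear equations, not the solver or PETSc configuration used in the work: each nonlinearity is lagged to the previous iterate, trading the fast convergence of a fully coupled Newton solve for robustness.

```python
def picard_solve(f, g, u0=0.0, v0=0.0, tol=1e-10, max_iter=100):
    """Fixed-point (Picard) iteration for the coupled system u = f(v), v = g(u).

    Each sub-equation is solved with the other variable lagged, so no
    coupled Jacobian is ever needed.
    """
    u, v = u0, v0
    for k in range(max_iter):
        u_new = f(v)       # update u with v frozen
        v_new = g(u_new)   # update v with the fresh u
        if abs(u_new - u) + abs(v_new - v) < tol:
            return u_new, v_new, k + 1
        u, v = u_new, v_new
    raise RuntimeError("Picard iteration did not converge")

# Weakly coupled toy system: u = 1 + 0.1 v^2, v = 1 + 0.1 u^2
u, v, iters = picard_solve(lambda v: 1 + 0.1 * v * v,
                           lambda u: 1 + 0.1 * u * u)
```

For strong coupling the lagged updates can stall or diverge, which is exactly the situation where the adaptive switching of algorithms described above becomes necessary.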

  7. Accuracy and precision of gait events derived from motion capture in horses during walk and trot.

    PubMed

    Boye, Jenny Katrine; Thomsen, Maj Halling; Pfau, Thilo; Olsen, Emil

    2014-03-21

This study aimed to create an evidence base for the detection of stance-phase timings from motion capture in horses. The objective was to compare the accuracy (bias) and precision (SD) of five published algorithms for the detection of hoof-on and hoof-off, using force plates as the reference standard. Six horses were walked and trotted over eight force plates surrounded by a synchronised 12-camera infrared motion capture system. The five algorithms (A-E) were based on: (A) horizontal velocity of the hoof; (B) fetlock angle and horizontal hoof velocity; (C) horizontal displacement of the hoof relative to the centre of mass; (D) horizontal velocity of the hoof relative to the centre of mass; and (E) vertical acceleration of the hoof. A total of 240 stance phases in walk and 240 stance phases in trot were included in the assessment. Method D provided the most accurate and precise results in walk for stance phase duration, with a bias of 4.1% for front limbs and 4.8% for hind limbs. For trot, we derived a combination of method A for hoof-on and method E for hoof-off, resulting in a bias of -6.2% of stance in the front limbs, and method B for the hind limbs, with a bias of 3.8% of stance phase duration. We conclude that motion capture yields accurate and precise detection of gait events for horses walking and trotting over ground, and the results emphasise a need for different algorithms for front limbs versus hind limbs in trot.
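A rough illustration of velocity-threshold event detection, a simplified stand-in for algorithm A above: stance is taken as the interval where hoof speed stays below a stationarity threshold. The signal, threshold, and sampling rate are invented for the example.

```python
def detect_stance(hoof_speed, threshold, fs):
    """Return (hoof_on, hoof_off) times in seconds, taken as the first and
    last samples where hoof speed is below a stationarity threshold."""
    below = [i for i, s in enumerate(hoof_speed) if s < threshold]
    if not below:
        raise ValueError("no stance phase found")
    return below[0] / fs, below[-1] / fs

# Synthetic speed trace sampled at 100 Hz: swing (fast), stance (slow), swing
speed = [2.0] * 20 + [0.05] * 50 + [2.0] * 30
hoof_on, hoof_off = detect_stance(speed, threshold=0.1, fs=100.0)
```

Comparing such detected times against force-plate timings, event by event, yields the bias (accuracy) and SD (precision) reported in the study.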

  8. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Due to the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements, and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is a classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option to determine the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select areas in the scan which appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. The overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
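The core idea of fitting a curved (exponential) background from many off-peak points while statistically excluding contaminated ones might be sketched as follows. This is a generic sigma-clipped fit in log space, not the MPB implementation, and all positions and count values are illustrative.

```python
import math

def fit_background(positions, counts, clip_sigma=2.0):
    """Fit an exponential background I(x) = a * exp(b * x) to off-peak
    measurements by least squares in log space, iteratively excluding
    points (e.g. unrecognized interferences) that deviate by more than
    clip_sigma standard deviations from the current fit."""
    pts = list(zip(positions, [math.log(c) for c in counts]))
    while True:
        n = len(pts)
        sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
        sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
        b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # log-space slope
        a = (sy - b * sx) / n                           # log-space intercept
        resid = [y - (a + b * x) for x, y in pts]
        sigma = (sum(r * r for r in resid) / n) ** 0.5
        keep = [p for p, r in zip(pts, resid)
                if sigma == 0 or abs(r) <= clip_sigma * sigma]
        if len(keep) == len(pts):
            return math.exp(a), b   # background model parameters
        pts = keep

# Six off-peak positions; the point at x = 2 sits on an unrecognized
# interference (~3x the true background) and should be rejected
a, b = fit_background([0, 1, 2, 3, 4, 5],
                      [100.0, 60.653, 110.36, 22.313, 13.534, 8.2085])
```

The clipped fit recovers the underlying a = 100, b = -0.5 background despite the contaminated point, mirroring how MPB selects "true" backgrounds statistically.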

  9. Systematic accuracy and precision analysis of video motion capturing systems--exemplified on the Vicon-460 system.

    PubMed

    Windolf, Markus; Götzen, Nils; Morlock, Michael

    2008-08-28

With rising demand for highly accurate acquisition of small motions, the use of video-based motion capturing becomes more and more popular. However, the performance of these systems strongly depends on a variety of influencing factors. A method was developed in order to systematically assess the accuracy and precision of motion capturing systems with regard to influential system parameters. A calibration and measurement robot was designed to perform a repeatable dynamic calibration and to determine the resultant system accuracy and precision in a control volume investigating small motion magnitudes (180 x 180 x 150 mm3). The procedure was exemplified on the Vicon-460 system. The following parameters were analyzed: camera setup, calibration volume, marker size and lens filter application. Equipped with four cameras, the Vicon-460 system provided an overall accuracy of 63+/-5 microm and overall precision (noise level) of 15 microm for the most favorable parameter setting. Arbitrary changes in camera arrangement revealed variations in mean accuracy between 76 and 129 microm. The noise level normal to the cameras' projection plane was found to be higher compared to the other coordinate directions. Measurements including regions unaffected by the dynamic calibration reflected considerably lower accuracy (221+/-79 microm). Larger marker diameters led to higher accuracy and precision. Accuracy dropped significantly when using an optical lens filter. This study revealed significant influence of the system environment on the performance of video-based motion capturing systems. With careful configuration, optical motion capturing provides a powerful measuring opportunity for the majority of biomechanical applications.

  10. Improving accuracy and precision in biological applications of fluorescence lifetime imaging microscopy

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Wei

    The quantitative understanding of cellular and molecular responses in living cells is important for many reasons, including identifying potential molecular targets for treatments of diseases like cancer. Fluorescence lifetime imaging microscopy (FLIM) can quantitatively measure these responses in living cells by producing spatially resolved images of fluorophore lifetime, and has advantages over intensity-based measurements. However, in live-cell microscopy applications using high-intensity light sources such as lasers, maintaining biological viability remains critical. Although high-speed, time-gated FLIM significantly reduces light delivered to live cells, making measurements at low light levels remains a challenge affecting quantitative FLIM results. We can significantly improve both accuracy and precision in gated FLIM applications. We use fluorescence resonance energy transfer (FRET) with fluorescent proteins to detect molecular interactions in living cells: the use of FLIM, better fluorophores, and temperature/CO2 controls can improve live-cell FRET results with higher consistency, better statistics, and less non-specific FRET (for negative control comparisons, p-value = 0.93 (physiological) vs. 9.43E-05 (non-physiological)). Several lifetime determination methods are investigated to optimize gating schemes. We demonstrate a reduction in relative standard deviation (RSD) from 52.57% to 18.93% with optimized gating in an example under typical experimental conditions. We develop two novel total variation (TV) image denoising algorithms, FWTV ( f-weighted TV) and UWTV (u-weighted TV), that can achieve significant improvements for real imaging systems. With live-cell images, they improve the precision of local lifetime determination without significantly altering the global mean lifetime values (<5% lifetime changes). Finally, by combining optimal gating and TV denoising, even low-light excitation can achieve precision better than that obtained in high
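For readers unfamiliar with total variation denoising, here is a generic 1D sketch: plain gradient descent on a smoothed ROF objective, not the FWTV/UWTV algorithms developed in the thesis. The signal values and weights are invented; the point is only that TV regularization suppresses noise while preserving sharp steps.

```python
def tv_denoise_1d(signal, lam=0.05, step=0.1, iters=1000, eps=1e-6):
    """Gradient descent on the smoothed 1D ROF objective
    0.5 * sum_i (u[i] - f[i])^2 + lam * sum_i sqrt((u[i+1] - u[i])^2 + eps).
    """
    u = list(signal)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - signal[i] for i in range(n)]   # data-fidelity term
        for i in range(n - 1):                        # TV (edge-preserving) term
            d = u[i + 1] - u[i]
            g = d / (d * d + eps) ** 0.5
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u

# Noisy two-level (step) signal: TV denoising flattens each level but
# keeps the jump between them
noisy = [0.1, -0.1, 0.05, 0.0, 1.1, 0.9, 1.05, 1.0]
smooth = tv_denoise_1d(noisy)
```

In the FLIM setting, suppressing per-pixel noise in this edge-preserving way improves the precision of local lifetime estimates without blurring structural boundaries.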

  11. On accuracy, robustness, and security of bag-of-word search systems

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Svyatoslav; Diephuis, Maurits; Kostadinov, Dimche; Farhadzadeh, Farzad; Holotyak, Taras

    2014-02-01

In this paper, we present a statistical framework for the analysis of the performance of Bag-of-Words (BOW) systems. The paper aims at establishing a better understanding of the impact of different elements of BOW systems such as the robustness of descriptors, accuracy of assignment, descriptor compression and pooling, and finally decision making. We also study the impact of geometrical information on the BOW system performance and compare the results with different pooling strategies. The proposed framework can also be of interest for a security and privacy analysis of BOW systems. The experimental results on real images and descriptors confirm our theoretical findings. Notation: We use capital letters to denote scalar random variables X and bold capital letters to denote vector random variables X, with corresponding small letters x and x denoting the realisations of scalar and vector random variables, respectively. We use X ~ pX(x) or simply X ~ p(x) to indicate that a random variable X is distributed according to pX(x). N(μ, σ²X) stands for the Gaussian distribution with mean μ and variance σ²X. B(L, Pb) denotes the binomial distribution with sequence length L and probability of success Pb. ||.|| denotes the Euclidean vector norm and Q(.) stands for the Q-function. D(.||.) denotes the divergence and E{.} denotes the expectation.

  12. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, demonstrating that the method is not broadly accepted within the medical community. This is because the use of contrast agents is typically reserved for subjects whose initial evaluation produced very low quality recordings. Thus, high background noise and low contrast levels characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low image quality clinical scans. To explore the potential of the short-time MWE correlation, evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.

  13. Parallaxes and Proper Motions of QSOs: A Test of Astrometric Precision and Accuracy

    NASA Astrophysics Data System (ADS)

    Harris, Hugh C.; Dahn, Conard C.; Zacharias, Norbert; Canzian, Blaise; Guetter, Harry H.; Levine, Stephen E.; Luginbuhl, Christian B.; Monet, Alice K. B.; Monet, David G.; Pier, Jeffrey R.; Stone, Ronald C.; Subasavage, John P.; Tilleman, Trudy; Walker, Richard L.; Johnston, Kenneth J.

    2016-11-01

Optical astrometry of 12 fields containing quasi-stellar objects (QSOs) is presented. The targets are radio sources in the International Celestial Reference Frame with accurate radio positions that also have optical counterparts. The data are used to test several quantities: the internal precision of the relative optical astrometry, the relative parallaxes and proper motions, the procedures to correct from relative to absolute parallax and proper motion, the accuracy of the absolute parallaxes and proper motions, and the stability of the optical photocenters for these optically variable QSOs. For these 12 fields, the mean error in absolute parallax is 0.38 mas and the mean error in each coordinate of absolute proper motion is 1.1 mas yr^-1. The results yield a mean absolute parallax of -0.03 ± 0.11 mas. For 11 targets, we find no significant systematic motions of the photocenters at the level of 1–2 mas over the 10 years of this study; for one BL Lac object, we find a possible motion of 4 mas correlated with its brightness.

  14. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French space agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits will be discussed. The latest orbits, with 1.5 cm RMS radial accuracy, represent a significant improvement over the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  15. Evaluation of Accuracy in Kinematic GPS Analyses Using a Precision Roving Antenna Platform

    NASA Astrophysics Data System (ADS)

    Miura, S.; Sweeney, A.; Fujimoto, H.; Osaki, H.; Kawai, E.; Ichikawa, R.; Kondo, T.; Osada, Y.; Chadwell, C. D.

    2002-12-01

Most tectonic plate boundaries and seismogenic zones of interplate earthquakes exist beneath the ocean, and our knowledge of interplate coupling and of the generation processes of those earthquakes remains limited. Seafloor geodesy will consequently play a very important role in improving our understanding of the physical processes near plate boundaries. Seafloor positioning using a GPS/Acoustic technique is one potential method to detect displacement occurring at the ocean bottom. The accuracy of the technique depends on two parts: acoustic ranging in seawater, and kinematic GPS (KGPS) analysis. The accuracy of KGPS has been evaluated in the following way: 1) Static test: First, we carried out an experiment to confirm the capability of the KGPS analysis using GIPSY/OASIS-II for a long baseline of about 310 km. We used two GPS stations on land, one as a reference station in Sendai, and the other in Tokyo as a rover station, whose coordinates can vary from epoch to epoch. This baseline length is required for our project because the farthest seafloor transponder array is 280 km east of the nearest coastal GPS station. A 1 cm stability of the KGPS solution was achieved in the horizontal components of the 310-km baseline over the course of one day. The vertical component showed fluctuation, probably due to parameters unmodeled in the analysis such as multipath and/or tropospheric delay. 2) Sea surface experiment: During cruise KT01-11 of the R/V Tansei-maru, Ocean Research Institute (ORI), University of Tokyo, around the Japan Trench in late July 2001, we deployed three precision acoustic transponders on both the Pacific plate (280 km from the coast, depth around 5450 m) and the landward slope (110 km from the coast, depth around 1600 m). We used a surface buoy with 3 GPS antennas, a motion sensor, a hydrophone, and a computer for data acquisition and control to make combined GPS/Acoustic observations. The buoy was towed about 80 m away from the R/V to reduce the impact of ship

  16. Towards an understanding of dark matter: Precise gravitational lensing analysis complemented by robust photometric redshifts

    NASA Astrophysics Data System (ADS)

    Coe, Daniel Aaron

The goal of this thesis is to help scientists resolve one of the great mysteries of our time: the nature of Dark Matter. Dark Matter is currently believed to make up over 80% of the material in our universe, yet we have so far inferred but a few of its basic properties. Here we study the Dark Matter surrounding a galaxy cluster, Abell 1689, via the most direct method currently available--gravitational lensing. Abell 1689 is a "strong" gravitational lens, meaning it produces multiple images of more distant galaxies. The observed positions of these images can be measured very precisely and act as a blueprint allowing us to reconstruct the Dark Matter distribution of the lens. Until now, such mass models of Abell 1689 have reproduced the observed multiple images well but with significant positional offsets. Using a new method we develop here, we obtain a new mass model which perfectly reproduces the observed positions of 168 knots identified within 135 multiple images of 42 galaxies. An important ingredient in our mass model is the accurate measurement of distances to the lensed galaxies via their photometric redshifts. Here we develop tools which improve the accuracy of these measurements based on our study of the Hubble Ultra Deep Field, the only image yet taken to comparable depth as the magnified regions of Abell 1689. We present results both for objects in the Hubble Ultra Deep Field and for galaxies gravitationally lensed by Abell 1689. As part of this thesis, we also provide reviews of Dark Matter and Gravitational Lensing, including a chapter devoted to the mass profiles of Dark Matter halos realized in simulations. The original work presented here was performed primarily by myself under the guidance of Narciso Benítez and Holland Ford as a member of the Advanced Camera for Surveys GTO Science Team at Johns Hopkins University and the Instituto de Astrofísica de Andalucía. My advisors served on my thesis committee along with Rick White, Gabor Domokos, and Steve

  17. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

Robustness is a buzzword common to all newly proposed space system designs, as well as many new commercial products. The image that one conjures up when the word appears is a 'Paul Bunyan' (lumberjack) design: strong and hearty, healthy with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design. First, however, he must be sure he understands the definition and goals of robustness. This paper will deal with these issues, as well as the need for a requirement for robustness.

  18. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

The main purpose of this study is to evaluate the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As a first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, O3 chemistry can be studied in relative isolation, without the complications of heterogeneous processes and dynamical variations, owing to the short lifetime of O3 in this region. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), and is mounted on the Japanese Experiment Module (JEM) of the International Space Station (ISS). SMILES successfully measured the vertical distributions and diurnal variations of various atmospheric species in the latitude range 38S to 65N from October 2009 to April 2010.
A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  19. Accuracy and precision of total mixed rations fed on commercial dairy farms.

    PubMed

    Sova, A D; LeBlanc, S J; McBride, B W; DeVries, T J

    2014-01-01

Despite the significant time and effort spent formulating total mixed rations (TMR), it is evident that the ration delivered by the producer and that consumed by the cow may not accurately reflect that originally formulated. The objectives of this study were to (1) determine how the TMR fed agrees with or differs from the TMR formulation (accuracy), (2) determine the daily variability in physical and chemical characteristics of the TMR delivered (precision), and (3) investigate the relationship between daily variability in ration characteristics and group-average measures of productivity [dry matter intake (DMI), milk yield, milk components, efficiency, and feed sorting] on commercial dairy farms. Twenty-two commercial freestall herds were visited for 7 consecutive days in both summer and winter months. Fresh and refusal feed samples were collected daily to assess particle size distribution, dry matter, and chemical composition. Milk test data, including yield, fat, and protein, were collected from a coinciding Dairy Herd Improvement test. Multivariable mixed-effect regression models were used to analyze associations between productivity measures and daily ration variability, measured as the coefficient of variation (CV) over 7 d. The average TMR [crude protein = 16.5%, net energy for lactation (NEL) = 1.7 Mcal/kg, nonfiber carbohydrates = 41.3%, total digestible nutrients = 73.3%, neutral detergent fiber = 31.3%, acid detergent fiber = 20.5%, Ca = 0.92%, P = 0.42%, Mg = 0.35%, K = 1.45%, Na = 0.41%] delivered exceeded the TMR formulation for NEL (+0.05 Mcal/kg), nonfiber carbohydrates (+1.2%), acid detergent fiber (+0.7%), Ca (+0.08%), P (+0.02%), Mg (+0.02%), and K (+0.04%) and underfed crude protein (-0.4%), neutral detergent fiber (-0.6%), and Na (-0.1%). Dietary measures with high day-to-day CV were average feed refusal rate (CV = 74%), percent long particles (CV = 16%), percent medium particles (CV = 7.7%), percent short particles (CV = 6.1%), percent fine particles (CV = 13%), Ca (CV = 7
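The day-to-day CV used above as the precision metric is simply the standard deviation expressed as a percentage of the mean over the 7-day window. A minimal sketch with invented particle-size values:

```python
import statistics

def day_to_day_cv(daily_values):
    """Coefficient of variation (%) across daily ration measurements."""
    mean = statistics.mean(daily_values)
    return 100.0 * statistics.stdev(daily_values) / mean

# Hypothetical percent-long-particles measured on 7 consecutive days
cv = day_to_day_cv([18.0, 21.0, 17.5, 22.0, 19.0, 20.5, 18.5])
```

Because CV is scale-free, it lets ration components with very different magnitudes (refusal rate vs. mineral content) be compared on one precision scale.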

  20. Analysis of the Accuracy and Robustness of the Leap Motion Controller

    PubMed Central

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-01-01

The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of its accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed, making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. Thereby, a deviation between a desired 3D position and the average measured positions below 0.2 mm was obtained for static setups and of 1.2 mm for dynamic setups. The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction. PMID:23673678

  1. Analysis of the accuracy and robustness of the leap motion controller.

    PubMed

    Weichert, Frank; Bachmann, Daniel; Rudak, Bartholomäus; Fisseler, Denis

    2013-05-14

The Leap Motion Controller is a new device for hand gesture controlled user interfaces with declared sub-millimeter accuracy. However, up to this point its capabilities in real environments have not been analyzed. Therefore, this paper presents a first study of a Leap Motion Controller. The main focus of attention is on the evaluation of its accuracy and repeatability. For an appropriate evaluation, a novel experimental setup was developed, making use of an industrial robot with a reference pen allowing a position accuracy of 0.2 mm. Thereby, a deviation between a desired 3D position and the average measured positions below 0.2 mm was obtained for static setups and of 1.2 mm for dynamic setups. The conclusions of this analysis can improve the development of applications for the Leap Motion Controller in the field of Human-Computer Interaction.

  2. Accuracy and precisions of water quality parameters retrieved from particle swarm optimisation in a sub-tropical lake

    NASA Astrophysics Data System (ADS)

    Campbell, Glenn; Phinn, Stuart R.

    2009-09-01

Optical remote sensing has been used to map and monitor water quality parameters such as the concentrations of hydrosols (chlorophyll and other pigments, total suspended material, and coloured dissolved organic matter). In the inversion/optimisation approach, a forward model is used to simulate the water reflectance spectra from a set of parameters, and the set that gives the closest match is selected as the solution. The accuracy of the hydrosol retrieval is dependent on an efficient search of the solution space and the reliability of the similarity measure. In this paper, particle swarm optimisation (PSO) was used to search the solution space and seven similarity measures were trialled. The accuracy and precision of this method depend on the inherent noise in the spectral bands of the sensor being employed, as well as the radiometric corrections applied to images to calculate the subsurface reflectance. Using the Hydrolight® radiative transfer model and typical hydrosol concentrations from Lake Wivenhoe, Australia, MERIS reflectance spectra were simulated. The accuracy and precision of hydrosol concentrations derived from each similarity measure were evaluated after errors associated with the air-water interface correction, atmospheric correction and the IOP measurements were modelled and applied to the simulated reflectance spectra. The use of band-specific, empirically estimated values for the anisotropy value in the forward model improved the accuracy of hydrosol retrieval. The results of this study will be used to improve an algorithm for the remote sensing of water quality in freshwater impoundments.
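The search step of the inversion/optimisation approach can be illustrated with a minimal, generic PSO sketch. This is not the paper's configuration: the toy squared-error loss stands in for the spectral similarity measure, and all parameter values are illustrative defaults.

```python
import random

def pso_minimise(loss, bounds, n_particles=30, iters=300,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser: each particle's velocity blends
    inertia, attraction to its personal best, and attraction to the
    swarm-wide best position."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # keep particles inside the physically plausible bounds
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            val = loss(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "spectral matching": recover two hydrosol concentrations by
# minimising the mismatch to a known target
target = (0.8, 2.5)
mismatch = lambda c: (c[0] - target[0]) ** 2 + (c[1] - target[1]) ** 2
best, best_val = pso_minimise(mismatch, bounds=[(0.0, 10.0), (0.0, 10.0)])
```

In the real retrieval, the loss would run the forward reflectance model and apply one of the seven similarity measures to the simulated and observed spectra.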

  3. Nano-accuracy measurements and the surface profiler by use of Monolithic Hollow Penta-Prism for precision mirror testing

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Wayne, Lewis; Idir, Mourad

    2014-09-01

    We developed a Monolithic Hollow Penta-Prism Long Trace Profiler-NOM (MHPP-LTP-NOM) to attain nano-accuracy in testing plane and near-plane mirrors. A newly developed Monolithic Hollow Penta-Prism (MHPP), combined with the advantages of the PPLTP and the ELCOMAT autocollimator of the Nano-Optic-Measuring Machine (NOM), is used to enhance the accuracy and stability of our measurements. Our precise system-alignment method, using a newly developed CCD position-monitor system (PMS), assured significant thermal stability and, along with our optimized noise-reduction analytic method, ensured nano-accuracy measurements. Herein we report our test results; all errors are about 60 nrad rms or less in tests of plane and near-plane mirrors.

  4. Precise Point Positioning for the Efficient and Robust Analysis of GPS Data from Large Networks

    NASA Technical Reports Server (NTRS)

    Zumberge, J. F.; Heflin, M. B.; Jefferson, D. C.; Watkins, M. M.; Webb, F. H.

    1997-01-01

    Networks of dozens to hundreds of permanently operating precision Global Positioning System (GPS) receivers are emerging at spatial scales that range from 10(exp 0) to 10(exp 3) km. To keep the computational burden associated with the analysis of such data economically feasible, one approach is to first determine precise GPS satellite positions and clock corrections from a globally distributed network of GPS receivers. Then, data from the local network are analyzed by estimating receiver-specific parameters with receiver-specific data; satellite parameters are held fixed at their values determined in the global solution. This "precise point positioning" allows analysis of data from hundreds to thousands of sites every day with 40-Mflop computers, with results comparable in quality to the simultaneous analysis of all data. The reference frames for the global and network solutions can be free of distortion imposed by erroneous fiducial constraints on any sites.

  5. Precise Point Positioning for the Efficient and Robust Analysis of GPS Data From Large Networks

    NASA Technical Reports Server (NTRS)

    Zumberge, J. F.; Heflin, M. B.; Jefferson, D. C.; Watkins, M. M.; Webb, F. H.

    1997-01-01

    Networks of dozens to hundreds of permanently operating precision Global Positioning System (GPS) receivers are emerging at spatial scales that range from 10(exp 0) to 10(exp 3) km. To keep the computational burden associated with the analysis of such data economically feasible, one approach is to first determine precise GPS satellite positions and clock corrections from a globally distributed network of GPS receivers. Then, data from the local network are analyzed by estimating receiver specific parameters with receiver-specific data; satellite parameters are held fixed at their values determined in the global solution. This "precise point positioning" allows analysis of data from hundreds to thousands of sites every day with 40 Mflop computers, with results comparable in quality to the simultaneous analysis of all data. The reference frames for the global and network solutions can be free of distortion imposed by erroneous fiducial constraints on any sites.
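The strategy described above (satellite orbits and clocks held fixed from the global solution, only receiver-specific parameters estimated) reduces, in its simplest form, to a small nonlinear least-squares problem per receiver. The following is a hedged sketch using Gauss-Newton on idealized pseudoranges with invented coordinates; it ignores atmospheric delays, carrier-phase ambiguities, and all real GPS modelling.

```python
import math

def solve(A, b):
    """Solve the square system A x = b by Gaussian elimination with pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda k: abs(M[k][i]))
        M[i], M[p] = M[p], M[i]
        for k in range(i + 1, n):
            f = M[k][i] / M[i][i]
            for j in range(i, n + 1):
                M[k][j] -= f * M[i][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def point_position(sats, pseudoranges, x0=(0.0, 0.0, 0.0, 0.0), iters=10):
    """Estimate receiver (x, y, z, clock bias) by Gauss-Newton least squares,
    holding satellite positions (and clocks) fixed, as in point positioning."""
    x = list(x0)
    for _ in range(iters):
        H, r = [], []
        for (sx, sy, sz), rho in zip(sats, pseudoranges):
            d = math.dist((sx, sy, sz), x[:3])
            r.append(rho - (d + x[3]))  # observed minus predicted pseudorange
            H.append([(x[0] - sx) / d, (x[1] - sy) / d, (x[2] - sz) / d, 1.0])
        # Normal equations: (H^T H) dx = H^T r
        A = [[sum(h[i] * h[j] for h in H) for j in range(4)] for i in range(4)]
        b = [sum(h[i] * ri for h, ri in zip(H, r)) for i in range(4)]
        x = [xi + di for xi, di in zip(x, solve(A, b))]
    return x
```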

  6. A comprehensive meta-reanalysis of the robustness of the experience-accuracy effect in clinical judgment.

    PubMed

    Spengler, Paul M; Pilipis, Lois A

    2015-07-01

    Experience is one of the most commonly studied variables in clinical judgment research. In a meta-analysis of research from 1970 to 1996 of judgments made by 4,607 participants from 74 studies, Spengler, White, Ægisdóttir, Maugherman, Anderson, et al. (2009) found an experience-accuracy fixed effect of d = .121 (95% CI [.06, .18]), indicating that with more experience, counseling and other psychologists obtain only modest gains in decision-making accuracy. We sought to conduct a more rigorous assessment of the experience-accuracy effect by synthesizing 40 years of research from 1970 to 2010, assessing the same and additional moderators, including subgroup analyses of extremes of experience, and conducting a sensitivity analysis. The judgments formed by 11,584 clinicians from 113 studies resulted in a random effects d of .146 (95% CI [.08, .21]), reflecting the robustness of only a small impact of experience on decision-making accuracy. The sensitivity analysis revealed that the effect is consistent across analysis and methodological considerations. Mixed effects metaregression revealed no statistically significant relation between 40 years of time and the experience-accuracy effect. A cumulative meta-analysis indicated that the experience-accuracy effect stabilized in the literature in the year 1999, after the accumulation of 82 studies, with no appreciable change since. We assessed a broader range of experience comparing no experience to some experience and comparing nonexperts with experts, and for differences as a function of decision making based on psychological tests; however, these and most other moderators were not significant. Implications are discussed for clinical decision-making research, training, and practice.

  7. Simulations of thermally transferred OSL signals in quartz: Accuracy and precision of the protocols for equivalent dose evaluation

    NASA Astrophysics Data System (ADS)

    Pagonis, Vasilis; Adamiec, Grzegorz; Athanassas, C.; Chen, Reuven; Baker, Atlee; Larsen, Meredith; Thompson, Zachary

    2011-06-01

    Thermally-transferred optically stimulated luminescence (TT-OSL) signals in sedimentary quartz have been the subject of several recent studies, due to the potential shown by these signals to increase the range of luminescence dating by an order of magnitude. Based on these signals, a single aliquot protocol termed the ReSAR protocol has been developed and tested experimentally. This paper presents extensive numerical simulations of this ReSAR protocol. The purpose of the simulations is to investigate several aspects of the ReSAR protocol which are believed to cause difficulties during application of the protocol. Furthermore, several modified versions of the ReSAR protocol are simulated, and their relative accuracy and precision are compared. The simulations are carried out using a recently published kinetic model for quartz, consisting of 11 energy levels. One hundred random variants of the natural samples were generated by keeping the transition probabilities between energy levels fixed, while allowing simultaneous random variations of the concentrations of the 11 energy levels. The relative intrinsic accuracy and precision of the protocols are simulated by calculating the equivalent dose (ED) within the model, for a given natural burial dose of the sample. The complete sequence of steps undertaken in several versions of the dating protocols is simulated. The relative intrinsic precision of these techniques is estimated by fitting Gaussian probability functions to the resulting simulated distribution of ED values. New simulations are presented for commonly used OSL sensitivity tests, consisting of successive cycles of sample irradiation with the same dose, followed by measurements of the sensitivity corrected L/T signals. We investigate several experimental factors which may be affecting both the intrinsic precision and intrinsic accuracy of the ReSAR protocol. 
The results of the simulation show that the four different published versions of the ReSAR protocol can
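The precision-estimation step described above, fitting a Gaussian to the simulated distribution of ED values, can be sketched minimally; for a Gaussian, the maximum-likelihood fit is simply the sample mean and standard deviation (the published protocol may instead fit binned histograms).

```python
from statistics import mean, stdev

def ed_accuracy_precision(ed_values, burial_dose):
    """Gaussian fit to simulated equivalent-dose (ED) values: the
    maximum-likelihood Gaussian has the sample mean and standard deviation.
    Returns (relative accuracy, relative precision): the systematic offset
    from the known burial dose, and the relative scatter of recovered doses."""
    mu, sigma = mean(ed_values), stdev(ed_values)
    return (mu - burial_dose) / burial_dose, sigma / mu
```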

  8. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  9. Accuracy and robustness of a simple algorithm to measure vessel diameter from B-mode ultrasound images.

    PubMed

    Hunt, Brian E; Flavin, Daniel C; Bauschatz, Emily; Whitney, Heather M

    2016-06-01

    Measurement of changes in arterial vessel diameter can be used to assess the state of cardiovascular health, but the use of such measurements as biomarkers is contingent upon the accuracy and robustness of the measurement. This work presents a simple algorithm for measuring diameter from B-mode images derived from vascular ultrasound. The algorithm is based upon Gaussian curve fitting and a Viterbi search process. We assessed the accuracy of the algorithm by measuring the diameter of a digital reference object (DRO) and ultrasound-derived images of a carotid artery. We also assessed the robustness of the algorithm by manipulating the quality of the image. Across a broad range of signal-to-noise ratios (SNR) and with varying image edge error, the algorithm measured vessel diameter within 0.7% of the creation dimensions of the DRO. A similar level of difference (0.8%) was found when an ultrasound image was used. When the SNR dropped to 18 dB, measurement error increased to 1.3%. When edge position was varied by as much as 10%, measurement error remained between 0.68% and 0.75%. All these errors fall well within the margin of error established by the medical physics community for quantitative ultrasound measurements. We conclude that this simple algorithm provides consistent and accurate measurement of lumen diameter from B-mode images across a broad range of image quality. PMID:27055985
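The Viterbi search component named above can be sketched generically: choose one edge row per image column so that total edge strength is maximized while large row jumps between adjacent columns are penalized. The cost matrix and penalty below are invented for illustration; the published algorithm pairs this with Gaussian curve fitting, which is omitted here.

```python
def viterbi_edge(cost, jump_penalty=1.0):
    """Track a smooth edge across image columns.
    cost[c][r]: edge score of row r in column c (higher = more edge-like).
    Returns the row chosen per column maximizing total score minus
    jump_penalty * |row change| between neighbouring columns."""
    n_cols, n_rows = len(cost), len(cost[0])
    score = list(cost[0])          # best cumulative score ending at each row
    back = []                      # backpointers for path recovery
    for c in range(1, n_cols):
        ptr = [0] * n_rows
        new = [0.0] * n_rows
        for r in range(n_rows):
            best = max(range(n_rows),
                       key=lambda p: score[p] - jump_penalty * abs(p - r))
            ptr[r] = best
            new[r] = cost[c][r] + score[best] - jump_penalty * abs(best - r)
        score, back = new, back + [ptr]
    path = [max(range(n_rows), key=lambda r: score[r])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

Running one such search along each vessel wall and differencing the two paths would give a per-column diameter estimate.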

  10. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket, and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement at levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with the height-adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e., away from the dip direction) within the trace of the laser plane; and d) improved measurement of bedding dip and direction, necessary to orient the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify the parallelism of a geological surface with structural dip by creating a visual planar datum in the field, thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.
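Stratigraphic thickness measurement with a Jacob's staff rests on simple trigonometry. As one standard field relation (a textbook correction, not taken from this paper): a vertical interval measured through beds dipping at angle δ corresponds to a true stratigraphic thickness of the vertical distance times cos δ.

```python
import math

def true_thickness(vertical_interval, dip_deg):
    """True stratigraphic thickness from a vertically measured interval
    through beds dipping at dip_deg (standard field correction)."""
    return vertical_interval * math.cos(math.radians(dip_deg))
```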

  11. Note: electronic circuit for two-way time transfer via a single coaxial cable with picosecond accuracy and precision.

    PubMed

    Prochazka, Ivan; Kodet, Jan; Panek, Petr

    2012-11-01

    We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The design of the electronic circuit enables time tagging of pulses of interest to be carried out in parallel with the comparison of the time scales of the two timing devices. The key timing parameters of the circuit are: the temperature dependence of the delay is below 100 fs/K; the timing stability (time deviation) is better than 8 fs for averaging times from minutes to hours; the time transfer precision is sub-picosecond; and the time transfer accuracy is a few picoseconds.
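The underlying two-way principle can be illustrated with the textbook four-timestamp relation (this is the generic method, not this paper's specific circuit): if each end timestamps its own transmission and the arrival of the other's pulse, and the cable delay is the same in both directions, the clock offset and one-way delay follow directly.

```python
def two_way_offset(t1, t2, t3, t4):
    """Classic two-way time transfer. A transmits at t1 (A's clock),
    B receives at t2 (B's clock); B transmits at t3 (B's clock),
    A receives at t4 (A's clock). Assuming a symmetric path delay,
    return (offset of B's clock relative to A's, one-way path delay)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay
```

Because the path delay cancels in the offset, its value need not be known; only its symmetry matters, which is why a single shared coaxial cable is attractive.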

  12. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study

    PubMed Central

    Olivecrona, Henrik; Maguire, Gerald Q.; Noz, Marilyn E.; Zeleznik, Michael P.

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  13. Accuracy and Precision of Three-Dimensional Low Dose CT Compared to Standard RSA in Acetabular Cups: An Experimental Study.

    PubMed

    Brodén, Cyrus; Olivecrona, Henrik; Maguire, Gerald Q; Noz, Marilyn E; Zeleznik, Michael P; Sköldenberg, Olof

    2016-01-01

    Background and Purpose. The gold standard for detection of implant wear and migration is currently radiostereometry (RSA). The purpose of this study is to compare a three-dimensional computed tomography technique (3D CT) to standard RSA as an alternative technique for measuring migration of acetabular cups in total hip arthroplasty. Materials and Methods. With tantalum beads, we marked one cemented and one uncemented cup and mounted these on a similarly marked pelvic model. A comparison was made between 3D CT and standard RSA for measuring migration. Twelve repeated stereoradiographs and CT scans with double examinations in each position and gradual migration of the implants were made. Precision and accuracy of the 3D CT were calculated. Results. The accuracy of the 3D CT ranged between 0.07 and 0.32 mm for translations and 0.21 and 0.82° for rotation. The precision ranged between 0.01 and 0.09 mm for translations and 0.06 and 0.29° for rotations, respectively. For standard RSA, the precision ranged between 0.04 and 0.09 mm for translations and 0.08 and 0.32° for rotations, respectively. There was no significant difference in precision between 3D CT and standard RSA. The effective radiation dose of the 3D CT method, comparable to RSA, was estimated to be 0.33 mSv. Interpretation. Low dose 3D CT is a comparable method to standard RSA in an experimental setting. PMID:27478832

  14. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Tovwell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  15. Incorporating precision, accuracy and alternative sampling designs into a continental monitoring program for colonial waterbirds

    USGS Publications Warehouse

    Steinkamp, M.J.; Peterjohn, B.G.; Keisman, J.L.

    2003-01-01

    A comprehensive monitoring program for colonial waterbirds in North America has never existed. At smaller geographic scales, many states and provinces conduct surveys of colonial waterbird populations. Periodic regional surveys are conducted at varying times during the breeding season using a variety of survey methods, which complicates attempts to estimate population trends for most species. The US Geological Survey Patuxent Wildlife Research Center has recently started to coordinate colonial waterbird monitoring efforts throughout North America. A centralized database has been developed with an Internet-based data entry and retrieval page. The extent of existing colonial waterbird surveys has been defined, allowing gaps in coverage to be identified and basic inventories completed where desirable. To enable analyses of comparable data at regional or larger geographic scales, sampling populations through statistically sound sampling designs should supersede obtaining counts at every colony. Standardized breeding season survey techniques have been agreed upon and documented in a monitoring manual. Each survey in the manual has associated with it recommendations for bias estimation, and includes specific instructions on measuring detectability. The methods proposed in the manual are for developing reliable, comparable indices of population size to establish trend information at multiple spatial and temporal scales, but they will not result in robust estimates of total population numbers.

  16. Clinical decision support systems for improving diagnostic accuracy and achieving precision medicine.

    PubMed

    Castaneda, Christian; Nalley, Kip; Mannion, Ciaran; Bhattacharyya, Pritish; Blake, Patrick; Pecora, Andrew; Goy, Andre; Suh, K Stephen

    2015-01-01

    As research laboratories and clinics collaborate to achieve precision medicine, both communities are required to understand mandated electronic health/medical record (EHR/EMR) initiatives that will be fully implemented in all clinics in the United States by 2015. Stakeholders will need to evaluate current record keeping practices and optimize and standardize methodologies to capture nearly all information in digital format. Collaborative efforts from academic and industry sectors are crucial to achieving higher efficacy in patient care while minimizing costs. Currently existing digitized data and information are present in multiple formats and are largely unstructured. In the absence of a universally accepted management system, departments and institutions continue to generate silos of information. As a result, invaluable and newly discovered knowledge is difficult to access. To accelerate biomedical research and reduce healthcare costs, clinical and bioinformatics systems must employ common data elements to create structured annotation forms enabling laboratories and clinics to capture sharable data in real time. Conversion of these datasets to knowable information should be a routine institutionalized process. New scientific knowledge and clinical discoveries can be shared via integrated knowledge environments defined by flexible data models and extensive use of standards, ontologies, vocabularies, and thesauri. In the clinical setting, aggregated knowledge must be displayed in user-friendly formats so that physicians, non-technical laboratory personnel, nurses, data/research coordinators, and end-users can enter data, access information, and understand the output. The effort to connect astronomical numbers of data points, including '-omics'-based molecular data, individual genome sequences, experimental data, patient clinical phenotypes, and follow-up data is a monumental task. 
Roadblocks to this vision of integration and interoperability include ethical, legal

  17. Precise and Continuous Time and Frequency Synchronisation at the 5×10-19 Accuracy Level

    PubMed Central

    Wang, B.; Gao, C.; Chen, W. L.; Miao, J.; Zhu, X.; Bai, Y.; Zhang, J. W.; Feng, Y. Y.; Li, T. C.; Wang, L. J.

    2012-01-01

    The synchronisation of time and frequency between remote locations is crucial for many important applications. Conventional time and frequency dissemination often makes use of satellite links. Recently, the communication fibre network has become an attractive option for long-distance time and frequency dissemination. Here, we demonstrate accurate frequency transfer and time synchronisation via an 80 km fibre link between Tsinghua University (THU) and the National Institute of Metrology of China (NIM). Using a 9.1 GHz microwave modulation and a timing signal carried by two continuous-wave lasers and transferred across the same 80 km urban fibre link, frequency transfer stability at the level of 5×10−19/day was achieved. Time synchronisation at the 50 ps precision level was also demonstrated. The system is reliable and has operated continuously for several months. We further discuss the feasibility of using such frequency and time transfer over 1000 km and its applications to long-baseline radio astronomy. PMID:22870385

  18. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
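The correction described above, regressing pupil size out of the gaze trace, can be sketched as a one-variable ordinary least-squares fit whose residuals form the corrected gaze signal (the study's actual pipeline may differ in detail).

```python
from statistics import mean

def regress_out(gaze, pupil):
    """Remove the component of gaze position linearly predicted by pupil
    size (ordinary least squares) and return the residual gaze trace."""
    mg, mp = mean(gaze), mean(pupil)
    cov = sum((p - mp) * (g - mg) for p, g in zip(pupil, gaze))
    var = sum((p - mp) ** 2 for p in pupil)
    slope = cov / var
    # Residuals: gaze minus the part explained by pupil-size fluctuations.
    return [g - mg - slope * (p - mp) for g, p in zip(gaze, pupil)]
```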

  19. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts up to several degrees in visual angle because pupil centration can change without eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and thus weakens the quality of the video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16s) and large (∼4min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task. PMID:25578924

  20. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review).

    PubMed

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the extent of accuracy of cone beam computed tomography (CBCT) as a tool for measurement of alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement error ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence behind these data is not strong. PMID:27563194

  1. A simple device for high-precision head image registration: Preliminary performance and accuracy tests

    SciTech Connect

    Pallotta, Stefania

    2007-05-15

    The purpose of this paper is to present a new device for multimodal head study registration and to examine its performance in preliminary tests. The device consists of a system of eight markers fixed to mobile carbon pipes and bars which can be easily mounted on the patient's head using the ear canals and the nasal bridge. Four graduated scales fixed to the rigid support allow examiners to find the same device position on the patient's head during different acquisitions. The markers can be filled with appropriate substances for visualisation in computed tomography (CT), magnetic resonance, single photon emission computed tomography (SPECT) and positron emission tomography images. The device's rigidity and its position reproducibility were measured in 15 repeated CT acquisitions of the Alderson Rando anthropomorphic phantom and in two SPECT studies of a patient. The proposed system displays good rigidity and reproducibility characteristics. A relocation accuracy of less than 1.5 mm was found in more than 90% of the results. The registration parameters obtained using such a device were compared to those obtained using fiducial markers fixed on phantom and patient heads, resulting in differences of less than 1 deg. and 1 mm for rotation and translation parameters, respectively. Residual differences between fiducial marker coordinates in reference and in registered studies were less than 1 mm in more than 90% of the results, proving that the device performed as accurately as noninvasive stereotactic devices. Finally, an example of multimodal employment of the proposed device is reported.

  2. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  4. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references were hand searched for possible missing articles. Only articles that met the selection criteria were included and critically appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity between the included studies. Given the limited number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. We should emphasize, however, that the evidence for these data is not strong. PMID:27563194

  6. Fragile associations coexist with robust memories for precise details in long-term memory.

    PubMed

    Lew, Timothy F; Pashler, Harold E; Vul, Edward

    2016-03-01

    What happens to memories as we forget? They might gradually lose fidelity, lose their associations (and thus be retrieved in response to the incorrect cues), or be completely lost. Typical long-term memory studies assess memory as a binary outcome (correct/incorrect), and cannot distinguish these different kinds of forgetting. Here we assess long-term memory for scalar information, thus allowing us to quantify how different sources of error diminish as we learn, and accumulate as we forget. We trained subjects on visual and verbal continuous quantities (the locations of objects and the distances between major cities, respectively), tested subjects after extended delays, and estimated whether recall errors arose due to imprecise estimates, misassociations, or complete forgetting. Although subjects quickly formed precise memories and retained them for a long time, they were slow to learn correct associations and quick to forget them. These results suggest that long-term recall is especially limited in its ability to form and retain associations. PMID:26371498

  7. High-Precise and Robust Face-Recognition System Based on Optical Parallel Correlator

    NASA Astrophysics Data System (ADS)

    Kodate, Kashiko

    2005-10-01

    Facial recognition is applied in a wide range of security systems, and has been studied since the 1970s, with extensive research into and development of digital processing. However, the only systems available are 1:1 verification systems combined with ID card identification, or ID-less systems with a small number of images in the database. The number of images that can be stored is limited, and recognition has to be improved to account for photos taken at different angles. Commercially available facial recognition systems for the most part utilize digital computers performing electronic pattern recognition. In contrast, optical analog operations can process two-dimensional images instantaneously in parallel using a lens-based Fourier transform function. In the 1960s, two methods were proposed: the Vanderlugt correlator and the joint transform correlator (JTC). We present a new scheme using a multi-channel parallel JTC to make better use of spatial parallelism, extending a single-channel JTC through a diffraction-type multi-level zone-plate array. Our project's objectives were: (i) to design a matched filter which equips the system with high recognition capability at a faster calculation speed by analyzing the spatial frequency of facial image elements, and (ii) to create a four-channel Vanderlugt correlator for a super-high-speed (1000 frames/s) optical parallel facial recognition system, robust enough for 1:N identification against a large database of 4000 images. Automation was also achieved for the entire process via a practical controlling system. The resulting super-high-speed facial recognition system based on optical parallelism is faster in its processing time than the JTC optical correlator.

  8. A Method of Determining Accuracy and Precision for Dosimeter Systems Using Accreditation Data

    SciTech Connect

    Rick Cummings and John Flood

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively.
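
    The Model II (random-effects) one-way ANOVA at the core of this method can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the grouping of dose-response ratios into test categories and the data layout are assumptions.

```python
import numpy as np

def anova_variance_components(groups):
    """Model II (random-effects) one-way ANOVA variance components.

    groups: list of 1-D arrays, each holding e.g. the ratios of reported
    to delivered dose for one test category (hypothetical layout).
    Returns (between-group variance, within-group variance)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    N = n.sum()
    means = np.array([np.mean(g) for g in groups])
    grand = np.concatenate(groups).mean()
    ss_between = np.sum(n * (means - grand) ** 2)
    ss_within = sum(np.sum((g - m) ** 2) for g, m in zip(groups, means))
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    # Effective group size n0 handles unbalanced group counts
    n0 = (N - np.sum(n ** 2) / N) / (k - 1)
    var_between = max((ms_between - ms_within) / n0, 0.0)
    return var_between, ms_within
```

    The square roots of these components give between- and within-group standard deviations, from which an expanded uncertainty at the 95% confidence level can be formed in the usual way (coverage factor of about 2).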

  9. A method of determining accuracy and precision for dosimeter systems using accreditation data.

    PubMed

    Cummings, Frederick; Flood, John R

    2010-12-01

    A study of the uncertainty of dosimeter results is required by the national accreditation programs for each dosimeter model for which accreditation is sought. Typically, the methods used to determine uncertainty have included the partial differentiation method described in the U.S. Guide to Uncertainty in Measurements or the use of Monte Carlo techniques and probability distribution functions to generate simulated dose results. Each of these techniques has particular strengths and should be employed when the areas of uncertainty are required to be understood in detail. However, the uncertainty of dosimeter results can also be determined using a Model II One-Way Analysis of Variance technique and accreditation testing data. The strengths of the technique include (1) the method is straightforward and the data are provided under accreditation testing and (2) the method provides additional data for the analysis of long-term uncertainty using Statistical Process Control (SPC) techniques. The use of SPC to compare variances and standard deviations over time is described well in other areas and is not discussed in detail in this paper. The application of Analysis of Variance to historic testing data indicated that the accuracy in a representative dosimetry system (Panasonic® Model UD-802) was 8.2%, 5.1%, and 4.8% and the expanded uncertainties at the 95% confidence level were 10.7%, 14.9%, and 15.2% for the Accident, Protection Level-Shallow, and Protection Level-Deep test categories in the Department of Energy Laboratory Accreditation Program, respectively. The 95% level of confidence ranges were (0.98 to 1.19), (0.90 to 1.20), and (0.90 to 1.20) for the three groupings of test categories, respectively. PMID:21068596

  10. Accuracy and Precision of Equine Gait Event Detection during Walking with Limb and Trunk Mounted Inertial Sensors

    PubMed Central

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variations of temporal gait events when pathology is present are good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/off and stance can be detected accurately and precisely using features from trunk and distal limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limb and five IMUs were attached to the skin over the dorsal spinous processes at the withers, fourth lumbar vertebrae and sacrum as well as left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated to compare force plate and IMU timings for gait events. Data were collected from seven horses. One hundred and twenty three (123) front limb steps were analysed; hoof-on was detected with a bias (SD) of −7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with −0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of −4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely. PMID:22969392
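
    The accuracy and precision figures quoted in this style, e.g. "−7 (23) ms", follow directly from the paired event-time differences; a minimal sketch, assuming paired event times in milliseconds:

```python
import numpy as np

def event_timing_agreement(t_ref, t_test):
    """Accuracy (bias = mean difference) and precision (SD of the
    differences) between paired event times, e.g. force plate
    (reference) vs IMU detections, both in ms."""
    d = np.asarray(t_test, float) - np.asarray(t_ref, float)
    return d.mean(), d.std(ddof=1)
```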

  11. Precisely Molded Nanoparticle Displaying DENV-E Proteins Induces Robust Serotype-Specific Neutralizing Antibody Responses

    PubMed Central

    Hoekstra, Gabriel; Yi, Xianwen; Stone, Michelle; Horvath, Katie; Miley, Michael J.; DeSimone, Joseph; Luft, Chris J.; de Silva, Aravinda M.

    2016-01-01

    Dengue virus (DENV) is the causative agent of dengue fever and dengue hemorrhagic fever. The virus is endemic in over 120 countries, causing over 350 million infections per year. Dengue vaccine development is challenging because of the need to induce simultaneous protection against four antigenically distinct DENV serotypes and evidence that, under some conditions, vaccination can enhance disease due to specific immunity to the virus. While several live-attenuated tetravalent dengue virus vaccines display partial efficacy, it has been challenging to induce balanced protective immunity to all 4 serotypes. Instead of using whole-virus formulations, we are exploring the potential of a particulate subunit vaccine, based on DENV E-protein displayed on nanoparticles that have been precisely molded using Particle Replication in Non-wetting Templates (PRINT) technology. Here we describe immunization studies with a DENV2-nanoparticle vaccine candidate. The ectodomain of DENV2-E protein was expressed as a secreted recombinant protein (sRecE), purified and adsorbed to poly(lactic-co-glycolic acid) (PLGA) nanoparticles of different sizes and shapes. We show that PRINT nanoparticle-adsorbed sRecE without any adjuvant induces higher IgG titers and a more potent DENV2-specific neutralizing antibody response compared to the soluble sRecE protein alone. Antigen trafficking studies indicate that PRINT nanoparticle display of sRecE prolongs the bio-availability of the antigen in the draining lymph nodes by creating an antigen depot. Our results demonstrate that PRINT nanoparticles are a promising platform for delivering subunit vaccines against flaviviruses such as dengue and Zika. PMID:27764114

  12. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity

    PubMed Central

    Baird, Emily; Fernandez, Diana C.; Wcislo, William T.; Warrant, Eric J.

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion—a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  14. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments.

    PubMed

    Svec, David; Tichopad, Ales; Novosadova, Vendula; Pfaffl, Michael W; Kubista, Mikael

    2015-03-01

    We have examined the imprecision in the estimation of PCR efficiency by means of standard curves, based on a strategic experimental design with a large number of technical replicates. In particular, we asked how robust this estimation is with respect to commonly varying factors: the instrument used, the number of technical replicates performed, and the volume transferred throughout the dilution series. We used six different qPCR instruments, performed 1-16 qPCR replicates per concentration, and tested 2-10 μl of transferred analyte volume, respectively. We find that the estimated PCR efficiency varies significantly across different instruments. Using a Monte Carlo approach, we find the uncertainty in the PCR efficiency estimation may be as large as 42.5% (95% CI) if a standard curve with only one qPCR replicate per concentration is used in 16 different plates. Based on our investigation we propose recommendations for the precise estimation of PCR efficiency: (1) one robust standard curve with at least 3-4 qPCR replicates at each concentration should be generated, (2) the efficiency is instrument dependent, but reproducibly stable on one platform, and (3) using a larger volume when constructing the serial dilution series reduces sampling error and enables calibration across a wider dynamic range. PMID:27077029
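
    The efficiency being estimated here comes from the slope of the standard curve (Cq versus log10 concentration). A minimal sketch of that calculation; the dilution-series values in any example are hypothetical:

```python
import numpy as np

def pcr_efficiency(concentrations, cq):
    """PCR efficiency from a dilution-series standard curve: fit Cq
    against log10(concentration); efficiency = 10**(-1/slope) - 1, so a
    slope of about -3.32 corresponds to 100% efficiency (perfect
    doubling per cycle). Technical replicates enter as repeated
    concentration values paired with their Cq readings."""
    slope, _ = np.polyfit(np.log10(concentrations), np.asarray(cq, float), 1)
    return 10.0 ** (-1.0 / slope) - 1.0
```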

  16. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

    Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, Cochrane Library of Clinical Trials, Scopus, and Web of Science from inception to June 2014 assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of included studies were calculated using a random-effects model. The inter-study heterogeneity was also assessed using an I² statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min⁻¹ and 29.1%, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I² = 98.3%). In the sub-analysis regarding the device, electrical cardiometry showed the smallest bias (-0.03 l min⁻¹) and lowest percentage error (23.6%). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on the weighted mean difference between reference and tested methods. Although the pooled bias was small, the mean pooled percentage error was in the gray zone of clinical applicability. In the sub-group analysis, electrical cardiometry was the device that provided the most accurate measurement. However, a high heterogeneity between studies was found, likely due to a wide range of study characteristics. PMID:26315477
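
    The percentage-error criterion used in such CO method-comparison studies relates the limits of agreement to the mean cardiac output. A minimal sketch of that arithmetic (the ~30% acceptability threshold is the conventional one, not a value from this review):

```python
def percentage_error(sd_bias, mean_co):
    """Percentage error for CO method comparison:
    100 * 1.96 * SD of the bias / mean cardiac output.
    Values under ~30% are conventionally taken as clinically acceptable."""
    return 100.0 * 1.96 * sd_bias / mean_co
```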

  17. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and though several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions to mimic samples of three different ages, to be run as internal

  18. Cascade impactor (CI) mensuration--an assessment of the accuracy and precision of commercially available optical measurement systems.

    PubMed

    Chambers, Frank; Ali, Aziz; Mitchell, Jolyon; Shelton, Christopher; Nichols, Steve

    2010-03-01

    Multi-stage cascade impactors (CIs) are the preferred measurement technique for characterizing the aerodynamic particle size distribution of an inhalable aerosol. Stage mensuration is the recommended pharmacopeial method for monitoring CI "fitness for purpose" within a GxP environment. The Impactor Sub-Team of the European Pharmaceutical Aerosol Group has undertaken an inter-laboratory study to assess both the precision and accuracy of a range of makes and models of instruments currently used for optical inspection of impactor stages. Measurement of two Andersen 8-stage 'non-viable' cascade impactor "reference" stages that were representative of jet sizes for this instrument type (stages 2 and 7) confirmed that all instruments evaluated were capable of reproducible jet measurement, with the overall capability being within the current pharmacopeial stage specifications for both stages. In the assessment of absolute accuracy, small but consistent differences (ca. 0.6% of the certified value) were observed between 'dots' and 'spots' of a calibrated chromium-plated reticule, most likely the result of treatment of partially lit pixels along the circumference of this calibration standard. Measurements of three certified ring gauges, the smallest having a nominal diameter of 1.0 mm, were consistent with the observation that treatment of partially illuminated pixels at the periphery of the projected image can result in undersizing. However, the bias was less than 1% of the certified diameter. The optical inspection instruments evaluated are fully capable of confirming cascade impactor suitability in accordance with pharmacopeial practice.

  19. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or they were taken continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points. Mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
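
    The root-sum-square combination described above can be sketched in a few lines; the component values in any example are illustrative, not the report's actual list of phase accuracies:

```python
import math

def overall_accuracy(components):
    """Root-sum-square combination of independent accuracy components,
    e.g. tape calibration, reference-point, and read-off contributions,
    all in the same units (feet in the report above)."""
    return math.sqrt(sum(c * c for c in components))
```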

  20. An evaluation of the accuracy and precision of a stand-alone submersible continuous ruminal pH measurement system.

    PubMed

    Penner, G B; Beauchemin, K A; Mutsvangwa, T

    2006-06-01

    The objectives of this study were 1) to develop and evaluate the accuracy and precision of a new stand-alone submersible continuous ruminal pH measurement system called the Lethbridge Research Centre ruminal pH measurement system (LRCpH; Experiment 1); 2) to establish the accuracy and precision of a well-documented, previously used continuous indwelling ruminal pH system (CIpH) to ensure that the new system (LRCpH) was as accurate and precise as the previous system (CIpH; Experiment 2); and 3) to determine the required frequency for pH electrode standardization by comparing baseline millivolt readings of pH electrodes in pH buffers 4 and 7 after 0, 24, 48, and 72 h of ruminal incubation (Experiment 3). In Experiment 1, 6 pregnant Holstein heifers, 3 lactating, primiparous Holstein cows, and 2 Black Angus heifers were used. All experimental animals were fitted with permanent ruminal cannulas. In Experiment 2, the 3 cannulated, lactating, primiparous Holstein cows were used. In both experiments, ruminal pH was determined continuously using indwelling pH electrodes. Mean pH values were then compared with ruminal pH values obtained using spot samples of ruminal fluid (MANpH) obtained at the same time. A correlation coefficient accounting for repeated measures was calculated and results were used to calculate the concordance correlation to examine the relationships between the LRCpH-derived values and MANpH, and the CIpH-derived values and MANpH. In Experiment 3, the 6 pregnant Holstein heifers were used along with 6 new submersible pH electrodes. In Experiments 1 and 2, the comparison of the LRCpH output (1- and 5-min averages) to MANpH had higher correlation coefficients after accounting for repeated measures (0.98 and 0.97 for 1- and 5-min averages, respectively) and concordance correlation coefficients (0.96 and 0.97 for 1- and 5-min averages, respectively) than the comparison of CIpH to MANpH (0.88 and 0.87, correlation coefficient and concordance
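
    The concordance correlation used to compare logger and spot-sample pH is Lin's statistic, which, unlike a plain correlation coefficient, also penalizes location and scale shifts away from the identity line. A minimal sketch (the repeated-measures adjustment mentioned in the abstract is omitted here):

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient for paired measurements
    (e.g. continuous-logger pH vs manually measured spot-sample pH).
    Equals 1 only for perfect agreement on the identity line."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```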

  1. Enhancement of the accuracy of the (P-ω) method through the implementation of a nonlinear robust observer

    NASA Astrophysics Data System (ADS)

    Kfoury, G. A.; Chalhoub, N. G.; Henein, N. A.; Bryzik, W.

    2006-04-01

    The (P-ω) method is a model-based approach developed for determining the instantaneous friction torque in internal combustion engines. This scheme requires measurements of the cylinder gas pressure, the engine load torque, the crankshaft angular displacement and its time derivatives. The effects of the higher order dynamics of the crank-slider mechanism on the measured angular motion of the crankshaft have caused the (P-ω) method to yield erroneous results, especially, at high engine speeds. To alleviate this problem, a nonlinear sliding mode observer has been developed herein to accurately estimate the rigid and flexible motions of the piston-assembly/connecting-rod/crankshaft mechanism of a single cylinder engine. The observer has been designed to yield a robust performance in the presence of disturbances and modeling imprecision. The digital simulation results, generated under transient conditions representing a decrease in the engine speed, have illustrated the rapid convergence of the estimated state variables to the actual ones in the presence of both structured and unstructured uncertainties. Moreover, this study has proven that the use of the estimated rather than the measured angular displacement of the crankshaft and its time derivatives can significantly improve the accuracy of the (P-ω) method in determining the instantaneous engine friction torque.
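
    To illustrate the idea behind such a robust observer (this is a toy system, not the authors' engine model), here is a sliding-mode observer for a double integrator with a bounded unknown disturbance, where only position is measured; the gains and disturbance are assumptions chosen for demonstration:

```python
import numpy as np

def simulate(duration=5.0, dt=1e-3):
    """Sliding-mode observer for x1' = x2, x2' = u + d, with only x1
    measured. The switching gain k dominates the disturbance bound, so
    the estimation error converges despite d being unknown."""
    l1, l2, k = 20.0, 100.0, 5.0          # observer gains (assumed)
    x = np.array([1.0, 0.0])              # true state
    xh = np.array([0.0, 0.0])             # observer estimate
    for i in range(int(duration / dt)):
        t = i * dt
        u = -2.0 * x[0] - 2.0 * x[1]      # control input, known to observer
        d = 0.5 * np.sin(5.0 * t)         # unknown disturbance, |d| <= 0.5
        e = x[0] - xh[0]                  # output estimation error
        x = x + dt * np.array([x[1], u + d])
        xh = xh + dt * np.array([xh[1] + l1 * e,
                                 u + l2 * e + k * np.sign(e)])
    return x, xh
```

    The linear injection terms (l1, l2) give fast nominal error dynamics, while the discontinuous k·sign(e) term rejects the bounded disturbance, which is the robustness property exploited in the abstract above.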

  2. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide at both the commercial and academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is considered. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORSs, especially for monitoring purposes. Two different types of CORS network are available in Italy today: the so-called "regional network" and the "national network", with mean inter-station distances of about 25/30 and 50/70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are considered, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers imposes some limits, notably the restricted baseline length and the need to fix the phase ambiguities correctly both for the network and for the rover. These factors play a crucial role in reaching a position with a good level of accuracy (centimetric or better) in a short time and with high reliability. The goal of this work is to investigate about the

  3. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing.

    PubMed

    Hombach, Michael; Maurer, Florian P; Pfiffner, Tamara; Böttger, Erik C; Furrer, Reinhard

    2015-12-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not been systematically investigated yet. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of

  5. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

    Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following an 89.7 nCi (100 μg) intravenous (iv) dose of (14)C-SCH 900518 2 h post 200 mg oral administration of nonradiolabeled SCH 900518 to six healthy male subjects has been described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative plasma (14)C-SCH 900518 concentration determination. Calibration standards and quality controls were included in every batch of sample analysis by AMS to ensure acceptable assay quality. Plasma (14)C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from isotopic ratios from AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range from 0.0159 to 9.07 dpm/mL for plasma (14)C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of iv and oral doses after the plasma concentrations were plotted vs the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%). The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these
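The standards-based quantitation described above (concentrations derived from a regression of calibration standards rather than raw isotopic ratios) can be sketched as follows; the responses and the QC sample are illustrative values, not the study's data:

```python
import numpy as np

conc_std = np.array([0.02, 0.1, 0.5, 2.0, 9.0])           # dpm/mL, standards
resp_std = np.array([0.041, 0.201, 1.002, 3.998, 18.01])  # instrument response

slope, intercept = np.polyfit(conc_std, resp_std, 1)      # linear calibration

def concentration(response):
    # back-calculate concentration from the fitted regression function
    return (response - intercept) / slope

# accuracy of a QC sample = found / nominal * 100%
qc_nominal, qc_response = 0.5, 1.01
accuracy = 100 * concentration(qc_response) / qc_nominal
print(round(accuracy, 1))  # → 100.9
```

Running the same back-calculation on each calibration standard and QC across batches is what yields the CV and accuracy ranges quoted in the abstract.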

  6. Improved precision and accuracy for high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometric exact mass measurement of small molecules from the simultaneous and controlled introduction of internal calibrants via a second electrospray nebuliser.

    PubMed

    Herniman, Julie M; Bristow, Tony W T; O'Connor, Gavin; Jarvis, Jackie; Langley, G John

    2004-01-01

    The use of a second electrospray nebuliser has proved to be highly successful for exact mass measurement during high-performance liquid chromatography/Fourier transform ion cyclotron resonance mass spectrometry (HPLC/FTICRMS). Much improved accuracy and precision of mass measurement were afforded by the introduction of the internal calibration solution, thus overcoming space charge issues due to the lack of control over relative ion abundances of the species eluting from the HPLC column. Further, the suppression of ionisation observed when using a T-piece method is addressed; this simple system has significant benefits over more elaborate approaches and provides data that compare very favourably with them. The technique is robust, flexible and transferable and can be used in conjunction with HPLC, infusion or flow injection analysis (FIA) to provide constant internal calibration signals to allow routine, accurate and precise mass measurements to be recorded.

  7. Determination of the precision and accuracy of morphological measurements using the Kinect™ sensor: comparison with standard stereophotogrammetry.

    PubMed

    Bonnechère, B; Jansen, B; Salvia, P; Bouzahouene, H; Sholukha, V; Cornelis, J; Rooze, M; Van Sint Jan, S

    2014-01-01

    The recent availability of the Kinect™ sensor, a low-cost Markerless Motion Capture (MMC) system, could give new and interesting insights into ergonomics (e.g. the creation of a morphological database). Extensive validation of this system is still missing. The aim of the study was to determine if the Kinect™ sensor can be used as an easy, cheap and fast tool to conduct morphology estimation. A total of 48 subjects were analysed using MMC. Results were compared with measurements obtained from a high-resolution stereophotogrammetric system, a marker-based system (MBS). Differences between MMC and MBS were found; however, these differences were systematically correlated and enabled regression equations to be obtained to correct MMC results. After correction, final results were in agreement with MBS data (p = 0.99). Results show that measurements were reproducible and precise after applying regression equations. Kinect™ sensor-based systems therefore seem to be suitable for use as fast and reliable tools to estimate morphology. Practitioner Summary: The Kinect™ sensor could eventually be used for fast morphology estimation as a body scanner. This paper presents an extensive validation of this device for anthropometric measurements in comparison to manual measurements and stereophotogrammetric devices. The accuracy is dependent on the segment studied but the reproducibility is excellent. PMID:24646374

  8. Progress integrating ID-TIMS U-Pb geochronology with accessory mineral geochemistry: towards better accuracy and higher precision time

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Samperton, K. M.; Crowley, J. L.; Cottle, J. M.

    2012-12-01

    It is increasingly common that hand samples of plutonic and volcanic rocks contain zircon with dates that span between zero and >100 ka. This recognition comes from the increased application of U-series geochronology on young volcanic rocks and the increased precision to better than 0.1% on single zircons by the U-Pb ID-TIMS method. It has thus become more difficult to interpret such complicated datasets in terms of ashbed eruption or magma emplacement, which are critical constraints for geochronologic applications ranging from biotic evolution and the stratigraphic record to magmatic and metamorphic processes in orogenic belts. It is important, therefore, to develop methods that aid in interpreting which minerals, if any, date the targeted process. One promising tactic is to better integrate accessory mineral geochemistry with high-precision ID-TIMS U-Pb geochronology. These dual constraints can 1) identify cogenetic populations of minerals, and 2) record magmatic or metamorphic fluid evolution through time. Goal (1) has been widely sought with in situ geochronology and geochemical analysis but is limited by low-precision dates. Recent work has attempted to bridge this gap by retrieving the typically discarded elution from ion exchange chemistry that precedes ID-TIMS U-Pb geochronology and analyzing it by ICP-MS (U-Pb TIMS-TEA). The result integrates geochemistry and high-precision geochronology from the exact same volume of material. The limitation of this method is its relatively coarse spatial resolution compared to in situ techniques, which averages potentially complicated trace element profiles through single minerals or mineral fragments. In continued work, we test the effect of this on zircon by beginning with CL imaging to reveal internal zonation and growth histories. This is followed by in situ LA-ICPMS trace element transects of imaged grains to reveal internal geochemical zonation.
The same grains are then removed from grain-mount, fragmented, and

  9. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-05-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3 % (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14 % from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 yr or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the ratio of root-mean-square spectral residuals and information content (<0.15 %). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are HDO/H2O-CH4 interference errors (seasonal bias up to ≈4 %). Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC sites (Wollongong maximum = 44.9 mm, Garmisch mean = 14.9 mm

  10. Strategy for high-accuracy-and-precision retrieval of atmospheric methane from the mid-infrared FTIR network

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Forster, F.; Rettinger, M.; Jones, N.

    2011-09-01

    We present a strategy (MIR-GBM v1.0) for the retrieval of column-averaged dry-air mole fractions of methane (XCH4) with a precision <0.3% (1-σ diurnal variation, 7-min integration) and a seasonal bias <0.14% from mid-infrared ground-based solar FTIR measurements of the Network for the Detection of Atmospheric Composition Change (NDACC, comprising 22 FTIR stations). This makes NDACC methane data useful for satellite validation and for the inversion of regional-scale sources and sinks in addition to long-term trend analysis. Such retrievals complement the high accuracy and precision near-infrared observations of the younger Total Carbon Column Observing Network (TCCON) with time series dating back 15 years or so before TCCON operations began. MIR-GBM v1.0 uses HITRAN 2000 (including the 2001 update release) and 3 spectral micro windows (2613.70-2615.40 cm-1, 2835.50-2835.80 cm-1, 2921.00-2921.60 cm-1). A first-order Tikhonov constraint is applied to the state vector given in units of per cent of volume mixing ratio. It is tuned to achieve minimum diurnal variation without damping seasonality. Final quality selection of the retrievals uses a threshold for the goodness of fit (χ2 < 1) as well as for the ratio of root-mean-square spectral noise and information content (<0.15%). Column-averaged dry-air mole fractions are calculated using the retrieved methane profiles and four-times-daily pressure-temperature-humidity profiles from National Center for Environmental Prediction (NCEP) interpolated to the time of measurement. MIR-GBM v1.0 is the optimum of 24 tested retrieval strategies (8 different spectral micro-window selections, 3 spectroscopic line lists: HITRAN 2000, 2004, 2008). Dominant errors of the non-optimum retrieval strategies are systematic HDO/H2O-CH4 interference errors leading to a seasonal bias up to ≈5%. Therefore interference errors have been quantified at 3 test sites covering clear-sky integrated water vapor levels representative for all NDACC
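The first-order Tikhonov constraint mentioned in this record amounts to regularized least squares with a smoothness penalty on the retrieved profile; the Jacobian, noise level, and regularization strength below are stand-ins, not the actual retrieval setup:

```python
import numpy as np

n = 10
rng = np.random.default_rng(0)
K = rng.normal(size=(30, n))                      # forward-model Jacobian (assumed)
x_true = np.linspace(1.0, 1.2, n)                 # smooth "true" profile (% of VMR)
y = K @ x_true + rng.normal(scale=0.01, size=30)  # noisy measurements

# first-order Tikhonov operator: discrete derivative penalizing roughness
L1 = np.diff(np.eye(n), axis=0)
alpha = 1.0                                       # regularization strength (tuned)

# minimize ||K x - y||^2 + alpha^2 ||L1 x||^2 via the normal equations
x_hat = np.linalg.solve(K.T @ K + alpha**2 * (L1.T @ L1), K.T @ y)
print(np.max(np.abs(x_hat - x_true)))             # small: close to the smooth truth
```

Tuning alpha trades smoothness against fit to the spectra, which mirrors the paper's tuning for minimum diurnal variation without damping the seasonal cycle.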

  11. Engineered, Robust Polyelectrolyte Multilayers by Precise Control of Surface Potential for Designer Protein, Cell, and Bacteria Adsorption.

    PubMed

    Zhu, Xiaoying; Guo, Shifeng; He, Tao; Jiang, Shan; Jańczewski, Dominik; Vancso, G Julius

    2016-02-01

    Cross-linked layer-by-layer (LbL) assemblies with a precisely tuned surface ζ-potential were fabricated to control the adsorption of proteins, mammalian cells, and bacteria for different biomedical applications. Two weak polyions including a synthesized polyanion and polyethylenimine were assembled under controlled conditions and cross-linked to prepare three robust LbL films as model surfaces with similar roughness and water affinity but displaying negative, zero, and positive net charges at the physiological pH (7.4). These surfaces were tested for their abilities to adsorb proteins, including bovine serum albumin (BSA) and lysozyme (LYZ). In the adsorption tests, the LbL films bind more proteins with opposite charges but less of those with like charges, indicating that electrostatic interactions play a major role in protein adsorption. However, LYZ showed higher nonspecific adsorption than BSA, because of the specific behavior of LYZ molecules, such as stacked multilayer formation during adsorption. To exclude such stacking effects from experiments, protein molecules were covalently immobilized on AFM colloidal probes to measure the adhesion forces against the model surfaces utilizing direct protein molecule-surface contacts. The results confirmed the dominating role of electrostatic forces in protein adhesion. In fibroblast cell and bacteria adhesion tests, similar trends (high adhesion on positively charged surfaces, but much lower on neutral and negatively charged surfaces) were observed because the fibroblast cell and bacterial surfaces studied possess negative potentials. The cross-linked LbL films with improved stability and engineered surface charge described in this study provide an excellent platform to control the behavior of different charged objects and can be utilized in practical biomedical applications. PMID:26756285

  13. Accuracy, precision and response time of consumer fork, remote digital probe and disposable indicator thermometers for cooked ground beef patties and chicken breasts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nine different commercially available instant-read consumer thermometers (forks, remotes, digital probe and disposable color change indicators) were tested for accuracy and precision compared to a calibrated thermocouple in 80 percent and 90 percent lean ground beef patties, and boneless and bone-in...

  14. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…
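The minimum detectable effect size for a two-level group-randomized trial is commonly computed with Bloom's approximation; the inputs below are illustrative, not taken from the trials reviewed in the article:

```python
import math

def mdes(J, n, rho, P=0.5, M=2.8, R2_2=0.0, R2_1=0.0):
    """Bloom's MDES approximation for a cluster-randomized trial.
    J clusters in total, n units per cluster, rho = intraclass correlation,
    P = proportion of clusters treated, M ~ 2.8 for alpha=.05 (two-tailed)
    and power=.80, R2_2/R2_1 = variance explained by level-2/level-1 covariates."""
    between = rho * (1 - R2_2) / (P * (1 - P) * J)
    within = (1 - rho) * (1 - R2_1) / (P * (1 - P) * J * n)
    return M * math.sqrt(between + within)

# e.g. 40 schools, 60 students each, ICC = 0.15, no covariates
print(round(mdes(J=40, n=60, rho=0.15), 2))  # → 0.36
```

Because the between-cluster term dominates for nontrivial ICCs, adding clusters (J) shrinks the MDES far faster than adding students per cluster (n), which is the central design lesson of such power analyses.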

  15. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs, using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trend of the precisions for RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD) on the other hand was significantly different (P<.01), with 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision was significantly (P<.009) degraded during the course of treatment by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR for tumor changes in adaptive procedures.
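The accuracy/precision split used in this record (systematic error = mean of the registration residuals, random error = their standard deviation) reduces to a two-line computation; the residuals below are illustrative values:

```python
import statistics

residuals_mm = [0.4, -0.2, 0.9, 0.1, -0.5, 0.6, 0.2, -0.1]  # landmark errors (mm)
accuracy = statistics.mean(residuals_mm)    # systematic error
precision = statistics.stdev(residuals_mm)  # random error (1 SD)
print(round(accuracy, 2), round(precision, 2))
```

A registration can thus be accurate (near-zero mean error) yet imprecise (large SD), which is exactly the pattern the study reports for tumor versus normal tissue.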

  16. International normalised ratio (INR) measured on the CoaguChek S and XS compared with the laboratory for determination of precision and accuracy.

    PubMed

    Christensen, Thomas D; Larsen, Torben B; Jensen, Claus; Maegaard, Marianne; Sørensen, Benny

    2009-03-01

    Oral anticoagulation therapy is monitored by the use of international normalised ratio (INR). Patients performing self-management estimate INR using a coagulometer, but studies have been partly flawed regarding the estimated precision and accuracy. The objective was to estimate the imprecision and accuracy for two different coagulometers (CoaguChek S and XS). Twenty-four patients treated with coumarin were prospectively followed for six weeks. INRs were analyzed weekly in duplicate on both coagulometers, and compared with results from the hospital laboratory. Statistical analysis included Bland-Altman plot, 95% limits of agreement, coefficient of variation (CV), and an analysis of variance using a mixed effect model. Comparing 141 duplicate measurements (a total of 564 measurements) of INR, we found that the CoaguChek S and CoaguChek XS had a precision (CV) of 3.4% and 2.3%, respectively. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers, and regarding diagnostic accuracy the CoaguChek S and CoaguChek XS deviated more than 15% from the laboratory measurements in 40% and 43% of the measurements, respectively. In conclusion, the precision of the coagulometers was found to be good, but only the CoaguChek XS had a precision within the predefined limit of 3%. Regarding analytical accuracy, the INR measurements tended to be lower on the coagulometers, compared to the laboratory. A large proportion of measurement of the coagulometers deviated more than 15% from the laboratory measurements. Whether this will have a clinical impact awaits further studies.
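Precision from duplicate measurements, as in this study, is typically computed with the duplicate-based standard deviation, SD = sqrt(sum(d_i^2) / (2k)) over k pairs; the INR pairs below are illustrative, not the study's data:

```python
import math

# illustrative duplicate INR readings (same sample measured twice)
pairs = [(2.4, 2.5), (3.1, 3.0), (2.0, 2.1), (2.8, 2.8), (3.4, 3.3)]
k = len(pairs)
mean_inr = sum(a + b for a, b in pairs) / (2 * k)
sd_dup = math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * k))
cv_pct = 100 * sd_dup / mean_inr               # coefficient of variation, %
print(round(cv_pct, 1))  # → 2.3
```

A CV computed this way isolates pure measurement imprecision, since each duplicate pair comes from the same blood sample at the same time.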

  17. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On spacecraft (GFO) primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite-ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  18. The precision and accuracy of iterative and non-iterative methods of photopeak integration in activation analysis, with particular reference to the analysis of multiplets

    USGS Publications Warehouse

    Baedecker, P.A.

    1977-01-01

    The relative precisions obtainable using two digital methods and three iterative least-squares fitting procedures of photopeak integration have been compared empirically using 12 replicate counts of a test sample with 14 photopeaks of varying intensity. The accuracy with which the various iterative fitting methods could analyse synthetic doublets has also been evaluated and compared with a simple non-iterative approach. © 1977 Akadémiai Kiadó.
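A minimal non-iterative photopeak integration of the kind compared here sums channel counts over the peak window and subtracts a linear baseline estimated from flanking channels; the synthetic spectrum below is illustrative:

```python
import math

def gaussian(ch, area, centroid, sigma):
    return area / (sigma * math.sqrt(2 * math.pi)) * math.exp(
        -0.5 * ((ch - centroid) / sigma) ** 2)

# synthetic spectrum: flat baseline plus one photopeak of known area
baseline, true_area = 50.0, 5000.0
spectrum = [baseline + gaussian(c, true_area, 50, 3) for c in range(100)]

lo, hi = 38, 62                                   # window: centroid +- 4 sigma
gross = sum(spectrum[lo:hi + 1])
left = sum(spectrum[lo - 3:lo]) / 3               # baseline left of the peak
right = sum(spectrum[hi + 1:hi + 4]) / 3          # baseline right of the peak
net = gross - (hi - lo + 1) * (left + right) / 2  # baseline-corrected area
print(abs(net - true_area) / true_area < 0.01)    # within ~1% of the truth
```

For well-separated peaks this digital method is hard to beat; it is for overlapping multiplets, where no clean baseline channels exist between the peaks, that the iterative fitting procedures become necessary.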

  19. Precision of high-resolution multibeam echo sounding coupled with high-accuracy positioning in a shallow water coastal environment

    NASA Astrophysics Data System (ADS)

    Ernstsen, Verner B.; Noormets, Riko; Hebbeln, Dierk; Bartholomä, Alex; Flemming, Burg W.

    2006-09-01

    Over 4 years, repetitive bathymetric measurements of a shipwreck in the Grådyb tidal inlet channel in the Danish Wadden Sea were carried out using a state-of-the-art high-resolution multibeam echosounder (MBES) coupled with a real-time long range kinematic (LRK™) global positioning system. Seven measurements during a single survey in 2003 (n=7) revealed a horizontal and vertical precision of the MBES system of ±20 and ±2 cm, respectively, at a 95% confidence level. By contrast, four annual surveys from 2002 to 2005 (n=4) yielded a horizontal and vertical precision (at 95% confidence level) of only ±30 and ±8 cm, respectively. This difference in precision can be explained by three main factors: (1) the dismounting of the system between the annual surveys, (2) rougher sea conditions during the survey in 2004 and (3) the limited number of annual surveys. In general, the precision achieved here did not correspond to the full potential of the MBES system, as this could certainly have been improved by an increase in coverage density (soundings/m2), achievable by reducing the survey speed of the vessel. Nevertheless, precision was higher than that reported to date for earlier offshore test surveys using comparable equipment.
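A precision quoted at a 95% confidence level can be obtained from repeated soundings of the same target as a t-multiple of their standard deviation; the depths below are illustrative values, not the survey data:

```python
import statistics

depths = [10.42, 10.44, 10.40, 10.45, 10.43, 10.41, 10.44]  # m, n = 7 repeats
sd = statistics.stdev(depths)
t95 = 2.447                       # two-sided t-value, 95%, df = n - 1 = 6
print(round(t95 * sd, 3), "m")    # → 0.044 m
```

With only n=4 repeats the t-multiplier grows (3.182 at df=3), which is one reason the inter-annual precision in the study is poorer than the within-survey precision.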

  20. Hybrid formulation of the model-based non-rigid registration problem to improve accuracy and robustness.

    PubMed

    Clatz, Olivier; Delingette, Hervé; Talos, Ion-Florin; Golby, Alexandra J; Kikinis, Ron; Jolesz, Ferenc A; Ayache, Nicholas; Warfield, Simon K

    2005-01-01

    We present a new algorithm to register 3D pre-operative Magnetic Resonance (MR) images with intra-operative MR images of the brain. This algorithm relies on a robust estimation of the deformation from a sparse set of measured displacements. We propose a new framework to compute the displacement field iteratively, starting from an approximation formulation (minimizing the sum of a regularization term and a data error term) and converging toward an interpolation formulation (least-squares minimization of the data error term). The robustness of the algorithm is achieved through the introduction of an outlier rejection step in this gradual registration process. We ensure the validity of the deformation by the use of a biomechanical model of the brain specific to the patient, discretized with the finite element method. The algorithm has been tested on six cases of brain tumor resection, presenting a brain shift of up to 13 mm.
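
    The gradual transition from an approximation to an interpolation formulation can be illustrated in a deliberately simplified scalar form: with quadratic data and regularization terms the per-node minimizer has a closed form, and driving the regularization weight to zero recovers the measured displacements exactly. The function and values below are illustrative, not the authors' finite-element implementation:

```python
def estimate(measured, prior, lam):
    """Minimise lam*(x - prior)**2 + (x - measured)**2 per node (closed form)."""
    return [(m + lam * p) / (1 + lam) for m, p in zip(measured, prior)]

measured = [1.0, 2.0, 3.0]           # sparse measured displacements (mm)
x = [0.0, 0.0, 0.0]                  # start from the model prediction
for lam in [10.0, 1.0, 0.1, 0.0]:    # decreasing regularization weight
    x = estimate(measured, x, lam)
# With lam = 0 the final step interpolates the measured data exactly
print(x)
```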

  1. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
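
    Limited weight precision of the kind studied above can be emulated in software by rounding each weight to the nearest of the 2**bits levels a fixed-point synapse could represent. A minimal sketch (the uniform quantizer and the weight range are illustrative assumptions, not the paper's code):

```python
def quantize(w, bits, w_max=1.0):
    """Round w to the nearest of 2**bits uniformly spaced levels in [-w_max, w_max]."""
    steps = 2 ** bits - 1            # number of intervals between levels
    step = 2 * w_max / steps
    q = round((w + w_max) / step) * step - w_max
    return max(-w_max, min(w_max, q))

weights = [0.83, -0.27, 0.05, -0.91]
print([quantize(w, 2) for w in weights])   # 2-bit weights: 4 levels
```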

  2. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
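
    The accuracy (RMSE) and precision (SD) figures reported above can be computed directly from per-frame tracking errors; a sketch with invented numbers:

```python
import math, statistics

def rmse(errors):
    """Root-mean-square error of tracked-minus-reference positions."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical per-frame tracking errors in mm (tracked minus reference)
errors = [0.12, -0.18, 0.21, 0.05, -0.09, 0.16]
print(round(rmse(errors), 3), round(statistics.stdev(errors), 3))
```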

  5. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that, with careful tuning of the experimental conditions, a reproducibility of 3–4% in the ratios of equivalent intensities obtained in different measurements can be achieved. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  6. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective: This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods: Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results: The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions: The accuracy and precision of PUT dental models for evaluating the performance of the oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823
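
    The Bland-Altman agreement cited above is based on the pairwise differences between the two model types: the bias is their mean and the 95% limits of agreement are mean ± 1.96 SD. A sketch with hypothetical paired measurements (mm):

```python
import statistics

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d, (mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d)

plaster = [35.2, 28.9, 41.5, 33.0]   # hypothetical plaster-model measurements (mm)
put     = [35.4, 29.1, 41.3, 33.3]   # hypothetical PUT-model measurements (mm)
bias, limits = bland_altman(plaster, put)
print(round(bias, 3), [round(l, 3) for l in limits])
```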

  7. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N (i.e. matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we acquired low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal image sequence of moving images. Applying this algorithm to natural postures, the recognition rate scored two times higher than that of our conventional system. The system has high potential for future use in a variety of purposes such as searching for criminal suspects using street and airport video cameras, registering babies at hospitals, or handling an immeasurable number of images in a database.

  8. Survey of Branch Support Methods Demonstrates Accuracy, Power, and Robustness of Fast Likelihood-based Approximation Schemes

    PubMed Central

    Anisimova, Maria; Gil, Manuel; Dufayard, Jean-François; Dessimoz, Christophe; Gascuel, Olivier

    2011-01-01

    Phylogenetic inference and evaluating support for inferred relationships is at the core of many studies testing evolutionary hypotheses. Despite the popularity of nonparametric bootstrap frequencies and Bayesian posterior probabilities, the interpretation of these measures of tree branch support remains a source of discussion. Furthermore, both methods are computationally expensive and become prohibitive for large data sets. Recent fast approximate likelihood-based measures of branch supports (approximate likelihood ratio test [aLRT] and Shimodaira–Hasegawa [SH]-aLRT) provide a compelling alternative to these slower conventional methods, offering not only speed advantages but also excellent levels of accuracy and power. Here we propose an additional method: a Bayesian-like transformation of aLRT (aBayes). Considering both probabilistic and frequentist frameworks, we compare the performance of the three fast likelihood-based methods with the standard bootstrap (SBS), the Bayesian approach, and the recently introduced rapid bootstrap. Our simulations and real data analyses show that with moderate model violations, all tests are sufficiently accurate, but aLRT and aBayes offer the highest statistical power and are very fast. With severe model violations aLRT, aBayes and Bayesian posteriors can produce elevated false-positive rates. With data sets for which such violation can be detected, we recommend using SH-aLRT, the nonparametric version of aLRT based on a procedure similar to the Shimodaira–Hasegawa tree selection. In general, the SBS seems to be excessively conservative and is much slower than our approximate likelihood-based methods. PMID:21540409
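
    The aBayes measure proposed above can be described as a posterior-like transformation of the likelihoods of the three possible NNI configurations around a branch: under a uniform prior, the support is the best configuration's likelihood divided by the sum of all three. A hedged sketch computed stably from log-likelihoods (the values below are invented):

```python
import math

def abayes(loglik1, loglik2, loglik3):
    """Posterior-like support for the best of the three NNI configurations
    around a branch (uniform prior), computed stably from log-likelihoods."""
    m = max(loglik1, loglik2, loglik3)
    w = [math.exp(l - m) for l in (loglik1, loglik2, loglik3)]
    return max(w) / sum(w)

# Hypothetical log-likelihoods of the three configurations
print(round(abayes(-1200.0, -1205.3, -1207.1), 4))
```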

  9. Clock accuracy and precision evolve as a consequence of selection for adult emergence in a narrow window of time in fruit flies Drosophila melanogaster.

    PubMed

    Kannan, Nisha N; Vaze, Koustubh M; Sharma, Vijay Kumar

    2012-10-15

    Although circadian clocks are believed to have evolved under the action of periodic selection pressures (selection on phasing) present in the geophysical environment, there is very little rigorous and systematic empirical evidence to support this. In the present study, we examined the effect of selection for adult emergence in a narrow window of time on the circadian rhythms of fruit flies Drosophila melanogaster. Selection was imposed in every generation by choosing flies that emerged during a 1 h window of time close to the emergence peak of baseline/control flies under 12 h:12 h light:dark cycles. To study the effect of selection on circadian clocks we estimated several quantifiable features that reflect inter- and intra-individual variance in adult emergence and locomotor activity rhythms. The results showed that with increasing generations, incidence of adult emergence and activity of adult flies during the 1 h selection window increased gradually in the selected populations. Flies from the selected populations were more homogenous in their clock period, were more coherent in their phase of entrainment, and displayed enhanced accuracy and precision in their emergence and activity rhythms compared with controls. These results thus suggest that circadian clocks in D. melanogaster evolve enhanced accuracy and precision when subjected to selection for emergence in a narrow window of time.
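
    Accuracy and precision of a rhythm are commonly quantified from cycle-to-cycle phase data: accuracy as the deviation of the mean phase from a reference, precision as the day-to-day variability. A sketch with hypothetical emergence-peak times (hours after lights-on), not the study's data:

```python
import statistics

def accuracy_precision(phases, target):
    """Accuracy: absolute deviation of the mean phase from the target.
    Precision: cycle-to-cycle standard deviation (lower = more precise)."""
    acc = abs(statistics.mean(phases) - target)
    prec = statistics.stdev(phases)
    return acc, prec

phases = [6.1, 5.9, 6.3, 6.0, 5.8]   # hypothetical daily emergence peaks (h)
print(accuracy_precision(phases, target=6.0))
```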

  10. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset, Thompson et al. [JGR, 108, 8238, 2003] pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline by two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  11. TanDEM-X IDEM precision and accuracy assessment based on a large assembly of differential GNSS measurements in Kruger National Park, South Africa

    NASA Astrophysics Data System (ADS)

    Baade, J.; Schmullius, C.

    2016-09-01

    High resolution Digital Elevation Models (DEM) represent fundamental data for a wide range of Earth surface process studies. Over the past years, the German TanDEM-X mission acquired data for a new, truly global Digital Elevation Model with unprecedented geometric resolution, precision and accuracy. First TanDEM Intermediate Digital Elevation Models (i.e. IDEM) with a geometric resolution from 0.4 to 3 arcsec have been made available for scientific purposes in November 2014. This includes four 1° × 1° tiles covering the Kruger National Park in South Africa. Here, we document the results of a local scale IDEM height accuracy validation exercise utilizing over 10,000 RTK-GNSS-based ground survey points from fourteen sites characterized by mainly pristine Savanna vegetation. The vertical precision of the ground checkpoints is 0.02 m (1σ). Selected precursor data sets (SRTMGL1, SRTM41, ASTER-GDEM2) are included in the analysis to facilitate the comparison. Although IDEM represents an intermediate product on the way to the new global TanDEM-X DEM, expected to be released in late 2016, it allows first insight into the properties of the forthcoming product. Remarkably, the TanDEM-X tiles include a number of auxiliary files providing detailed information pertinent to a user-based quality assessment. We present examples for the utilization of this information in the framework of a local scale study including the identification of height readings contaminated by water. Furthermore, this study provides evidence for the high precision and accuracy of IDEM height readings and the sensitivity to canopy cover. For open terrain, the 0.4 arcsec resolution edition (IDEM04) yields an average bias of 0.20 ± 0.05 m (95% confidence interval, CI95), a RMSE = 1.03 m and an absolute vertical height error (LE90) of 1.5 [1.4, 1.7] m (CI95). The corresponding values for the lower resolution IDEM editions are about the same and provide evidence for the high quality of the IDEM products.
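
    The quality figures used above (bias, RMSE, LE90) all derive from the per-checkpoint height differences, DEM minus GNSS; a sketch with invented differences (LE90 is taken here as the simple 90th-percentile absolute error, a common simplification of the formal definition):

```python
import math

def dem_stats(diffs):
    """Bias, RMSE and LE90 of DEM-minus-checkpoint height differences."""
    n = len(diffs)
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    le90 = sorted(abs(d) for d in diffs)[int(0.9 * n) - 1]  # 90th-percentile abs error
    return bias, rmse, le90

# Hypothetical DEM-minus-GNSS height differences (m)
diffs = [0.3, -0.1, 0.5, 0.2, -0.4, 0.1, 0.6, -0.2, 0.4, 0.0]
print(dem_stats(diffs))
```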

  12. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we aim to analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied to the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests have been repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted, based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is

  13. Deployment of precise and robust sensors on board ISS-for scientific experiments and for operation of the station.

    PubMed

    Stenzel, Christian

    2016-09-01

    The International Space Station (ISS) is the largest technical vehicle ever built by mankind. It provides a living area for six astronauts and also represents a laboratory in which scientific experiments are conducted in an extraordinary environment. The deployed sensor technology contributes significantly to the operational and scientific success of the station. The sensors on board the ISS can thereby be classified into two categories which differ significantly in their key features: (1) sensors related to crew and station health, and (2) sensors to provide specific measurements in research facilities. The operation of the station requires robust, long-term stable and reliable sensors, since they assure the survival of the astronauts and the intactness of the station. Recently, a wireless sensor network for measuring environmental parameters like temperature, pressure, and humidity was established and its function could be successfully verified over several months. Such a network enhances the operational reliability and stability for monitoring these critical parameters compared to single sensors. The sensors implemented in the research facilities have to fulfil other objectives. The high performance of the scientific experiments conducted in the different on-board research facilities demands the perfect embedding of the sensor in the respective instrumental setup, which forms the complete measurement chain. It is shown that the performance of the single sensor alone does not determine the success of the measurement task; moreover, the synergy between different sensors and actuators, as well as appropriate sample taking followed by appropriate sample preparation, plays an essential role. The application in a space environment adds additional challenges to the sensor technology, for example the necessity for miniaturisation, automation, reliability, and long-term operation. An alternative is the repetitive calibration of the sensors. This approach

  15. The science of and advanced technology for cost-effective manufacture of high precision engineering products. Volume 4. Thermal effects on the accuracy of numerically controlled machine tool

    NASA Astrophysics Data System (ADS)

    Venugopal, R.; Barash, M. M.; Liu, C. R.

    1985-10-01

    Thermal effects on the accuracy of numerically controlled machine tools are especially important in the context of unmanned manufacture or under conditions of precision metal cutting. Removal of the operator from the direct control of the metal cutting process has created problems in terms of maintaining accuracy. The objective of this research is to study thermal effects on the accuracy of numerically controlled machine tools. The initial part of the research report is concerned with the analysis of a hypothetical machine. The thermal characteristics of this machine are studied. Numerical methods for evaluating the errors exhibited by the slides of the machine are proposed and the possibility of predicting thermally induced errors by the use of regression equations is investigated. A method for computing the workspace error is also presented. The final part is concerned with the actual measurement of errors on a modern CNC machining center. Thermal influences on the errors are the main focus of the experimental work. Thermal influences on the errors of machine tools are predictable. Techniques for determining thermal effects on machine tools at a design stage are also presented. Keywords: Error models and prediction; Metrology; Automation.
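
    The regression-based prediction of thermally induced errors mentioned above can be sketched as an ordinary least-squares fit of measured positioning error against a temperature reading (the data and coefficients below are invented, not the report's measurements):

```python
def fit_line(temps, errors):
    """Ordinary least-squares fit of error = a + b * T."""
    n = len(temps)
    mt = sum(temps) / n
    me = sum(errors) / n
    b = sum((t - mt) * (e - me) for t, e in zip(temps, errors)) / \
        sum((t - mt) ** 2 for t in temps)
    a = me - b * mt
    return a, b

temps  = [20.0, 24.0, 28.0, 32.0, 36.0]   # hypothetical spindle temperature (deg C)
errors = [0.0, 4.1, 8.2, 11.9, 16.0]      # hypothetical positioning error (micrometres)
a, b = fit_line(temps, errors)
print(round(a, 2), round(b, 2))           # predicted error at temperature T is a + b*T
```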

  16. Ultra-Precision Measurement and Control of Angle Motion in Piezo-Based Platforms Using Strain Gauge Sensors and a Robust Composite Controller

    PubMed Central

    Liu, Lei; Bai, Yu-Guang; Zhang, Da-Li; Wu, Zhi-Gang

    2013-01-01

    The measurement and control strategy of a piezo-based platform by using strain gauge sensors (SGS) and a robust composite controller is investigated in this paper. First, the experimental setup is constructed by using a piezo-based platform, SGS sensors, an AD5435 platform and two voltage amplifiers. Then, the measurement strategy to measure the tip/tilt angles accurately in the order of sub-μrad is presented. A comprehensive composite control strategy design to enhance the tracking accuracy with a novel driving principle is also proposed. Finally, an experiment is presented to validate the measurement and control strategy. The experimental results demonstrate that the proposed measurement and control strategy provides accurate angle motion with a root mean square (RMS) error of 0.21 μrad, which is approximately equal to the noise level. PMID:23860316

  18. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue-equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons in the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1% for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map errors of less than 0.5% are observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  19. Measuring the bias, precision, accuracy, and validity of self-reported height and weight in assessing overweight and obesity status among adolescents using a surveillance system

    PubMed Central

    2015-01-01

Background Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. Methods A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000–2002 and 2004–2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin’s concordance correlation coefficient was used. BMI was estimated for self-reported and objective measures. The prevalence of students’ weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations were assessed by sex and grade. Results Students in 8th and 11th grade overestimated their height from 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight from 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m2 (White boys) to -0.7 kg/m2 (African-American girls). The sensitivity of self-reported measures to classify weight
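
The correction-equation idea described above can be sketched numerically. The snippet below fits an ordinary (unweighted) least-squares line mapping self-reported height to objective height on synthetic data; the actual SPAN study used weighted regression with survey weights, and all numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration (not SPAN data): self-reports overestimate
# true height by ~1.5 cm on average, with reporting noise.
true_height = rng.normal(165.0, 8.0, 500)          # cm, "objective"
self_report = true_height + 1.5 + rng.normal(0, 2.0, 500)

# Fit objective = a + b * self-reported. The study used *weighted*
# linear regression; an unweighted fit is shown for simplicity.
b, a = np.polyfit(self_report, true_height, 1)
corrected = a + b * self_report

bias_before = np.mean(self_report - true_height)   # positive: overestimate
bias_after = np.mean(corrected - true_height)      # ~0 by construction
```

Applying the fitted equation to new self-reports would then yield the corrected heights used to recompute BMI and weight status.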

  20. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  1. An experimental analysis of accuracy and precision of a high-speed strain-gage system based on the direct-resistance method

    NASA Astrophysics Data System (ADS)

    Cappa, P.; del Prete, Z.

    1992-03-01

An experimental study was carried out on the relative merits of using a high-speed digital-acquisition system to measure strain-gage resistance directly, rather than using a conventional Wheatstone bridge. Strain gages with nominal resistances of 120 ohm and 1 kohm were simulated with precision resistors, and the output signals were acquired over periods of 48 and 144 hours; furthermore, the effects of statistical filtering on metrological performance were evaluated. The results show that statistical filtering considerably improves the strain-gage-resistance readings. On the other hand, such a procedure necessarily reduces the acquisition rate, and therefore the dynamic data-collecting capability. In any case, the intrinsic resolution of the 12-bit A/D converter utilized in the present experimental analysis limits the measurement accuracy to the range of a few hundred μm/m.

  2. High-precision, high-accuracy ultralong-range swept-source optical coherence tomography using vertical cavity surface emitting laser light source.

    PubMed

    Grulkowski, Ireneusz; Liu, Jonathan J; Potsaid, Benjamin; Jayaraman, Vijaysekhar; Jiang, James; Fujimoto, James G; Cable, Alex E

    2013-03-01

    We demonstrate ultralong-range swept-source optical coherence tomography (OCT) imaging using vertical cavity surface emitting laser technology. The ability to adjust laser parameters and high-speed acquisition enables imaging ranges from a few centimeters up to meters using the same instrument. We discuss the challenges of long-range OCT imaging. In vivo human-eye imaging and optical component characterization are presented. The precision and accuracy of OCT-based measurements are assessed and are important for ocular biometry and reproducible intraocular distance measurement before cataract surgery. Additionally, meter-range measurement of fiber length and multicentimeter-range imaging are reported. 3D visualization supports a class of industrial imaging applications of OCT.

  3. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ± 0.3‰ for chalcopyrite and pyrrhotite, and ± 0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ± 0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (± 1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values, to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain to grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite occur when the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented, thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical

  4. Improving Precision and Accuracy of Isotope Ratios from Short Transient Laser Ablation-Multicollector-Inductively Coupled Plasma Mass Spectrometry Signals: Application to Micrometer-Size Uranium Particles.

    PubMed

    Claverie, Fanny; Hubert, Amélie; Berail, Sylvain; Donard, Ariane; Pointurier, Fabien; Pécheyran, Christophe

    2016-04-19

    The isotope drift encountered on short transient signals measured by multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) is related to differences in detector time responses. Faraday to Faraday and Faraday to ion counter time lags were determined and corrected using VBA data processing based on the synchronization of the isotope signals. The coefficient of determination of the linear fit between the two isotopes was selected as the best criterion to obtain accurate detector time lag. The procedure was applied to the analysis by laser ablation-MC-ICPMS of micrometer sized uranium particles (1-3.5 μm). Linear regression slope (LRS) (one isotope plotted over the other), point-by-point, and integration methods were tested to calculate the (235)U/(238)U and (234)U/(238)U ratios. Relative internal precisions of 0.86 to 1.7% and 1.2 to 2.4% were obtained for (235)U/(238)U and (234)U/(238)U, respectively, using LRS calculation, time lag, and mass bias corrections. A relative external precision of 2.1% was obtained for (235)U/(238)U ratios with good accuracy (relative difference with respect to the reference value below 1%). PMID:27031645
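
The linear regression slope (LRS) calculation mentioned above can be illustrated on synthetic data: both isotope channels share one transient shape with a fixed true ratio, and the ratio is recovered as the slope of one signal plotted against the other. All signal values here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic short transient: both isotopes share one (arbitrary) peak
# shape with a fixed true ratio; detector noise is added.
t = np.linspace(0.0, 5.0, 200)
shape = np.exp(-((t - 2.0) ** 2) / 0.5)
true_ratio = 0.0072                        # e.g. a 235U/238U-like ratio
sig_238 = 1.0e6 * shape + rng.normal(0, 50.0, t.size)
sig_235 = true_ratio * 1.0e6 * shape + rng.normal(0, 50.0, t.size)

# LRS method: regress one isotope signal on the other, point by point;
# the slope estimates the isotope ratio and, unlike a point-by-point
# ratio, does not diverge where the signal is small. This presumes the
# detector time lags have already been synchronized as described above.
slope, intercept = np.polyfit(sig_238, sig_235, 1)
```

Mass bias and detector time-lag corrections would still be applied before and after this step, as in the paper's workflow.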

  5. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of the analyte element (Hg) and the internal standard (Tl)-used for mass discrimination correction purposes-and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN nor with CVG was any evidence of mass-independent discrimination effects in the instrument observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified to mass-independent fractionation affecting the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
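
The internal-standard step of the double correction described above follows the exponential ("Russell") law: the fractionation exponent is derived from the Tl ratio and applied to the Hg ratio. A minimal sketch, with placeholder ratio values that are not the paper's data:

```python
import math

# Exponential ("Russell") law with Tl as internal standard.
m_hg202, m_hg198 = 201.970643, 197.966769   # atomic masses (u)
m_tl205, m_tl203 = 204.974428, 202.972344

r_tl_true = 2.38714        # accepted 205Tl/203Tl
r_tl_meas = 2.4200         # measured in the same run (illustrative)
r_hg_meas = 3.0200         # measured 202Hg/198Hg (illustrative)

# Fractionation exponent from the internal standard (assumes Hg and Tl
# experience the same exponent, the core assumption of this approach).
beta = math.log(r_tl_true / r_tl_meas) / math.log(m_tl205 / m_tl203)

# Mass-bias-corrected Hg ratio; in the paper this step is followed by
# sample-standard bracketing (SSB), as in the Baxter approach.
r_hg_corr = r_hg_meas * (m_hg202 / m_hg198) ** beta
```

The Baxter refinement replaces the fixed accepted Tl ratio with a session-wise regression between Hg and Tl ratios; the exponential-law algebra above is the common core.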

  6. A robust method for high-precision quantification of the complex three-dimensional vasculatures acquired by X-ray microtomography.

    PubMed

    Tan, Hai; Wang, Dadong; Li, Rongxin; Sun, Changming; Lagerstrom, Ryan; He, You; Xue, Yanling; Xiao, Tiqiao

    2016-09-01

The quantification of micro-vasculatures is important for the analysis of angiogenesis, on which the detection of tumor growth or hepatic fibrosis depends. Synchrotron-based X-ray computed micro-tomography (SR-µCT) allows rapid acquisition of micro-vasculature images at micrometer-scale spatial resolution. Through skeletonization, statistical features of the micro-vasculature can be extracted from its skeleton. Thinning is a widely used algorithm for producing the vascular skeleton in medical research. Existing three-dimensional thinning methods normally emphasize the preservation of topological structure rather than geometrical features in generating the skeleton of a volumetric object. This results in three problems and limits the accuracy of the quantitative results related to the geometrical structure of the vasculature. The problems include the excessively shortened length of elongated objects, eliminated branches of the blood vessel tree structure, and numerous noisy spurious branches. The inaccuracy of the skeleton directly introduces errors into the quantitative analysis, especially in the parameters concerning vascular length and the counts of vessel segments and branching points. In this paper, a robust method using a consolidated end-point constraint for thinning, which generates geometry-preserving skeletons in addition to maintaining the topology of the vasculature, is presented. The improved skeleton can be used to produce more accurate quantitative results. Experimental results from high-resolution SR-µCT images show that the end-point constraint produced by the proposed method can significantly improve the accuracy of the skeleton obtained using the existing ITK three-dimensional thinning filter. The produced skeleton has laid the groundwork for accurate quantification of the angiogenesis. This is critical for the early detection of tumors and assessing anti-angiogenesis treatments. PMID:27577778

  7. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
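
The erosion/dilation part of the VOI study can be illustrated with a toy phantom: a uniform spherical "organ" on a voxel grid with low background activity, and VOIs one voxel too tight or too loose. The geometry and activity values below are invented and much simpler than the NCAT simulations described above.

```python
import numpy as np

# Toy voxel phantom: uniform spherical organ (activity 1.0) in a
# low-activity background (0.1), centred in a 64^3 grid.
n = 64
z, y, x = np.indices((n, n, n))
r = np.sqrt((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2)
activity = np.where(r <= 10, 1.0, 0.1)

def voi_activity(radius):
    """Summed activity inside a spherical VOI of the given radius."""
    return activity[r <= radius].sum()

true_act = voi_activity(10)    # correctly defined VOI
eroded = voi_activity(9)       # segmentation one voxel too tight
dilated = voi_activity(11)     # segmentation one voxel too loose

err_eroded = (eroded - true_act) / true_act * 100    # percent
err_dilated = (dilated - true_act) / true_act * 100
```

Erosion drops organ voxels (negative error), while dilation pulls in background counts (positive error); the full study quantifies these effects with realistic anatomy, control-point perturbations and projection shifts.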

  8. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods.

    PubMed

    He, Bin; Frey, Eric C

    2010-06-21

Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed (111)In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were

  9. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients for the new method ranged from 0.743 to 0.945, and Levene's test showed significantly lower variance in the data generated by the new method; together these results verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method.

  10. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA was then compared to the BMC data from micro-CT to assess accuracy. Bone mineral density (BMD) intra-class correlation coefficients for the new method ranged from 0.743 to 0.945, and Levene's test showed significantly lower variance in the data generated by the new method; together these results verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC for all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416

  11. Robustness of thermal error compensation model of CNC machine tool

    NASA Astrophysics Data System (ADS)

    Lang, Xianli; Miao, Enming; Gong, Yayun; Niu, Pengcheng; Xu, Zhishang

    2013-01-01

Thermal error is the major factor restricting the accuracy of CNC machining, and modeling accuracy is the key to the thermal error compensation that makes precision machining on CNC machine tools possible. Traditional thermal error compensation models mostly focus on fitting accuracy without considering the robustness of the models, which makes the research results difficult to put into practice. In this paper, a model-robustness experiment is carried out at different spindle speeds on a Leaderway V-450 machine tool. Temperature-sensitive points of thermal error are selected by combining fuzzy clustering with grey relational analysis. A multiple linear regression (MLR) model and a distributed lag (DL) model are established from the multi-batch experimental data, and a robustness analysis is then given that demonstrates the difference between fitting precision and prediction precision in engineering applications and provides a reference method for choosing a thermal error compensation model for CNC machine tools in practical engineering.
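
The fitting-versus-prediction distinction above can be sketched numerically: fit an MLR thermal-error model on one batch and evaluate it on a second batch. The temperature points, coefficients and noise level below are hypothetical, not taken from the Leaderway V-450 experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic "batches" of thermal-error data; t1 and t2 stand for
# two temperature-sensitive points (hypothetical values).
def make_batch(n, noise=0.5):
    t1 = 20.0 + 10.0 * rng.random(n)          # deg C
    t2 = 20.0 + 8.0 * rng.random(n)
    err = 1.5 * (t1 - 20.0) + 0.8 * (t2 - 20.0) + rng.normal(0, noise, n)
    X = np.column_stack([np.ones(n), t1 - 20.0, t2 - 20.0])
    return X, err                              # err in micrometres

X_fit, y_fit = make_batch(100)
X_new, y_new = make_batch(100)                 # later batch, same rig

# Multiple linear regression (MLR) fitted on the first batch only.
coef, *_ = np.linalg.lstsq(X_fit, y_fit, rcond=None)

# Fitting precision (same batch) vs. prediction precision (new batch).
rms_fit = np.sqrt(np.mean((X_fit @ coef - y_fit) ** 2))
rms_pred = np.sqrt(np.mean((X_new @ coef - y_new) ** 2))
```

A robust model is one for which `rms_pred` stays close to `rms_fit` across batches and spindle speeds; a model tuned only for fitting accuracy can degrade sharply on new data.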

  12. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of the SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of the SRM sensor response as well as an understanding of the uncertainties associated with the SRM measurement system. In this paper, we use the SRM at Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with the SRM measurement due to track positioning errors and test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without placing a u-channel sample on the SRM tray. The acquired KCC SRM sensor response shows a significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without a u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm, possibly due to patterns or damage on the tray surface or the rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that although the SRM tracking system has recognizable positioning uncertainties, they do not significantly debilitate the use of deconvolution to accurately restore high
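
The smoothing-and-restoration problem above can be sketched with a one-dimensional toy model: a boxcar magnetized interval convolved with a Gaussian sensor response of roughly the FWHM quoted above, then deconvolved with a damped (Tikhonov-style) frequency-domain filter. This is an illustrative stand-in, not the estimation procedure of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic pass-through record: a 100 mm magnetized interval smoothed
# by a Gaussian sensor response with ~50 mm FWHM, plus noise.
n = 400                                   # measurement points, 1 mm apart
x = np.arange(n, dtype=float)
true_mag = ((x > 150) & (x < 250)).astype(float)

sigma = 50.0 / 2.355                      # FWHM -> Gaussian sigma
resp = np.exp(-((x - n // 2) ** 2) / (2 * sigma ** 2))
resp /= resp.sum()                        # unit-area sensor response

H = np.fft.fft(np.fft.ifftshift(resp))    # response centred at index 0
measured = np.real(np.fft.ifft(np.fft.fft(true_mag) * H))
measured += rng.normal(0.0, 0.01, n)

# Damped deconvolution; lam trades sharpness against noise
# amplification and would be tuned to the measurement uncertainties.
lam = 1e-3
M = np.fft.fft(measured)
restored = np.real(np.fft.ifft(M * np.conj(H) / (np.abs(H) ** 2 + lam)))

rms_measured = np.sqrt(np.mean((measured - true_mag) ** 2))
rms_restored = np.sqrt(np.mean((restored - true_mag) ** 2))
```

The restored record recovers the boxcar edges far better than the raw measurement, which is why an accurate sensor-response estimate (and bounded positioning error) matters so much for pass-through data.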

  13. Tephrochronology of last termination sequences in Europe: a protocol for improved analytical precision and robust correlation procedures (a joint SCOTAV-INTIMATE proposal)

    NASA Astrophysics Data System (ADS)

    Turney, Chris S. M.; Lowe, J. John; Davies, Siwan M.; Hall, Valerie; Lowe, David J.; Wastegård, Stefan; Hoek, Wim Z.; Alloway, Brent

    2004-02-01

The precise sequence of events during the Last Termination (18 000-9000 14C yr BP), and the extent to which major environmental changes were synchronous, are difficult to establish using the radiocarbon method alone because of serious distortions of the radiocarbon time-scale, as well as the influences of site-specific errors that can affect the materials dated. Attention has therefore turned to other methods that can provide independent tests of the chronology and correlation of events during the Last Termination. With emphasis on European sequences, we summarise here the potential of tephrostratigraphy and tephrochronology to fulfil this role. Recent advances in the detection and analysis of hidden tephra layers (cryptotephra) indicate that some tephras of Last Termination age are much more widespread in Europe than appreciated hitherto, and a number of new tephra deposits have also been identified. There is much potential for developing an integrated tephrochronological framework for Europe, which can help to underpin the overall chronology of events during the Last Termination. For that potential to be realised, however, there needs to be a more systematic and robust analysis of tephra layers than has been the practice in the past. We propose a protocol for improving analytical and reporting procedures, as well as the establishment of a centralised database of the results, which will provide an important geochronological tool to support a diverse range of stratigraphical studies, including opportunities to reassess volcanic hazards. Although aimed primarily at Europe, the protocol proposed here is of equal relevance to other regions and periods of interest.

  14. Improved Accuracy and Precision in LA-ICP-MS U-Th/Pb Dating of Zircon through the Reduction of Crystallinity Related Bias

    NASA Astrophysics Data System (ADS)

    Matthews, W.; McDonald, A.; Hamilton, B.; Guest, B.

    2015-12-01

The accuracy of zircon U-Th/Pb ages generated by LA-ICP-MS is limited by systematic bias resulting from differences between the crystallinity of the primary reference and that of the unknowns being analyzed. In general, the use of a highly crystalline primary reference will tend to bias analyses of materials of lesser crystallinity toward older ages. When dating igneous rocks, bias can be minimized by matching the crystallinity of the primary reference to that of the unknowns. However, the crystallinity of the unknowns is often not well constrained prior to ablation, as it is a function of U and Th concentration, crystallization age, and thermal history. Likewise, selecting an appropriate primary reference is impossible when dating detrital rocks, where zircons with differing ages, protoliths, and thermal histories are analyzed in the same session. We investigate the causes of systematic bias using Raman spectroscopy and measurements of the ablated pit geometry. The crystallinity of five zircon reference materials with ages between 28.2 Ma and 2674 Ma was estimated using Raman spectroscopy. The zircon references varied from highly crystalline to highly metamict, with individual reference materials plotting as distinct clusters in peak wavelength versus Full-Width Half-Maximum (FWHM) space. A strong positive correlation (R2=0.69) was found between the FWHM of the band at ~1000 cm-1 in the Raman spectrum of the zircon and its ablation rate, suggesting the degree of crystallinity is a primary control on ablation rate in zircons. A moderate positive correlation (R2=0.37) was found between ablation rate and the difference between the age determined by LA-ICP-MS and the accepted ID-TIMS age (ΔAge). We use the measured, intra-sessional relationship between ablation rate and ΔAge of secondary references to reduce systematic bias. Rapid, high-precision measurement of ablated pit geometries using an optical profilometer and custom MatLab algorithm facilitates the implementation
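
The intra-session correction idea above reduces to a simple regression: fit ΔAge against ablation rate for the secondary references, then subtract the predicted bias from each unknown. The numbers below are invented purely to illustrate the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical intra-session data: age offsets (ΔAge, Ma) of secondary
# reference analyses versus their measured ablation rates.
rate_ref = np.array([0.08, 0.10, 0.12, 0.15, 0.18])     # µm per pulse
dage_ref = 30.0 * (rate_ref - 0.08) + rng.normal(0, 0.3, 5)

# Regress ΔAge on ablation rate within the session...
slope, intercept = np.polyfit(rate_ref, dage_ref, 1)

# ...then subtract the predicted crystallinity-related bias from an
# unknown, using its own measured ablation rate (pit geometry).
rate_unknown = 0.14
age_measured = 101.2                                     # Ma, biased
age_corrected = age_measured - (slope * rate_unknown + intercept)
```

Because faster-ablating (more metamict) grains are biased toward older ages in this toy setup, the correction pulls the unknown's age down by the regression-predicted amount.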

  15. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%-2.7% for OSL and 0.08% ± 0.06%, range = 0.02%-0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.

  16. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    PubMed Central

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-01-01

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landaur, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air, a 16-cm (head), or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over 28 datasets containing five measurements each was 1.4% ± 0.6%, range = 0.6%–2.7% for OSL and 0.08% ± 0.06%, range = 0.02%–0.3% for ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber 21/28 times (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency of OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile. PMID:23127052
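    The RMS percent difference used above to compare the two dosimetry methods can be sketched in a few lines of Python; the OSL and ion-chamber readings below are hypothetical illustrations, not the study's data:

```python
import math

def rms_percent_difference(test, reference):
    """Root-mean-square of the percent differences between paired readings."""
    pct = [100.0 * (t - r) / r for t, r in zip(test, reference)]
    return math.sqrt(sum(p * p for p in pct) / len(pct))

# Hypothetical OSL vs. ion-chamber CTDI100 readings (mGy)
osl = [10.2, 22.8, 8.1]
chamber = [11.5, 24.0, 9.0]
print(round(rms_percent_difference(osl, chamber), 1))
```

    Because each difference is squared, the metric weights large disagreements heavily, which is why a few poor in-air comparisons can dominate the overall figure.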

  17. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  18. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  19. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many

  20. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
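    The kind of Monte Carlo power analysis described above can be sketched as follows: simulate noisy log-abundance surveys with a known decline and count how often a one-sided trend test detects it. All parameter values here (initial abundance, observation CV, critical t-value) are illustrative assumptions, not taken from the study; note that a 25% total decline over 15 years corresponds to roughly 1.9% per year.

```python
import math
import random

def detect_decline_power(n_years=15, annual_decline=0.019, cv=0.25,
                         n_sims=1000, t_crit=1.76, seed=1):
    """Fraction of simulated surveys in which a one-sided t-test on the
    slope of log-abundance detects a decline (statistical power)."""
    rng = random.Random(seed)
    years = list(range(n_years + 1))
    n = len(years)
    xbar = sum(years) / n
    sxx = sum((x - xbar) ** 2 for x in years)
    sigma = math.sqrt(math.log(1.0 + cv ** 2))  # lognormal observation error
    detections = 0
    for _ in range(n_sims):
        # Simulated log-counts around a true exponential decline
        y = [math.log(1000.0 * (1.0 - annual_decline) ** t)
             + rng.gauss(0.0, sigma) for t in years]
        ybar = sum(y) / n
        slope = sum((x - xbar) * (yi - ybar) for x, yi in zip(years, y)) / sxx
        resid = [yi - ybar - slope * (x - xbar) for x, yi in zip(years, y)]
        se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
        if slope / se < -t_crit:  # one-sided test for a negative trend
            detections += 1
    return detections / n_sims

print(detect_decline_power(n_sims=500))
```

    Raising the observation CV (i.e., sampling a smaller fraction of the study site) sharply reduces the power returned, which mirrors the study's finding that detecting a 25% decline requires high sampling effort.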

  1. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  2. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-1990; Water-resources investigations report 93-4025

    SciTech Connect

    Boucher, M.S.

    1994-05-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the US Department of Energy's Yucca Mountain Project, which is an evaluation of the area to determine its suitability as a potential storage area for high-level nuclear waste. Water-level measurements were taken either manually, using various water-level measuring equipment such as steel tapes, or they were taken continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90.

  3. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.

  4. Accuracy and precision of reconstruction of complex refractive index in near-field single-distance propagation-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Gureyev, Timur; Mohammadi, Sara; Nesterets, Yakov; Dullin, Christian; Tromba, Giuliana

    2013-10-01

    We investigate the quantitative accuracy and noise sensitivity of reconstruction of the 3D distribution of complex refractive index, n(r)=1-δ(r)+iβ(r), in samples containing materials with different refractive indices using propagation-based phase-contrast computed tomography (PB-CT). Our present study is limited to the case of parallel-beam geometry with monochromatic synchrotron radiation, but can be readily extended to cone-beam CT and partially coherent polychromatic X-rays at least in the case of weakly absorbing samples. We demonstrate that, except for regions near the interfaces between distinct materials, the distribution of imaginary part of the refractive index, β(r), can be accurately reconstructed from a single projection image per view angle using phase retrieval based on the so-called homogeneous version of the Transport of Intensity equation (TIE-Hom) in combination with conventional CT reconstruction. In contrast, the accuracy of reconstruction of δ(r) depends strongly on the choice of the "regularization" parameter in TIE-Hom. We demonstrate by means of an instructive example that for some multi-material samples, a direct application of the TIE-Hom method in PB-CT produces qualitatively incorrect results for δ(r), which can be rectified either by collecting additional projection images at each view angle, or by utilising suitable a priori information about the sample. As a separate observation, we also show that, in agreement with previous reports, it is possible to significantly improve signal-to-noise ratio by increasing the sample-to-detector distance in combination with TIE-Hom phase retrieval in PB-CT compared to conventional ("contact") CT, with the maximum achievable gain of the order of 0.3δ/β. This can lead to improved image quality and/or reduction of the X-ray dose delivered to patients in medical imaging.

  5. The accuracy and precision of a micro computer tomography volumetric measurement technique for the analysis of in-vitro tested total disc replacements.

    PubMed

    Vicars, R; Fisher, J; Hall, R M

    2009-04-01

    Total disc replacements (TDRs) in the spine have been clinically successful in the short term, but there are concerns over long-term failure due to wear, as seen in other joint replacements. Simulators have been used to investigate the wear of TDRs, but only gravimetric measurements have been used to assess material loss. Micro computer tomography (microCT) has been used for volumetric measurement of explanted components but has yet to be used for in-vitro studies, where wear is typically less than 20 mm³ per 10⁶ cycles. The aim of this study was to compare microCT volume measurements with gravimetric measurements and to assess whether microCT can quantify wear volumes of in-vitro tested TDRs. microCT measurements of TDR polyethylene cores were undertaken and the results compared with gravimetric assessments. The effects of repositioning, integration time, and scan resolution were investigated. The best volume measurement resolution was found to be ± 3 mm³, at least three orders of magnitude greater than those determined for gravimetric measurements. In conclusion, the microCT measurement technique is suitable for quantifying in-vitro TDR polyethylene wear volumes and can provide qualitative data (e.g. wear location), and also further quantitative data (e.g. height loss), assisting comparisons with in-vivo and ex-vivo data. It is best used alongside gravimetric measurements to maintain the high level of precision that these measurements provide.

  6. Leaf Vein Length per Unit Area Is Not Intrinsically Dependent on Image Magnification: Avoiding Measurement Artifacts for Accuracy and Precision

    PubMed Central

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-01-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems. PMID:25096977

  7. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  8. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station-to-station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  9. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentration estimates while retaining the greatest assay sensitivity.

  10. Re-Os geochronology of the El Salvador porphyry Cu-Mo deposit, Chile: Tracking analytical improvements in accuracy and precision over the past decade

    NASA Astrophysics Data System (ADS)

    Zimmerman, Aaron; Stein, Holly J.; Morgan, John W.; Markey, Richard J.; Watanabe, Yasushi

    2014-04-01

    deposit geochronology. The timing and duration of mineralization from Re-Os dating of ore minerals is more precise than estimates from previously reported 40Ar/39Ar and K-Ar ages on alteration minerals. The Re-Os results suggest that the mineralization is temporally distinct from pre-mineral rhyolite porphyry (42.63 ± 0.28 Ma) and is immediately prior to or overlapping with post-mineral latite dike emplacement (41.16 ± 0.48 Ma). Based on the Re-Os and other geochronologic data, the Middle Eocene intrusive activity in the El Salvador district is divided into three pulses: (1) 44-42.5 Ma for weakly mineralized porphyry intrusions, (2) 41.8-41.2 Ma for intensely mineralized porphyry intrusions, and (3) ∼41 Ma for small latite dike intrusions without major porphyry stocks. The orientation of igneous dikes and porphyry stocks changed from NNE-SSW during the first pulse to WNW-ESE for the second and third pulses. This implies that the WNW-ESE striking stress changed from σ3 (minimum principal compressive stress) during the first pulse to σHmax (maximum principal compressional stress in a horizontal plane) during the second and third pulses. Therefore, the focus of intense porphyry Cu-Mo mineralization occurred during a transient geodynamic reconfiguration just before extinction of major intrusive activity in the region.

  11. Acceptability, Precision and Accuracy of 3D Photonic Scanning for Measurement of Body Shape in a Multi-Ethnic Sample of Children Aged 5-11 Years: The SLIC Study

    PubMed Central

    Wells, Jonathan C. K.; Stocks, Janet; Bonner, Rachel; Raywood, Emma; Legg, Sarah; Lee, Simon; Treleaven, Philip; Lum, Sooky

    2015-01-01

    Background Information on body size and shape is used to interpret many aspects of physiology, including nutritional status, cardio-metabolic risk and lung function. Such data have traditionally been obtained through manual anthropometry, which becomes time-consuming when many measurements are required. 3D photonic scanning (3D-PS) of body surface topography represents an alternative digital technique, previously applied successfully in large studies of adults. The acceptability, precision and accuracy of 3D-PS in young children have not been assessed. Methods We attempted to obtain data on girth, width and depth of the chest and waist, and girth of the knee and calf, manually and by 3D-PS in a multi-ethnic sample of 1484 children aged 5–11 years. The rate of 3D-PS success, and reasons for failure, were documented. Precision and accuracy of 3D-PS were assessed relative to manual measurements using the methods of Bland and Altman. Results Manual measurements were successful in all cases. Although 97.4% of children agreed to undergo 3D-PS, successful scans were only obtained in 70.7% of these. Unsuccessful scans were primarily due to body movement, or inability of the software to extract shape outputs. The odds of scan failure, and the underlying reason, differed by age, size and ethnicity. 3D-PS measurements tended to be greater than those obtained manually (p < 0.05); however, ranking consistency was high (r2 > 0.90 for most outcomes). Conclusions 3D-PS is acceptable in children aged ≥5 years, though with current hardware/software and body-movement artefacts, approximately one third of scans may be unsuccessful. The technique had poorer technical success than manual measurements, and had poorer precision when the measurements were viable. Compared to manual measurements, 3D-PS showed modest average biases but acceptable limits of agreement for large surveys, and little evidence that bias varied substantially with size. Most of the issues we identified could be
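    The Bland-Altman method referenced above reduces to computing the mean of the paired differences (bias) and its 95% limits of agreement. A minimal sketch, using hypothetical girth values rather than the study's data:

```python
import statistics

def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical waist girths (cm): 3D-PS scan vs. manual anthropometry
scan = [55.2, 60.1, 58.3, 62.0]
manual = [54.8, 59.5, 58.0, 61.2]
bias, limits = bland_altman(scan, manual)
print(round(bias, 3), tuple(round(x, 3) for x in limits))
```

    A positive bias here would correspond to the study's observation that 3D-PS measurements tended to exceed manual ones, while the width of the limits of agreement captures the precision difference.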

  12. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
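    One common way such a system ratios volumes with the ideal gas law is isothermal expansion of low-pressure helium from the known volume into the evacuated unknown volume; the sketch below assumes that procedure (the specific KCP protocol is not described in the abstract) and uses hypothetical numbers:

```python
def unknown_volume(v_known, p_initial, p_final):
    """Isothermal expansion of gas from a known volume into an evacuated
    unknown volume: P1 * Vk = P2 * (Vk + Vu), so Vu = Vk * (P1 - P2) / P2.
    Pressures must share one unit; the result is in the unit of v_known.
    """
    return v_known * (p_initial - p_final) / p_final

# Hypothetical: helium in a 100 cc reference volume at 1000 Torr
# settles at 400 Torr after expansion into the unknown volume
print(unknown_volume(100.0, 1000.0, 400.0))  # -> 150.0
```

    Because the result depends only on a pressure ratio, precise pressure gauges and tight temperature control (both emphasized in the system description) dominate the achievable accuracy.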

  13. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, a SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of over two years (30+ analyses by each technique). We show that both digestion
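    The accuracy-versus-reproducibility comparison described above is typically reported as mean percent recovery against the certified value and percent relative standard deviation across sessions. A minimal sketch with hypothetical replicate values (not actual BHVO-2 results):

```python
import statistics

def accuracy_and_precision(measured, certified):
    """Accuracy as mean percent recovery against the certified value;
    external precision as percent relative standard deviation (RSD)."""
    mean = statistics.mean(measured)
    return 100.0 * mean / certified, 100.0 * statistics.stdev(measured) / mean

# Hypothetical replicate analyses of one element in BHVO-2 (wt%)
runs = [2.70, 2.75, 2.72, 2.74, 2.71]
recovery, rsd = accuracy_and_precision(runs, certified=2.73)
print(round(recovery, 1), round(rsd, 2))
```

    Recovery near 100% indicates accuracy; a small RSD over many sessions indicates good external reproducibility, the quantity the 30+ repeated SRM analyses are designed to estimate.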

  14. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy

    PubMed Central

    Akintola, Abimbola A.; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C.; van Heemst, Diana

    2016-01-01

    Background: The Equivital (EQ02) is a multi-parameter telemetric device offering real-time and/or retrospective, synchronized monitoring of ECG, HR, HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore, simultaneously for 24 h, the Holter and EQ02 monitors. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93–56.45%) and within (range 0.75–9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For the same three datasets, bias estimated by Bland-Altman analysis was −2.8, −1.0, and −0.8 beats per minute, and 24-h MARD was 7.08, 3.01, and 1.5, respectively. After selecting a 3-h stretch of data containing 1.15% artifacts, Pearson correlation was 0.786 for HRV measured as standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed
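    The MARD metric reported above is simply the mean of the absolute relative differences between paired readings, expressed as a percentage. A minimal sketch with hypothetical heart-rate values, not the study's data:

```python
def mard(test, reference):
    """Mean absolute relative difference (%) between paired device readings."""
    return 100.0 * sum(abs(t - r) / r
                       for t, r in zip(test, reference)) / len(test)

# Hypothetical 5-min averaged heart rates (bpm): EQ02 vs. Holter
eq02_hr = [61.0, 72.0, 80.0]
holter_hr = [60.0, 70.0, 84.0]
print(round(mard(eq02_hr, holter_hr), 2))
```

    Unlike the signed Bland-Altman bias, MARD cannot cancel positive against negative errors, which is why it shrinks so sharply (7.08 to 1.5) as artifact-heavy segments are excluded.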

  15. Using measurements of muscle color, pH, and electrical impedance to augment the current USDA beef quality grading standards and improve the accuracy and precision of sorting carcasses into palatability groups.

    PubMed

    Wulf, D M; Page, J K

    2000-10-01

    This research was conducted to determine whether objective measures of muscle color, muscle pH, and(or) electrical impedance are useful in segregating palatable beef from unpalatable beef, and to determine whether the current USDA quality grading standards for beef carcasses could be revised to improve their effectiveness at distinguishing palatable from unpalatable beef. One hundred beef carcasses were selected from packing plants in Texas, Illinois, and Ohio to represent the full range of muscle color observed in the U.S. beef carcass population. Steaks from these 100 carcasses were used to determine shear force on eight cooked beef muscles and taste panel ratings on three cooked beef muscles. It was discovered that the darkest-colored 20 to 25% of the beef carcasses sampled were less palatable and considerably less consistent than the other 75 to 80% sampled. Marbling score, by itself, explained 12% of the variation in beef palatability; hump height, by itself, explained 8% of the variation in beef palatability; measures of muscle color or pH, by themselves, explained 15 to 23% of the variation in beef palatability. When combined together, marbling score, hump height, and some measure of muscle color or pH explained 36 to 46% of the variation in beef palatability. Alternative quality grading systems were proposed to improve the accuracy and precision of sorting carcasses into palatability groups. The two proposed grading systems decreased palatability variation by 29% and 39%, respectively, within the Choice grade and decreased palatability variation by 37% and 12%, respectively, within the Select grade, when compared with current USDA standards. The percentage of unpalatable Choice carcasses was reduced from 14% under the current USDA grading standards to 4% and 1%, respectively, for the two proposed systems. 
The percentage of unpalatable Select carcasses was reduced from 36% under the current USDA standards to 7% and 29%, respectively, for the proposed systems.

  16. Relative Accuracy Evaluation

    PubMed Central

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. Because the accuracy of a whole data set may be low while that of a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which show the result's relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
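The idea of scoring a query result's precision and recall against a reference (correct) answer set can be sketched as follows. This is an illustrative reading of the abstract, not the paper's actual algorithm:

```python
def precision_recall(result, reference):
    """Precision and recall of a query result against a reference answer set.

    Tuples present in both sets are true positives; precision penalizes
    spurious tuples in the result, recall penalizes missing ones.
    """
    result, reference = set(result), set(reference)
    tp = len(result & reference)
    precision = tp / len(result) if result else 1.0
    recall = tp / len(reference) if reference else 1.0
    return precision, recall

# A query over a partly inaccurate table is scored only on the tuples it
# touches -- its "relative" accuracy (toy data):
p, r = precision_recall(result={("alice", 30), ("bob", 25)},
                        reference={("alice", 30), ("bob", 26)})
print(p, r)  # 0.5 0.5
```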

  17. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  18. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  19. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

    Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices, which were installed on the floor and ceiling of the treatment room. This system can determine the positioning error, in the form of three translations and three rotations, from the registration of two X-ray images acquired online with the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences in the results between the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O x-ray image-guided positioning system and the OBI CBCT system showed general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09mm, 0.03±0.25mm, and 0.04±0.31mm, respectively. The maximum difference was only 0.51mm in all directions and the angular discrepancy was 0.3±0.5° between the two systems. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup-error correction were minimal, and there is general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.

  20. Application of AFINCH as a Tool for Evaluating the Effects of Streamflow-Gaging-Network Size and Composition on the Accuracy and Precision of Streamflow Estimates at Ungaged Locations in the Southeast Lake Michigan Hydrologic Subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971-2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size
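The random-subsampling scheme described above, drawing many random gage subsets per censoring level, can be sketched roughly as follows. The function name and seed are illustrative; the counts (75 gages, six levels, 30 subsets each) follow the study design:

```python
import random

def censored_subsets(gages, censor_frac, n_subsets, seed=0):
    """Draw n_subsets random gage subsets, each censoring ~censor_frac of the gages."""
    rng = random.Random(seed)
    n_keep = max(1, round(len(gages) * (1.0 - censor_frac)))
    return [sorted(rng.sample(gages, n_keep)) for _ in range(n_subsets)]

gages = [f"G{i:02d}" for i in range(75)]  # 75 gaging stations (invented IDs)
# Six censoring levels, 30 random subsets each, as in the study design:
levels = {f: censored_subsets(gages, f, 30, seed=1)
          for f in (0.10, 0.20, 0.30, 0.40, 0.50, 0.75)}
print(len(levels[0.50][0]))  # roughly half of the gages retained
```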

  1. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations include routine precision power balance to within 10% rms in the "foot" and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 µm rms. We have also added a "fail-safe chirp" system to avoid Stimulated Brillouin Scattering (SBS) in optical components during high energy shots.
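The quoted power-balance figures are rms deviations across beams, expressed as a percent of the mean. A minimal sketch of that statistic (the beam values are invented):

```python
def rms_percent(values):
    """Population rms deviation of values, as a percent of their mean."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return 100.0 * var ** 0.5 / mean

# Ten beam energies (kJ, invented) in the peak of a shaped pulse:
beams = [2.02, 1.98, 2.01, 1.99, 2.00, 2.03, 1.97, 2.00, 2.01, 1.99]
print(rms_percent(beams))  # well under a 5% rms specification
```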

  2. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections, and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may significantly enhance the effect of overlay mark asymmetry and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by a recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  3. State of the Field: Extreme Precision Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fischer, Debra A.; Anglada-Escude, Guillem; Arriagada, Pamela; Baluev, Roman V.; Bean, Jacob L.; Bouchy, Francois; Buchhave, Lars A.; Carroll, Thorsten; Chakraborty, Abhijit; Crepp, Justin R.; Dawson, Rebekah I.; Diddams, Scott A.; Dumusque, Xavier; Eastman, Jason D.; Endl, Michael; Figueira, Pedro; Ford, Eric B.; Foreman-Mackey, Daniel; Fournier, Paul; Fűrész, Gabor; Gaudi, B. Scott; Gregory, Philip C.; Grundahl, Frank; Hatzes, Artie P.; Hébrard, Guillaume; Herrero, Enrique; Hogg, David W.; Howard, Andrew W.; Johnson, John A.; Jorden, Paul; Jurgenson, Colby A.; Latham, David W.; Laughlin, Greg; Loredo, Thomas J.; Lovis, Christophe; Mahadevan, Suvrath; McCracken, Tyler M.; Pepe, Francesco; Perez, Mario; Phillips, David F.; Plavchan, Peter P.; Prato, Lisa; Quirrenbach, Andreas; Reiners, Ansgar; Robertson, Paul; Santos, Nuno C.; Sawyer, David; Segransan, Damien; Sozzetti, Alessandro; Steinmetz, Tilo; Szentgyorgyi, Andrew; Udry, Stéphane; Valenti, Jeff A.; Wang, Sharon X.; Wittenmyer, Robert A.; Wright, Jason T.

    2016-06-01

    The Second Workshop on Extreme Precision Radial Velocities defined circa 2015 the state of the art Doppler precision and identified the critical path challenges for reaching 10 cm s-1 measurement precision. The presentations and discussion of key issues for instrumentation and data analysis and the workshop recommendations for achieving this bold precision are summarized here. Beginning with the High Accuracy Radial Velocity Planet Searcher spectrograph, technological advances for precision radial velocity (RV) measurements have focused on building extremely stable instruments. To reach still higher precision, future spectrometers will need to improve upon the state of the art, producing even higher fidelity spectra. This should be possible with improved environmental control, greater stability in the illumination of the spectrometer optics, better detectors, more precise wavelength calibration, and broader bandwidth spectra. Key data analysis challenges for the precision RV community include distinguishing center of mass (COM) Keplerian motion from photospheric velocities (time correlated noise) and the proper treatment of telluric contamination. Success here is coupled to the instrument design, but also requires the implementation of robust statistical and modeling techniques. COM velocities produce Doppler shifts that affect every line identically, while photospheric velocities produce line profile asymmetries with wavelength and temporal dependencies that are different from Keplerian signals. Exoplanets are an important subfield of astronomy and there has been an impressive rate of discovery over the past two decades. However, higher precision RV measurements are required to serve as a discovery technique for potentially habitable worlds, to confirm and characterize detections from transit missions, and to provide mass measurements for other space-based missions. 
The future of exoplanet science has very different trajectories depending on the precision that can
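The statement above that center-of-mass velocities shift every line identically follows from the non-relativistic Doppler relation Δλ/λ = v/c: the fractional shift is independent of wavelength. A quick check (the wavelengths are invented):

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(wavelength_nm, v_mps):
    """Non-relativistic Doppler shift of one spectral line, in nm."""
    return wavelength_nm * v_mps / C

# A 10 cm/s center-of-mass velocity shifts every line by the same fraction:
lines_nm = [400.0, 550.0, 700.0]
fractions = [doppler_shift_nm(l, 0.10) / l for l in lines_nm]
print(fractions)  # identical fractional shifts, ~3.3e-10
```

This is why a COM signal is distinguishable in principle from photospheric noise, whose line-profile distortions vary with wavelength.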

  4. State of the Field: Extreme Precision Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fischer, Debra A.; Anglada-Escude, Guillem; Arriagada, Pamela; Baluev, Roman V.; Bean, Jacob L.; Bouchy, Francois; Buchhave, Lars A.; Carroll, Thorsten; Chakraborty, Abhijit; Crepp, Justin R.; Dawson, Rebekah I.; Diddams, Scott A.; Dumusque, Xavier; Eastman, Jason D.; Endl, Michael; Figueira, Pedro; Ford, Eric B.; Foreman-Mackey, Daniel; Fournier, Paul; Fűrész, Gabor; Gaudi, B. Scott; Gregory, Philip C.; Grundahl, Frank; Hatzes, Artie P.; Hébrard, Guillaume; Herrero, Enrique; Hogg, David W.; Howard, Andrew W.; Johnson, John A.; Jorden, Paul; Jurgenson, Colby A.; Latham, David W.; Laughlin, Greg; Loredo, Thomas J.; Lovis, Christophe; Mahadevan, Suvrath; McCracken, Tyler M.; Pepe, Francesco; Perez, Mario; Phillips, David F.; Plavchan, Peter P.; Prato, Lisa; Quirrenbach, Andreas; Reiners, Ansgar; Robertson, Paul; Santos, Nuno C.; Sawyer, David; Segransan, Damien; Sozzetti, Alessandro; Steinmetz, Tilo; Szentgyorgyi, Andrew; Udry, Stéphane; Valenti, Jeff A.; Wang, Sharon X.; Wittenmyer, Robert A.; Wright, Jason T.

    2016-06-01

    The Second Workshop on Extreme Precision Radial Velocities defined circa 2015 the state of the art Doppler precision and identified the critical path challenges for reaching 10 cm s‑1 measurement precision. The presentations and discussion of key issues for instrumentation and data analysis and the workshop recommendations for achieving this bold precision are summarized here. Beginning with the High Accuracy Radial Velocity Planet Searcher spectrograph, technological advances for precision radial velocity (RV) measurements have focused on building extremely stable instruments. To reach still higher precision, future spectrometers will need to improve upon the state of the art, producing even higher fidelity spectra. This should be possible with improved environmental control, greater stability in the illumination of the spectrometer optics, better detectors, more precise wavelength calibration, and broader bandwidth spectra. Key data analysis challenges for the precision RV community include distinguishing center of mass (COM) Keplerian motion from photospheric velocities (time correlated noise) and the proper treatment of telluric contamination. Success here is coupled to the instrument design, but also requires the implementation of robust statistical and modeling techniques. COM velocities produce Doppler shifts that affect every line identically, while photospheric velocities produce line profile asymmetries with wavelength and temporal dependencies that are different from Keplerian signals. Exoplanets are an important subfield of astronomy and there has been an impressive rate of discovery over the past two decades. However, higher precision RV measurements are required to serve as a discovery technique for potentially habitable worlds, to confirm and characterize detections from transit missions, and to provide mass measurements for other space-based missions. The future of exoplanet science has very different trajectories depending on the precision that

  5. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives, so ALOS carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  6. An improved robust hand-eye calibration for endoscopy navigation system

    NASA Astrophysics Data System (ADS)

    He, Wei; Kang, Kumsok; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2016-03-01

    Endoscopy is widely used in clinical applications, and a surgical navigation system is an extremely important way to enhance the safety of endoscopy. The key to improving the accuracy of the navigation system is to solve the positional relationship between camera and tracking marker precisely. The problem can be solved by a hand-eye calibration method based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain some incomplete motion samples. Those motions will make the algorithm unstable and inaccurate. An advanced selection rule for sample motions is proposed in this paper to improve the stability and accuracy of methods based on dual quaternions. By setting a motion filter to filter out the incomplete motion samples, a high-precision and robust result is finally achieved. The experimental results show that the accuracy and stability of camera registration have been effectively improved by selecting sample motion data automatically.
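A selection rule of the kind described, filtering out degenerate motion samples before hand-eye calibration, can be sketched by thresholding the rotation angle of each relative motion. The quaternion convention (w, x, y, z) and the threshold are assumptions for illustration, not taken from the paper:

```python
import math

def rotation_angle(q):
    """Rotation angle (radians) encoded by a quaternion (w, x, y, z)."""
    w, x, y, z = q
    n = math.sqrt(w * w + x * x + y * y + z * z)
    return 2.0 * math.acos(max(-1.0, min(1.0, abs(w) / n)))

def filter_motions(motion_quats, min_angle=0.1):
    """Keep only motion samples whose rotation exceeds min_angle radians;
    near-pure translations carry too little information for hand-eye
    calibration and destabilize the dual-quaternion solution."""
    return [q for q in motion_quats if rotation_angle(q) >= min_angle]

s = math.sin(math.pi / 4)
motions = [(1.0, 0.0, 0.0, 0.0),                  # no rotation: filtered out
           (math.cos(math.pi / 4), 0.0, 0.0, s)]  # 90 deg about z: kept
print(len(filter_motions(motions)))  # 1
```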

  7. Robust Regression.

    PubMed

    Huang, Dong; Cabral, Ricardo; De la Torre, Fernando

    2016-02-01

    Discriminative methods (e.g., kernel regression, SVM) have been extensively used to solve problems such as object recognition, image alignment and pose estimation from images. These methods typically map image features (X) to continuous (e.g., pose) or discrete (e.g., object category) values. A major drawback of existing discriminative methods is that samples are directly projected onto a subspace and hence fail to account for outliers common in realistic training sets due to occlusion, specular reflections or noise. It is important to note that existing discriminative approaches assume the input variables X to be noise free. Thus, discriminative methods experience significant performance degradation when gross outliers are present. Despite its obvious importance, the problem of robust discriminative learning has been relatively unexplored in computer vision. This paper develops the theory of robust regression (RR) and presents an effective convex approach that uses recent advances on rank minimization. The framework applies to a variety of problems in computer vision including robust linear discriminant analysis, regression with missing data, and multi-label classification. Several synthetic and real examples with applications to head pose estimation from images, image and video classification and facial attribute classification with missing data are used to illustrate the benefits of RR. PMID:26761740

  8. New multi-station and multi-decadal trend data on precipitable water. Recipe to match FTIR retrievals from NDACC long-time records to radio sondes within 1 mm accuracy/precision

    NASA Astrophysics Data System (ADS)

    Sussmann, R.; Borsdorff, T.; Rettinger, M.; Camy-Peyret, C.; Demoulin, P.; Duchatelet, P.; Mahieu, E.

    2009-04-01

    We present an original optimum strategy for the retrieval of precipitable water from routine ground-based mid-infrared FTS measurements performed at a number of globally distributed stations within the NDACC network. The strategy utilizes FTIR retrievals which are set up to match standard radio sonde operations. Thereby, an unprecedented accuracy and precision for measurements of precipitable water can be demonstrated: the correlation between Zugspitze FTIR water vapor columns from a 3-month measurement campaign and total columns derived from coincident radio sondes shows a regression coefficient of R = 0.988, a bias of 0.05 mm, a standard deviation of 0.28 mm, an intercept of 0.01 mm, and a slope of 1.01. This appears to be even better than what can be achieved with state-of-the-art microwave techniques, see, e.g., Morland et al. (2006, Fig. 9 therein). Our approach is based upon a careful selection of spectral micro windows, comprising a set of both weak and strong water vapor absorption lines between 839.4 - 840.6 cm-1, 849.0 - 850.2 cm-1, and 852.0 - 853.1 cm-1, which is not contaminated by interfering absorptions of any other trace gases. From existing spectroscopic line lists, a careful selection of the best available parameter set was performed, leading to nearly perfect spectral fits without significant forward model parameter errors. To set up the FTIR water vapor profile inversion, a set of FTIR measurements and coincident radio sondes has been utilized. To eliminate or minimize mismatch in time and space, the Tobin "best estimate of the state of the atmosphere" principle has been applied to the radio sondes. This concept uses pairs of radio sondes launched with a 1-hour separation and derives the gradient from the two radio sonde measurements in order to construct a virtual PTU profile for a certain time and location. Coincident FTIR measurements of water vapor columns (two-hour mean values) have then been matched to the water columns obtained by

  9. Robust and intelligent bearing estimation

    SciTech Connect

    Claassen, J.P.

    1998-07-01

    As the monitoring thresholds of global and regional networks are lowered, bearing estimates become more important to the processes which associate (sparse) detections and which locate events. Current methods of estimating bearings from observations by 3-component stations and arrays lack both accuracy and precision. Methods are required which will develop all the precision inherently available in the arrival, determine the measurability of the arrival, provide better estimates of the bias induced by the medium, permit estimates at lower SNRs, and provide physical insight into the effects of the medium on the estimates. Initial efforts have focused on 3-component stations since the precision is poorest there. An intelligent estimation process for 3-component stations has been developed and explored. The method, called SEE for Search, Estimate, and Evaluation, adaptively exploits all the inherent information in the arrival at every step of the process to achieve optimal results. In particular, the approach uses a consistent and robust mathematical framework to define the optimal time-frequency windows on which to make estimates, to make the bearing estimates themselves, and to extract metrics helpful in choosing the best estimate(s) or admitting that the bearing is immeasurable. The approach is conceptually superior to current methods, particularly those which rely on real-valued signals. The method has been evaluated to a considerable extent in a seismically active region and has demonstrated remarkable utility by providing not only the best estimates possible but also insight into the physical processes affecting the estimates. It has been shown, for example, that the best frequency at which to make an estimate seldom corresponds to the frequency having the best detection SNR, and sometimes the best time interval is not at the onset of the signal. The method is capable of measuring bearing dispersion, thereby extracting the bearing bias as a function of frequency

  10. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning fork-like members rigidly connected to each other. These members each have two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  11. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning fork-like members rigidly connected to each other. These members each have two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  12. Phantom instabilities in adiabatically driven systems: dynamical sensitivity to computational precision.

    PubMed

    Jafri, Haider Hasan; Singh, Thounaojam Umeshkanta; Ramaswamy, Ramakrishna

    2012-09-01

    We study the robustness of dynamical phenomena in adiabatically driven nonlinear mappings with skew-product structure. Deviations from true orbits are observed when computations are performed with inadequate numerical precision for monotone, periodic, or quasiperiodic driving. The effect of slow modulation is to "freeze" orbits in long intervals of purely contracting or purely expanding dynamics in the phase space. When computations are carried out with low precision, numerical errors build up phantom instabilities which ultimately force trajectories to depart from the true motion. Thus, the dynamics observed with finite precision computation shows sensitivity to numerical precision: the minimum accuracy required to obtain "true" trajectories is proportional to an internal timescale that can be defined for the adiabatic system.
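The sensitivity to numerical precision can be illustrated on a slowly driven logistic map: rounding the state at each step (emulating low-precision arithmetic) eventually forces the trajectory off the full-precision one. This toy map and the truncation scheme are illustrative, not the mappings studied in the paper:

```python
def driven_logistic(x0, steps, digits=None):
    """Iterate x -> r(n) x (1 - x) with the parameter r driven slowly
    from 3.2 toward 3.9; optionally round the state each step to emulate
    low-precision computation."""
    x, traj = x0, []
    for n in range(steps):
        r = 3.2 + 0.7 * n / steps  # adiabatic (slow) drive of the parameter
        x = r * x * (1.0 - x)
        if digits is not None:
            x = round(x, digits)   # per-step truncation error
        traj.append(x)
    return traj

full = driven_logistic(0.4, 500)
low  = driven_logistic(0.4, 500, digits=6)
gap = max(abs(a - b) for a, b in zip(full, low))
print(gap)  # the low-precision orbit departs from the full-precision one
```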

  13. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on the research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of, and future work needed regarding, orbit accuracy. There are several aspects which need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, data reduction/estimation methods, etc. Although each one of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed here.

  14. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 µm on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  15. Ultra precision machining

    NASA Astrophysics Data System (ADS)

    Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas

    1990-05-01

    There are a number of fields that require, or can use to advantage, very high precision in machining. For example, further development of high energy lasers and x-ray astronomy depends critically on the manufacture of lightweight reflecting metal optical components. To fabricate these optical components with machine tools, they will be made of metal with a mirror-quality surface finish. By mirror-quality surface finish, it is meant that dimensional tolerances are on the order of 0.02 microns and surface roughness of 0.07. These accuracy targets fall in the category of ultra precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques. They require single-crystal diamond tools, special attention to vibration isolation, special isolation of the machine metrology, and on-line correction of imperfections in the motion of the machine carriages on their ways.

  16. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  17. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

    Generation of highly precise maps is growing in importance with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. Such a map is important for understanding road environments and making decisions for autonomous driving, since robust localization is one of the critical challenges for the autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the method will be utilized to generate a highly precise road map.
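    A minimal sketch of the classification idea above, using synthetic point neighborhoods and an assumed verticality feature |n_z| (the paper's actual descriptor and data are not reproduced here), with scikit-learn's SVC:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def normal_feature(neighborhood):
    # Surface normal via PCA: the right singular vector with the smallest
    # singular value of the centered neighborhood; the feature is |n_z|.
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return abs(vt[-1][2])

def make_patch(kind):
    # Synthetic 30-point neighborhoods: "road" patches are near-horizontal
    # planes (normal nearly vertical), "pole" patches are vertical strips.
    pts = rng.normal(scale=0.5, size=(30, 3))
    pts[:, 2 if kind == "road" else 0] *= 0.02
    return pts

X, y = [], []
for label, kind in enumerate(["road", "pole"]):
    for _ in range(100):
        X.append([normal_feature(make_patch(kind))])
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y),
                                          random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

    In practice the descriptor would combine several geometric features per point, but the pipeline (normal estimation, feature vector, SVM) has the same shape as the one the abstract describes.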

  18. Robust control of hypersonic aircraft

    NASA Astrophysics Data System (ADS)

    Fan, Yong-hua; Yang, Jun; Zhang, Yu-zhuo

    2007-11-01

    The design of a robust controller for the longitudinal dynamics of a hypersonic aircraft using the parameter space method is presented. In this method, the desirable poles are mapped to the parameter space of the controller using a pole placement approach. The intersection of the parameter spaces is the common controller for the multiple-mode system. This controller can meet the needs of the different phases of the aircraft. Simulation has shown that the controller achieves high precision and robustness against the disturbances caused by separation, cowl opening, fuel on and fuel off, and against perturbations caused by unknown dynamics.
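    The pole placement step can be illustrated on a toy two-state linear model (an illustrative system, not a real hypersonic airframe) using scipy.signal.place_poles:

```python
import numpy as np
from scipy.signal import place_poles

# Toy linearized longitudinal dynamics (illustrative only):
# states are [pitch angle, pitch rate]; the positive stiffness term
# makes the open-loop system unstable, as in many hypersonic models.
A = np.array([[0.0, 1.0],
              [2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

desired = np.array([-2.0, -3.0])            # desired closed-loop poles
K = place_poles(A, B, desired).gain_matrix  # state-feedback gain u = -K x
closed = np.linalg.eigvals(A - B @ K)       # verify the placed poles
```

    The parameter space method in the paper goes further, intersecting the admissible gain regions for several flight modes; the snippet only shows the single-mode placement that underlies it.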

  19. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    PubMed Central

    Prieto, José Carlos; Croux, Christophe; Jiménez, Antonio Ramón

    2009-01-01

    A well-known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems, since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range-based systems do not usually employ robust algorithms, failing when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: 1) a high breakdown point algorithm to obtain a clean sample, and 2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases. PMID:22408522
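    RoPEUS itself (MM estimation with geometry checks) is not reproduced here, but the failure mode that motivates it, a single outlying range corrupting a non-robust fit, can be sketched with a generic robust-loss trilateration via scipy.optimize.least_squares; the beacon layout and outlier size are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 2D beacon positions (m) and true receiver position.
beacons = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [2, 5]], float)
true_pos = np.array([1.5, 2.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
ranges[0] += 3.0   # one grossly outlying ultrasound range measurement

def residuals(p):
    # Range residuals: predicted distances minus measured ranges.
    return np.linalg.norm(beacons - p, axis=1) - ranges

x0 = np.array([2.0, 2.0])
ols = least_squares(residuals, x0).x                  # plain least squares
robust = least_squares(residuals, x0,
                       loss="soft_l1", f_scale=0.3).x  # robust loss
err_ols = np.linalg.norm(ols - true_pos)
err_rob = np.linalg.norm(robust - true_pos)
```

    A robust loss bounds the influence of the bad range, whereas the plain least-squares solution is dragged away from the true position; MM estimation pushes the same idea further with a high breakdown point.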

  20. Dynamic Precision for Electron Repulsion Integral Evaluation on Graphical Processing Units (GPUs).

    PubMed

    Luehr, Nathan; Ufimtsev, Ivan S; Martínez, Todd J

    2011-04-12

    It has recently been demonstrated that novel streaming architectures found in consumer video gaming hardware such as graphical processing units (GPUs) are well-suited to a broad range of computations including electronic structure theory (quantum chemistry). Although recent GPUs have developed robust support for double precision arithmetic, they continue to provide 2-8× more hardware units for single precision. In order to maximize performance on GPU architectures, we present a technique of dynamically selecting double or single precision evaluation for electron repulsion integrals (ERIs) in Hartree-Fock and density functional self-consistent field (SCF) calculations. We show that precision error can be effectively controlled by evaluating only the largest integrals in double precision. By dynamically scaling the precision cutoff over the course of the SCF procedure, we arrive at a scheme that minimizes the number of double precision integral evaluations for any desired accuracy. This dynamic precision scheme is shown to be effective for an array of molecules ranging in size from 20 to nearly 2000 atoms. PMID:26606344
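    The cutoff idea can be mimicked in a few lines: sum a toy, lognormally distributed set of term magnitudes (a stand-in for ERI values, not actual quantum chemistry) with only the largest terms kept in double precision:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy stand-in for ERI magnitudes: many tiny terms, a few large ones.
values = rng.lognormal(mean=-12, sigma=4, size=100_000)

def mixed_precision_sum(vals, cutoff):
    # Terms at or above the cutoff are accumulated in double precision;
    # the rest are demoted to single precision, as in the dynamic scheme.
    big = vals[vals >= cutoff]
    small = vals[vals < cutoff].astype(np.float32)
    return big.sum(dtype=np.float64) + np.float64(small.sum(dtype=np.float32))

reference = values.sum(dtype=np.float64)
rel_error = abs(mixed_precision_sum(values, cutoff=1e-3) - reference) / reference
frac_double = float((values >= 1e-3).mean())
```

    Tightening or loosening the cutoff trades double-precision work against accuracy, which is the knob the paper scales dynamically over the course of the SCF iterations.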

  1. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-07

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such an accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precession movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers.
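    One ingredient of a three-axis evaluation can be sketched as follows: extracting per-axis (pointing and rolling) errors from a measured versus reference attitude under a small-angle approximation. The rotation below is simulated, not taken from the paper's experiments:

```python
import numpy as np

def rot_x(a):
    # Rotation matrix about the x (roll) axis.
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def axis_errors(R_meas, R_ref):
    # Small-angle per-axis attitude errors (rad), read off from the
    # skew-symmetric part of the relative rotation R_meas R_ref^T.
    dR = R_meas @ R_ref.T
    return np.array([dR[2, 1] - dR[1, 2],
                     dR[0, 2] - dR[2, 0],
                     dR[1, 0] - dR[0, 1]]) / 2.0

R_ref = np.eye(3)
R_meas = rot_x(1e-4)             # simulated 100 microrad roll error
err = axis_errors(R_meas, R_ref)
roll_error = err[0]              # error about the boresight (rolling)
pointing_error = np.hypot(err[1], err[2])  # error transverse to boresight
```

    Repeating this over many star observations yields the per-axis error curves from which pointing and rolling accuracies are quoted.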

  2. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such an accuracy remains a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted, taking into account the precession movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions, and it can satisfy the stringent requirements for high-accuracy star trackers. PMID:26948412

  4. Precision orbit determination for Topex

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Schutz, B. E.; Ries, J. C.; Shum, C. K.

    1990-01-01

    The ability of radar altimeters to measure the distance from a satellite to the ocean surface with a precision of the order of 2 cm imposes unique requirements for the orbit determination accuracy. The orbit accuracy requirements will be especially demanding for the joint NASA/CNES Ocean Topography Experiment (Topex/Poseidon). For this mission, a radial orbit accuracy of 13 centimeters will be required for a mission period of three to five years. This is an order of magnitude improvement in the accuracy achieved during any previous satellite mission. This investigation considers the factors which limit the orbit accuracy for the Topex mission. Particular error sources which are considered include the geopotential, the radiation pressure and the atmospheric drag model.

  5. FTRAC--A robust fluoroscope tracking fiducial

    SciTech Connect

    Jain, Ameet Kumar; Mustafa, Tabish; Zhou, Yu; Burdette, Clif; Chirikjian, Gregory S.; Fichtinger, Gabor

    2005-10-15

    C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct three-dimensional (3D) information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the x-ray image in 3D space. Optical/magnetic trackers tend to be prohibitively expensive, intrusive and cumbersome in many applications. We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of ellipses, lines, and points. This is an improvement over contemporary fiducials, which use only points. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A nonlinear optimizer can rapidly compute the pose of the fiducial using this image. The current embodiment has salient attributes: it has small dimensions (3x3x5 cm); it need not be close to the anatomy of interest; and it is accurately segmentable. We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery in phantom experiments had an accuracy of 0.56 mm in translation and 0.33 deg. in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. The method offers accuracies similar to commercial tracking systems, and appears to be sufficiently robust for intraoperative quantitative C-arm fluoroscopy. Simulation experiments indicate that the size can be further reduced to 1x1x2 cm, with only a marginal drop in accuracy.

  6. Accuracy and precision of gravitational-wave models of inspiraling neutron star-black hole binaries with spin: Comparison with matter-free numerical relativity in the low-frequency regime

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Brown, Duncan A.; Lovelace, Geoffrey; Scheel, Mark A.; Szilágyi, Béla

    2015-11-01

    Coalescing binaries of neutron stars and black holes are one of the most important sources of gravitational waves for the upcoming network of ground-based detectors. Detection and extraction of astrophysical information from gravitational-wave signals requires accurate waveform models. The effective-one-body and other phenomenological models interpolate between analytic results and numerical relativity simulations, which typically span O(10) orbits before coalescence. In this paper we study the faithfulness of these models for neutron star-black hole binaries. We investigate their accuracy using new numerical relativity (NR) simulations that span 36-88 orbits, with mass ratios q and black hole spins χBH of (q, χBH) = (7, ±0.4), (7, ±0.6), and (5, −0.9). These simulations were performed treating the neutron star as a low-mass black hole, ignoring its matter effects. We find that (i) the recently published SEOBNRv1 and SEOBNRv2 models of the effective-one-body family disagree with each other (mismatches of a few percent) for black hole spins χBH ≥ 0.5 or χBH ≤ −0.3, with waveform mismatch accumulating during early inspiral; (ii) comparison with numerical waveforms indicates that this disagreement is due to phasing errors of SEOBNRv1, with SEOBNRv2 in good agreement with all of our simulations; (iii) phenomenological waveforms agree with SEOBNRv2 only for comparable-mass low-spin binaries, with overlaps below 0.7 elsewhere in the neutron star-black hole binary parameter space; (iv) comparison with numerical waveforms shows that most of this model's dephasing accumulates near the frequency interval where it switches to a phenomenological phasing prescription; and finally (v) both SEOBNR and post-Newtonian models are effectual for neutron star-black hole systems, but post-Newtonian waveforms will give a significant bias in parameter recovery. Our results suggest that future gravitational-wave detection searches and parameter estimation efforts would benefit
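    The mismatch between two waveform models can be illustrated with toy chirps and a flat (white) noise weighting; real analyses use a detector power spectral density and SEOBNR waveforms, neither of which is attempted here:

```python
import numpy as np

def overlap(h1, h2):
    # Normalized inner product between two real waveforms under a
    # white-noise assumption, maximized over circular time shifts
    # via the FFT cross-correlation.
    corr = np.fft.ifft(np.fft.fft(h1) * np.conj(np.fft.fft(h2)))
    norm = np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))
    return np.max(np.abs(corr)) / norm

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
h_a = np.sin(2 * np.pi * (30 * t + 15.0 * t**2))   # toy chirp "signal"
h_b = np.sin(2 * np.pi * (30 * t + 15.3 * t**2))   # slightly dephased "model"
mismatch = 1.0 - overlap(h_a, h_b)
```

    Mismatches of a few percent, as quoted in the abstract for disagreeing SEOBNR versions, correspond to overlap values around 0.97-0.99 in this normalization; maximization over phase and extrinsic parameters is omitted in this sketch.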

  7. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270
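    The paper's four design parameters are not reproduced here, but the classical Cassie-Baxter relation behind composite-interface wetting shows numerically how a small solid fraction can push the apparent angle above 150° even for a liquid that wets the flat material (the numbers below are illustrative):

```python
import numpy as np

def cassie_baxter(theta_flat_deg, f_solid):
    # Apparent contact angle on a composite solid-liquid-air interface:
    # cos(theta*) = f_s (cos(theta) + 1) - 1  (Cassie-Baxter relation),
    # where f_s is the fraction of the drop footprint touching solid.
    theta = np.radians(theta_flat_deg)
    return np.degrees(np.arccos(f_solid * (np.cos(theta) + 1.0) - 1.0))

# A low-surface-tension liquid wetting the flat solid (theta < 90 deg)
# can still show a super-repellent apparent angle on a texture that
# leaves only ~5% of the footprint in contact with solid.
apparent = cassie_baxter(theta_flat_deg=70.0, f_solid=0.05)
```

    Re-entrant texture is what allows such a metastable composite state to survive for low-surface-tension liquids; the relation above only describes the resulting apparent angle, not the robustness of the air pockets.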

  8. Nickel solution prepared for precision electroforming

    NASA Technical Reports Server (NTRS)

    1965-01-01

    Lightweight, precision optical reflectors are made by electroforming nickel onto masters. Steps for the plating bath preparation, process control testing, and bath composition adjustments are prescribed to avoid internal stresses and maintain dimensional accuracy of the electrodeposited metal.

  9. Optimized robust plasma sampling for glomerular filtration rate studies.

    PubMed

    Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L

    2012-09-01

    In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement. PMID:22825040
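    A sketch of the curve-fitting core: generate a noisy biexponential clearance curve from assumed "true" kinetics, sample it at the five-point schedule quoted above, and recover the area under the curve by nonlinear regression with scipy (the parameter values and the 1% noise level are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    # Biexponential plasma clearance: fast distribution + slow elimination.
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

# Hypothetical "true" tracer kinetics and a 1% measurement noise level.
true = (100.0, 0.1, 20.0, 0.005)
rng = np.random.default_rng(2)

# The five judiciously timed samples identified in the study (minutes).
t5 = np.array([5.0, 10.0, 15.0, 90.0, 180.0])
obs5 = biexp(t5, *true) * (1 + rng.normal(scale=0.01, size=t5.size))

popt, _ = curve_fit(biexp, t5, obs5, p0=(80.0, 0.05, 10.0, 0.01),
                    bounds=(0.0, np.inf))
a1, k1, a2, k2 = popt
auc = a1 / k1 + a2 / k2        # analytic area under a biexponential
auc_true = true[0] / true[1] + true[2] / true[3]
rel_err = abs(auc - auc_true) / auc_true
```

    Since GFR is inversely proportional to the area under the clearance curve, the precision of the recovered AUC under repeated noisy draws is exactly what the Monte Carlo audit in the study quantifies.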

  10. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
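    The agreement statistics reported above can be computed with scikit-learn; the judgment data below are hypothetical, standing in for categorical exposure ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical exposure-category judgments (0-3) vs. the true category.
truth    = [0, 1, 1, 2, 2, 2, 3, 3, 1, 0]
assessor = [0, 1, 2, 2, 2, 1, 3, 3, 1, 0]

kappa_unweighted = cohen_kappa_score(truth, assessor)
# Linear weights penalize near-miss (adjacent-category) disagreements
# less than far misses, matching the "over by one category" finding.
kappa_weighted = cohen_kappa_score(truth, assessor, weights="linear")
```

    Conventionally, κ below 0.4 is read as fair, 0.4-0.6 moderate, 0.6-0.8 good, and above 0.8 excellent, which is the scale the abstract's "fair to moderate" and "good to excellent" labels refer to.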

  12. Robust Optimization of Alginate-Carbopol 940 Bead Formulations

    PubMed Central

    López-Cacho, J. M.; González-R, Pedro L.; Talero, B.; Rabasco, A. M.; González-Rodríguez, M. L.

    2012-01-01

    The formulation process is a very complex activity which sometimes implies taking decisions about parameters or variables to obtain the best results in a context of high variability or uncertainty. Therefore, robust optimization tools can be very useful for obtaining high-quality formulations. This paper proposes the optimization of different responses through the robust Taguchi method. Each response was evaluated as a noise variable, allowing the application of Taguchi techniques to obtain a response from the point of view of the signal-to-noise ratio. An L18 Taguchi orthogonal array design was employed to investigate the effect of eight independent variables involved in the formulation of alginate-Carbopol beads. The responses evaluated were related to the drug release profile from the beads (t50% and AUC), swelling performance, encapsulation efficiency, and shape and size parameters. Confirmation tests to verify the prediction model were carried out, and the obtained results were very similar to those predicted for every profile. The results reveal that robust optimization is a very useful approach that allows greater precision and accuracy with respect to the desired value. PMID:22645438
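    The signal-to-noise transformation at the heart of the Taguchi analysis is simple to state in code; the replicate values below are hypothetical:

```python
import numpy as np

def sn_larger_is_better(y):
    # Taguchi signal-to-noise ratio (dB) when larger responses are better,
    # e.g. encapsulation efficiency: SN = -10 log10(mean(1/y^2)).
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    # Counterpart when smaller responses are better: SN = -10 log10(mean(y^2)).
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical encapsulation-efficiency replicates (%) for two runs.
run_a = [92.0, 94.0, 91.0]   # high mean, low scatter
run_b = [80.0, 95.0, 70.0]   # lower mean, much noisier
sn_a = sn_larger_is_better(run_a)
sn_b = sn_larger_is_better(run_b)
```

    In an L18 analysis, each of the 18 runs gets such an SN value per response, and factor levels are chosen to maximize SN, which rewards both being close to target and being insensitive to noise.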

  13. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate quantum-electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser spectroscopic measurements in atomic helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and, then, to determine the fine structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, determination of the He isotopes' nuclear structure and, in particular, a measurement of the nuclear charge radius, are performed by using hyperfine structure and isotope-shift measurements.

  14. A robust fluoroscope tracking (FTRAC) fiducial

    NASA Astrophysics Data System (ADS)

    Jain, Ameet K.; Mustufa, Tabish; Zhou, Yu; Burdette, E. C.; Chirikjian, Gregory S.; Fichtinger, Gabor

    2005-04-01

    Purpose: C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct 3D information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the X-ray image, in 3D space. Optical/magnetic trackers are prohibitively expensive, intrusive and cumbersome. Method: We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of points, lines, and ellipses. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A non-linear optimizer can rapidly compute the pose of the fiducial using this image. The current embodiment has salient attributes: small dimensions (3 x 3 x 5 cm), it need not be close to the anatomy of interest and can be segmented automatically. Results: We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery had an error of 0.56 mm in translation and 0.33° in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. Conclusion: The method offers accuracies similar to commercial tracking systems, and is sufficiently robust for intra-operative quantitative C-arm fluoroscopy.

  15. Development of a Robust Method for Simultaneous Quantification of Polymer (HPMC) and Surfactant (Dodecyl β-D-Maltoside) in Nanosuspensions.

    PubMed

    Patel, Salin Gupta; Bummer, Paul M

    2016-10-01

    This report describes the development of a chromatographic method for the simultaneous quantification of a polymer, hydroxypropyl methylcellulose (HPMC), and a surfactant, dodecyl β-D-maltoside (DM), that are commonly used in the physical stabilization of pharmaceutical formulations such as nanosuspensions and solid dispersions. These excipients are often challenging to quantify due to the lack of chromophores. A reverse phase size exclusion chromatography (SEC) with evaporative light scattering detector (ELSD) technique was utilized to develop an accurate and robust assay for the simultaneous quantification of HPMC and DM in a nanosuspension formulation. The statistical design of experiments was used to determine the influence of critical ELSD variables including temperature, pressure, and gain on accuracy, precision, and sensitivity of the assay. A robust design space was identified where it was determined that an increase in the temperature of the drift tube and gain of the instrument increased the accuracy and precision of the assay and a decrease in the nebulizer pressure value increased the sensitivity of the assay. In the optimized design space, response data showed that the assay could quantify HPMC and DM simultaneously with good accuracy, precision, and reproducibility. Overall, SEC-ELSD proved to be a powerful technique for the simultaneous quantification of HPMC and DM. This technique can be used to quantify the amount of HPMC and DM in nanosuspensions, which is critical to understanding their effects on the physical stability of nanosuspensions.

  16. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K), an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of + or - 0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
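    The Clausius-Clapeyron step can be illustrated by fitting ln P against 1/T; the latent heat of 82.7 cal/g is taken from the abstract, while the intercept and temperature grid below are arbitrary stand-ins for the measured data:

```python
import numpy as np

# Clausius-Clapeyron: ln P = A - L/(R T), so a plot of ln P vs 1/T is a
# straight line with slope -L/R.
R = 8.314                        # gas constant, J/(mol K)
M_O3 = 48.0                      # molar mass of ozone, g/mol
L_true = 82.7 * 4.184 * M_O3     # latent heat: cal/g -> J/mol

T = np.linspace(85.0, 95.0, 6)   # temperature grid (K), as in the study
A = 20.0                         # arbitrary intercept for illustration
lnP = A - L_true / (R * T)       # synthetic noiseless "measurements"

# Linear least-squares fit of ln P against 1/T recovers -L/R as the slope.
slope, intercept = np.polyfit(1.0 / T, lnP, 1)
L_fit_cal_per_g = -slope * R / 4.184 / M_O3
```

    With real data the fit residuals would set the uncertainty on the latent heat; here the recovery is exact because the synthetic points are noiseless.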

  17. Investigation of stability of precise geodetic instruments used in displacements monitoring

    NASA Astrophysics Data System (ADS)

    Wozniak, Marek; Odziemczyk, Waldemar

    2014-05-01

    Stability of the geometry of the geodetic instrument is particularly important in displacement monitoring systems, since a robust reference system must be provided for position determination during control measurements. Motorized tacheometers perform a fundamental role in such measuring systems. In order to ensure high reliability and accuracy of the measurement results, testing of the instruments and measuring techniques must be performed. In this paper, results of laboratory and field investigations using the precise tacheometers TDA5005 and TCRP1201+ are presented. The influence of temperature on the geometric stability of the instruments was examined, and we propose methods to avoid its negative impact on the results of displacement monitoring.

  18. Fast Switching and Precision Relative Astrometry at the DSN

    NASA Technical Reports Server (NTRS)

    Majid, Walid; Bagri, Durgadas

    2008-01-01

    A) We are developing new techniques to improve astrometric accuracy: 1) reducing switching time (approx. 60 s) and angular separation (approx. 1 deg) between quasars; 2) use of phase delay and bandpass calibration; 3) techniques also applicable with the future DSN Array. B) An initial set of observations carried out at the DSN shows great promise. C) We continue DSN observations using fainter calibrators to study robustness and to verify and validate error estimates. (Eventually demo the technique with spacecraft measurements.) D) Viability of the technique depends on the existence of a sufficient number of calibrators. (Determining what fraction of radio sources are compact at the VLBA.) E) We may be able to use calibrators with flux density approx. 50 mJy within approx. 1 deg. F) Relative precision of approx. 0.5 nrad may be achievable. G) Absolute measurement always depends on knowledge of the calibrator position. (Catalog maintained and improved.)

  19. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.
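
The part-per-million agreement figures above translate into absolute lengths with a one-line conversion. A minimal sketch (the helper name is ours; the ppm and baseline values come from the abstract):

```python
def agreement_mm(baseline_km, ppm):
    """Convert a part-per-million fractional agreement over a baseline to mm."""
    return baseline_km * 1e6 * ppm * 1e-6   # km -> mm is a factor of 1e6

print(round(agreement_mm(40, 0.2), 3))    # GPS vs Geodolite at 40 km: 8.0 mm
print(round(agreement_mm(225, 0.05), 3))  # GPS vs VLBI at 225 km: 11.25 mm
```

So "agreement within 0.2 ppm" over the longest Geodolite baseline corresponds to millimetre-level consistency, comparable to the quoted horizontal standard deviations.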

  20. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its limited ability to image the three-dimensional structure of the mantle in detail or to determine with certainty the strengths of anomalies. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have been highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available. Our knowledge of the size, distribution and location of protoliths is poor, as is our knowledge of the metasomatism of magma sources, the nature of the partial-melting and melt-extraction processes, the mixing of disparate melts, and the re-assimilation of crust and mantle lithosphere by rising melt. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more in presenting interpretations so that the viability of the models can be assessed more reliably.

  1. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 µL L⁻¹ SO₂ or 0.3 µL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.

  2. Precision and accuracy of decay constants and age standards

    NASA Astrophysics Data System (ADS)

    Villa, I. M.

    2011-12-01

    40 years of round-robin experiments with age standards teach us that systematic errors must be present in at least N-1 labs if participants provide N mutually incompatible data. In EarthTime, the U-Pb community has produced and distributed synthetic solutions with full metrological traceability. Collector linearity is routinely calibrated under variable conditions (e.g. [1]). Instrumental mass fractionation is measured in-run with double spikes (e.g. 233U-236U). Parent-daughter ratios are metrologically traceable, so the full uncertainty budget of a U-Pb age should coincide with interlaboratory uncertainty. TIMS round-robin experiments indeed show a decrease of N towards the ideal value of 1. Comparing 235U-207Pb with 238U-206Pb ages (e.g. [2]) has resulted in a credible re-evaluation of the 235U decay constant, with lower uncertainty than gamma counting. U-Pb microbeam techniques reveal the link between petrology, microtextures, microchemistry and the isotope record, but do not achieve the low uncertainty of TIMS. In the K-Ar community, N is large; interlaboratory bias is >10 times the self-assessed uncertainty. Systematic errors may have analytical and petrological causes. Metrological traceability is not yet implemented (substantial advances may come from work in progress, e.g. [7]). One of the worst problems is collector stability and linearity. Using electron multipliers (EM) instead of Faraday buckets (FB) reduces both dynamic range and collector linearity. Mass spectrometer backgrounds are never zero; the extent as well as the predictability of their variability must be propagated into the uncertainty evaluation. The high isotope ratio of atmospheric Ar requires a large dynamic range over which linearity must be demonstrated under all analytical conditions to correctly estimate mass fractionation. The only assessment of EM linearity in Ar analyses [3] points out many fundamental problems; the onus of proof is on every laboratory claiming low uncertainties. Finally, sample size reduction is often associated with reduced clean-up time to increase the sample/blank ratio; this may be self-defeating, as "dry blanks" [4] represent neither the isotopic composition nor the amount of Ar released by the sample chamber when exposed to unpurified sample gas. Single grains enhance background and purification problems relative to large sample sizes measured on FB. Petrologically, many natural "standards" are not ideal (e.g. MMhb1 [5], B4M [6]), as their original distributors never conceived of petrology as the decisive control on isotope retention. Comparing ever smaller aliquots of unequilibrated minerals causes ever larger age variations. Metrologically traceable synthetic isotope mixtures still lie in the future. Petrological non-ideality of natural standards does not allow a metrological uncertainty budget; collector behavior, on the contrary, does. Its quantification will, by definition, make true intralaboratory uncertainty greater than or equal to interlaboratory bias. [1] Chen J, Wasserburg GJ, 1981. Analyt Chem 53, 2060-2067. [2] Mattinson JM, 2010. Chem Geol 275, 186-198. [3] Turrin B et al, 2010. G-cubed 11, Q0AA09. [4] Baur H, 1975. PhD thesis, ETH Zürich, No. 6596. [5] Villa IM et al, 1996. Contrib Mineral Petrol 126, 67-80. [6] Villa IM, Heri AR, 2010. AGU abstract V31A-2296. [7] Morgan LE et al, in press. G-cubed, 2011GC003719.

  3. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.
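
A cup anemometer transfer function is a linear map from rotation frequency to wind speed, so two calibrations differing in slope and offset yield output speeds differing by a few percent. A toy comparison with made-up coefficients (the 4.6% and 7.6% figures above come from real calibration data, not from these numbers):

```python
def wind_speed(freq_hz, slope, offset):
    """Linear cup-anemometer transfer function: speed = slope*f + offset."""
    return slope * freq_hz + offset

f = 20.0                              # rotation frequency in Hz
v_a = wind_speed(f, 0.765, 0.35)      # hypothetical calibration A
v_b = wind_speed(f, 0.800, 0.30)      # hypothetical calibration B
diff_pct = 100.0 * (v_b - v_a) / v_a
print(f"{v_a:.2f} vs {v_b:.2f} m/s: {diff_pct:.1f}% apart")
```

Because slope errors scale with wind speed, a few-percent disagreement between transfer functions propagates directly into wind-resource estimates.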

  4. Global positioning system measurements for crustal deformation: precision and accuracy.

    PubMed

    Prescott, W H; Davis, J L; Svarc, J L

    1989-06-16

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million. PMID:17820661

  5. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
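
The abstract's idea — cheap reduced-precision sweeps that do not pollute the final double-precision answer — can be illustrated with classic mixed-precision iterative refinement (a generic sketch, not the SDC solver itself; float32 is emulated by a round-trip through IEEE single):

```python
import struct

def f32(v):
    """Round a Python float to IEEE single precision ("reduced precision")."""
    return struct.unpack('f', struct.pack('f', v))[0]

# Solve a*x = b: the "factorization" (here just a reciprocal) is computed in
# low precision, but residuals are evaluated in double precision, so a few
# cheap correction sweeps recover full double accuracy -- the same idea that
# lets some SDC correction sweeps run in reduced precision.
a, b = 3.0, 1.0
inv_a_lo = f32(1.0 / a)      # low-precision solve operator
x = inv_a_lo * b             # initial low-precision solution
for _ in range(3):
    r = b - a * x            # residual in double precision
    x += inv_a_lo * r        # cheap low-precision correction
print(abs(a * x - b))        # residual shrinks to near double-precision epsilon
```

The contraction factor per sweep is the relative error of the low-precision operator, so a handful of sweeps suffices.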

  6. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on the delivery accuracy at the meter fix. The 10-second-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
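
The effect of DCT resolution is easy to see arithmetically: truncating to the minute biases the displayed delay low by up to 59 seconds, while rounding keeps the error symmetric and bounded by half the resolution. A small illustration (the function name is ours):

```python
def display_error(delay_s, mode):
    """Displayed-minus-true delay (seconds) for a given DCT resolution."""
    if mode == '10s_rounded':
        shown = round(delay_s / 10) * 10
    elif mode == '1min_rounded':
        shown = round(delay_s / 60) * 60
    else:                        # '1min_truncated'
        shown = (delay_s // 60) * 60
    return shown - delay_s

for mode in ('10s_rounded', '1min_rounded', '1min_truncated'):
    errs = [display_error(d, mode) for d in range(600)]   # delays up to 10 min
    print(mode, min(errs), max(errs))
# truncation is biased low by up to 59 s; rounding keeps the error symmetric
```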

  7. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well defined spectral regimes in which the overlay accuracy and process robustness degrades (`resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these `flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the `landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  8. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended-mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling: implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of the FDF ephemeris file to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  9. Fast robust correlation.

    PubMed

    Fitch, Alistair J; Kadyrov, Alexander; Christmas, William J; Kittler, Josef

    2005-08-01

    A new, fast, statistically robust, exhaustive, translational image-matching technique is presented: fast robust correlation. Existing methods are either slow or non-robust, or rely on optimization. Fast robust correlation works by expressing a robust matching surface as a series of correlations. Speed is obtained by computing correlations in the frequency domain. Computational cost is analyzed and the method is shown to be fast. Speed is comparable to conventional correlation and, for large images, thousands of times faster than direct robust matching. Three experiments demonstrate the advantage of the technique over standard correlation.

  10. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Aᵀz + Kᵀ(y − Cᵀz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains Kᵀ if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
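
The open-loop versus closed-loop contrast can be reproduced with a scalar toy simulation (our own numbers, not from the paper): perturb the gain by 1% in both realizations, and the open-loop error is driven off by the growing state, while the closed-loop error dynamics remain stable.

```python
# Scalar plant xdot = a*x, y = c*x with a > 0 (unstable); observer gain k
# chosen so that a - k*c = -1.5 is stable. Both observers implement a gain
# perturbed by 1%, but only the open-loop realization loses the error.
a, c, k = 0.5, 1.0, 2.0
k_pert = k * 1.01            # 1% perturbation of the implemented gain
m = a - k * c                # open-loop matrix built from the *nominal* design

dt, steps = 0.001, 20000     # forward-Euler integration over 20 s
x, z_open, z_closed = 1.0, 0.0, 0.0
for _ in range(steps):
    y = c * x
    dx = a * x
    dz_open = m * z_open + k_pert * y                       # zdot = M z + K y
    dz_closed = a * z_closed + k_pert * (y - c * z_closed)  # zdot = A z + K (y - C z)
    x += dt * dx
    z_open += dt * dz_open
    z_closed += dt * dz_closed

print(abs(x - z_open), abs(x - z_closed))  # open-loop error large, closed-loop tiny
```

Because the closed-loop observer carries an exact replica of the unstable dynamics a, its error obeys ė = (a − k′c)e regardless of the gain perturbation; the open-loop error is forced by the perturbation times the (exponentially growing) state.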

  11. Central difference predictive filter for attitude determination with low precision sensors and model errors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Chen, Xiaoqian; Misra, Arun K.

    2014-12-01

    Attitude determination is one of the key technologies for the Attitude Determination and Control System (ADCS) of a satellite. However, serious model errors may exist, which will affect the estimation accuracy of the ADCS, especially for a small satellite with low precision sensors. In this paper, a central difference predictive filter (CDPF) is proposed for attitude determination of small satellites with model errors and low precision sensors. The new filter is derived by introducing Stirling's polynomial interpolation formula to extend the traditional predictive filter (PF). It is shown that the proposed filter has higher accuracy for the estimation of system states than the traditional PF. The unscented Kalman filter (UKF) has also been used in the ADCS of small satellites with low precision sensors. In order to evaluate the performance of the proposed filter, the UKF is also employed to compare it with the CDPF. Numerical simulations show that the proposed CDPF is more effective and robust in dealing with model errors and low precision sensors than the UKF or the traditional PF.

  12. A robust method for processing scanning probe microscopy images and determining nanoobject position and dimensions.

    PubMed

    Silly, F

    2009-12-01

    Processing of scanning probe microscopy (SPM) images is essential to explore nanoscale phenomena. Image processing and pattern recognition techniques are developed to improve the accuracy and consistency of nanoobject and surface characterization. We present a robust and versatile method to process SPM images and reproducibly estimate nanoobject position and dimensions. The method uses dedicated fits based on the least-squares method and matrix operations. The corresponding algorithms have been implemented in the FabViewer portable application. We illustrate how these algorithms make it possible not only to correct SPM images but also to precisely determine the position and dimensions of nanocrystals and adatoms on a surface. A robustness test is successfully performed using distorted SPM images. PMID:19941561
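
A common correction of this least-squares flavor is levelling an SPM image by fitting and subtracting a plane; the sketch below is our generic illustration of that step, not the FabViewer code:

```python
# Fit z = a*x + b*y + c over all pixels by least squares (3x3 normal
# equations solved by Gaussian elimination); subtracting the fitted
# plane then removes sample tilt from the image.
def fit_plane(img):
    S = [[0.0] * 3 for _ in range(3)]
    t = [0.0] * 3
    for y, row_px in enumerate(img):
        for x, v in enumerate(row_px):
            row = (float(x), float(y), 1.0)
            for i in range(3):
                t[i] += row[i] * v
                for j in range(3):
                    S[i][j] += row[i] * row[j]
    for i in range(3):                      # forward elimination
        for j in range(i + 1, 3):
            f = S[j][i] / S[i][i]
            for col in range(3):
                S[j][col] -= f * S[i][col]
            t[j] -= f * t[i]
    p = [0.0] * 3
    for i in (2, 1, 0):                     # back substitution
        p[i] = (t[i] - sum(S[i][j] * p[j] for j in range(i + 1, 3))) / S[i][i]
    return p

img = [[0.5 * x + 0.2 * y + 3.0 for x in range(8)] for y in range(6)]  # tilted test "image"
a, b, c = fit_plane(img)
print(round(a, 6), round(b, 6), round(c, 6))  # 0.5 0.2 3.0
```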

  13. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. During the occasional event that more precise vascular extraction is desired or the method fails, we also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  14. Precision injection molding of freeform optics

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong

    2016-08-01

    Precision injection molding is the most efficient mass production technology for manufacturing plastic optics. Applications of plastic optics in the fields of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. These applications require high optical quality, with high form accuracy and low residual stresses, which challenges both the machining of optical tool inserts and the precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on the latter. The challenges and future development trends are also discussed.

  15. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  16. High-accuracy EUV reflectometer

    NASA Astrophysics Data System (ADS)

    Hinze, U.; Fokoua, M.; Chichkov, B.

    2007-03-01

    Developers and users of EUV optics need precise tools for the characterization of their products. Often a measurement accuracy of 0.1% or better is desired to detect and study slow-acting aging effects or degradation by organic contaminants. To achieve a measurement accuracy of 0.1%, an EUV source is required which provides excellent long-term stability, namely power stability, spatial stability and spectral stability. Naturally, it should be free of debris. An EUV source particularly suitable for this task is an advanced electron-based EUV tube, which provides an output of up to 300 μW at 13.5 nm. Reflectometers benefit from the excellent long-term stability of this tool. We design and set up different reflectometers using EUV tubes for the precise characterisation of EUV optics, such as debris samples, filters, multilayer mirrors, grazing incidence optics, collectors and masks. Reflectivity measurements from grazing incidence to near normal incidence, as well as transmission studies, were realised at a precision of down to 0.1%. The reflectometers are computer-controlled and allow varying and scanning all important parameters online. The concept of a sample reflectometer is discussed and results are presented. The devices can be purchased from the Laser Zentrum Hannover e.V.

  17. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and their influence on topographic elevation measurements. From a nominal operating altitude of 500 to 750 m above the ice surface, ATM elevation measurement performance was found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  18. Robustness of phylogenetic inference based on minimum evolution.

    PubMed

    Pardi, Fabio; Guillemot, Sylvain; Gascuel, Olivier

    2010-10-01

    Minimum evolution is the guiding principle of an important class of distance-based phylogeny reconstruction methods, including neighbor-joining (NJ), which is the most cited tree inference algorithm to date. The minimum evolution principle involves searching for the tree with minimum length, where the length is estimated using various least-squares criteria. Since evolutionary distances cannot be known precisely but only estimated, it is important to investigate the robustness of phylogenetic reconstruction to imprecise estimates for these distances. The safety radius is a measure of this robustness: it consists of the maximum relative deviation that the input distances can have from the correct distances, without compromising the reconstruction of the correct tree structure. Answering some open questions, we here derive the safety radius of two popular minimum evolution criteria: balanced minimum evolution (BME) and minimum evolution based on ordinary least squares (OLS + ME). Whereas BME has a radius of 1/2, which is the best achievable, OLS + ME has a radius tending to 0 as the number of taxa increases. This difference may explain the gap in reconstruction accuracy observed in practice between OLS + ME and BME (which forms the basis of popular programs such as NJ and FastME).
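
For four taxa, the minimum-evolution choice reduces to the four-point condition: pick the pairing of taxa with the smallest sum of distances. A toy additive-error illustration of the safety-radius idea (the paper's radii are stated for relative deviations; this example and its numbers are ours):

```python
# Additive distances for the tree ((A,B),(C,D)): cherries at 0.3,
# cross-pairs at 0.7, so the internal edge has length 0.4.
d_true = {('A', 'B'): 0.3, ('C', 'D'): 0.3,
          ('A', 'C'): 0.7, ('A', 'D'): 0.7,
          ('B', 'C'): 0.7, ('B', 'D'): 0.7}

def best_pairing(d):
    sums = {'AB|CD': d[('A', 'B')] + d[('C', 'D')],
            'AC|BD': d[('A', 'C')] + d[('B', 'D')],
            'AD|BC': d[('A', 'D')] + d[('B', 'C')]}
    return min(sums, key=sums.get)

# Perturb every distance by 0.15, which is less than half the internal
# edge length (0.2): the selected topology cannot change.
d_noisy = {k: v + (0.15 if k == ('A', 'B') else -0.15) for k, v in d_true.items()}
print(best_pairing(d_true), best_pairing(d_noisy))  # AB|CD AB|CD
```

The gap between the correct pairing's sum and the alternatives is twice the internal edge length, which is what bounds the tolerable perturbation.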

  19. Evaluation of approaches for estimating the accuracy of genomic prediction in plant breeding

    PubMed Central

    2013-01-01

    fastest and produced the least biased, the most precise, robust and stable estimates of predictive accuracy. These properties argue for routinely using Methods 5 and 7 to assess predictive accuracy in genomic selection studies. PMID:24314298

  20. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the preparation process for natural air and isotopic mixtures, for both molecular and isotopic concentrations, over a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders with identical compositions without isotopic fractionation. Additional emphasis will be placed on the ability to adjust isotope ratios to more closely bracket sample types without reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  1. Knowledge discovery by accuracy maximization.

    PubMed

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-04-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.

  2. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  3. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
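
A minimal illustration of why double precision is sometimes not enough, using exact rational arithmetic as one stand-in for higher precision (the example is ours, not from the article):

```python
from fractions import Fraction

# In IEEE double precision an addend below the machine epsilon vanishes:
x = (1.0 + 1e-17) - 1.0
print(x)        # 0.0

# Exact rational arithmetic (one route to higher precision, alongside
# double-double or arbitrary-precision floats) retains it:
exact = (Fraction(1) + Fraction(1, 10**17)) - Fraction(1)
print(exact)    # 1/100000000000000000
```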

  4. Precision and performance of polysilicon micromirrors for hybrid integrated optics

    NASA Astrophysics Data System (ADS)

    Solgaard, Olav; Tien, Norman C.; Daneman, Michael J.; Kiang, Meng-Hsiung; Friedberger, Alois; Muller, Richard S.; Lau, Kam Y.

    1995-05-01

    We have designed and built integrated, movable micromirrors for on-chip alignment in silicon-optical-bench technology. The mirrors are fabricated using surface micromachining with three polysilicon layers. A polysilicon-hinge technology was used to achieve the required vertical dimensions and functionality for alignment in hybrid photonic integrated circuits. The positioning accuracy of the mirrors is measured to be on the order of 0.2 micrometers. This precision is shown theoretically and experimentally to be sufficient for laser-to-fiber coupling. In the experimental verification, we used external actuators to position the micromirror and obtained 45% coupling efficiency from a semiconductor laser (operating at 1.3 micrometers) to a standard single-mode optical fiber. The stability and robustness of the micromirrors were demonstrated in shock and vibration tests, which showed that the micromirrors will withstand normal handling and operation without the need for welding or gluing. This micromirror technology combines the low-cost advantage of passive alignment with the accuracy of active alignment. In addition to optoelectronic packaging, the micromirrors can be expected to find applications in grating-tuned external-cavity lasers, scanning lasers, and interferometers.

  5. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  6. Contemporary flow meters: an assessment of their accuracy and reliability.

    PubMed

    Christmas, T J; Chapple, C R; Rickards, D; Milroy, E J; Turner-Warwick, R T

    1989-05-01

    The accuracy, reliability and cost effectiveness of 5 currently marketed flow meters have been assessed. The mechanics of each meter are briefly described in relation to its accuracy and robustness. The merits and faults of the meters are discussed, and the important features of flow measurements that need to be taken into account when making diagnostic interpretations are emphasised.

  7. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias correction. Unfortunately, inconsistencies and errors are evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for the achievement of precise and accurate determination of isotope ratios by MC-ICP-MS. Some general rules for such measurements are suggested, and calculations of the combined uncertainty of the data using a few common mass bias correction models are outlined.
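
    One of the common mass bias correction models mentioned here, the exponential law, can be sketched in a few lines (our illustration; the masses and ratios below are placeholder values, not certified constants): the fractionation factor f is derived from a reference isotope pair of known ratio and then applied to the analyte pair.

```python
import math

def mass_bias_factor(r_meas, r_true, m1, m2):
    """Exponential law R_true = R_meas * (m1/m2)**f, solved for f
    using a reference isotope pair of known (certified) ratio."""
    return math.log(r_true / r_meas) / math.log(m1 / m2)

def correct_ratio(r_meas, m1, m2, f):
    """Apply the exponential mass-bias correction to a measured ratio."""
    return r_meas * (m1 / m2) ** f

# Placeholder numbers (not certified values): calibrate f on a
# reference pair, then apply it to a measured ratio.
f = mass_bias_factor(r_meas=2.42, r_true=2.39, m1=204.97, m2=202.97)
corrected = correct_ratio(2.42, 204.97, 202.97, f)
```

    In practice f is determined from an internal standard (or by sample-standard bracketing) and applied to the analyte ratio at its own isotope masses; the round trip above merely shows the algebra is self-consistent.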

  8. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    Precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantage of discriminating against other optical sources and the capability of simultaneously measuring range.

  9. Using satellite data to increase accuracy of PMF calculations

    SciTech Connect

    Mettel, M.C.

    1992-03-01

    The accuracy of a flood severity estimate depends on the data used. The more detailed and precise the data, the more accurate the estimate. Earth observation satellites gather detailed data for determining the probable maximum flood at hydropower projects.

  10. Precise Positioning with Multi-GNSS and its Advantage for Seismic Parameters Inversion

    NASA Astrophysics Data System (ADS)

    Chen, K.; Li, X.; Babeyko, A. Y.; Ge, M.

    2015-12-01

    Together with the ongoing modernization of the U.S. GPS and Russian GLONASS, the two new emerging global navigation satellite systems (BeiDou from China and Galileo from the European Union) are already operational, and the multi-GNSS era is coming. Compared with a single system, multi-GNSS can significantly improve satellite visibility, optimize the spatial geometry, and reduce dilution of precision, and will be of great benefit to both scientific applications and engineering services. In this contribution, we focus mainly on its potential advantages for earthquake parameter estimation and tsunami early warning. First, we assess the precise positioning performance of multi-GNSS in an outdoor experiment on a shaking table. Three positioning methods were used to retrieve the simulated seismic signal: precise point positioning (PPP), the variometric approach for displacements analysis stand-alone engine (VADASE), and temporal point positioning (TPP). In addition, for VADASE and TPP we extended the original dual-frequency model to a single-frequency model and then tested the algorithms. Accuracy, reliability, and continuity were evaluated and analyzed in detail. Our results reveal that multi-GNSS offers more precise and robust positioning than GPS alone. Finally, as a case study, multi-GNSS data recorded during the 2014 Pisagua earthquake were re-processed. Using co-seismic displacements from GPS and multi-GNSS, the earthquake source and the ensuing tsunami were inverted, respectively.
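
    The claim that combining constellations reduces dilution of precision is easy to verify numerically. The sketch below (our own, with made-up satellite azimuth/elevation geometries) builds the standard design matrix of unit line-of-sight vectors plus a receiver-clock column and reads the DOPs off the cofactor matrix.

```python
import numpy as np

def dops(az_el_deg):
    """DOP values from satellite azimuth/elevation pairs (degrees)."""
    az, el = np.radians(np.asarray(az_el_deg)).T
    # design matrix: unit line-of-sight vectors (E, N, U) plus clock column
    A = np.column_stack([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el),
                         np.ones_like(el)])
    Q = np.linalg.inv(A.T @ A)             # cofactor matrix
    d = np.diag(Q)
    return {"GDOP": float(np.sqrt(d.sum())),
            "PDOP": float(np.sqrt(d[:3].sum())),
            "TDOP": float(np.sqrt(d[3]))}

# Hypothetical geometries: one constellation vs. two combined.
gps_only = [(0, 60), (90, 30), (180, 45), (270, 20)]
multi = gps_only + [(45, 70), (135, 25), (225, 50), (315, 15)]
```

    Because adding satellites only adds positive-semidefinite terms to the normal matrix, every DOP can only stay the same or shrink when a second constellation is added.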

  11. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters to precision repeatability of performance. All lamps are not equal. There is a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past, a precision lamp was defined in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of a precision performance lamp must include the factors of a precision lamp; what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered not individually but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager, and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, and (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics actually aids in system product design and manufacturing to build and market more market-acceptable products.

  12. A Precise Lunar Photometric Function

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.

    1996-03-01

    The Clementine multispectral dataset will enable compositional mapping of the entire lunar surface at a resolution of ~100-200 m, but a highly accurate photometric normalization is needed to achieve challenging scientific objectives such as mapping petrographic or elemental compositions. The goal of this work is to normalize the Clementine data to an accuracy of 1% for the UVVIS images (0.415, 0.75, 0.9, 0.95, and 1.0 micrometers) and 2% for the NIR images (1.1, 1.25, 1.5, 2.0, 2.6, and 2.78 micrometers), consistent with radiometric calibration goals. The data will be normalized to R30, the reflectance expected at an incidence angle (i) and phase angle (alpha) of 30 degrees and an emission angle (e) of 0 degrees, matching the photometric geometry of lunar samples measured at the reflectance laboratory (RELAB) at Brown University. The focus here is on the precision of the normalization, not the putative physical significance of the photometric-function parameters. The 2% precision achieved is significantly better than the ~10% precision of a previous normalization.
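
    As a hedged sketch of what such a normalization involves (our toy version only: a pure Lommel-Seeliger limb-darkening term and an invented exponential phase curve, not the function actually fitted to the Clementine data), the observed reflectance is multiplied by the ratio of the model evaluated at the standard geometry to the model at the observed geometry:

```python
import math

def lommel_seeliger(i_deg, e_deg):
    """Limb-darkening term mu0 / (mu0 + mu) of the Lommel-Seeliger law."""
    mu0 = math.cos(math.radians(i_deg))
    mu = math.cos(math.radians(e_deg))
    return mu0 / (mu0 + mu)

def phase_curve(alpha_deg):
    """Placeholder phase function: 0.02 mag/deg linear decline (invented)."""
    return 10.0 ** (-0.4 * 0.02 * alpha_deg)

def normalize_to_r30(r_obs, i_deg, e_deg, alpha_deg):
    """Normalize an observed reflectance to i = 30, e = 0, alpha = 30."""
    num = lommel_seeliger(30.0, 0.0) * phase_curve(30.0)
    den = lommel_seeliger(i_deg, e_deg) * phase_curve(alpha_deg)
    return r_obs * num / den

# An observation away from the standard geometry, normalized to R30.
r30 = normalize_to_r30(0.12, 55.0, 10.0, 45.0)
```

    An observation already taken at the standard geometry passes through unchanged, which is a useful sanity check for any such normalization.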

  13. Robust fault detection and isolation in stochastic systems

    NASA Astrophysics Data System (ADS)

    George, Jemin

    2012-07-01

    This article outlines the formulation of a robust fault detection and isolation (FDI) scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme, based on the discontinuous robust observer approach, is able to distinguish between model uncertainties and actuator failures and therefore eliminates the problem of false alarms. Since the proposed approach involves estimating sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the proposed robust FDI system.

  14. Robust Methods in Qsar

    NASA Astrophysics Data System (ADS)

    Walczak, Beata; Daszykowski, Michał; Stanimirova, Ivana

    Considerable progress in the development of robust methods as an efficient tool for processing data contaminated with outlying objects has been made over recent years. Outliers in QSAR studies are usually the result of improper calculation of some molecular descriptors and/or experimental error in determining the property to be modelled. They greatly influence any least-squares model, and therefore the conclusions about the biological activity of a potential compound based on such a model are misleading. With robust approaches, one can solve this problem by building a robust model that describes the majority of the data well. On the other hand, proper identification of outliers may pinpoint a new direction for drug development. The assessment of outliers can only be done with robust methods, and these methods are described in this chapter.
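
    The effect described above, a single outlier dragging a least-squares line while a robust fit ignores it, can be demonstrated with a few lines of iteratively reweighted least squares using Huber weights (a generic sketch of one common robust estimator, not code from this chapter):

```python
import numpy as np

def huber_line_fit(x, y, delta=1.0, n_iter=50):
    """Robust straight-line fit y ~ a + b*x via iteratively
    reweighted least squares with Huber weights."""
    A = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(A, y, rcond=None)[0]      # OLS start
    for _ in range(n_iter):
        r = y - A @ beta
        scale = np.median(np.abs(r)) / 0.6745        # robust scale estimate
        if scale == 0:
            scale = 1.0
        u = np.abs(r) / scale
        w = np.where(u <= delta, 1.0, delta / np.maximum(u, 1e-12))
        sw = np.sqrt(w)                              # weighted least squares
        beta = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)[0]
    return beta

# Clean line y = 1 + 2x with one gross outlier at the end.
x = np.arange(10, dtype=float)
y = 1.0 + 2.0 * x
y[9] = 100.0
b_ols = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)[0]
b_rob = huber_line_fit(x, y)
```

    The ordinary least-squares slope is pulled far from 2 by the single bad point, while the reweighted fit recovers the slope and intercept of the clean data; the large final residual of the outlier is exactly the diagnostic the chapter says robust methods provide.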

  15. Precision volume measuring system

    SciTech Connect

    Klevgard, P.A.

    1984-11-01

    An engineering study was undertaken to calibrate and certify a precision volume measurement system that uses the ideal gas law and precise pressure measurements (of low-pressure helium) to ratio a known to an unknown volume. The constant-temperature, computer-controlled system was tested for thermodynamic instabilities, for precision (0.01%), and for bias (0.01%). Ratio scaling was used to optimize the quartz crystal pressure transducer calibration.
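
    The underlying gas-law ratioing reduces to Boyle's law at constant temperature. A minimal sketch (our reconstruction of the general technique, not the certified system's actual procedure): helium at a known pressure in the reference volume expands into the evacuated unknown volume, and the pressure ratio yields the volume ratio.

```python
def unknown_volume(v_ref, p_initial, p_final):
    """Boyle's law at constant temperature: p_initial * v_ref =
    p_final * (v_ref + v_unknown), solved for the unknown volume."""
    return v_ref * (p_initial / p_final - 1.0)

# 1.000 L reference; helium pressure drops from 100.0 to 80.0 kPa
# after expansion into the evacuated unknown volume.
v_unk = unknown_volume(1.000, 100.0, 80.0)
```

    Reaching the quoted 0.01% levels is then a matter of transducer calibration and temperature control, since any error in the pressure ratio propagates directly into the volume ratio.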

  16. Robust visual tracking with contiguous occlusion constraint

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Qian, Weixian; Chen, Qian

    2016-02-01

    Visual tracking plays a fundamental role in video surveillance, robot vision and many other computer vision applications. In this paper, a robust visual tracking method that is motivated by the regularized ℓ1 tracker is proposed. We focus on investigating the case where the object target is occluded. Generally, occlusion can be treated as a kind of contiguous outlier with respect to the target appearance. However, the penalty function of the ℓ1 tracker is not robust to relatively dense errors distributed in contiguous regions. Thus, we exploit a nonconvex penalty function and Markov random fields (MRFs) for outlier modeling, which is more likely to detect the contiguous occluded regions and recover the target appearance. For long-term tracking, a particle filter framework along with a dynamic model update mechanism is developed. Both qualitative and quantitative evaluations demonstrate robust and precise performance.

  17. Precision goniometer equipped with a 22-bit absolute rotary encoder.

    PubMed

    Xiaowei, Z; Ando, M; Jidong, W

    1998-05-01

    The calibration of a compact precision goniometer equipped with a 22-bit absolute rotary encoder is presented. The goniometer is a modified Huber 410 goniometer: the diffraction angles can be coarsely generated by a stepping-motor-driven worm gear and precisely interpolated by a piezoactuator-driven tangent arm. The angular accuracy of the precision rotary stage was evaluated with an autocollimator. It was shown that the deviation from circularity of the rolling bearing utilized in the precision rotary stage restricts the angular positioning accuracy of the goniometer, and results in an angular accuracy ten times larger than the angular resolution of 0.01 arcsec. The 22-bit encoder was calibrated against an incremental rotary encoder. It became evident that the accuracy of the absolute encoder is approximately 18 bits due to systematic errors.

  18. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professionals to use information management tools to optimize agricultural production. The new technologies allow aerial applicators to improve application accuracy and efficiency, which saves time and money for...

  19. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided, with the precision measuring means secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate, with the first plate positioned between the second plate and the third plate. A fourth plate is secured to the third plate, with the second plate positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the positions of the first, second, third, and fourth plates relative to each other.

  20. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modelling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previously popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
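
    For the mono-exponential case, the Bayesian machinery can be illustrated in miniature (our toy sketch, ignoring the instrument response and background that the paper models): with a flat prior, the posterior over the lifetime tau given photon arrival times peaks at the maximum-likelihood value, which for a pure exponential is the sample mean.

```python
import numpy as np

def lifetime_posterior(times, taus):
    """Normalized posterior over lifetime tau for a mono-exponential decay
    with a flat prior: p(tau | t) proportional to prod_i (1/tau)exp(-t_i/tau)."""
    t = np.asarray(times)
    logp = -len(t) * np.log(taus) - t.sum() / taus
    logp -= logp.max()                      # numerical stabilization
    p = np.exp(logp)
    return p / p.sum()

rng = np.random.default_rng(0)
photons = rng.exponential(2.5, size=5000)   # true lifetime: 2.5 (e.g. ns)
taus = np.linspace(0.5, 5.0, 901)           # lifetime grid, step 0.005
post = lifetime_posterior(photons, taus)
tau_map = taus[post.argmax()]
```

    Because every photon contributes a likelihood term, the posterior tightens as counts accumulate, which is why evidence-per-event methods shine in the low-intensity regime the title refers to.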

  1. Backward smoothing for precise GNSS applications

    NASA Astrophysics Data System (ADS)

    Vaclavovic, Pavel; Dousa, Jan

    2015-10-01

    The Extended Kalman filter is widely used for its robustness and simple implementation. Parameters estimated when solving dynamical systems usually require a certain time to converge and need to be smoothed by a dedicated algorithm. The purpose of our study was to implement smoothing algorithms for processing both code and carrier-phase observations with the Precise Point Positioning method. We implemented and used the well-known Rauch-Tung-Striebel (RTS) smoother. We found that the RTS suffers from significant numerical instability in the determination of the smoothed state covariance matrix. We improved the processing with algorithms based on the Singular Value Decomposition, which proved more robust. Observations from many permanent stations were processed with final orbits and clocks provided by the International GNSS Service (IGS), and the smoothing improved stability and precision in every case. Moreover, (re)convergence of the parameters was always successfully eliminated.
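
    A scalar sketch of the filter-plus-smoother pipeline (generic textbook Kalman/RTS recursions, not the authors' GNSS implementation; the process and measurement noise values are arbitrary): the backward pass revisits early epochs with the benefit of all later data, which is what removes the convergence transient.

```python
import numpy as np

def kalman_rts(z, q=1e-4, r=0.25):
    """Scalar random-walk Kalman filter (F = H = 1) followed by a
    Rauch-Tung-Striebel backward smoothing pass."""
    n = len(z)
    xp = np.zeros(n); pp = np.zeros(n)     # predicted state / covariance
    xf = np.zeros(n); pf = np.zeros(n)     # filtered state / covariance
    x, p = z[0], 1.0
    for k in range(n):
        xp[k], pp[k] = x, p + q            # predict
        g = pp[k] / (pp[k] + r)            # Kalman gain
        x = xp[k] + g * (z[k] - xp[k])     # measurement update
        p = (1.0 - g) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()                         # backward (RTS) pass
    for k in range(n - 2, -1, -1):
        c = pf[k] / pp[k + 1]              # smoother gain
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xf, xs

rng = np.random.default_rng(0)
truth = 5.0
z = truth + rng.normal(0.0, 0.5, 200)      # noisy measurements
xf, xs = kalman_rts(z)
```

    The covariance recursion here is the numerically fragile part the authors replace: products like (1 - g) * pp can lose symmetry and positive-definiteness in the matrix case, which SVD-based formulations avoid.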

  2. Simultaneous HPLC determination of 22 components of essential oils; method robustness with experimental design.

    PubMed

    Porel, A; Sanyal, Y; Kundu, A

    2014-01-01

    The aim of the present study was the development and validation of a simple, precise and specific reversed-phase HPLC method for the simultaneous determination of 22 components present in different essential oils, namely cinnamon bark oil, caraway oil and cardamom fruit oil. Chromatographic separation of all the components was achieved on a Wakosil-II C18 column with a mixture of 30 mM ammonium acetate buffer (pH 4.7), methanol and acetonitrile in different ratios as mobile phase in a ternary linear gradient mode. The calibration graphs plotted with five different concentrations of each component were linear, with a regression coefficient R2 > 0.999. The limit of detection and limit of quantitation were estimated for all the components. The effect of small, deliberate variations of critical factors on the analytical responses was examined by robustness testing with a Design of Experiments approach employing a Central Composite Design, which established that the method was robust. The method was then validated for linearity, precision, accuracy and specificity, and demonstrated to be applicable to the determination of the ingredients in commercial samples of essential oil.
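
    The linearity and LOD/LOQ figures reported here follow a standard recipe, sketched below with invented calibration data (LOD = 3.3·σ/S and LOQ = 10·σ/S per the usual ICH convention, where σ is the residual standard deviation of the calibration line and S its slope):

```python
import numpy as np

def calibration_stats(conc, resp):
    """Least-squares calibration line with R^2 and LOD/LOQ computed
    from the residual standard deviation (sigma) and slope (S):
    LOD = 3.3*sigma/S, LOQ = 10*sigma/S."""
    slope, intercept = np.polyfit(conc, resp, 1)
    fit = intercept + slope * conc
    ss_res = float(np.sum((resp - fit) ** 2))
    ss_tot = float(np.sum((resp - resp.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    sigma = np.sqrt(ss_res / (len(conc) - 2))   # residual SD, 2 dof lost
    return {"slope": slope, "intercept": intercept, "r2": r2,
            "lod": 3.3 * sigma / slope, "loq": 10.0 * sigma / slope}

conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])      # invented, e.g. ug/mL
resp = np.array([51.0, 99.0, 202.0, 398.0, 805.0])  # invented peak areas
stats = calibration_stats(conc, resp)
```

    With five calibration levels per component, as in the paper, this yields one slope, R2, LOD and LOQ per analyte.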

  5. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26° and a mean angular difference from specialist delineations (accuracy) of 1.64°.

  6. System and method for high precision isotope ratio destructive analysis

    SciTech Connect

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  7. A novel low-complexity post-processing algorithm for precise QRS localization.

    PubMed

    Fonseca, Pedro; Aarts, Ronald M; Foussier, Jérôme; Long, Xi

    2014-01-01

    Precise localization of QRS complexes is an essential step in the analysis of small transient changes in instant heart rate and before signal averaging in QRS morphological analysis. Most localization algorithms reported in the literature are either not robust to artifacts, depend on the sampling rate of the ECG recordings, or are too computationally expensive for real-time applications, especially in low-power embedded devices. This paper proposes a localization algorithm based on the intersection of tangents fitted to the slopes of R waves detected by any QRS detector. Despite having a lower complexity, this algorithm achieves trigger jitter comparable to more complex localization methods without requiring the data to first be upsampled. It also achieves high localization precision regardless of which QRS detector is used as input. It is robust to clipping artifacts and to noise, achieving an average localization error below 2 ms and a trigger jitter below 1 ms on recordings where no additional artifacts were added, and below 8 ms on recordings where the signal was severely degraded. Finally, it increases the accuracy of template-based false positive rejection, allowing nearly all mock false positives added to a set of QRS detections to be removed at the cost of a very small decrease in sensitivity. The localization algorithm proposed is particularly well-suited for implementation in embedded, low-power devices for real-time applications. PMID:26034664
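
    The tangent-intersection step itself is compact. The toy reconstruction below (our own code and names, applied to a synthetic triangular "R wave" rather than real ECG) fits lines through the steepest rising and falling samples around the detected peak and intersects them:

```python
import numpy as np

def tangent_intersection(t, x, peak):
    """Fiducial point from the intersection of straight lines fitted at
    the steepest rising and falling slopes around a detected R peak."""
    d = np.gradient(x, t)
    up = int(d[:peak].argmax())            # steepest upslope sample
    dn = peak + int(d[peak:].argmin())     # steepest downslope sample
    b1, a1 = d[up], x[up] - d[up] * t[up]  # rising tangent: a1 + b1*t
    b2, a2 = d[dn], x[dn] - d[dn] * t[dn]  # falling tangent: a2 + b2*t
    return (a2 - a1) / (b1 - b2)           # abscissa of the intersection

# Synthetic symmetric triangular "R wave" peaking at t = 0.5 s.
t = np.linspace(0.0, 1.0, 1001)
x = np.maximum(0.0, 1.0 - 20.0 * np.abs(t - 0.5))
loc = tangent_intersection(t, x, int(x.argmax()))
```

    Because the intersection is computed in continuous time from the fitted lines, the fiducial point is not quantized to the sample grid, which is how the method avoids upsampling.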

  9. Precision Teaching: An Introduction.

    ERIC Educational Resources Information Center

    West, Richard P.; And Others

    1990-01-01

    Precision teaching is introduced as a method of helping students develop fluency or automaticity in the performance of academic skills. Precision teaching involves being aware of the relationship between teaching and learning, measuring student performance regularly and frequently, and analyzing the measurements to develop instructional and…

  10. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  11. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency, high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as aircraft automatic landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot provide a low-latency, high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from two receivers. The asynchronous observation model (AOM) is developed based on undifferenced carrier-phase observation equations of the two receivers at different epochs over a short baseline. Ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and they are analyzed theoretically. For a short DLTTD during a period of quiet ionospheric activity, the main error sources decreasing positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integration of satellite velocity error, which increase linearly with DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional ambiguity parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a specific case of the AOM. The proposed method not only reduces the cost of data collection and transmission, but can also support a mobile phone network data link for the reference receiver data. The method avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time vehicle navigation. The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  12. Robust control of accelerators

    SciTech Connect

    Johnson, W.J.D. ); Abdallah, C.T. )

    1990-01-01

    The problem of controlling the variations in the rf power system can be effectively cast as an application of modern control theory. Two components of this theory are obtaining a model and a feedback structure. The model inaccuracies influence the choice of a particular controller structure. Because of the modeling uncertainty, one has to design either a variable, adaptive controller or a fixed, robust controller to achieve the desired objective. The adaptive control scheme usually results in very complex hardware and therefore is not pursued in this research. In contrast, the robust control method leads to simpler hardware, although it requires a more accurate mathematical model of the physical process than adaptive control does. Our research at the Los Alamos National Laboratory (LANL) and the University of New Mexico (UNM) has led to the development and implementation of a new robust rf power feedback system. In this paper, we report on our research progress. In section one, the robust control problem for the rf power system and the philosophy adopted for the beginning phase of our research are presented. In section two, the results of our proof-of-principle experiments are presented. In section three, we describe the actual controller configuration that is used in LANL FEL physics experiments. The novelty of our approach is that the control hardware is implemented directly in rf without demodulating, compensating, and then remodulating.

  13. Classification accuracy improvement

    NASA Technical Reports Server (NTRS)

    Kistler, R.; Kriegler, F. J.

    1977-01-01

    Improvements made in the processing system designed for MIDAS (prototype multivariate interactive digital analysis system) effect higher accuracy in pixel classification and significantly reduced processing time. The improved system realizes a cost reduction factor of 20 or more.

  14. Trap Array Configuration Influences Estimates and Precision of Black Bear Density and Abundance

    PubMed Central

    Wilton, Clay M.; Puckett, Emily E.; Beringer, Jeff; Gardner, Beth; Eggert, Lori S.; Belant, Jerrold L.

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affect precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193–406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information

  15. Precision and Power Grip Priming by Observed Grasping

    ERIC Educational Resources Information Center

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-01-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right- or left-hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right-hand; power…

  16. Robust Unit Commitment Considering Uncertain Demand Response

    DOE PAGES

    Liu, Guodong; Tomsovic, Kevin

    2014-09-28

    Although price responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of power system, the real response from demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision taking into account uncertainty of the price elasticity of demand. By optimizing the worst case under proper robust level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.

  17. Robust Unit Commitment Considering Uncertain Demand Response

    SciTech Connect

    Liu, Guodong; Tomsovic, Kevin

    2014-09-28

    Although price responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of power system, the real response from demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision taking into account uncertainty of the price elasticity of demand. By optimizing the worst case under proper robust level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.

  18. Robust Signal Processing in Living Cells

    PubMed Central

    Steuer, Ralf; Waldherr, Steffen; Sourjik, Victor; Kollmann, Markus

    2011-01-01

    Cellular signaling networks have evolved an astonishing ability to function reliably and with high fidelity in uncertain environments. A crucial prerequisite for the high precision exhibited by many signaling circuits is their ability to keep the concentrations of active signaling compounds within tightly defined bounds, despite strong stochastic fluctuations in copy numbers and other detrimental influences. Based on a simple mathematical formalism, we identify topological organizing principles that facilitate such robust control of intracellular concentrations in the face of multifarious perturbations. Our framework allows us to judge whether a multiple-input-multiple-output reaction network is robust against large perturbations of network parameters and enables the predictive design of perfectly robust synthetic network architectures. Utilizing the Escherichia coli chemotaxis pathway as a hallmark example, we provide experimental evidence that our framework indeed allows us to unravel the topological organization of robust signaling. We demonstrate that the specific organization of the pathway allows the system to maintain global concentration robustness of the diffusible response regulator CheY with respect to several dominant perturbations. Our framework provides a counterpoint to the hypothesis that cellular function relies on an extensive machinery to fine-tune or control intracellular parameters. Rather, we suggest that for a large class of perturbations, there exists an appropriate topology that renders the network output invariant to the respective perturbations. PMID:22215991

  19. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  20. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors often exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with an ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; therefore, it has the advantage of handling various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors compared with the traditional unscented Kalman filter (UKF).

  1. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-01-01

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physical blocks, such as in a natural canyon, canyon city, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) gradually matures, and performance is constantly improved. A deep coupled integration of CSAC and GNSS is explored in this thesis to enhance PNT robustness. "Clock coasting" of CSAC provides time synchronized with GNSS and optimizes navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by integration. Integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542
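The "weighted linear optimal estimation" on the CSAC-aided GNSS side can be pictured as inverse-variance weighting of two time estimates: the drifting-but-quiet clock-coasting value and the noisy-but-unbiased GNSS time. The function, the variances and the time offsets below are illustrative assumptions, not the paper's actual filter design.

```python
# Inverse-variance (minimum-variance linear) fusion of two time estimates.
# Times are clock offsets in seconds; variances are illustrative weights.

def fuse(t_csac, var_csac, t_gnss, var_gnss):
    w_c, w_g = 1.0 / var_csac, 1.0 / var_gnss
    t = (w_c * t_csac + w_g * t_gnss) / (w_c + w_g)
    var = 1.0 / (w_c + w_g)  # fused variance is below either input's
    return t, var

t, var = fuse(t_csac=100e-9, var_csac=1.0, t_gnss=90e-9, var_gnss=4.0)
```

The fused estimate leans toward the lower-variance source, and the fused variance is smaller than either input variance, which is the sense in which the combination is "optimal" among linear unbiased estimators.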

  2. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-09-11

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physical blocks, such as in a natural canyon, canyon city, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) gradually matures, and performance is constantly improved. A deep coupled integration of CSAC and GNSS is explored in this thesis to enhance PNT robustness. "Clock coasting" of CSAC provides time synchronized with GNSS and optimizes navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by integration. Integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.

  3. Deep Coupled Integration of CSAC and GNSS for Robust PNT

    PubMed Central

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-01-01

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physical blocks, such as in a natural canyon, canyon city, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) gradually matures, and performance is constantly improved. A deep coupled integration of CSAC and GNSS is explored in this thesis to enhance PNT robustness. “Clock coasting” of CSAC provides time synchronized with GNSS and optimizes navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by integration. Integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT. PMID:26378542

  4. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy and composition precision). The spatial accuracy of this microscope is evaluated in this paper for the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure, composed of a high-permittivity (high-k) oxide layer that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret the inaccuracy in the depth reconstruction of these devices in APT.

  5. Overview of the national precision database for ozone

    SciTech Connect

    Mikel, D.K.

    1999-07-01

    One of the most important ambient air monitoring quality assurance indicators is the precision test. Code of Federal Regulations Title 40, Part 58 (40 CFR 58) Appendix A states that all automated analyzers must have precision tests performed at least once every two weeks. Precision tests can be the best indicator of data quality for the following reasons: Precision tests are performed once every two weeks, giving approximately 24 to 26 tests per year per instrument, whereas accuracy tests (audits) usually occur only 1-2 times per year. Precision tests and the subsequent statistical analysis can be used to calculate the bias in a data set. Precision tests are used to calculate 95% confidence (probability) limits for the data set; this is important because the confidence of any data point can be determined. If any exceedances or near-exceedances of the ozone NAAQS are examined, the confidence limits must be examined as well. Precision tests are performed by the monitoring staff, and the precision standards are certified against the internal agency primary standards. Precision data are submitted by all state and local agencies that are required to submit criteria pollutant data to the Aerometric Information Retrieval System (AIRS) database. This subset of the AIRS database is named the Precision and Accuracy Retrieval System (PARS). In essence, the precision test is a test performed internally by the agency collecting and reporting the data.
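The biweekly checks lend themselves to a simple summary: percent differences between the analyzer's indicated value and the known precision-standard value, and probability limits of the form mean ± 1.96 standard deviations. The sketch below uses that generic form with made-up ozone readings; it is a schematic, not the exact Appendix A formula set.

```python
import statistics

# Percent-difference summary of precision checks with simple 95%
# probability limits (mean +/- 1.96 s). All readings are illustrative.

def precision_summary(indicated, actual):
    d = [100.0 * (i - a) / a for i, a in zip(indicated, actual)]
    m = statistics.mean(d)
    s = statistics.stdev(d)
    return m, (m - 1.96 * s, m + 1.96 * s)

# Four biweekly ozone checks against a 0.100 ppm precision standard:
mean_d, limits = precision_summary([0.101, 0.099, 0.102, 0.098],
                                   [0.100, 0.100, 0.100, 0.100])
```

A data point falling outside such limits would prompt exactly the kind of scrutiny the abstract describes for values near the ozone NAAQS.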

  6. 3D robust digital image correlation for vibration measurement.

    PubMed

    Chen, Zhong; Zhang, Xianmin; Fatikow, Sergej

    2016-03-01

    Discrepancies between speckle images under dynamic measurement, due to different viewing angles, deteriorate the correspondence in 3D digital image correlation (3D-DIC) for vibration measurement. To address this bottleneck, this paper presents two types of robust 3D-DIC methods for vibration measurement, SSD-robust and SWD-robust, which use a sum-of-square-difference (SSD) estimator plus a Geman-McClure regulating term and a Welch estimator plus a Geman-McClure regulating term, respectively. Because the regulating term with an adaptive rejecting bound lessens the influence of abnormal pixel data in the dynamic measurement process, the robustness of the algorithm is enhanced. Robustness and precision evaluation experiments using a dual-frequency laser interferometer were implemented. The experimental results indicate that the two presented robust estimators can suppress the effects of abnormalities in the speckle images while keeping higher precision in vibration measurement than the traditional SSD method; the SWD-robust and SSD-robust methods are suitable for weak image noise and strong image noise, respectively. PMID:26974624
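The effect of a Geman-McClure style regulating term can be seen in one line: it saturates large residuals so that a few abnormal pixels cannot dominate the correlation cost. The sketch below contrasts a plain SSD cost with a Geman-McClure cost on made-up intensity values; it is only a schematic of the estimators' behavior, not the paper's full 3D-DIC pipeline.

```python
# Plain SSD versus a Geman-McClure robust cost on toy pixel intensities.
# Each robust term r^2/(r^2 + sigma^2) is bounded by 1, so a single gross
# outlier barely moves the total. sigma is an illustrative scale parameter.

def ssd(f, g):
    return sum((a - b) ** 2 for a, b in zip(f, g))

def geman_mcclure(f, g, sigma=10.0):
    return sum((a - b) ** 2 / ((a - b) ** 2 + sigma ** 2)
               for a, b in zip(f, g))

clean   = [10, 20, 30, 40]
corrupt = [11, 19, 30, 250]   # one abnormal pixel (e.g. decorrelated speckle)
```

Here `ssd(clean, corrupt)` is dominated by the single outlier (44102), while the robust cost stays near 1, which is the mechanism behind the "adaptive rejecting bound" described above.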

  7. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  8. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  9. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
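A toy version of the length-dependent failure rule is enough to see the effect: if each link fails with probability proportional to its spatial length, long links fail far more often, making geographically stretched networks more fragile than purely topological models predict. The failure coefficient and link lengths below are illustrative.

```python
import random

# Empirical failure frequency of a link whose per-round failure
# probability is coeff * length (the length-dependent percolation rule).

def failure_rate(length, coeff, trials, seed=0):
    rng = random.Random(seed)
    fails = sum(1 for _ in range(trials) if rng.random() < coeff * length)
    return fails / trials

short_link = failure_rate(length=0.1, coeff=1.0, trials=10_000)
long_link  = failure_rate(length=0.9, coeff=1.0, trials=10_000)
# long_link comes out roughly nine times larger than short_link
```

In a full simulation one would apply this rule to every link of a spatial graph and then measure connectedness, which is how a length-weighted percolation study of microgrid robustness could proceed.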

  10. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with industry-standard high-pressure ionization chamber detector hardware. A new highly sensitive readout circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. The new signal processing technique and dedicated data acquisition were tested together with the new readout. The system collects data on a remote Linux-operated workstation, connected to the detectors through a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  11. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described, which enables real-time accountability of the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  12. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h data are requested for high precision results; however, real life situations do not always let us collect 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of the year, over two successive years, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when taking seasonal effects into account, and the typical one-term accuracy formula is expanded to a two-term one.

  13. A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding

    NASA Astrophysics Data System (ADS)

    Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui

    2016-02-01

    In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-time pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, an effect that is robust across different output signal voltage levels. The total integration error is limited to within ±0.09 mV s-1 (0.005% of full scale per second) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.
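Why capacitive leakage matters in long-pulse integration can be shown with a two-line discrete-time model: an ideal integrator accumulates v·dt each step, while a leaky one also loses a fixed fraction of its stored value, so the error grows with integration time. The step size, leak fraction and signal below are illustrative, not the article's measured values.

```python
# Discrete-time integrator with an optional per-step leakage fraction.
# leak = 0 gives the ideal integral; leak > 0 models capacitive leakage.

def integrate(v_in, dt, leak=0.0):
    out = 0.0
    for v in v_in:
        out = out * (1.0 - leak) + v * dt
    return out

signal = [1.0] * 1000                           # 1 V held for 1000 steps of 1 ms
ideal  = integrate(signal, dt=1e-3)             # ~1.0 V*s
leaky  = integrate(signal, dt=1e-3, leak=1e-4)  # ~5% low after only 1 s
```

A real-time compensator, like the feedback adder described above, would estimate the leaked charge each step and add it back, recovering the ideal accumulation.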

  14. Precise Orbit Determination of GPS Satellites Using Phase Observables

    NASA Astrophysics Data System (ADS)

    Jee, Myung-Kook; Choi, Kyu-Hong; Park, Pil-Ho

    1997-12-01

    The accuracy of a user position determined by GPS is heavily dependent upon the accuracy of the satellite positions, which are usually transmitted to GPS users in radio signals. The real-time satellite position information obtained directly from broadcast ephemerides has an accuracy of 3 x 10 meters, which is far from sufficient for measuring a 100 km baseline to an accuracy of a few millimeters. There are at present seven orbit analysis centers worldwide capable of generating precise GPS ephemerides, and their orbit quality is of the order of about 10 cm. Therefore, the precise orbit model and phase processing technique were reviewed, and precise GPS ephemerides were produced after processing the phase observables of 28 global GPS stations for one day. The initial six orbit parameters and two solar radiation coefficients were estimated using a batch least-squares algorithm, and the final results were compared with the orbit of the IGS, the International GPS Service for Geodynamics.

  15. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  16. Robust acoustic object detection

    NASA Astrophysics Data System (ADS)

    Amit, Yali; Koloydenko, Alexey; Niyogi, Partha

    2005-10-01

    We consider a novel approach to the problem of detecting phonological objects like phonemes, syllables, or words, directly from the speech signal. We begin by defining local features in the time-frequency plane with built in robustness to intensity variations and time warping. Global templates of phonological objects correspond to the coincidence in time and frequency of patterns of the local features. These global templates are constructed by using the statistics of the local features in a principled way. The templates have clear phonetic interpretability, are easily adaptable, have built in invariances, and display considerable robustness in the face of additive noise and clutter from competing speakers. We provide a detailed evaluation of the performance of some diphone detectors and a word detector based on this approach. We also perform some phonetic classification experiments based on the edge-based features suggested here.

  17. Doubly robust survival trees.

    PubMed

    Steingrimsson, Jon Arni; Diao, Liqun; Molinaro, Annette M; Strawderman, Robert L

    2016-09-10

    Estimating a patient's mortality risk is important in making treatment decisions. Survival trees are a useful tool and employ recursive partitioning to separate patients into different risk groups. Existing 'loss based' recursive partitioning procedures that would be used in the absence of censoring have previously been extended to the setting of right censored outcomes using inverse probability censoring weighted estimators of loss functions. In this paper, we propose new 'doubly robust' extensions of these loss estimators motivated by semiparametric efficiency theory for missing data that better utilize available data. Simulations and a data analysis demonstrate strong performance of the doubly robust survival trees compared with previously used methods. Copyright © 2016 John Wiley & Sons, Ltd. PMID:27037609
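For intuition, the inverse-probability-of-censoring-weighted (IPCW) loss estimators that the doubly robust versions improve upon re-weight uncensored subjects by the inverse of the probability of remaining uncensored up to their event time. The censoring-survival curve and the data below are illustrative assumptions, not the paper's estimators.

```python
# IPCW weights: uncensored subjects get weight 1/G(T), censored get 0,
# where G(t) is the probability of remaining uncensored up to time t.

def ipcw_weights(times, events, censor_survival):
    return [(1.0 / censor_survival(t)) if e else 0.0
            for t, e in zip(times, events)]

G = lambda t: max(0.5, 1.0 - 0.1 * t)   # toy censoring-survival function
w = ipcw_weights([1.0, 2.0, 4.0], [1, 0, 1], G)
# late, uncensored observations are up-weighted; censored ones drop out
```

Discarding censored subjects entirely is exactly the inefficiency the doubly robust construction addresses, by also using an outcome model so that partial information from censored subjects contributes.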

  18. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√(N_sim) rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√(N_sim) limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
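The starting point can be shown on a 2x2 example: invert the sample covariance to obtain the sample precision matrix, then zero entries that are small in magnitude under the assumed sparsity. The abstract's estimator is more careful (it provides an error model rather than a hard threshold); the covariance values and threshold below are illustrative.

```python
# Sample precision matrix of a 2x2 covariance, followed by hard
# thresholding of near-zero entries to exploit assumed sparsity.

def inv2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def threshold(m, eps):
    return [[x if abs(x) >= eps else 0.0 for x in row] for row in m]

cov = [[2.0, 0.02], [0.02, 1.0]]        # nearly diagonal covariance
prec = inv2(cov)                        # tiny off-diagonals survive inversion
sparse_prec = threshold(prec, eps=0.05) # off-diagonals set exactly to zero
```

Forcing the genuinely-zero entries to zero removes their sampling noise, which is the basic reason sparsity-aware estimators converge faster than the raw sample precision matrix.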

  19. Robust reinforcement learning.

    PubMed

    Morimoto, Jun; Doya, Kenji

    2005-02-01

    This letter proposes a new reinforcement learning (RL) paradigm that explicitly takes into account input disturbance as well as modeling errors. The use of environmental models in RL is quite popular for both offline learning using simulations and for online action planning. However, the difference between the model and the real environment can lead to unpredictable, and often unwanted, results. Based on the theory of H(infinity) control, we consider a differential game in which a "disturbing" agent tries to make the worst possible disturbance while a "control" agent tries to make the best control input. The problem is formulated as finding a min-max solution of a value function that takes into account the amount of the reward and the norm of the disturbance. We derive online learning algorithms for estimating the value function and for calculating the worst disturbance and the best control in reference to the value function. We tested the paradigm, which we call robust reinforcement learning (RRL), on the control task of an inverted pendulum. In the linear domain, the policy and the value function learned by online algorithms coincided with those derived analytically by the linear H(infinity) control theory. For a fully nonlinear swing-up task, RRL achieved robust performance with changes in the pendulum weight and friction, while a standard reinforcement learning algorithm could not deal with these changes. We also applied RRL to the cart-pole swing-up task, and a robust swing-up policy was acquired.
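    The min-max objective described can be illustrated with a toy one-step game (a sketch only, not the paper's online learning algorithm; the values of x, gamma, and the grid are illustrative). The control agent picks u to minimize, and the disturbing agent picks w to maximize, a cost that rewards small state and control while penalizing the disturbance norm:

    ```python
    # Toy one-step min-max game in the H-infinity spirit of the abstract:
    #   J(u, w) = (x + u + w)^2 + u^2 - gamma^2 * w^2
    # With gamma > 1 the game has a saddle point; for x = 1, gamma = 2 the
    # analytic solution is u* = -4/7 with min-max value 4/7.
    x, gamma = 1.0, 2.0

    def J(u, w):
        return (x + u + w) ** 2 + u ** 2 - gamma ** 2 * w ** 2

    us = [i * 0.004 - 1.0 for i in range(501)]   # candidate controls in [-1, 1]
    ws = [i * 0.004 - 1.0 for i in range(501)]   # candidate disturbances in [-1, 1]

    # For each candidate control, find the worst-case disturbance, then pick
    # the control minimizing that worst case (the min-max solution).
    best_u, best_val = min(
        ((u, max(J(u, w) for w in ws)) for u in us),
        key=lambda t: t[1],
    )
    print(best_u, best_val)  # near -4/7 ≈ -0.571 and 4/7 ≈ 0.571
    ```

    The grid search recovers the analytic saddle point; the paper's contribution is learning the corresponding value function and worst disturbance online rather than by enumeration.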

  20. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  1. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  2. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  3. Competition improves robustness against loss of information.

    PubMed

    Kermani Kolankeh, Arash; Teichmann, Michael; Hamker, Fred H

    2015-01-01

    A substantial number of works have aimed at modeling the receptive field properties of the primary visual cortex (V1). Their evaluation criterion is usually the similarity of the model response properties to the recorded responses from biological organisms. However, as several algorithms were able to demonstrate some degree of similarity to biological data based on the existing criteria, we focus on the robustness against loss of information in the form of occlusions as an additional constraint for better understanding the algorithmic level of early vision in the brain. We investigate the influence of competition mechanisms on this robustness. To that end, we compared four methods employing different competition mechanisms, namely, independent component analysis, non-negative matrix factorization with sparseness constraint, predictive coding/biased competition, and a Hebbian neural network with lateral inhibitory connections. Each of those methods is known to be capable of developing receptive fields comparable to those of V1 simple-cells. Since measuring the robustness of methods having simple-cell like receptive fields against occlusion is difficult, we measure the robustness using the classification accuracy on the MNIST handwritten digit dataset. For this we trained all methods on the training set of the MNIST handwritten digits dataset and tested them on an MNIST test set with different levels of occlusions. We observe that methods which employ competitive mechanisms have higher robustness against loss of information. The kind of competition mechanism also plays an important role in robustness. Global feedback inhibition as employed in predictive coding/biased competition has an advantage compared to local lateral inhibition learned by an anti-Hebb rule.

  4. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, robustness against character segmentation errors is achieved by allowing the insertion of noise characters into, and the dropping of characters from, the keyword during retrieval; robustness against character recognition errors is achieved by allowing each keyword character to be substituted with one of its OCR recognition candidates or with any other character. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
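    The general idea of tolerating OCR insertions, deletions, and substitutions during keyword retrieval can be sketched with a plain edit-distance matcher (a generic illustration, not the authors' candidate-based method; the function names and the distance bound are mine):

    ```python
    def edit_distance(a: str, b: str) -> int:
        """Levenshtein distance: minimum insertions, deletions, substitutions."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (ca != cb)))   # substitution
            prev = cur
        return prev[-1]

    def fuzzy_find(keyword: str, text: str, max_dist: int = 1):
        """Return (word_index, word, distance) for words within max_dist edits."""
        hits = []
        for idx, word in enumerate(text.split()):
            d = edit_distance(keyword.lower(), word.lower())
            if d <= max_dist:
                hits.append((idx, word, d))
        return hits

    # Simulated OCR output: 'o' misread as 'e', and 'm' split into 'rn'
    ocr_text = "rebust keyword retrieval for OCRed docurnents"
    print(fuzzy_find("robust", ocr_text))                  # matches 'rebust' at distance 1
    print(fuzzy_find("documents", ocr_text, max_dist=2))   # matches 'docurnents' at distance 2
    ```

    Allowing a larger `max_dist` raises recall at the cost of precision, mirroring the recall/precision trade-off reported in the abstract.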

  5. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  6. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  7. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  8. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor-quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  9. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  10. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE = (0.8 ± 0.3) mm and NCC = 0.99 in the cadaveric head compared to TRE = (2.6 ± 1.0) mm and NCC = 0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE = (1.6 ± 0.9) mm compared to rigid registration TRE = (3.6 ± 1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1 × 1 × 2 mm³). The multiscale implementation based on optimal convergence criteria completed registration in
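    The convergence criterion described — iterate at one pyramid level until the change in the deformation field between iterations drops below a threshold, then advance to the next level — can be sketched as follows. This is a structural sketch only: the update step here merely relaxes the field toward a fixed target so that the loop terminates, whereas a real Demons update is driven by image intensities and gradients.

    ```python
    import numpy as np

    def register_multiscale(levels=3, shape=(8, 8, 2), tol=1e-3, max_iter=200):
        rng = np.random.default_rng(0)
        target = rng.standard_normal(shape)   # hypothetical "true" deformation
        field = np.zeros(shape)
        iters_per_level = []
        for level in range(levels):
            for it in range(1, max_iter + 1):
                new_field = field + 0.3 * (target - field)    # stand-in update step
                change = np.mean(np.abs(new_field - field))   # convergence metric
                field = new_field
                if change < tol:      # criterion met:
                    break             # advance to the next pyramid level
            iters_per_level.append(it)
        return field, iters_per_level

    field, iters = register_multiscale()
    print(iters)   # most iterations are spent at the first (coarsest) level
    ```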

  11. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    Prototype precision bolometer calibration bridge is manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  12. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  13. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  14. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis of relatively new terminology that has been given multiple definitions, often resulting in students learning principles that are actually false, is presented, with an example being the term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  15. High Precision Astrometry

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2012-10-01

    This program uses the enhanced astrometric precision enabled by spatial scanning to calibrate remaining obstacles to reaching <40 microarcsecond astrometry (<1 millipixel) with WFC3/UVIS by 1) improving geometric distortion, 2) calibrating the effect of breathing on astrometry, 3) calibrating the effect of CTE on astrometry, and 4) characterizing the boundaries and orientations of the WFC3 lithograph cells.

  16. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  17. Blink detection robust to various facial poses.

    PubMed

    Lee, Won Oh; Lee, Eui Chul; Park, Kang Ryoung

    2010-11-30

    Applications based on eye-blink detection have increased, as a result of which it is essential for eye-blink detection to be robust and non-intrusive irrespective of the changes in the user's facial pose. However, most previous studies on camera-based blink detection have the disadvantage that their performances were affected by the facial pose. They also focused on blink detection using only frontal facial images. To overcome these disadvantages, we developed a new method for blink detection, which maintains its accuracy despite changes in the facial pose of the subject. This research is novel in the following four ways. First, the face and eye regions are detected by using both the AdaBoost face detector and a Lucas-Kanade-Tomasi (LKT)-based method, in order to achieve robustness to facial pose. Secondly, the determination of the state of the eye (being open or closed), needed for blink detection, is based on two features: the ratio of height to width of the eye region in a still image, and the cumulative difference of the number of black pixels of the eye region using an adaptive threshold in successive images. These two features are robustly extracted irrespective of the lighting variations by using illumination normalization. Thirdly, the accuracy of determining the eye state - open or closed - is increased by combining the above two features on the basis of the support vector machine (SVM). Finally, the SVM classifier for determining the eye state is adaptively selected according to the facial rotation. Experimental results using various databases showed that the blink detection by the proposed method is robust to various facial poses. PMID:20826183
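    The two eye-state features described — the height-to-width ratio of the eye region and the count of dark pixels in a binarized eye image — can be sketched on a tiny synthetic example. A fixed ratio threshold stands in for the paper's SVM classifier, and the images and threshold value are purely illustrative:

    ```python
    def eye_features(img):
        """Height/width ratio of the dark region and its dark-pixel count.

        img is a binary image as a list of rows (1 = dark/eye pixel).
        """
        rows = [r for r, row in enumerate(img) if any(row)]
        cols = [c for c in range(len(img[0])) if any(row[c] for row in img)]
        height = rows[-1] - rows[0] + 1
        width = cols[-1] - cols[0] + 1
        black = sum(sum(row) for row in img)
        return height / width, black

    def is_open(img, ratio_threshold=0.4):
        """Threshold rule standing in for the paper's SVM eye-state classifier."""
        ratio, _ = eye_features(img)
        return ratio > ratio_threshold

    open_eye = [      # tall, roughly elliptical dark region
        [0, 0, 1, 1, 0, 0],
        [0, 1, 1, 1, 1, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 1, 1, 1, 1, 0],
        [0, 0, 1, 1, 0, 0],
    ]
    closed_eye = [    # thin horizontal line of dark pixels
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0, 0],
    ]

    print(is_open(open_eye), is_open(closed_eye))  # True False
    ```

    In the paper, such features are additionally normalized against illumination changes and combined by an SVM whose parameters are selected per facial rotation.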

  18. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth mass can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  19. Precision Falling Body Experiment

    ERIC Educational Resources Information Center

    Blackburn, James A.; Koenig, R.

    1976-01-01

    Described is a simple apparatus to determine acceleration due to gravity. It utilizes direct contact switches in lieu of conventional photocells to time the fall of a ball bearing. Accuracies to better than one part in a thousand were obtained. (SL)

  20. Robust Systems Test Framework

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  1. Robust quantum spatial search

    NASA Astrophysics Data System (ADS)

    Tulsi, Avatar

    2016-07-01

    Quantum spatial search has been widely studied with most of the study focusing on quantum walk algorithms. We show that quantum walk algorithms are extremely sensitive to systematic errors. We present a recursive algorithm which offers significant robustness to certain systematic errors. To search N items, our recursive algorithm can tolerate errors of size O(1/√(ln N)) which is exponentially better than quantum walk algorithms for which tolerable error size is only O(ln N/√N). Also, our algorithm does not need any ancilla qubit. Thus our algorithm is much easier to implement experimentally compared to quantum walk algorithms.
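    The two tolerable-error scalings quoted can be compared numerically by evaluating the stated asymptotic forms directly (constants dropped, so only the trend is meaningful):

    ```python
    import math

    # Tolerable systematic-error sizes from the abstract (constants dropped):
    #   recursive algorithm:  O(1 / sqrt(ln N))
    #   quantum walk:         O(ln N / sqrt(N))
    for exp in (3, 6, 9, 12):
        n = 10 ** exp
        recursive = 1 / math.sqrt(math.log(n))
        walk = math.log(n) / math.sqrt(n)
        print(f"N=1e{exp}: recursive {recursive:.3g}, walk {walk:.3g}, "
              f"ratio {recursive / walk:.3g}")
    ```

    The walk tolerance shrinks polynomially in N while the recursive tolerance shrinks only like 1/√(ln N), so their ratio grows rapidly — the "exponentially better" claim in the abstract.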

  2. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
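    The outlier-sparsity idea behind the estimator can be illustrated with soft-thresholding of residuals, which is the closed-form solution of a scalar l1-regularized least-squares step. This is a generic stand-in, not the paper's full kriged Kalman filter; the data and regularization weight are illustrative:

    ```python
    def soft_threshold(r, lam):
        """Closed-form solution of min_o (r - o)^2 / 2 + lam * |o|."""
        if r > lam:
            return r - lam
        if r < -lam:
            return r + lam
        return 0.0

    predictions = [10.0, 10.2, 9.9, 10.1, 10.0]     # model's spatial-temporal predictions
    measurements = [10.1, 10.3, 25.0, 10.0, 9.8]    # third sensor reading is anomalous

    lam = 1.0  # regularization weight: larger -> fewer points flagged as outliers
    outliers = [soft_threshold(m - p, lam) for p, m in zip(predictions, measurements)]
    flagged = [i for i, o in enumerate(outliers) if o != 0.0]
    cleaned = [m - o for m, o in zip(measurements, outliers)]
    print(flagged)   # [2]
    print(cleaned)
    ```

    Small residuals are absorbed as noise (outlier estimate exactly zero), while the anomalous measurement is identified and corrected — the joint prediction/identification behavior the abstract describes, in miniature.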

  3. Robust Systems Test Framework

    SciTech Connect

    Ballance, Robert A.

    2003-01-01

    The Robust Systems Test Framework (RSTF) provides a means of specifying and running test programs on various computation platforms. RSTF provides a level of specification above standard scripting languages. During a set of runs, standard timing information is collected. The RSTF specification can also gather job-specific information, and can include ways to classify test outcomes. All results and scripts can be stored into and retrieved from an SQL database for later data analysis. RSTF also provides operations for managing the script and result files, and for compiling applications and gathering compilation information such as optimization flags.

  4. Robust telescope scheduling

    NASA Technical Reports Server (NTRS)

    Swanson, Keith; Bresina, John; Drummond, Mark

    1994-01-01

    This paper presents a technique for building robust telescope schedules that tend not to break. The technique is called Just-In-Case (JIC) scheduling and it implements the common sense idea of being prepared for likely errors, just in case they should occur. The JIC algorithm analyzes a given schedule, determines where it is likely to break, reinvokes a scheduler to generate a contingent schedule for each highly probable break case, and produces a 'multiply contingent' schedule. The technique was developed for an automatic telescope scheduling problem, and the paper presents empirical results showing that Just-In-Case scheduling performs extremely well for this problem.

  5. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  6. Robust control for uncertain structures

    NASA Technical Reports Server (NTRS)

    Douglas, Joel; Athans, Michael

    1991-01-01

    Viewgraphs on robust control for uncertain structures are presented. Topics covered include: robust linear quadratic regulator (RLQR) formulas; mismatched LQR design; RLQR design; interpretations of RLQR design; disturbance rejection; and performance comparisons: RLQR vs. mismatched LQR.

  7. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99 % confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
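    The error-ellipsoid construction described — semi-axes from the eigen-decomposition of the 3-D variance-covariance matrix, scaled to a confidence level — can be sketched as follows. The covariance values are illustrative only, chosen to mimic the cigar shape and 1/3/213 axis ratio from the abstract; 11.345 is the standard chi-square quantile for 3 degrees of freedom at 99% confidence.

    ```python
    import numpy as np

    # Illustrative variances in m^2 for (azimuth, range, cross-range):
    # cm-level in azimuth and range, meter-level in cross-range.
    cov = np.diag([0.03**2, 0.09**2, 6.4**2])

    eigvals, eigvecs = np.linalg.eigh(cov)   # principal axes of the ellipsoid
    k = np.sqrt(11.345)                      # chi-square(3 dof), 99% confidence
    semi_axes = k * np.sqrt(eigvals)         # semi-axis lengths, ascending order
    ratio = semi_axes / semi_axes[0]
    print(semi_axes)
    print(ratio)                             # recovers the 1 : 3 : ~213 shape
    ```

    For a non-diagonal covariance, the eigenvectors additionally give the ellipsoid's orientation, which is what allows intersecting it with 3D building models to link scatterers to objects.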

  8. Micro-Precision Interferometer: Pointing Control System

    NASA Technical Reports Server (NTRS)

    O'Brien, John

    1995-01-01

    This paper describes the development of the wavefront tilt (pointing) control system for the JPL Micro-Precision Interferometer (MPI). This control system employs piezo-electric actuators and a digital imaging sensor with feedback compensation to reject errors in instrument pointing. Stringent performance goals require large feedback, however, several characteristics of the plant tend to restrict the available bandwidth. A robust 7th-order wavefront tilt control system was successfully implemented on the MPI instrument, providing sufficient disturbance rejection performance to satisfy the established interference fringe visibility requirement.

  9. Atomically Precise Surface Engineering for Producing Imagers

    NASA Technical Reports Server (NTRS)

    Greer, Frank (Inventor); Jones, Todd J. (Inventor); Nikzad, Shouleh (Inventor); Hoenk, Michael E. (Inventor)

    2015-01-01

    High-quality surface coatings, and techniques combining the atomic precision of molecular beam epitaxy and atomic layer deposition, to fabricate such high-quality surface coatings are provided. The coatings made in accordance with the techniques set forth by the invention are shown to be capable of forming silicon CCD detectors that demonstrate world record detector quantum efficiency (>50%) in the near and far ultraviolet (155 nm-300 nm). The surface engineering approaches used demonstrate the robustness of detector performance that is obtained by achieving atomic level precision at all steps in the coating fabrication process. As proof of concept, the characterization, materials, and exemplary devices produced are presented along with a comparison to other approaches.

  10. Moving Liquids with Sound: The Physics of Acoustic Droplet Ejection for Robust Laboratory Automation in Life Sciences.

    PubMed

    Hadimioglu, Babur; Stearns, Richard; Ellson, Richard

    2016-02-01

    Liquid handling instruments for life science applications based on droplet formation with focused acoustic energy or acoustic droplet ejection (ADE) were introduced commercially more than a decade ago. While the idea of "moving liquids with sound" was known in the 20th century, the development of precise methods for acoustic dispensing to aliquot life science materials in the laboratory began in earnest in the 21st century with the adaptation of the controlled "drop on demand" acoustic transfer of droplets from high-density microplates for high-throughput screening (HTS) applications. Robust ADE implementations for life science applications achieve excellent accuracy and precision by using acoustics first to sense the liquid characteristics relevant for its transfer, and then to actuate transfer of the liquid with customized application of sound energy to the given well and well fluid in the microplate. This article provides an overview of the physics behind ADE and its central role in both acoustical and rheological aspects of robust implementation of ADE in the life science laboratory and its broad range of ejectable materials.

  11. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  12. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

    The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as type of soil block used (monolithic or reconstructed), drainage (drainage by gravity or vacuum, or a water table may be maintained), or weighing or non-weighing lysimeters. In 2004, experimental investigations were set up to assess the impact of different farming systems on the groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low-input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: The lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracing experiments have been carried out to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters. The results show that such effects can be neglected. Precisely weighing load cells are used to constantly determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent
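
    The stated equivalence between the 0.05 kg weighing accuracy and 0.05 mm of water follows directly from the lysimeter's 1 m2 surface area. A quick illustration (the function name is ours):

```python
def weight_change_to_mm(delta_kg, area_m2=1.0):
    """Convert a lysimeter weight change to its water-column equivalent.
    Water density is ~1000 kg/m^3, so 1 kg over 1 m^2 is a 1 mm layer."""
    volume_m3 = delta_kg / 1000.0          # mass of water -> volume
    return volume_m3 / area_m2 * 1000.0    # depth in m -> depth in mm

# The 0.05 kg resolution over the 1 m^2 surface is indeed 0.05 mm of water.
resolution_mm = weight_change_to_mm(0.05)
```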

  13. A passion for precision

    ScienceCinema

    None

    2016-07-12

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  14. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  15. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  16. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, L C

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  17. Precision orbit determination of altimetric satellites

    NASA Astrophysics Data System (ADS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-11-01

    The ability to determine accurate global sea level variations is important to both detection and understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements are only recently possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well defined terrestrial reference frame. SLR tracking in support of precision orbit determination of altimetric satellites is significant. Recent examples are the use of SLR as the primary tracking systems for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long term absolute sea level measurements. The U.S. Navy's Geosat mission is equipped with only Doppler beacons and lacks laser retroreflectors. However, even the Geosat orbits computed using the available full 40-station Tranet tracking network exhibit significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  18. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  19. Precise clock synchronization protocol

    NASA Astrophysics Data System (ADS)

    Luit, E. J.; Martin, J. M. M.

    1993-12-01

    A distributed clock synchronization protocol is presented which achieves a very high precision without the need for very frequent resynchronizations. The protocol tolerates failures of the clocks: clocks may be too slow or too fast, exhibit omission failures and report inconsistent values. Synchronization takes place in synchronization rounds as in many other synchronization protocols. At the end of each round, clock times are exchanged between the clocks. Each clock applies a convergence function (CF) to the values obtained. This function estimates the difference between its clock and an average clock and corrects its clock accordingly. Clocks are corrected for drift relative to this average clock during the next synchronization round. The protocol is based on the assumption that clock reading errors are small with respect to the required precision of synchronization. It is shown that the CF resynchronizes the clocks with high precision even when relatively large clock drifts are possible. It is also shown that the drift-corrected clocks remain synchronized until the end of the next synchronization round. The stability of the protocol is proven.
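
    The abstract does not spell out its convergence function, but a classic fault-tolerant averaging CF of the kind described (estimate an "average clock" from the exchanged values, then correct toward it) can be sketched as follows; the trimming parameter f and all names are illustrative, not the paper's:

```python
def fault_tolerant_average(readings, own, f):
    """Convergence-function sketch (Welch-Lynch style fault-tolerant
    averaging): drop the f highest and f lowest exchanged clock values
    to bound the influence of faulty or inconsistent clocks, take the
    mean of the rest as the "average clock", and return the correction
    that moves this clock toward it."""
    trimmed = sorted(readings)[f:len(readings) - f]
    average_clock = sum(trimmed) / len(trimmed)
    return average_clock - own   # amount to add to the local clock

# Five clocks, one badly faulty (250.0); with f = 1 it is discarded.
correction = fault_tolerant_average([100.2, 100.0, 99.9, 250.0, 100.1],
                                    own=100.0, f=1)
```

    Applying the returned correction at the end of each round, and spreading it as a drift correction over the next round, matches the scheme the abstract outlines.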

  20. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-07-01

    The Large Electron-Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results was the precise measurement of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while SUSY provides an excellent candidate for dark matter. In addition, SUSY removes the quadratic divergences of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  1. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-09-01

    The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results was the precise measurement of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergences of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  2. Standardization of radon measurements. 2. Accuracy and proficiency testing

    SciTech Connect

    Matuszek, J.M.

    1990-01-01

    The accuracy of in situ environmental radon measurement techniques is reviewed and new data for charcoal canister, alpha-track (track-etch) and electret detectors are presented. Deficiencies reported at the 1987 meeting in Wurenlingen, Federal Republic of Germany, for measurements using charcoal detectors are confirmed by the new results. The accuracy and precision of the alpha-track measurements were better than in 1987. Electret detectors appear to provide a convenient, accurate, and precise system for the measurement of radon concentration. The need for comprehensive, blind proficiency-testing programs is discussed.

  3. Multi-oriented windowed harmonic phase reconstruction for robust cardiac strain imaging.

    PubMed

    Cordero-Grande, Lucilio; Royuela-del-Val, Javier; Sanz-Estébanez, Santiago; Martín-Fernández, Marcos; Alberola-López, Carlos

    2016-04-01

    The purpose of this paper is to develop a method for direct estimation of the cardiac strain tensor by extending the harmonic phase reconstruction on tagged magnetic resonance images to obtain more precise and robust measurements. The extension relies on the reconstruction of the local phase of the image by means of the windowed Fourier transform and the acquisition of an overdetermined set of stripe orientations in order to avoid the phase interferences from structures outside the myocardium and the instabilities arising from the application of a gradient operator. Results have shown that increasing the number of acquired orientations provides a significant improvement in the reproducibility of the strain measurements and that the acquisition of an extended set of orientations also improves the reproducibility when compared with acquiring repeated samples from a smaller set of orientations. Additionally, biases in local phase estimation when using the original harmonic phase formulation are greatly diminished by the one here proposed. The ideas here presented allow the design of new methods for motion sensitive magnetic resonance imaging, which could simultaneously improve the resolution, robustness and accuracy of motion estimates.

  4. Multi-oriented windowed harmonic phase reconstruction for robust cardiac strain imaging.

    PubMed

    Cordero-Grande, Lucilio; Royuela-del-Val, Javier; Sanz-Estébanez, Santiago; Martín-Fernández, Marcos; Alberola-López, Carlos

    2016-04-01

    The purpose of this paper is to develop a method for direct estimation of the cardiac strain tensor by extending the harmonic phase reconstruction on tagged magnetic resonance images to obtain more precise and robust measurements. The extension relies on the reconstruction of the local phase of the image by means of the windowed Fourier transform and the acquisition of an overdetermined set of stripe orientations in order to avoid the phase interferences from structures outside the myocardium and the instabilities arising from the application of a gradient operator. Results have shown that increasing the number of acquired orientations provides a significant improvement in the reproducibility of the strain measurements and that the acquisition of an extended set of orientations also improves the reproducibility when compared with acquiring repeated samples from a smaller set of orientations. Additionally, biases in local phase estimation when using the original harmonic phase formulation are greatly diminished by the one here proposed. The ideas here presented allow the design of new methods for motion sensitive magnetic resonance imaging, which could simultaneously improve the resolution, robustness and accuracy of motion estimates. PMID:26745763

  5. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. The automatic designing capability of GRN topology that can exhibit robust behavior can dramatically change the current practice in synthetic biology. A recent study shows that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico, for identifying network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness, and we designed a fitness-approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors: oscillation and bistability, although the framework is generalized for evolving other types of responses. The algorithm identified robust GRN architectures, which were verified using different analyses and comparisons. Analysis of the results also shed light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; hence simulation of this natural process can be very valuable for designing robust biological systems. PMID:25616055
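
    As a rough illustration of Monte Carlo robustness quantification of the kind the abstract describes (the actual GRN dynamics and fitness approximation are not reproduced), robustness can be estimated as the fraction of random parameter perturbations that preserve a target behavior. Here `behavior_ok` is a hypothetical user-supplied predicate such as "the network is still bistable":

```python
import random

def topological_robustness(behavior_ok, nominal, trials=1000, spread=0.5):
    """Monte Carlo robustness sketch: the fraction of random parameter
    perturbations (each parameter scaled by a uniform factor in
    [1-spread, 1+spread]) under which the network still exhibits the
    target behavior, as judged by the predicate behavior_ok(params)."""
    hits = 0
    for _ in range(trials):
        perturbed = {name: value * random.uniform(1.0 - spread, 1.0 + spread)
                     for name, value in nominal.items()}
        if behavior_ok(perturbed):
            hits += 1
    return hits / trials
```

    In the paper's setting the predicate would run a (costly) simulation of the GRN, which is why a fitness-approximation step is needed in practice.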

  6. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  7. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications, and in particular, the individual characteristics that underlie adaptive thinking.

  8. Robust springback compensation

    NASA Astrophysics Data System (ADS)

    Carleer, Bart; Grimm, Peter

    2013-12-01

    Springback simulation and springback compensation are increasingly applied in the productive use of die engineering. In order to successfully compensate a tool, accurate springback results are needed, as well as an effective compensation approach. In this paper a methodology is introduced for compensating tools effectively. The first step is full process simulation, meaning that not only the drawing operation is simulated but also all secondary operations such as trimming and flanging. The second is verification that the process is robust, meaning that it obtains repeatable results. In order to compensate effectively, a minimum clamping concept is then defined. Once these preconditions are fulfilled, the tools can be compensated effectively.

  9. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
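
    A representative quadrature rule with the properties the abstract describes (rapid convergence even with endpoint singularities) is tanh-sinh, or double-exponential, quadrature. The sketch below runs at ordinary machine precision only; the paper itself works at hundreds to thousands of digits with parallel evaluation, and singular integrands need more careful abscissa arithmetic than double precision allows:

```python
import math

def tanh_sinh(f, a, b, h=0.05, tol=1e-15):
    """Tanh-sinh (double-exponential) quadrature of f over [a, b].

    The change of variable x = tanh((pi/2)*sinh(t)) pushes the
    abscissas toward the endpoints double-exponentially fast, so the
    plain trapezoid rule in t converges extremely quickly."""
    c, d = (b - a) / 2.0, (a + b) / 2.0
    total = (math.pi / 2.0) * f(d)           # k = 0 node: u = 0, weight pi/2
    k = 1
    while True:
        t = k * h
        g = (math.pi / 2.0) * math.sinh(t)
        u = math.tanh(g)                     # abscissa in (-1, 1)
        w = (math.pi / 2.0) * math.cosh(t) / math.cosh(g) ** 2
        if w < tol:                          # weights decay double-exponentially
            break
        total += w * (f(d + c * u) + f(d - c * u))
        k += 1
    return total * c * h

# Integral of 4/(1+x^2) over [0, 1] is pi.
approx_pi = tanh_sinh(lambda x: 4.0 / (1.0 + x * x), 0.0, 1.0)
```

    With about a hundred function evaluations this already agrees with pi to near machine precision, which hints at why the rule scales so well to the multi-thousand-digit parallel setting the paper reports.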

  10. Robust stability of second-order systems

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1995-01-01

    It has been shown recently how virtual passive controllers can be designed for second-order dynamic systems to achieve robust stability. The virtual controllers were visualized as systems made up of spring, mass and damping elements. In this paper, a new approach to the same second-order dynamic systems, emphasizing the notion of positive realness, is used. Necessary and sufficient conditions for positive realness are presented for scalar spring-mass-dashpot systems. For multi-input multi-output systems, we show how a mass-spring-dashpot system can be made positive real by properly choosing its output variables. In particular, sufficient conditions are shown for the system without output velocity. Furthermore, if velocity cannot be measured then the system parameters must be precise to keep the system positive real. In practice, system parameters are not always constant and cannot be measured precisely. Therefore, in order to be useful, positive real systems must be robust to some degree. This can be achieved with the design presented in this paper.

  11. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem, and its performance exceeded that of the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of nonlinearity is less than 1000 times that of the nominal linear response, the error associated with applying successive linearization will result in errors in the response of less than 10% compared to the full nonlinear error.

  12. Study of fringe tracking for high-precision space-based interferometers

    NASA Astrophysics Data System (ADS)

    Padilla, Carlos E.; Karlov, Valeri I.; Li, Jun; Chun, Hon M.; Tsitsiklis, John N.; Reasenberg, Robert D.

    1995-06-01

    The purpose of the fringe tracking algorithms is to maintain lock on the target star after acquisition and to obtain the most accurate estimate possible of the scientific quantity (or quantities) of interest in the presence of dynamic disturbances to the spacecraft/interferometer ensemble. This study carries out an analysis of the performance and robustness achievable by four candidate estimation techniques when applied to an ultra-high-precision fringe tracking task (5 micro-arcsecond ultimate accuracy). The first class of fringe trackers studied includes the Extended Kalman Filter. This class is followed by extensions to second- and third-order nonlinear filters developed by the authors. The higher order filters have expanded regions of convergence. Third, we consider the use of an invariant filter (IF) to estimate the angle between two target stars (using POINTS as a test case). The IF offers the advantage of improved robustness in the dynamical case, being in effect "invariant" to dynamics. Finally, Discrete Bayes Algorithms make use of Bayes' decision rule to propagate the a posteriori distribution of the true parameter and take into account the discrete character of the Poisson photon arrival events. Variations of these algorithms, known as multiple hypotheses trackers, offer great promise for dim star tracking. An exploration of filter performance with respect to several parameters is carried out analytically and selected Monte Carlo simulations are carried out both to verify analytical predictions and to study performance.
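
    As a toy illustration of the first candidate class, a scalar extended-Kalman-filter measurement update can be written for a simple fringe-intensity model z = 1 + V*cos(phi) + noise. The model, the parameter values, and the function name are ours for illustration, not the study's:

```python
import math

def ekf_update(phi, P, z, V=1.0, R=0.01):
    """One EKF measurement update for the toy model z = 1 + V*cos(phi).
    phi: phase estimate, P: its variance, z: measured fringe intensity,
    V: fringe visibility, R: measurement-noise variance."""
    h = 1.0 + V * math.cos(phi)   # predicted measurement
    H = -V * math.sin(phi)        # Jacobian dh/dphi at the estimate
    S = H * P * H + R             # innovation variance
    K = P * H / S                 # Kalman gain
    phi = phi + K * (z - h)       # corrected phase estimate
    P = (1.0 - K * H) * P         # reduced uncertainty
    return phi, P

# Repeated updates with a noiseless measurement pull the estimate
# toward the true phase (0.6 rad here) from an initial guess of 0.5.
phi, P = 0.5, 1.0
z = 1.0 + math.cos(0.6)
for _ in range(20):
    phi, P = ekf_update(phi, P, z)
```

    The linearization through the Jacobian is exactly what limits the EKF's region of convergence, which is why the study also considers higher-order filters and discrete Bayes trackers.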

  13. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model exhibited statistical significance (p<0.001 and p=0.020, respectively). PMID:24720423

  14. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images, over open networks. More specifically, the aim is to embed binarised features extracted using discrete wavelet transforms and local binary patterns of face images as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect the stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by steganalysis techniques for LSB, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining the accuracy of face recognition.
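
    The abstract leaves the exact encoding implicit; one plausible reading of the LSB-Witness idea is sketched below (our interpretation, not the paper's published algorithm): leave the LSB plane untouched, and use the second LSB of each pixel as a witness telling the receiver whether the cover's LSB already equals the secret bit.

```python
def embed_lsb_witness(pixels, bits):
    """For each message bit, keep the pixel's LSB as-is and set the
    second LSB to 1 if the LSB already equals the secret bit, else 0."""
    stego = list(pixels)
    for i, bit in enumerate(bits):
        lsb = stego[i] & 1
        witness = 1 if lsb == bit else 0
        stego[i] = (stego[i] & ~2) | (witness << 1)   # rewrite 2nd LSB only
    return stego

def extract_lsb_witness(stego, n):
    """Recover n bits: take the LSB when the witness says it matches,
    otherwise take its complement."""
    bits = []
    for i in range(n):
        lsb = stego[i] & 1
        witness = (stego[i] >> 1) & 1
        bits.append(lsb if witness else 1 - lsb)
    return bits

pixels = [10, 7, 200, 33]
stego = embed_lsb_witness(pixels, [1, 0, 1, 1])
```

    Because the LSB plane is never modified, LSB-plane statistics such as pairs of values remain those of the cover, which is the intuition behind the claimed resistance to PoV and RS steganalysis.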

  15. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118
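
    The retraining loop described can be illustrated with a toy nearest-centroid classifier over two fabricated features per trace (movement variance and step periodicity); all numbers and names below are invented for illustration, not the study's data:

```python
import math

def centroids_of(samples):
    """samples: list of (features, label) -> per-label mean feature vector."""
    by_label = {}
    for feats, label in samples:
        by_label.setdefault(label, []).append(feats)
    return {label: tuple(sum(col) / len(vecs) for col in zip(*vecs))
            for label, vecs in by_label.items()}

def classify(centroids, x):
    """Return the label whose centroid is closest to feature vector x."""
    return min(centroids, key=lambda label: math.dist(centroids[label], x))

# Honest training data: (movement variance, step periodicity).
honest = [((0.1, 0.0), "sitting"), ((1.0, 0.9), "walking")]
model = centroids_of(honest)

shake_seated = (1.1, 0.1)            # deceptive: shaking the phone while seated
before = classify(model, shake_seated)            # the cheat works: "walking"

# Retrain with the deceptive trace labeled with the true activity.
model = centroids_of(honest + [(shake_seated, "sitting")])
after = classify(model, shake_seated)             # now detected: "sitting"
```

    The study's point is that a few such deceptive traces, folded back into training, generalize across individuals.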

  16. Making Activity Recognition Robust against Deceptive Behavior

    PubMed Central

    Saeb, Sohrab; Körding, Konrad; Mohr, David C.

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  17. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
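    The phase-II regression refinement can be sketched as follows, with invented error rates standing in for the AVRI data: phase-I photointerpreted error rates are observed at all plots, ground truth at a subsample, and a fitted line adjusts the overall estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Double-sampling regression estimator (illustrative numbers only):
# phase-I photointerpreted error rates at 160 plots, ground truth at an
# 80-plot phase-II subsample, linked by a linear regression model.
x_all = rng.uniform(0.1, 0.6, 160)            # phase-I error, all plots
sub = rng.choice(160, 80, replace=False)      # phase-II subsample
y_sub = 0.9 * x_all[sub] + 0.05 + rng.normal(0, 0.02, 80)  # ground truth

b, a = np.polyfit(x_all[sub], y_sub, 1)       # slope, intercept
estimate = a + b * x_all.mean()               # regression-adjusted error
```

    The regression leverages the cheap phase-I measurements at every plot while correcting their bias with the expensive ground visits, which is why the design met the ±10 percent precision target at a fraction of a full ground survey's cost.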

  18. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework anchored in info-gap decision theory is adopted. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
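    The info-gap trade-off can be illustrated with a toy robustness calculation (invented models and tolerances, not the report's structural applications): a nominally more accurate but parameter-sensitive model can have a smaller robustness horizon than a slightly biased, insensitive one.

```python
import numpy as np

# Toy info-gap robustness: the largest uncertainty horizon alpha such that
# the worst-case prediction error over theta in [theta0-alpha, theta0+alpha]
# stays within a tolerance. All models and numbers are illustrative.
def robustness(predict, theta0, target, tol, alphas):
    best = 0.0
    for a in alphas:
        th = np.linspace(theta0 - a, theta0 + a, 201)
        if np.max(np.abs(predict(th) - target)) <= tol:
            best = a
    return best

target, tol = 1.0, 0.2
model_A = lambda th: 1.0 + 2.0 * (th - 1.0)   # exact at theta0, sensitive
model_B = lambda th: 1.1 + 0.3 * (th - 1.0)   # small bias, insensitive
alphas = np.linspace(0.0, 1.0, 101)
rA = robustness(model_A, 1.0, target, tol, alphas)
rB = robustness(model_B, 1.0, target, tol, alphas)
```

    Here model A is nominally the more accurate of the two, yet model B tolerates roughly three times more parameter uncertainty before violating the error tolerance, the pattern the report describes.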

  19. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity.

  20. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity. PMID:20165203

  1. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e- and p-pbar colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  2. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  3. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  4. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  5. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses related to 12C is briefly reviewed as well as experiments and recent discussions for a future new definition of the kilogram, the SI unit of mass.

  6. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial

  7. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation.

    PubMed

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the
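    The benefit of multi-start optimization on a multimodal cost surface can be sketched generically. The toy below uses Nelder-Mead restarts on an invented 2D cost, not the paper's CMA-ES or its image similarity metric:

```python
import numpy as np
from scipy.optimize import minimize

# Toy multimodal "registration" cost: quadratic bowl plus cosine side lobes.
# A single local search often stalls in a side lobe; restarting from many
# random initializations recovers the basin of the global optimum.
def cost(t):
    t = np.asarray(t)
    return float(np.sum(t**2) - 3.0 * np.sum(np.cos(3.0 * t)))

rng = np.random.default_rng(0)
starts = rng.uniform(-4.0, 4.0, size=(20, 2))   # 20 random initializations
runs = [minimize(cost, s, method="Nelder-Mead") for s in starts]
best = min(runs, key=lambda r: r.fun)            # keep the best local optimum
```

    This is the same robustness-versus-time trade-off the abstract reports: fewer restarts run faster but risk convergence to a side lobe when initialization is poor.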

  8. Precise image-guided irradiation of small animals: a flexible non-profit platform.

    PubMed

    Tillner, Falk; Thute, Prasad; Löck, Steffen; Dietrich, Antje; Fursov, Andriy; Haase, Robert; Lukas, Mathias; Rimarzig, Bernd; Sobiella, Manfred; Krause, Mechthild; Baumann, Michael; Bütof, Rebecca; Enghardt, Wolfgang

    2016-04-21

    Preclinical in vivo studies using small animals are essential to develop new therapeutic options in radiation oncology. Of particular interest are orthotopic tumour models, which better reflect the clinical situation in terms of growth patterns and microenvironmental parameters of the tumour as well as the interplay of tumours with the surrounding normal tissues. Such orthotopic models increase the technical demands and the complexity of preclinical studies as local irradiation with therapeutically relevant doses requires image-guided target localisation and accurate beam application. Moreover, advanced imaging techniques are needed for monitoring treatment outcome. We present a novel small animal image-guided radiation therapy (SAIGRT) system, which allows for precise and accurate, conformal irradiation and x-ray imaging of small animals. High accuracy is achieved by its robust construction, the precise movement of its components and a fast high-resolution flat-panel detector. Field forming and x-ray imaging is accomplished close to the animal resulting in a small penumbra and a high image quality. Feasibility for irradiating orthotopic models has been proven using lung tumour and glioblastoma models in mice. The SAIGRT system provides a flexible, non-profit academic research platform which can be adapted to specific experimental needs and therefore enables systematic preclinical trials in multicentre research networks.

  9. Precise image-guided irradiation of small animals: a flexible non-profit platform

    NASA Astrophysics Data System (ADS)

    Tillner, Falk; Thute, Prasad; Löck, Steffen; Dietrich, Antje; Fursov, Andriy; Haase, Robert; Lukas, Mathias; Rimarzig, Bernd; Sobiella, Manfred; Krause, Mechthild; Baumann, Michael; Bütof, Rebecca; Enghardt, Wolfgang

    2016-04-01

    Preclinical in vivo studies using small animals are essential to develop new therapeutic options in radiation oncology. Of particular interest are orthotopic tumour models, which better reflect the clinical situation in terms of growth patterns and microenvironmental parameters of the tumour as well as the interplay of tumours with the surrounding normal tissues. Such orthotopic models increase the technical demands and the complexity of preclinical studies as local irradiation with therapeutically relevant doses requires image-guided target localisation and accurate beam application. Moreover, advanced imaging techniques are needed for monitoring treatment outcome. We present a novel small animal image-guided radiation therapy (SAIGRT) system, which allows for precise and accurate, conformal irradiation and x-ray imaging of small animals. High accuracy is achieved by its robust construction, the precise movement of its components and a fast high-resolution flat-panel detector. Field forming and x-ray imaging is accomplished close to the animal resulting in a small penumbra and a high image quality. Feasibility for irradiating orthotopic models has been proven using lung tumour and glioblastoma models in mice. The SAIGRT system provides a flexible, non-profit academic research platform which can be adapted to specific experimental needs and therefore enables systematic preclinical trials in multicentre research networks.

  10. Precision and power grip priming by observed grasping.

    PubMed

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-11-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right or left hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right hand; power device/left hand). The observed hand was making either accurate or inaccurate precision or power grasps, and participants signalled the accuracy of the observed grip by making one or the other response depending on instructions. Responses were made faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand was performing the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of the automatic imitation coding of the observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.

  11. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material keeps the shock wave in the barrel from predetonating the HE pellet before the flyer arrives. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  12. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material keeps the shock wave in the barrel from predetonating the HE pellet before the flyer arrives. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  13. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  14. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e+e- linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  15. A robust, inexpensive wavelength meter using a commercial color sensor

    NASA Astrophysics Data System (ADS)

    Jones, Tyler; Otterstrom, Nils; Jackson, Jarom; Archibald, James; Durfee, Dallin

    2015-05-01

    Commercial color sensor chips are used in a variety of consumer electronics. Many are built to specifications far above those needed for their typical uses, some having temperature coefficients of only a few parts per million and using precision 16-bit analog-to-digital converters. Using such a device, we were able to measure the wavelength of a laser with a precision of 0.01 nm and a calibration drift of similar magnitude over several days. Factors that influence the precision and accuracy, such as etalon effects in the sensor, temperature dependence, intensity variations, and timing, will be discussed. Funding by Brigham Young University and the National Science Foundation.
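    One plausible way such a sensor yields a wavelength, assumed here rather than taken from the abstract, is to invert the ratio of two overlapping channel responses against a calibration curve. The Gaussian channel responses below are invented; a real device would use measured filter curves.

```python
import numpy as np

# Invented calibration: two overlapping color channels modelled as Gaussians.
# Over the overlap region their response ratio is monotonic in wavelength,
# so a lookup curve inverts a measured ratio to a wavelength.
wl = np.linspace(600.0, 650.0, 501)            # calibration grid, nm
red = np.exp(-((wl - 640.0) / 30.0) ** 2)
green = np.exp(-((wl - 560.0) / 30.0) ** 2)
ratio = red / green                             # strictly increasing here

def wavelength_from_ratio(r):
    # np.interp requires the ratio axis to be increasing, as it is here
    return float(np.interp(r, ratio, wl))
```

    Because the ratio divides out overall intensity, this scheme is first-order insensitive to laser power fluctuations, one of the accuracy factors the abstract mentions.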

  16. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy.

    PubMed

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy.
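    The precision effect of an informative prior is easy to see in a conjugate beta-binomial sketch. The numbers are illustrative, not the Thai forest-plot data: the informative prior tightens the posterior (precision), while its effect on accuracy depends on whether the prior information is right.

```python
import numpy as np

# Beta-binomial posterior for a mortality probability: Beta(a, b) prior
# updated with `deaths` deaths out of `n` trees.
def posterior(alpha, beta, deaths, n):
    a, b = alpha + deaths, beta + n - deaths
    mean = a / (a + b)
    sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

deaths, n = 6, 40                               # invented plot data
vague_mean, vague_sd = posterior(1, 1, deaths, n)    # flat Beta(1,1) prior
inf_mean, inf_sd = posterior(15, 85, deaths, n)      # prior centred on 15%
```

    The informative posterior has the smaller standard deviation; whether its mean is closer to the truth depends entirely on how well the prior's 15% reflects reality, which is the paper's point about accuracy.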

  17. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique of Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy (NICE-OHVMS) has previously been used to measure transitions of H_3^+, CH_5^+, and HCO^+ precisely and accurately to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.

  18. Accuracy of laser beam center and width calculations.

    PubMed

    Mana, G; Massa, E; Rovera, A

    2001-03-20

    The application of lasers in high-precision measurements and the demand for accuracy make the plane-wave model of laser beams unsatisfactory. Measurements of the variance of the transverse components of the photon impulse are essential for wavelength determination. Accuracy evaluation of the relevant calculations is thus an integral part of the assessment of the wavelength of stabilized-laser radiation. We present a propagation-of-error analysis on variance calculations when digitized intensity profiles are obtained by means of silicon video cameras. Image clipping criteria are obtained that maximize the accuracy of the computed result.

  19. A robust DCT domain watermarking algorithm based on chaos system

    NASA Astrophysics Data System (ADS)

    Xiao, Mingsong; Wan, Xiaoxia; Gan, Chaohua; Du, Bo

    2009-10-01

    Digital watermarking is a technique that can be used for protecting and enforcing the intellectual property (IP) rights of digital media, such as digital images exchanged in copyright transactions. Many digital watermarking algorithms exist; however, most are not robust enough against geometric attacks and signal processing operations. In this paper, a robust watermarking algorithm based on a chaos array in the DCT (discrete cosine transform) domain for gray images is proposed. The algorithm provides a one-to-one method to extract the watermark. Experimental results have proved that this new method has high accuracy and is highly robust against geometric attacks, signal processing operations and geometric transformations. Furthermore, anyone without knowledge of the key cannot find the position of the embedded watermark. As a result, the watermark is not easy to modify, so this scheme is secure and robust.
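    A minimal sketch of chaos-keyed DCT-domain embedding follows, assuming (not taken from the paper) that a logistic map seeded by a secret key selects which coefficients to modulate:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Sketch only, not the authors' algorithm: a logistic map seeded by a secret
# key picks which non-DC DCT coefficients carry watermark bits, so extraction
# without the key is impractical. Works on a float image; quantising back to
# 8-bit would require a larger embedding strength.
def key_positions(key, n, limit):
    x, pos = key, []
    while len(pos) < n:
        x = 3.99 * x * (1.0 - x)         # chaotic logistic map iteration
        p = int(x * limit)
        if p > 0 and p not in pos:       # skip the DC coefficient, no repeats
            pos.append(p)
    return pos

def embed(img, bits, key, strength=20.0):
    C = dctn(img.astype(float), norm="ortho")
    flat = C.ravel()                     # view into C
    for p, b in zip(key_positions(key, len(bits), flat.size), bits):
        flat[p] = strength if b else -strength   # sign encodes the bit
    return idctn(C, norm="ortho")

def extract(img, n_bits, key):
    C = dctn(img.astype(float), norm="ortho").ravel()
    return [1 if C[p] > 0 else 0 for p in key_positions(key, n_bits, C.size)]
```

    Sign-based embedding in mid-band coefficients survives mild filtering, which is the usual motivation for DCT-domain over spatial-domain watermarks.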

  20. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  1. Development of a precision large deployable antenna

    NASA Astrophysics Data System (ADS)

    Iwata, Yoji; Yamamoto, Kazuo; Noda, Takahiko; Tamai, Yasuo; Ebisui, Takashi; Miura, Koryo; Takano, Tadashi

    This paper describes the results of a study of a precision large deployable antenna for the space VLBI satellite 'MUSES-B'. An antenna with high gain and pointing accuracy is required for the mission objective. The required frequency bands are 22, 5, and 1.6 GHz, and the required aperture diameter of the reflector is 10 meters. A displaced-axis Cassegrain antenna is adopted, with a mesh reflector formed on the tension truss concept. Analysis shows that an aperture efficiency of 60 percent at 22.15 GHz and a surface accuracy of 0.5 mm rms are achievable. A one-fourth scale model of the reflector has been assembled in order to verify the design and clarify problems in the manufacturing and assembly processes.

  2. Precise autofocusing microscope with rapid response

    NASA Astrophysics Data System (ADS)

    Liu, Chien-Sheng; Jiang, Sheng-Hong

    2015-03-01

    Rapid on-line or off-line automated vision inspection is a critical operation in manufacturing. Accordingly, this study designs and characterizes a novel precise optics-based autofocusing microscope with a rapid response and no reduction in focusing accuracy. In contrast to conventional optics-based autofocusing microscopes based on the centroid method, the proposed microscope incorporates a high-speed rotating optical diffuser, which reduces the variation of the image centroid position and consequently improves the focusing response. The proposed microscope is characterized and verified experimentally using a laboratory-built prototype. The experimental results show that, compared to conventional optics-based autofocusing microscopes, the proposed microscope achieves a more rapid response with no reduction in focusing accuracy. Consequently, the proposed microscope represents an alternative solution for both existing and emerging industrial applications of automated vision inspection.
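The statistical intuition behind suppressing centroid jitter can be sketched numerically: if successive frames carry roughly independent centroid perturbations, averaging M of them shrinks the centroid's standard deviation by about √M. The simulation below is a purely statistical caricature of the effect (the actual device achieves decorrelation optically with the rotating diffuser); all numbers are invented.

```python
import math
import random
import statistics

random.seed(0)

def centroid(intensity):
    """Intensity-weighted centroid of a 1-D profile."""
    total = sum(intensity)
    return sum(i * v for i, v in enumerate(intensity)) / total

def noisy_profile(center=50.0, width=5.0, noise=0.05, n=101):
    """A Gaussian focus spot plus additive sensor noise (all invented)."""
    return [max(math.exp(-((i - center) ** 2) / (2 * width ** 2))
                + random.gauss(0.0, noise), 1e-9)
            for i in range(n)]

# jitter of a single-frame centroid vs. the centroid averaged over 16 frames
single = [centroid(noisy_profile()) for _ in range(200)]
averaged = [statistics.mean(centroid(noisy_profile()) for _ in range(16))
            for _ in range(200)]
```

With 16 effectively independent frames per estimate, the averaged centroid's spread drops to roughly a quarter of the single-frame spread.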

  3. Visual inspection reliability for precision manufactured parts

    SciTech Connect

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Visual inspection has been extensively researched since the early 20th century; however, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  4. Precision Electroforming For Optical Disk Manufacturing

    NASA Astrophysics Data System (ADS)

    Rodia, Carl M.

    1985-04-01

    Precision electroforming in the replication of optical disks is discussed, with an overview of electroforming technology capabilities, limitations, and tolerance criteria. The use of expendable and reusable mandrels is treated, along with techniques for resist master preparation and processing. A review of applications and common reasons for success and failure is offered. Problems such as tensile/compressive stress, roughness, and flatness are discussed. Advice is given on approaches, classic and novel, for remedying and avoiding specific problems. An abridged process description of optical memory disk mold electroforming is presented, from resist master through metallization and electroforming. Emphasis is placed on methods of achieving accuracy and quality assurance.

  5. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100 m diameter advanced single-dish radio telescope designed for a wide range of astronomical projects, with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50 GHz. Our ultimate goal is to extend the observing frequency limit up to 115 GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210 μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are currently targeting the effects of wind as the next, currently uncompensated, source of error.

  6. The robustness of complex networks

    NASA Astrophysics Data System (ADS)

    Albert, Reka

    2002-03-01

    Many complex networks display a surprising degree of tolerance against errors. For example, organisms and ecosystems exhibit remarkable robustness to large variations in temperature, moisture, and nutrients, and communication networks continue to function despite local failures. This presentation will explore the effects of the network topology on its robust functioning. First, we will consider the topological integrity of several networks under node disruption. Then we will focus on the functional robustness of biological signaling networks, and the decisive role played by the network topology in this robustness.
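The notion of topological integrity under node disruption can be illustrated with a toy computation: in a hub-dominated network, random node failures barely shrink the largest connected component, while targeted removal of the hubs shatters it. The graph below is an invented caricature, not data from the presentation.

```python
import random

def giant_component(nodes, edges):
    """Size of the largest connected component (iterative traversal)."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:       # ignore edges to removed nodes
            adj[a].add(b)
            adj[b].add(a)
    seen = set()
    best = 0
    for start in nodes:
        if start in seen:
            continue
        comp = {start}
        queue = [start]
        while queue:
            n = queue.pop()
            for m in adj[n]:
                if m not in comp:
                    comp.add(m)
                    queue.append(m)
        seen |= comp
        best = max(best, len(comp))
    return best

# an invented hub-dominated network: two hubs, each wired to 50 leaves
nodes = list(range(102))
edges = ([(0, i) for i in range(2, 52)]
         + [(1, i) for i in range(52, 102)]
         + [(0, 1)])

intact = giant_component(nodes, edges)                        # 102
random.seed(1)
failed = set(random.sample(range(2, 102), 5))                 # 5 random leaves
random_failure = giant_component([n for n in nodes if n not in failed], edges)
targeted_attack = giant_component([n for n in nodes if n not in {0, 1}], edges)
```

Random loss of five leaves leaves a 97-node giant component, while removing just the two hubs isolates every remaining node.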

  7. Robust rate-control for wavelet-based image coding via conditional probability models.

    PubMed

    Gaubatz, Matthew D; Hemami, Sheila S

    2007-03-01

    Real-time rate-control for wavelet image coding requires characterization of the rate required to code quantized wavelet data. An ideal robust solution can be used with any wavelet coder and any quantization scheme. A large number of wavelet quantization schemes (perceptual and otherwise) are based on scalar dead-zone quantization of wavelet coefficients. A key to performing rate-control is, thus, fast, accurate characterization of the relationship between rate and quantization step size, the R-Q curve. A solution is presented using two invocations of the coder that estimates the slope of each R-Q curve via probability modeling. The method is robust to choices of probability models, quantization schemes and wavelet coders. Because of extreme robustness to probability modeling, a fast approximation to spatially adaptive probability modeling can be used in the solution, as well. With respect to achieving a target rate, the proposed approach and associated fast approximation yield average percentage errors around 0.5% and 1.0% on images in the test set. By comparison, 2-coding-pass rho-domain modeling yields errors around 2.0%, and post-compression rate-distortion optimization yields average errors of around 1.0% at rates below 0.5 bits-per-pixel (bpp) that decrease down to about 0.5% at 1.0 bpp; both methods exhibit more competitive performance on the larger images. The proposed method and fast approximation approach are also similar in speed to the other state-of-the-art methods. In addition to possessing speed and accuracy, the proposed method does not require any training and can maintain precise control over wavelet step sizes, which adds flexibility to a wavelet-based image-coding system.
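The R-Q relationship at the heart of this rate-control problem can be illustrated with synthetic data: dead-zone quantize Laplacian-like coefficients at several step sizes and measure the first-order entropy of the indices as a stand-in for coded rate. This sketch does not implement the paper's conditional probability models; everything below is illustrative.

```python
import math
import random

def deadzone_quantize(x, step):
    """Scalar dead-zone quantization: int() truncation toward zero
    gives the doubled-width zero bin (the 'dead zone')."""
    return int(x / step)

def empirical_rate(samples, step):
    """First-order entropy (bits/sample) of the quantization indices,
    used here as a stand-in for the coded rate."""
    counts = {}
    for x in samples:
        q = deadzone_quantize(x, step)
        counts[q] = counts.get(q, 0) + 1
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# Laplacian-like 'wavelet coefficients' (difference of two exponentials)
coeffs = [random.expovariate(1.0) - random.expovariate(1.0)
          for _ in range(20000)]

steps = [0.25, 0.5, 1.0, 2.0]
rates = [empirical_rate(coeffs, s) for s in steps]
# rate falls monotonically as the quantization step grows: the R-Q curve
```

Rate-control then amounts to inverting this monotone curve to find the step size that hits a target rate, which is why a fast, accurate R-Q characterization matters.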

  8. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned of the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both positions miss an important point: reticence is a matter not only of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  9. Robust image segmentation using local robust statistics and correntropy-based K-means clustering

    NASA Astrophysics Data System (ADS)

    Huang, Chencheng; Zeng, Li

    2015-03-01

    Segmenting real-world images with intensity inhomogeneity, such as magnetic resonance (MR) and computed tomography (CT) images, is an important task. In practice, such images are often corrupted by noise, which makes them difficult to segment with traditional level-set-based segmentation models. In this paper, we propose a robust level set image segmentation model that combines local and global fitting energies to segment noisy images. In the proposed model, the local fitting energy is based on the local robust statistics (LRS) of the input image, which efficiently reduces the effects of noise, and the global fitting energy utilizes the correntropy-based K-means (CK) method, which adaptively emphasizes the samples that are close to their corresponding cluster centers. By integrating the advantages of global information and local robust statistics, the proposed model can efficiently segment images with intensity inhomogeneity and noise. A level set regularization term is used to avoid re-initialization procedures during curve evolution, and a Gaussian filter is used to keep the level set smooth. The proposed model is first presented as a two-phase model and then extended to a multi-phase one. Experimental results on synthetic and real images show the advantages of our model in terms of accuracy and robustness to noise.
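The correntropy-based K-means idea of adaptively emphasizing samples close to their cluster centers can be sketched in one dimension: weight each sample's contribution to the center update by a Gaussian kernel of its distance, so gross outliers contribute almost nothing. This is a simplified stand-in for the paper's CK method; the kernel width, initialization, and data are assumptions.

```python
import math
import random

def ck_means(samples, sigma=1.0, iters=30):
    """Two-cluster K-means variant in which each sample's pull on its
    cluster centre is weighted by the Gaussian kernel
    exp(-d^2 / (2 sigma^2)), so samples close to a centre dominate and
    gross outliers are effectively ignored (a simplified,
    correntropy-style stand-in for the paper's CK method)."""
    srt = sorted(samples)
    centres = [srt[len(srt) // 4], srt[3 * len(srt) // 4]]  # rough init
    for _ in range(iters):
        sums = [0.0, 0.0]
        wts = [0.0, 0.0]
        for x in samples:
            j = 0 if abs(x - centres[0]) <= abs(x - centres[1]) else 1
            w = math.exp(-((x - centres[j]) ** 2) / (2.0 * sigma ** 2))
            sums[j] += w * x
            wts[j] += w
        centres = [sums[i] / wts[i] if wts[i] > 0 else centres[i]
                   for i in range(2)]
    return sorted(centres)

random.seed(1)
data = ([random.gauss(0.0, 0.5) for _ in range(200)]
        + [random.gauss(5.0, 0.5) for _ in range(200)]
        + [50.0, -40.0])                    # two gross outliers
centres = ck_means(data)
```

The outliers at 50 and -40 receive near-zero kernel weight, so the recovered centers sit near the two true modes at 0 and 5.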

  10. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    PubMed

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was not only capable of harvesting energy from natural bicycling and car motion but also acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG renders an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing.

  12. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower-precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the differences between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of 2D and 3D IPJRs.

  13. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  14. Precision laser cutting

    SciTech Connect

    Kautz, D.D.; Anglin, C.D.; Ramos, T.J.

    1990-01-19

    Many materials that are otherwise difficult to fabricate can be cut precisely with lasers. This presentation discusses the advantages and limitations of laser cutting for refractory metals, ceramics, and composites. Cutting in these materials was performed with a 400-W, pulsed Nd:YAG laser. Important cutting parameters such as beam power, pulse waveforms, cutting gases, travel speed, and laser coupling are outlined. The effects of process parameters on cut quality are evaluated. Three variables are used to determine the cut quality: kerf width, slag adherence, and metallurgical characteristics of recast layers and heat-affected zones around the cuts. Results indicate that ductile materials with good coupling characteristics (such as stainless steel alloys and tantalum) cut well. Materials lacking one or both of these properties (such as tungsten and ceramics) are difficult to cut without proper part design, stress relief, or coupling aids. 3 refs., 2 figs., 1 tab.

  15. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...
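The precision definition quoted in the rule is easy to compute directly. In the sketch below the ten analyzer responses are invented for illustration, and the use of the sample (n-1) standard deviation is an assumption.

```python
import statistics

# ten repetitive analyzer responses to the same calibration gas
# (values invented for illustration)
responses = [101.2, 100.8, 101.0, 100.9, 101.1,
             100.7, 101.3, 101.0, 100.9, 101.1]

# the rule defines precision as 2.5 times the standard deviation of
# 10 repetitive responses; the sample (n-1) deviation is assumed here
precision = 2.5 * statistics.stdev(responses)   # about 0.46
```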

  16. 40 CFR 91.314 - Analyzer accuracy and specifications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... 91.314 Section 91.314 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR... Provisions § 91.314 Analyzer accuracy and specifications. (a) Measurement accuracy—general. The analyzers... precision is defined as 2.5 times the standard deviation(s) of 10 repetitive responses to a...

  17. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely because it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature- and pressure-regulated chamber, locked to a Zeeman-stabilized HeNe laser, we measure changes in the frequency of our diode laser to ˜1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short- and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: J. Cariou and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). J. Coker et al., J. Opt. Soc. Am. B {28}, 2934 (2011). J. Verges et al., Physica Scripta {25}, 338 (1982). Ph. Courteille et al., Appl. Phys. B {59}, 187 (1994). T.J. Scholl et al., J. Opt. Soc. Am. B {22}, 1128 (2005).

  18. Operating a real time high accuracy positioning system

    NASA Astrophysics Data System (ADS)

    Johnston, G.; Hanley, J.; Russell, D.; Vooght, A.

    2003-04-01

    The paper reviews the history and development of real-time DGPS services before describing the design of a high accuracy commercial GPS augmentation system and service currently delivering precise positioning products to users over a wide area. The infrastructure and system are explained in relation to the users' need for high accuracy and high integrity of positioning. A comparison of the different techniques for the delivery of data is provided to outline the technical approach taken. Examples of the performance of the real-time system are shown in various regions and modes to illustrate the currently achievable accuracies. Having described the current GPS-based situation, a review of the potential of the Galileo system is presented. Following brief contextual information relating to the Galileo project, core system and services, the paper identifies possible key applications and the main user communities for sub-decimetre level precise positioning. The paper addresses the Galileo and modernised GPS signals in space that are relevant to commercial precise positioning for the future, and discusses the implications for precise positioning performance. An outline of the proposed architecture is described, with pointers towards a successful implementation. Central to this discussion is an assessment of the likely evolution of system infrastructure and user equipment implementation, prospects for new applications, and their effect upon the business case for precise positioning services.

  19. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is due either to head movements within the scanning session or to systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within-session and between-session head movements. Systematic errors in matching to the MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were <1.5 mm between sessions. We corroborated these scalp-based estimates by looking at the MEG data recorded over a 6-month period. We found that the between-session sensor variability of the subject's evoked response was of the order of the within-session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor-level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of co-registration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models, and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  20. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components of this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servo-controlled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 x 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics and the sensitivity of the system. Dynamic performance was obtained using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft, which was tracked out to a maximum range of 15 km, confirming the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite. It passively tracked a +7.5 magnitude star, and the signal-to-noise ratio in this experiment indicates that it should be possible to track a +12.5 magnitude star.
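The two figures quoted above imply a simple linear error model. The sketch below infers the proportionality constant from them and applies a 25 microrad floor for low-acceleration targets; treating the relationship as exactly linear with this floor is our extrapolation, not the paper's.

```python
# the abstract quotes ~0.1 mrad error at 0.6 rad/sec^2 and a ~25 microrad
# rms floor for low-acceleration targets; a linear model through those
# two figures (our extrapolation, not the paper's) gives:
K = 0.1e-3 / 0.6   # rad of error per rad/sec^2 of angular acceleration

def tracking_error(angular_accel, floor=25e-6):
    """Predicted tracking error in radians for a given angular
    acceleration in rad/sec^2, clipped at the low-acceleration floor."""
    return max(K * angular_accel, floor)

low = tracking_error(0.05)    # floor-limited: 25 microrad
high = tracking_error(0.6)    # reproduces the quoted ~0.1 mrad
```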

  2. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  3. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  4. Robust skew estimation using straight lines in document images

    NASA Astrophysics Data System (ADS)

    Koo, Hyung Il; Cho, Nam Ik

    2016-05-01

    A skew-estimation method using straight lines in document images is presented. Unlike conventional approaches exploiting the properties of text, we formulate the skew-estimation problem as an estimation task using straight lines in images and focus on robust and accurate line detection. To be precise, we adopt a block-based edge detector followed by a progressive line detector to take clues from a variety of sources such as text lines, boundaries of figures/tables, vertical/horizontal separators, and boundaries of textblocks. Extensive experiments on the datasets of skewed images and competition results reveal that the proposed method works robustly and yields accurate skew-estimation results.
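A simplified version of line-based skew estimation can be sketched as follows: take the angle of each detected segment, fold near-vertical separators onto the horizontal axis, and use the median so that spurious diagonal edges do not bias the estimate. This is a toy stand-in for the block-based edge detector and progressive line detector described above; the segments are synthetic.

```python
import math
import statistics

def estimate_skew(segments):
    """Median-based skew estimate from detected line segments.  Angles
    are normalised to (-90, 90] degrees; near-vertical segments
    (separators, figure borders) are folded onto the horizontal axis,
    and the median resists spurious diagonal edges."""
    angles = []
    for x0, y0, x1, y1 in segments:
        a = math.degrees(math.atan2(y1 - y0, x1 - x0))
        a = (a + 90.0) % 180.0 - 90.0
        if abs(a) > 45.0:
            a -= math.copysign(90.0, a)
        angles.append(a)
    return statistics.median(angles)

# synthetic page skewed by ~2 degrees: three text baselines,
# one vertical separator, and one spurious 45-degree edge
segments = [(0, 0, 100, 3.49), (0, 20, 100, 23.49), (0, 40, 100, 43.49),
            (10, 0, 6.5, 100),
            (0, 0, 50, 50)]
skew = estimate_skew(segments)   # close to +2 degrees
```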

  5. Robust adaptive dynamic programming with an application to power systems.

    PubMed

    Jiang, Yu; Jiang, Zhong-Ping

    2013-07-01

    This brief presents a novel framework of robust adaptive dynamic programming (robust-ADP) aimed at computing globally stabilizing and suboptimal control policies in the presence of dynamic uncertainties. A key strategy is to integrate ADP theory with techniques in modern nonlinear control, with the objective of filling a gap in the past ADP literature, which has not taken dynamic uncertainties into account. Neither the system dynamics nor the system order is required to be precisely known. As an illustrative example, the computational algorithm is applied to the controller design of a two-machine power system. PMID:24808528
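ADP methods build on policy iteration for optimal control. As a minimal illustration of that underlying iteration (with a fully known model, which is precisely what robust-ADP avoids needing), the sketch below runs Kleinman's policy iteration on a scalar continuous-time LQR problem; all coefficients are invented.

```python
def kleinman_scalar(a, b, q, r, k0, iters=20):
    """Kleinman's policy iteration for the scalar continuous-time LQR
    problem x' = a*x + b*u with cost integral of (q*x^2 + r*u^2).
    Policy evaluation solves the scalar Lyapunov equation
        2*(a - b*k)*p + q + r*k**2 = 0
    for the cost p of the current gain k; policy improvement then sets
    k = b*p/r.  The initial gain k0 must stabilise: a - b*k0 < 0."""
    k = k0
    p = 0.0
    for _ in range(iters):
        p = (q + r * k * k) / (2.0 * (b * k - a))   # Lyapunov solve
        k = b * p / r                               # policy improvement
    return p, k

# converges to the positive root of the Riccati equation
#   2*a*p - (b**2/r)*p**2 + q = 0
p, k = kleinman_scalar(a=1.0, b=1.0, q=1.0, r=1.0, k0=2.0)
```

For a = b = q = r = 1 the Riccati equation gives p = 1 + sqrt(2), and the iteration converges to it quadratically.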

  6. High accuracy wavelength calibration for a scanning visible spectrometer

    SciTech Connect

    Scotti, Filippo; Bell, Ronald E.

    2010-10-15

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ~0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
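A sine drive makes the selected wavelength (to first order) a linear function of the lead-screw position, which is why a handful of reference lines suffices to calibrate a scan. The sketch below fits that linear model by least squares; the positions and line wavelengths are invented, and the actual method described here fits all spectrometer parameters jointly from multiple calibration spectra.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = m*x + c."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# hypothetical (motor position, known line wavelength in angstroms) pairs
positions = [1000.0, 2000.0, 3000.0, 4000.0]
wavelengths = [4000.0, 4500.0, 5000.0, 5500.0]

m, c = fit_line(positions, wavelengths)
predicted = m * 2500.0 + c   # 4750.0 angstroms at an intermediate position
```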

  8. Facial symmetry in robust anthropometrics.

    PubMed

    Kalina, Jan

    2012-05-01

    Image analysis methods commonly used in forensic anthropology do not have desirable robustness properties, which can be ensured by robust statistical methods. In this paper, the face localization in images is carried out by detecting symmetric areas in the images. Symmetry is measured between two neighboring rectangular areas in the images using a new robust correlation coefficient, which down-weights regions in the face violating the symmetry. Raw images of faces without usual preliminary transformations are considered. The robust correlation coefficient based on the least weighted squares regression yields very promising results also in the localization of such faces, which are not entirely symmetric. Standard methods of statistical machine learning are applied for comparison. The robust correlation analysis can be applicable to other problems of forensic anthropology.
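A minimal sketch of a symmetry correlation that down-weights regions violating the symmetry is given below. The Cauchy-style reweighting is a stand-in assumption for illustration, not the paper's least weighted squares estimator:

```python
import numpy as np

def robust_symmetry_corr(left, right, n_iter=5):
    """Correlation between a region and the mirror image of its neighbor,
    iteratively down-weighting pixels that violate the symmetry.
    Illustrative sketch only; weights follow a Cauchy-style rule."""
    x = left.astype(float).ravel()
    y = np.fliplr(right).astype(float).ravel()
    w = np.ones_like(x)
    for _ in range(n_iter):
        xm, ym = np.average(x, weights=w), np.average(y, weights=w)
        resid = np.abs((x - xm) - (y - ym))       # symmetry violation per pixel
        scale = np.median(resid) + 1e-12
        w = 1.0 / (1.0 + (resid / scale) ** 2)    # down-weight large violations
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - xm) * (y - ym), weights=w)
    sx = np.sqrt(np.average((x - xm) ** 2, weights=w))
    sy = np.sqrt(np.average((y - ym) ** 2, weights=w))
    return cov / (sx * sy + 1e-12)

# A nearly mirror-symmetric pair scores close to 1.
rng = np.random.default_rng(0)
left = rng.random((20, 20))
right = np.fliplr(left) + 0.01 * rng.standard_normal((20, 20))
r_sym = robust_symmetry_corr(left, right)
```

Scanning such a score over neighboring rectangles is the basic mechanism for locating a face by its symmetric areas.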

  9. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    containing fossil biofilm, including the 3.5-b.y.-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymer and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.

  10. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavors will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. 
This work develops a novel technique of both avoiding the use of approximate computational codes as well as allowing the application of new, more precise analysis

  11. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. Analysis of the time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions either in the horizontal plane or the vertical axis. A second stationary GPS antenna at a distance of several meters was simultaneously collecting high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.

  12. A Precision Variable, Double Prism Attenuator for CO(2) Lasers.

    PubMed

    Oseki, T; Saito, S

    1971-01-01

    A precision, double prism attenuator for CO(2) lasers, calibrated by its gap capacitance, was constructed to evaluate its possible use as a standard for attenuation measurements. It was found that the accuracy was about 0.1 dB with a dynamic range of about 40 dB.

  13. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  14. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.

  15. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees manufacturing of devices with critical dimensions < 20 nm. Overlay specifications of single-digit nanometers call for nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), at the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® in order to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® lithography roadmap for CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long-term CTE behavior of ZERODUR® components, to optimize the dimensional stability of precision positioning devices. CTE data for several low thermal expansion materials are compared with regard to their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of +/- 10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
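The quoted ppb/K tolerance can be put in perspective with one line of arithmetic, dL = alpha * L * dT; the bar length and temperature step below are illustrative assumptions, not values from the record:

```python
def thermal_expansion_nm(length_m, cte_per_K, delta_T_K):
    """Length change dL = alpha * L * dT, returned in nanometres."""
    return cte_per_K * length_m * delta_T_K * 1e9

# A 1 m bar at the +/-10 ppb/K tolerance bound, warmed by 1 K,
# grows by only about 10 nm -- the scale that makes nanometer
# positioning hardware thermally feasible.
dl_nm = thermal_expansion_nm(1.0, 10e-9, 1.0)
```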

  16. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  17. Environment Assisted Precision Magnetometry

    NASA Astrophysics Data System (ADS)

    Cappellaro, P.; Goldstein, G.; Maze, J. R.; Jiang, L.; Hodges, J. S.; Sorensen, A. S.; Lukin, M. D.

    2010-03-01

    We describe a method to enhance the sensitivity of magnetometry and achieve nearly Heisenberg-limited precision measurement using a novel class of entangled states. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas strongly coupled to the sensor qubit; it does not depend on the exact values of the couplings (allowing the use of disordered systems), and is resilient to decoherence. As a specific example we consider electronic spins in the solid state, where the ancillary system is associated with the surrounding spin bath. The conventional approach has been to consider these spins only as a source of decoherence and to adopt decoupling schemes to mitigate their effects. Here we describe novel control techniques that transform the environment spins into a resource used to amplify the sensor spin response to weak external perturbations, while maintaining the beneficial effects of dynamical decoupling sequences. We discuss specific applications to improve magnetic sensing with diamond nano-crystals, using one Nitrogen-Vacancy center spin coupled to Nitrogen electronic spins.
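The scaling advantage can be stated in two lines. This is only the textbook shot-noise (1/sqrt(N)) versus Heisenberg-like (1/N) comparison, not the paper's derivation, and N = 100 is an arbitrary illustration:

```python
import math

def field_uncertainty(n_ancillas, entangled):
    """Relative measurement uncertainty for N ancilla spins at fixed
    interrogation time: shot-noise-limited sensing scales as 1/sqrt(N),
    while the entangled-ancilla scheme approaches 1/N (illustrative)."""
    n = n_ancillas
    return 1.0 / n if entangled else 1.0 / math.sqrt(n)

# For 100 strongly coupled ancillas, the entangled scheme is 10x better.
gain = field_uncertainty(100, False) / field_uncertainty(100, True)
```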

  18. Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair

    NASA Astrophysics Data System (ADS)

    Sasou, Akira; Kojima, Hiroaki

    2009-12-01

    Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear a sensor such as a headset microphone, which can be an impediment, especially for the hand disabled. Conversely, it is well known that speech recognition accuracy degrades drastically when the microphone is placed far from the user. In this paper, we develop a noise robust speech recognition system for a voice-driven wheelchair that achieves almost the same recognition accuracy as a headset microphone without requiring the user to wear any sensors. We verified the effectiveness of our system in experiments in different environments.

  19. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  20. RSRE: RNA structural robustness evaluator.

    PubMed

    Shu, Wenjie; Bo, Xiaochen; Zheng, Zhiqiang; Wang, Shengqi

    2007-07-01

    Biological robustness, defined as the ability to maintain stable functioning in the face of various perturbations, is an important and fundamental topic in current biology, and has become a focus of numerous studies in recent years. Although structural robustness has been explored in several types of RNA molecules, the origins of robustness are still controversial. Computational analysis results are needed to make up for the lack of evidence of robustness in natural biological systems. The RNA structural robustness evaluator (RSRE) web server presented here provides a freely available online tool to quantitatively evaluate the structural robustness of RNA based on the widely accepted definition of neutrality. Several classical structure comparison methods are employed; five randomization methods are implemented to generate control sequences; sub-optimal predicted structures can be optionally utilized to mitigate the uncertainty of secondary structure prediction. With a user-friendly interface, the web application is easy to use. Intuitive illustrations are provided along with the original computational results to facilitate analysis. The RSRE will be helpful in the wide exploration of RNA structural robustness and will catalyze our understanding of RNA evolution. The RSRE web server is freely available at http://biosrv1.bmi.ac.cn/RSRE/ or http://biotech.bmi.ac.cn/RSRE/.

  1. Low Cost Precision Lander for Lunar Exploration

    NASA Astrophysics Data System (ADS)

    Hoppa, G. V.; Head, J. N.; Gardner, T. G.; Seybold, K. G.

    2004-12-01

    For 60 years the US Defense Department has invested heavily in producing small, low mass, precision-guided vehicles. The technologies matured under these programs include terrain-aided navigation, closed loop terminal guidance algorithms, robust autopilots, high thrust-to-weight propulsion, autonomous mission management software, sensors, and data fusion. These technologies will aid NASA in addressing New Millennium Science and Technology goals as well as the requirements flowing from the Moon to Mars vision articulated in January 2004. Establishing and resupplying a long-term lunar presence will require automated landing precision not yet demonstrated. Precision landing will increase safety and assure mission success. In our lander design, science instruments amount to 10 kg, 16% of the lander vehicle mass. This compares favorably with 7% for Mars Pathfinder and less than 15% for Surveyor. The mission design relies on a cruise stage for navigation and TCMs for the lander's flight to the moon. The landing sequence begins with a solid motor burn to reduce the vehicle speed to 300-450 m/s. At this point the lander is about 2 minutes from touchdown and has 600 to 700 m/s delta-v capability. This allows for about 10 km of vehicle divert during terminal descent. This concept of operations closely mimics missile operational protocol used for decades: the vehicle remains inert, then must execute its mission flawlessly on a moment's notice. The vehicle design uses a propulsion system derived from heritage MDA programs. A redesigned truss provides hard points for landing gear, electronics, power supply, and science instruments. A radar altimeter and a Digital Scene Matching Area Correlator (DSMAC) provide data for the terminal guidance algorithms. This approach leverages the billions of dollars DoD has invested in these technologies, to land useful science payloads precisely on the lunar surface at relatively low cost.

  2. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometry precision for space debris, based on the mathematical morphology operator. Variable structural elements along multiple directions are adopted for image transformation, and then all the resultant images are stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the astrometry accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and reference stars is improved distinctly. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
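The core operation, grey-scale opening with linear structuring elements along several directions followed by stacking of the results, can be sketched in NumPy. The four directions, the element length, and the maximum-stacking rule here are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def _shift(a, sy, sx, fill):
    """Array shifted so that out[y, x] = a[y + sy, x + sx], padded with fill."""
    H, W = a.shape
    out = np.full((H, W), fill, dtype=float)
    y0, y1 = max(0, -sy), H - max(0, sy)
    x0, x1 = max(0, -sx), W - max(0, sx)
    out[y0:y1, x0:x1] = a[y0 + sy:y1 + sy, x0 + sx:x1 + sx]
    return out

def directional_open_stack(img, half=2):
    """Grey-scale opening (erosion then dilation) with a linear structuring
    element along each of four directions; the directional results are
    stacked by a pixelwise maximum. Removes point-like degradation while
    keeping elongated features supported by at least one direction."""
    img = img.astype(float)
    stacked = np.full_like(img, -np.inf)
    for dy, dx in [(0, 1), (1, 0), (1, 1), (1, -1)]:   # 0, 90, 45, 135 deg
        ero = img.copy()
        for k in range(-half, half + 1):
            if k:
                ero = np.minimum(ero, _shift(img, k * dy, k * dx, np.inf))
        dil = ero.copy()
        for k in range(-half, half + 1):
            if k:
                dil = np.maximum(dil, _shift(ero, k * dy, k * dx, -np.inf))
        stacked = np.maximum(stacked, dil)
    return stacked
```

An isolated hot pixel is erased by every directional opening, while a streak survives in the direction aligned with it.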

  3. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  4. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades mobile mapping systems have done their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm level position accuracy, a technique referred to as post-processed carrier phase differential GNSS (DGNSS) is used. For this technique to be effective the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning or PPP method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. 
Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with

  5. Precision positioning of earth orbiting remote sensing systems

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.; Yunck, T. P.; Wu, S. C.

    1987-01-01

    Decimeter tracking accuracy is sought for a number of precise earth sensing satellites to be flown in the 1990's. This accuracy can be achieved with techniques which use the Global Positioning System (GPS) in a differential mode. A precisely located global network of GPS ground receivers and a receiver aboard the user satellite are needed, and all techniques simultaneously estimate the user and GPS satellite states. Three basic navigation approaches include classical dynamic, wholly nondynamic, and reduced dynamic or hybrid formulations. The first two are simply special cases of the third, which promises to deliver subdecimeter accuracy for dynamically unpredictable vehicles down to the lowest orbit altitudes. The potential of these techniques for tracking and gravity field recovery will be demonstrated on NASA's Topex satellite beginning in 1991. Applications to the Shuttle, Space Station, and dedicated remote sensing platforms are being pursued.

  6. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. The data of MG is far from individually precise now, partially due to the rarity and heterogeneity of this disease. In this review, we provide the basic insights of MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment based on references and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and scientific bases of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  7. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  8. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building 3D structures at micron precision, are also summarized.

  9. Pervasive robustness in biological systems.

    PubMed

    Félix, Marie-Anne; Barkoulas, Michalis

    2015-08-01

    Robustness is characterized by the invariant expression of a phenotype in the face of a genetic and/or environmental perturbation. Although phenotypic variance is a central measure in the mapping of the genotype and environment to the phenotype in quantitative evolutionary genetics, robustness is also a key feature in systems biology, resulting from nonlinearities in quantitative relationships between upstream and downstream components. In this Review, we provide a synthesis of these two lines of investigation, converging on understanding how variation propagates across biological systems. We critically assess the recent proliferation of studies identifying robustness-conferring genes in the context of the nonlinearity in biological systems. PMID:26184598

  10. Population genetics of translational robustness.

    PubMed

    Wilke, Claus O; Drummond, D Allan

    2006-05-01

    Recent work has shown that expression level is the main predictor of a gene's evolutionary rate and that more highly expressed genes evolve slower. A possible explanation for this observation is selection for proteins that fold properly despite mistranslation, in short selection for translational robustness. Translational robustness leads to the somewhat paradoxical prediction that highly expressed genes are extremely tolerant to missense substitutions but nevertheless evolve very slowly. Here, we study a simple theoretical model of translational robustness that allows us to gain analytic insight into how this paradoxical behavior arises.

  11. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining their routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and for all of its geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.
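The robustness notion used in such studies, how quickly the giant connected component shrinks as the most connected airports are removed, can be sketched in plain Python. The targeted-attack order and the toy star network below are illustrative assumptions, not the paper's data or exact measure:

```python
from collections import deque

def giant_component(adj, removed=frozenset()):
    """Size of the largest connected component, ignoring removed nodes."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def robustness_curve(adj):
    """Remove nodes in decreasing-degree order (targeted attack on hubs)
    and track the giant-component fraction after each removal."""
    order = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    n, removed, curve = len(adj), set(), []
    for node in order:
        removed.add(node)
        curve.append(giant_component(adj, frozenset(removed)) / n)
    return curve

# A hub-and-spoke (FSC-like) toy network collapses after one removal.
star = {"h": ["a", "b", "c", "d"], "a": ["h"], "b": ["h"], "c": ["h"], "d": ["h"]}
curve = robustness_curve(star)
```

A more meshed, point-to-point (LCC-like) network keeps a large component far longer under the same attack, which is the qualitative finding of the study.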

  12. Precision mass measurements of highly charged ions

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, A. A.; Bale, J. C.; Brunner, T.; Chaudhuri, A.; Chowdhury, U.; Ettenauer, S.; Frekers, D.; Gallant, A. T.; Grossheim, A.; Lennarz, A.; Mane, E.; MacDonald, T. D.; Schultz, B. E.; Simon, M. C.; Simon, V. V.; Dilling, J.

    2012-10-01

    The reputation of Penning trap mass spectrometry for accuracy and precision was established with singly charged ions (SCI); however, the achievable precision and resolving power can be extended by using highly charged ions (HCI). The TITAN facility has demonstrated these enhancements for long-lived (T1/2>=50 ms) isobars and low-lying isomers, including ^71Ge^21+, ^74Rb^8+, ^78Rb^8+, and ^98Rb^15+. The Q-value of ^71Ge enters into the neutrino cross section, and the use of HCI reduced the resolving power required to distinguish the isobars from 3 x 10^5 to 20. The precision achieved in the measurement of ^74Rb^8+, a superallowed β-emitter and candidate to test the CVC hypothesis, rivaled earlier measurements with SCI in a fraction of the time. The 111.19(22) keV isomeric state in ^78Rb was resolved from the ground state. Mass measurements of neutron-rich Rb and Sr isotopes near A = 100 aid in determining the r-process pathway. Advanced ion manipulation techniques and recent results will be presented.
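The precision gain from using highly charged ions follows from the trap's measured quantity, the cyclotron frequency nu_c = qB/(2*pi*m): a q-fold higher charge state gives a q-fold higher frequency, and hence proportionally better mass precision at fixed frequency resolution. A minimal sketch, where the magnetic field value is an assumption for illustration:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def cyclotron_freq_hz(mass_u, charge_state, b_tesla):
    """nu_c = q B / (2 pi m): the frequency a Penning trap measures."""
    return charge_state * E_CHARGE * b_tesla / (2 * math.pi * mass_u * AMU)

# 71Ge as a singly charged ion vs. the 21+ charge state used at TITAN
# (B = 3.7 T is an assumed illustrative field value).
f_sci = cyclotron_freq_hz(71.0, 1, 3.7)
f_hci = cyclotron_freq_hz(71.0, 21, 3.7)
ratio = f_hci / f_sci
```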

  13. Robust atomistic calculation of dislocation line tension

    NASA Astrophysics Data System (ADS)

    Szajewski, B. A.; Pavia, F.; Curtin, W. A.

    2015-12-01

    The line tension Γ of a dislocation is an important and fundamental property ubiquitous to continuum scale models of metal plasticity. However, the precise value of Γ in a given material has proven difficult to assess, with literature values encompassing a wide range. Here results from a multiscale simulation and robust analysis of the dislocation line tension, for dislocation bow-out between pinning points, are presented for two widely-used interatomic potentials for Al. A central part of the analysis involves an effective Peierls stress applicable to curved dislocation structures that markedly differs from that of perfectly straight dislocations but is required to describe the bow-out both in loading and unloading. The line tensions for the two interatomic potentials are similar and provide robust numerical values for Al. Most importantly, the atomic results show notable differences with singular anisotropic elastic dislocation theory in that (i) the coefficient of the ln(L) scaling with dislocation length L differs and (ii) the ratio of screw to edge line tension is smaller than predicted by anisotropic elasticity. These differences are attributed to local dislocation core interactions that remain beyond the scope of elasticity theory. The many differing literature values for Γ are attributed to various approximations and inaccuracies in previous approaches. The results here indicate that continuum line dislocation models, based on elasticity theory and various core-cut-off assumptions, may be fundamentally unable to reproduce full atomistic results, thus hampering the detailed predictive ability of such continuum models.

  14. Robust atomic force microscopy using multiple sensors.

    PubMed

    Baranwal, Mayank; Gorugantu, Ram S; Salapaka, Srinivasa M

    2016-08-01

    Atomic force microscopy typically relies on control based on high-resolution, high-bandwidth cantilever deflection measurements for imaging and estimating sample topography and properties. More precisely, in amplitude-modulation atomic force microscopy (AM-AFM), the control effort that regulates the deflection amplitude is used as an estimate of sample topography; similarly, contact-mode AFM uses regulation of the deflection signal to generate sample topography. In this article, a control design scheme based on an additional feedback mechanism that uses a vertical z-piezo motion sensor, augmenting the deflection-based control scheme, is proposed and evaluated. The proposed scheme exploits the fact that the piezo motion sensor, though inferior to the cantilever deflection signal in terms of resolution and bandwidth, provides information on piezo actuator dynamics that is not easily retrievable from the deflection signal. The augmented design results in significant improvements in imaging bandwidth and robustness, especially in AM-AFM, where the complicated underlying nonlinear dynamics inhibit estimating piezo motion from deflection signals. In AM-AFM experiments, the two-sensor design demonstrates a substantial improvement in robustness to modeling uncertainties by practically eliminating the peak in the sensitivity plot without affecting the closed-loop bandwidth, when compared to a design that does not use piezo-position feedback. The contact-mode imaging results, which use proportional-integral controllers for cantilever-deflection regulation, demonstrate improvements in bandwidth and robustness to modeling uncertainties of over 30% and 20%, respectively. The piezo-sensor-based feedback is developed using the H∞ control framework. PMID:27587128

  16. Accuracy Evaluation of Electron-Probe Microanalysis as Applied to Semiconductors and Silicates

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2003-01-01

    An evaluation of precision and accuracy will be presented for representative semiconductor and silicate compositions. The accuracy of electron-probe analysis depends on high-precision measurements and instrumental calibration, as well as on correction algorithms and fundamental-parameter data sets. A critical assessment of correction algorithms and mass absorption coefficient data sets can be made using the alpha-factor technique. Alpha-factor analysis can be used to identify systematic errors both in data sets and in the microprobe standards used for calibration.

  17. On preserving robustness-false alarm tradeoff in media hashing

    NASA Astrophysics Data System (ADS)

    Roy, S.; Zhu, X.; Yuan, J.; Chang, E.-C.

    2007-01-01

    This paper discusses one of the important issues in generating a robust media hash. The robustness of a media hashing algorithm is primarily determined by three factors: (1) the robustness-false alarm tradeoff achieved by the chosen feature representation, (2) the accuracy of the bit extraction step, and (3) the distance measure used to measure similarity (dissimilarity) between two hashes. The robustness-false alarm tradeoff in feature space is measured by a similarity (dissimilarity) measure, and it defines a limit on the performance of the hashing algorithm. The distance measure used to compute the distance between the hashes determines how far this tradeoff in the feature space is preserved through the bit extraction step. Hence the bit extraction step is crucial in defining the robustness of a hashing algorithm. Although this is widely recognized as an important requirement, to our knowledge no work in the existing literature evaluates the efficacy of a hashing algorithm by its effectiveness in improving this tradeoff compared to other methods. This paper demonstrates the kind of robustness-false alarm tradeoff achieved by existing methods and proposes a hashing method that clearly improves this tradeoff.
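The tradeoff described can be made concrete with a toy example. The sketch below (illustrative Python; the 16-bit hashes and the 0.2 threshold are invented for the example) scores hash pairs by normalized Hamming distance: lowering the threshold reduces false alarms but also reduces robustness to benign distortions, and vice versa.

```python
def hamming(h1, h2):
    """Normalized Hamming distance between two equal-length bit strings."""
    assert len(h1) == len(h2)
    return sum(a != b for a, b in zip(h1, h2)) / len(h1)

def decide_match(h1, h2, threshold=0.2):
    """Declare a match when the distance falls below the threshold; the
    threshold trades robustness (true matches kept) against false alarms."""
    return hamming(h1, h2) <= threshold

original  = "1011001110100101"
distorted = "1011001110100111"  # 1 bit flipped by benign processing
unrelated = "0100110001011010"  # hash of unrelated content

print(decide_match(original, distorted))  # True
print(decide_match(original, unrelated))  # False
```

Sweeping the threshold over a corpus of similar and dissimilar pairs traces out exactly the robustness-false alarm curve the paper evaluates.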

  18. The efficacy of bedside chest ultrasound: from accuracy to outcomes.

    PubMed

    Hew, Mark; Tay, Tunn Ren

    2016-09-01

    For many respiratory physicians, point-of-care chest ultrasound is now an integral part of clinical practice. The diagnostic accuracy of ultrasound to detect abnormalities of the pleura, the lung parenchyma and the thoracic musculoskeletal system is well described. However, the efficacy of a test extends beyond just diagnostic accuracy. The true value of a test depends on the degree to which diagnostic accuracy efficacy influences decision-making efficacy, and the subsequent extent to which this impacts health outcome efficacy. We therefore reviewed the demonstrable levels of test efficacy for bedside ultrasound of the pleura, lung parenchyma and thoracic musculoskeletal system. For bedside ultrasound of the pleura, there is evidence supporting diagnostic accuracy efficacy, decision-making efficacy and health outcome efficacy, predominantly in guiding pleural interventions. For the lung parenchyma, chest ultrasound has an impact on diagnostic accuracy and decision-making for patients presenting with acute respiratory failure or breathlessness, but there are no data as yet on actual health outcomes. For ultrasound of the thoracic musculoskeletal system, there is robust evidence only for diagnostic accuracy efficacy. We therefore outline avenues to further validate bedside chest ultrasound beyond diagnostic accuracy, with an emphasis on confirming enhanced health outcomes. PMID:27581823

  19. Maximum Correntropy Criterion for Robust Face Recognition.

    PubMed

    He, Ran; Zheng, Wei-Shi; Hu, Bao-Gang

    2011-08-01

    In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that noise also has a sparse representation, our sparse algorithm is developed based on the maximum correntropy criterion, which is much more insensitive to outliers. In order to develop a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique to approximately maximize the objective function in an alternating way, so that the complex optimization problem is reduced to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint at each iteration. Our extensive experiments demonstrate that the proposed method is more robust and efficient in dealing with occlusion and corruption in face recognition than the related state-of-the-art methods. In particular, the proposed method improves both recognition accuracy and receiver operating characteristic (ROC) curves, while its computational cost is much lower than that of the SRC algorithms.
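The half-quadratic idea, maximizing a sum of Gaussian kernels by alternating between a weight update and a weighted least-squares solve, can be sketched on the simplest possible case: a robust location estimate (illustrative Python; the data and kernel width are invented, and the paper applies the same machinery to sparse face representations rather than a scalar mean).

```python
import math

def correntropy_mean(xs, sigma=1.0, iters=20):
    """Half-quadratic maximization of sum_i exp(-(x_i - m)^2 / (2 sigma^2)):
    each step solves a weighted least-squares problem with weights from the
    Gaussian kernel, so outliers are progressively down-weighted."""
    m = sum(xs) / len(xs)  # start from the ordinary (non-robust) mean
    for _ in range(iters):
        w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for x in xs]
        m = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return m

data = [1.0, 1.1, 0.9, 1.05, 0.95, 10.0]  # one gross outlier
print(sum(data) / len(data))       # plain mean is pulled to 2.5
print(correntropy_mean(data))      # robust estimate stays near 1.0
```

The Gaussian weight plays the role of the correntropy kernel: the outlier's weight collapses toward zero, which is the insensitivity to outliers the abstract refers to.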

  20. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
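The risk-averse criterion can be sketched numerically. The example below (illustrative Python; the cost samples are invented) contrasts the plain mean cost of a protocol with its conditional value-at-risk, the average of the worst tail, which is what a robust design minimizes.

```python
def cvar(samples, alpha=0.9):
    """Conditional value-at-risk: mean of the worst (1 - alpha) fraction of
    sampled costs. A risk-averse design minimizes this tail average rather
    than the plain mean."""
    xs = sorted(samples)
    tail = xs[int(alpha * len(xs)):]  # worst 10% when alpha = 0.9
    return sum(tail) / len(tail)

# Hypothetical per-run protocol costs; two runs fail badly
costs = [1.0, 1.1, 0.9, 1.2, 1.0, 1.1, 0.95, 1.05, 3.0, 5.0]
print(sum(costs) / len(costs))  # mean cost hides the tail (about 1.63)
print(cvar(costs, alpha=0.9))   # 5.0: the worst-case run dominates
```

Two protocols with the same mean cost can have very different CVaR, which is why the criterion separates robust designs from merely cheap ones.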

  1. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  2. Robust Vehicle and Traffic Information Extraction for Highway Surveillance

    NASA Astrophysics Data System (ADS)

    Yoneyama, Akio; Yeh, Chia-Hung; Kuo, C.-C. Jay

    2005-12-01

    A robust vision-based traffic monitoring system for vehicle and traffic information extraction is developed in this research. It is challenging to maintain detection robustness at all time for a highway surveillance system. There are three major problems in detecting and tracking a vehicle: (1) the moving cast shadow effect, (2) the occlusion effect, and (3) nighttime detection. For moving cast shadow elimination, a 2D joint vehicle-shadow model is employed. For occlusion detection, a multiple-camera system is used to detect occlusion so as to extract the exact location of each vehicle. For vehicle nighttime detection, a rear-view monitoring technique is proposed to maintain tracking and detection accuracy. Furthermore, we propose a method to improve the accuracy of background extraction, which usually serves as the first step in any vehicle detection processing. Experimental results are given to demonstrate that the proposed techniques are effective and efficient for vision-based highway surveillance.

  3. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; together with its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications, it now constitutes an extensive and coherent theory of robust parametric stability, summarized in the results contained here.

  4. Atmospheric effects and ultimate ranging accuracy for lunar laser ranging

    NASA Astrophysics Data System (ADS)

    Currie, Douglas G.; Prochazka, Ivan

    2014-10-01

    The deployment of next-generation lunar laser retroreflectors is planned for the near future. With proper robotic deployment, these will support single-shot, single-photoelectron ranging accuracy at the 100-micron level or better. Technologies are available to support this accuracy at advanced ground stations; the major open question, however, is the ultimate limit imposed on ranging accuracy by time-varying delays caused by turbulence and horizontal gradients in the Earth's atmosphere. In particular, there are questions about the delay and temporal broadening of a very narrow laser pulse. Theoretical and experimental results will be discussed that estimate the magnitudes of these effects and address the distinction between precision and accuracy.

  5. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. ?? 1990.

  6. Precise Adaptation in Bacterial Chemotaxis through ``Assistance Neighborhoods''

    NASA Astrophysics Data System (ADS)

    Endres, Robert

    2007-03-01

    The chemotaxis network in Escherichia coli is remarkable for its sensitivity to small relative changes in the concentrations of multiple chemical signals over a broad range of ambient concentrations. Key to this sensitivity is an adaptation system that relies on methylation and demethylation (or deamidation) of specific modification sites of the chemoreceptors by the enzymes CheR and CheB, respectively. It was recently discovered that these enzymes can access five to seven receptors when tethered to a particular receptor. We show that these ``assistance neighborhoods'' (ANs) are necessary for precise and robust adaptation in a model for signaling by clusters of chemoreceptors: (1) ANs suppress fluctuations of the receptor methylation level; (2) ANs lead to robustness with respect to biochemical parameters. We predict two limits of precise adaptation at large attractant concentrations: either receptors reach full methylation and turn off, or receptors become saturated and cease to respond to attractant but retain their adapted activity.

  7. Designing robust control laws using genetic algorithms

    NASA Technical Reports Server (NTRS)

    Marrison, Chris

    1994-01-01

    The purpose of this research is to create a method of finding practical, robust control laws. The robustness of a controller is judged by Stochastic Robustness metrics and the level of robustness is optimized by searching for design parameters that minimize a robustness cost function.
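The idea of optimizing a Stochastic Robustness cost can be sketched with a toy problem (illustrative Python, not from the report): a scalar gain k stabilizes a hypothetical plant only when it exceeds an uncertain parameter a, the cost is the Monte Carlo probability of instability plus a penalty on gain, and plain random search stands in for the genetic algorithm.

```python
import random

def robustness_cost(k, n=2000, rng=None):
    """Monte Carlo estimate of a stochastic robustness cost: the probability
    that a toy closed loop (stable only when gain k exceeds the uncertain
    plant parameter a) is unstable, plus a small penalty on control effort."""
    rng = rng or random.Random(0)  # fixed seed: deterministic estimate per k
    unstable = sum(1 for _ in range(n) if k <= rng.uniform(0.5, 1.5))
    return unstable / n + 0.05 * k

# Random search over the design parameter, standing in for the GA
search_rng = random.Random(1)
best_k = min((search_rng.uniform(0.0, 3.0) for _ in range(200)),
             key=robustness_cost)
print(round(best_k, 2))  # settles near 1.5: robust but not over-designed
```

The minimizer sits just above the largest plausible plant parameter: enough gain to be reliably stabilizing, no more than the effort penalty allows, which is the compromise a robustness cost function is designed to find.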

  8. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, in various policy domains there is increasingly a call for robust plans or policies. That is, plans or policies that performs well in a very large range of plausible futures. In the literature, a wide range of alternative robustness metrics can be found. The relative merit of these alternative conceptualizations of robustness has, however, received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics on decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored using a time horizon of 100 years, and accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front. This set of options includes dike raising, dike strengthening, creating more space for the river, and flood proof building and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan will be based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014). We solve the multi-objective robust optimization problem using several alternative robustness metrics, including both satisficing robustness metrics and regret based robustness metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret based robustness metrics compare the
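The two families of metrics can be contrasted on a toy performance table (illustrative Python; the policies and damage scores are invented). A satisficing metric counts the futures in which a policy meets a damage threshold, while a regret metric asks how far the policy falls behind the best available option in each future; the two can rank the same candidates differently.

```python
# Damage scores (lower is better) for three candidate policies under four
# plausible futures; the values are illustrative only.
performance = {
    "dike_raising":   [2, 2, 9, 2],
    "room_for_river": [4, 4, 4, 4],
    "evacuation":     [3, 5, 5, 3],
}

def satisficing(scores, threshold=4):
    """Fraction of futures in which damage stays within the threshold."""
    return sum(s <= threshold for s in scores) / len(scores)

def max_regret(name, table):
    """Worst-case regret: how far the policy falls behind the best
    achievable outcome in each future, at its worst."""
    futures = zip(*table.values())  # one column per future
    regrets = [table[name][i] - min(col) for i, col in enumerate(futures)]
    return max(regrets)

for name, scores in performance.items():
    print(name, satisficing(scores), max_regret(name, performance))
```

Here dike_raising satisfices more often than evacuation (0.75 vs 0.5) yet carries the larger maximum regret (5 vs 3), which is precisely why the choice of robustness metric can change which plan is adopted.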

  9. Robust process design and springback compensation of a decklid inner

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaojing; Grimm, Peter; Carleer, Bart; Jin, Weimin; Liu, Gang; Cheng, Yingchao

    2013-12-01

    Springback compensation is one of the key topics in current die face engineering. The accuracy of the springback simulation and the robustness of the method planning and springback are considered the main factors that influence the effectiveness of springback compensation. In the present paper, the basic principles of springback compensation are presented first. These principles rest on an accurate full-cycle simulation with final validation settings; the robust process design and optimization are discussed in detail via an industrial example, a decklid inner. Moreover, an effective compensation strategy is put forward based on the analysis of springback, and simulation-based springback compensation is introduced in the process design phase. Finally, verification and comparison in tryout and production are given, confirming that the robust springback compensation methodology is effective during die development.

  10. Probabilistic collocation for simulation-based robust concept exploration

    NASA Astrophysics Data System (ADS)

    Rippel, Markus; Choi, Seung-Kyum; Allen, Janet K.; Mistree, Farrokh

    2012-08-01

    In the early stages of an engineering design process it is necessary to explore the design space to find a feasible range that satisfies design requirements. When robustness of the system is among the requirements, the robust concept exploration method can be used. In this method, a global metamodel, such as a global response surface of the design space, is used to evaluate robustness. However, for large design spaces, this is computationally expensive and may be relatively inaccurate for some local regions. In this article, a method is developed for successively generating local response models at points of interest as the design space is explored. This approach is based on the probabilistic collocation method. Although the focus of this article is on the method, it is demonstrated using an artificial performance function and a linear cellular alloy heat exchanger. For these problems, this approach substantially reduces computation time while maintaining accuracy.
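The collocation idea, evaluating the expensive model only at a handful of carefully chosen input values and combining the results with fixed weights, can be shown in one dimension (illustrative Python; the three-point probabilists' Gauss-Hermite rule and the toy performance function are textbook examples, not the article's heat-exchanger model).

```python
import math

# 3-point probabilists' Gauss-Hermite rule: nodes are the roots of
# He_3(x) = x^3 - 3x; exact for polynomials up to degree 5 under a
# standard normal input.
NODES   = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1 / 6, 2 / 3, 1 / 6]

def collocation_mean(f):
    """Estimate E[f(X)], X ~ N(0, 1), from just three model evaluations."""
    return sum(w * f(x) for w, x in zip(WEIGHTS, NODES))

# Toy "performance function": E[X^4] = 3 for a standard normal input.
print(collocation_mean(lambda x: x ** 4))  # close to 3
```

Replacing a global response surface with such small local rules, rebuilt around each point of interest, is what keeps the cost low while preserving accuracy where it matters.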

  11. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBTs). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  12. Positional Accuracy of Gps Satellite Almanac

    NASA Astrophysics Data System (ADS)

    Ma, Lihua; Zhou, Shangli

    2014-12-01

    How to accelerate signal acquisition and shorten start-up time are key problems in the Global Positioning System (GPS). The GPS satellite almanac plays an important role during signal reception. Almanac accuracy directly affects the speed of GPS signal acquisition, the start time of the receiver, and, to some extent, overall system performance. Using precise ephemeris products released by the International GNSS Service (IGS), the authors analyse the GPS satellite almanac over the first three days of the 1805th GPS week (August 11-13, 2014 in the Gregorian calendar). The results show that the mean three-dimensional position error varies from about 1 kilometer to 3 kilometers, which satisfies the needs of common users.

  13. Measuring and balancing dynamic unbalance of precision centrifuge

    NASA Astrophysics Data System (ADS)

    Yang, Yafei; Huo, Xin

    2008-10-01

    A precision centrifuge is used to test and calibrate accelerometer model parameters. Its dynamic unbalance may perturb the centrifuge and deteriorate the test and calibration accuracy of an accelerometer. By analyzing the causes of dynamic unbalance, the influences of static unbalance and couple unbalance on a precision centrifuge are developed. Measuring and balancing the static unbalance is considered key to resolving the dynamic unbalance problem of a precision centrifuge with a disk structure. Measuring means and calculation formulas for the static unbalance are given, and the balancing principle and method are provided. The correctness and effectiveness of this method are confirmed by experiments on a device under tuning, thereby providing an accurate and highly effective method for measuring and balancing the dynamic unbalance of this precision centrifuge.

  14. High-precision thermal and electrical characterization of thermoelectric modules

    SciTech Connect

    Kolodner, Paul

    2014-05-15

    This paper describes an apparatus for performing high-precision electrical and thermal characterization of thermoelectric modules (TEMs). The apparatus is calibrated for operation between 20 °C and 80 °C and is normally used for measurements of heat currents in the range 0–10 W. Precision thermometry based on miniature thermistor probes enables an absolute temperature accuracy of better than 0.010 °C. The use of vacuum isolation, thermal guarding, and radiation shielding, augmented by a careful accounting of stray heat leaks and uncertainties, allows the heat current through the TEM under test to be determined with a precision of a few mW. The fractional precision of all measured parameters is approximately 0.1%.

  15. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    Precise meteoroid orbits from video observations are lacking, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available, obtained using the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity and to estimate the ejection time of the meteoroids accurately by comparison with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and temporal resolution simultaneously and computational difficulties in finding the meteor position, are illustrated.

  16. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…
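One of the simplest error sources, endpoint localization in pixels propagated through the scale factor, can be quantified directly (illustrative Python; the 0.5-pixel localization error and the scale factor are assumed values, not from the paper).

```python
def length_uncertainty(pixels, mm_per_px, px_sigma=0.5):
    """Length and its uncertainty from a 2D webcam measurement, assuming an
    uncorrelated localization error of px_sigma pixels at each endpoint."""
    length_mm = pixels * mm_per_px
    # Two independent noisy endpoints: errors add in quadrature
    sigma_mm = (2 ** 0.5) * px_sigma * mm_per_px
    return length_mm, sigma_mm

L, s = length_uncertainty(pixels=480, mm_per_px=0.5)
print(L, round(s, 3))  # 240.0 mm, about 0.354 mm
```

Calibration error in mm_per_px and lens distortion would add further terms; this sketch covers only the quantization/localization contribution the paper discusses.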

  17. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    THIS DOCUMENT IS AN EXPERIMENTAL VERSION OF A PROGRAMED TEXT ON MEASUREMENT AND PRECISION. PART I CONTAINS 24 FRAMES DEALING WITH PRECISION AND SIGNIFICANT FIGURES ENCOUNTERED IN VARIOUS MATHEMATICAL COMPUTATIONS AND MEASUREMENTS. PART II BEGINS WITH A BRIEF SECTION ON EXPERIMENTAL DATA, COVERING SUCH POINTS AS (1) ESTABLISHING THE ZERO POINT, (2)…

  18. More Questions on Precision Teaching.

    ERIC Educational Resources Information Center

    Raybould, E. C.; Solity, J. E.

    1988-01-01

    Precision teaching can accelerate basic skills progress of special needs children. Issues discussed include using probes as performance tests, charting daily progress, using the charted data to modify teaching methods, determining appropriate age levels, assessing the number of students to be precision taught, and carefully allocating time. (JDD)

  19. Precision Teaching: Discoveries and Effects.

    ERIC Educational Resources Information Center

    Lindsley, Ogden R.

    1992-01-01

    This paper defines precision teaching; describes its monitoring methods by displaying a standard celeration chart and explaining charting conventions; points out precision teaching's roots in laboratory free-operant conditioning; discusses its learning tactics and performance principles; and describes its effectiveness in producing learning gains.…

  20. The precise temporal calibration of dinosaur origins

    NASA Astrophysics Data System (ADS)

    Marsicano, Claudia A.; Irmis, Randall B.; Mancuso, Adriana C.; Mundil, Roland; Chemale, Farid

    2016-01-01

    Dinosaurs have been major components of ecosystems for over 200 million years. Although different macroevolutionary scenarios exist to explain the Triassic origin and subsequent rise to dominance of dinosaurs and their closest relatives (dinosauromorphs), all lack critical support from a precise biostratigraphically independent temporal framework. The absence of robust geochronologic age control for comparing alternative scenarios makes it impossible to determine if observed faunal differences vary across time, space, or a combination of both. To better constrain the origin of dinosaurs, we produced radioisotopic ages for the Argentinian Chañares Formation, which preserves a quintessential assemblage of dinosaurian precursors (early dinosauromorphs) just before the first dinosaurs. Our new high-precision chemical abrasion thermal ionization mass spectrometry (CA-TIMS) U-Pb zircon ages reveal that the assemblage is early Carnian (early Late Triassic), 5- to 10-Ma younger than previously thought. Combined with other geochronologic data from the same basin, we constrain the rate of dinosaur origins, demonstrating their relatively rapid origin in a less than 5-Ma interval, thus halving the temporal gap between assemblages containing only dinosaur precursors and those with early dinosaurs. After their origin, dinosaurs only gradually dominated mid- to high-latitude terrestrial ecosystems millions of years later, closer to the Triassic-Jurassic boundary.