Science.gov

Sample records for accuracy precision robustness

  1. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

Based on the spatial relation between a primary and secondary bullet defect, or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied to bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is obtained with the probing method; only at the lowest angles of incidence did the ellipse or lead-in method perform better. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision, by means of a confidence interval, of the specific measurement.
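The ellipse method referred to above is conventionally described by a simple trigonometric relation between the dimensions of the elliptical defect and the angle of incidence. As a hedged illustration only (the specific variants tested in the paper are not reproduced here, and the function name and numbers are assumptions), a minimal sketch:

```python
import math

def ellipse_method_angle(width_mm: float, length_mm: float) -> float:
    """Estimate the angle of incidence (degrees, measured from the target
    surface) from an elliptical bullet defect, using the classic
    ellipse-method relation: sin(theta) = width / length."""
    if not 0 < width_mm <= length_mm:
        raise ValueError("width must be positive and no larger than length")
    return math.degrees(math.asin(width_mm / length_mm))

# A defect 10 mm wide and 20 mm long implies roughly a 30-degree
# angle of incidence (sin(theta) = 0.5).
print(ellipse_method_angle(10.0, 20.0))
```

Note that this idealized relation ignores exactly the material- and bullet-dependent systematic errors the study quantifies; the paper's correction data would be applied on top of such an estimate.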

  2. Precision cosmology, Accuracy cosmology and Statistical cosmology

    NASA Astrophysics Data System (ADS)

    Verde, Licia

    2014-05-01

The avalanche of data over the past 10-20 years has propelled cosmology into the "precision era". The next challenge cosmology has to meet is to enter the era of accuracy. Because of the intrinsic nature of studying the Cosmos and the sheer amount of data available now and coming soon, the only way to meet this challenge is by developing suitable and specific statistical techniques. The road from precision Cosmology to accurate Cosmology goes through statistical Cosmology. I will outline some open challenges and discuss some specific examples.

  3. Ultra-wideband ranging precision and accuracy

    NASA Astrophysics Data System (ADS)

    MacGougan, Glenn; O'Keefe, Kyle; Klukas, Richard

    2009-09-01

    This paper provides an overview of ultra-wideband (UWB) in the context of ranging applications and assesses the precision and accuracy of UWB ranging from both a theoretical perspective and a practical perspective using real data. The paper begins with a brief history of UWB technology and the most current definition of what constitutes an UWB signal. The potential precision of UWB ranging is assessed using Cramer-Rao lower bound analysis. UWB ranging methods are described and potential error sources are discussed. Two types of commercially available UWB ranging radios are introduced which are used in testing. Actual ranging accuracy is assessed from line-of-sight testing under benign signal conditions by comparison to high-accuracy electronic distance measurements and to ranges derived from GPS real-time kinematic positioning. Range measurements obtained in outdoor testing with line-of-sight obstructions and strong reflection sources are compared to ranges derived from classically surveyed positions. The paper concludes with a discussion of the potential applications for UWB ranging.
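The Cramer-Rao lower bound analysis mentioned above has a commonly quoted form for time-of-arrival ranging, in which the achievable timing precision scales inversely with effective bandwidth and the square root of SNR. The sketch below is illustrative only; the exact bound used in the paper, and the example bandwidth and SNR values, are assumptions:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def crlb_range_std(beta_hz: float, snr_linear: float) -> float:
    """Commonly quoted Cramer-Rao lower bound on the standard deviation of a
    time-of-arrival range estimate (in metres):
        sigma_tau >= 1 / (2 * pi * beta * sqrt(SNR)),
    where beta is the effective (RMS) signal bandwidth in Hz and SNR is
    linear. The range bound is c * sigma_tau."""
    sigma_tau = 1.0 / (2.0 * math.pi * beta_hz * math.sqrt(snr_linear))
    return C * sigma_tau

# Example: a UWB signal with 500 MHz effective bandwidth at 20 dB SNR.
snr = 10 ** (20 / 10)  # 20 dB -> 100 in linear terms
print(crlb_range_std(500e6, snr))  # bound on the order of millimetres
```

This illustrates why UWB signals are attractive for ranging: the large effective bandwidth drives the theoretical precision floor down to the centimetre level and below, even at moderate SNR.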

  4. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ±3% in dose over the range of sample points. For some points in high-dose gradients

  5. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This problem is due to the fact that most of the proposed methods in the literature are usually tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation in terms of accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are also presented.

  6. [History, accuracy and precision of SMBG devices].

    PubMed

    Dufaitre-Patouraux, L; Vague, P; Lassmann-Vague, V

    2003-04-01

Self-monitoring of blood glucose began only fifty years ago. Until then, metabolic control was evaluated by means of qualitative urinary glucose measurements, often of poor reliability. Reagent strips were the first semi-quantitative tests for monitoring blood glucose, and in the late seventies meters were launched on the market. Initially such devices were intended for medical staff, but improvements in ease of use made them increasingly suitable for patients, and they are now a necessary tool for self-monitoring of blood glucose. Advances in technology allowed the development of photometric measurements and, more recently, electrochemical ones. In the nineties, improvements were made mainly in meter miniaturisation, reduction of reaction and reading times, and simplification of blood sampling and capillary blood application. Although accuracy and precision were central concerns from the beginning of self-monitoring of blood glucose, recommendations from diabetology societies appeared only in the late eighties. Now the French drug agency, AFSSAPS, requires a control of each meter before launch on the market. According to recent publications, very few meters meet the reliability criteria set up by diabetology societies in the late nineties. Finally, because devices may be handled by numerous persons in hospitals, the use of meters as a possible source of nosocomial infections has recently been questioned and is the subject of very strict guidelines published by AFSSAPS.

  7. Establishing precision and accuracy in PDV results

    SciTech Connect

    Briggs, Matthew E.; Howard, Marylesa; Diaz, Abel

    2016-04-19

We need to know uncertainties and systematic errors because we create and compare against archival weapons data, we constrain the models, and we provide scientific results. Good estimates of precision from the data record are available and should be incorporated into existing results; reanalysis of valuable data is suggested. Estimates of systematic errors are largely absent. The original work by Jensen et al. using gun shots for window corrections, and the integrated velocity comparison with X-rays by Schultz, are two examples where any systematic errors appear to be at the <1% level.

  8. Accuracy vs. Robustness: Bi-criteria Optimized Ensemble of Metamodels

    DTIC Science & Technology

    2014-12-01

Kriging, Support Vector Regression and Radial Basis Function), where uncertainties are modeled for evaluating robustness. Twenty-eight functions from... optimized ensemble framework to optimally identify the contributions from each metamodel (Kriging, Support Vector Regression and Radial Basis Function)... motivation, a bi-criteria (accuracy and robustness) ensemble optimization framework of three well-known metamodel techniques, namely Kriging (Matheron 1960

  9. Precision and Accuracy of Topography Measurements on Europa

    NASA Astrophysics Data System (ADS)

    Greenberg, R.; Hurford, T. A.; Foley, M. A.; Varland, K.

    2007-03-01

    Reports of the death of the melt-through model for chaotic terrain on Europa have been greatly exaggerated, to paraphrase Mark Twain. They are based on topographic maps of insufficient quantitative accuracy and precision.

  10. A study of laseruler accuracy and precision (1986-1987)

    SciTech Connect

    Ramachandran, R.S.; Armstrong, K.P.

    1989-06-22

A study was conducted to investigate Laserruler accuracy and precision. Tests were performed on 0.050 in., 0.100 in., and 0.120 in. gauge block standards. Results showed an accuracy of 3.7 µin. for the 0.120 in. standard, with higher accuracies for the two thinner blocks. The Laserruler precision was 4.83 µin. for the 0.120 in. standard, 3.83 µin. for the 0.100 in. standard, and 4.2 µin. for the 0.050 in. standard.

  11. On precision and accuracy (bias) statements for measurement procedures

    SciTech Connect

    Bruckner, L.A.; Hume, M.W.; Delvin, W.L.

    1988-01-01

Measurement procedures are often required to contain precision and accuracy (bias) statements. This paper contains a glossary that explains various terms that often appear in these statements, as well as an example illustrating such statements for a specific set of data. Precision and bias statements are shown to vary according to the conditions under which the data were collected. This paper emphasizes that the error model (an algebraic expression that describes how the various sources of variation affect the measurement) is an important consideration in the formation of precision and bias statements.
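The distinction drawn above can be made concrete with the simplest possible error model: repeat measurements of a known reference, where bias is the mean deviation from the reference and precision is the scatter under the stated collection conditions. This sketch is illustrative only (the function name and readings are assumptions, not the paper's data):

```python
import statistics

def precision_and_bias(measurements, reference_value):
    """Summarise repeat measurements of a known reference as a
    (bias, precision) pair: bias is the mean deviation from the
    reference (accuracy), precision is the sample standard deviation
    under the conditions the data were collected."""
    bias = statistics.mean(measurements) - reference_value
    precision = statistics.stdev(measurements)
    return bias, precision

readings = [10.02, 9.98, 10.05, 10.01, 9.99]  # hypothetical repeat readings
bias, precision = precision_and_bias(readings, reference_value=10.00)
print(f"bias = {bias:+.3f}, precision (1 SD) = {precision:.3f}")
```

As the abstract stresses, the numbers such a summary yields are only meaningful alongside the error model: a standard deviation computed within one day and one operator describes repeatability, not the larger variation seen across laboratories or instruments.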

  12. Accuracy and precision of temporal artery thermometers in febrile patients.

    PubMed

    Wolfson, Margaret; Granstrom, Patsy; Pomarico, Bernie; Reimanis, Cathryn

    2013-01-01

    The noninvasive temporal artery thermometer offers a way to measure temperature when oral assessment is contraindicated, uncomfortable, or difficult to obtain. In this study, the accuracy and precision of the temporal artery thermometer exceeded levels recommended by experts for use in acute care clinical practice.

  13. Characterizing geometric accuracy and precision in image guided gated radiotherapy

    NASA Astrophysics Data System (ADS)

    Tenn, Stephen Edward

Gated radiotherapy combined with intensity modulated or three-dimensional conformal radiotherapy for tumors in the thorax and abdomen can deliver dose distributions which conform closely to tumor shapes, allowing increased tumor dose while sparing healthy tissues. These conformal fields require more accurate and precise placement than traditional fields, or tumors may receive a suboptimal dose, thereby reducing tumor control probability. Image guidance based on four-dimensional computed tomography (4DCT) provides a means to improve accuracy and precision in radiotherapy. The ability of 4DCT to accurately reproduce patient geometry and the ability of image guided gating equipment to position tumors and place fields around them must be characterized in order to determine treatment parameters such as tumor margins. Fiducial based methods of characterizing accuracy and precision of equipment for 4DCT planning and image guided gated radiotherapy (IGGRT) are presented with results for specific equipment. Fiducial markers of known geometric orientation are used to characterize 4DCT image reconstruction accuracy. Accuracy is determined under different acquisition protocols, reconstruction phases, and phantom trajectories. Targeting accuracy of fiducial based image guided gating is assessed by measuring in-phantom field positions for different motions, gating levels and target rotations. Synchronization parameters for gating equipment are also determined. Finally, end-to-end testing is performed to assess overall accuracy and precision of the equipment under controlled conditions. 4DCT limits fiducial geometric distance errors to 2 mm for repeatable target trajectories and to 5 mm for a pseudo-random trajectory. The largest offsets were in the longitudinal direction. If correctly calibrated and synchronized, the IGGRT system tested here can target reproducibly moving tumors with accuracy better than 1.2 mm. Gating level can affect accuracy if target motion is asymmetric about the

  14. Accuracy, precision, and lower detection limits (a deficit reduction approach)

    SciTech Connect

    Bishop, C.T.

    1993-10-12

The evaluation of the accuracy, precision and lower detection limits of the determination of trace radionuclides in environmental samples can become quite sophisticated and time consuming, which in turn can add significant cost to the analyses being performed. In the present method, a "deficit reduction approach" has been taken to keep costs low while still providing defensible data. To measure the accuracy of a particular method, reference samples are measured over the time period during which the actual samples are being analyzed. Using a Lotus spreadsheet, data are compiled and an average accuracy is computed. If pairs of reference samples are analyzed, then precision can also be evaluated from the duplicate data sets; the standard deviation can be calculated if the reference concentrations of the duplicates are all in the same general range. Laboratory blanks are used to estimate the lower detection limits. The lower detection limit is calculated as 4.65 times the standard deviation of a set of blank determinations made over a given period of time. A Lotus spreadsheet is again used to compile data, and LDLs over different periods of time can be compared.
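The blank-based detection limit described above is a direct calculation, and can be sketched in a few lines. The blank values below are hypothetical, stand-ins for the spreadsheet data the abstract describes:

```python
import statistics

def lower_detection_limit(blank_results):
    """Lower detection limit as described in the abstract: 4.65 times the
    standard deviation of a set of laboratory blank determinations made
    over a given period of time."""
    return 4.65 * statistics.stdev(blank_results)

# Hypothetical blank determinations collected over one period:
blanks = [0.12, 0.08, 0.15, 0.10, 0.09, 0.11]
print(lower_detection_limit(blanks))
```

The factor 4.65 corresponds to the familiar detection-limit construction at roughly 95% confidence against both false positives and false negatives (2 x 2.33 sigma), which is why it is applied to the blank scatter rather than to the sample results themselves.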

  15. The Plus or Minus Game - Teaching Estimation, Precision, and Accuracy

    NASA Astrophysics Data System (ADS)

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-03-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in TPT (Larry Weinstein's "Fermi Questions.") For several years the authors (a college physics professor, a retired algebra teacher, and a fifth-grade teacher) have been playing a game, primarily at home to challenge each other for fun, but also in the classroom as an educational tool. We call the game "The Plus or Minus Game." The game combines estimation with the principle of precision and uncertainty in a competitive and fun way.

  16. Calibration, linearity, precision, and accuracy of a PIXE system

    NASA Astrophysics Data System (ADS)

    Richter, F.-W.; Wätjen, U.

    1984-04-01

An accuracy and precision of better than 10% each can be achieved with PIXE analysis, with both thin and thick samples. Measures we took to obtain these values for routine analyses in the Marburg PIXE system are discussed. The advantages of an experimental calibration procedure, using thin evaporated standard foils, over the "absolute" method of employing X-ray production cross sections are outlined. The importance of X-ray line intensity ratios, even of weak transitions, for the accurate analysis of interfering elements of low mass content is demonstrated for the Se Kα-Pb Lη line overlap. Matrix effects including secondary excitation can be corrected for very well without degrading accuracy under certain conditions.

  17. Fluorescence Axial Localization with Nanometer Accuracy and Precision

    SciTech Connect

    Li, Hui; Yen, Chi-Fu; Sivasankar, Sanjeevi

    2012-06-15

    We describe a new technique, standing wave axial nanometry (SWAN), to image the axial location of a single nanoscale fluorescent object with sub-nanometer accuracy and 3.7 nm precision. A standing wave, generated by positioning an atomic force microscope tip over a focused laser beam, is used to excite fluorescence; axial position is determined from the phase of the emission intensity. We use SWAN to measure the orientation of single DNA molecules of different lengths, grafted on surfaces with different functionalities.

  18. Measuring changes in Plasmodium falciparum transmission: Precision, accuracy and costs of metrics

    PubMed Central

    Tusting, Lucy S.; Bousema, Teun; Smith, David L.; Drakeley, Chris

    2016-01-01

    As malaria declines in parts of Africa and elsewhere, and as more countries move towards elimination, it is necessary to robustly evaluate the effect of interventions and control programmes on malaria transmission. To help guide the appropriate design of trials to evaluate transmission-reducing interventions, we review eleven metrics of malaria transmission, discussing their accuracy, precision, collection methods and costs, and presenting an overall critique. We also review the non-linear scaling relationships between five metrics of malaria transmission; the entomological inoculation rate, force of infection, sporozoite rate, parasite rate and the basic reproductive number, R0. Our review highlights that while the entomological inoculation rate is widely considered the gold standard metric of malaria transmission and may be necessary for measuring changes in transmission in highly endemic areas, it has limited precision and accuracy and more standardised methods for its collection are required. In areas of low transmission, parasite rate, sero-conversion rates and molecular metrics including MOI and mFOI may be most appropriate. When assessing a specific intervention, the most relevant effects will be detected by examining the metrics most directly affected by that intervention. Future work should aim to better quantify the precision and accuracy of malaria metrics and to improve methods for their collection. PMID:24480314

  19. Improved DORIS accuracy for precise orbit determination and geodesy

    NASA Technical Reports Server (NTRS)

    Willis, Pascal; Jayles, Christian; Tavernier, Gilles

    2004-01-01

In 2001 and 2002, 3 more DORIS satellites were launched. Since then, all DORIS results have been significantly improved. For precise orbit determination, 20-cm accuracy is now available in real time with DIODE, and 1.5 to 2 cm in post-processing. For geodesy, 1-cm precision can now be achieved regularly every week, now making DORIS an active part of a Global Observing System for Geodesy through the IDS.

  20. Mineral element analyses of switchgrass biomass: comparison of the accuracy and precision of laboratories

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Mineral concentration of plant biomass can affect its use in thermal conversion to energy. The objective of this study was to compare the precision and accuracy of university and private laboratories that conduct mineral analyses of plant biomass on a fee basis. Accuracy and precision of the laborat...

  1. Imputation accuracy is robust to cattle reference genome updates.

    PubMed

    Milanesi, M; Vicario, D; Stella, A; Valentini, A; Ajmone-Marsan, P; Biffani, S; Biscarini, F; Jansen, G; Nicolazzi, E L

    2015-02-01

Genotype imputation is routinely applied in a large number of cattle breeds. Imputation has become a necessity due to the large number of SNP arrays with variable density (currently, from 2900 to 777,962 SNPs). Although many authors have studied the effect of different statistical methods on imputation accuracy, the impact of a (likely) change in the reference genome assembly on imputation from lower to higher density has not been determined so far. In this work, 1021 Italian Simmental SNP genotypes were remapped on the three most recent reference genome assemblies. Four imputation methods were used to assess the impact of an update in the reference genome. As expected, the four methods behaved differently, with large differences in terms of accuracy. Updating SNP coordinates on the three tested cattle reference genome assemblies produced only slight variation in imputation results within each method.

  2. S-193 scatterometer backscattering cross section precision/accuracy for Skylab 2 and 3 missions

    NASA Technical Reports Server (NTRS)

    Krishen, K.; Pounds, D. J.

    1975-01-01

    Procedures for measuring the precision and accuracy with which the S-193 scatterometer measured the background cross section of ground scenes are described. Homogeneous ground sites were selected, and data from Skylab missions were analyzed. The precision was expressed as the standard deviation of the scatterometer-acquired backscattering cross section. In special cases, inference of the precision of measurement was made by considering the total range from the maximum to minimum of the backscatter measurements within a data segment, rather than the standard deviation. For Skylab 2 and 3 missions a precision better than 1.5 dB is indicated. This procedure indicates an accuracy of better than 3 dB for the Skylab 2 and 3 missions. The estimates of precision and accuracy given in this report are for backscattering cross sections from -28 to 18 dB. Outside this range the precision and accuracy decrease significantly.
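The two ways of expressing within-segment precision described above (the standard deviation by default, or the max-to-min range in special cases) are simple to state in code. The sketch below is illustrative; the function name and the sample sigma-0 values are assumptions, not Skylab data:

```python
import statistics

def segment_precision(cross_sections_db, use_range=False):
    """Precision of backscattering cross-section measurements within a
    homogeneous data segment: the standard deviation by default, or,
    as in the special cases the abstract describes, the total range
    from maximum to minimum."""
    if use_range:
        return max(cross_sections_db) - min(cross_sections_db)
    return statistics.stdev(cross_sections_db)

# Hypothetical sigma-0 values (dB) over a homogeneous ground site:
segment = [-12.1, -11.8, -12.4, -12.0, -11.9]
print(segment_precision(segment))                  # standard deviation, dB
print(segment_precision(segment, use_range=True))  # max-to-min range, dB
```

Note that the range is a much more pessimistic statistic than the standard deviation for the same data, which is presumably why it was reserved for cases where too few measurements were available for a meaningful standard deviation.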

  3. Accuracy and Precision of GPS Carrier-Phase Clock Estimates

    DTIC Science & Technology

    2001-01-01

"Geodesy using the Global Positioning System: The effects of signal scattering on estimates of site positions," Journal of Geophysical Research... maia.usno.navy.mil Abstract: The accuracy of GPS-based clock estimates is determined by the pseudorange data. For 24-hour arcs of global data sampled... ps) for 1-day integrations. Assuming such positioning results can be realized also as equivalent light-travel times, the potential of GPS carrier

  4. Robustness and Accuracy in Sea Urchin Developmental Gene Regulatory Networks.

    PubMed

    Ben-Tabou de-Leon, Smadar

    2016-01-01

Developmental gene regulatory networks robustly control the timely activation of regulatory and differentiation genes. The structure of these networks underlies their capacity to buffer intrinsic and extrinsic noise and maintain embryonic morphology. Here I illustrate how the use of specific architectures by the sea urchin developmental regulatory networks enables the robust control of cell fate decisions. The Wnt-β-catenin signaling pathway patterns the primary embryonic axis while the BMP signaling pathway patterns the secondary embryonic axis in the sea urchin embryo and across Bilateria. Interestingly, in both cases in the sea urchin, the signaling pathway that defines the axis directly controls the expression of a set of downstream regulatory genes. I propose that this direct activation of a set of regulatory genes enables a uniform regulatory response and a clear-cut cell fate decision in the endoderm and in the dorsal ectoderm. The specification of the mesodermal pigment cell lineage is activated by Delta signaling that initiates a triple positive feedback loop that locks down the pigment specification state. I propose that the use of compound positive feedback circuitry gives the endodermal cells enough time to turn off mesodermal genes and ensures a correct mesoderm vs. endoderm fate decision. Thus, I argue that understanding the control properties of repeatedly used regulatory architectures illuminates their role in embryogenesis and provides possible explanations for their resistance to evolutionary change.

  5. Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding

    PubMed Central

    Resnik, Andrey; Celikel, Tansu; Englitz, Bernhard

    2016-01-01

Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive-threshold case, if the decoding is performed in reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information. PMID:27304526

  6. Interspecies translation of disease networks increases robustness and predictive accuracy.

    PubMed

    Anvar, Seyed Yahya; Tucker, Allan; Vinciotti, Veronica; Venema, Andrea; van Ommen, Gert-Jan B; van der Maarel, Silvere M; Raz, Vered; 't Hoen, Peter A C

    2011-11-01

    Gene regulatory networks give important insights into the mechanisms underlying physiology and pathophysiology. The derivation of gene regulatory networks from high-throughput expression data via machine learning strategies is problematic as the reliability of these models is often compromised by limited and highly variable samples, heterogeneity in transcript isoforms, noise, and other artifacts. Here, we develop a novel algorithm, dubbed Dandelion, in which we construct and train intraspecies Bayesian networks that are translated and assessed on independent test sets from other species in a reiterative procedure. The interspecies disease networks are subjected to multi-layers of analysis and evaluation, leading to the identification of the most consistent relationships within the network structure. In this study, we demonstrate the performance of our algorithms on datasets from animal models of oculopharyngeal muscular dystrophy (OPMD) and patient materials. We show that the interspecies network of genes coding for the proteasome provide highly accurate predictions on gene expression levels and disease phenotype. Moreover, the cross-species translation increases the stability and robustness of these networks. Unlike existing modeling approaches, our algorithms do not require assumptions on notoriously difficult one-to-one mapping of protein orthologues or alternative transcripts and can deal with missing data. We show that the identified key components of the OPMD disease network can be confirmed in an unseen and independent disease model. This study presents a state-of-the-art strategy in constructing interspecies disease networks that provide crucial information on regulatory relationships among genes, leading to better understanding of the disease molecular mechanisms.

  7. Highly precise and robust packaging of optical components

    NASA Astrophysics Data System (ADS)

    Leers, Michael; Winzen, Matthias; Liermann, Erik; Faidel, Heinrich; Westphalen, Thomas; Miesner, Jörn; Luttmann, Jörg; Hoffmann, Dieter

    2012-03-01

In this paper we present the development of a compact, thermo-optically stable, vibration- and shock-resistant mounting technique based on soldering of optical components. Based on this technique a new generation of laser sources for aerospace applications is designed. In these laser systems, the soldering technique replaces the glued and bolted connections between optical component, mount and base plate. The main challenges are alignment precision in the arc-second range and long-term stability of every single part in the laser system. At the Fraunhofer Institute for Laser Technology ILT a soldering and mounting technique has been developed for high-precision packaging. The specified environmental boundary conditions (e.g. a temperature range of -40 °C to +50 °C) and the required degrees of freedom for the alignment of the components have been taken into account for this technique. In general the advantage of soldering compared to gluing is that there is no outgassing; in addition, no flux is needed in our special process. The joining process allows multiple alignments by remelting the solder. The alignment is done in the liquid phase of the solder by a 6-axis manipulator with a step width in the nm range and a tilt in the arc-second range. In a next step the optical components have to pass the environmental tests. The total misalignment of the component relative to its adapter after the thermal cycle tests is less than 10 arc seconds. The mechanical stability tests regarding shear, vibration and shock behavior are well within the requirements.

  8. Spectropolarimetry with PEPSI at the LBT: accuracy vs. precision in magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Ilyin, Ilya; Strassmeier, Klaus G.; Woche, Manfred; Hofmann, Axel

    2009-04-01

We present the design of the new PEPSI spectropolarimeter to be installed at the Large Binocular Telescope (LBT) in Arizona to measure the full set of Stokes parameters in spectral lines, and we outline its precision and the factors limiting its accuracy.

  9. Precision and Accuracy in Measurements: A Tale of Four Graduated Cylinders.

    ERIC Educational Resources Information Center

    Treptow, Richard S.

    1998-01-01

    Expands upon the concepts of precision and accuracy at a level suitable for general chemistry. Serves as a bridge to the more extensive treatments in analytical chemistry textbooks and the advanced literature on error analysis. Contains 22 references. (DDR)

  10. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    PubMed Central

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-01-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis. PMID:25991505

  11. Accuracy and Precision of Silicon Based Impression Media for Quantitative Areal Texture Analysis

    NASA Astrophysics Data System (ADS)

    Goodall, Robert H.; Darras, Laurent P.; Purnell, Mark A.

    2015-05-01

    Areal surface texture analysis is becoming widespread across a diverse range of applications, from engineering to ecology. In many studies silicon based impression media are used to replicate surfaces, and the fidelity of replication defines the quality of data collected. However, while different investigators have used different impression media, the fidelity of surface replication has not been subjected to quantitative analysis based on areal texture data. Here we present the results of an analysis of the accuracy and precision with which different silicon based impression media of varying composition and viscosity replicate rough and smooth surfaces. Both accuracy and precision vary greatly between different media. High viscosity media tested show very low accuracy and precision, and most other compounds showed either the same pattern, or low accuracy and high precision, or low precision and high accuracy. Of the media tested, mid viscosity President Jet Regular Body and low viscosity President Jet Light Body (Coltène Whaledent) are the only compounds to show high levels of accuracy and precision on both surface types. Our results show that data acquired from different impression media are not comparable, supporting calls for greater standardisation of methods in areal texture analysis.

  12. Accuracy and robustness of Kinect pose estimation in the context of coaching of elderly population.

    PubMed

    Obdrzálek, Stepán; Kurillo, Gregorij; Ofli, Ferda; Bajcsy, Ruzena; Seto, Edmund; Jimison, Holly; Pavel, Michael

    2012-01-01

The Microsoft Kinect camera is becoming increasingly popular in many areas aside from entertainment, including human activity monitoring and rehabilitation. Many people, however, fail to consider the reliability and accuracy of Kinect human pose estimation when they depend on it as a measuring system. In this paper we compare Kinect pose estimation (skeletonization) with more established techniques for pose estimation from motion capture data, examining the accuracy of joint localization and the robustness of pose estimation with respect to orientation and occlusions. We evaluated six physical exercises aimed at coaching of the elderly population. Experimental results present pose estimation accuracy rates and corresponding error bounds for the Kinect system.
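At its core, the joint-localization comparison described in this abstract reduces to per-joint Euclidean distances between the Kinect skeleton and the motion-capture reference. A minimal sketch; the joint names and coordinates below are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical 3-D joint positions in metres: rows are joints,
# columns are x, y, z. Values are illustrative only.
kinect = np.array([[0.10, 1.20, 2.00],   # e.g. left wrist
                   [0.32, 0.95, 2.05]])  # e.g. right elbow
mocap = np.array([[0.12, 1.18, 2.03],
                  [0.30, 0.98, 2.01]])

# One Euclidean localization error per joint.
errors = np.linalg.norm(kinect - mocap, axis=1)
mean_error = errors.mean()   # average joint-localization error
worst_error = errors.max()   # worst-case joint for this pose
```

Aggregating such errors over frames, subjects, body orientations, and occlusion conditions yields the accuracy rates and error bounds the study reports.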

  13. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  14. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.

  15. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.
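The operational definitions used in this abstract (accuracy = matching the actual time to arrival, precision = intra-participant consistency) can be sketched numerically; the judgment values below are hypothetical:

```python
import statistics

def accuracy_and_precision(judgments, actual):
    """Accuracy in the sense above: how closely judgments match the
    actual time to arrival (mean signed error, 0 = unbiased).
    Precision: consistency across repeated judgments (sample
    standard deviation of the errors)."""
    errors = [j - actual for j in judgments]
    return statistics.mean(errors), statistics.stdev(errors)

# Hypothetical repeated judgments (seconds) of a 5.0 s docking time:
# this participant underestimates (poor accuracy) but responds
# consistently (good precision).
bias, spread = accuracy_and_precision([4.5, 4.6, 4.4, 4.5, 4.5], 5.0)
```

A negative bias here corresponds to the underestimation the study reports for women, while an unchanged spread corresponds to the absence of a sex difference in precision.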

  16. Accuracy and Precision of Partial-Volume Correction in Oncological PET/CT Studies.

    PubMed

    Cysouw, Matthijs C F; Kramer, Gerbrand Maria; Hoekstra, Otto S; Frings, Virginie; de Langen, Adrianus Johannes; Smit, Egbert F; van den Eertwegh, Alfons J M; Oprea-Lager, Daniela E; Boellaard, Ronald

    2016-10-01

    Accurate quantification of tracer uptake in small tumors using PET is hampered by the partial-volume effect as well as by the method of volume-of-interest (VOI) delineation. This study aimed to investigate the effect of partial-volume correction (PVC) combined with several VOI methods on the accuracy and precision of quantitative PET.

  17. Improving the accuracy and precision of cognitive testing in mild dementia.

    PubMed

    Wouters, Hans; Appels, Bregje; van der Flier, Wiesje M; van Campen, Jos; Klein, Martin; Zwinderman, Aeilko H; Schmand, Ben; van Gool, Willem A; Scheltens, Philip; Lindeboom, Robert

    2012-03-01

    The CAMCOG, ADAS-cog, and MMSE, designed to grade global cognitive ability in dementia have inadequate precision and accuracy in distinguishing mild dementia from normal ageing. Adding neuropsychological tests to their scale might improve precision and accuracy in mild dementia. We, therefore, pooled neuropsychological test-batteries from two memory clinics (ns = 135 and 186) with CAMCOG data from a population study and 2 memory clinics (n = 829) and ADAS-cog data from 3 randomized controlled trials (n = 713) to estimate a common dimension of global cognitive ability using Rasch analysis. Item difficulties and individuals' global cognitive ability levels were estimated. Difficulties of 57 items (of 64) could be validly estimated. Neuropsychological tests were more difficult than the CAMCOG, ADAS-cog, and MMSE items. Most neuropsychological tests had difficulties in the ability range of normal ageing to mild dementia. Higher than average ability levels were more precisely measured when neuropsychological tests were added to the MMSE than when these were measured with the MMSE alone. Diagnostic accuracy in mild dementia was consistently better after adding neuropsychological tests to the MMSE. We conclude that extending dementia specific instruments with neuropsychological tests improves measurement precision and accuracy of cognitive impairment in mild dementia.

  18. The Plus or Minus Game--Teaching Estimation, Precision, and Accuracy

    ERIC Educational Resources Information Center

    Forringer, Edward R.; Forringer, Richard S.; Forringer, Daniel S.

    2016-01-01

    A quick survey of physics textbooks shows that many (Knight, Young, and Serway for example) cover estimation, significant digits, precision versus accuracy, and uncertainty in the first chapter. Estimation "Fermi" questions are so useful that there has been a column dedicated to them in "TPT" (Larry Weinstein's "Fermi…

  19. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

Title 40, Protection of Environment (2011-07-01 edition), § 80.584: What are the precision and accuracy criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (a) Precision....

  20. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

Title 40, Protection of Environment (2010-07-01 edition), § 80.584: What are the precision and accuracy criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (a) Precision....

  1. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    NASA Astrophysics Data System (ADS)

    Ballesteros-Zebadúa, P.; Lárrga-Gutierrez, J. M.; García-Garduño, O. A.; Juárez, J.; Prieto, I.; Moreno-Jiménez, S.; Celis, M. A.

    2008-08-01

Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures can be suitable as quality assurance routines that allow the verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be considered in the treatment-planning volume margins.

  2. Commissioning Procedures for Mechanical Precision and Accuracy in a Dedicated LINAC

    SciTech Connect

    Ballesteros-Zebadua, P.; Larrga-Gutierrez, J. M.; Garcia-Garduno, O. A.; Juarez, J.; Prieto, I.; Moreno-Jimenez, S.; Celis, M. A.

    2008-08-11

Mechanical precision measurements are fundamental procedures for the commissioning of a dedicated LINAC. At our Radioneurosurgery Unit, these procedures can be suitable as quality assurance routines that allow the verification of the equipment's geometrical accuracy and precision. In this work mechanical tests were performed for gantry and table rotation, obtaining mean associated uncertainties of 0.3 mm and 0.71 mm, respectively. Using an anthropomorphic phantom and a series of localized surface markers, isocenter accuracy was shown to be smaller than 0.86 mm for radiosurgery procedures and 0.95 mm for fractionated treatments with mask. All uncertainties were below tolerances. The highest contribution to mechanical variations is due to table rotation, so it is important to correct variations using a localization frame with printed overlays. Knowledge of the mechanical precision allows the statistical errors to be considered in the treatment-planning volume margins.

  3. Evaluation of the Accuracy and Precision of a Next Generation Computer-Assisted Surgical System

    PubMed Central

    Dai, Yifei; Liebelt, Ralph A.; Gao, Bo; Gulbransen, Scott W.; Silver, Xeve S.

    2015-01-01

Background Computer-assisted orthopaedic surgery (CAOS) improves accuracy and reduces outliers in total knee arthroplasty (TKA). However, during the evaluation of CAOS systems, the error generated by the guidance system (hardware and software) has generally been overlooked. Limited information is available on the accuracy and precision of specific CAOS systems with regard to intraoperative final resection measurements. The purpose of this study was to assess the accuracy and precision of a next generation CAOS system and investigate the impact of extra-articular deformity on the system-level errors generated during intraoperative resection measurement. Methods TKA surgeries were performed on twenty-eight artificial knee inserts with various types of extra-articular deformity (12 neutral, 12 varus, and 4 valgus). Surgical resection parameters (resection depths and alignment angles) were compared between postoperative three-dimensional (3D) scan-based measurements and intraoperative CAOS measurements. Using the 3D scan-based measurements as control, the accuracy (mean error) and precision (associated standard deviation) of the CAOS system were assessed. The impact of extra-articular deformity on the CAOS system measurement errors was also investigated. Results The pooled mean unsigned errors generated by the CAOS system were equal to or less than 0.61 mm and 0.64° for resection depths and alignment angles, respectively. No clinically meaningful biases were found in the measurements of resection depths (< 0.5 mm) and alignment angles (< 0.5°). Extra-articular deformity did not show a significant effect on the measurement errors generated by the CAOS system investigated. Conclusions This study presented a methodology and workflow to assess the system-level accuracy and precision of CAOS systems. The data demonstrated that the CAOS system investigated can offer accurate and precise intraoperative measurements of TKA resection parameters, regardless of the presence

  4. Robustness of single-electron pumps at sub-ppm current accuracy level

    NASA Astrophysics Data System (ADS)

    Stein, F.; Scherer, H.; Gerster, T.; Behr, R.; Götz, M.; Pesel, E.; Leicht, C.; Ubbelohde, N.; Weimann, T.; Pierz, K.; Schumacher, H. W.; Hohls, F.

    2017-02-01

We report on characterizations of single-electron pumps at the highest accuracy level, enabled by improvements of the small-current measurement technique. With these improvements a new accuracy record in measurements on single-electron pumps is demonstrated: a relative combined uncertainty of 0.16 µA·A⁻¹ was reached within less than one day of measurement time. Additionally, robustness tests of pump operation on a sub-ppm level revealed a good stability of tunable-barrier single-electron pumps against variations in the operating parameters.

  5. Evaluation of precision and accuracy assessment of different 3-D surface imaging systems for biomedical purposes.

    PubMed

    Eder, Maximilian; Brockmann, Gernot; Zimmermann, Alexander; Papadopoulos, Moschos A; Schwenzer-Zimmerer, Katja; Zeilhofer, Hans Florian; Sader, Robert; Papadopulos, Nikolaos A; Kovacs, Laszlo

    2013-04-01

Three-dimensional (3-D) surface imaging has gained clinical acceptance, especially in the field of cranio-maxillo-facial and plastic, reconstructive, and aesthetic surgery. Six scanners based on different scanning principles (Minolta Vivid 910®, Polhemus FastSCAN™, GFM PRIMOS®, GFM TopoCAM®, Steinbichler Comet® Vario Zoom 250, 3dMD DSP 400®) were used to measure five sheep skulls of different sizes. In three areas with varying anatomical complexity (area 1 = high; 2 = moderate; 3 = low), 56 distances between 20 landmarks were defined on each skull. Manual measurement (MM), coordinate machine measurements (CMM) and computer tomography (CT) measurements were used to define a reference method for further precision and accuracy evaluation of the different 3-D scanning systems. MM showed high correlation to CMM and CT measurements (both r = 0.987; p < 0.001) and served as the reference method. TopoCAM®, Comet® and Vivid 910® showed the highest measurement precision over all areas of complexity; Vivid 910®, Comet® and DSP 400® demonstrated the highest accuracy over all areas, with Vivid 910® being most accurate in areas 1 and 3, and the DSP 400® most accurate in area 2. In accordance with the measured distance length, most 3-D devices present higher measurement precision and accuracy for large distances and lower degrees of precision and accuracy for short distances. In general, higher degrees of complexity are associated with lower 3-D assessment accuracy, suggesting that for optimal results, different types of scanners should be applied to specific clinical applications and medical problems according to their special construction designs and characteristics.

  6. A Comparative Study of Precise Point Positioning (PPP) Accuracy Using Online Services

    NASA Astrophysics Data System (ADS)

    Malinowski, Marcin; Kwiecień, Janusz

    2016-12-01

Precise Point Positioning (PPP) is a technique used to determine the position of a receiver antenna without communication with a reference station. It may be an alternative to differential measurements, where maintaining a connection with a single RTK station or a regional RTN network of reference stations is necessary. This situation is especially common in areas with poorly developed ground-station infrastructure. Much of the research conducted so far on the PPP technique has focused on the processing of entire-day observation sessions. This paper, however, presents a comparative analysis of the accuracy of absolute position determination from observations lasting between 1 and 7 hours, using four permanent services that perform PPP calculations: Automatic Precise Positioning Service (APPS), Canadian Spatial Reference System Precise Point Positioning (CSRS-PPP), GNSS Analysis and Positioning Software (GAPS) and magicPPP - Precise Point Positioning Solution (magicGNSS). On the basis of the acquired measurement results, it can be concluded that measurements at least two hours long allow an absolute position to be obtained with an accuracy of 2-4 cm. The impact of simultaneous positioning of a three-point test network on the accuracy of the horizontal distance and the relative height difference between the measured triangle vertices was also evaluated. Distances and relative height differences between points of the triangular test network measured with a Leica TDRA6000 laser station were adopted as references. The analyses show that measurement sessions at least two hours long can be used to determine the horizontal distance or the height difference with an accuracy of 1-2 cm. Rapid products employed in PPP calculations reached a coordinate accuracy close to that of elaborations employing Final products.

  7. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy, to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  8. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. 
If adopted, this new
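The premise of the record above, that 16-bit "half" precision suffices where variables are only known approximately, rests on the rounding error of float16 storage being far below observational uncertainty. A NumPy sketch of that bound (illustrative only, not the authors' Lorenz '96 code):

```python
import numpy as np

rng = np.random.default_rng(0)
# "True" small-scale state in float64, kept within the normal
# (non-subnormal) range of float16 for a clean error bound.
truth = rng.uniform(0.5, 2.0, size=1000)
stored = truth.astype(np.float16)   # reduced-precision representation

# Relative rounding error of each stored value. Half precision carries
# an 11-bit significand, so round-to-nearest guarantees a relative
# error of at most 2**-11 (about 4.9e-4) for normal-range values.
rel_err = np.abs(stored.astype(np.float64) - truth) / truth
max_rel_err = rel_err.max()   # stays at or below 2**-11
```

A relative error of order 1e-4 is negligible next to the uncertainty with which small-scale atmospheric variables are observed, which is the argument for spending the saved bits on higher resolution instead.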

  9. Evaluation of precision and accuracy of selenium measurements in biological materials using neutron activation analysis

    SciTech Connect

    Greenberg, R.R.

    1988-01-01

In recent years, the accurate determination of selenium in biological materials has become increasingly important in view of the essential nature of this element for human nutrition and its possible role as a protective agent against cancer. Unfortunately, the accurate determination of selenium in biological materials is often difficult for most analytical techniques for a variety of reasons, including interferences, complicated selenium chemistry due to the presence of this element in multiple oxidation states and in a variety of different organic species, stability and resistance to destruction of some of these organo-selenium species during acid dissolution, volatility of some selenium compounds, and potential for contamination. Neutron activation analysis (NAA) can be one of the best analytical techniques for selenium determinations in biological materials for a number of reasons. Currently, precision at the 1% level (1s) and overall accuracy at the 1 to 2% level (95% confidence interval) can be attained at the U.S. National Bureau of Standards (NBS) for selenium determinations in biological materials when counting statistics are not limiting (using the (75)Se isotope). An example of this level of precision and accuracy is summarized. Achieving this level of accuracy, however, requires strict attention to all sources of systematic error. Precise and accurate results can also be obtained after radiochemical separations.

  10. Large format focal plane array integration with precision alignment, metrology and accuracy capabilities

    NASA Astrophysics Data System (ADS)

    Neumann, Jay; Parlato, Russell; Tracy, Gregory; Randolph, Max

    2015-09-01

Focal plane alignment for large format arrays and faster optical systems requires enhanced precision methodology and stability over temperature. The increase in focal plane array size continues to drive the alignment capability. Depending on the optical system, a focal plane flatness of less than 25 µm (0.001") is required over transition temperatures from ambient to cooled operating temperatures. The focal plane flatness requirement must also be maintained in airborne or launch vibration environments. This paper addresses the challenge of detector integration into the focal plane module and housing assemblies, the methodology to reduce error terms during integration, and the evaluation of thermal effects. The driving factors influencing the alignment accuracy include datum transfers, material effects over temperature, alignment stability over test, adjustment precision, and traceability to NIST standards. The FPA module design and alignment methodology reduce the error terms by minimizing the measurement transfers to the housing. In the design, selecting materials with matched coefficients of thermal expansion minimizes both the physical shift over temperature and the stress induced in the detector. When required, the co-registration of focal planes and filters can achieve submicron relative positioning by applying precision equipment, interferometry and piezoelectric positioning stages. All measurements and characterizations maintain traceability to NIST standards. The metrology characterizes the accuracy, repeatability and precision of the measurements.

  11. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    PubMed

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  12. Theoretical study of precision and accuracy of strain analysis by nano-beam electron diffraction.

    PubMed

    Mahr, Christoph; Müller-Caspary, Knut; Grieb, Tim; Schowalter, Marco; Mehrtens, Thorsten; Krause, Florian F; Zillmann, Dennis; Rosenauer, Andreas

    2015-11-01

    Measurement of lattice strain is important to characterize semiconductor nanostructures. As strain has large influence on the electronic band structure, methods for the measurement of strain with high precision, accuracy and spatial resolution in a large field of view are mandatory. In this paper we present a theoretical study of precision and accuracy of measurement of strain by convergent nano-beam electron diffraction. It is found that the accuracy of the evaluation suffers from halos in the diffraction pattern caused by a variation of strain within the area covered by the focussed electron beam. This effect, which is expected to be strong at sharp interfaces between materials with different lattice plane distances, will be discussed for convergent-beam electron diffraction patterns using a conventional probe and for patterns formed by a precessing electron beam. Furthermore, we discuss approaches to optimize the accuracy of strain measured at interfaces. The study is based on the evaluation of diffraction patterns simulated for different realistic structures that have been investigated experimentally in former publications. These simulations account for thermal diffuse scattering using the frozen-lattice approach and the modulation-transfer function of the image-recording system. The influence of Poisson noise is also investigated.

  13. Accelerator mass spectrometry best practices for accuracy and precision in bioanalytical (14)C measurements.

    PubMed

    Vogel, John S; Giacomo, Jason A; Schulze-König, Tim; Keck, Bradly D; Lohstroh, Peter; Dueker, Stephen

    2010-03-01

Accelerator mass spectrometers have an energy acceleration and charge exchange between mass definition stages to destroy molecular isobars and allow single-ion counting of long-lived isotopes such as (14)C (t½ = 5730 years). 'Low' voltage accelerations to 200 kV allow laboratory-sized accelerator mass spectrometry instruments for bioanalytical quantitation of (14)C to 2-3% precision and accuracy in isolated biochemical fractions. After demonstrating this accuracy and precision for our new accelerator mass spectrometer, we discuss the critical aspects of maintaining quantitative accuracy from the defined biological fraction to the accelerator mass spectrometry quantitation. These aspects include sufficient sample mass for routine rapid sample preparation, isotope dilution to assure this mass, isolation of the carbon from other sample combustion gases, and use of high-efficiency biochemical separations. This review addresses a bioanalytical audience, who should know that high-accuracy data on physiochemical processes within living human subjects are available, as long as a (14)C quantitation can be made indicative of the physiochemistry of interest.

  14. Using statistics and software to maximize precision and accuracy in U-Pb geochronological measurements

    NASA Astrophysics Data System (ADS)

    McLean, N.; Bowring, J. F.; Bowring, S. A.

    2009-12-01

    Uncertainty in U-Pb geochronology results from a wide variety of factors, including isotope ratio determinations, common Pb corrections, initial daughter product disequilibria, instrumental mass fractionation, isotopic tracer calibration, and U decay constants and isotopic composition. The relative contribution of each depends on the proportion of radiogenic to common Pb, the measurement technique, and the quality of systematic error determinations. Random and systematic uncertainty contributions may be propagated into individual analyses or for an entire population, and must be propagated correctly to accurately interpret data. Tripoli and U-Pb_Redux comprise a new data reduction and error propagation software package that combines robust cycle measurement statistics with rigorous multivariate data analysis and presents the results graphically and interactively. Maximizing the precision and accuracy of a measurement begins with correct appraisal and codification of the systematic and random errors for each analysis. For instance, a large dataset of total procedural Pb blank analyses defines a multivariate normal distribution, describing the mean of and variation in isotopic composition (IC) that must be subtracted from each analysis. Uncertainty in the size and IC of each Pb blank is related to the (random) uncertainty in ratio measurements and the (systematic) uncertainty involved in tracer subtraction. Other sample and measurement parameters can be quantified in the same way, represented as statistical distributions that describe their uncertainty or variation, and are input into U-Pb_Redux as such before the raw sample isotope ratios are measured. During sample measurement, U-Pb_Redux and Tripoli can relay cycle data in real time, calculating a date and uncertainty for each new cycle or block. The results are presented in U-Pb_Redux as an interactive user interface with multiple visualization tools. One- and two-dimensional plots of each calculated date and

  15. Robust Flight Path Determination for Mars Precision Landing Using Genetic Algorithms

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Kohen, Hamid

    1997-01-01

    This paper documents the application of genetic algorithms (GAs) to the problem of robust flight path determination for Mars precision landing. The robust flight path problem is defined here as the determination of the flight path which delivers a low-lift open-loop controlled vehicle to its desired final landing location while minimizing the effect of perturbations due to uncertainty in the atmospheric model and entry conditions. The genetic algorithm was capable of finding solutions which reduced the landing error from 111 km RMS radial (open-loop optimal) to 43 km RMS radial (optimized with respect to perturbations) using 200 hours of computation on an Ultra-SPARC workstation. Further reduction in the landing error is possible by going to closed-loop control which can utilize the GA optimized paths as nominal trajectories for linearization.

  16. Accuracy or precision: Implications of sample design and methodology on abundance estimation

    USGS Publications Warehouse

    Kowalewski, Lucas K.; Chizinski, Christopher J.; Powell, Larkin A.; Pope, Kevin L.; Pegg, Mark A.

    2015-01-01

Sampling by spatially replicated counts (point-counts) is an increasingly popular method of estimating population size of organisms. Challenges exist when sampling by the point-count method: it is often impractical to sample the entire area of interest and impossible to detect every individual present. Ecologists encounter logistical limitations that force them to sample either a few large sample units or many small sample units, introducing biases to sample counts. We generated a computer environment and simulated sampling scenarios to test the role of number of samples, sample unit area, number of organisms, and distribution of organisms in the estimation of population sizes using N-mixture models. Many sample units of small area provided estimates that were consistently closer to true abundance than scenarios with a few sample units of large area. However, scenarios with a few sample units of large area provided more precise abundance estimates than those with many sample units of small area. Accuracy and precision of abundance estimates should be considered during the sample design process, with study goals and objectives fully recognized; in practice, however, and with consequence, this consideration is often an afterthought that occurs during data analysis.

  17. Optimizing ELISAs for precision and robustness using laboratory automation and statistical design of experiments.

    PubMed

    Joelsson, Daniel; Moravec, Phil; Troutman, Matthew; Pigeon, Joseph; DePhillips, Pete

    2008-08-20

    Transferring manual ELISAs to automated platforms requires optimizing the assays for each particular robotic platform. These optimization experiments are often time consuming and difficult to perform using a traditional one-factor-at-a-time strategy. In this manuscript we describe the development of an automated process using statistical design of experiments (DOE) to quickly optimize immunoassays for precision and robustness on the Tecan EVO liquid handler. By using fractional factorials and a split-plot design, five incubation time variables and four reagent concentration variables can be optimized in a short period of time.
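
The fractional factorial approach mentioned above cuts the run count by aliasing one factor with a high-order interaction of the others. A minimal sketch of a 16-run 2^(5-1) design in coded units follows, assuming the defining relation I = ABCDE; the factor labels are generic placeholders, not the assay's actual variables.

```python
# Sketch: a 16-run 2^(5-1) fractional factorial in coded units (-1/+1),
# with the fifth factor aliased as E = A*B*C*D (resolution V).
# Factor names are illustrative, not those of the cited study.
from itertools import product

runs = []
for a, b, c, d in product([-1, 1], repeat=4):
    e = a * b * c * d          # defining relation I = ABCDE
    runs.append({"A": a, "B": b, "C": c, "D": d, "E": e})

print(len(runs))  # 16 runs instead of 2**5 = 32 for the full factorial
for run in runs[:3]:
    print(run)
```

Because the design is resolution V, main effects and two-factor interactions remain unconfounded with each other, which is usually adequate for assay optimization.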

  18. Assessing accuracy and precision for field and laboratory data: a perspective in ecosystem restoration

    USGS Publications Warehouse

    Stapanian, Martin A.; Lewis, Timothy E; Palmer, Craig J.; Middlebrook Amos, Molly

    2016-01-01

Unlike most laboratory studies, rigorous quality assurance/quality control (QA/QC) procedures may be lacking in ecosystem restoration (“ecorestoration”) projects, despite legislative mandates in the United States. This is due, in part, to ecorestoration specialists making the false assumption that some types of data (e.g. discrete variables such as species identification and abundance classes) are not subject to evaluations of data quality. Moreover, emergent behavior manifested by complex, adapting, and nonlinear organizations responsible for monitoring the success of ecorestoration projects tends to unconsciously minimize disorder, QA/QC being an activity perceived as creating disorder. We discuss similarities and differences in assessing precision and accuracy for field and laboratory data. Although assessing precision and accuracy of ecorestoration field data is conceptually the same as for laboratory data, the manner in which these data quality attributes are assessed is different. From a sample analysis perspective, a field crew is comparable to a laboratory instrument that requires regular “recalibration,” with results obtained by experts at the same plot treated as laboratory calibration standards. Unlike laboratory standards and reference materials, the “true” value for many field variables is commonly unknown. In the laboratory, specific QA/QC samples assess error for each aspect of the measurement process, whereas field revisits assess precision and accuracy of the entire data collection process following initial calibration. Rigorous QA/QC data in an ecorestoration project are essential for evaluating the success of a project, and they provide the only objective “legacy” of the dataset for potential legal challenges and future uses.

  19. Mapping stream habitats with a global positioning system: Accuracy, precision, and comparison with traditional methods

    USGS Publications Warehouse

    Dauwalter, D.C.; Fisher, W.L.; Belt, K.C.

    2006-01-01

We tested the precision and accuracy of the Trimble GeoXT™ global positioning system (GPS) handheld receiver on point and area features and compared estimates of stream habitat dimensions (e.g., lengths and areas of riffles and pools) that were made in three different Oklahoma streams using the GPS receiver and a tape measure. The precision of differentially corrected GPS (DGPS) points was not affected by the number of GPS position fixes (i.e., geographic location estimates) averaged per DGPS point. Horizontal error of points ranged from 0.03 to 2.77 m and did not differ with the number of position fixes per point. The error of area measurements ranged from 0.1% to 110.1% but decreased as the area increased. Again, error was independent of the number of position fixes averaged per polygon corner. The estimates of habitat lengths, widths, and areas did not differ when measured using two methods of data collection (GPS and a tape measure), nor did the differences among methods change at three stream sites with contrasting morphologies. Measuring features with a GPS receiver was up to 3.3 times faster on average than using a tape measure, although signal interference from high streambanks or overhanging vegetation occasionally limited satellite signal availability and prolonged measurements with a GPS receiver. There were also no differences in precision of habitat dimensions when mapped using a continuous versus a position fix average GPS data collection method. Despite there being some disadvantages to using the GPS in stream habitat studies, measuring stream habitats with a GPS resulted in spatially referenced data that allowed the assessment of relative habitat position and changes in habitats over time, and was often faster than using a tape measure. For most spatial scales of interest, the precision and accuracy of DGPS data are adequate and have logistical advantages when compared to traditional methods of measurement. © 2006 Springer Science+Business Media

  20. Radiographic total disc replacement angle measurement accuracy using the Oxford Cobbometer: precision and bias

    PubMed Central

    Stafylas, Kosmas; McManus, John; Schizas, Constantin

    2008-01-01

Total disc replacement (TDR) clinical success has been reported to be related to the residual motion of the operated level. Thus, accurate measurement of TDR range of motion (ROM) is of utmost importance. One commonly used tool in measuring ROM is the Oxford Cobbometer. Little is known, however, on its accuracy (precision and bias) in measuring TDR angles. The aim of this study was to assess the ability of the Cobbometer to accurately measure radiographic TDR angles. An anatomically accurate synthetic L4–L5 motion segment was instrumented with a CHARITE artificial disc. The TDR angle and anatomical position between L4 and L5 were fixed to prohibit motion while the motion segment was radiographically imaged in various degrees of rotation and elevation, representing a sample of possible patient placement positions. An experienced observer made ten readings of the TDR angle using the Cobbometer at each different position. The Cobbometer readings were analyzed to determine measurement accuracy at each position. Furthermore, analysis of variance was used to study rotation and elevation of the motion segment as treatment factors. Cobbometer TDR angle measurements were most accurate (highest precision and lowest bias) at the centered position (95.5%), which placed the TDR directly in line with the x-ray beam source without any rotation. In contrast, the lowest accuracy (75.2%) was observed in the most rotated and off-centered view. A difference as high as 4° between readings at any individual position, and as high as 6° between all the positions, was observed. Furthermore, the Cobbometer was unable to detect the expected trend in TDR angle projection with changing position. Although the Cobbometer has been reported to be reliable in different clinical applications, it lacks the needed accuracy to measure TDR angles and ROM. More accurate ROM measurement methods need to be developed to help surgeons and researchers assess radiological success of TDRs. PMID:18496719

  1. A robust and high precision optimal explicit guidance scheme for solid motor propelled launch vehicles with thrust and drag uncertainty

    NASA Astrophysics Data System (ADS)

    Maity, Arnab; Padhi, Radhakant; Mallaram, Sanjeev; Mallikarjuna Rao, G.; Manickavasagam, M.

    2016-10-01

A new nonlinear optimal and explicit guidance law is presented in this paper for launch vehicles propelled by solid motors. It can ensure very high terminal precision despite not having exact knowledge of the thrust-time curve a priori. This work was motivated by its use for a carrier launch vehicle in a hypersonic mission, which demands an extremely narrow terminal accuracy window for the launch vehicle for successful initiation of operation of the hypersonic vehicle. The proposed explicit guidance scheme, which computes the optimal guidance command online, ensures the required stringent final conditions with high precision at the injection point. A key feature of the proposed guidance law is an innovative extension of the recently developed model predictive static programming guidance with flexible final time. A penalty function approach is also followed to meet the input and output inequality constraints throughout the vehicle trajectory. In this paper, the guidance law has been successfully validated in nonlinear six degree-of-freedom simulation studies by designing an inner-loop autopilot as well, which significantly enhances confidence in its usefulness. In addition to excellent nominal results, the proposed guidance has been found to have good robustness for perturbed cases as well.

  2. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions.

    PubMed

    Wells, Emma; Wolfe, Marlene K; Murray, Anna; Lantagne, Daniele

    2016-01-01

To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4-19% error), then test strips (5.2-48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5-11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14-37 for test strips and $33-609 for titration. 
Given the

  3. Accuracy, Precision, Ease-Of-Use, and Cost of Methods to Test Ebola-Relevant Chlorine Solutions

    PubMed Central

    Wells, Emma; Wolfe, Marlene K.; Murray, Anna; Lantagne, Daniele

    2016-01-01

To prevent transmission in Ebola Virus Disease (EVD) outbreaks, it is recommended to disinfect living things (hands and people) with 0.05% chlorine solution and non-living things (surfaces, personal protective equipment, dead bodies) with 0.5% chlorine solution. In the current West African EVD outbreak, these solutions (manufactured from calcium hypochlorite (HTH), sodium dichloroisocyanurate (NaDCC), and sodium hypochlorite (NaOCl)) have been widely used in both Ebola Treatment Unit and community settings. To ensure solution quality, testing is necessary; however, test method appropriateness for these Ebola-relevant concentrations has not previously been evaluated. We identified fourteen commercially-available methods to test Ebola-relevant chlorine solution concentrations, including two titration methods, four DPD dilution methods, and six test strips. We assessed these methods by: 1) determining accuracy and precision by measuring in quintuplicate five different 0.05% and 0.5% chlorine solutions manufactured from NaDCC, HTH, and NaOCl; 2) conducting volunteer testing to assess ease-of-use; and 3) determining costs. Accuracy was greatest in titration methods (reference-12.4% error compared to reference method), then DPD dilution methods (2.4–19% error), then test strips (5.2–48% error); precision followed this same trend. Two methods had an accuracy of <10% error across all five chlorine solutions with good precision: Hach digital titration for 0.05% and 0.5% solutions (recommended for contexts with trained personnel and financial resources), and Serim test strips for 0.05% solutions (recommended for contexts where rapid, inexpensive, and low-training burden testing is needed). Measurement error from test methods not including pH adjustment varied significantly across the five chlorine solutions, which had pH values 5–11. Volunteers found test strips easiest and titration hardest; costs per 100 tests were $14–37 for test strips and $33–609 for titration

  4. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    PubMed Central

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-01-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl− electrodes, 10 F− electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity. PMID:28303939
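
The Nernstian limit of 59.2/n mV per decade quoted above, and the effect of electrically summing N electrode potentials, can be sketched in a few lines. This is a simplified illustration of the scaling, not the authors' implementation.

```python
# Sketch: why serially integrating N ion-selective electrodes multiplies
# the response slope. A single ISE follows E = E0 + S*log10(activity),
# with |S| bounded by the Nernstian slope 59.2/n mV at 25 degrees C.
def nernst_slope_mV(n):
    """Theoretical slope magnitude (mV/decade) for an ion of charge n."""
    return 59.2 / n

def summed_slope_mV(single_slope, n_electrodes):
    """Electrically summing N electrode potentials scales the slope by N."""
    return single_slope * n_electrodes

print(nernst_slope_mV(1))                     # 59.2 mV/decade for Cl- (n = 1)
print(f"{summed_slope_mV(57.2, 30):.1f}")     # ~1716, near the reported 1711.3
```

The larger summed slope means a given concentration change produces a proportionally larger potential change, which is what improves the resolvable accuracy and precision.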

  5. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry- based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.
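
The linear relation between Cox and OR noted above can be illustrated for simple nitrogen- and sulfur-free compounds, where Cox = (2O − H)/C and OR = 1 − Cox/4. This is the textbook N-free approximation, not the full treatment the authors apply to real biomass.

```python
# Sketch: mean carbon oxidation state (Cox) and oxidative ratio (OR)
# for nitrogen- and sulfur-free CxHyOz compounds. Cox spans -4 (CH4)
# to +4 (CO2), matching the range given in the abstract.
def cox(C, H, O):
    """Mean carbon oxidation state from a CxHyOz elemental formula."""
    return (2 * O - H) / C

def oxidative_ratio(cox_value):
    """Moles of O2 exchanged per mole of CO2 (N-free approximation)."""
    return 1 - cox_value / 4

print(cox(1, 4, 0))                # methane: -4.0
print(cox(1, 0, 2))                # carbon dioxide: +4.0
glucose = cox(6, 12, 6)            # 0.0
print(oxidative_ratio(glucose))    # 1.0 -> photosynthesis O2:CO2 ratio of 1
```
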

  6. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    NASA Astrophysics Data System (ADS)

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl‑ electrodes, 10 F‑ electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.

  7. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision.

    PubMed

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-17

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, 57.3 to 564.7 mV and 57.7 to 576.2 mV by electronic integrated 30 Cl(-) electrodes, 10 F(-) electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited for all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.

  8. To address accuracy and precision using methods from analytical chemistry and computational physics.

    PubMed

    Kozmutza, Cornelia; Picó, Yolanda

    2009-04-01

In this work, pesticides were determined by liquid chromatography-mass spectrometry (LC-MS). In the present study, the occurrence of imidacloprid in 343 samples of oranges, tangerines, date plums, and watermelons from the Valencian Community (Spain) was investigated. Nine additional pesticides were chosen because they have been recommended for orchard treatment together with imidacloprid. Mulliken population analysis was applied to present the charge distribution in imidacloprid. Partitioned energy terms and virial ratios were calculated for certain molecules entering into interaction. A new technique based on the comparison of the decomposed total energy terms at various configurations is demonstrated in this work. The interaction ability could be established correctly in the studied case. An attempt is also made to address accuracy and precision, quantities that are well known in experimental measurements. If a precise theoretical description is achieved for the contributing monomers and for the interacting complex structure, some properties of the latter system can be predicted to quite good accuracy. Based on simple hypothetical considerations, we estimate the impact of applying computations on reducing the amount of analytical work.

  9. Automated tracking of colloidal clusters with sub-pixel accuracy and precision

    NASA Astrophysics Data System (ADS)

    van der Wel, Casper; Kraft, Daniela J.

    2017-02-01

Quantitative tracking of features from video images is a basic technique employed in many areas of science. Here, we present a method for the tracking of features that partially overlap, in order to be able to track so-called colloidal molecules. Our approach implements two improvements into existing particle tracking algorithms. Firstly, we use the history of previously identified feature locations to successfully find their positions in consecutive frames. Secondly, we present a framework for non-linear least-squares fitting to summed radial model functions and analyze the accuracy (bias) and precision (random error) of the method on artificial data. We find that our tracking algorithm correctly identifies overlapping features with an accuracy below 0.2% of the feature radius and a precision of 0.1 to 0.01 pixels for a typical image of a colloidal cluster. Finally, we use our method to extract the three-dimensional diffusion tensor from the Brownian motion of colloidal dimers.
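
The bias/precision validation on artificial data described above can be mimicked with a much simpler estimator: generate a synthetic one-dimensional feature with noise and recover its position by an intensity-weighted centroid rather than the authors' summed radial model fits. All parameters below are illustrative.

```python
# Sketch: measuring tracking accuracy (bias) and precision (random error)
# on artificial data. A 1-D Gaussian "feature" is sampled on a pixel grid
# with additive noise, and its sub-pixel position is recovered by an
# intensity-weighted centroid. Parameters are illustrative only.
import math
import random

def synth_profile(center, width=2.0, n_pixels=21, noise=0.02, rng=random):
    return [math.exp(-0.5 * ((x - center) / width) ** 2)
            + rng.gauss(0.0, noise) for x in range(n_pixels)]

def centroid(profile):
    total = sum(profile)
    return sum(x * v for x, v in enumerate(profile)) / total

rng = random.Random(42)
true_center = 10.3
estimates = [centroid(synth_profile(true_center, rng=rng)) for _ in range(500)]
mean_est = sum(estimates) / len(estimates)
bias = mean_est - true_center                    # accuracy
spread = (sum((e - mean_est) ** 2 for e in estimates)
          / (len(estimates) - 1)) ** 0.5         # precision
print(f"bias = {bias:+.3f} px, random error = {spread:.3f} px")
```

With mild noise, both bias and random error come out well below one pixel, which is the sense in which such estimators are "sub-pixel".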

  10. Estimates of laboratory accuracy and precision on Hanford waste tank samples

    SciTech Connect

    Dodd, D.A.

    1995-02-02

A review was performed on three sets of analyses generated by Battelle, Pacific Northwest Laboratories and three sets generated by the Westinghouse Hanford Company 222-S Analytical Laboratory. Laboratory accuracy and precision were estimated by analyte and are reported in tables. The source set used to generate these estimates is of limited size but does include the physical forms, liquid and solid, that are representative of samples from tanks to be characterized. The estimates were published as an aid to programs developing data quality objectives in which specified limits are established. Data resulting from routine analyses of waste matrices can be expected to be bounded by the precision and accuracy estimates in the tables. These tables do not preclude or discourage direct negotiations between program and laboratory personnel when establishing bounding conditions. Programmatic requirements different from those listed may be reliably met on specific measurements and matrices. It should be recognized, however, that these estimates are specific to waste tank matrices and may not be indicative of performance on samples from other sources.

  11. Training to Improve Precision and Accuracy in the Measurement of Fiber Morphology

    PubMed Central

    Jeon, Jun; Wade, Mary Beth; Luong, Derek; Palmer, Xavier-Lewis; Bharti, Kapil; Simon, Carl G.

    2016-01-01

An estimated $7.1 billion a year is lost to irreproducibility in pre-clinical data arising from errors in data analysis and reporting. Therefore, developing tools to improve measurement comparability is paramount. Recently, an open source tool, DiameterJ, has been deployed for the automated analysis of scanning electron micrographs of fibrous scaffolds designed for tissue engineering applications. DiameterJ performs hundreds to thousands of scaffold fiber diameter measurements from a single micrograph within a few seconds, along with a variety of other scaffold morphological features, which enables a more rigorous and thorough assessment of scaffold properties. Herein, an online, publicly available training module is introduced for educating DiameterJ users on how to effectively analyze scanning electron micrographs of fibers and the large volume of data that a DiameterJ analysis yields. The end goal of this training was to improve user data analysis and reporting to enhance reproducibility of analysis of nanofiber scaffolds. User performance was assessed before and after training to evaluate the effectiveness of the training modules. Users were asked to use DiameterJ to analyze reference micrographs of fibers that had known diameters. The results showed that training improved the accuracy and precision of measurements of fiber diameter in scanning electron micrographs. Training also improved the precision of measurements of pore area, porosity, intersection density, and characteristic fiber length between fiber intersections. These results demonstrate that the DiameterJ training module improves precision and accuracy in fiber morphology measurements, which will lead to enhanced data comparability. PMID:27907145

  12. Freehand liver volumetry by using an electromagnetic pen tablet: accuracy, precision, and rapidity.

    PubMed

    Perandini, Simone; Faccioli, Niccolò; Inama, Marco; Pozzi Mucelli, Roberto

    2011-04-01

The purpose of this study is to assess the accuracy, precision, and rapidity of liver volumes calculated by using a freehand electromagnetic pen tablet contour-tracing method as compared with the volumes calculated by using the standard optical mouse contour-tracing method. The imaging data used as input for accuracy and precision testing were computed by software developed in our institution. This software can generate models of solid organs and allows both standard mouse-based and electromagnetic pen-driven segmentation (number of data sets, n = 70). The images used as input for rapidity testing were partly computed by modeling software (n = 70) and partly selected from contrast-enhanced computed tomography (CT) examinations (n = 12). Mean volumes and the time required to perform the segmentation, along with standard deviation and range values for both techniques, were calculated. Student's t test was used to assess the significance of differences in mean volumes and times calculated by using both segmentation techniques on phantom and CT data sets; P values were also calculated. The mean volume difference was significantly lower with the use of the freehand electromagnetic pen as compared with the optical mouse (0.2% vs. 1.8%; P < .001). The mean segmentation time per patient was significantly shorter with the use of the freehand electromagnetic pen contour-tracing method (354.5 vs. 499.1 s on phantoms; 457.4 vs. 610.0 s on CT images; P < .001). The freehand electromagnetic pen-based volumetric technique represents a technologic advancement over manual mouse-based contour-tracing because of its superior accuracy and the considerably shorter time required. Further studies focused on intra- and interobserver variability of the technique need to be performed before its introduction into clinical application.

  13. Keystroke dynamics and timing: accuracy, precision and difference between hands in pianist's performance.

    PubMed

    Minetti, Alberto E; Ardigò, Luca P; McKee, Tom

    2007-01-01

A commercially available acoustic grand piano, originally provided with keystroke speed sensors, is proposed as a standard instrument to quantitatively assess the technical side of a pianist's performance, after the mechanical characteristics of the keyboard have been measured. We found a positional dependence of the relationship between the applied force and the resulting downstroke speed (i.e. treble keys descend fastest) due to the different hammer and hammer-shaft mass to be accelerated. When this effect was removed by custom software, the ability of 14 pianists was analysed in terms of variability in stroke intervals and keystroke speeds. C-major scales played by separate hands at different imposed tempos and at 5 subjectively chosen graded force levels were analysed to gain insight into the achieved neuromuscular control. Accuracy and precision of time intervals and descent velocity of keystrokes were obtained by processing the generated MIDI files. The results quantitatively show: the difference between hands, the trade-offs between force range and tempo and between time-interval precision and tempo, the lower precision of descent speed associated with 'soft' playing, etc. These results reflect well-established physiological and motor control characteristics of our movement system. Apart from revealing fundamental aspects of pianism, the proposed method could be used as a standard tool for ergonomic (e.g. the mechanical work and power of playing), didactic and rehabilitation monitoring of pianists.

  14. Improvement in precision, accuracy, and efficiency in standardizing the characterization of granular materials

    SciTech Connect

    Tucker, Jonathan R.; Shadle, Lawrence J.; Benyahia, Sofiane; Mei, Joseph; Guenther, Chris; Koepke, M. E.

    2013-01-01

Useful prediction of the kinematics, dynamics, and chemistry of a system relies on precision and accuracy in the quantification of component properties, operating mechanisms, and collected data. In an attempt to emphasize, rather than gloss over, the benefit of proper characterization to fundamental investigations of multiphase systems incorporating solid particles, a set of procedures was developed and implemented to provide a revised methodology with the desirable attributes of reduced uncertainty, expanded relevance and detail, and higher throughput. The result is better, faster, cheaper characterization of multiphase systems. Methodologies are presented to characterize particle size, shape, size distribution, density (particle, skeletal and bulk), minimum fluidization velocity, void fraction, particle porosity, and assignment within the Geldart classification. A novel form of the Ergun equation was used to determine the bulk void fractions and particle density. The accuracy of the properties-characterization methodology was validated on materials of known properties prior to testing materials of unknown properties. Several of the standard present-day techniques were scrutinized and improved upon where appropriate. Validity, accuracy, and repeatability were assessed for the procedures presented and deemed higher than those of present-day techniques. A database of over seventy materials has been developed to assist in model validation efforts and future design
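    The Ergun equation referenced above ties the pressure gradient across a packed bed to its void fraction, so a measured pressure drop can be inverted numerically for the bulk void fraction. The sketch below uses the textbook form of the equation, not the paper's novel form, and all symbols and operating values are illustrative assumptions:

```python
def ergun_dp_per_length(u, eps, d_p, rho, mu):
    """Pressure drop per unit bed length (Pa/m), textbook Ergun equation.

    u: superficial velocity (m/s), eps: void fraction, d_p: particle
    diameter (m), rho: fluid density (kg/m^3), mu: viscosity (Pa*s).
    """
    viscous = 150.0 * mu * u * (1.0 - eps) ** 2 / (d_p ** 2 * eps ** 3)
    inertial = 1.75 * rho * u ** 2 * (1.0 - eps) / (d_p * eps ** 3)
    return viscous + inertial


def void_fraction_from_dp(target_dp, u, d_p, rho, mu, lo=0.30, hi=0.70):
    """Invert the Ergun equation for void fraction by bisection; the
    pressure drop decreases monotonically as the bed becomes more porous."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if ergun_dp_per_length(u, mid, d_p, rho, mu) > target_dp:
            lo = mid  # predicted drop too high -> true bed is more porous
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    For example, a bed of 500 µm particles fluidized with air-like properties at 0.1 m/s recovers its assumed void fraction of 0.45 from the computed pressure gradient.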

  15. Slight pressure imbalances can affect accuracy and precision of dual inlet-based clumped isotope analysis.

    PubMed

    Fiebig, Jens; Hofmann, Sven; Löffler, Niklas; Lüdecke, Tina; Methner, Katharina; Wacker, Ulrike

    2016-01-01

It is well known that a subtle nonlinearity can occur during clumped isotope analysis of CO2 that, if left unaddressed, limits accuracy. The nonlinearity is induced by a negative background on the m/z 47 Faraday cup, whose magnitude is correlated with the intensity of the m/z 44 ion beam. The origin of the negative background remains unclear, but it is possibly due to secondary electrons. Usually, CO2 gases of distinct bulk isotopic compositions are equilibrated at 1000 °C and measured along with the samples in order to correct for this effect. Alternatively, measured m/z 47 beam intensities can be corrected for the contribution of secondary electrons after monitoring how the negative background on m/z 47 evolves with the intensity of the m/z 44 ion beam. The latter correction procedure seems to work well if the m/z 44 cup has a wider slit width than the m/z 47 cup. Here we show that the negative m/z 47 background affects the precision of dual inlet-based clumped isotope measurements of CO2 unless raw m/z 47 intensities are directly corrected for the contribution of secondary electrons. Moreover, inaccurate results can be obtained even if the heated gas approach is used to correct for the observed nonlinearity. The impact of the negative background on accuracy and precision arises from small imbalances in m/z 44 ion beam intensities between reference and sample CO2 measurements. It becomes more significant as the relative contribution of secondary electrons to the m/z 47 signal grows and as the flux rate of CO2 into the ion source is raised. These problems can be overcome by correcting the measured m/z 47 ion beam intensities of sample and reference gas for the contributions deriving from secondary electrons after scaling these contributions to the intensities of the corresponding m/z 49 ion beams. Accuracy and precision of this correction are demonstrated by clumped isotope analysis of three internal carbonate standards. The

  16. Constructing a precise and robust chronology for the varved sediment record of Lake Czechowskie (Poland)

    NASA Astrophysics Data System (ADS)

    Ott, Florian; Brauer, Achim; Słowiński, Michał; Wulf, Sabine; Putyrskaya, Victoria; Blaszkiewicz, Miroslaw

    2014-05-01

Annually laminated (varved) sediment records are essential for detailed investigations of past climate and environmental changes, as they function as a natural memory extending far beyond instrumental datasets. However, reliable reconstructions of past changes need a robust chronology. In order to determine Holocene inter-annual and decadal-scale variability and to establish a precise time scale, we investigated varved sediments of Lake Czechowskie (53°52' N/ 18°14' E, 108 m a.s.l.), northern Poland. During two coring campaigns in 2009 and 2012 we recovered several long and short cores, with the longest core reaching 14.25 m. Here we present a multiple dating approach for the Lake Czechowskie sediments. The chronology comprises varve counting for the Holocene time period and AMS 14C dating (19 plant macro remains and two bulk samples) for the entire sediment record reaching back to 14.0 cal ka BP. Varve counting between 14C dated samples and Bayesian age modeling helped to identify and omit samples that were either too old, owing to redeposition, or too young, owing to too low carbon contents. The good agreement between the varve chronology and the modeled age based on radiocarbon dates demonstrates robust age control for the sediment profile. Additionally, independent chronological anchor points derived from (i) 137Cs activity concentration measurements for the last ca. 50 years and (ii) newly detected tephra layers of the Askja AD 1875 eruption and the Laacher See Tephra (12880 varve yrs BP) are used as precisely dated isochrones. These volcanic ash layers can further be used as tie points to synchronize and correlate different lake records and to investigate local and regional differences in response to climatic and environmental changes over a wider geographic region on a common age scale. This study is a contribution to the Virtual Institute of Integrated Climate and Landscape Evolution Analysis -ICLEA- of the Helmholtz Association and the Helmholtz Association climate initiative REKLIM topic 8 "Rapid

  17. Precision and accuracy testing of FMCW ladar-based length metrology.

    PubMed

    Mateo, Ana Baselga; Barber, Zeb W

    2015-07-01

The calibration and traceability of high-resolution frequency-modulated continuous wave (FMCW) ladar sources is a requirement for their use in length and volume metrology. We report the calibration of FMCW ladar length measurement systems by use of spectroscopy of the molecular frequency references HCN (C-band) or CO (L-band) to calibrate the chirp rate of the FMCW sources. Propagating the stated uncertainties from the molecular calibrations provided by NIST together with the measurement errors provides an estimated uncertainty of a few ppm for the FMCW system. As a test of this calibration, a displacement measurement interferometer with a laser wavelength close to that of our FMCW system was built to compare relative precision and accuracy. The comparisons performed show <10 ppm agreement, which is within the combined estimated uncertainties of the FMCW system and the interferometer.

  18. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    SciTech Connect

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; Gates, S. D.; Knight, K. B.; Hutcheon, I. D.

    2015-09-01

    Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  19. Estimated results analysis and application of the precise point positioning based high-accuracy ionosphere delay

    NASA Astrophysics Data System (ADS)

    Wang, Shi-tai; Peng, Jun-huan

    2015-12-01

The characterization of ionosphere delay estimated with precise point positioning is analyzed in this paper. The estimation, interpolation and application of the ionosphere delay are studied based on the processing of 24 h of data from 5 observation stations. The results show that the estimated ionosphere delay is affected by the receiver hardware delay bias, so that there is a difference between the estimated and interpolated results. The results also show that although the RMSs (root mean squares) are larger, the STDs (standard deviations) are better than 0.11 m. When between-satellite differencing is used, the hardware delay bias is canceled, and the interpolated satellite-differenced ionosphere delay is better than 0.11 m. Although there is a difference between the estimated and interpolated ionosphere delay results, it does not affect their application in single-frequency positioning, and the positioning accuracy can reach the cm level.
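    The cancellation of the receiver hardware delay by between-satellite differencing can be illustrated with a toy model in which every estimated slant delay carries the same receiver bias; all numbers below are made up for illustration:

```python
def between_satellite_differences(delays, ref_index=0):
    """Difference delays against a reference satellite; any bias common
    to all observations (e.g. receiver hardware delay) cancels."""
    ref = delays[ref_index]
    return [d - ref for i, d in enumerate(delays) if i != ref_index]


# Hypothetical true ionosphere delays (m) plus a common receiver bias.
true_delays = [2.1, 3.4, 1.7]
receiver_bias = 0.8
estimated = [d + receiver_bias for d in true_delays]
# Differencing the biased estimates reproduces the bias-free differences.
```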

  20. Improved precision and accuracy in quantifying plutonium isotope ratios by RIMS

    DOE PAGES

    Isselhardt, B. H.; Savina, M. R.; Kucher, A.; ...

    2015-09-01

Resonance ionization mass spectrometry (RIMS) holds the promise of rapid, isobar-free quantification of actinide isotope ratios in as-received materials (i.e. not chemically purified). Recent progress in achieving this potential using two Pu test materials is presented. RIMS measurements were conducted multiple times over a period of two months on two different Pu solutions deposited on metal surfaces. Measurements were bracketed with a Pu isotopic standard, and yielded absolute accuracies of the measured 240Pu/239Pu ratios of 0.7% and 0.58%, with precisions (95% confidence intervals) of 1.49% and 0.91%. In conclusion, the minor isotope 238Pu was also quantified despite the presence of a significant quantity of 238U in the samples.

  1. Automated optogenetic feedback control for precise and robust regulation of gene expression and cell growth

    PubMed Central

    Milias-Argeitis, Andreas; Rullan, Marc; Aoki, Stephanie K.; Buchmann, Peter; Khammash, Mustafa

    2016-01-01

    Dynamic control of gene expression can have far-reaching implications for biotechnological applications and biological discovery. Thanks to the advantages of light, optogenetics has emerged as an ideal technology for this task. Current state-of-the-art methods for optical expression control fail to combine precision with repeatability and cannot withstand changing operating culture conditions. Here, we present a novel fully automatic experimental platform for the robust and precise long-term optogenetic regulation of protein production in liquid Escherichia coli cultures. Using a computer-controlled light-responsive two-component system, we accurately track prescribed dynamic green fluorescent protein expression profiles through the application of feedback control, and show that the system adapts to global perturbations such as nutrient and temperature changes. We demonstrate the efficacy and potential utility of our approach by placing a key metabolic enzyme under optogenetic control, thus enabling dynamic regulation of the culture growth rate with potential applications in bacterial physiology studies and biotechnology. PMID:27562138

  2. Accuracy and precision of estimating age of gray wolves by tooth wear

    USGS Publications Warehouse

    Gipson, P.S.; Ballard, W.B.; Nowak, R.M.; Mech, L.D.

    2000-01-01

We evaluated the accuracy and precision of tooth wear for aging gray wolves (Canis lupus) from Alaska, Minnesota, and Ontario based on 47 known-age or known-minimum-age skulls. Estimates of age using tooth wear and a commercial cementum annuli-aging service were useful for wolves up to 14 years old. The precision of estimates from cementum annuli was greater than that of estimates from tooth wear, but tooth wear estimates are more applicable in the field. We tended to overestimate age by 1-2 years and occasionally by 3 or 4 years. The commercial service aged young wolves with cementum annuli to within ±1 year of actual age, but underestimated ages of wolves ≥9 years old by 1-3 years. No differences were detected in tooth wear patterns for wild wolves from Alaska, Minnesota, and Ontario, nor between captive and wild wolves. Tooth wear was not appropriate for aging wolves with an underbite that prevented normal wear or with severely broken and missing teeth.

  3. A benchmark test of accuracy and precision in estimating dynamical systems characteristics from a time series.

    PubMed

    Rispens, S M; Pijnappels, M; van Dieën, J H; van Schooten, K S; Beek, P J; Daffertshofer, A

    2014-01-22

Characteristics of dynamical systems are often estimated to describe physiological processes. For instance, Lyapunov exponents have been determined to assess the stability of the cardiovascular system, respiration, and, more recently, human gait and posture. However, the systematic evaluation of the accuracy and precision of these estimates is problematic because the proper values of the characteristics are typically unknown. We fill this void with a set of standardized time series with well-defined dynamical characteristics that serve as a benchmark. Estimates ought to match these characteristics, at least to good approximation. We outline a procedure to employ this generic benchmark test and illustrate its capacity by examining methods for estimating the maximum Lyapunov exponent. In particular, we discuss algorithms by Wolf and co-workers and by Rosenstein and co-workers and evaluate their performance as a function of signal length and signal-to-noise ratio. In all scenarios, the precision of Rosenstein's algorithm was found to be equal to or greater than that of Wolf's algorithm. The latter, however, appeared more accurate if reasonably large signal lengths are available and noise levels are sufficiently low. Due to its modularity, the presented benchmark test can be used to evaluate and tune any estimation method to perform optimally for arbitrary experimental data.

  4. Increasing accuracy and precision of digital image correlation through pattern optimization

    NASA Astrophysics Data System (ADS)

    Bomarito, G. F.; Hochhalter, J. D.; Ruggles, T. J.; Cannon, A. H.

    2017-04-01

    The accuracy and precision of digital image correlation (DIC) is based on three primary components: image acquisition, image analysis, and the subject of the image. Focus on the third component, the image subject, has been relatively limited and primarily concerned with comparing pseudo-random surface patterns. In the current work, a strategy is proposed for the creation of optimal DIC patterns. In this strategy, a pattern quality metric is developed as a combination of quality metrics from the literature rather than optimization based on any single one of them. In this way, optimization produces a pattern which balances the benefits of multiple quality metrics. Specifically, sum of square of subset intensity gradients (SSSIG) was found to be the metric most strongly correlated to DIC accuracy and thus is the main component of the newly proposed pattern quality metric. A term related to the secondary auto-correlation peak height is also part of the proposed quality metric which effectively acts as a constraint upon SSSIG ensuring that a regular (e.g., checkerboard-type) pattern is not achieved. The combined pattern quality metric is used to generate a pattern that was on average 11.6% more accurate than a randomly generated pattern in a suite of numerical experiments. Furthermore, physical experiments were performed which confirm that there is indeed improvement of a similar magnitude in DIC measurements for the optimized pattern compared to a random pattern.
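    The SSSIG metric named above can be evaluated directly on a subset of grey-level intensities. A minimal pure-Python sketch using central differences with borders excluded (actual DIC implementations typically use interpolated subpixel gradients):

```python
def sssig(subset):
    """Sum of square of subset intensity gradients (SSSIG).

    subset: 2-D list of grey levels. Higher values indicate richer
    gradient content and, per the paper, better expected DIC accuracy.
    """
    h, w = len(subset), len(subset[0])
    total = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (subset[i][j + 1] - subset[i][j - 1]) / 2.0  # x gradient
            gy = (subset[i + 1][j] - subset[i - 1][j]) / 2.0  # y gradient
            total += gx * gx + gy * gy
    return total
```

    A featureless subset scores zero, while any patterned subset scores higher; this is why SSSIG alone would reward a regular checkerboard and needs the auto-correlation constraint described above.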

  5. Accuracy and precision of hind limb foot contact timings of horses determined using a pelvis-mounted inertial measurement unit.

    PubMed

    Starke, Sandra D; Witte, Thomas H; May, Stephen A; Pfau, Thilo

    2012-05-11

Gait analysis using small sensor units is becoming increasingly popular in the clinical context. In order to segment continuous movement from a defined point of the stride cycle, knowledge of footfall timings is essential. We evaluated the accuracy and precision of foot contact timings of a defined limb determined using an inertial sensor mounted on the pelvis of ten horses during walk and trot at different speeds and in different directions. Foot contact was estimated from vertical velocity events occurring before maximum sensor roll towards the contralateral limb. Foot contact timings matched data from a synchronised hoof-mounted accelerometer well when the velocity minimum was used for walk (mean (SD) difference of 15 (18) ms across horses) and the velocity zero-crossing for trot (mean (SD) difference from -4 (14) to 12 (7) ms depending on the condition). The stride segmentation method also remained robust when applied to movement data of hind limb lame horses. In the future, this method may find application in segmenting overground sensor data of various species.
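    The two event definitions used in the study (velocity minimum at walk, zero-crossing at trot) are straightforward to extract from a sampled vertical-velocity trace. A hypothetical sketch, not the authors' implementation:

```python
def foot_contact_indices(v, mode="zero"):
    """Candidate foot-contact samples from a vertical velocity trace v.

    mode="zero": negative-to-positive zero-crossings (used at trot);
    mode="min": local minima (used at walk).
    """
    events = []
    for i in range(1, len(v) - 1):
        if mode == "zero" and v[i - 1] < 0 <= v[i]:
            events.append(i)
        elif mode == "min" and v[i - 1] > v[i] < v[i + 1]:
            events.append(i)
    return events
```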

  6. Gaining Precision and Accuracy on Microprobe Trace Element Analysis with the Multipoint Background Method

    NASA Astrophysics Data System (ADS)

    Allaz, J. M.; Williams, M. L.; Jercinovic, M. J.; Donovan, J. J.

    2014-12-01

Electron microprobe trace element analysis is a significant challenge, but can provide critical data when high spatial resolution is required. Because of the low peak intensity, the accuracy and precision of such analyses rely critically on background measurements and on the accuracy of any pertinent peak interference corrections. A linear regression between two points selected at appropriate off-peak positions is the classical approach for background characterization in microprobe analysis. However, this approach disallows an accurate assessment of background curvature (usually exponential). Moreover, if present, background interferences can dramatically affect the results if underestimated or ignored. The acquisition of a quantitative WDS scan over the spectral region of interest is still a valuable option for determining the background intensity and curvature from a fitted regression of background portions of the scan, but this technique retains an element of subjectivity, as the analyst has to select areas of the scan that appear to represent background. We present here a new method, "Multi-Point Background" (MPB), that allows acquiring up to 24 off-peak background measurements from wavelength positions around the peaks. This method aims to improve the accuracy, precision, and objectivity of trace element analysis. Overall efficiency is improved because no systematic WDS scan needs to be acquired in order to check for the presence of possible background interferences. Moreover, the method is less subjective because "true" backgrounds are selected by the statistical exclusion of erroneous background measurements, reducing the need for analyst intervention. This idea originated from efforts to refine EPMA monazite U-Th-Pb dating, where it was recognised that background errors (peak interference or background curvature) could result in errors of several tens of millions of years in the calculated age. Results obtained on a CAMECA SX-100 "UltraChron" using monazite
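    The statistical exclusion of erroneous background measurements can be sketched as an iterative fit: model the background as exponential in spectrometer position, regress in log space, and drop points whose residuals exceed a sigma threshold (e.g., points sitting on an unrecognized interference). This is an illustrative reconstruction, not the MPB implementation:

```python
import math


def _linfit(xs, ys):
    """Ordinary least-squares line fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope


def fit_multipoint_background(pos, counts, n_sigma=2.0, rounds=3):
    """Fit I(x) = exp(a + b*x) to off-peak measurements, excluding
    points whose log-space residual exceeds n_sigma standard deviations."""
    keep = list(range(len(pos)))
    for _ in range(rounds):
        xs = [pos[i] for i in keep]
        ys = [math.log(counts[i]) for i in keep]
        a, b = _linfit(xs, ys)
        res = [ys[k] - (a + b * xs[k]) for k in range(len(xs))]
        sd = math.sqrt(sum(r * r for r in res) / max(len(res) - 2, 1))
        if sd < 1e-12:  # fit already clean; nothing left to exclude
            break
        keep = [keep[k] for k in range(len(keep)) if abs(res[k]) <= n_sigma * sd]
    xs = [pos[i] for i in keep]
    ys = [math.log(counts[i]) for i in keep]
    return _linfit(xs, ys)
```

    With a single interfered point sitting five times above an otherwise exponential background, the outlier is rejected and the clean background parameters are recovered exactly.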

  7. Impact of survey workflow on precision and accuracy of terrestrial LiDAR datasets

    NASA Astrophysics Data System (ADS)

    Gold, P. O.; Cowgill, E.; Kreylos, O.

    2009-12-01

    Ground-based LiDAR (Light Detection and Ranging) survey techniques are enabling remote visualization and quantitative analysis of geologic features at unprecedented levels of detail. For example, digital terrain models computed from LiDAR data have been used to measure displaced landforms along active faults and to quantify fault-surface roughness. But how accurately do terrestrial LiDAR data represent the true ground surface, and in particular, how internally consistent and precise are the mosaiced LiDAR datasets from which surface models are constructed? Addressing this question is essential for designing survey workflows that capture the necessary level of accuracy for a given project while minimizing survey time and equipment, which is essential for effective surveying of remote sites. To address this problem, we seek to define a metric that quantifies how scan registration error changes as a function of survey workflow. Specifically, we are using a Trimble GX3D laser scanner to conduct a series of experimental surveys to quantify how common variables in field workflows impact the precision of scan registration. Primary variables we are testing include 1) use of an independently measured network of control points to locate scanner and target positions, 2) the number of known-point locations used to place the scanner and point clouds in 3-D space, 3) the type of target used to measure distances between the scanner and the known points, and 4) setting up the scanner over a known point as opposed to resectioning of known points. Precision of the registered point cloud is quantified using Trimble Realworks software by automatic calculation of registration errors (errors between locations of the same known points in different scans). Accuracy of the registered cloud (i.e., its ground-truth) will be measured in subsequent experiments. 
To obtain an independent measure of scan-registration errors and to better visualize the effects of these errors on a registered point

  8. Robust Statistical Label Fusion through Consensus Level, Labeler Accuracy and Truth Estimation (COLLATE)

    PubMed Central

    Asman, Andrew J.; Landman, Bennett A.

    2011-01-01

    Segmentation and delineation of structures of interest in medical images is paramount to quantifying and characterizing structural, morphological, and functional correlations with clinically relevant conditions. The established gold standard for performing segmentation has been manual voxel-by-voxel labeling by a neuroanatomist expert. This process can be extremely time consuming, resource intensive and fraught with high inter-observer variability. Hence, studies involving characterizations of novel structures or appearances have been limited in scope (numbers of subjects), scale (extent of regions assessed), and statistical power. Statistical methods to fuse data sets from several different sources (e.g., multiple human observers) have been proposed to simultaneously estimate both rater performance and the ground truth labels. However, with empirical datasets, statistical fusion has been observed to result in visually inconsistent findings. So, despite the ease and elegance of a statistical approach, single observers and/or direct voting are often used in practice. Hence, rater performance is not systematically quantified and exploited during label estimation. To date, statistical fusion methods have relied on characterizations of rater performance that do not intrinsically include spatially varying models of rater performance. Herein, we present a novel, robust statistical label fusion algorithm to estimate and account for spatially varying performance. This algorithm, COnsensus Level, Labeler Accuracy and Truth Estimation (COLLATE), is based on the simple idea that some regions of an image are difficult to label (e.g., confusion regions: boundaries or low contrast areas) while other regions are intrinsically obvious (e.g., consensus regions: centers of large regions or high contrast edges). Unlike its predecessors, COLLATE estimates the consensus level of each voxel and estimates differing models of observer behavior in each region. 
We show that COLLATE provides

  9. On accuracy, robustness, and security of bag-of-word search systems

    NASA Astrophysics Data System (ADS)

    Voloshynovskiy, Svyatoslav; Diephuis, Maurits; Kostadinov, Dimche; Farhadzadeh, Farzad; Holotyak, Taras

    2014-02-01

In this paper, we present a statistical framework for the analysis of the performance of Bag-of-Words (BOW) systems. The paper aims at establishing a better understanding of the impact of different elements of BOW systems such as the robustness of descriptors, accuracy of assignment, descriptor compression and pooling, and finally decision making. We also study the impact of geometrical information on BOW system performance and compare the results with different pooling strategies. The proposed framework can also be of interest for a security and privacy analysis of BOW systems. The experimental results on real images and descriptors confirm our theoretical findings. Notation: We use capital letters X to denote scalar random variables and X to denote vector random variables, and corresponding small letters x and x to denote the realisations of scalar and vector random variables, respectively. We use X ∼ pX(x), or simply X ∼ p(x), to indicate that a random variable X is distributed according to pX(x). N(μ, σ²X) stands for the Gaussian distribution with mean μ and variance σ²X. B(L, Pb) denotes the binomial distribution with sequence length L and probability of success Pb. ||.|| denotes the Euclidean vector norm, Q(.) stands for the Q-function, D(.||.) denotes the divergence, and E{.} denotes the expectation.

  10. Accuracy and Robustness Improvements of Echocardiographic Particle Image Velocimetry for Routine Clinical Cardiac Evaluation

    NASA Astrophysics Data System (ADS)

    Meyers, Brett; Vlachos, Pavlos; Charonko, John; Giarra, Matthew; Goergen, Craig

    2015-11-01

Echo Particle Image Velocimetry (echoPIV) is a recent development in flow visualization that provides improved spatial resolution with high temporal resolution in cardiac flow measurement. Despite increased interest, only a limited number of published echoPIV studies are clinical, demonstrating that the method is not yet broadly accepted within the medical community. This is because contrast agents are typically reserved for subjects whose initial evaluation produced very low-quality recordings. Thus, high background noise and low contrast levels characterize most scans, which hinders echoPIV from producing accurate measurements. To achieve clinical acceptance it is necessary to develop processing strategies that improve accuracy and robustness. We hypothesize that using a short-time moving window ensemble (MWE) correlation can improve echoPIV flow measurements on low-image-quality clinical scans. To explore the potential of the short-time MWE correlation, evaluation of artificial ultrasound images was performed. Subsequently, a clinical cohort of patients with diastolic dysfunction was evaluated. Qualitative and quantitative comparisons between echoPIV measurements and Color M-mode scans were carried out to assess the improvements delivered by the proposed methodology.

  11. Cumulative incidence of childhood autism: a total population study of better accuracy and precision.

    PubMed

    Honda, Hideo; Shimizu, Yasuo; Imai, Miho; Nitto, Yukari

    2005-01-01

    Most studies on the frequency of autism have had methodological problems. Most notable of these have been differences in diagnostic criteria between studies, degree of cases overlooked by the initial screening, and type of measurement. This study aimed to replicate the first report on childhood autism to address cumulative incidence as well as prevalence, as defined in the International Statistical Classification of Diseases and Related Health Problems, 10th revision (ICD-10) Diagnostic Criteria for Research. Here, the same methodological accuracy (exactness of a measurement to the true value) as the first study was used, but population size was four times larger to achieve greater precision (reduction of random error). A community-oriented system of early detection and early intervention for developmental disorders was established in the northern part of Yokohama, Japan. The city's routine health checkup for 18-month-old children served as the initial mass screening, and all facilities that provided child care services aimed to detect all cases of childhood autism and refer them to the Yokohama Rehabilitation Center. Cumulative incidence up to age 5 years was calculated for childhood autism among a birth cohort from four successive years (1988 to 1991). Cumulative incidence of childhood autism was 27.2 per 10000. Cumulative incidences by sex were 38.4 per 10000 in males, and 15.5 per 10000 in females. The male:female ratio was 2.5:1. The proportions of children with high-functioning autism who had Binet IQs of 70 and over and those with Binet IQs of 85 and over were 25.3% and 13.7% respectively. Data on cumulative incidence of childhood autism derived from this study are the first to be drawn from an accurate, as well as precise, screening methodology.
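    The headline figures follow from simple proportions: cumulative incidence is detected cases divided by the birth cohort, scaled per 10,000, and the precision gain from a four-fold larger population is the shrinking of the standard error. A sketch with a normal-approximation 95% confidence interval (the case and cohort numbers below are hypothetical, not those of the study):

```python
import math


def cumulative_incidence_per_10000(cases, cohort):
    """Point estimate and normal-approximation 95% CI, per 10,000."""
    p = cases / cohort
    se = math.sqrt(p * (1.0 - p) / cohort)
    return 10000.0 * p, 10000.0 * (p - 1.96 * se), 10000.0 * (p + 1.96 * se)


# Hypothetical: 136 cases in a cohort of 50,000 gives 27.2 per 10,000.
rate, lo, hi = cumulative_incidence_per_10000(136, 50000)
```

    Quadrupling both cases and cohort leaves the rate unchanged but halves the width of the interval, which is the precision argument made in the abstract.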

  12. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

Quantitative volume flow rate measurements using magnetic resonance imaging are studied in this dissertation because volume flow rates are of special interest for assessing the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow-encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experimental and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results. The
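    The phase-to-velocity relation described above can be written as a direct sum over lumen pixels. A minimal sketch, assuming phase maps linearly to velocity with ±π corresponding to ±venc (the encoding velocity); the function and parameter names are illustrative, not from the dissertation:

```python
import math


def volume_flow_rate(phase_map, lumen_mask, venc_cm_s, pixel_area_cm2):
    """Volume flow rate (mL/s) from a phase-contrast image.

    Assumes the linear phase-velocity relation v = venc * phi / pi and
    integrates (sums) velocity over the pixels flagged as lumen.
    """
    q = 0.0
    for phase_row, mask_row in zip(phase_map, lumen_mask):
        for phi, inside in zip(phase_row, mask_row):
            if inside:
                q += venc_cm_s * (phi / math.pi) * pixel_area_cm2
    return q
```

    The partial volume effect discussed above enters exactly here: pixels straddling the vessel wall mix lumen and stationary-tissue signal, biasing the summed velocities.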

  13. Analysis of Current Position Determination Accuracy in Natural Resources Canada Precise Point Positioning Service

    NASA Astrophysics Data System (ADS)

    Krzan, Grzegorz; Dawidowicz, Karol; Świątek, Krzysztof

    2013-09-01

    Precise Point Positioning (PPP) is a technique used to determine high-precision positions with a single GNSS receiver. Unlike DGPS or RTK, satellite observations processed with the PPP technique are not differenced, and therefore external parameter models, such as satellite clock and orbit corrections, are required in data processing. Apart from explaining the theory of the PPP technique, this paper describes the available web-based online services used in the post-processing of observation results. As the experimental part, the results obtained by post-processing satellite observations at three points with different environmental conditions, using the CSRS-PPP service, are presented. The study examines the effect of the duration of the measurement session on the results and compares results obtained by processing GPS-only observations with combined observations from GPS and GLONASS. It also presents an analysis of position determination accuracy using one and two measurement frequencies.

  14. Precision and accuracy of regional radioactivity quantitation using the maximum likelihood EM reconstruction algorithm

    SciTech Connect

    Carson, R.E.; Yan, Y.; Chodkowski, B.; Yap, T.K.; Daube-Witherspoon, M.E. )

    1994-09-01

    The imaging characteristics of maximum likelihood (ML) reconstruction using the EM algorithm for emission tomography have been extensively evaluated. There has been less study of the precision and accuracy of ML estimates of regional radioactivity concentration. The authors developed a realistic brain slice simulation by segmenting a normal subject's MRI scan into gray matter, white matter, and CSF and produced PET sinogram data with a model that included detector resolution and efficiencies, attenuation, scatter, and randoms. Noisy realizations at different count levels were created, and ML and filtered backprojection (FBP) reconstructions were performed. The bias and variability of ROI values were determined. In addition, the effects of ML pixel size, image smoothing and region size reduction were assessed. ML estimates at 1,000 iterations (0.6 sec per iteration on a parallel computer) for 1-cm² gray matter ROIs showed negative biases of 6% ± 2%, which can be reduced to 0% ± 3% by removing the outer 1-mm rim of each ROI. FBP applied to the full-size ROIs had 15% ± 4% negative bias with 50% less noise than ML. Shrinking the FBP regions provided partial bias compensation with noise increases to levels similar to ML. Smoothing of ML images produced biases comparable to FBP with slightly less noise. Because of its heavy computational requirements, the ML algorithm will be most useful for applications in which achieving minimum bias is important.
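    The ML-EM reconstruction evaluated above iterates a multiplicative update: forward-project the current image estimate, compare with the measured counts, and back-project the ratio. A toy sketch of that update follows; this is not the authors' code, and the 2-pixel system matrix and activities are invented for illustration:

```python
import numpy as np

def mlem(A, y, n_iter=1000):
    """Maximum-likelihood EM reconstruction for emission tomography.

    A : (n_bins, n_pixels) system matrix of detection probabilities
    y : (n_bins,) measured sinogram counts
    """
    x = np.ones(A.shape[1])              # uniform initial estimate
    sens = A.sum(axis=0)                 # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / sens        # multiplicative EM update
    return x

# Toy 2-pixel, 3-bin example with known activity and noise-free data
A = np.array([[0.7, 0.1],
              [0.2, 0.6],
              [0.1, 0.3]])
x_true = np.array([4.0, 2.0])
y = A @ x_true
x_hat = mlem(A, y)
```

With noise-free, consistent data the iteration converges to the true activities; with noisy counts it exhibits the bias-variance behavior studied in the abstract.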

  15. 13 Years of TOPEX/POSEIDON Precision Orbit Determination and the 10-fold Improvement in Expected Orbit Accuracy

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Zelensky, N. P.; Luthcke, S. B.; Rowlands, D. D.; Beckley, B. D.; Klosko, S. M.

    2006-01-01

    Launched in the summer of 1992, TOPEX/POSEIDON (T/P) was a joint mission between NASA and the Centre National d'Etudes Spatiales (CNES), the French Space Agency, to make precise radar altimeter measurements of the ocean surface. After 13 remarkably successful years of mapping the ocean surface, T/P lost its ability to maneuver and was decommissioned in January 2006. T/P revolutionized the study of the Earth's oceans by vastly exceeding pre-launch estimates of the surface height accuracy recoverable from radar altimeter measurements. The precision orbit lies at the heart of the altimeter measurement, providing the reference frame from which the radar altimeter measurements are made. The expected quality of orbit knowledge had limited the measurement accuracy expectations of past altimeter missions, and it still remains a major component in the error budget of all altimeter missions. This paper describes critical improvements made to the T/P orbit time series over the 13 years of precise orbit determination (POD) provided by the GSFC Space Geodesy Laboratory. The POD improvements from the pre-launch T/P expectation of radial orbit accuracy and mission requirement of 13 cm to an expected accuracy of about 1.5 cm with today's latest orbits are discussed. The latest orbits, with 1.5-cm RMS radial accuracy, represent a significant improvement over the 2.0-cm accuracy orbits currently available on the T/P Geophysical Data Record (GDR) altimeter product.

  16. Accuracy and precision of the three-dimensional assessment of the facial surface using a 3-D laser scanner.

    PubMed

    Kovacs, L; Zimmermann, A; Brockmann, G; Baurecht, H; Schwenzer-Zimmerer, K; Papadopulos, N A; Papadopoulos, M A; Sader, R; Biemer, E; Zeilhofer, H F

    2006-06-01

    Three-dimensional (3-D) recording of the surface of the human body or anatomical areas has gained importance in many medical specialties. Thus, it is important to determine scanner precision and accuracy in defined medical applications and to establish standards for the recording procedure. Here we evaluated the precision and accuracy of 3-D assessment of the facial area with the Minolta Vivid 910 3D Laser Scanner. We also investigated the influence of factors related to the recording procedure and the processing of scanner data on the final results. These factors include lighting, alignment of scanner and object, the examiner, and the software used to convert measurements into virtual images. To assess scanner accuracy, we compared scanner data to those obtained by manual measurements on a dummy. Fewer than 7% of all results with the scanner method fell outside an error range of 2 mm when compared to the corresponding reference measurements. Accuracy thus proved good enough to satisfy the requirements of numerous clinical applications. Moreover, the experiments completed with the dummy yielded valuable information for optimizing the recording parameters for best results. Thus, under defined conditions, the precision and accuracy of surface models of the human face recorded with the Minolta Vivid 910 3D Scanner can presumably be enhanced as well. Future studies will involve verification of our findings with human subjects. The current findings indicate that the Minolta Vivid 910 3D Scanner might be used beneficially in medicine for recording the 3-D surface structures of the face.

  17. Robustness

    NASA Technical Reports Server (NTRS)

    Ryan, R.

    1993-01-01

    Robustness is a buzzword common to all newly proposed space system designs as well as many new commercial products. The image the word conjures up is that of 'Paul Bunyan' (a lumberjack design): strong and hearty, healthy, with margins in all aspects of the design. In actuality, robustness is much broader in scope than margins, including such factors as simplicity, redundancy, desensitization to parameter variations, control of parameter variations (environment fluctuations), and operational approaches. These must be traded with concepts, materials, and fabrication approaches against the criteria of performance, cost, and reliability. This includes manufacturing, assembly, processing, checkout, and operations. The design engineer or project chief is faced with finding ways and means to inculcate robustness into an operational design; first, however, he must be sure he understands the definition and goals of robustness. This paper deals with these issues as well as the need for a requirement for robustness.

  18. Compensation of Environment and Motion Error for Accuracy Improvement of Ultra-Precision Lathe

    NASA Astrophysics Data System (ADS)

    Kwac, Lee-Ku; Kim, Jae-Yeol; Kim, Hong-Gun

    Technological manipulation of a piezo-electric actuator can compensate for errors in machining precision during the machining process, leading to an overall enhancement of precision. This manipulation is a convenient way to advance precision for nations without solid knowledge of ultra-precision machining technology. Two lines of research were conducted to develop the UPCU for precision enhancement of the current lathe and compensation of environmental errors, as follows. The first was designed to measure, and correct in real time, deviations in a variety of areas, achieving a compensation system through an optical fiber laser encoder more effective than the encoder resolution currently used in the existing lathe. The deviations corrected in real time comprised the surrounding air temperature, the thermal deviations of the machining materials, the thermal deviations in the spindles, and the overall thermal deviation caused by the machine structure. The second was to develop the UPCU and to improve the machining precision through ultra-precision positioning and real-time operative error compensation. The ultimate goal was to improve the machining precision of the existing lathe by completing the two research tasks described above.

  19. Accuracy and precision of total mixed rations fed on commercial dairy farms.

    PubMed

    Sova, A D; LeBlanc, S J; McBride, B W; DeVries, T J

    2014-01-01

    Despite the significant time and effort spent formulating total mixed rations (TMR), it is evident that the ration delivered by the producer and that consumed by the cow may not accurately reflect that originally formulated. The objectives of this study were to (1) determine how the TMR fed agrees with or differs from the TMR formulation (accuracy), (2) determine daily variability in physical and chemical characteristics of the TMR delivered (precision), and (3) investigate the relationship between daily variability in ration characteristics and group-average measures of productivity [dry matter intake (DMI), milk yield, milk components, efficiency, and feed sorting] on commercial dairy farms. Twenty-two commercial freestall herds were visited for 7 consecutive days in both summer and winter months. Fresh and refusal feed samples were collected daily to assess particle size distribution, dry matter, and chemical composition. Milk test data, including yield, fat, and protein, were collected from a coinciding Dairy Herd Improvement test. Multivariable mixed-effect regression models were used to analyze associations between productivity measures and daily ration variability, measured as the coefficient of variation (CV) over 7 d. The average TMR [crude protein = 16.5%, net energy for lactation (NEL) = 1.7 Mcal/kg, nonfiber carbohydrates = 41.3%, total digestible nutrients = 73.3%, neutral detergent fiber = 31.3%, acid detergent fiber = 20.5%, Ca = 0.92%, P = 0.42%, Mg = 0.35%, K = 1.45%, Na = 0.41%] delivered exceeded the TMR formulation for NEL (+0.05 Mcal/kg), nonfiber carbohydrates (+1.2%), acid detergent fiber (+0.7%), Ca (+0.08%), P (+0.02%), Mg (+0.02%), and K (+0.04%) and underfed crude protein (-0.4%), neutral detergent fiber (-0.6%), and Na (-0.1%). Dietary measures with high day-to-day CV were average feed refusal rate (CV = 74%), percent long particles (CV = 16%), percent medium particles (CV = 7.7%), percent short particles (CV = 6.1%), percent fine particles (CV = 13%), Ca (CV = 7
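    The day-to-day variability statistic used above, the coefficient of variation over 7 d, is simply the sample standard deviation divided by the mean, expressed as a percentage. A minimal sketch; the 7-day Ca record below is invented for illustration and is not the study's data:

```python
import numpy as np

def daily_cv(values):
    """Coefficient of variation (%) of a nutrient measured over consecutive days."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()  # sample SD over mean

# Hypothetical 7-day record of dietary Ca (% of dry matter) for one herd
ca_pct = [0.90, 0.95, 0.88, 0.99, 0.85, 0.93, 0.94]
cv = daily_cv(ca_pct)
```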

  20. Long-term accuracy and precision of PIXE and PIGE measurements for thin and thick sample analyses

    NASA Astrophysics Data System (ADS)

    Cohen, David D.; Siegele, Rainer; Orlic, Ivo; Stelcer, Ed

    2002-04-01

    This paper describes PIXE/PIGE measurements on thin Micromatter Standard (±5%) foils run over a period of 10 years. The selected foils were typically 50 μg/cm² thick and covered the commonly used PIXE X-ray energy range 1.4-20 keV and the light elements F and Na for PIGE studies. For the thousands of thick obsidian and pottery samples analysed over a 6-year period, the Ohio Red Clay standard has been used for both PIXE and PIGE calibration of a range of elements from Li to Rb. For PIXE, the long-term accuracy could be as low as ±1.6% for major elements, with precision ranging from ±5% to ±10% depending on the elemental concentration. For PIGE, accuracies were around ±5%, with precision ranging from ±5% in thick samples to ±15% in thin samples or for low-yield γ-ray production.

  1. Quantifying Vegetation Change in Semiarid Environments: Precision and Accuracy of Spectral Mixture Analysis and the Normalized Difference Vegetation Index

    NASA Technical Reports Server (NTRS)

    Elmore, Andrew J.; Mustard, John F.; Manning, Sara J.

    2000-01-01

    Because in situ techniques for determining vegetation abundance in semiarid regions are labor intensive, they usually are not feasible for regional analyses. Remotely sensed data provide the large spatial scale necessary, but their precision and accuracy in determining vegetation abundance and its change through time have not been quantitatively determined. In this paper, the precision and accuracy of two techniques, Spectral Mixture Analysis (SMA) and the Normalized Difference Vegetation Index (NDVI), applied to Landsat TM data are assessed quantitatively using high-precision in situ data. In Owens Valley, California, we have 6 years of continuous field data (1991-1996) for 33 sites, acquired concurrently with six cloudless Landsat TM images. The multitemporal remotely sensed data were coregistered to within 1 pixel, radiometrically intercalibrated using temporally invariant surface features, and geolocated to within 30 m. These procedures facilitated the accurate location of field-monitoring sites within the remotely sensed data. Formal uncertainties in the registration, radiometric alignment, and modeling were determined. Results show that SMA absolute percent live cover (%LC) estimates are accurate to within ±4.0%LC and estimates of change in live cover have a precision of ±3.8%LC. Furthermore, even when applied to areas of low vegetation cover, the SMA approach correctly determined the sense of change (i.e., positive or negative) in 87% of the samples. The SMA results are superior to NDVI, which, although correlated with live cover, is not a quantitative measure and showed the correct sense of change in only 67% of the samples.
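    The two techniques compared above can both be sketched in a few lines: SMA solves a linear mixing model for endmember fractions, while NDVI is a simple band ratio. The 4-band endmember spectra and cover fractions below are invented for illustration and are not the paper's data:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares spectral mixture analysis: solve pixel = E @ f for fractions f."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return f

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

# Hypothetical 4-band endmember spectra (columns: vegetation, soil)
E = np.array([[0.05, 0.20],
              [0.08, 0.25],
              [0.45, 0.30],
              [0.50, 0.35]])
f_true = np.array([0.3, 0.7])        # 30% live cover, 70% bare soil
pixel = E @ f_true                   # noise-free mixed pixel
fractions = unmix(pixel, E)
```

Unlike the NDVI ratio, the unmixed fraction is directly interpretable as percent cover, which is the distinction the abstract draws.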

  2. Analytical and numerical investigations on the accuracy and robustness of geometric features extracted from 3D point cloud data

    NASA Astrophysics Data System (ADS)

    Dittrich, André; Weinmann, Martin; Hinz, Stefan

    2017-04-01

    In photogrammetry, remote sensing, computer vision and robotics, a topic of major interest is the automatic analysis of 3D point cloud data. This task often relies on the use of geometric features, amongst which particularly those derived from the eigenvalues of the 3D structure tensor (e.g. the three dimensionality features of linearity, planarity and sphericity) have proven to be descriptive and are therefore commonly involved in classification tasks. Although these geometric features are now considered standard, very little attention has been paid to their accuracy and robustness. In this paper, we hence focus on the influence of discretization and noise on the most commonly used geometric features. More specifically, we investigate the accuracy and robustness of the eigenvalues of the 3D structure tensor and of the features derived from these eigenvalues. Thereby, we provide both analytical and numerical considerations which clearly reveal that certain features are more susceptible to discretization and noise, whereas others are more robust.
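    The dimensionality features named above are simple functions of the sorted eigenvalues λ1 ≥ λ2 ≥ λ3 of the local 3D structure tensor. A minimal sketch using the commonly cited definitions; the synthetic line-shaped neighbourhood is an invented test case, not data from the paper:

```python
import numpy as np

def dimensionality_features(points):
    """Linearity, planarity and sphericity from the eigenvalues of the
    3D structure tensor (covariance matrix) of a local neighbourhood."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                          # 3x3 structure tensor
    ev = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda1 >= lambda2 >= lambda3
    l1, l2, l3 = ev
    linearity = (l1 - l2) / l1
    planarity = (l2 - l3) / l1
    sphericity = l3 / l1
    return linearity, planarity, sphericity

# Noisy points along a line: the linearity feature should dominate
rng = np.random.default_rng(0)
t = rng.uniform(0, 10, 500)
line = np.c_[t,
             0.01 * rng.standard_normal(500),
             0.01 * rng.standard_normal(500)]
lin, pla, sph = dimensionality_features(line)
```

Repeating this with varying point density and noise levels is essentially the kind of numerical experiment the paper describes.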

  3. Interproton distance determinations by NOE--surprising accuracy and precision in a rigid organic molecule.

    PubMed

    Butts, Craig P; Jones, Catharine R; Towers, Emma C; Flynn, Jennifer L; Appleby, Lara; Barron, Nicholas J

    2011-01-07

    The accuracy inherent in the measurement of interproton distances in small molecules by nuclear Overhauser enhancement (NOE) and rotational Overhauser enhancement (ROE) methods is investigated with the rigid model compound strychnine. The results suggest that interproton distances can be established with a remarkable level of accuracy, within a few percent of their true values, using a straightforward data analysis method, provided that experiments are conducted under conditions that support the initial rate approximation. Approaches for dealing with deviations from these conditions, and other practical issues regarding these measurements, are also discussed.
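    Under the initial rate approximation mentioned above, NOE intensities scale as r⁻⁶, so an unknown distance follows from a known reference distance and the ratio of intensities. A minimal sketch; the reference distance and intensities below are invented, not values from the strychnine study:

```python
def noe_distance(r_ref, noe_ref, noe_unknown):
    """Interproton distance from relative NOE intensities under the
    initial rate approximation: NOE build-up scales as r**-6."""
    return r_ref * (noe_ref / noe_unknown) ** (1.0 / 6.0)

# Hypothetical: a reference pair at 2.45 A and an unknown pair whose
# NOE is 30% as intense as the reference
r = noe_distance(2.45, 1.00, 0.30)
```

The sixth-root dependence is what makes the method forgiving: even a 20% error in the intensity ratio perturbs the distance by only about 3%.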

  4. Precise Point Positioning for the Efficient and Robust Analysis of GPS Data from Large Networks

    NASA Technical Reports Server (NTRS)

    Zumberge, J. F.; Heflin, M. B.; Jefferson, D. C.; Watkins, M. M.; Webb, F. H.

    1997-01-01

    Networks of dozens to hundreds of permanently operating precision Global Positioning System (GPS) receivers are emerging at spatial scales that range from 10(exp 0) to 10(exp 3) km. To keep the computational burden associated with the analysis of such data economically feasible, one approach is to first determine precise GPS satellite positions and clock corrections from a globally distributed network of GPS receivers. Then, data from the local network are analyzed by estimating receiver-specific parameters with receiver-specific data; satellite parameters are held fixed at their values determined in the global solution. This "precise point positioning" allows analysis of data from hundreds to thousands of sites every day with 40-Mflop computers, with results comparable in quality to the simultaneous analysis of all data. The reference frames for the global and network solutions can be free of distortion imposed by erroneous fiducial constraints on any sites.

  5. Precise Point Positioning for the Efficient and Robust Analysis of GPS Data From Large Networks

    NASA Technical Reports Server (NTRS)

    Zumberge, J. F.; Heflin, M. B.; Jefferson, D. C.; Watkins, M. M.; Webb, F. H.

    1997-01-01

    Networks of dozens to hundreds of permanently operating precision Global Positioning System (GPS) receivers are emerging at spatial scales that range from 10(exp 0) to 10(exp 3) km. To keep the computational burden associated with the analysis of such data economically feasible, one approach is to first determine precise GPS satellite positions and clock corrections from a globally distributed network of GPS receivers. Then, data from the local network are analyzed by estimating receiver specific parameters with receiver-specific data; satellite parameters are held fixed at their values determined in the global solution. This "precise point positioning" allows analysis of data from hundreds to thousands of sites every day with 40 Mflop computers, with results comparable in quality to the simultaneous analysis of all data. The reference frames for the global and network solutions can be free of distortion imposed by erroneous fiducial constraints on any sites.

  6. Bloch-Siegert B1-Mapping Improves Accuracy and Precision of Longitudinal Relaxation Measurements in the Breast at 3 T.

    PubMed

    Whisenant, Jennifer G; Dortch, Richard D; Grissom, William; Kang, Hakmook; Arlinghaus, Lori R; Yankeelov, Thomas E

    2016-12-01

    Variable flip angle (VFA) sequences are a popular method of calculating T1 values, which are required in a quantitative analysis of dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI). B1 inhomogeneities are substantial in the breast at 3 T, and these errors negatively impact the accuracy of the VFA approach, thus leading to large errors in the DCE-MRI parameters that could limit clinical adoption of the technique. This study evaluated the ability of Bloch-Siegert B1 mapping to improve the accuracy and precision of VFA-derived T1 measurements in the breast. Test-retest MRI sessions were performed on 16 women with no history of breast disease. T1 was calculated using the VFA sequence, and B1 field variations were measured using the Bloch-Siegert methodology. As a gold standard, inversion recovery (IR) measurements of T1 were performed. Fibroglandular tissue and adipose tissue from each breast were segmented using the IR images, and the mean T1 was calculated for each tissue. Accuracy was evaluated by percent error (%err). Reproducibility was assessed via the 95% confidence interval (CI) of the mean difference and repeatability coefficient (r). After B1 correction, %err significantly (P < .001) decreased from 17% to 8.6%, and the 95% CI and r decreased from ±94 to ±38 milliseconds and from 276 to 111 milliseconds, respectively. Similar accuracy and reproducibility results were observed in the adipose tissue of the right breast and in both tissues of the left breast. Our data show that Bloch-Siegert B1 mapping improves accuracy and precision of VFA-derived T1 measurements in the breast.
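    The VFA approach evaluated above fits the SPGR signal equation, and a B1 map enters the fit simply by rescaling the nominal flip angles before the standard linearization. The following sketches that idea; it is not the authors' pipeline, and the T1, TR, flip angles and 10% B1 error are invented for illustration:

```python
import numpy as np

def vfa_t1(signals, flip_deg, tr_ms, b1=1.0):
    """T1 (ms) from variable-flip-angle SPGR data with a relative B1 factor.

    Nominal flip angles are scaled by b1 (1.0 = ideal transmit field), then
    T1 is obtained from the slope of S/sin(a) vs S/tan(a), which equals
    exp(-TR/T1) for the SPGR signal equation."""
    a = np.deg2rad(np.asarray(flip_deg)) * b1  # B1-corrected flip angles
    s = np.asarray(signals, dtype=float)
    slope = np.polyfit(s / np.tan(a), s / np.sin(a), 1)[0]
    return -tr_ms / np.log(slope)

# Simulate SPGR signals for T1 = 1400 ms, TR = 10 ms, with a 10% B1 error
t1_true, tr, b1_true = 1400.0, 10.0, 0.9
flips = np.array([2.0, 5.0, 10.0, 15.0])
a_actual = np.deg2rad(flips) * b1_true
E = np.exp(-tr / t1_true)
signals = np.sin(a_actual) * (1 - E) / (1 - E * np.cos(a_actual))

t1_uncorr = vfa_t1(signals, flips, tr, b1=1.0)    # ignores the B1 error
t1_corr = vfa_t1(signals, flips, tr, b1=b1_true)  # uses the B1 map
```

In this synthetic case the uncorrected fit underestimates T1 by roughly the square of the B1 error, mirroring the accuracy gain the study reports after Bloch-Siegert correction.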

  7. Meta-analysis of time perception and temporal processing in schizophrenia: Differential effects on precision and accuracy.

    PubMed

    Thoenes, Sven; Oberfeld, Daniel

    2017-03-29

    Numerous studies have reported that time perception and temporal processing are impaired in schizophrenia. In a meta-analytical review, we differentiate between time perception (judgments of time intervals) and basic temporal processing (e.g., judgments of temporal order), as well as between effects on accuracy (deviation of estimates from the veridical value) and precision (variability of judgments). In a meta-regression approach, we also included the specific tasks and the different time interval ranges as covariates. We considered 68 publications of the past 65 years, and meta-analyzed data from 957 patients with schizophrenia and 1060 healthy control participants. Independent of tasks and interval durations, our results demonstrate that time perception and basic temporal processing are less precise (more variable) in patients (Hedges' g > 1.00), whereas effects of schizophrenia on the accuracy of time perception are rather small and task-dependent. Our review also shows that several aspects, e.g., potential influences of medication, have not yet been investigated in sufficient detail. In conclusion, the results are in accordance with theoretical assumptions and the notion of a more variable internal clock in patients with schizophrenia, but not with a strong effect of schizophrenia on clock speed. The impairment of temporal precision, however, may also be clock-unspecific, as part of a general cognitive deficit in schizophrenia.

  8. A high-precision Jacob's staff with improved spatial accuracy and laser sighting capability

    NASA Astrophysics Data System (ADS)

    Patacci, Marco

    2016-04-01

    A new Jacob's staff design incorporating a 3D positioning stage and a laser sighting stage is described. The first combines a compass and a circular spirit level on a movable bracket and the second introduces a laser able to slide vertically and rotate on a plane parallel to bedding. The new design allows greater precision in stratigraphic thickness measurement while restricting the cost and maintaining speed of measurement to levels similar to those of a traditional Jacob's staff. Greater precision is achieved as a result of: a) improved 3D positioning of the rod through the use of the integrated compass and spirit level holder; b) more accurate sighting of geological surfaces by tracing with height adjustable rotatable laser; c) reduced error when shifting the trace of the log laterally (i.e. away from the dip direction) within the trace of the laser plane, and d) improved measurement of bedding dip and direction necessary to orientate the Jacob's staff, using the rotatable laser. The new laser holder design can also be used to verify parallelism of a geological surface with structural dip by creating a visual planar datum in the field and thus allowing determination of surfaces which cut the bedding at an angle (e.g., clinoforms, levees, erosion surfaces, amalgamation surfaces, etc.). Stratigraphic thickness measurements and estimates of measurement uncertainty are valuable to many applications of sedimentology and stratigraphy at different scales (e.g., bed statistics, reconstruction of palaeotopographies, depositional processes at bed scale, architectural element analysis), especially when a quantitative approach is applied to the analysis of the data; the ability to collect larger data sets with improved precision will increase the quality of such studies.

  9. Event Clustering: Accuracy and Precision of Multiple Event Locations with Sparse Networks

    NASA Astrophysics Data System (ADS)

    Baldwin, T. K.; Wallace, T. C.

    2002-12-01

    In the last 15 years, passive PASSCAL experiments have been fielded on every continent. Most of these deployments were designed to record teleseismic or large local seismic events to infer crustal and mantle structure. However, the deployments inevitably record small, local seismicity. Unfortunately, the configuration of the experiments is not optimal for location (typically the stations are arranged in linear arrays), and the seismicity is recorded at a very limited number of stations. The standard location procedure (Geiger's method) is severely limited without a detailed crustal model. A number of methods have been developed to improve relative location precision, including Joint Hypocenter Determination (JHD) and Progressive Multiple Event Location (PMEL). In this study we investigate the performance of PMEL for a very sparse network where there appears to be strong event clustering. CHARGE is a passive deployment of broadband seismometers in Chile and Argentina, with a primary focus of investigating the changes in dip along the descending Nazca Plate. The CHARGE stations recorded a large number of small, local events in 2000-2002. For this study, events were selected from the northern profile (approximately along 30° S) in Chile. The events look similar, and appear to be clustered southeast of the city of La Serena. We performed three sets of experiments to investigate precision: (1) iterative Master Event Corrections to measure the scale length of clusters, (2) PMEL locations, and (3) PMEL locations using cross-correlation to determine accurate relative phase timing. The analysis shows that for the PMEL experiment, clusters must occupy an area of 600 km² for the results to be consistent. We will present a method to estimate the precision errors based on bootstrapping. CHARGE Team: S. Beck, G. Zandt, M. Anderson, H. Folsom, R. Fromm, T. Shearer, L. Wagner, and P. Alvarado (all University of Arizona), J. Campos, E. Kausel, and J. Paredes (all University of
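    The bootstrap precision estimate mentioned at the end can be sketched generically: resample the data with replacement and take percentiles of the recomputed statistic. This is a generic sketch of the technique, not the authors' method, and the mislocation values below are invented for illustration:

```python
import numpy as np

def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    # Recompute the statistic on n_boot resamples drawn with replacement
    stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical epicentre mislocations (km) for a cluster of relocated events
misloc_km = [2.1, 3.4, 1.8, 2.9, 4.2, 2.5, 3.1, 2.2, 3.8, 2.6]
lo, hi = bootstrap_ci(misloc_km)
```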

  10. Tissue probability map constrained CLASSIC for increased accuracy and robustness in serial image segmentation

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Shen, Dinggang; Wong, Stephen T. C.

    2009-02-01

    Traditional fuzzy clustering algorithms have been successfully applied in MR image segmentation for quantitative morphological analysis. However, the clustering results might be biased due to the variability of tissue intensities and anatomical structures. For example, clustering-based algorithms tend to over-segment the white matter tissues of MR brain images. To solve this problem, we introduce a tissue probability map constrained clustering algorithm and apply it to serial MR brain image segmentation for longitudinal study of human brains. The tissue probability maps consist of segmentation priors obtained from a population and reflect the probability of different tissue types. More accurate image segmentation can be achieved by using these segmentation priors in the clustering algorithm. Experimental results on both simulated longitudinal MR brain data and the Alzheimer's Disease Neuroimaging Initiative (ADNI) data using the new serial image segmentation algorithm in the framework of CLASSIC show more accurate and robust longitudinal measures.
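    One common way to realize such prior-constrained clustering, sketched here as an assumption about the general approach rather than the paper's exact CLASSIC formulation, is to weight the fuzzy c-means membership update by the tissue probability map:

```python
import numpy as np

def memberships(x, centers, priors, m=2.0):
    """One prior-weighted fuzzy c-means membership update.

    x       : (n,) voxel intensities
    centers : (c,) cluster (tissue) centroids
    priors  : (n, c) tissue probability map values, rows summing to 1
    m       : fuzzifier (m = 2 is the usual choice)
    """
    d2 = (x[:, None] - centers[None, :]) ** 2 + 1e-12  # squared distances
    u = priors * d2 ** (-1.0 / (m - 1.0))              # prior-weighted inverse distance
    return u / u.sum(axis=1, keepdims=True)            # normalize per voxel

# Two voxels exactly between both centroids: the prior breaks the tie
x = np.array([0.5, 0.5])
centers = np.array([0.0, 1.0])
priors = np.array([[0.8, 0.2],
                   [0.5, 0.5]])
U = memberships(x, centers, priors)
```

When the intensity evidence is ambiguous the membership falls back on the population prior, which is how the prior counteracts the over-segmentation bias described above.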

  11. Potassium conductance dynamics confer robust spike-time precision in a neuromorphic model of the auditory brain stem

    PubMed Central

    Boahen, Kwabena

    2013-01-01

    A fundamental question in neuroscience is how neurons perform precise operations despite inherent variability. This question also applies to neuromorphic engineering, where low-power microchips emulate the brain using large populations of diverse silicon neurons. Biological neurons in the auditory pathway display precise spike timing, critical for sound localization and interpretation of complex waveforms such as speech, even though they are a heterogeneous population. Silicon neurons are also heterogeneous, due to a key design constraint in neuromorphic engineering: smaller transistors offer lower power consumption and more neurons per unit area of silicon, but also more variability between transistors and thus between silicon neurons. Utilizing this variability in a neuromorphic model of the auditory brain stem with 1,080 silicon neurons, we found that a low-voltage-activated potassium conductance (gKL) enables precise spike timing via two mechanisms: statically reducing the resting membrane time constant and dynamically suppressing late synaptic inputs. The relative contribution of these two mechanisms is unknown because blocking gKL in vitro eliminates dynamic adaptation but also lengthens the membrane time constant. We replaced gKL with a static leak in silico to recover the short membrane time constant and found that silicon neurons could mimic the spike-time precision of their biological counterparts, but only over a narrow range of stimulus intensities and biophysical parameters. The dynamics of gKL were required for precise spike timing robust to stimulus variation across a heterogeneous population of silicon neurons, thus explaining how neural and neuromorphic systems may perform precise operations despite inherent variability. PMID:23554436

  12. [Studies on the accuracy and precision of total serum cholesterol in regional interlaboratory trials (author's transl)].

    PubMed

    Hohenwallner, W; Sommer, R; Wimmer, E

    1976-01-02

    The between-run precision of the Liebermann-Burchard reaction modified by Watson was, in our laboratory, 2-3%; the within-run coefficient of variation was 1-2%. The between-run precision of the enzymatic test was 3-4%; the within-run coefficient of variation was 3%. The regression equation for 92 serum specimens from patients was y = -17.31 + 1.04x, with a correlation coefficient of r = 0.996. Interlaboratory trials of serum cholesterol were studied in the normal and pathological range. Lyophilized samples of serum, prepared commercially and from fresh patient specimens, were analysed by the Liebermann-Burchard method as well as by the enzymatic procedure. Acceptable results with the Liebermann-Burchard method were obtained in the different laboratories after using a common cholesterol standard. The coefficient of variation of the enzymatic test in the interlaboratory trial was higher in comparison to the Liebermann-Burchard reaction. Methodological difficulties of the Liebermann-Burchard reaction are discussed and compared with the specific enzymatic assay.

  13. Precision and Accuracy in the Determination of Sulfur Oxides, Fluoride, and Spherical Aluminosilicate Fly Ash Particles in Project MOHAVE.

    PubMed

    Eatough, Norman L; Eatough, Michele; Joseph, Jyothi M; Caka, Fern M; Lewis, Laura; Eatough, Delbert J

    1997-04-01

    The precision and accuracy of the determination of particulate sulfate and fluoride, and gas phase SO2 and HF, are estimated from the results obtained from collocated replicate samples and from collocated comparison samples for high- and low-volume filter pack and annular diffusion denuder samplers. The results of replicate analysis of collocated samples and replicate analyses of a given sample for the determination of spherical aluminosilicate fly ash particles have also been compared. Each of these species is being used in the chemical mass balance source apportionment of sulfur oxides in the Grand Canyon region as part of Project MOHAVE, and the precision and accuracy analyses given in this paper provide input to that analysis. The precision of the various measurements reported here is ±1.8 nmol/m(3) and ±2.5 nmol/m(3) for the determination of SO2 and sulfate, respectively, with an annular denuder. The precision is ±0.5 nmol/m(3) and ±2.0 nmol/m(3) for the determination of the same species with a high-volume or low-volume filter pack. The precision for the determination of the sum of HF(g) and fine particulate fluoride is ±0.3 nmol/m(3). The precision for the determination of aluminosilicate fly ash particles is ±100 particles/m(3). At high concentrations of the various species, reproducibility of the various measurements is ±10% to ±14% of the measured concentration. The concentrations of sulfate determined using filter pack samplers are frequently higher than those determined using diffusion denuder sampling systems. The magnitude of the difference (e.g., 2-10 nmol sulfate/m(3)) is small, but important relative to the precision of the data and the concentrations of particulate sulfate present (typically 5-20 nmol sulfate/m(3)). The concentrations of SO2(g) determined using a high-volume cascade impactor filter pack sampler are correspondingly lower than those obtained with diffusion denuder samplers. The concentrations of SOx (SO2(g) plus particulate

  14. ACCURACY AND PRECISION OF A METHOD TO STUDY KINEMATICS OF THE TEMPOROMANDIBULAR JOINT: COMBINATION OF MOTION DATA AND CT IMAGING

    PubMed Central

    Baltali, Evre; Zhao, Kristin D.; Koff, Matthew F.; Keller, Eugene E.; An, Kai-Nan

    2008-01-01

    The purpose of the study was to test the precision and accuracy of a method used to track selected landmarks during motion of the temporomandibular joint (TMJ). A precision phantom device was constructed and relative motions between two rigid bodies on the phantom device were measured using optoelectronic (OE) and electromagnetic (EM) motion tracking devices. The motion recordings were also combined with a 3D CT image for each type of motion tracking system (EM+CT and OE+CT) to mimic methods used in previous studies. In the OE and EM data collections, specific landmarks on the rigid bodies were determined using digitization. In the EM+CT and OE+CT data sets, the landmark locations were obtained from the CT images. 3D linear distances and 3D curvilinear path distances were calculated for the points. The accuracy and precision for all 4 methods were evaluated (EM, OE, EM+CT and OE+CT). In addition, results were compared with and without the CT imaging (EM vs. EM+CT, OE vs. OE+CT). All systems overestimated the actual 3D curvilinear path lengths. All systems also underestimated the actual rotation values. The accuracy of all methods was within 0.5 mm for 3D curvilinear path calculations, 0.05 mm for 3D linear distance calculations, and 0.2° for rotation calculations. In addition, Bland-Altman plots for each configuration of the systems suggest that measurements obtained from either system are repeatable and comparable. PMID:18617178
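    The 3D linear and curvilinear path distances evaluated above reduce to simple segment sums over tracked landmark positions. A Python sketch (the track coordinates are hypothetical) that also illustrates why noisy tracking tends to overestimate curvilinear path length, as all four system configurations did:

```python
import math

def linear_distance(p, q):
    """3D straight-line distance between two landmark positions."""
    return math.dist(p, q)

def path_length(points):
    """3D curvilinear path length: sum of straight segments along the track."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

# Jitter on intermediate samples only adds length, so a tracked path can
# never be shorter than the straight line between its endpoints.
track = [(0, 0, 0), (1, 0.1, 0), (2, 0, 0.1), (3, 0, 0)]  # hypothetical
curvilinear = path_length(track)
straight = linear_distance(track[0], track[-1])
```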

  15. Performance characterization of precision micro robot using a machine vision system over the Internet for guaranteed positioning accuracy

    NASA Astrophysics Data System (ADS)

    Kwon, Yongjin; Chiou, Richard; Rauniar, Shreepud; Sosa, Horacio

    2005-11-01

    There is a missing link between a virtual development environment (e.g., a CAD/CAM-driven offline robotic programming system) and the production requirements of the actual robotic workcell. Simulated robot path planning and generation of pick-and-place coordinate points will not exactly coincide with the robot's performance, because variations in individual robot repeatability and thermal expansion of the robot linkages are not taken into account. This is especially important when robots are controlled and programmed remotely (e.g., over the Internet or Ethernet), since remote users have no physical contact with the robotic systems. Current Internet-based manufacturing technology, limited to a web camera for live image transfer, poses a significant challenge for robot task performance. Consequently, the calibration and accuracy quantification of robots critical to precision assembly must be performed on-site, and the robot's positioning accuracy cannot be verified remotely. In the worst case, remote users must assume the robot performance envelope provided by the manufacturer, which may pose a serious hazard of system crashes and damage to the parts and robot arms. Currently, there is no reliable methodology for remotely calibrating robot performance. The objective of this research is, therefore, to advance the current state of the art in Internet-based control and monitoring technology, with the specific aim of calibrating the accuracy of a micro precision robotic system by developing a novel methodology utilizing Ethernet-based smart image sensors and other advanced precision sensory control networks.

  16. Impact of improved models for precise orbits of altimetry satellites on the orbit accuracy and regional mean sea level trends

    NASA Astrophysics Data System (ADS)

    Rudenko, Sergei; Esselborn, Saskia; Dettmering, Denise; Schöne, Tilo; Neumayer, Karl-Hans

    2015-04-01

    Precise orbits of altimetry satellites are a prerequisite for investigations of global and regional sea level changes. We show the significant progress made in recent decades in modeling and determining the orbits of altimetry satellites. This progress stems from improved knowledge of the Earth's gravity field obtained from CHAMP (CHAllenging Mini-Satellite Payload), GRACE (Gravity Recovery and Climate Experiment) and GOCE (Gravity field and Ocean Circulation Explorer) data, improved realizations of the terrestrial and celestial reference frames and of the transformations between them, improved modeling of ocean and solid Earth tides, improved observation accuracy, and other effects. New precise orbits of the altimetry satellites ERS-1 (1991-1996), TOPEX/Poseidon (1992-2005), ERS-2 (1995-2006), Envisat (2002-2012) and Jason-1 (2002-2012) have recently been derived for the time intervals given, within the DFG UHR-GravDat project and the ESA Climate Change Initiative Sea Level project, using satellite laser ranging (SLR), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), Precise Range And Range-Rate Equipment (PRARE) and altimetry single-satellite crossover data (the observation types used vary by satellite). We show the current state of the orbit accuracy and the improvements obtained in recent years. In particular, we demonstrate the impact of recently developed time-variable Earth gravity field models, improved tropospheric refraction models for DORIS observations, the latest release (05) of the atmosphere-ocean dealiasing product (AOD1B) and other models on the orbit accuracy of these altimetry satellites and on regional mean sea level trends computed using the new orbit solutions.

  17. Note: electronic circuit for two-way time transfer via a single coaxial cable with picosecond accuracy and precision.

    PubMed

    Prochazka, Ivan; Kodet, Jan; Panek, Petr

    2012-11-01

    We have designed, constructed, and tested the overall performance of an electronic circuit for two-way time transfer between two timing devices over modest distances with sub-picosecond precision and a systematic error of a few picoseconds. The circuit design enables time tagging of pulses of interest to be carried out in parallel with the comparison of the time scales of the two timing devices. The key timing parameters of the circuit are: a temperature coefficient of delay below 100 fs/K; a timing stability (time deviation) better than 8 fs for averaging times from minutes to hours; sub-picosecond time transfer precision; and a time transfer accuracy of a few picoseconds.
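    The quoted time deviation (TDEV) can be computed from equally spaced phase (time-error) samples; the abstract does not specify an estimator, so this Python sketch uses the standard overlapping estimator purely as an illustration:

```python
import math

def tdev(x, n=1):
    """Time deviation TDEV(tau = n*tau0) from equally spaced phase samples x,
    via the standard overlapping estimator:
    TDEV^2 = 1/(6 n^2 M) * sum_j [ sum_{i=j}^{j+n-1} (x[i+2n] - 2 x[i+n] + x[i]) ]^2
    averaged over the M = N - 3n + 1 available windows."""
    N = len(x)
    terms = [
        sum(x[i + 2 * n] - 2 * x[i + n] + x[i] for i in range(j, j + n)) ** 2
        for j in range(N - 3 * n + 1)
    ]
    return math.sqrt(sum(terms) / (6 * n ** 2 * len(terms)))
```

A pure frequency offset (a linear phase ramp) gives TDEV = 0, consistent with TDEV measuring timing stability rather than a constant offset.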

  18. A time projection chamber for high accuracy and precision fission cross-section measurements

    DOE PAGES

    Heffner, M.; Asner, D. M.; Baker, R. G.; ...

    2014-05-22

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This study provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  19. A time projection chamber for high accuracy and precision fission cross-section measurements

    SciTech Connect

    Heffner, M.; Asner, D. M.; Baker, R. G.; Baker, J.; Barrett, S.; Brune, C.; Bundgaard, J.; Burgett, E.; Carter, D.; Cunningham, M.; Deaven, J.; Duke, D. L.; Greife, U.; Grimes, S.; Hager, U.; Hertel, N.; Hill, T.; Isenhower, D.; Jewell, K.; King, J.; Klay, J. L.; Kleinrath, V.; Kornilov, N.; Kudo, R.; Laptev, A. B.; Leonard, M.; Loveland, W.; Massey, T. N.; McGrath, C.; Meharchand, R.; Montoya, L.; Pickle, N.; Qu, H.; Riot, V.; Ruz, J.; Sangiorgio, S.; Seilhan, B.; Sharma, S.; Snyder, L.; Stave, S.; Tatishvili, G.; Thornton, R. T.; Tovesson, F.; Towell, D.; Towell, R. S.; Watson, S.; Wendt, B.; Wood, L.; Yao, L.

    2014-05-22

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This study provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  20. A time projection chamber for high accuracy and precision fission cross-section measurements

    NASA Astrophysics Data System (ADS)

    Heffner, M.; Asner, D. M.; Baker, R. G.; Baker, J.; Barrett, S.; Brune, C.; Bundgaard, J.; Burgett, E.; Carter, D.; Cunningham, M.; Deaven, J.; Duke, D. L.; Greife, U.; Grimes, S.; Hager, U.; Hertel, N.; Hill, T.; Isenhower, D.; Jewell, K.; King, J.; Klay, J. L.; Kleinrath, V.; Kornilov, N.; Kudo, R.; Laptev, A. B.; Leonard, M.; Loveland, W.; Massey, T. N.; McGrath, C.; Meharchand, R.; Montoya, L.; Pickle, N.; Qu, H.; Riot, V.; Ruz, J.; Sangiorgio, S.; Seilhan, B.; Sharma, S.; Snyder, L.; Stave, S.; Tatishvili, G.; Thornton, R. T.; Tovesson, F.; Towell, D.; Towell, R. S.; Watson, S.; Wendt, B.; Wood, L.; Yao, L.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  1. A Time Projection Chamber for High Accuracy and Precision Fission Cross-Section Measurements

    SciTech Connect

    T. Hill; K. Jewell; M. Heffner; D. Carter; M. Cunningham; V. Riot; J. Ruz; S. Sangiorgio; B. Seilhan; L. Snyder; D. M. Asner; S. Stave; G. Tatishvili; L. Wood; R. G. Baker; J. L. Klay; R. Kudo; S. Barrett; J. King; M. Leonard; W. Loveland; L. Yao; C. Brune; S. Grimes; N. Kornilov; T. N. Massey; J. Bundgaard; D. L. Duke; U. Greife; U. Hager; E. Burgett; J. Deaven; V. Kleinrath; C. McGrath; B. Wendt; N. Hertel; D. Isenhower; N. Pickle; H. Qu; S. Sharma; R. T. Thornton; D. Towell; R. S. Towell; S.

    2014-09-01

    The fission Time Projection Chamber (fissionTPC) is a compact (15 cm diameter) two-chamber MICROMEGAS TPC designed to make precision cross-section measurements of neutron-induced fission. The actinide targets are placed on the central cathode and irradiated with a neutron beam that passes axially through the TPC inducing fission in the target. The 4π acceptance for fission fragments and complete charged particle track reconstruction are powerful features of the fissionTPC which will be used to measure fission cross-sections and examine the associated systematic errors. This paper provides a detailed description of the design requirements, the design solutions, and the initial performance of the fissionTPC.

  2. Accuracy and precision of the i-STAT portable clinical analyzer: an analytical point of view.

    PubMed

    Pidetcha, P; Ornvichian, S; Chalachiva, S

    2000-04-01

    The introduction of a new point-of-care testing (POCT) instrument into the market affects medical practice and laboratory services. The i-STAT is designed to improve the speed of decision making in the medical profession; however, the reliability of its results must be ensured to guarantee the quality of laboratory data. We therefore evaluated the performance of the i-STAT using a disposable EG7+ cartridge, which is capable of measuring pH, pO2, pCO2 (blood gases), sodium, potassium (electrolytes), ionized calcium and hematocrit with only 10 microliters of lithium-heparinized blood in 2 minutes. The results were compared with those obtained from routine methods and were found to be accurate, precise and well correlated with the accepted methods used routinely in the laboratory.

  3. Factors controlling precision and accuracy in isotope-ratio-monitoring mass spectrometry

    NASA Technical Reports Server (NTRS)

    Merritt, D. A.; Hayes, J. M.

    1994-01-01

    The performance of systems in which picomole quantities of sample are mixed with a carrier gas and passed through an isotope-ratio mass spectrometer system was examined experimentally and theoretically. Two different mass spectrometers were used, both having electron-impact ion sources and Faraday cup collector systems. One had an accelerating potential of 10 kV and accepted 0.2 mL of He/min, producing, under those conditions, a maximum efficiency of 1 CO2 molecular ion collected per 700 molecules introduced. Comparable figures for the second instrument were 3 kV, 0.5 mL of He/min, and 14,000 molecules/ion. Signal pathways were adjusted so that response times were <200 ms. Sample-related ion currents appeared as peaks with widths of 3-30 s. Isotope ratios were determined by comparison to signals produced by standard gases. In spite of rapid variations in signals, observed levels of performance were within a factor of 2 of shot-noise limits. For the 10-kV instrument, sample requirements for standard deviations of 0.1 and 0.5% were 45 and 1.7 pmol, respectively. Comparable requirements for the 3-kV instrument were 900 and 36 pmol. Drifts in instrumental characteristics were adequately neutralized when standards were observed at 20-min intervals. For the 10-kV instrument, computed isotopic compositions were independent of sample size and signal strength over the ranges examined. Nonlinearities of <0.04%/V were observed for the 3-kV system. Procedures for observation and subtraction of background ion currents were examined experimentally and theoretically. For sample/background ratios varying from >10 to 0.3, precision is expected and observed to decrease approximately 2-fold and to depend only weakly on the precision with which background ion currents have been measured.
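    The shot-noise limits discussed above follow from Poisson counting statistics on the two ion beams: the relative standard deviation of a measured ratio is sqrt(1/n_minor + 1/n_major) for n ions collected in each beam. A hedged Python sketch (the function names, the ~1.1% natural 13C abundance, and the single-ratio treatment are illustrative assumptions, so the numbers will not reproduce the paper's exact pmol figures):

```python
import math

AVOGADRO = 6.022e23

def shot_noise_rsd(n_major, n_minor):
    """Shot-noise-limited relative standard deviation of an isotope ratio,
    given the number of ions collected in the major and minor beams."""
    return math.sqrt(1.0 / n_major + 1.0 / n_minor)

def sample_required_pmol(target_rsd, molecules_per_ion, minor_abundance=0.011):
    """Rough analyte requirement (pmol) for counting statistics alone to reach
    target_rsd.  The minor beam dominates, so about 1/target_rsd^2
    minor-isotope ions must be collected."""
    n_minor = 1.0 / target_rsd ** 2
    molecules = n_minor * molecules_per_ion / minor_abundance
    return molecules / AVOGADRO * 1e12
```

With 700 molecules/ion (the 10-kV figure above), this toy estimate lands at the same order of magnitude as the reported pmol requirements; it omits the standard-gas comparison and background subtraction, which add further variance.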

  4. Incorporating precision, accuracy and alternative sampling designs into a continental monitoring program for colonial waterbirds

    USGS Publications Warehouse

    Steinkamp, Melanie J.; Peterjohn, B.G.; Keisman, J.L.

    2003-01-01

    A comprehensive monitoring program for colonial waterbirds in North America has never existed. At smaller geographic scales, many states and provinces conduct surveys of colonial waterbird populations. Periodic regional surveys are conducted at varying times during the breeding season using a variety of survey methods, which complicates attempts to estimate population trends for most species. The US Geological Survey Patuxent Wildlife Research Center has recently started to coordinate colonial waterbird monitoring efforts throughout North America. A centralized database has been developed with an Internet-based data entry and retrieval page. The extent of existing colonial waterbird surveys has been defined, allowing gaps in coverage to be identified and basic inventories completed where desirable. To enable analyses of comparable data at regional or larger geographic scales, sampling populations through statistically sound sampling designs should supersede obtaining counts at every colony. Standardized breeding season survey techniques have been agreed upon and documented in a monitoring manual. Each survey in the manual has associated with it recommendations for bias estimation, and includes specific instructions on measuring detectability. The methods proposed in the manual are for developing reliable, comparable indices of population size to establish trend information at multiple spatial and temporal scales, but they will not result in robust estimates of total population numbers.

  5. Phylogenomic datasets provide both precision and accuracy in estimating the timescale of placental mammal phylogeny.

    PubMed

    dos Reis, Mario; Inoue, Jun; Hasegawa, Masami; Asher, Robert J; Donoghue, Philip C J; Yang, Ziheng

    2012-09-07

    The fossil record suggests a rapid radiation of placental mammals following the Cretaceous-Paleogene (K-Pg) mass extinction 65 million years ago (Ma); nevertheless, molecular time estimates, while highly variable, are generally much older. Early molecular studies suffer from inadequate dating methods, reliance on the molecular clock, and simplistic and over-confident interpretations of the fossil record. More recent studies have used Bayesian dating methods that circumvent those issues, but the use of limited data has led to large estimation uncertainties, precluding a decisive conclusion on the timing of mammalian diversifications. Here we use a powerful Bayesian method to analyse 36 nuclear genomes and 274 mitochondrial genomes (20.6 million base pairs), combined with robust but flexible fossil calibrations. Our posterior time estimates suggest that marsupials diverged from eutherians 168-178 Ma, and crown Marsupialia diverged 64-84 Ma. Placentalia diverged 88-90 Ma, and present-day placental orders (except Primates and Xenarthra) originated in a ∼20 Myr window (45-65 Ma) after the K-Pg extinction. Therefore we reject a pre K-Pg model of placental ordinal diversification. We suggest other infamous instances of mismatch between molecular and palaeontological divergence time estimates will be resolved with this same approach.

  6. Pupil size dynamics during fixation impact the accuracy and precision of video-based gaze estimation.

    PubMed

    Choe, Kyoung Whan; Blake, Randolph; Lee, Sang-Hun

    2016-01-01

    Video-based eye tracking relies on locating the pupil center to measure gaze positions. Although widely used, the technique is known to generate spurious gaze position shifts of up to several degrees of visual angle, because the pupil center can shift without any eye movement during pupil constriction or dilation. Since pupil size can fluctuate markedly from moment to moment, reflecting arousal state and cognitive processing during human behavioral and neuroimaging experiments, the pupil size artifact is prevalent and weakens the quality of video-based eye tracking measurements reliant on small fixational eye movements. Moreover, the artifact may lead to erroneous conclusions if the spurious signal is taken as an actual eye movement. Here, we measured pupil size and gaze position from 23 human observers performing a fixation task and examined the relationship between these two measures. Results disclosed that the pupils contracted as fixation was prolonged, at both small (<16 s) and large (∼4 min) time scales, and these pupil contractions were accompanied by systematic errors in gaze position estimation, in both the ellipse and the centroid methods of pupil tracking. When pupil size was regressed out, the accuracy and reliability of gaze position measurements were substantially improved, enabling differentiation of a 0.1° difference in eye position. We confirmed the presence of systematic changes in pupil size, again at both small and large scales, and its tight relationship with gaze position estimates when observers were engaged in a demanding visual discrimination task.
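    The correction described ("when pupil size was regressed out") can be sketched as an ordinary least-squares regression of gaze position on pupil size, with the fitted component subtracted. The function name and the purely linear model are assumptions for illustration, not the authors' code:

```python
import numpy as np

def remove_pupil_artifact(gaze, pupil):
    """Subtract the pupil-size-correlated component from a gaze trace.

    Fits gaze ~ b0 + b1 * pupil by ordinary least squares and returns the
    residual re-centered on the mean gaze position."""
    X = np.column_stack([np.ones_like(pupil), pupil])
    beta, *_ = np.linalg.lstsq(X, gaze, rcond=None)
    return gaze - X @ beta + gaze.mean()
```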

  7. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment

    PubMed Central

    Vu, An T.; Phillips, Jeffrey S.; Kay, Kendrick; Phillips, Matthew E.; Johnson, Matthew R.; Shinkareva, Svetlana V.; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2017-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms. PMID:27686111

  8. Using precise word timing information improves decoding accuracy in a multiband-accelerated multimodal reading experiment.

    PubMed

    Vu, An T; Phillips, Jeffrey S; Kay, Kendrick; Phillips, Matthew E; Johnson, Matthew R; Shinkareva, Svetlana V; Tubridy, Shannon; Millin, Rachel; Grossman, Murray; Gureckis, Todd; Bhattacharyya, Rajan; Yacoub, Essa

    2016-01-01

    The blood-oxygen-level-dependent (BOLD) signal measured in functional magnetic resonance imaging (fMRI) experiments is generally regarded as sluggish and poorly suited for probing neural function at the rapid timescales involved in sentence comprehension. However, recent studies have shown the value of acquiring data with very short repetition times (TRs), not merely in terms of improvements in contrast to noise ratio (CNR) through averaging, but also in terms of additional fine-grained temporal information. Using multiband-accelerated fMRI, we achieved whole-brain scans at 3-mm resolution with a TR of just 500 ms at both 3T and 7T field strengths. By taking advantage of word timing information, we found that word decoding accuracy across two separate sets of scan sessions improved significantly, with better overall performance at 7T than at 3T. The effect of TR was also investigated; we found that substantial word timing information can be extracted using fast TRs, with diminishing benefits beyond TRs of 1000 ms.

  9. Accuracy and precision of cone beam computed tomography in periodontal defects measurement (systematic review)

    PubMed Central

    Anter, Enas; Zayet, Mohammed Khalifa; El-Dessouky, Sahar Hosny

    2016-01-01

    A systematic review of the literature was made to assess the accuracy of cone beam computed tomography (CBCT) as a tool for measuring alveolar bone loss in periodontal defects. A systematic search of the PubMed electronic database and a hand search of open-access journals (from 2000 to 2015) yielded abstracts that were potentially relevant. The original articles were then retrieved and their references hand searched for possible missing articles. Only articles that met the selection criteria were included and appraised. The initial screening revealed 47 potentially relevant articles, of which only 14 met the selection criteria; their average CBCT measurement errors ranged from 0.19 mm to 1.27 mm; however, no valid meta-analysis could be made due to the high heterogeneity among the included studies. Within the limitations of the number and strength of the available studies, we concluded that CBCT provides an assessment of alveolar bone loss in periodontal defects with a minimum reported mean measurement error of 0.19 ± 0.11 mm and a maximum reported mean measurement error of 1.27 ± 1.43 mm, and that there is no agreement between the studies regarding the direction of the deviation, whether over- or underestimation. However, we should emphasize that the evidence behind these data is not strong. PMID:27563194

  10. Onset-Duration Matching of Acoustic Stimuli Revisited: Conventional Arithmetic vs. Proposed Geometric Measures of Accuracy and Precision

    PubMed Central

    Friedrich, Björn; Heil, Peter

    2017-01-01

    Onsets of acoustic stimuli are salient transients and are relevant in humans for the perception of music and speech. Previous studies of onset-duration discrimination and matching focused on whether onsets are perceived categorically. In this study, we address two issues. First, we revisit onset-duration matching and measure, for 79 conditions, how accurately and precisely human listeners can adjust the onset duration of a comparison stimulus to subjectively match that of a standard stimulus. Second, we explore measures for quantifying performance in this and other matching tasks. The conventional measures of accuracy and precision are defined by arithmetic descriptive statistics and the Euclidean distance function on the real numbers. We propose novel measures based on geometric descriptive statistics and the log-ratio distance function, the Euclidean distance function on the positive-real numbers. Only these properly account for the fact that the magnitude of onset durations, like the magnitudes of most physical quantities, can attain only positive real values. The conventional (arithmetic) measures possess a convexity bias that yields errors that grow with the width of the distribution of matches. This convexity bias leads to misrepresentations of the constant error and could even imply the existence of perceptual illusions where none exist. This is not so for the proposed (geometric) measures. We collected up to 68 matches from a given listener for each condition (about 34,000 matches in total) and examined inter-listener variability and the effects of onset duration, plateau duration, sound level, carrier, and restriction of the range of adjustable comparison stimuli on measures of accuracy and precision. Results obtained with the conventional measures generally agree with those reported in the literature. The variance across listeners is highly heterogeneous for the conventional measures but is homogeneous for the proposed measures. 
Furthermore, the proposed
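    The contrast between the conventional (arithmetic) and proposed (geometric, log-ratio) measures can be made concrete in a few lines of Python; the convexity bias shows up already for two matches placed symmetrically about a standard on the log scale:

```python
import math

def arithmetic_measures(matches):
    """Conventional accuracy/precision: arithmetic mean and sample SD."""
    n = len(matches)
    mean = sum(matches) / n
    sd = math.sqrt(sum((m - mean) ** 2 for m in matches) / (n - 1))
    return mean, sd

def geometric_measures(matches):
    """Proposed accuracy/precision: geometric mean and geometric SD, i.e.
    arithmetic statistics of log(match) mapped back with exp().  The
    log-ratio distance |log(x/y)| is the natural metric for positive
    quantities such as onset durations."""
    logs = [math.log(m) for m in matches]
    n = len(logs)
    mu = sum(logs) / n
    sd = math.sqrt(sum((v - mu) ** 2 for v in logs) / (n - 1))
    return math.exp(mu), math.exp(sd)

# Convexity bias in one example: matches of 5 and 20 ms sit symmetrically
# (a factor of 2 each way) around a 10 ms standard, yet the arithmetic mean
# is 12.5 ms, suggesting a spurious constant error; the geometric mean
# recovers 10 ms exactly.
```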

  11. High-precision realization of robust quantum anomalous Hall state in a hard ferromagnetic topological insulator.

    PubMed

    Chang, Cui-Zu; Zhao, Weiwei; Kim, Duk Y; Zhang, Haijun; Assaf, Badih A; Heiman, Don; Zhang, Shou-Cheng; Liu, Chaoxing; Chan, Moses H W; Moodera, Jagadeesh S

    2015-05-01

    The discovery of the quantum Hall (QH) effect led to the realization of a topological electronic state with dissipationless currents circulating in one direction along the edge of a two-dimensional electron layer under a strong magnetic field. The quantum anomalous Hall (QAH) effect exhibits a similar physical phenomenon to the QH effect, but its physical origin relies on intrinsic spin-orbit coupling and ferromagnetism. Here, we report the experimental observation of the QAH state in V-doped (Bi,Sb)2Te3 films with the zero-field longitudinal resistance down to 0.00013 ± 0.00007 h/e² (~3.35 ± 1.76 Ω), Hall conductance reaching 0.9998 ± 0.0006 e²/h and the Hall angle becoming as high as 89.993° ± 0.004° at T = 25 mK. A further advantage of this system comes from the fact that it is a hard ferromagnet with a large coercive field (Hc > 1.0 T) and a relatively high Curie temperature. This realization of a robust QAH state in hard ferromagnetic topological insulators (FMTIs) is a major step towards dissipationless electronic applications in the absence of external fields.

  12. Accuracy and reliability of multi-GNSS real-time precise positioning: GPS, GLONASS, BeiDou, and Galileo

    NASA Astrophysics Data System (ADS)

    Li, Xingxing; Ge, Maorong; Dai, Xiaolei; Ren, Xiaodong; Fritsche, Mathias; Wickert, Jens; Schuh, Harald

    2015-06-01

    In this contribution, we present a GPS+GLONASS+BeiDou+Galileo four-system model to fully exploit the observations of all four navigation satellite systems for real-time precise orbit determination, clock estimation and positioning. A rigorous multi-GNSS analysis is performed to achieve the best possible consistency by processing the observations from the different GNSS together in one common parameter estimation procedure. Meanwhile, an efficient multi-GNSS real-time precise positioning service system is designed and demonstrated using the Multi-GNSS Experiment, BeiDou Experimental Tracking Network, and International GNSS Service networks, including stations all over the world. The statistical analysis of the 6-h predicted orbits shows that the radial and cross-track root mean square (RMS) values are smaller than 10 cm for BeiDou and Galileo, and smaller than 5 cm for both GLONASS and GPS satellites. The RMS values of the clock differences between real-time and batch-processed solutions are about 0.10 ns for GPS satellites, and 0.13, 0.13 and 0.14 ns for BeiDou, Galileo and GLONASS, respectively. Adding the BeiDou, Galileo and GLONASS systems to the standard GPS-only processing reduces the convergence time by almost 70%, while the positioning accuracy is improved by about 25%. Some outliers in the GPS-only solutions vanish when multi-GNSS observations are processed simultaneously. The availability and reliability of GPS precise positioning decrease dramatically as the elevation cutoff increases. However, the accuracy of multi-GNSS precise point positioning (PPP) is hardly degraded, and a few centimeters are still achievable in the horizontal components even with a 40° elevation cutoff. At 30° and 40° elevation cutoffs, the availability rate of the GPS-only solution drops significantly to only around 70% and 40%, respectively. However, multi-GNSS PPP can provide precise position estimates continuously (availability rate is more than 99

  13. Precision and accuracy of in vivo bone mineral measurement in rats using dual-energy X-ray absorptiometry.

    PubMed

    Rozenberg, S; Vandromme, J; Neve, J; Aguilera, A; Muregancuro, A; Peretz, A; Kinthaert, J; Ham, H

    1995-01-01

    The aim of this study was to evaluate the precision and accuracy of dual-energy X-ray absorptiometry (DXA) for measuring bone mineral content at different sites of the skeleton in rats. In vitro the reproducibility error was very small (< 1%), but in vivo the intra-observer variability ranged from 0.9% to 6.0%. Several factors have been shown to affect in vivo reproducibility: the reproducibility was better when the results were expressed as bone mineral density (BMD) rather than bone mineral content (BMC), intra-observer variability was better than the inter-observer variability, and a higher error was observed for the tibia compared with that for vertebrae and femur. The accuracy of measurement at the femur and tibia was assessed by comparing the values with ash weight and with biochemically determined calcium content. The correlation coefficients (R) between the in vitro BMC and the dry weight or the calcium content were higher than 0.99 for both the femur and the tibia. SEE ranged between 0.0 g (ash weight) and 2.0 mg (Ca content). Using in vitro BMC, ash weight could be estimated with an accuracy error close to 0 and calcium content with an error ranging between 0.82% and 6.80%. The R values obtained between the in vivo and in vitro BMC were 0.98 and 0.97 respectively for femur and tibia, with SEE of 0.04 and 0.02 g respectively. In conclusion, the in vivo precision of the technique was found to be too low. To be of practical use it is important in the design of experimentation to try to reduce the measurement error.(ABSTRACT TRUNCATED AT 250 WORDS)
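    The SEE values reported above are standard errors of the estimate from regressing, e.g., ash weight or calcium content on in vitro BMC. A minimal sketch of that statistic (the simple-linear-regression form is the conventional one; the abstract does not spell it out):

```python
import math

def see(x, y):
    """Standard error of the estimate (SEE) for simple linear regression
    y ~ a + b*x: sqrt(residual sum of squares / (n - 2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return math.sqrt(ss_res / (n - 2))
```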

  14. Accuracy and precision of equine gait event detection during walking with limb and trunk mounted inertial sensors.

    PubMed

    Olsen, Emil; Andersen, Pia Haubro; Pfau, Thilo

    2012-01-01

    The increased variation of temporal gait events in the presence of pathology makes these events good candidate features for objective diagnostic tests. We hypothesised that the gait events hoof-on/hoof-off and stance can be detected accurately and precisely using features from trunk- and distal-limb-mounted Inertial Measurement Units (IMUs). Four IMUs were mounted on the distal limbs, and five IMUs were attached to the skin over the dorsal spinous processes at the withers, the fourth lumbar vertebra and the sacrum, as well as over the left and right tuber coxae. IMU data were synchronised to a force plate array and a motion capture system. Accuracy (bias) and precision (SD of bias) were calculated by comparing force plate and IMU timings for gait events. Data were collected from seven horses. A total of 123 front limb steps were analysed; hoof-on was detected with a bias (SD) of -7 (23) ms, hoof-off with 0.7 (37) ms and front limb stance with -0.02 (37) ms. A total of 119 hind limb steps were analysed; hoof-on was found with a bias (SD) of -4 (25) ms, hoof-off with 6 (21) ms and hind limb stance with 0.2 (28) ms. IMUs mounted on the distal limbs and sacrum can detect gait events accurately and precisely.
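The bias/SD figures above follow directly from the paired differences between IMU and force-plate event times; a minimal sketch (function name and timing values are illustrative, not from the study):

```python
from statistics import mean, stdev

def bias_precision(imu_ms, reference_ms):
    """Accuracy (mean bias) and precision (SD of bias) of IMU-detected
    event times against force-plate reference times, in milliseconds."""
    diffs = [i - r for i, r in zip(imu_ms, reference_ms)]
    return mean(diffs), stdev(diffs)

# hypothetical hoof-on timings for four steps
bias, sd = bias_precision([1002.0, 1498.0, 2011.0, 2505.0],
                          [1005.0, 1500.0, 2010.0, 2509.0])
```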

  15. Accuracy and precision of free-energy calculations via molecular simulation

    NASA Astrophysics Data System (ADS)

    Lu, Nandou

    A quantitative characterization of the methodologies of free-energy perturbation (FEP) calculations is presented, and optimal implementation of the methods for reliable and efficient calculation is addressed. Some common misunderstandings in the FEP calculations are corrected. The two opposite directions of FEP calculations are uniquely defined as generalized insertion and generalized deletion, according to the entropy change along the perturbation direction. These two calculations are not symmetric; they produce free-energy results differing systematically due to the different capability of each to sample the important phase-space in a finite-length simulation. The FEP calculation errors are quantified by characterizing the simulation sampling process with the help of probability density functions for the potential energy change. While the random error in the FEP calculation is analyzed with a probabilistic approach, the systematic error is characterized as the most-likely inaccuracy, which is modeled considering the poor sampling of low-probability energy distribution tails. Our analysis shows that the entropy difference between the perturbation systems plays a key role in determining the reliability of FEP results, and the perturbation should be carried out in the insertion direction in order to ensure a good sampling and thus a reliable calculation. Easy-to-use heuristics are developed to estimate the simulation errors, as well as the simulation length that ensures a certain accuracy level of the calculation. The fundamental understanding obtained is then applied to tackle the problem of multistage FEP optimization. We provide the first principle of optimal staging: For each substage FEP calculation, the higher entropy system should be used as the reference to govern the sampling, i.e., the calculation should be conducted in the generalized insertion direction for each stage of perturbation. To minimize the simulation error, intermediate states should be

  16. Precisely Molded Nanoparticle Displaying DENV-E Proteins Induces Robust Serotype-Specific Neutralizing Antibody Responses

    PubMed Central

    Hoekstra, Gabriel; Yi, Xianwen; Stone, Michelle; Horvath, Katie; Miley, Michael J.; DeSimone, Joseph; Luft, Chris J.; de Silva, Aravinda M.

    2016-01-01

    Dengue virus (DENV) is the causative agent of dengue fever and dengue hemorrhagic fever. The virus is endemic in over 120 countries, causing over 350 million infections per year. Dengue vaccine development is challenging because of the need to induce simultaneous protection against four antigenically distinct DENV serotypes, and because of evidence that, under some conditions, vaccination can enhance disease due to specific immunity to the virus. While several live-attenuated tetravalent dengue virus vaccines display partial efficacy, it has been challenging to induce balanced protective immunity to all four serotypes. Instead of using whole-virus formulations, we are exploring the potential of a particulate subunit vaccine based on DENV E protein displayed on nanoparticles that have been precisely molded using Particle Replication In Non-wetting Templates (PRINT) technology. Here we describe immunization studies with a DENV2-nanoparticle vaccine candidate. The ectodomain of the DENV2 E protein was expressed as a secreted recombinant protein (sRecE), purified and adsorbed to poly(lactic-co-glycolic acid) (PLGA) nanoparticles of different sizes and shapes. We show that PRINT-nanoparticle-adsorbed sRecE, without any adjuvant, induces higher IgG titers and a more potent DENV2-specific neutralizing antibody response than the soluble sRecE protein alone. Antigen trafficking experiments indicate that PRINT nanoparticle display of sRecE prolongs the bioavailability of the antigen in the draining lymph nodes by creating an antigen depot. Our results demonstrate that PRINT nanoparticles are a promising platform for delivering subunit vaccines against flaviviruses such as dengue and Zika. PMID:27764114

  17. Minimally invasive measurement of cardiac output during surgery and critical care: a meta-analysis of accuracy and precision.

    PubMed

    Peyton, Philip J; Chong, Simon W

    2010-11-01

    When assessing the accuracy and precision of a new technique for cardiac output measurement, the commonly quoted criterion for acceptable agreement with a reference standard is that the percentage error (95% limits of agreement/mean cardiac output) should be 30% or less. We reviewed published data on four different minimally invasive methods adapted for use during surgery and critical care: pulse contour techniques, esophageal Doppler, partial carbon dioxide rebreathing, and transthoracic bioimpedance, to assess their bias, precision, and percentage error in agreement with thermodilution. An English-language literature search identified papers published since 2000 that examined the agreement in adult patients between bolus thermodilution and each method. For each method, a meta-analysis was performed using studies in which the first measurement point for each patient could be identified, to obtain a pooled mean bias, precision, and percentage error weighted according to the number of measurements in each study. Forty-seven studies were identified as suitable for inclusion: N studies, n measurements: mean weighted bias [precision, percentage error] were: pulse contour N = 24, n = 714: -0.00 l/min [1.22 l/min, 41.3%]; esophageal Doppler N = 2, n = 57: -0.77 l/min [1.07 l/min, 42.1%]; partial carbon dioxide rebreathing N = 8, n = 167: -0.05 l/min [1.12 l/min, 44.5%]; transthoracic bioimpedance N = 13, n = 435: -0.10 l/min [1.14 l/min, 42.9%]. None of the four methods has achieved agreement with bolus thermodilution that meets the expected 30% limits. The relevance of these arbitrary limits in clinical practice should be reassessed.
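The 30% criterion above defines percentage error as the 95% limits of agreement (1.96 × SD of the paired differences) divided by the mean reference cardiac output; a minimal sketch of that computation (values are illustrative, not study data):

```python
from statistics import mean, stdev

def percentage_error(test_co, reference_co):
    """Percentage error: 95% limits of agreement (1.96 * SD of the paired
    differences) over the mean reference cardiac output, in percent."""
    diffs = [t - r for t, r in zip(test_co, reference_co)]
    return 100.0 * 1.96 * stdev(diffs) / mean(reference_co)

# hypothetical paired cardiac output readings, l/min
pe = percentage_error([5.1, 4.2, 6.3, 5.6], [5.0, 4.6, 5.9, 5.5])
acceptable = pe <= 30.0  # the commonly quoted acceptability threshold
```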

  18. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments.

    PubMed

    Svec, David; Tichopad, Ales; Novosadova, Vendula; Pfaffl, Michael W; Kubista, Mikael

    2015-03-01

    We have examined the imprecision in the estimation of PCR efficiency by means of standard curves, using a strategic experimental design with a large number of technical replicates. In particular, we examined how robust this estimation is with respect to commonly varying factors: the instrument used, the number of technical replicates performed, and the volume transferred throughout the dilution series. We used six different qPCR instruments, performed 1-16 qPCR replicates per concentration, and tested transfer volumes of 2-10 μl. We find that the estimated PCR efficiency varies significantly across different instruments. Using a Monte Carlo approach, we find the uncertainty in the PCR efficiency estimate may be as large as 42.5% (95% CI) if a standard curve with only one qPCR replicate is used in 16 different plates. Based on our investigation we propose the following recommendations for precise estimation of PCR efficiency: (1) one robust standard curve with at least 3-4 qPCR replicates at each concentration should be generated; (2) the efficiency is instrument dependent, but reproducibly stable on one platform; and (3) using a larger volume when constructing the serial dilution series reduces sampling error and enables calibration across a wider dynamic range.
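The abstract does not spell out the estimator, but standard-curve PCR efficiency is conventionally derived from the least-squares slope of Cq versus log10(concentration), with E = 10^(-1/slope) - 1; a minimal sketch under that assumption:

```python
def pcr_efficiency(log10_conc, cq):
    """PCR efficiency from a standard curve: least-squares slope of
    Cq vs. log10(concentration), then E = 10**(-1/slope) - 1.
    A slope of about -3.32 corresponds to ~100% efficiency."""
    n = len(cq)
    mx = sum(log10_conc) / n
    my = sum(cq) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(log10_conc, cq))
             / sum((x - mx) ** 2 for x in log10_conc))
    return 10.0 ** (-1.0 / slope) - 1.0

# near-perfect doubling per cycle across a 10-fold dilution series
eff = pcr_efficiency([5.0, 4.0, 3.0, 2.0], [10.00, 13.32, 16.64, 19.97])
```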

  19. Accuracy and precision of minimally-invasive cardiac output monitoring in children: a systematic review and meta-analysis.

    PubMed

    Suehiro, Koichi; Joosten, Alexandre; Murphy, Linda Suk-Ling; Desebbe, Olivier; Alexander, Brenton; Kim, Sang-Hyun; Cannesson, Maxime

    2016-10-01

    Several minimally-invasive technologies are available for cardiac output (CO) measurement in children, but the accuracy and precision of these devices have not yet been evaluated in a systematic review and meta-analysis. We conducted a comprehensive search of the medical literature in PubMed, the Cochrane Library of Clinical Trials, Scopus, and Web of Science, from inception to June 2014, for studies assessing the accuracy and precision of all minimally-invasive CO monitoring systems used in children when compared with CO monitoring reference methods. Pooled mean bias, standard deviation, and mean percentage error of the included studies were calculated using a random-effects model. Inter-study heterogeneity was also assessed using the I(2) statistic. A total of 20 studies (624 patients) were included. The overall random-effects pooled bias and mean percentage error were 0.13 ± 0.44 l min(-1) and 29.1 %, respectively. Significant inter-study heterogeneity was detected (P < 0.0001, I(2) = 98.3 %). In the subgroup analysis by device, electrical cardiometry showed the smallest bias (-0.03 l min(-1)) and lowest percentage error (23.6 %). Significant residual heterogeneity remained after conducting sensitivity and subgroup analyses based on the various study characteristics. By meta-regression analysis, we found no independent effects of study characteristics on the weighted mean difference between reference and tested methods. Although the pooled bias was small, the pooled mean percentage error was in the gray zone of clinical applicability. In the subgroup analysis, electrical cardiometry was the device that provided the most accurate measurement. However, high heterogeneity between studies was found, likely due to the wide range of study characteristics.
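The I(2) statistic quoted above measures the share of total variation attributable to between-study heterogeneity rather than chance; a minimal sketch of Higgins' formula (the Cochran's Q value below is hypothetical):

```python
def i_squared(q, k):
    """Higgins I^2 (percent) from Cochran's Q over k studies:
    I^2 = 100 * (Q - df) / Q, floored at zero, with df = k - 1."""
    df = k - 1
    return max(0.0, 100.0 * (q - df) / q)

# e.g. a very large Q over 20 studies yields I^2 near the 98.3% reported
heterogeneity = i_squared(1120.0, 20)
```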

  20. Community-based Approaches to Improving Accuracy, Precision, and Reproducibility in U-Pb and U-Th Geochronology

    NASA Astrophysics Data System (ADS)

    McLean, N. M.; Condon, D. J.; Bowring, S. A.; Schoene, B.; Dutton, A.; Rubin, K. H.

    2015-12-01

    The last two decades have seen a grassroots effort by the international geochronology community to "calibrate Earth history through teamwork and cooperation," both as part of the EARTHTIME initiative and through several daughter projects with similar goals. Its mission originally challenged laboratories "to produce temporal constraints with uncertainties approaching 0.1% of the radioisotopic ages," but EARTHTIME has since exceeded its charge in many ways. Both the U-Pb and Ar-Ar chronometers first considered for high-precision timescale calibration now regularly produce dates at the sub-per mil level thanks to instrumentation, laboratory, and software advances. At the same time new isotope systems, including U-Th dating of carbonates, have developed comparable precision. But the larger, inter-related scientific challenges envisioned at EARTHTIME's inception remain - for instance, precisely calibrating the global geologic timescale, estimating rates of change around major climatic perturbations, and understanding evolutionary rates through time - and increasingly require that data from multiple geochronometers be combined. To solve these problems, the next two decades of uranium-daughter geochronology will require further advances in accuracy, precision, and reproducibility. The U-Th system has much in common with U-Pb, in that both parent and daughter isotopes are solids that can easily be weighed and dissolved in acid, and have well-characterized reference materials certified for isotopic composition and/or purity. For U-Pb, improving lab-to-lab reproducibility has entailed dissolving precisely weighed U and Pb metals of known purity and isotopic composition together to make gravimetric solutions, then using these to calibrate widely distributed tracers composed of artificial U and Pb isotopes. To mimic laboratory measurements, naturally occurring U and Pb isotopes were also mixed in proportions corresponding to samples of three different ages, to be run as internal

  1. The Precision and Accuracy of Early Epoch of Reionization Foreground Models: Comparing MWA and PAPER 32-antenna Source Catalogs

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Bowman, Judd; Aguirre, James E.

    2013-05-01

    As observations of the Epoch of Reionization (EoR) in redshifted 21 cm emission begin, we assess the accuracy of the early catalog results from the Precision Array for Probing the Epoch of Reionization (PAPER) and the Murchison Widefield Array (MWA). The MWA EoR approach derives much of its sensitivity from subtracting foregrounds to <1% precision, while the PAPER approach relies on the stability and symmetry of the primary beam. Both require an accurate flux calibration to set the amplitude of the measured power spectrum. The two instruments are very similar in resolution, sensitivity, sky coverage, and spectral range, and have produced catalogs from nearly contemporaneous data. We use a Bayesian Markov Chain Monte Carlo fitting method to estimate that the two instruments are on the same flux scale to within 20%, and we find that the images are mostly in good agreement. We then investigate the source of the errors by comparing two overlapping MWA facets, where we find that the differences are primarily related to an inaccurate model of the primary beam, but also to correlated errors in bright sources due to CLEAN. We conclude with suggestions for mitigating and better characterizing these effects.

  2. Precision and accuracy of manual water-level measurements taken in the Yucca Mountain area, Nye County, Nevada, 1988-90

    USGS Publications Warehouse

    Boucher, M.S.

    1994-01-01

    Water-level measurements have been made in deep boreholes in the Yucca Mountain area, Nye County, Nevada, since 1983 in support of the U.S. Department of Energy's Yucca Mountain Project, an evaluation of the area's suitability as a potential storage site for high-level nuclear waste. Water-level measurements were taken either manually, using water-level measuring equipment such as steel tapes, or continuously, using automated data recorders and pressure transducers. This report presents precision range and accuracy data established for manual water-level measurements taken in the Yucca Mountain area, 1988-90. Precision and accuracy ranges were determined for all phases of the water-level measuring process, and overall accuracy ranges are presented. Precision ranges were determined for three steel tapes using a total of 462 data points; the mean precision ranges of these three tapes ranged from 0.014 foot to 0.026 foot. A mean precision range of 0.093 foot was calculated for the multiconductor cable, using 72 data points. Mean accuracy values were calculated on the basis of calibrations of the steel tapes and the multiconductor cable against a reference steel tape. The mean accuracy values of the steel tapes ranged from 0.053 foot (based on three data points) to 0.078 foot (based on six data points). The mean accuracy of the multiconductor cable was 0.15 foot, based on six data points. Overall accuracy of the water-level measurements was calculated by taking the square root of the sum of the squares of the individual accuracy values. Overall accuracy was calculated to be 0.36 foot for water-level measurements taken with steel tapes, without accounting for the inaccuracy of borehole deviations from vertical. An overall accuracy of 0.36 foot for measurements made with steel tapes is considered satisfactory for this project.
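The overall accuracy described above (square root of the sum of the squares of the individual accuracy values) is a root-sum-square combination of independent error components; a minimal sketch with illustrative component values:

```python
import math

def overall_accuracy(components):
    """Root-sum-square combination of independent accuracy components."""
    return math.sqrt(sum(c * c for c in components))

# hypothetical per-phase accuracy values for one measurement chain, in feet
total = overall_accuracy([0.078, 0.15, 0.30])
```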

  3. Bracketing method with certified reference materials for high precision and accuracy determination of trace cadmium in drinking water by Inductively Coupled Plasma - Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Ketrin, Rosi; Handayani, Eka Mardika; Komalasari, Isna

    2017-01-01

    Two significant parameters for evaluating measurement results are precision and accuracy. They are associated with indeterminate and determinate error, respectively, both of which commonly occur in spectrometric measurement methods such as Inductively Coupled Plasma - Mass Spectrometry (ICP-MS). These errors must be eliminated or suppressed to obtain high precision and accuracy; decreasing the errors thus increases the precision and accuracy of the method. In this study, a bracketing method using two-point standard calibration was proposed to suppress the indeterminate error caused by instrumental drift, thereby increasing the precision of the results, and was applied to the measurement of cadmium in drinking water samples. The certified reference material ERM CA011b (hard drinking water, UK, metals) was used to determine the determinate error, or measurement bias. When bias is found, corrections are needed to obtain an accurate measurement result. The result was compared with that obtained by an external calibration method.
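A two-point bracketing calibration interpolates the sample response between two closely spaced standards that bracket it, cancelling much of the instrumental drift; a minimal sketch (the signal values are hypothetical, not from the study):

```python
def bracketing_concentration(sample_signal, low_std, high_std):
    """Two-point bracketing calibration: linear interpolation between a
    low and a high standard, each given as (concentration, signal)."""
    (c_lo, s_lo), (c_hi, s_hi) = low_std, high_std
    return c_lo + (sample_signal - s_lo) * (c_hi - c_lo) / (s_hi - s_lo)

# hypothetical ICP-MS intensities bracketing a drinking-water sample
cd_ug_per_l = bracketing_concentration(15300.0, (2.0, 10100.0), (4.0, 20500.0))
```

Because the two standards are measured immediately before and after the sample, slow drift affects standards and sample nearly equally and largely cancels in the interpolation.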

  4. An evaluation of the accuracy and precision of a stand-alone submersible continuous ruminal pH measurement system.

    PubMed

    Penner, G B; Beauchemin, K A; Mutsvangwa, T

    2006-06-01

    The objectives of this study were 1) to develop and evaluate the accuracy and precision of a new stand-alone submersible continuous ruminal pH measurement system called the Lethbridge Research Centre ruminal pH measurement system (LRCpH; Experiment 1); 2) to establish the accuracy and precision of a well-documented, previously used continuous indwelling ruminal pH system (CIpH) to ensure that the new system (LRCpH) was as accurate and precise as the previous system (CIpH; Experiment 2); and 3) to determine the required frequency for pH electrode standardization by comparing baseline millivolt readings of pH electrodes in pH buffers 4 and 7 after 0, 24, 48, and 72 h of ruminal incubation (Experiment 3). In Experiment 1, 6 pregnant Holstein heifers, 3 lactating, primiparous Holstein cows, and 2 Black Angus heifers were used. All experimental animals were fitted with permanent ruminal cannulas. In Experiment 2, the 3 cannulated, lactating, primiparous Holstein cows were used. In both experiments, ruminal pH was determined continuously using indwelling pH electrodes. Mean pH values were then compared with ruminal pH values obtained from spot samples of ruminal fluid (MANpH) taken at the same time. A correlation coefficient accounting for repeated measures was calculated, and the results were used to calculate the concordance correlation to examine the relationships between the LRCpH-derived values and MANpH, and between the CIpH-derived values and MANpH. In Experiment 3, the 6 pregnant Holstein heifers were used along with 6 new submersible pH electrodes. In Experiments 1 and 2, the comparison of the LRCpH output (1- and 5-min averages) to MANpH yielded higher correlation coefficients after accounting for repeated measures (0.98 and 0.97 for 1- and 5-min averages, respectively) and concordance correlation coefficients (0.96 and 0.97 for 1- and 5-min averages, respectively) than the comparison of CIpH to MANpH (0.88 and 0.87, correlation coefficient and concordance
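The concordance correlation used above penalizes both scatter and systematic offset between the continuous and spot-sample pH readings; a minimal sketch of Lin's coefficient (the pH values are illustrative, not the study's data):

```python
from statistics import mean

def concordance(x, y):
    """Lin's concordance correlation coefficient for paired measurements,
    using population (1/n) variances and covariance."""
    n = len(x)
    mx, my = mean(x), mean(y)
    sx2 = sum((a - mx) ** 2 for a in x) / n
    sy2 = sum((b - my) ** 2 for b in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# hypothetical continuous vs. spot-sample ruminal pH readings
ccc = concordance([6.21, 5.84, 6.43, 5.95], [6.25, 5.80, 6.40, 6.00])
```

Unlike the ordinary correlation coefficient, a constant offset between the two methods lowers this value even when the readings track each other perfectly.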

  5. Single-frequency receivers as master permanent stations in GNSS networks: precision and accuracy of the positioning in mixed networks

    NASA Astrophysics Data System (ADS)

    Dabove, Paolo; Manzino, Ambrogio Maria

    2015-04-01

    The use of GPS/GNSS instruments is common practice worldwide at both a commercial and an academic research level. Over the last ten years, Continuous Operating Reference Station (CORS) networks have been established to extend precise positioning to more than 15 km from the master station. In this context, the Geomatics Research Group of DIATI at the Politecnico di Torino has carried out several experiments to evaluate the precision achievable with different GNSS receivers (geodetic and mass-market) and antennas when a CORS network is used. This work builds on that research, focusing in particular on the usefulness of single-frequency permanent stations for densifying the existing CORS networks, especially for monitoring purposes. Two different types of CORS network are available today in Italy: the so-called "regional networks" and the "national network", with mean inter-station distances of about 25-30 km and 50-70 km, respectively. These distances are adequate for many applications (e.g. mobile mapping) if geodetic instruments are used, but become less so if mass-market instruments are used or if the inter-station distance between master and rover increases. In this context, some innovative GNSS networks were developed and tested, analyzing the performance of rover positioning in terms of quality, accuracy and reliability in both real-time and post-processing approaches. The use of single-frequency GNSS receivers brings some limitations, notably the limited baseline length and the need to fix the phase ambiguity correctly both for the network and for the rover. These factors play a crucial role in reaching a position of good accuracy (centimetric or better) in a short time and with high reliability. The goal of this work is to investigate the

  6. A comparative study of submicron particle sizing platforms: accuracy, precision and resolution analysis of polydisperse particle size distributions.

    PubMed

    Anderson, Will; Kozak, Darby; Coleman, Victoria A; Jämting, Åsa K; Trau, Matt

    2013-09-01

    The particle size distribution (PSD) of a polydisperse or multimodal system can often be difficult to obtain because of the inherent limitations of established measurement techniques. For this reason, the resolution, accuracy and precision of three new and one established, commercially available and fundamentally different particle size analysis platforms were compared by measuring both individual and mixed samples of monodisperse, sub-micron (220, 330, and 410 nm nominal modal size) polystyrene particles. The platforms compared were the qNano Tunable Resistive Pulse Sensor, the Nanosight LM10 Particle Tracking Analysis system, the CPS Instruments UHR24000 Disc Centrifuge, and the routinely used Malvern Zetasizer Nano ZS Dynamic Light Scattering system. All measurements were subjected to a peak detection algorithm so that the detected particle populations could be compared to 'reference' Transmission Electron Microscope measurements of the individual particle samples. Only the Tunable Resistive Pulse Sensor and Disc Centrifuge platforms provided the resolution required to resolve all three particle populations present in the mixed 'multimodal' particle sample. In contrast, the light-scattering-based Particle Tracking Analysis and Dynamic Light Scattering platforms were only able to detect a single population of particles, corresponding to either the largest (410 nm) or the smallest (220 nm) particles in the multimodal sample, respectively. When the particle sets were measured separately (monomodal), each platform was able to resolve and accurately obtain a mean particle size within 10% of the Transmission Electron Microscope reference values. However, the broadness of the PSD measured in the monomodal samples deviated greatly, with coefficients of variation being ~2-6-fold larger than the TEM measurements across all four platforms. The large variation in the PSDs obtained from these four fundamentally different platforms indicates that great care must still be taken in

  7. Standardization of Operator-Dependent Variables Affecting Precision and Accuracy of the Disk Diffusion Method for Antibiotic Susceptibility Testing

    PubMed Central

    Maurer, Florian P.; Pfiffner, Tamara; Böttger, Erik C.; Furrer, Reinhard

    2015-01-01

    Parameters like zone reading, inoculum density, and plate streaking influence the precision and accuracy of disk diffusion antibiotic susceptibility testing (AST). While improved reading precision has been demonstrated using automated imaging systems, standardization of the inoculum and of plate streaking have not yet been systematically investigated. This study analyzed whether photometrically controlled inoculum preparation and/or automated inoculation could further improve the standardization of disk diffusion. Suspensions of Escherichia coli ATCC 25922 and Staphylococcus aureus ATCC 29213 of 0.5 McFarland standard were prepared by 10 operators using both visual comparison to turbidity standards and a Densichek photometer (bioMérieux), and the resulting CFU counts were determined. Furthermore, eight experienced operators each inoculated 10 Mueller-Hinton agar plates using a single 0.5 McFarland standard bacterial suspension of E. coli ATCC 25922 using regular cotton swabs, dry flocked swabs (Copan, Brescia, Italy), or an automated streaking device (BD-Kiestra, Drachten, Netherlands). The mean CFU counts obtained from 0.5 McFarland standard E. coli ATCC 25922 suspensions were significantly different for suspensions prepared by eye and by Densichek (P < 0.001). Preparation by eye resulted in counts that were closer to the CLSI/EUCAST target of 10(8) CFU/ml than those resulting from Densichek preparation. No significant differences in the standard deviations of the CFU counts were observed. The interoperator differences in standard deviations when dry flocked swabs were used decreased significantly compared to the differences when regular cotton swabs were used, whereas the mean of the standard deviations of all operators together was not significantly altered. 
In contrast, automated streaking significantly reduced both interoperator differences, i.e., the individual standard deviations, compared to the standard deviations for the manual method, and the mean of the

  8. Precision and accuracy in the quantitative analysis of biological samples by accelerator mass spectrometry: application in microdose absolute bioavailability studies.

    PubMed

    Gao, Lan; Li, Jing; Kasserra, Claudia; Song, Qi; Arjomand, Ali; Hesk, David; Chowdhury, Swapan K

    2011-07-15

    Determination of the pharmacokinetics and absolute bioavailability of an experimental compound, SCH 900518, following an 89.7 nCi (100 μg) intravenous (iv) dose of (14)C-SCH 900518 given 2 h after a 200 mg oral dose of nonradiolabeled SCH 900518 to six healthy male subjects is described. The plasma concentration of SCH 900518 was measured using a validated LC-MS/MS system, and accelerator mass spectrometry (AMS) was used for quantitative determination of plasma (14)C-SCH 900518 concentrations. Calibration standards and quality controls were included in every batch of sample analysis by AMS to ensure acceptable assay quality. Plasma (14)C-SCH 900518 concentrations were derived from the regression function established from the calibration standards, rather than directly from the isotopic ratios of the AMS measurement. The precision and accuracy of quality controls and calibration standards met the requirements of bioanalytical guidance (U.S. Department of Health and Human Services, Food and Drug Administration, Center for Drug Evaluation and Research, Center for Veterinary Medicine. Guidance for Industry: Bioanalytical Method Validation (ucm070107), May 2001. http://www.fda.gov/downloads/Drugs/GuidanceCompilanceRegulatoryInformation/Guidances/ucm070107.pdf ). The AMS measurement had a linear response range of 0.0159 to 9.07 dpm/mL for plasma (14)C-SCH 900518 concentrations. The CV and accuracy were 3.4-8.5% and 94-108% (82-119% for the lower limit of quantitation (LLOQ)), respectively, with a correlation coefficient of 0.9998. The absolute bioavailability was calculated from the dose-normalized area under the curve of the iv and oral doses after the plasma concentrations were plotted against the sampling time post oral dose. The mean absolute bioavailability of SCH 900518 was 40.8% (range 16.8-60.6%). 
The typical accuracy and standard deviation in AMS quantitative analysis of drugs from human plasma samples have been reported for the first time, and the impact of these

  9. A first investigation of accuracy, precision and sensitivity of phase-based x-ray dark-field imaging

    NASA Astrophysics Data System (ADS)

    Astolfo, Alberto; Endrizzi, Marco; Kallon, Gibril; Millard, Thomas P.; Vittoria, Fabio A.; Olivo, Alessandro

    2016-12-01

    In the last two decades, x-ray phase contrast imaging (XPCI) has attracted attention as a potentially significant improvement over widespread and established x-ray imaging. The key is its capability to access a new physical quantity (the ‘phase shift’), which can be complementary to x-ray absorption. One additional advantage of XPCI is its sensitivity to micro structural details through the refraction induced dark-field (DF). While DF is extensively mentioned and used for several applications, predicting the capability of an XPCI system to retrieve DF quantitatively is not straightforward. In this article, we evaluate the impact of different design options and algorithms on DF retrieval for the edge-illumination (EI) XPCI technique. Monte Carlo simulations, supported by experimental data, are used to measure the accuracy, precision and sensitivity of DF retrieval performed with several EI systems based on conventional x-ray sources. The introduced tools are easy to implement, and general enough to assess the DF performance of systems based on alternative (i.e. non-EI) XPCI approaches.

  10. Evaluation of the accuracy and precision of four intraoral scanners with 70% reduced inlay and four-unit bridge models of international standard.

    PubMed

    Uhm, Soo-Hyuk; Kim, Jae-Hong; Jiang, Heng Bo; Woo, Chang-Woo; Chang, Minho; Kim, Kyoung-Nam; Bae, Ji-Myung; Oh, Seunghan

    2017-01-31

    The aim of this study was to evaluate the feasibility of using 70%-reduced inlay and 4-unit bridge models from the International Standard (ISO 12836), which assesses the accuracy of laboratory scanners, to measure the accuracy of intraoral scanners. Four intraoral scanners (CS3500, Trios, Omnicam, and Bluecam) and one laboratory scanner (Ceramill MAP400) were used in this study. The height, depth, length, and angle of the models were measured from thirty scanned stereolithography (STL) images. There were no statistically significant mean deviations in the distance accuracy and precision values of the scanned images, except for the angulation values of the inlay and 4-unit bridge models. The relative errors quantifying the accuracy and precision of the obtained mean deviations were less than 0.023 for the inlay model and 0.021 for the 4-unit bridge model. Thus, the inlay and 4-unit bridge models suggested by this study are expected to be feasible tools for testing intraoral scanners.

  11. An Examination of the Precision and Technical Accuracy of the First Wave of Group-Randomized Trials Funded by the Institute of Education Sciences

    ERIC Educational Resources Information Center

    Spybrook, Jessaca; Raudenbush, Stephen W.

    2009-01-01

    This article examines the power analyses for the first wave of group-randomized trials funded by the Institute of Education Sciences. Specifically, it assesses the precision and technical accuracy of the studies. The authors identified the appropriate experimental design and estimated the minimum detectable standardized effect size (MDES) for each…

  12. Deformable Image Registration for Adaptive Radiation Therapy of Head and Neck Cancer: Accuracy and Precision in the Presence of Tumor Changes

    SciTech Connect

    Mencarelli, Angelo; Kranen, Simon Robert van; Hamming-Vrieze, Olga; Beek, Suzanne van; Nico Rasch, Coenraad Robert; Herk, Marcel van; Sonke, Jan-Jakob

    2014-11-01

    Purpose: To compare deformable image registration (DIR) accuracy and precision for normal and tumor tissues in head and neck cancer patients during the course of radiation therapy (RT). Methods and Materials: Thirteen patients with oropharyngeal tumors, who underwent submucosal implantation of small gold markers (average 6, range 4-10) around the tumor and were treated with RT, were retrospectively selected. Two observers identified 15 anatomical features (landmarks) representative of normal tissues in the planning computed tomography (pCT) scan and in weekly cone beam CTs (CBCTs). Gold markers were digitally removed after semiautomatic identification in pCTs and CBCTs. Subsequently, landmarks and gold markers on pCT were propagated to CBCTs using a b-spline-based DIR and, for comparison, rigid registration (RR). To account for observer variability, the pair-wise difference analysis of variance method was applied. DIR accuracy (systematic error) and precision (random error) for landmarks and gold markers were quantified. Time trends of the precision of RR and DIR over the weekly CBCTs were evaluated. Results: DIR accuracies were submillimeter and similar for normal and tumor tissue. DIR precision (1 SD), on the other hand, was significantly different (P<.01), with a 2.2 mm vector length in normal tissue versus 3.3 mm in tumor tissue. No significant time trend in DIR precision was found for normal tissue, whereas in tumor, DIR precision degraded significantly (P<.009) during the course of treatment, by 0.21 mm/week. Conclusions: DIR for tumor registration proved to be less precise than that for normal tissues due to limited contrast and complex non-elastic tumor response. Caution should therefore be exercised when applying DIR to tumor changes in adaptive procedures.
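    A rough illustration of how the systematic (accuracy) and random (precision) components of registration error can be separated from landmark residuals. The displacement vectors below are invented, and this is not the authors' analysis code:

    ```python
    # Sketch: systematic error = mean residual vector length,
    # random error = SD of residual vector lengths, for propagated landmarks.
    import math

    def systematic_random_error(residuals):
        """residuals: 3-D displacement vectors (mm) between propagated and
        observer-identified landmarks. Returns (systematic, random) error."""
        lengths = [math.dist(r, (0.0, 0.0, 0.0)) for r in residuals]
        n = len(lengths)
        mean = sum(lengths) / n
        sd = math.sqrt(sum((l - mean) ** 2 for l in lengths) / (n - 1))
        return mean, sd

    # invented residuals for three landmarks
    residuals = [(1.0, 0.5, 0.0), (0.0, 1.0, 1.0), (2.0, 0.0, 1.0)]
    sys_err, rand_err = systematic_random_error(residuals)
    ```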

  13. Accuracy, Precision, and Reproducibility of Four T1 Mapping Sequences: A Head-to-Head Comparison of MOLLI, ShMOLLI, SASHA, and SAPPHIRE

    PubMed Central

    Roujol, Sébastien; Weingärtner, Sebastian; Foppa, Murilo; Chow, Kelvin; Kawaji, Keigo; Ngo, Long H.; Kellman, Peter; Manning, Warren J.; Thompson, Richard B.

    2014-01-01

    Purpose To compare accuracy, precision, and reproducibility of four commonly used myocardial T1 mapping sequences: modified Look-Locker inversion recovery (MOLLI), shortened MOLLI (ShMOLLI), saturation recovery single-shot acquisition (SASHA), and saturation pulse prepared heart rate independent inversion recovery (SAPPHIRE). Materials and Methods This HIPAA-compliant study was approved by the institutional review board. All subjects provided written informed consent. Accuracy, precision, and reproducibility of the four T1 mapping sequences were first compared in phantom experiments. In vivo analysis was performed in seven healthy subjects (mean age ± standard deviation, 38 years ± 19; four men, three women) who were imaged twice on two separate days. In vivo reproducibility of native T1 mapping and extracellular volume (ECV) were measured. Differences between the sequences were assessed by using Kruskal-Wallis and Wilcoxon rank sum tests (phantom data) and mixed-effect models (in vivo data). Results T1 mapping accuracy in phantoms was lower with ShMOLLI (62 msec) and MOLLI (44 msec) than with SASHA (13 msec; P < .05) and SAPPHIRE (12 msec; P < .05). MOLLI had similar precision to ShMOLLI (4.0 msec vs 5.6 msec; P = .07) but higher precision than SAPPHIRE (6.8 msec; P = .002) and SASHA (8.7 msec; P < .001). All sequences had similar reproducibility in phantoms (P = .1). The four sequences had similar in vivo reproducibility for native T1 mapping (∼25–50 msec; P > .05) and ECV quantification (∼0.01–0.02; P > .05). Conclusion SASHA and SAPPHIRE yield higher accuracy, lower precision, and similar reproducibility compared with MOLLI and ShMOLLI for T1 measurement. Different sequences yield different ECV values; however, all sequences have similar reproducibility for ECV quantification. © RSNA, 2014 Online supplemental material is available for this article. PMID:24702727
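    MOLLI-type sequences fit an apparent relaxation time and then apply the standard Look-Locker correction; a minimal sketch of that final step (the fitted parameter values below are invented round numbers, not data from the study):

    ```python
    # Three-parameter model S(t) = A - B*exp(-t/T1star) yields an apparent
    # T1star; the standard Look-Locker correction recovers T1.
    def look_locker_correction(A, B, t1_star):
        """Correct apparent T1* from a Look-Locker readout: T1 = T1*(B/A - 1)."""
        return t1_star * (B / A - 1.0)

    # invented fit parameters (ms)
    t1 = look_locker_correction(A=1.0, B=2.0, t1_star=600.0)
    ```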

  14. Application of U-Pb ID-TIMS dating to the end-Triassic global crisis: testing the limits on precision and accuracy in a multidisciplinary whodunnit (Invited)

    NASA Astrophysics Data System (ADS)

    Schoene, B.; Schaltegger, U.; Guex, J.; Bartolini, A.

    2010-12-01

    The ca. 201.4 Ma Triassic-Jurassic boundary is characterized by one of the most devastating mass extinctions in Earth history, subsequent biologic radiation, rapid carbon cycle disturbances and enormous flood basalt volcanism (Central Atlantic Magmatic Province - CAMP). Considerable uncertainty remains regarding the temporal and causal relationship between these events, even though this link is important for understanding global environmental change under extreme stress. We present ID-TIMS U-Pb zircon geochronology on volcanic ash beds from two marine sections that span the Triassic-Jurassic boundary and from the CAMP in North America. To compare the timing of the extinction with the onset of the CAMP, we assess the precision and accuracy of ID-TIMS U-Pb zircon geochronology by exploring random and systematic uncertainties, reproducibility, open-system behavior, and pre-eruptive crystallization of zircon. We find that U-Pb ID-TIMS dates on single zircons can be internally and externally reproducible at 0.05% of the age, consistent with recent experiments coordinated through the EARTHTIME network. Increased precision, combined with methods alleviating Pb loss in zircon, reveals that these ash beds contain zircon that crystallized between 10^5 and 10^6 years prior to eruption. Mineral dates older than the eruption age can bias all geochronologic methods, and therefore new tools exploring this form of “geologic uncertainty” will lead to better time constraints for ash bed deposition. In an effort to understand zircon dates within the framework of a magmatic system, we analyzed zircon trace elements by solution ICPMS for the same volume of zircon dated by ID-TIMS. In one example we argue that the zircon trace element patterns as a function of time result from a mix of xeno-, ante-, and autocrystic zircons in the ash bed, and approximate the eruption age with the youngest zircon date. In a contrasting example from a suite of Cretaceous andesites, zircon trace elements

  15. Towards the GEOSAT Follow-On Precise Orbit Determination Goals of High Accuracy and Near-Real-Time Processing

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Chinn, Douglas S.; Beckley, Brian D.; Lillibridge, John L.

    2006-01-01

    The US Navy's GEOSAT Follow-On (GFO) spacecraft's primary mission objective is to map the oceans using a radar altimeter. Satellite laser ranging data, especially in combination with altimeter crossover data, offer the only means of determining high-quality precise orbits. Two tuned gravity models, PGS7727 and PGS7777b, were created at NASA GSFC for GFO that reduce the predicted radial orbit error through degree 70 to 13.7 and 10.0 mm, respectively. A macromodel was developed to model the nonconservative forces, and the SLR spacecraft measurement offset was adjusted to remove a mean bias. Using these improved models, satellite-ranging data, altimeter crossover data, and Doppler data are used to compute daily medium-precision orbits with a latency of less than 24 hours. Final precise orbits are also computed using these tracking data and exported with a latency of three to four weeks to NOAA for use on the GFO Geophysical Data Records (GDRs). The estimated orbit precision of the daily orbits is between 10 and 20 cm, whereas the precise orbits have a precision of 5 cm.

  16. Accuracy And Precision Of Algorithms To Determine The Extent Of Aquatic Plants: Empirical Scaling Of Spectral Indices Vs. Spectral Unmixing

    NASA Astrophysics Data System (ADS)

    Cheruiyot, E.; Menenti, M.; Gorte, B.; Mito, C.; Koenders, R.

    2013-12-01

    Assessing the accuracy of image classification results is an important but often neglected step. Accuracy information is necessary for assessing the reliability of map products; neglecting this step renders the products unusable. With a classified Landsat-7 TM image as reference, we assessed the accuracy of NDVI and linear spectral unmixing (LSU) in vegetation detection from 20 randomly selected MERIS sample pixels in the Winam Gulf section of Lake Victoria. We noted that, though easy to compute, empirical scaling of NDVI is not suitable for quantitative estimation of vegetation cover, as it is misleading and often omits useful information. LSU performed at 87% accuracy based on RMSE. For quick solutions, we propose the use of a conversion factor from NDVI to vegetation fractional abundance (FA). With this conversion, which is 96% reliable, the resulting FA values from our samples were classified at 84% accuracy, only 3% less than those computed directly using LSU.
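    A sketch of the two estimators being compared, with a two-endmember least-squares unmixing in closed form. The endmember spectra are placeholders and the NDVI-to-FA conversion factor is not given in the abstract, so the value here is purely illustrative:

    ```python
    # NDVI: standard normalized difference of near-infrared and red reflectance.
    def ndvi(nir, red):
        return (nir - red) / (nir + red)

    K_NDVI_TO_FA = 1.0  # placeholder; the study derives its own factor

    def fractional_abundance(nir, red, k=K_NDVI_TO_FA):
        return k * ndvi(nir, red)

    def unmix_two_endmembers(pixel, veg, other):
        """Least-squares vegetation fraction f for pixel ≈ f*veg + (1-f)*other,
        solved in closed form across spectral bands."""
        num = sum((p - o) * (v - o) for p, v, o in zip(pixel, veg, other))
        den = sum((v - o) ** 2 for v, o in zip(veg, other))
        return num / den

    # invented two-band endmembers; mix them at a known fraction and recover it
    veg, other = [0.10, 0.50], [0.30, 0.10]
    pixel = [0.25 * v + 0.75 * o for v, o in zip(veg, other)]
    f = unmix_two_endmembers(pixel, veg, other)
    ```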

  17. Validation Test Report for NFLUX PRE: Validation of Specific Humidity, Surface Air Temperature, and Wind Speed Precision and Accuracy for Assimilation into Global and Regional Models

    DTIC Science & Technology

    2014-04-02

    The regional algorithm products overlay the existing global product estimate. The location of the observations is tested to see if it falls within one

  18. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  19. Accuracy and precision of a custom camera-based system for 2-d and 3-d motion tracking during speech and nonspeech motor tasks.

    PubMed

    Feng, Yongqiang; Max, Ludo

    2014-04-01

    PURPOSE Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and submillimeter accuracy. METHOD The authors examined the accuracy and precision of 2-D and 3-D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially available computer software (APAS, Ariel Dynamics), and a custom calibration device. RESULTS Overall root-mean-square error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3- vs. 6-mm diameter) was negligible at all frame rates for both 2-D and 3-D data. CONCLUSION Motion tracking with consumer-grade digital cameras and the APAS software can achieve submillimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes.
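    The reported accuracy (RMSE) and precision (SD) metrics can be reproduced from tracking errors as follows; the error values here are invented, not data from the study:

    ```python
    # Sketch: RMSE of tracked-minus-true marker positions (accuracy) and
    # SD of the errors (precision), as in the abstract's error metrics.
    import math
    import statistics

    def rmse(errors):
        return math.sqrt(sum(e * e for e in errors) / len(errors))

    errors_mm = [0.1, -0.1, 0.2, -0.2]   # invented per-frame tracking errors (mm)
    rmse_mm = rmse(errors_mm)
    sd_mm = statistics.stdev(errors_mm)
    ```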

  20. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms.

    PubMed

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
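    One way to emulate the limited-weight-precision constraint studied here is uniform quantization of trained weights to a given bit depth. A sketch under that assumption (not the paper's code; it ignores edge cases such as a constant weight vector):

    ```python
    # Uniformly quantize a weight vector to 2**bits levels over its range,
    # the kind of precision constraint imposed on spiking DBN weights.
    def quantize(weights, bits):
        lo, hi = min(weights), max(weights)
        levels = 2 ** bits - 1                 # number of steps between lo and hi
        step = (hi - lo) / levels
        return [lo + round((w - lo) / step) * step for w in weights]

    q1 = quantize([0.0, 0.3, 0.7, 1.0], bits=1)   # extreme 1-bit case
    ```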

  1. 40 CFR 80.584 - What are the precision and accuracy criteria for approval of test methods for determining the...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    § 80.584 What are the precision and accuracy criteria for approval of test methods for determining the sulfur content of motor vehicle diesel fuel, NRLM diesel fuel, and ECA marine fuel? (Protection of Environment; Subpart I — Motor Vehicle Diesel Fuel; Nonroad, Locomotive, and Marine Diesel Fuel; and ECA Marine Fuel; Sampling and Testing.) (a) Precision....

  2. Study of the Effect of Modes of Electroerosion Treatment on the Microstructure and Accuracy of Precision Sizes of Small Parts

    NASA Astrophysics Data System (ADS)

    Korobova, N. V.; Aksenenko, A. Yu.; Bashevskaya, O. S.; Nikitin, A. A.

    2016-01-01

    Results of a study of the effect of the parameters of electroerosion treatment in a GF Agie Charmilles CUT 1000 OilTech wire-cutting bench on the size accuracy, the quality of the surface layer of cuts, and the microstructure of the surface of the treated parts are presented.

  3. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

    PubMed Central

    Stromatias, Evangelos; Neil, Daniel; Pfeiffer, Michael; Galluppi, Francesco; Furber, Steve B.; Liu, Shih-Chii

    2015-01-01

    Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs) are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The on-going work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time. PMID:26217169

  4. Optimizing the accuracy and precision of the single-pulse Laue technique for synchrotron photo-crystallography

    PubMed Central

    Kamiński, Radosław; Graber, Timothy; Benedict, Jason B.; Henning, Robert; Chen, Yu-Sheng; Scheins, Stephan; Messerschmidt, Marc; Coppens, Philip

    2010-01-01

    The accuracy that can be achieved in single-pulse pump-probe Laue experiments is discussed. It is shown that, with careful tuning of the experimental conditions, a reproducibility of 3–4% can be achieved in the ratios of equivalent intensities obtained in different measurements. The single-pulse experiments maximize the time resolution that can be achieved and, unlike stroboscopic techniques in which the pump-probe cycle is rapidly repeated, minimize the temperature increase due to the laser exposure of the sample. PMID:20567080

  5. Accuracy and precision of polyurethane dental arch models fabricated using a three-dimensional subtractive rapid prototyping method with an intraoral scanning technique

    PubMed Central

    Kim, Jae-Hong; Kim, Ki-Baek; Kim, Woong-Chul; Kim, Ji-Hwan

    2014-01-01

    Objective This study aimed to evaluate the accuracy and precision of polyurethane (PUT) dental arch models fabricated using a three-dimensional (3D) subtractive rapid prototyping (RP) method with an intraoral scanning technique by comparing linear measurements obtained from PUT models and conventional plaster models. Methods Ten plaster models were duplicated using a selected standard master model and conventional impression, and 10 PUT models were duplicated using the 3D subtractive RP technique with an oral scanner. Six linear measurements were evaluated in terms of x, y, and z-axes using a non-contact white light scanner. Accuracy was assessed using mean differences between two measurements, and precision was examined using four quantitative methods and the Bland-Altman graphical method. Repeatability was evaluated in terms of intra-examiner variability, and reproducibility was assessed in terms of inter-examiner and inter-method variability. Results The mean difference between plaster models and PUT models ranged from 0.07 mm to 0.33 mm. Relative measurement errors ranged from 2.2% to 7.6% and intraclass correlation coefficients ranged from 0.93 to 0.96, when comparing plaster models and PUT models. The Bland-Altman plot showed good agreement. Conclusions The accuracy and precision of PUT dental models for evaluating the performance of oral scanner and subtractive RP technology was acceptable. Because of the recent improvements in block material and computerized numeric control milling machines, the subtractive RP method may be a good choice for dental arch models. PMID:24696823

  6. Analysis of the accuracy and precision of the Axis-Shield Afinion hemoglobin A1c measurement device.

    PubMed

    Little, Randie R

    2012-03-01

    Point-of-care (POC) hemoglobin A1c measurement is now used by many physicians to make more timely decisions on therapy changes. A few studies have highlighted the drawbacks of some POC methods, e.g., poor precision and lot-to-lot variability. Evaluating performance in the clinical setting is difficult because there is minimal proficiency testing data on POC methods. In this issue of Journal of Diabetes Science and Technology, Wood and colleagues describe their experience with the Afinion method in a pediatric clinic network, comparing these results to another POC method as well as to a laboratory high-performance liquid chromatography method. Although they conclude that the Afinion exhibits adequate performance, they do not evaluate lot-to-lot variability. As with laboratory methods, potential assay interferences must also be considered.

  7. A Balanced Accuracy Fitness Function Leads to Robust Analysis using Grammatical Evolution Neural Networks in the Case of Class Imbalance.

    PubMed

    Hardison, Nicholas E; Fanelli, Theresa J; Dudek, Scott M; Reif, David M; Ritchie, Marylyn D; Motsinger-Reif, Alison A

    2008-01-01

    Grammatical Evolution Neural Networks (GENN) is a computational method designed to detect gene-gene interactions in genetic epidemiology, but has so far only been evaluated in situations with balanced numbers of cases and controls. Real data, however, rarely has such perfectly balanced classes. In the current study, we test the power of GENN to detect interactions in data with a range of class imbalance using two fitness functions (classification error and balanced error), as well as data re-sampling. We show that when using classification error, class imbalance greatly decreases the power of GENN. Re-sampling methods demonstrated improved power, but using balanced accuracy resulted in the highest power. Based on the results of this study, balanced error has replaced classification error in the GENN algorithm.
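    The two fitness functions compared can be sketched as follows. With a 90:10 class imbalance, a classifier that always predicts the majority class earns a flattering classification error but a balanced error of 0.5, which is why balanced error replaced classification error:

    ```python
    # Sketch of the two fitness functions, from a confusion matrix
    # (tp/tn/fp/fn counts; the example counts below are invented).
    def classification_error(tp, tn, fp, fn):
        return (fp + fn) / (tp + tn + fp + fn)

    def balanced_error(tp, tn, fp, fn):
        sens = tp / (tp + fn)   # accuracy on cases
        spec = tn / (tn + fp)   # accuracy on controls
        return 1.0 - (sens + spec) / 2.0

    # majority-class-only classifier on 90 controls, 10 cases
    ce = classification_error(tp=0, tn=90, fp=0, fn=10)   # looks good
    be = balanced_error(tp=0, tn=90, fp=0, fn=10)          # exposes the failure
    ```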

  8. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals via networked street cameras, and maintaining immigration control. Many technical subjects are still under study. For instance, the number of images that can be stored is limited under current systems, and the recognition rate must be improved to account for photographs taken at different angles and under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e. matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including pre- and post-processing. From trial 1:N identification experiments using FARCO, we obtained low error rates: a 2.6% False Reject Rate and a 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much greater robustness to varying recognition conditions can be achieved when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applied to natural postures, this algorithm achieved a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of applications, such as searching for criminal suspects with street and airport video cameras, registration of babies at hospitals, and handling of very large image databases.

  9. Quantitative Thin-Film X-ray Microanalysis by STEM/HAADF: Statistical Analysis for Precision and Accuracy Determination

    NASA Astrophysics Data System (ADS)

    Armigliato, Aldo; Balboni, Roberto; Rosa, Rodolfo

    2006-07-01

    Silicon-germanium thin films have been analyzed by EDS microanalysis in a field emission gun scanning transmission electron microscope (FEG-STEM) equipped with a high-angle annular dark-field detector (STEM/HAADF). Several spectra have been acquired in the same homogeneous area of the cross-sectioned sample by drift-corrected linescan acquisitions. The Ge concentrations and the local film thickness have been obtained using a previously described Monte Carlo based “two tilt angles” method. Although the concentrations are in excellent agreement with the known values, the resulting confidence intervals are not as good as expected from the precision in beam positioning and tilt-angle setting and readout offered by our state-of-the-art microscope. The Gaussian shape of the Si Kα and Ge Kα X-ray intensities allows one to use the parametric bootstrap method of statistics, whereby it becomes possible to perform the same quantitative analysis in sample regions of different compositions and thicknesses, but by doing only one measurement at the two angles.

  10. Toward High-precision Seismic Studies of White Dwarf Stars: Parametrization of the Core and Tests of Accuracy

    NASA Astrophysics Data System (ADS)

    Giammichele, N.; Charpinet, S.; Fontaine, G.; Brassard, P.

    2017-01-01

    We present a prescription for parametrizing the chemical profile in the core of white dwarfs, in light of the recent discovery that pulsation modes may sometimes be deeply confined in some cool pulsating white dwarfs. Such modes may be used as unique probes of the complicated chemical stratification that results from several processes that occurred in previous evolutionary phases of intermediate-mass stars. This effort is part of our ongoing quest for more credible and realistic seismic models of white dwarfs using static, parametrized equilibrium structures. Inspired by successful techniques developed in design-optimization fields (such as aerodynamics), we exploit Akima splines for tracing the chemical profile of oxygen (carbon) in the core of a white dwarf model. A series of tests are then presented to better gauge the precision and significance of the results that can be obtained in an asteroseismological context. We also show that the new parametrization passes an essential basic test, as it successfully reproduces the chemical stratification of a full evolutionary model.
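    Assuming SciPy's Akima interpolator as a stand-in for the authors' implementation, tracing a core oxygen profile through a handful of control points might look like this (the control-point values are invented, not taken from the paper):

    ```python
    # Akima spline through control points of an oxygen mass-fraction profile.
    # Akima splines avoid the overshoot that cubic splines can produce near
    # steep composition transitions, which suits chemical-profile tracing.
    from scipy.interpolate import Akima1DInterpolator

    q   = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]          # fractional mass depth (invented)
    x_o = [0.75, 0.74, 0.70, 0.50, 0.20, 0.05]    # oxygen mass fraction (invented)

    profile = Akima1DInterpolator(q, x_o)          # callable X_O(q)
    ```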

  11. Leaf vein length per unit area is not intrinsically dependent on image magnification: avoiding measurement artifacts for accuracy and precision.

    PubMed

    Sack, Lawren; Caringella, Marissa; Scoffoni, Christine; Mason, Chase; Rawls, Michael; Markesteijn, Lars; Poorter, Lourens

    2014-10-01

    Leaf vein length per unit leaf area (VLA; also known as vein density) is an important determinant of water and sugar transport, photosynthetic function, and biomechanical support. A range of software methods are in use to visualize and measure vein systems in cleared leaf images; typically, users locate veins by digital tracing, but recent articles introduced software by which users can locate veins using thresholding (i.e. based on the contrasting of veins in the image). Based on the use of this method, a recent study argued against the existence of a fixed VLA value for a given leaf, proposing instead that VLA increases with the magnification of the image due to intrinsic properties of the vein system, and recommended that future measurements use a common, low image magnification for measurements. We tested these claims with new measurements using the software LEAFGUI in comparison with digital tracing using ImageJ software. We found that the apparent increase of VLA with magnification was an artifact of (1) using low-quality and low-magnification images and (2) errors in the algorithms of LEAFGUI. Given the use of images of sufficient magnification and quality, and analysis with error-free software, the VLA can be measured precisely and accurately. These findings point to important principles for improving the quantity and quality of important information gathered from leaf vein systems.

  12. Anthropometric precision and accuracy of digital three-dimensional photogrammetry: comparing the Genex and 3dMD imaging systems with one another and with direct anthropometry.

    PubMed

    Weinberg, Seth M; Naidoo, Sybill; Govier, Daniel P; Martin, Rick A; Kane, Alex A; Marazita, Mary L

    2006-05-01

    A variety of commercially available three-dimensional (3D) surface imaging systems are currently in use by craniofacial specialists. Little is known, however, about how measurement data generated from alternative 3D systems compare, specifically in terms of accuracy and precision. The purpose of this study was to compare anthropometric measurements obtained by way of two different digital 3D photogrammetry systems (Genex and 3dMD) as well as direct anthropometry and to evaluate intraobserver precision across these three methods. On a sample of 18 mannequin heads, 12 linear distances were measured twice by each method. A two-factor repeated measures analysis of variance was used to test simultaneously for mean differences in precision across methods. Additional descriptive statistics (e.g., technical error of measurement [TEM]) were used to quantify measurement error magnitude. Statistically significant (P < 0.05) mean differences were observed across methods for nine anthropometric variables; however, the magnitude of these differences was consistently at the submillimeter level. No significant differences were noted for precision. Moreover, the magnitude of imprecision was determined to be very small, with TEM scores well under 1 mm, and intraclass correlation coefficients ranging from 0.98 to 1. Results indicate that overall mean differences across these three methods were small enough to be of little practical importance. In terms of intraobserver precision, all methods fared equally well. This study is the first attempt to simultaneously compare 3D surface imaging systems directly with one another and with traditional anthropometry. Results suggest that craniofacial surface data obtained by way of alternative 3D photogrammetric systems can be combined or compared statistically.
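    The technical error of measurement (TEM) used above to quantify intraobserver imprecision is conventionally computed from paired repeat measurements as sqrt(Σd²/2n); a minimal sketch with invented values:

    ```python
    # TEM for two repeated trials: d_i are the per-subject differences
    # between the first and second measurement, n the number of pairs.
    import math

    def tem(first, second):
        diffs = [a - b for a, b in zip(first, second)]
        return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

    # invented repeat measurements (mm) on two subjects
    t = tem([10.0, 11.0], [10.2, 10.8])
    ```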

  13. Age modelling of late Quaternary marine sequences in the Adriatic: Towards improved precision and accuracy using volcanic event stratigraphy

    NASA Astrophysics Data System (ADS)

    Lowe, J. J.; Blockley, S.; Trincardi, F.; Asioli, A.; Cattaneo, A.; Matthews, I. P.; Pollard, M.; Wulf, S.

    2007-02-01

    The first part of this paper presents a review of the problems that constrain the reliability of radiocarbon-based age models with particular focus on those used to underpin marine records. The reasons why radiocarbon data-sets need to be much more comprehensive than has been the norm hitherto, and why age models should be based on calibrated data only, are outlined. The complexity of the probability structure of calibrated radiocarbon data and the advantages of a Bayesian statistical approach for constructing calibrated age models are illustrated. The second part of the paper tests the potential for reducing the uncertainties that constrain radiocarbon-based age models using tephrostratigraphy. Fine (distal) ash layers of Holocene age preserved in Adriatic prodelta sediments are analysed geochemically and compared to tephras preserved in the Lago Grande di Monticchio site in southern Italy. The Monticchio tephras have been dated both by radiocarbon and varve chronology. The importance of basing such comparisons on standardised geochemical and robust statistical procedures is stressed. In this instance, both the Adriatic and Monticchio geochemical measurements are based on wavelength dispersive spectrometry, while discriminant function analysis is employed for statistical comparisons. Using this approach, the ages of some of the Adriatic marine ash layers could be estimated in Monticchio varve years, circumventing some of the uncertainty of radiocarbon-based age models introduced by marine reservoir effects. Fine (distal) ash layers are more widespread and better preserved in Mediterranean marine sequences than realised hitherto and may offer much wider potential for refining the dating and correlation of Mediterranean marine sequences as well as marine-land correlations.

  14. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, Jacquelyn C.; Thompson, Anne M.; Schmidlin, F. J.; Oltmans, S. J.; Smit, H. G. J.

    2004-01-01

    Since 1998 the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 ozone profiles over eleven southern hemisphere tropical and subtropical stations. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used to measure ozone. The data are archived at <http://croc.gsfc.nasa.gov/shadoz>. In an analysis of ozonesonde imprecision within the SHADOZ dataset (Thompson et al. [JGR, 108, 8238, 2003]), we pointed out that variations in ozonesonde technique (sensor solution strength, instrument manufacturer, data processing) could lead to station-to-station biases within the SHADOZ dataset. Imprecision and accuracy in the SHADOZ dataset are examined in light of new data. First, SHADOZ total ozone column amounts are compared to version 8 TOMS (2004 release). As for TOMS version 7, satellite total ozone is usually higher than the integrated column amount from the sounding. Discrepancies between the sonde and satellite datasets decline two percentage points on average, compared to version 7 TOMS offsets. Second, the SHADOZ station data are compared to results of chamber simulations (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which the various SHADOZ techniques were evaluated. The range of JOSIE column deviations from a standard instrument (-10%) in the chamber resembles that of the SHADOZ station data. It appears that some systematic variations in the SHADOZ ozone record are accounted for by differences in solution strength, data processing and instrument type (manufacturer).

  15. Charts of operational process specifications ("OPSpecs charts") for assessing the precision, accuracy, and quality control needed to satisfy proficiency testing performance criteria.

    PubMed

    Westgard, J O

    1992-07-01

    "Operational process specifications" have been derived from an analytical quality-planning model to assess the precision, accuracy, and quality control (QC) needed to satisfy Proficiency Testing (PT) criteria. These routine operating specifications are presented in the form of an "OPSpecs chart," which describes the operational limits for imprecision and inaccuracy when a desired level of quality assurance is provided by a specific QC procedure. OPSpecs charts can be used to compare the operational limits for different QC procedures and to select a QC procedure that is appropriate for the precision and accuracy of a specific measurement procedure. To select a QC procedure, one plots the inaccuracy and imprecision observed for a measurement procedure on the OPSpecs chart to define the current operating point, which is then compared with the operational limits of candidate QC procedures. Any QC procedure whose operational limits are greater than the measurement procedure's operating point will provide a known assurance, with the percent chance specified by the OPSpecs chart, that critical analytical errors will be detected. OPSpecs charts for a 10% PT criterion are presented to illustrate the selection of QC procedures for measurement procedures with different amounts of imprecision and inaccuracy. Normalized OPSpecs charts are presented to permit a more general assessment of the analytical performance required with commonly used QC procedures.

  16. Precision and accuracy of ST-EDXRF performance for As determination comparing with ICP-MS and evaluation of As deviation in the soil media.

    PubMed

    Akbulut, Songul; Cevik, Ugur; Van, Aydın Ali; De Wael, Karolien; Van Grieken, Rene

    2014-02-01

    The present study was conducted to (i) determine the precision and accuracy of arsenic measurement in soil samples using ST-EDXRF by comparison with the results of ICP-MS analyses and (ii) identify the relationship of As concentration with soil characteristics. For the analysis of samples, inductively coupled plasma mass spectrometry (ICP-MS) and energy dispersive X-ray fluorescence spectrometry (EDXRF) were performed. According to the results found in the soil samples, the addition of HCl to the HNO3 used for the digestion gave significant variations in the recovery of As. However, spectral interferences between peaks for As and Pb can affect detection limits and accuracy for XRF analysis. When comparing the XRF and ICP-MS results, a correlation was observed with R(2)=0.8414. This means that using a ST-EDXRF spectrometer, it is possible to achieve accurate and precise analysis by calibration with certified reference materials and by choosing an appropriate secondary target. On the other hand, with regard to the soil characteristics analyses, the study highlighted that As is mostly anthropogenically enriched in the studied area.

  17. TanDEM-X IDEM precision and accuracy assessment based on a large assembly of differential GNSS measurements in Kruger National Park, South Africa

    NASA Astrophysics Data System (ADS)

    Baade, J.; Schmullius, C.

    2016-09-01

    High resolution Digital Elevation Models (DEM) represent fundamental data for a wide range of Earth surface process studies. Over the past years, the German TanDEM-X mission acquired data for a new, truly global Digital Elevation Model with unprecedented geometric resolution, precision and accuracy. First TanDEM Intermediate Digital Elevation Models (i.e. IDEM) with a geometric resolution from 0.4 to 3 arcsec were made available for scientific purposes in November 2014. This includes four 1° × 1° tiles covering the Kruger National Park in South Africa. Here, we document the results of a local scale IDEM height accuracy validation exercise utilizing over 10,000 RTK-GNSS-based ground survey points from fourteen sites characterized by mainly pristine Savanna vegetation. The vertical precision of the ground checkpoints is 0.02 m (1σ). Selected precursor data sets (SRTMGL1, SRTM41, ASTER-GDEM2) are included in the analysis to facilitate the comparison. Although IDEM represents an intermediate product on the way to the new global TanDEM-X DEM, expected to be released in late 2016, it allows first insight into the properties of the forthcoming product. Remarkably, the TanDEM-X tiles include a number of auxiliary files providing detailed information pertinent to a user-based quality assessment. We present examples for the utilization of this information in the framework of a local scale study, including the identification of height readings contaminated by water. Furthermore, this study provides evidence for the high precision and accuracy of IDEM height readings and their sensitivity to canopy cover. For open terrain, the 0.4 arcsec resolution edition (IDEM04) yields an average bias of 0.20 ± 0.05 m (95% confidence interval, CI95), a RMSE = 1.03 m and an absolute vertical height error (LE90) of 1.5 [1.4, 1.7] m (CI95). The corresponding values for the lower resolution IDEM editions are about the same and provide evidence for the high quality of the IDEM products.
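
    The bias, RMSE and LE90 statistics quoted above can be derived directly from DEM-minus-checkpoint residuals. A minimal sketch using a nearest-rank 90th percentile and illustrative heights (the paper's confidence intervals would additionally require, e.g., bootstrapping):

```python
import math

def vertical_error_stats(dem_heights, gnss_heights):
    """Bias, RMSE and LE90 (90th-percentile absolute error) of DEM minus reference."""
    errors = [d - g for d, g in zip(dem_heights, gnss_heights)]
    n = len(errors)
    bias = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # nearest-rank percentile: index = ceil(p * n) - 1
    abs_sorted = sorted(abs(e) for e in errors)
    le90 = abs_sorted[min(n - 1, math.ceil(0.9 * n) - 1)]
    return bias, rmse, le90

# Illustrative DEM heights vs. GNSS checkpoint heights (m)
dem = [10.2, 9.8, 10.5, 10.0, 9.5, 10.3, 10.1, 9.9, 10.4, 9.7]
gnss = [10.0] * 10
bias, rmse, le90 = vertical_error_stats(dem, gnss)
print(f"bias={bias:.2f} m, RMSE={rmse:.2f} m, LE90={le90:.2f} m")
```

    Note that LE90 definitions vary between nearest-rank and interpolated percentiles; the choice matters for small checkpoint samples.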

  18. Accuracy Maximization Analysis for Sensory-Perceptual Tasks: Computational Improvements, Filter Robustness, and Coding Advantages for Scaled Additive Noise

    PubMed Central

    Burge, Johannes

    2017-01-01

    Accuracy Maximization Analysis (AMA) is a recently developed Bayesian ideal observer method for task-specific dimensionality reduction. Given a training set of proximal stimuli (e.g. retinal images), a response noise model, and a cost function, AMA returns the filters (i.e. receptive fields) that extract the most useful stimulus features for estimating a user-specified latent variable from those stimuli. Here, we first contribute two technical advances that significantly reduce AMA’s compute time: we derive gradients of cost functions for which two popular estimators are appropriate, and we implement a stochastic gradient descent (AMA-SGD) routine for filter learning. Next, we show how the method can be used to simultaneously probe the impact on neural encoding of natural stimulus variability, the prior over the latent variable, noise power, and the choice of cost function. Then, we examine the geometry of AMA’s unique combination of properties that distinguish it from better-known statistical methods. Using binocular disparity estimation as a concrete test case, we develop insights that have general implications for understanding neural encoding and decoding in a broad class of fundamental sensory-perceptual tasks connected to the energy model. Specifically, we find that non-orthogonal (partially redundant) filters with scaled additive noise tend to outperform orthogonal filters with constant additive noise; non-orthogonal filters and scaled additive noise can interact to sculpt noise-induced stimulus encoding uncertainty to match task-irrelevant stimulus variability. Thus, we show that some properties of neural response thought to be biophysical nuisances can confer coding advantages to neural systems. Finally, we speculate that, if repurposed for the problem of neural systems identification, AMA may be able to overcome a fundamental limitation of standard subunit model estimation. As natural stimuli become more widely used in the study of psychophysical and

  19. Evaluating precision and accuracy when quantifying different endogenous control reference genes in maize using real-time PCR.

    PubMed

    Scholdberg, Tandace A; Norden, Tim D; Nelson, Daishia D; Jenkins, G Ronald

    2009-04-08

    The agricultural biotechnology industry routinely utilizes real-time quantitative PCR (RT-qPCR) for the detection of biotechnology-derived traits in plant material, particularly for meeting the requirements of legislative mandates that rely upon the trace detection of DNA. Quantification via real-time RT-qPCR in plant species involves the measurement of the copy number of a taxon-specific, endogenous control gene exposed to the same manipulations as the target gene prior to amplification. The International Organization for Standardization (ISO 21570:2005) specifies that the copy number of an endogenous reference gene be used for normalizing the concentration (expressed as a % w/w) of a trait-specific target gene when using RT-qPCR. For this purpose, the copy number of a constitutively expressed endogenous reference gene in the same sample is routinely monitored. Real-time qPCR was employed to evaluate the predictability and performance of commonly used endogenous control genes (starch synthase, SSIIb-2, SSIIb-3; alcohol dehydrogenase, ADH; high-mobility group, HMG; zein; and invertase, IVR) used to detect biotechnology-derived traits in maize. The data revealed relatively accurate and precise amplification efficiencies when isogenic maize was compared to certified reference standards, but highly variable results when 23 nonisogenic maize cultivars were compared to an IRMM Bt-11 reference standard. Identifying the most suitable endogenous control gene, one that amplifies consistently and predictably across different maize cultivars, and implementing this as an internationally recognized standard would contribute toward harmonized testing of biotechnology-derived traits in maize.

  20. EFFECT OF RADIATION DOSE LEVEL ON ACCURACY AND PRECISION OF MANUAL SIZE MEASUREMENTS IN CHEST TOMOSYNTHESIS EVALUATED USING SIMULATED PULMONARY NODULES

    PubMed Central

    Söderman, Christina; Johnsson, Åse Allansdotter; Vikgren, Jenny; Norrlund, Rauni Rossi; Molnar, David; Svalkvist, Angelica; Månsson, Lars Gunnar; Båth, Magnus

    2016-01-01

    The aim of the present study was to investigate the dependency of the accuracy and precision of nodule diameter measurements on the radiation dose level in chest tomosynthesis. Artificial ellipsoid-shaped nodules with known dimensions were inserted in clinical chest tomosynthesis images. Noise was added to the images in order to simulate radiation dose levels corresponding to effective doses for a standard-sized patient of 0.06 and 0.04 mSv. These levels were compared with the original dose level, corresponding to an effective dose of 0.12 mSv for a standard-sized patient. Four thoracic radiologists measured the longest diameter of the nodules. The study was restricted to nodules located in high-dose areas of the tomosynthesis projection radiographs. A significant decrease of the measurement accuracy and intraobserver variability was seen for the lowest dose level for a subset of the observers. No significant effect of dose level on the interobserver variability was found. The number of non-measurable small nodules (≤5 mm) was higher for the two lowest dose levels compared with the original dose level. In conclusion, for pulmonary nodules at positions in the lung corresponding to locations in high-dose areas of the projection radiographs, using a radiation dose level resulting in an effective dose of 0.06 mSv to a standard-sized patient may be possible in chest tomosynthesis without affecting the accuracy and precision of nodule diameter measurements to any large extent. However, an increasing number of non-measurable small nodules (≤5 mm) with decreasing radiation dose may raise some concerns regarding an applied general dose reduction for chest tomosynthesis examinations in the clinical praxis. PMID:26994093

  1. Evaluation of the geomorphometric results and residual values of a robust plane fitting method applied to different DTMs of various scales and accuracy

    NASA Astrophysics Data System (ADS)

    Koma, Zsófia; Székely, Balázs; Dorninger, Peter; Kovács, Gábor

    2013-04-01

    Due to the need for quantitative analysis of various geomorphological landforms, the importance of fast and effective automatic processing of different kinds of digital terrain models (DTMs) is increasing. The robust plane fitting (segmentation) method, developed at the Institute of Photogrammetry and Remote Sensing at Vienna University of Technology, allows the processing of large 3D point clouds (containing millions of points), performs automatic detection of the planar elements of the surface via parameter estimation, and provides a considerable data reduction for the modeled area. Its geoscientific application allows the modeling of different landforms with the fitted planes as planar facets. In our study we analyze the resulting set of fitted planes in terms of accuracy, model reliability and dependence on the input parameters. To this end we used DTMs of different scales and accuracy: (1) an artificially generated 3D point cloud model with different magnitudes of error; (2) LiDAR data with 0.1 m error; (3) SRTM (Shuttle Radar Topography Mission) DTM database with 5 m accuracy; (4) DTM data from HRSC (High Resolution Stereo Camera) of the planet Mars with 10 m error. The analysis of the simulated 3D point cloud with normally distributed errors comprised different kinds of statistical tests (for example Chi-square and Kolmogorov-Smirnov tests) applied on the residual values and an evaluation of the dependence of the residual values on the input parameters. These tests were repeated on the real data, supplemented with a categorization of the segmentation result depending on the input parameters, model reliability and the geomorphological meaning of the fitted planes. The simulation results show that for the artificially generated data with normally distributed errors the null hypothesis can be accepted based on the residual value distribution being also normal, but in case of the test on the real data the residual value distribution is

  2. Deployment of precise and robust sensors on board ISS-for scientific experiments and for operation of the station.

    PubMed

    Stenzel, Christian

    2016-09-01

    The International Space Station (ISS) is the largest technical vehicle ever built by mankind. It provides a living area for six astronauts and also represents a laboratory in which scientific experiments are conducted in an extraordinary environment. The deployed sensor technology contributes significantly to the operational and scientific success of the station. The sensors on board the ISS can be thereby classified into two categories which differ significantly in their key features: (1) sensors related to crew and station health, and (2) sensors to provide specific measurements in research facilities. The operation of the station requires robust, long-term stable and reliable sensors, since they assure the survival of the astronauts and the intactness of the station. Recently, a wireless sensor network for measuring environmental parameters like temperature, pressure, and humidity was established and its function could be successfully verified over several months. Such a network enhances the operational reliability and stability for monitoring these critical parameters compared to single sensors. The sensors which are implemented into the research facilities have to fulfil other objectives. The high performance of the scientific experiments that are conducted in different research facilities on-board demands the perfect embedding of the sensor in the respective instrumental setup which forms the complete measurement chain. It is shown that the performance of the single sensor alone does not determine the success of the measurement task; moreover, the synergy between different sensors and actuators as well as appropriate sample taking, followed by an appropriate sample preparation play an essential role. The application in a space environment adds additional challenges to the sensor technology, for example the necessity for miniaturisation, automation, reliability, and long-term operation. An alternative is the repetitive calibration of the sensors. This approach

  3. SU-E-J-147: Monte Carlo Study of the Precision and Accuracy of Proton CT Reconstructed Relative Stopping Power Maps

    SciTech Connect

    Dedes, G; Asano, Y; Parodi, K; Arbor, N; Dauvergne, D; Testa, E; Letang, J; Rit, S

    2015-06-15

    Purpose: The quantification of the intrinsic performance of proton computed tomography (pCT) as a modality for treatment planning in proton therapy. The performance of an ideal pCT scanner is studied as a function of various parameters. Methods: Using GATE/Geant4, we simulated an ideal pCT scanner and scans of several cylindrical phantoms with various tissue equivalent inserts of different sizes. Insert materials were selected in order to be of clinical relevance. Tomographic images were reconstructed using a filtered backprojection algorithm taking into account the scattering of protons in the phantom. To quantify the performance of the ideal pCT scanner, we study the precision and the accuracy with respect to the theoretical relative stopping power ratio (RSP) values for different beam energies, imaging doses, insert sizes and detector positions. The planning range uncertainty resulting from the reconstructed RSP is also assessed by comparison with the range of the protons in the analytically simulated phantoms. Results: The results indicate that pCT can intrinsically achieve RSP resolution below 1%, for most examined tissues at beam energies below 300 MeV and for imaging doses around 1 mGy. RSP map accuracy of better than 0.5% is observed for most tissue types within the studied dose range (0.2–1.5 mGy). Finally, the uncertainty in the proton range due to the accuracy of the reconstructed RSP map is well below 1%. Conclusion: This work explores the intrinsic performance of pCT as an imaging modality for proton treatment planning. The obtained results show that under ideal conditions, 3D RSP maps can be reconstructed with an accuracy better than 1%. Hence, pCT is a promising candidate for reducing the range uncertainties introduced by the use of X-ray CT along with a semiempirical calibration to RSP. Supported by the DFG Cluster of Excellence Munich-Centre for Advanced Photonics (MAP)

  4. An evaluation of the accuracy and precision of methane prediction equations for beef cattle fed high-forage and high-grain diets.

    PubMed

    Escobar-Bahamondes, P; Oba, M; Beauchemin, K A

    2017-01-01

    The study determined the performance of equations to predict enteric methane (CH4) from beef cattle fed forage- and grain-based diets. Many equations are available to predict CH4 from beef cattle, and the predictions vary substantially among equations. The aims were to (1) construct a database of CH4 emissions for beef cattle from the published literature, and (2) identify the most precise and accurate extant CH4 prediction models for beef cattle fed diets varying in forage content. The database comprised treatment means of CH4 production from in vivo beef studies published from 2000 to 2015. Criteria to include data in the database were as follows: animal description, intakes, diet composition and CH4 production. In all, 54 published equations that predict CH4 production from diet composition were evaluated. Precision and accuracy of the equations were evaluated using the concordance correlation coefficient (rc), root mean square prediction error (RMSPE), model efficiency and analysis of errors. Equations were ranked using a combined index of the various statistical assessments based on principal component analysis. The final database contained 53 studies and 207 treatment means that were divided into two data sets: diets containing ⩾400 g/kg dry matter (DM) forage (n=116) and diets containing ⩽200 g/kg DM forage (n=42). Diets containing between 200 and 400 g/kg DM forage were not included in the analysis because of their limited number (n=6). Outliers, treatment means where feed was fed restrictively and diets with CH4 mitigation additives were omitted (n=43). Using the high-forage dataset, the best-fit equations were the Intergovernmental Panel on Climate Change Tier 2 method, 3 equations for steers that considered gross energy intake (GEI) and body weight, and an equation that considered dry matter intake and starch:neutral detergent fiber, with rc ranging from 0.60 to 0.73 and RMSPE from 35.6 to 45.9 g/day. For the high-grain diets, the 5 best
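
    The two headline statistics used to rank the equations, Lin's concordance correlation coefficient (rc) and RMSPE, can be sketched as follows. This uses the population-moment form of rc and illustrative observed/predicted values, not the paper's data:

```python
from statistics import fmean

def concordance_cc(observed, predicted):
    """Lin's concordance correlation coefficient:
    rc = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    n = len(observed)
    mx, my = fmean(observed), fmean(predicted)
    vx = sum((x - mx) ** 2 for x in observed) / n
    vy = sum((y - my) ** 2 for y in predicted) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(observed, predicted)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

def rmspe(observed, predicted):
    """Root mean square prediction error, in the units of the data."""
    n = len(observed)
    return (sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n) ** 0.5

# Illustrative observed vs. predicted CH4 production (g/day)
obs = [150.0, 180.0, 210.0, 165.0, 195.0]
pred = [145.0, 190.0, 205.0, 170.0, 185.0]
print(round(concordance_cc(obs, pred), 3), round(rmspe(obs, pred), 1))
```

    Unlike Pearson's r, rc penalizes both scatter and systematic offset from the 1:1 line, which is why it is preferred for agreement between predictions and measurements.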

  5. Ultra-Precision Measurement and Control of Angle Motion in Piezo-Based Platforms Using Strain Gauge Sensors and a Robust Composite Controller

    PubMed Central

    Liu, Lei; Bai, Yu-Guang; Zhang, Da-Li; Wu, Zhi-Gang

    2013-01-01

    The measurement and control strategy of a piezo-based platform by using strain gauge sensors (SGS) and a robust composite controller is investigated in this paper. First, the experimental setup is constructed by using a piezo-based platform, SGS sensors, an AD5435 platform and two voltage amplifiers. Then, the measurement strategy to measure the tip/tilt angles accurately in the order of sub-μrad is presented. A comprehensive composite control strategy design to enhance the tracking accuracy with a novel driving principle is also proposed. Finally, an experiment is presented to validate the measurement and control strategy. The experimental results demonstrate that the proposed measurement and control strategy provides accurate angle motion with a root mean square (RMS) error of 0.21 μrad, which is approximately equal to the noise level. PMID:23860316

  6. Accuracy, sensitivity and robustness of five different methods for the estimation of gait temporal parameters using a single inertial sensor mounted on the lower trunk.

    PubMed

    Trojaniello, Diana; Cereatti, Andrea; Della Croce, Ugo

    2014-09-01

    In the last decade, various methods for the estimation of gait events and temporal parameters from the acceleration signals of a single inertial measurement unit (IMU) mounted at waist level have been proposed. Despite the growing interest for such methodologies, a thorough comparative analysis of methods with regards to number of extra and missed events, accuracy and robustness to IMU location is still missing in the literature. The aim of this work was to fill this gap. Five methods have been tested on single IMU data acquired from fourteen healthy subjects walking while being recorded by a stereo-photogrammetric system and two force platforms. The sensitivity in detecting initial and final contacts varied between 81% and 100% across methods, whereas the positive predictive values ranged between 94% and 100%. For all tested methods, stride and step time estimates were obtained; three of the selected methods also allowed estimation of stance, swing and double support time. Results showed that the accuracy in estimating step and stride durations was acceptable for all methods. Conversely, a statistical difference was found in the error in estimating stance, swing and double support time, due to the larger errors in the final contact determination. Except for one method, the IMU positioning on the lower trunk did not represent a critical factor for the estimation of gait temporal parameters. Results obtained in this study may not be applicable to pathologic gait.
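
    A hedged sketch of how sensitivity and positive predictive value are typically scored when comparing detected gait events against reference (e.g., force-platform) events: greedy one-to-one matching within a time tolerance. The matching window and data are assumptions for illustration, not the paper's protocol:

```python
def event_detection_stats(reference, detected, tol=0.05):
    """Sensitivity and PPV for detected event times (s) vs. reference times,
    using greedy nearest-neighbour matching within +/- tol seconds."""
    ref = sorted(reference)
    det = sorted(detected)
    used = [False] * len(det)
    tp = 0
    for r in ref:
        # find the nearest unused detection within tolerance
        best, best_d = None, tol
        for i, d in enumerate(det):
            if not used[i] and abs(d - r) <= best_d:
                best, best_d = i, abs(d - r)
        if best is not None:
            used[best] = True
            tp += 1
    fn = len(ref) - tp   # missed events
    fp = len(det) - tp   # extra events
    sensitivity = tp / (tp + fn) if ref else float("nan")
    ppv = tp / (tp + fp) if det else float("nan")
    return sensitivity, ppv

# Illustrative initial-contact times (s): one missed, one extra event
print(event_detection_stats([1.0, 2.0, 3.0, 4.0], [1.01, 2.2, 3.0, 5.0]))
```

    Extra detections lower PPV while missed events lower sensitivity, which is why the two figures are reported separately in the abstract.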

  7. Effects of x-ray and CT image enhancements on the robustness and accuracy of a rigid 3D/2D image registration.

    PubMed

    Kim, Jinkoo; Yin, Fang-Fang; Zhao, Yang; Kim, Jae Ho

    2005-04-01

    A rigid body three-dimensional/two-dimensional (3D/2D) registration method has been implemented using mutual information, gradient ascent, and 3D texture-map-based digitally reconstructed radiographs. Nine combinations of commonly used x-ray and computed tomography (CT) image enhancement methods, including window leveling, histogram equalization, and adaptive histogram equalization, were examined to assess their effects on the accuracy and robustness of the registration method. From a set of experiments using an anthropomorphic chest phantom, we were able to draw several conclusions. First, the CT and x-ray preprocessing combination with the widest attraction range was the one that linearly stretched the histograms onto the entire display range of both CT and x-ray images. The average attraction ranges of this combination were 71.3 mm and 61.3 deg in the translation and rotation dimensions, respectively, and the average errors were 0.12 deg and 0.47 mm. Second, the combination of the CT image with tissue and bone information and the x-ray images with adaptive histogram equalization also showed subvoxel accuracy, especially the best in the translation dimensions. However, its attraction ranges were the smallest among the examined combinations (on average 36 mm and 19 deg). Last, the bone-only information on the CT image did not show convergence to the correct registration.

  8. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  9. In situ sulfur isotope analysis of sulfide minerals by SIMS: Precision and accuracy, with application to thermometry of ~3.5Ga Pilbara cherts

    USGS Publications Warehouse

    Kozdon, R.; Kita, N.T.; Huberty, J.M.; Fournelle, J.H.; Johnson, C.A.; Valley, J.W.

    2010-01-01

    Secondary ion mass spectrometry (SIMS) measurement of sulfur isotope ratios is a potentially powerful technique for in situ studies in many areas of Earth and planetary science. Tests were performed to evaluate the accuracy and precision of sulfur isotope analysis by SIMS in a set of seven well-characterized, isotopically homogeneous natural sulfide standards. The spot-to-spot and grain-to-grain precision for δ34S is ± 0.3‰ for chalcopyrite and pyrrhotite, and ± 0.2‰ for pyrite (2SD) using a 1.6 nA primary beam that was focused to 10 µm diameter with a Gaussian-beam density distribution. Likewise, multiple δ34S measurements within single grains of sphalerite are within ± 0.3‰. However, between individual sphalerite grains, δ34S varies by up to 3.4‰ and the grain-to-grain precision is poor (± 1.7‰, n = 20). Measured values of δ34S correspond with analysis pit microstructures, ranging from smooth surfaces for grains with high δ34S values, to pronounced ripples and terraces in analysis pits from grains featuring low δ34S values. Electron backscatter diffraction (EBSD) shows that individual sphalerite grains are single crystals, whereas crystal orientation varies from grain to grain. The 3.4‰ variation in measured δ34S between individual grains of sphalerite is attributed to changes in instrumental bias caused by different crystal orientations with respect to the incident primary Cs+ beam. High δ34S values in sphalerite correlate to when the Cs+ beam is parallel to the set of directions from [111] to [110], which are preferred directions for channeling and focusing in diamond-centered cubic crystals. Crystal orientation effects on instrumental bias were further detected in galena. However, as a result of the perfect cleavage along {100}, crushed chips of galena are typically cube-shaped and likely to be preferentially oriented, thus crystal orientation effects on instrumental bias may be obscured. Tests were made to improve the analytical

  10. An in-depth evaluation of accuracy and precision in Hg isotopic analysis via pneumatic nebulization and cold vapor generation multi-collector ICP-mass spectrometry.

    PubMed

    Rua-Ibarz, Ana; Bolea-Fernandez, Eduardo; Vanhaecke, Frank

    2016-01-01

    Mercury (Hg) isotopic analysis via multi-collector inductively coupled plasma (ICP)-mass spectrometry (MC-ICP-MS) can provide relevant biogeochemical information by revealing sources, pathways, and sinks of this highly toxic metal. In this work, the capabilities and limitations of two different sample introduction systems, based on pneumatic nebulization (PN) and cold vapor generation (CVG), respectively, were evaluated in the context of Hg isotopic analysis via MC-ICP-MS. The effect of (i) instrument settings and acquisition parameters, (ii) concentration of analyte element (Hg), and internal standard (Tl)-used for mass discrimination correction purposes-and (iii) different mass bias correction approaches on the accuracy and precision of Hg isotope ratio results was evaluated. The extent and stability of mass bias were assessed in a long-term study (18 months, n = 250), demonstrating a precision ≤0.006% relative standard deviation (RSD). CVG-MC-ICP-MS showed an approximately 20-fold enhancement in Hg signal intensity compared with PN-MC-ICP-MS. For CVG-MC-ICP-MS, the mass bias induced by instrumental mass discrimination was accurately corrected for by using either external correction in a sample-standard bracketing approach (SSB) or double correction, consisting of the use of Tl as internal standard in a revised version of the Russell law (Baxter approach), followed by SSB. Concomitant matrix elements did not affect CVG-ICP-MS results. Neither with PN, nor with CVG, any evidence for mass-independent discrimination effects in the instrument was observed within the experimental precision obtained. CVG-MC-ICP-MS was finally used for Hg isotopic analysis of reference materials (RMs) of relevant environmental origin. The isotopic composition of Hg in RMs of marine biological origin testified of mass-independent fractionation that affected the odd-numbered Hg isotopes. While older RMs were used for validation purposes, novel Hg isotopic data are provided for the
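
    In its simplest two-point form, the sample-standard bracketing (SSB) correction mentioned above expresses the raw sample ratio against the mean of the bracketing standard runs. A minimal sketch with hypothetical ratio values (the function name and numbers are illustrative; the Baxter/Russell internal-standard step used in the paper is not shown):

```python
def ssb_delta(r_sample, r_std_before, r_std_after):
    """Delta value (per mil) of a sample isotope ratio bracketed by
    two runs of the same standard (simple two-point SSB)."""
    r_std = (r_std_before + r_std_after) / 2.0
    return (r_sample / r_std - 1.0) * 1000.0

# Hypothetical raw 202Hg/198Hg ratios: sample bracketed by two standard runs
print(round(ssb_delta(0.5100, 0.5095, 0.5097), 3))
```

    Averaging the bracketing standards assumes the instrumental mass bias drifts roughly linearly between the two standard runs.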

  11. Robustness and Accuracy of Feature-Based Single Image 2-D–3-D Registration Without Correspondences for Image-Guided Intervention

    PubMed Central

    Armand, Mehran; Otake, Yoshito; Yau, Wai-Pan; Cheung, Paul Y. S.; Hu, Yong; Taylor, Russell H.

    2015-01-01

    2-D-to-3-D registration is critical and fundamental in image-guided interventions. It can be achieved from a single image using paired point correspondences between the object and the image. The common assumption that such correspondences can readily be established does not necessarily hold for image-guided interventions. Intraoperative image clutter and imperfect feature extraction may introduce false detections, and, due to the physics of X-ray imaging, the 2-D image point features may be indistinguishable from each other and/or obscured by anatomy. These factors create difficulties in establishing correspondences between image features and 3-D data points. In this paper, we propose an accurate, robust, and fast method to accomplish 2-D–3-D registration using a single image, without the need for establishing paired correspondences, in the presence of false detections. We formulate 2-D–3-D registration as a maximum likelihood estimation problem, which is then solved by coupling expectation maximization with particle swarm optimization. The proposed method was evaluated in a phantom and a cadaver study. In the phantom study, it achieved subdegree rotation errors and submillimeter in-plane (X-Y plane) translation errors. In both studies, it outperformed state-of-the-art methods that do not use paired correspondences and achieved the same accuracy as a state-of-the-art globally optimal method that uses correct paired correspondences. PMID:23955696

  12. Accuracy and precision of 88Sr/86Sr and 87Sr/86Sr measurements by MC-ICPMS compromised by high barium concentrations

    NASA Astrophysics Data System (ADS)

    Scher, Howie D.; Griffith, Elizabeth M.; Buckley, Wayne P.

    2014-02-01

    Barite (BaSO4) is a widely distributed mineral that incorporates strontium (Sr) during formation. Mass-dependent fractionation of Sr isotopes occurs during abiotic precipitation of barite and formation of barite associated with biological processes (e.g., bacterial sulfide oxidation). Sr isotopes in barite can provide provenance information as well as potentially reconstruct sample formation conditions (e.g., saturation state, temperature, biotic versus abiotic). Incomplete separation of Ba from Sr has complicated measurements of Sr isotopes by MC-ICPMS. In this study, we tested the effects of Ba in Sr sample solutions and modified extraction chromatography of Sr using Eichrom Sr Spec (Eichrom Technologies LLC, USA) resin to enable rapid, accurate, and precise measurements of 88Sr/86Sr and 87Sr/86Sr ratios from Ba-rich matrices. Sr isotope ratios of sample solutions doped with Ba were statistically indistinguishable from Ba-free sample solutions below 1 ppm Ba. Deviations in both 87Sr/86Sr and δ88/86Sr occurred above 1 ppm Ba. An updated extraction chromatography method tested with barite and Ba-doped seawater produces Sr sample solutions containing 10-100 ppb levels of Ba. The practice of Zr spiking for external mass-discrimination correction of 88Sr/86Sr ratios was also evaluated, and it was confirmed that variable Zr levels do not have adverse effects on the accuracy and precision of 87Sr/86Sr ratios in the Zr concentration range required to produce accurate δ88/86Sr values.

  13. Functional limits of agreement applied as a novel method comparison tool for accuracy and precision of inertial measurement unit derived displacement of the distal limb in horses.

    PubMed

    Olsen, Emil; Pfau, Thilo; Ritz, Christian

    2013-09-03

    Over-ground motion analysis in horses is limited by the small number of strides obtainable and by the constraints of the indoor gait laboratory. Inertial measurement units (IMUs) are transforming the knowledge of human motion and objective clinical assessment through the opportunity to obtain clinically relevant data under various conditions. When using IMUs on the limbs of horses to determine local position estimates, conditions with a high dynamic range of both accelerations and rotational velocities prove particularly challenging. Here we apply traditional method agreement and suggest a novel method of functional data analysis to compare motion capture with IMUs placed over the fetlock joint in seven horses. We demonstrate acceptable accuracy and precision, at less than or equal to 5% of the range of motion, for detection of distal-limb-mounted cranio-caudal and vertical position. We do not recommend the use of the latero-medial position estimate of the distal metacarpus/metatarsus during walk, where the average error is 10% and the maximum error 111% of the range. We also show that functional data analysis and functional limits of agreement are sensitive methods for comparison of cyclical data and could be applied to differentiate changes in gait for individuals across time and conditions.
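
The limits-of-agreement idea underlying this comparison can be sketched in its classical scalar (Bland-Altman) form; the functional variant in the paper computes analogous limits pointwise along the stride cycle. All numbers below are hypothetical:

```python
import statistics

def limits_of_agreement(method_a, method_b, k=1.96):
    """Classical Bland-Altman limits of agreement: the mean pairwise
    difference (bias) +/- k standard deviations of the differences
    (k = 1.96 gives approximately 95% limits)."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - k * sd, bias, bias + k * sd

# Hypothetical paired displacement estimates (mm) from motion capture
# and an IMU at matched instants of the stride cycle.
mocap = [10.1, 12.3, 9.8, 11.5, 10.9]
imu   = [10.4, 12.0, 10.1, 11.9, 10.7]
lower, bias, upper = limits_of_agreement(mocap, imu)
```

In the functional version, `method_a` and `method_b` would be curves over the stride cycle and the bias and limits would themselves be functions of the cycle fraction.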

  14. Performing elemental microanalysis with high accuracy and high precision by scanning electron microscopy/silicon drift detector energy-dispersive X-ray spectrometry (SEM/SDD-EDS).

    PubMed

    Newbury, Dale E; Ritchie, Nicholas W M

    Electron-excited X-ray microanalysis performed in the scanning electron microscope with energy-dispersive X-ray spectrometry (EDS) is a core technique for characterization of the microstructure of materials. The recent advances in EDS performance with the silicon drift detector (SDD) enable accuracy and precision equivalent to that of the high spectral resolution wavelength-dispersive spectrometer employed on the electron probe microanalyzer platform. SDD-EDS throughput, resolution, and stability provide practical operating conditions for measurement of high-count spectra that form the basis for peak fitting procedures that recover the characteristic peak intensities, even for elemental combinations where severe peak overlaps occur, such as PbS, MoS2, BaTiO3, SrWO4, and WSi2. Accurate analyses are also demonstrated for interferences involving large concentration ratios: a major constituent on a minor constituent (Ba at 0.4299 mass fraction on Ti at 0.0180) and a major constituent on a trace constituent (Ba at 0.2194 on Ce at 0.00407; Si at 0.1145 on Ta at 0.0041). Accurate analyses of low atomic number elements, C, N, O, and F, are demonstrated. Measurement of trace constituents with limits of detection below 0.001 mass fraction (1000 ppm) is possible within a practical measurement time of 500 s.

  15. Using Global Analysis to Extend the Accuracy and Precision of Binding Measurements with T cell Receptors and Their Peptide/MHC Ligands

    PubMed Central

    Blevins, Sydney J.; Baker, Brian M.

    2017-01-01

    In cellular immunity, clonally distributed T cell receptors (TCRs) engage complexes of peptides bound to major histocompatibility complex proteins (pMHCs). In the interactions of TCRs with pMHCs, regions of restricted and variable diversity align in a structurally complex fashion. Many studies have used mutagenesis to attempt to understand the “roles” played by various interface components in determining TCR recognition properties such as specificity and cross-reactivity. However, these measurements are often complicated or even compromised by the weak affinities TCRs maintain toward pMHC. Here, we demonstrate how global analysis of multiple datasets can be used to significantly extend the accuracy and precision of such TCR binding experiments. Application of this approach should positively impact efforts to understand TCR recognition and facilitate the creation of mutational databases to help engineer TCRs with tuned molecular recognition properties. We also show how global analysis can be used to analyze double mutant cycles in TCR-pMHC interfaces, which can lead to new insights into immune recognition. PMID:28197404

  16. High-Precision Surface Inspection: Uncertainty Evaluation within an Accuracy Range of 15μm with Triangulation-based Laser Line Scanners

    NASA Astrophysics Data System (ADS)

    Dupuis, Jan; Kuhlmann, Heiner

    2014-06-01

    Triangulation-based range sensors, e.g. laser line scanners, are used for high-precision geometrical acquisition of free-form surfaces, for reverse engineering tasks or quality management. In contrast to classical tactile measuring devices, these scanners generate a great amount of 3D points in a short period of time and enable the inspection of soft materials. However, for accurate measurements, a number of aspects have to be considered to minimize measurement uncertainties. This study outlines possible sources of uncertainty during the measurement process regarding scanner warm-up and the impact of laser power and exposure time, as well as the scanner's reaction to areas of discontinuity, e.g. edges. All experiments were performed using a fixed scanner position to avoid effects resulting from imaging geometry. The results show a significant dependence of measurement accuracy on the correct adaptation of exposure time as a function of surface reflectivity and laser power. Additionally, it is illustrated that surface structure as well as edges can cause significant systematic uncertainties.

  17. Toward robust deconvolution of pass-through paleomagnetic measurements: new tool to estimate magnetometer sensor response and laser interferometry of sample positioning accuracy

    NASA Astrophysics Data System (ADS)

    Oda, Hirokuni; Xuan, Chuang; Yamamoto, Yuhji

    2016-07-01

    Pass-through superconducting rock magnetometers (SRM) offer rapid and high-precision remanence measurements for continuous samples that are essential for modern paleomagnetism studies. However, continuous SRM measurements are inevitably smoothed and distorted due to the convolution effect of the SRM sensor response. Deconvolution is necessary to restore accurate magnetization from pass-through SRM data, and robust deconvolution requires a reliable estimate of the SRM sensor response as well as an understanding of the uncertainties associated with the SRM measurement system. In this paper, we use the SRM at the Kochi Core Center (KCC), Japan, as an example to introduce a new tool and procedure for accurate and efficient estimation of SRM sensor response. To quantify uncertainties associated with SRM measurement due to track positioning errors and to test their effects on deconvolution, we employed laser interferometry for precise monitoring of track positions both with and without a u-channel sample placed on the SRM tray. The acquired KCC SRM sensor response shows a significant cross-term of Z-axis magnetization on the X-axis pick-up coil and full widths of ~46-54 mm at half-maximum response for the three pick-up coils, which are significantly narrower than those (~73-80 mm) for the liquid-He-free SRM at Oregon State University. Laser interferometry measurements on the KCC SRM tracking system indicate positioning uncertainties of ~0.1-0.2 and ~0.5 mm for tracking with and without a u-channel sample on the tray, respectively. Positioning errors appear to have reproducible components of up to ~0.5 mm, possibly due to patterns or damage on the tray surface or the rope used for the tracking system. Deconvolution of 50,000 simulated measurement data with realistic error introduced based on the position uncertainties indicates that, although the SRM tracking system has recognizable positioning uncertainties, they do not significantly degrade the ability of deconvolution to accurately restore high
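
The convolution/deconvolution relationship described above can be illustrated with a toy one-dimensional example using generic Tikhonov-regularized least squares. This is not the authors' deconvolution software, and the boxcar sensor response is purely an assumption for illustration:

```python
import numpy as np

def convolution_matrix(response, n):
    """Matrix K such that (K @ m)[i] = sum_j response[j] * m[i + j - half],
    i.e. a pass-through measurement of magnetization m (length n)
    smoothed by a centered, odd-length sensor response."""
    half = len(response) // 2
    K = np.zeros((n, n))
    for i in range(n):
        for j, r in enumerate(response):
            k = i + j - half
            if 0 <= k < n:
                K[i, k] = r
    return K

def deconvolve(measured, response, alpha=1e-6):
    """Recover m from d = K m by minimizing ||K m - d||^2 + alpha ||m||^2."""
    K = convolution_matrix(response, len(measured))
    A = K.T @ K + alpha * np.eye(len(measured))
    return np.linalg.solve(A, K.T @ measured)

response = np.array([0.2, 0.6, 0.2])   # assumed normalized sensor response
truth = np.zeros(21)
truth[10] = 1.0                        # unit magnetization spike
measured = convolution_matrix(response, 21) @ truth   # smoothed record
recovered = deconvolve(measured, response)
```

With the tiny regularization weight the spike is restored almost exactly; a larger `alpha` trades sharpness for robustness to noise, which is exactly the trade-off that measurement and positioning errors feed into.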

  18. Technical Note: Precision and accuracy of a commercially available CT optically stimulated luminescent dosimetry system for the measurement of CT dose index

    SciTech Connect

    Vrieze, Thomas J.; Sturchio, Glenn M.; McCollough, Cynthia H.

    2012-11-15

    Purpose: To determine the precision and accuracy of CTDI100 measurements made using commercially available optically stimulated luminescent (OSL) dosimeters (Landauer, Inc.) as beam width, tube potential, and attenuating material were varied. Methods: One hundred forty OSL dosimeters were individually exposed to a single axial CT scan, either in air or in a 16-cm (head) or 32-cm (body) CTDI phantom at both center and peripheral positions. Scans were performed using nominal total beam widths of 3.6, 6, 19.2, and 28.8 mm at 120 kV and 28.8 mm at 80 kV. Five measurements were made for each of 28 parameter combinations. Measurements were made under the same conditions using a 100-mm long CTDI ion chamber. Exposed OSL dosimeters were returned to the manufacturer, who reported dose to air (in mGy) as a function of distance along the probe, integrated dose, and CTDI100. Results: The mean precision averaged over the 28 datasets containing five measurements each was 1.4% ± 0.6% (range: 0.6%-2.7%) for OSL and 0.08% ± 0.06% (range: 0.02%-0.3%) for the ion chamber. The root mean square (RMS) percent differences between OSL and ion chamber CTDI100 values were 13.8%, 6.4%, and 8.7% for in-air, head, and body measurements, respectively, with an overall RMS percent difference of 10.1%. OSL underestimated CTDI100 relative to the ion chamber in 21/28 cases (75%). After manual correction of the 80 kV measurements, the RMS percent differences between OSL and ion chamber measurements were 9.9% and 10.0% for 80 and 120 kV, respectively. Conclusions: Measurements of CTDI100 with commercially available CT OSL dosimeters had a percent standard deviation of 1.4%. After energy-dependent correction factors were applied, the RMS percent difference in the measured CTDI100 values was about 10%, with a tendency for OSL to underestimate CTDI relative to the ion chamber. Unlike ion chamber methods, however, OSL dosimeters allow measurement of the radiation dose profile.
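
The precision and RMS-percent-difference statistics reported in this abstract follow standard definitions, sketched below with hypothetical CTDI100 values (not data from the study):

```python
import math
import statistics

def percent_precision(values):
    """Precision as the percent standard deviation of repeat measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def rms_percent_difference(test, reference):
    """Root mean square of the percent differences of paired values."""
    pct = [100.0 * (t - r) / r for t, r in zip(test, reference)]
    return math.sqrt(sum(p * p for p in pct) / len(pct))

# Hypothetical CTDI100 values (mGy): OSL vs. ion chamber for three setups.
osl = [18.1, 30.5, 11.2]
chamber = [20.0, 33.0, 12.0]
rms = rms_percent_difference(osl, chamber)
```

Note that percent differences are signed, so the RMS figure captures the magnitude of disagreement while the sign pattern (here, all negative) shows the systematic underestimation tendency the abstract describes.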

  19. Tephrochronology of last termination sequences in Europe: a protocol for improved analytical precision and robust correlation procedures (a joint SCOTAV-INTIMATE proposal)

    NASA Astrophysics Data System (ADS)

    Turney, Chris S. M.; Lowe, J. John; Davies, Siwan M.; Hall, Valerie; Lowe, David J.; Wastegård, Stefan; Hoek, Wim Z.; Alloway, Brent

    2004-02-01

    The precise sequence of events during the Last Termination (18 000-9000 14C yr BP), and the extent to which major environmental changes were synchronous, are difficult to establish using the radiocarbon method alone because of serious distortions of the radiocarbon time-scale, as well as the influence of site-specific errors that can affect the materials dated. Attention has therefore turned to other methods that can provide independent tests of the chronology and correlation of events during the Last Termination. With emphasis on European sequences, we summarise here the potential of tephrostratigraphy and tephrochronology to fulfil this role. Recent advances in the detection and analysis of hidden tephra layers (cryptotephra) indicate that some tephras of Last Termination age are much more widespread in Europe than appreciated hitherto, and a number of new tephra deposits have also been identified. There is much potential for developing an integrated tephrochronological framework for Europe, which can help to underpin the overall chronology of events during the Last Termination. For that potential to be realised, however, there needs to be a more systematic and robust analysis of tephra layers than has been the practice in the past. We propose a protocol for improving analytical and reporting procedures, as well as the establishment of a centralised database of the results, which will provide an important geochronological tool to support a diverse range of stratigraphical studies, including opportunities to reassess volcanic hazards. Although aimed primarily at Europe, the protocol proposed here is of equal relevance to other regions and periods of interest.

  20. Robustness of self-organizing chemoattractant field arising from precise pulse induction of its breakdown enzyme: a single-cell level analysis of PDE expression in Dictyostelium.

    PubMed

    Masaki, Noritaka; Fujimoto, Koichi; Honda-Kitahara, Mai; Hada, Emi; Sawai, Satoshi

    2013-03-05

    The oscillation of the chemoattractant cyclic AMP (cAMP) in Dictyostelium discoideum is a collective phenomenon that occurs when the basal level of extracellular cAMP exceeds a threshold and invokes cooperative mutual excitation of cAMP synthesis and secretion. For pulses to be relayed from cell to cell repetitively, secreted cAMP must be cleared and brought down to the subthreshold level. One of the main determinants of the oscillatory behavior is thus how much extracellular cAMP is degraded by extracellular phosphodiesterase (PDE). To date, the exact nature of PDE gene regulation remains elusive. Here, we performed live imaging analysis of mRNA transcripts for pdsA, the gene encoding extracellular PDE. Our analysis revealed that pdsA is upregulated during the rising phase of cAMP oscillations. Furthermore, by analyzing isolated cells, we show that expression of pdsA is strictly dependent on the presence of extracellular cAMP. pdsA is induced only at ∼1 nM extracellular cAMP, which is almost identical to the threshold concentration for the cAMP relay response. The observed precise regulation of PDE expression, together with degradation of extracellular cAMP by PDE, forms a dual positive and negative feedback circuit, and model analysis shows that this sets the cAMP level near the threshold concentration for the cAMP relay response for a wide range of adenylyl cyclase activity. The overlap of the thresholds could allow oscillations of chemoattractant cAMP to self-organize under various starving conditions, making development robust to fluctuations in the environment.

  1. Results from a round-robin study assessing the precision and accuracy of LA-ICPMS U/Pb geochronology of zircon

    NASA Astrophysics Data System (ADS)

    Hanchar, J. M.

    2009-12-01

    A round-robin study was undertaken to assess the current state of precision and accuracy that can be achieved in LA-ICPMS U/Pb geochronology of zircon. The initial plan was to select abundant, well-characterized zircon samples to distribute to participants in the study. Three suitable samples were found, evaluated, and dated using ID-TIMS. Twenty-five laboratories in North America and Europe were asked to participate in the study. Eighteen laboratories agreed to participate, of which seventeen submitted final results. It was decided at the outset of the project that the identities of the participating researchers and laboratories not be revealed until the manuscript stemming from the project was completed. Participants were sent either fragments of zircon crystals or whole zircon crystals, selected randomly after being thoroughly mixed. Participants were asked to conform to specific requirements. These included providing all analytical conditions and equipment used, submitting all data acquired, and submitting their preferred data and preferred ages for the three samples. The participating researchers used a wide range of analytical methods (e.g., instrumentation, data reduction, error propagation) for the LA-ICPMS U/Pb geochronology. These combined factors made direct comparison of the submitted results difficult. Most of the LA-ICPMS results submitted were within 2% r.s.d. of the ID-TIMS values for the three samples in the study. However, the error bars for the majority of the LA-ICPMS results for the three samples did not overlap with the ID-TIMS results. These results suggest a general underestimation of the errors calculated for the LA-ICPMS U/Pb zircon analyses.

  2. The 1998-2000 SHADOZ (Southern Hemisphere ADditional OZonesondes) Tropical Ozone Climatology: Ozonesonde Precision, Accuracy and Station-to-Station Variability

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, Anne M.; McPeters, R. D.; Oltmans, S. J.; Schmidlin, F. J.; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    As part of the SAFARI-2000 campaign, additional launches of ozonesondes were made at Irene, South Africa and at Lusaka, Zambia. These represent campaign augmentations to the SHADOZ database described in this paper. This network of 10 southern hemisphere tropical and subtropical stations, designated the Southern Hemisphere ADditional OZonesondes (SHADOZ) project and established from operational sites, provided over 1000 profiles from ozonesondes and radiosondes during the period 1998-2000. (Since that time, two more stations, one in southern Africa, have joined SHADOZ). Archived data are available at: http://code9l6.gsfc.nasa.gov/Data-services/shadoz. Uncertainties and accuracies within the SHADOZ ozone data set are evaluated by analyzing: (1) imprecisions in stratospheric ozone profiles and in methods of extrapolating ozone above balloon burst; (2) comparisons of column-integrated total ozone from sondes with total ozone from the Earth-Probe/TOMS (Total Ozone Mapping Spectrometer) satellite and ground-based instruments; (3) possible biases from station to station due to variations in ozonesonde characteristics. The key results are: (1) Ozonesonde precision is 5%; (2) Integrated total ozone column amounts from the sondes are in good agreement (2-10%) with independent measurements from ground-based instruments at five SHADOZ sites and with overpass measurements from the TOMS satellite (version 7 data). (3) Systematic variations in TOMS-sonde offsets and in ground-based-sonde offsets from station to station reflect biases in sonde technique as well as in satellite retrieval. Discrepancies are present in both stratospheric and tropospheric ozone. (4) There is evidence for a zonal wave-one pattern in total and tropospheric ozone, but not in stratospheric ozone.

  3. VLT/SPHERE robust astrometry of the HR8799 planets at milliarcsecond-level accuracy. Orbital architecture analysis with PyAstrOFit

    NASA Astrophysics Data System (ADS)

    Wertz, O.; Absil, O.; Gómez González, C. A.; Milli, J.; Girard, J. H.; Mawet, D.; Pueyo, L.

    2017-02-01

    Context. HR8799 is orbited by at least four giant planets, making it a prime target for the recently commissioned Spectro-Polarimetric High-contrast Exoplanet REsearch instrument (VLT/SPHERE). As such, it was observed on five consecutive nights during the SPHERE science verification in December 2014. Aims: We aim to take full advantage of the SPHERE capabilities to derive accurate astrometric measurements based on H-band images acquired with the Infra-Red Dual-band Imaging and Spectroscopy (IRDIS) subsystem, and to explore the ultimate astrometric performance of SPHERE in this observing mode. We also aim to present a detailed analysis of the orbital parameters for the four planets. Methods: We performed thorough post-processing of the IRDIS images with the Vortex Imaging Processing (VIP) package to derive a robust astrometric measurement for the four planets. This includes the identification and careful evaluation of the different contributions to the error budget, including systematic errors. Combining our astrometric measurements with the ones previously published in the literature, we constrain the orbital parameters of the four planets using PyAstrOFit, our new open-source python package dedicated to orbital fitting using Bayesian inference with Markov chain Monte Carlo sampling. Results: We report the astrometric positions for epoch 2014.93 with an accuracy down to 2.0 mas, mainly limited by the astrometric calibration of IRDIS. For each planet, we derive the posterior probability density functions for the six Keplerian elements and identify sets of highly probable orbits. For planet d, there is clear evidence for nonzero eccentricity (e ≈ 0.35), without completely excluding solutions with smaller eccentricities. The three other planets are consistent with circular orbits, although their probability distributions spread beyond e = 0.2, and show a peak at e ≃ 0.1 for planet e. The four planets have consistent inclinations of approximately 30° with respect to the sky

  4. Two dimensional assisted liquid chromatography - a chemometric approach to improve accuracy and precision of quantitation in liquid chromatography using 2D separation, dual detectors, and multivariate curve resolution.

    PubMed

    Cook, Daniel W; Rutan, Sarah C; Stoll, Dwight R; Carr, Peter W

    2015-02-15

    Comprehensive two-dimensional liquid chromatography (LC×LC) is rapidly evolving as the preferred method for the analysis of complex biological samples owing to its much greater resolving power compared to conventional one-dimensional LC (1D-LC). While its enhanced resolving power makes this method appealing, it has been shown that the precision of quantitation in LC×LC is generally not as good as that obtained with 1D-LC. The poorer quantitative performance of LC×LC is due to several factors, including but not limited to undersampling of the first dimension, dilution of analytes during transit from the first-dimension (1D) column to the second-dimension (2D) column, and larger relative background signals. A new strategy, 2D assisted liquid chromatography (2DALC), is presented here. 2DALC makes use of a diode array detector placed at the end of each column, producing both multivariate 1D and two-dimensional (2D) chromatograms. The increased resolution of the analytes provided by the addition of a second dimension of separation enables the determination of relatively pure analyte absorbance spectra from the 2D detector signal, which can be used to initiate the treatment of data from the first-dimension detector using multivariate curve resolution-alternating least squares (MCR-ALS). In this way, the approach leverages the strengths of both separation methods in a single analysis: the 2D detector data provide relatively pure analyte spectra to the MCR-ALS algorithm, and the final quantitative results are obtained from the resolved 1D chromatograms, which have a much higher sampling rate and lower background signal than obtained in conventional single-detector LC×LC, to obtain accurate and precise quantitative results. It is shown that 2DALC is superior to both single-detector selective or comprehensive LC×LC and 1D-LC for quantitation of compounds that appear as severely overlapped peaks in the 1D chromatogram - this is
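
The MCR-ALS step at the core of 2DALC can be sketched as a toy alternating-least-squares factorization. This simplified version (generic nonnegativity clipping, synthetic data, and an initialization near the true spectra standing in for the 2D-detector spectra) is illustrative only, not the authors' implementation:

```python
import numpy as np

def mcr_als(D, S0, n_iter=50):
    """Alternate least-squares updates of concentration profiles C and
    spectra S so that D ~ C @ S.T, clipping negatives to zero at each
    step as a simple nonnegativity constraint."""
    S = S0.copy()
    for _ in range(n_iter):
        C = np.clip(D @ S @ np.linalg.pinv(S.T @ S), 0.0, None)
        S = np.clip(D.T @ C @ np.linalg.pinv(C.T @ C), 0.0, None)
    return C, S

# Synthetic 1D chromatogram: two overlapped Gaussian elution profiles
# observed at three "wavelengths".
t = np.linspace(0.0, 1.0, 60)
C_true = np.stack([np.exp(-0.5 * ((t - 0.40) / 0.08) ** 2),
                   np.exp(-0.5 * ((t - 0.55) / 0.08) ** 2)], axis=1)
S_true = np.array([[1.0, 0.2, 0.0],
                   [0.1, 0.8, 0.6]]).T     # (3 wavelengths, 2 analytes)
D = C_true @ S_true.T                      # noise-free data matrix
C_fit, S_fit = mcr_als(D, S0=S_true + 0.05)
```

The key idea 2DALC exploits is visible in the last line: a good spectral initialization (here, the perturbed true spectra; in 2DALC, the cleaner spectra from the 2D detector) lets ALS resolve overlapped 1D peaks into per-analyte profiles that can then be integrated for quantitation.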

  5. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y.-L.; Szidat, S.; Czimczik, C. I.

    2015-09-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91 % of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our setup, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our setup were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively
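
The blank-correction logic implied by quantifying modern and fossil extraneous carbon can be sketched as a simple two-component mass balance. Only the blank masses below come from the abstract; the sample values and the function itself are illustrative assumptions:

```python
def blank_correct_fm(fm_measured, m_sample_ug, m_modern_blank_ug,
                     m_fossil_blank_ug):
    """Two-component blank correction of a fraction-modern (Fm) value:
    the measured CO2 is treated as sample carbon plus a modern blank
    (Fm = 1) and a fossil blank (Fm = 0), and the mass balance is
    solved for the sample's true Fm."""
    m_total = m_sample_ug + m_modern_blank_ug + m_fossil_blank_ug
    return (fm_measured * m_total - m_modern_blank_ug) / m_sample_ug

# Blank masses from the abstract (0.8 ug modern, 0.67 ug fossil);
# the measured Fm and sample mass are hypothetical.
fm_true = blank_correct_fm(fm_measured=0.520, m_sample_ug=50.0,
                           m_modern_blank_ug=0.8, m_fossil_blank_ug=0.67)
```

As the sample mass shrinks toward the microgram blank masses, the correction (and its uncertainty) grows, which is why the abstract emphasizes quantifying blanks as a function of sample size.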

  6. Accuracy and precision of 14C-based source apportionment of organic and elemental carbon in aerosols using the Swiss_4S protocol

    NASA Astrophysics Data System (ADS)

    Mouteva, G. O.; Fahrni, S. M.; Santos, G. M.; Randerson, J. T.; Zhang, Y. L.; Szidat, S.; Czimczik, C. I.

    2015-04-01

    Aerosol source apportionment remains a critical challenge for understanding the transport and aging of aerosols, as well as for developing successful air pollution mitigation strategies. The contributions of fossil and non-fossil sources to organic carbon (OC) and elemental carbon (EC) in carbonaceous aerosols can be quantified by measuring the radiocarbon (14C) content of each carbon fraction. However, the use of 14C in studying OC and EC has been limited by technical challenges related to the physical separation of the two fractions and small sample sizes. There is no common procedure for OC/EC 14C analysis, and uncertainty studies have largely focused on the precision of yields. Here, we quantified the uncertainty in 14C measurement of aerosols associated with the isolation and analysis of each carbon fraction with the Swiss_4S thermal-optical analysis (TOA) protocol. We used an OC/EC analyzer (Sunset Laboratory Inc., OR, USA) coupled to a vacuum line to separate the two components. Each fraction was thermally desorbed and converted to carbon dioxide (CO2) in pure oxygen (O2). On average, 91% of the evolving CO2 was then cryogenically trapped on the vacuum line, reduced to filamentous graphite, and measured for its 14C content via accelerator mass spectrometry (AMS). To test the accuracy of our set-up, we quantified the total amount of extraneous carbon introduced during the TOA sample processing and graphitization as the sum of modern and fossil (14C-depleted) carbon introduced during the analysis of fossil reference materials (adipic acid for OC and coal for EC) and contemporary standards (oxalic acid for OC and rice char for EC) as a function of sample size. We further tested our methodology by analyzing five ambient airborne particulate matter (PM2.5) samples with a range of OC and EC concentrations and 14C contents in an interlaboratory comparison. The total modern and fossil carbon blanks of our set-up were 0.8 ± 0.4 and 0.67 ± 0.34 μg C, respectively

  7. Non-Destructive Assay (NDA) Uncertainties Impact on Physical Inventory Difference (ID) and Material Balance Determination: Sources of Error, Precision/Accuracy, and ID/Propagation of Error (POV)

    SciTech Connect

    Wendelberger, James G.

    2016-10-31

    These are slides from a presentation made by a researcher from Los Alamos National Laboratory. The following topics are covered: sources of error for NDA gamma measurements, precision and accuracy are two important characteristics of measurements, four items processed in a material balance area during the inventory time period, inventory difference and propagation of variance, sum in quadrature, and overview of the ID/POV process.
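
The inventory difference (ID) and sum-in-quadrature propagation of variance mentioned in these slides can be sketched as follows; the material-balance numbers are hypothetical:

```python
import math

def inventory_difference(beginning, additions, removals, ending):
    """ID = (beginning inventory + additions) - (removals + ending)."""
    return (beginning + additions) - (removals + ending)

def id_uncertainty(sigmas):
    """Propagation of variance for independent measurement errors: the
    standard deviation of the ID is the individual sigmas summed in
    quadrature (square root of the sum of squares)."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical material balance (kg) with NDA measurement sigmas for
# the four terms processed during the inventory period.
ID = inventory_difference(beginning=100.0, additions=20.0,
                          removals=15.0, ending=104.7)
sigma_ID = id_uncertainty([0.5, 0.2, 0.3, 0.5])
```

A nonzero ID is only significant if it is large relative to `sigma_ID`; in this toy case the 0.3 kg difference is well within one standard deviation of the propagated measurement uncertainty.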

  8. SU-E-J-03: Characterization of the Precision and Accuracy of a New, Preclinical, MRI-Guided Focused Ultrasound System for Image-Guided Interventions in Small-Bore, High-Field Magnets

    SciTech Connect

    Ellens, N; Farahani, K

    2015-06-15

    Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through Gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 ± 0.05 mm. When image-guidance was included by targeting random locations, the accuracy was 0.5 ± 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 ± 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error. Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many

  9. Accuracy and precision of pseudo-continuous arterial spin labeling perfusion during baseline and hypercapnia: a head-to-head comparison with ¹⁵O H₂O positron emission tomography.

    PubMed

    Heijtel, D F R; Mutsaerts, H J M M; Bakker, E; Schober, P; Stevens, M F; Petersen, E T; van Berckel, B N M; Majoie, C B L M; Booij, J; van Osch, M J P; Vanbavel, E; Boellaard, R; Lammertsma, A A; Nederveen, A J

    2014-05-15

    Measurements of the cerebral blood flow (CBF) and cerebrovascular reactivity (CVR) provide useful information about cerebrovascular condition and regional metabolism. Pseudo-continuous arterial spin labeling (pCASL) is a promising non-invasive MRI technique to quantitatively measure the CBF, whereas additional hypercapnic pCASL measurements are currently showing great promise to quantitatively assess the CVR. However, the introduction of pCASL at a larger scale awaits further evaluation of the exact accuracy and precision compared to the gold standard. ¹⁵O H₂O positron emission tomography (PET) is currently regarded as the most accurate and precise method to quantitatively measure both CBF and CVR, though it is one of the more invasive methods as well. In this study we therefore assessed the accuracy and precision of quantitative pCASL-based CBF and CVR measurements by performing a head-to-head comparison with ¹⁵O H₂O PET, based on quantitative CBF measurements during baseline and hypercapnia. We demonstrate that pCASL CBF imaging is accurate during both baseline and hypercapnia with respect to ¹⁵O H₂O PET with a comparable precision. These results pave the way for quantitative usage of pCASL MRI in both clinical and research settings.

  10. Detecting declines in the abundance of a bull trout (Salvelinus confluentus) population: Understanding the accuracy, precision, and costs of our efforts

    USGS Publications Warehouse

    Al-Chokhachy, R.; Budy, P.; Conner, M.

    2009-01-01

    Using empirical field data for bull trout (Salvelinus confluentus), we evaluated the trade-off between power and sampling effort-cost using Monte Carlo simulations of commonly collected mark-recapture-resight and count data, and we estimated the power to detect changes in abundance across different time intervals. We also evaluated the effects of monitoring different components of a population and stratification methods on the precision of each method. Our results illustrate substantial variability in the relative precision, cost, and information gained from each approach. While grouping estimates by age or stage class substantially increased the precision of estimates, spatial stratification of sampling units resulted in limited increases in precision. Although mark-resight methods allowed for estimates of abundance versus indices of abundance, our results suggest snorkel surveys may be a more affordable monitoring approach across large spatial scales. Detecting a 25% decline in abundance after 5 years was not possible, regardless of technique (power = 0.80), without high sampling effort (48% of study site). Detecting a 25% decline was possible after 15 years, but still required high sampling efforts. Our results suggest detecting moderate changes in abundance of freshwater salmonids requires considerable resource and temporal commitments and highlight the difficulties of using abundance measures for monitoring bull trout populations.
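
    The power analysis described above can be sketched as a Monte Carlo loop: simulate noisy annual abundance indices under an assumed 25% decline, fit a log-linear trend, and count how often the decline is detected. All parameter values below (initial abundance, observation CV, significance threshold) are illustrative assumptions, not the authors' field values:

```python
import numpy as np

rng = np.random.default_rng(42)

def power_to_detect_decline(n0=200, decline=0.25, years=15, cv=0.3,
                            n_sims=2000, alpha=0.05):
    """Monte Carlo power: fit a log-linear trend to noisy annual counts and
    count how often a one-sided test detects the assumed decline."""
    t = np.arange(years + 1)
    true = n0 * (1.0 - decline) ** (t / years)   # smooth 25% decline over 'years'
    detections = 0
    for _ in range(n_sims):
        # Lognormal observation error with coefficient of variation ~ cv.
        obs = true * rng.lognormal(mean=0.0, sigma=cv, size=t.size)
        y = np.log(obs)
        # Least-squares slope and its standard error.
        X = np.vstack([np.ones_like(t), t]).T
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        se = np.sqrt(resid @ resid / (t.size - 2) / np.sum((t - t.mean()) ** 2))
        if beta[1] / se < -1.761:  # one-sided t critical value, df = 14
            detections += 1
    return detections / n_sims
```

    With these settings the estimated power comes out well below 0.80, consistent with the abstract's finding that detecting a 25% decline requires high sampling effort.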

  11. Improving accuracy and precision of ice core δD (CH4) analyses using methane pre- and hydrogen post-pyrolysis trapping and subsequent chromatographic separation

    NASA Astrophysics Data System (ADS)

    Bock, M.; Schmitt, J.; Beck, J.; Schneider, R.; Fischer, H.

    2013-12-01

    Firn and polar ice cores offer the only direct paleoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD (CH4)) 0.5 to 1.5 kg of ice was previously necessary to achieve the required precision. Here we present a method to improve precision and reduce the sample amount for δD (CH4) measurements on (ice core) air. Pre-concentrated methane is focused before a high temperature oven (pre pyrolysis trapping), and molecular hydrogen formed by pyrolysis is trapped afterwards (post pyrolysis trapping), both on a carbon-PLOT capillary at -196 °C. A small amount of methane and krypton are trapped together with H2 and must be separated using a short second chromatographic column to ensure accurate results. Pre and post pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods we estimate a precision of 2.2‰ for 350 g of ice (or roughly 30 mL (at standard temperature and pressure (STP)) of air) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP) or about 500 pmol of pure CH4.

  12. Accuracy and precision of a new portable ultrasound scanner, the BME-150A, in residual urine volume measurement: a comparison with the BladderScan BVI 3000.

    PubMed

    Choe, Jin Ho; Lee, Ji Yeon; Lee, Kyu-Sung

    2007-06-01

    The objective of the study was to determine the relative accuracy of a new portable ultrasound unit, BME-150A, and the BladderScan BVI 3000, as assessed in comparison with the catheterized residual urine volume. We used both of these machines to prospectively measure the residual urine volumes of 89 patients (40 men and 49 women) who were undergoing urodynamic studies. The ultrasound measurements were compared with the post-scan bladder volumes obtained by catheterization in the same patients. The ultrasounds were followed immediately (within 5 min) by in-and-out catheterizations while the patients were in a supine position. There were a total of 116 paired measurements made. The BME-150A and the BVI 3000 demonstrated a correlation with the residual volume of 0.92 and 0.94, and a mean difference from the true residual volume of 7.8 and 3.6 ml, respectively. Intraclass correlation coefficients for the accuracy of the two bladder scans were 0.90 for BME-150A and 0.95 for BVI 3000. The difference in accuracy between the two models was not significant (p = 0.2421). There were six cases in which a follow-up evaluation of falsely elevated post-void residual urine volume measurements on the ultrasound studies resulted in comparatively low catheterized volumes, with a range of differences from 66 to 275.5 ml. These cases were diagnosed with an ovarian cyst, uterine myoma, or uterine adenomyosis on pelvic ultrasonography. The accuracy of the BME-150A is comparable to that of the BVI 3000 in estimating the true residual urine volumes and is sufficient for us to recommend its use as an alternative to catheterization.

  13. Improving accuracy and precision of ice core δD(CH4) analyses using methane pre-pyrolysis and hydrogen post-pyrolysis trapping and subsequent chromatographic separation

    NASA Astrophysics Data System (ADS)

    Bock, M.; Schmitt, J.; Beck, J.; Schneider, R.; Fischer, H.

    2014-07-01

    Firn and polar ice cores offer the only direct palaeoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD(CH4) or δ2H(CH4)) 0.5 to 1.5 kg of ice was hitherto used. Here we present a method to improve precision and reduce the sample amount for δD(CH4) measurements in (ice core) air. Pre-concentrated methane is focused in front of a high temperature oven (pre-pyrolysis trapping), and molecular hydrogen formed by pyrolysis is trapped afterwards (post-pyrolysis trapping), both on a carbon-PLOT capillary at -196 °C. Argon, oxygen, nitrogen, carbon monoxide, unpyrolysed methane and krypton are trapped together with H2 and must be separated using a second short, cooled chromatographic column to ensure accurate results. Pre- and post-pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods, we estimate a precision of 2.3‰ for 350 g of ice (or roughly 30 mL - at standard temperature and pressure (STP) - of air) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP) or about 500 pmol of pure CH4.

  14. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue is related directly to the dramatic escalation in the developmen...

  15. Simultaneous Variable Flip Angle – Actual Flip Angle Imaging (VAFI) Method for Improved Accuracy and Precision of Three-dimensional T1 and B1 Measurements

    PubMed Central

    Hurley, Samuel A.; Yarnykh, Vasily L.; Johnson, Kevin M.; Field, Aaron S.; Alexander, Andrew L.; Samsonov, Alexey A.

    2011-01-01

    A new time-efficient and accurate technique for simultaneous mapping of T1 and B1 is proposed based on a combination of the Actual Flip angle Imaging (AFI) and Variable Flip Angle (VFA) methods: VAFI. VAFI utilizes a single AFI and one or more spoiled gradient-echo (SPGR) acquisitions with a simultaneous non-linear fitting procedure to yield accurate T1/B1 maps. The advantage of VAFI is high accuracy at either short T1 times or long TR in the AFI sequence. Simulations show this method is accurate to 0.03% in FA and 0.07% in T1 for TR/T1 times over the range of 0.01 to 0.45. We show for the case of brain imaging that it is sufficient to use only one small flip angle SPGR acquisition, which results in reduced spoiling requirements and a significant scan time reduction compared to the original VFA. In-vivo validation yielded high-quality 3D T1 maps and T1 measurements within 10% of previously published values, and within a clinically acceptable scan time. The VAFI method will increase the accuracy and clinical feasibility of many quantitative MRI methods requiring T1/B1 mapping such as DCE perfusion and quantitative MTI. PMID:22139819
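
    For context, the AFI component of VAFI estimates the actual flip angle from two interleaved repetition times TR1 < TR2; in the short-TR regime the ratio of the two steady-state signals gives the flip angle in closed form. A sketch of the standard AFI approximation (the TR and T1 values used are illustrative):

```python
import math

def afi_flip_angle(S1, S2, TR1=20.0, TR2=100.0):
    """Actual flip angle (degrees) from the two AFI steady-state signals,
    via the short-TR approximation alpha = arccos((r*n - 1)/(n - r)),
    with r = S2/S1 and n = TR2/TR1."""
    r = S2 / S1
    n = TR2 / TR1
    return math.degrees(math.acos((r * n - 1.0) / (n - r)))
```

    VAFI replaces a pure closed-form inversion like this with a joint non-linear fit of the AFI and SPGR signals, which is what restores accuracy at short T1 or long TR.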

  16. Method and system using power modulation for maskless vapor deposition of spatially graded thin film and multilayer coatings with atomic-level precision and accuracy

    DOEpatents

    Montcalm, Claude; Folta, James Allen; Tan, Swie-In; Reiss, Ira

    2002-07-30

    A method and system for producing a film (preferably a thin film with highly uniform or highly accurate custom graded thickness) on a flat or graded substrate (such as concave or convex optics), by sweeping the substrate across a vapor deposition source operated with time-varying flux distribution. In preferred embodiments, the source is operated with time-varying power applied thereto during each sweep of the substrate to achieve the time-varying flux distribution as a function of time. A user selects a source flux modulation recipe for achieving a predetermined desired thickness profile of the deposited film. The method relies on precise modulation of the deposition flux to which a substrate is exposed to provide a desired coating thickness distribution.
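
    The principle in the abstract, sweeping a substrate across a source whose flux is modulated in time, amounts to integrating a moving flux footprint over the sweep. A numerical sketch; the Gaussian footprint and all parameter values are illustrative assumptions, not the patented recipe:

```python
import math

def thickness_profile(positions, sweep_speed, power, flux_width=0.02,
                      dt=1e-3, t_total=2.0):
    """Deposited thickness at each substrate position (m) after one sweep
    across a source with a Gaussian flux footprint whose amplitude follows
    power(t). The substrate moves at sweep_speed (m/s); values illustrative."""
    thickness = [0.0] * len(positions)
    for n in range(int(t_total / dt)):
        t = n * dt
        source_center = sweep_speed * t   # source position in substrate frame
        amp = power(t)                    # time-varying flux amplitude
        for i, x in enumerate(positions):
            thickness[i] += amp * math.exp(
                -((x - source_center) / flux_width) ** 2) * dt
    return thickness
```

    Holding power(t) constant yields a uniform film; choosing a non-constant modulation recipe grades the thickness along the sweep direction, which is the inverse problem the patent's recipe selection addresses.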

  17. Accuracy and precision of reconstruction of complex refractive index in near-field single-distance propagation-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Gureyev, Timur; Mohammadi, Sara; Nesterets, Yakov; Dullin, Christian; Tromba, Giuliana

    2013-10-01

    We investigate the quantitative accuracy and noise sensitivity of reconstruction of the 3D distribution of complex refractive index, n(r)=1-δ(r)+iβ(r), in samples containing materials with different refractive indices using propagation-based phase-contrast computed tomography (PB-CT). Our present study is limited to the case of parallel-beam geometry with monochromatic synchrotron radiation, but can be readily extended to cone-beam CT and partially coherent polychromatic X-rays at least in the case of weakly absorbing samples. We demonstrate that, except for regions near the interfaces between distinct materials, the distribution of imaginary part of the refractive index, β(r), can be accurately reconstructed from a single projection image per view angle using phase retrieval based on the so-called homogeneous version of the Transport of Intensity equation (TIE-Hom) in combination with conventional CT reconstruction. In contrast, the accuracy of reconstruction of δ(r) depends strongly on the choice of the "regularization" parameter in TIE-Hom. We demonstrate by means of an instructive example that for some multi-material samples, a direct application of the TIE-Hom method in PB-CT produces qualitatively incorrect results for δ(r), which can be rectified either by collecting additional projection images at each view angle, or by utilising suitable a priori information about the sample. As a separate observation, we also show that, in agreement with previous reports, it is possible to significantly improve signal-to-noise ratio by increasing the sample-to-detector distance in combination with TIE-Hom phase retrieval in PB-CT compared to conventional ("contact") CT, with the maximum achievable gain of the order of 0.3δ /β. This can lead to improved image quality and/or reduction of the X-ray dose delivered to patients in medical imaging.
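
    The TIE-Hom retrieval discussed above is commonly implemented as a single Fourier-space filter (a Paganin-type filter). A minimal numpy sketch, assuming parallel-beam monochromatic illumination and a single fixed δ/β; the exact regularization convention varies between implementations:

```python
import numpy as np

def tie_hom_phase_retrieval(I, I0, delta, beta, dist, wavelength, pixel):
    """Single-distance TIE-Hom filter: retrieve the projected thickness map
    from one propagation-based image, assuming a homogeneous sample with a
    fixed delta/beta ratio."""
    mu = 4.0 * np.pi * beta / wavelength            # linear attenuation coeff.
    ky = np.fft.fftfreq(I.shape[0], d=pixel) * 2.0 * np.pi
    kx = np.fft.fftfreq(I.shape[1], d=pixel) * 2.0 * np.pi
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filt = 1.0 + dist * delta / mu * k2             # low-pass "regularizer"
    contact = np.real(np.fft.ifft2(np.fft.fft2(I / I0) / filt))
    return -np.log(np.clip(contact, 1e-12, None)) / mu   # thickness map
```

    The assumed δ/β ratio sets the strength of the filter and plays the role of the "regularization" parameter whose choice the abstract shows is critical for multi-material samples.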

  18. Preliminary assessment of the accuracy and precision of TOPEX/POSEIDON altimeter data with respect to the large-scale ocean circulation

    NASA Technical Reports Server (NTRS)

    Wunsch, Carl; Stammer, Detlef

    1994-01-01

    TOPEX/POSEIDON sea surface height measurements are examined for quantitative consistency with known elements of the oceanic general circulation and its variability. Project-provided corrections were accepted but are tested as part of the overall results. The ocean was treated as static over each 10-day repeat cycle, and maps of the absolute sea surface topography were constructed from simple averages in 2 deg x 2 deg bins. A hybrid geoid model formed from a combination of the recent Joint Gravity Model-2 and the project-provided Ohio State University geoid was used to estimate the absolute topography in each 10-day period. Results are examined in terms of the annual average, seasonal average, seasonal variations, and variations near the repeat period. Conclusions are as follows: the orbit error is now difficult to observe, having been reduced to a level at or below the level of other error sources; the geoid dominates the error budget of the estimates of the absolute topography; the estimated seasonal cycle is consistent with prior estimates; shorter-period variability is dominated on the largest scales by an oscillation near 50 days in spherical harmonics Y₁ᵐ(θ, λ) with an amplitude near 10 cm, close to the simplest alias of the M₂ tide. This spectral peak and others visible in the periodograms support the hypothesis that the largest remaining time-dependent errors lie in the tidal models. Though discrepancies attributed to the geoid are within the formal uncertainties of the geoid estimates, their removal is urgent for circulation studies. Current gross accuracy of the TOPEX/POSEIDON mission is in the range of 5-10 cm, distributed over a broad band of frequencies and wavenumbers. In finite bands, accuracies approach the 1-cm level, and expected improvements arising from extended mission duration should reduce these numbers by nearly an order of magnitude.

  19. The Influence of External Loads on Movement Precision During Active Shoulder Internal Rotation Movements as Measured by 3 Indices of Accuracy

    PubMed Central

    Brindle, Timothy J; Uhl, Timothy L; Nitz, Arthur J; Shapiro, Robert

    2006-01-01

    Context: Using constant, variable, and absolute error to measure movement accuracy might provide a more complete description of joint position sense than any of these values alone. Objective: To determine the effect of loaded movements and type of feedback on shoulder joint position sense and movement velocity. Design: Applied study with repeated measures comparing type of feedback and the presence of a load. Setting: Laboratory. Patients or Other Participants: Twenty healthy subjects (age = 27.2 ± 3.3 years, height = 173.2 ± 18.1 cm, mass = 70.8 ± 14.5 kg) were seated with their arms in a custom shoulder wheel. Intervention(s): Subjects internally rotated 27° in the plane of the scapula, with either visual feedback provided by a video monitor or proprioceptive feedback provided by prior passive positioning, to a target at 48° of external rotation. Subjects performed the internal rotation movements with video feedback and proprioceptive feedback and with and without load (5% of body weight). Main Outcome Measure(s): High-speed motion analysis recorded peak rotational velocity and accuracy. Constant, variable, and absolute error for joint position sense was calculated from the final position. Results: Unloaded movements demonstrated significantly greater variable error than for loaded movements (2.0 ± 0.7° and 1.5 ± 0.4°, respectively) (P < .05), but there were no differences in constant or absolute error. Peak velocity was greater for movements with proprioceptive feedback (45.6 ± 2.9°/s) than visual feedback (39.1 ± 2.1°/s) and for unloaded (47.8 ± 3.6°/s) than loaded (36.9 ± 1.0°/s) movements (P < .05). Conclusions: Shoulder joint position sense demonstrated greater variable error unloaded versus loaded movements. Both visual feedback and additional loads decreased peak rotational velocity. PMID:16619096

  20. High-accuracy, high-precision, high-resolution, continuous monitoring of urban greenhouse gas emissions? Results to date from INFLUX

    NASA Astrophysics Data System (ADS)

    Davis, K. J.; Brewer, A.; Cambaliza, M. O. L.; Deng, A.; Hardesty, M.; Gurney, K. R.; Heimburger, A. M. F.; Karion, A.; Lauvaux, T.; Lopez-Coto, I.; McKain, K.; Miles, N. L.; Patarasuk, R.; Prasad, K.; Razlivanov, I. N.; Richardson, S.; Sarmiento, D. P.; Shepson, P. B.; Sweeney, C.; Turnbull, J. C.; Whetstone, J. R.; Wu, K.

    2015-12-01

    The Indianapolis Flux Experiment (INFLUX) is testing the boundaries of our ability to use atmospheric measurements to quantify urban greenhouse gas (GHG) emissions. The project brings together inventory assessments, tower-based and aircraft-based atmospheric measurements, and atmospheric modeling to provide high-accuracy, high-resolution, continuous monitoring of emissions of GHGs from the city. Results to date include a multi-year record of tower and aircraft based measurements of the urban CO2 and CH4 signal, long-term atmospheric modeling of GHG transport, and emission estimates for both CO2 and CH4 based on both tower and aircraft measurements. We will present these emissions estimates, the uncertainties in each, and our assessment of the primary needs for improvements in these emissions estimates. We will also present ongoing efforts to improve our understanding of atmospheric transport and background atmospheric GHG mole fractions, and to disaggregate GHG sources (e.g. biogenic vs. fossil fuel CO2 fluxes), topics that promise significant improvement in urban GHG emissions estimates.

  1. Accuracy and Precision in the Southern Hemisphere Additional Ozonesondes (SHADOZ) Dataset 1998-2000 in Light of the JOSIE-2000 Results

    NASA Technical Reports Server (NTRS)

    Witte, J. C.; Thompson, A. M.; Schmidlin, F. J.; Oltmans, S. J.; McPeters, R. D.; Smit, H. G. J.

    2003-01-01

    A network of 12 southern hemisphere tropical and subtropical stations in the Southern Hemisphere ADditional OZonesondes (SHADOZ) project has provided over 2000 profiles of stratospheric and tropospheric ozone since 1998. Balloon-borne electrochemical concentration cell (ECC) ozonesondes are used with standard radiosondes for pressure, temperature and relative humidity measurements. The archived data are available at: http://croc.gsfc.nasa.gov/shadoz. In Thompson et al., accuracies and imprecisions in the SHADOZ 1998-2000 dataset were examined using ground-based instruments and the TOMS total ozone measurement (version 7) as references. Small variations in ozonesonde technique introduced possible biases from station-to-station. SHADOZ total ozone column amounts are now compared to version 8 TOMS; discrepancies between the two datasets are reduced 2% on average. An evaluation of ozone variations among the stations is made using the results of a series of chamber simulations of ozone launches (JOSIE-2000, Juelich Ozonesonde Intercomparison Experiment) in which a standard reference ozone instrument was employed with the various sonde techniques used in SHADOZ. A number of variations in SHADOZ ozone data are explained when differences in solution strength, data processing and instrument type (manufacturer) are taken into account.

  2. The effect of dilution and the use of a post-extraction nucleic acid purification column on the accuracy, precision, and inhibition of environmental DNA samples

    USGS Publications Warehouse

    Mckee, Anna M.; Spear, Stephen F.; Pierson, Todd W.

    2015-01-01

    Isolation of environmental DNA (eDNA) is an increasingly common method for detecting presence and assessing relative abundance of rare or elusive species in aquatic systems via the isolation of DNA from environmental samples and the amplification of species-specific sequences using quantitative PCR (qPCR). Co-extracted substances that inhibit qPCR can lead to inaccurate results and subsequent misinterpretation about a species’ status in the tested system. We tested three treatments (5-fold and 10-fold dilutions, and spin-column purification) for reducing qPCR inhibition from 21 partially and fully inhibited eDNA samples collected from coastal plain wetlands and mountain headwater streams in the southeastern USA. All treatments reduced the concentration of DNA in the samples. However, column purified samples retained the greatest sensitivity. For stream samples, all three treatments effectively reduced qPCR inhibition. However, for wetland samples, the 5-fold dilution was less effective than other treatments. Quantitative PCR results for column purified samples were more precise than the 5-fold and 10-fold dilutions by 2.2× and 3.7×, respectively. Column purified samples consistently underestimated qPCR-based DNA concentrations by approximately 25%, whereas the directional bias in qPCR-based DNA concentration estimates differed between stream and wetland samples for both dilution treatments. While the directional bias of qPCR-based DNA concentration estimates differed among treatments and locations, the magnitude of inaccuracy did not. Our results suggest that 10-fold dilution and column purification effectively reduce qPCR inhibition in mountain headwater stream and coastal plain wetland eDNA samples, and if applied to all samples in a study, column purification may provide the most accurate relative qPCR-based DNA concentrations estimates while retaining the greatest assay sensitivity.
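
    The dilution treatments above rest on the standard qPCR back-calculation from quantification cycle (Cq) to starting quantity via a log-linear standard curve, with the dilution factor multiplied back in to recover the undiluted estimate. The slope and intercept values below are illustrative assumptions:

```python
def qpcr_concentration(cq, slope=-3.32, intercept=38.0):
    """Back-calculate starting DNA quantity from a quantification cycle (Cq)
    via a log-linear standard curve: Cq = slope*log10(quantity) + intercept.
    A slope of -3.32 corresponds to ~100% amplification efficiency."""
    return 10.0 ** ((cq - intercept) / slope)

def dilution_corrected(cq, dilution_factor, slope=-3.32, intercept=38.0):
    """Estimate of the undiluted concentration from a diluted replicate."""
    return dilution_factor * qpcr_concentration(cq, slope, intercept)
```

    Inhibition shows up as a Cq shift beyond what the dilution factor predicts, which is why dilution (or column purification) can restore accuracy at the cost of assay sensitivity.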

  3. Evaluation of the automated hematology analyzer Sysmex XT-2000iV™ compared to the ADVIA® 2120 for its use in dogs, cats, and horses: Part I--precision, linearity, and accuracy of complete blood cell count.

    PubMed

    Bauer, Natali; Nakagawa, Julia; Dunker, Cathrin; Failing, Klaus; Moritz, Andreas

    2011-11-01

    The automated laser-based hematology analyzer Sysmex XT-2000iV™ providing a complete blood cell count (CBC) and 5-part differential has been introduced in large veterinary laboratories. The aim of the current study was to determine precision, linearity, and accuracy of the Sysmex analyzer. Reference method for the accuracy study was the laser-based hematology analyzer ADVIA® 2120. For evaluation of accuracy, consecutive fresh blood samples from healthy and diseased cats (n = 216), dogs (n = 314), and horses (n = 174) were included. A low intra-assay coefficient of variation (CV) of approximately 1% was seen for the CBC except platelet count (PLT). An intra-assay CV ranging between 2% and 5.5% was evident for the differential count except for feline and equine monocytes (7.7%) and horse eosinophils (15.7%). Linearity was excellent for white blood cell count (WBC), hematocrit value, red blood cell count (RBC), and PLT. For all evaluated species, agreement was excellent for WBC and RBC, with Spearman rank correlation coefficients (rₛ) ranging from >0.99 to 0.98. Hematocrit value correlated excellently in cats and dogs, whereas for horses, a good correlation was evident. A good correlation between both analyzers was seen in feline and equine PLT (rₛ = 0.89 and 0.92, respectively), whereas correlation was excellent for dogs (rₛ = 0.93). Biases were close to 0 except for mean corpuscular hemoglobin concentration (4.11 to -7.25 mmol/l) and canine PLT (57 × 10⁹/l). Overall, the performance of the Sysmex analyzer was excellent and compared favorably with the ADVIA analyzer.

  4. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.
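
    The ideal-gas ratio measurement described above reduces to one line: expand helium isothermally from a known volume into the evacuated unknown volume and compare pressures. A sketch with hypothetical values:

```python
def unknown_volume(v_known, p_initial, p_final):
    """Isothermal expansion of helium from a known volume into an evacuated
    unknown volume: P_i * V_k = P_f * (V_k + V_u), so V_u = V_k * (P_i/P_f - 1).
    Pressures in any consistent unit; result in the unit of v_known."""
    return v_known * (p_initial / p_final - 1.0)
```

    The temperature- and computer-controlled environment mentioned in the abstract is what justifies the isothermal ideal-gas assumption; at low helium pressures, non-ideality corrections are small.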

  5. Precision optical metrology without lasers

    NASA Astrophysics Data System (ADS)

    Bergmann, Ralf B.; Burke, Jan; Falldorf, Claas

    2015-07-01

    Optical metrology is a key technique when it comes to precise and fast measurement with a resolution down to the micrometer or even nanometer regime. The choice of a particular optical metrology technique and the quality of results depends on sample parameters such as size, geometry and surface roughness as well as user requirements such as resolution, measurement time and robustness. Interferometry-based techniques are well known for their low measurement uncertainty in the nm range, but usually require careful isolation against vibration and a laser source that often needs shielding for reasons of eye-safety. In this paper, we concentrate on high precision optical metrology without lasers by using the gradient based measurement technique of deflectometry and the finite difference based technique of shear interferometry. Careful calibration of deflectometry systems allows one to investigate virtually all kinds of reflecting surfaces including aspheres or free-form surfaces with measurement uncertainties below the μm level. Computational Shear Interferometry (CoSI) allows us to combine interferometric accuracy with the possibility of using cheap and eye-safe low-brilliance light sources such as fiber-coupled LEDs or even liquid crystal displays. We use CoSI, for example, for quantitative phase contrast imaging in microscopy. We highlight the advantages of both methods, discuss their transfer functions and present results on the precision of both techniques.

  6. Assessing the Accuracy and Precision of Inorganic Geochemical Data Produced through Flux Fusion and Acid Digestions: Multiple (60+) Comprehensive Analyses of BHVO-2 and the Development of Improved "Accepted" Values

    NASA Astrophysics Data System (ADS)

    Ireland, T. J.; Scudder, R.; Dunlea, A. G.; Anderson, C. H.; Murray, R. W.

    2014-12-01

    The use of geological standard reference materials (SRMs) to assess both the accuracy and the reproducibility of geochemical data is a vital consideration in determining the major and trace element abundances of geologic, oceanographic, and environmental samples. Calibration curves commonly are generated that are predicated on accurate analyses of these SRMs. As a means to verify the robustness of these calibration curves, a SRM can also be run as an unknown item (i.e., not included as a data point in the calibration). The experimentally derived composition of the SRM can thus be compared to the certified (or otherwise accepted) value. This comparison gives a direct measure of the accuracy of the method used. Similarly, if the same SRM is analyzed as an unknown over multiple analytical sessions, the external reproducibility of the method can be evaluated. Two common bulk digestion methods used in geochemical analysis are flux fusion and acid digestion. The flux fusion technique is excellent at ensuring complete digestion of a variety of sample types, is quick, and does not involve much use of hazardous acids. However, this technique is hampered by a high amount of total dissolved solids and may be accompanied by an increased analytical blank for certain trace elements. On the other hand, acid digestion (using a cocktail of concentrated nitric, hydrochloric and hydrofluoric acids) provides an exceptionally clean digestion with very low analytical blanks. However, this technique results in a loss of Si from the system and may compromise results for a few other elements (e.g., Ge). Our lab uses flux fusion for the determination of major elements and a few key trace elements by ICP-ES, while acid digestion is used for Ti and trace element analyses by ICP-MS. Here we present major and trace element data for BHVO-2, a frequently used SRM derived from a Hawaiian basalt, gathered over a period of over two years (30+ analyses by each technique). 
We show that both digestion
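The accuracy and external-reproducibility checks described above reduce to two simple statistics: the deviation of the session mean from the certified value, and the relative standard deviation across analytical sessions. A minimal sketch, using illustrative numbers rather than actual BHVO-2 data:

```python
# Sketch: quantifying accuracy and external reproducibility from repeated
# analyses of an SRM run as an unknown. All values below are illustrative,
# not actual BHVO-2 measurements.
certified = {"TiO2": 2.73, "Sr": 389.0}          # accepted SRM values (wt% / ppm)
measured = {
    "TiO2": [2.70, 2.75, 2.72, 2.74],            # one value per analytical session
    "Sr":   [385.0, 392.0, 388.0, 391.0],
}

def assess(element):
    vals = measured[element]
    mean = sum(vals) / len(vals)
    # Accuracy: deviation of the session mean from the certified value
    accuracy_pct = 100.0 * (mean - certified[element]) / certified[element]
    # External reproducibility: relative standard deviation across sessions
    sd = (sum((v - mean) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
    rsd_pct = 100.0 * sd / mean
    return accuracy_pct, rsd_pct

for el in certified:
    acc, rsd = assess(el)
    print(f"{el}: accuracy {acc:+.2f}%, external reproducibility {rsd:.2f}% RSD")
```

Running the same SRM through both digestion schemes and comparing these two numbers is exactly the cross-check the abstract describes.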

  7. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy.

    PubMed

    Akintola, Abimbola A; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C; van Heemst, Diana

    2016-01-01

Background: The Equivital (EQ02) is a multi-parameter telemetric device offering both real-time and/or retrospective, synchronized monitoring of ECG, HR, and HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore, simultaneously for 24 h, the Holter and EQ02 monitors. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93-56.45%) and within (range 0.75-9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For datasets containing respectively all data, data with < 50, or < 20% artifacts, bias estimated by Bland-Altman analysis was -2.8, -1.0, and -0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, Pearson correlation was 0.786 for HRV measured as standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed at
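The two agreement metrics named in the abstract, Bland-Altman bias (with limits of agreement) and MARD, can be computed directly from paired 5-min HR averages. A minimal sketch with illustrative arrays, not data from the study:

```python
# Sketch of the agreement metrics used above: Bland-Altman bias/limits of
# agreement and mean absolute relative difference (MARD) between paired
# heart-rate averages from the two devices. Values are illustrative.
import numpy as np

holter = np.array([62.0, 75.0, 80.0, 68.0, 90.0])   # reference device (bpm)
eq02   = np.array([61.0, 74.0, 82.0, 67.0, 89.0])   # test device (bpm)

diff = eq02 - holter
bias = diff.mean()                                   # Bland-Altman bias
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)           # 95% limits of agreement
mard = 100.0 * np.mean(np.abs(diff) / holter)        # MARD in percent

print(f"bias {bias:+.2f} bpm, LoA [{loa[0]:.2f}, {loa[1]:.2f}], MARD {mard:.2f}%")
```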

  8. Comparative Analysis of the Equivital EQ02 Lifemonitor with Holter Ambulatory ECG Device for Continuous Measurement of ECG, Heart Rate, and Heart Rate Variability: A Validation Study for Precision and Accuracy

    PubMed Central

    Akintola, Abimbola A.; van de Pol, Vera; Bimmel, Daniel; Maan, Arie C.; van Heemst, Diana

    2016-01-01

Background: The Equivital (EQ02) is a multi-parameter telemetric device offering both real-time and/or retrospective, synchronized monitoring of ECG, HR, and HRV, respiration, activity, and temperature. Unlike the Holter, which is the gold standard for continuous ECG measurement, EQ02 continuously monitors ECG via electrodes interwoven in the textile of a wearable belt. Objective: To compare EQ02 with the Holter for continuous home measurement of ECG, heart rate (HR), and heart rate variability (HRV). Methods: Eighteen healthy participants wore, simultaneously for 24 h, the Holter and EQ02 monitors. Per participant, averaged HR and HRV per 5 min from the two devices were compared using Pearson correlation, paired T-test, and Bland-Altman analyses. Accuracy and precision metrics included mean absolute relative difference (MARD). Results: Artifact content of EQ02 data varied widely between (range 1.93–56.45%) and within (range 0.75–9.61%) participants. Comparing the EQ02 to the Holter, the Pearson correlations were 0.724, 0.955, and 0.997 for datasets containing all data, data with < 50% artifacts, and data with < 20% artifacts, respectively. For datasets containing respectively all data, data with < 50, or < 20% artifacts, bias estimated by Bland-Altman analysis was −2.8, −1.0, and −0.8 beats per minute and 24 h MARD was 7.08, 3.01, and 1.5. After selecting a 3-h stretch of data containing 1.15% artifacts, Pearson correlation was 0.786 for HRV measured as standard deviation of NN intervals (SDNN). Conclusions: Although the EQ02 can accurately measure ECG and HRV, its accuracy and precision are highly dependent on artifact content. This is a limitation for clinical use in individual patients. However, the advantages of the EQ02 (ability to simultaneously monitor several physiologic parameters) may outweigh its disadvantages (higher artifact load) for research purposes and/or for home monitoring in larger groups of study participants. Further studies can be aimed

  9. Optimetrics for Precise Navigation

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Heckler, Gregory; Gramling, Cheryl

    2017-01-01

Optimetrics for Precise Navigation will be implemented on existing optical communication links. Ranging and Doppler measurements are conducted over the communication data frames and clock, with a measurement accuracy two orders of magnitude better than TDRSS. The approach has further advantages. The high optical carrier frequency provides (1) immunity from the ionospheric and interplanetary plasma noise floor, which limits the performance of RF tracking, and (2) high antenna gain, which reduces terminal size and volume and enables high-precision tracking on CubeSats and deep-space smallsats. High optical pointing precision also provides spacecraft orientation. Minimal additional hardware is needed to implement precise optimetrics over an optical communication link, and continuous optical carrier phase measurement will enable the system presented here to accept future optical frequency standards with much higher clock accuracy.

  10. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  11. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  12. Precision electron polarimetry

    SciTech Connect

    Chudakov, E.

    2013-11-07

A new generation of precise Parity-Violating experiments will require a sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  13. SU-E-P-54: Evaluation of the Accuracy and Precision of IGPS-O X-Ray Image-Guided Positioning System by Comparison with On-Board Imager Cone-Beam Computed Tomography

    SciTech Connect

    Zhang, D; Wang, W; Jiang, B; Fu, D

    2015-06-15

Purpose: The purpose of this study is to assess the positioning accuracy and precision of the IGPS-O system, a novel radiographic kilo-voltage x-ray image-guided positioning system developed for clinical IGRT applications. Methods: The IGPS-O x-ray image-guided positioning system consists of two oblique sets of radiographic kilo-voltage x-ray projecting and imaging devices mounted on the ground and ceiling of the treatment room. This system can determine the positioning error, in the form of three translations and three rotations, by registering two X-ray images acquired online to the planning CT image. An anthropomorphic head phantom and an anthropomorphic thorax phantom were used for this study. Each phantom was set up on the treatment table in the correct position and with various “planned” setup errors. Both the IGPS-O x-ray image-guided positioning system and the commercial On-board Imager Cone-beam Computed Tomography (OBI CBCT) were used to obtain the setup errors of the phantom. Differences in the results between the two image-guided positioning systems were computed and analyzed. Results: The setup errors measured by the IGPS-O x-ray image-guided positioning system and the OBI CBCT system showed a general agreement; the means and standard errors of the discrepancies between the two systems in the left-right, anterior-posterior, and superior-inferior directions were −0.13±0.09mm, 0.03±0.25mm, and 0.04±0.31mm, respectively. The maximum difference was only 0.51mm in all the directions and the angular discrepancy was 0.3±0.5° between the two systems. Conclusion: The spatial and angular discrepancies between the IGPS-O system and OBI CBCT for setup error correction were minimal. There is a general agreement between the two positioning systems. The IGPS-O x-ray image-guided positioning system can achieve accuracy as good as CBCT and can be used in clinical IGRT applications.

  14. Application of AFINCH as a tool for evaluating the effects of streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the southeast Lake Michigan hydrologic subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches, which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period, which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the
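The random-subsampling scheme described above can be sketched in a few lines: for each censoring level, draw repeated random subsets of the gage network and retain the uncensored gages. Gage IDs and the random seed below are illustrative assumptions, not values from the study.

```python
# Sketch of the censoring scheme: six censoring levels, 30 random subsets
# each, drawn from a 75-gage network. Each retained subset would drive a
# separate AFINCH run whose flow-line estimates are then compared.
import random

gages = [f"G{i:02d}" for i in range(75)]            # 75 streamflow gages
levels = [0.10, 0.20, 0.30, 0.40, 0.50, 0.75]       # fraction of gages censored
n_subsets = 30

rng = random.Random(42)                             # illustrative seed
censored_sets = {
    level: [rng.sample(gages, k=round(len(gages) * (1 - level)))
            for _ in range(n_subsets)]
    for level in levels
}
print(len(censored_sets[0.10][0]))                  # gages retained at 10% censoring
```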

  15. Robustness in bacterial chemotaxis

    NASA Astrophysics Data System (ADS)

    Alon, U.; Surette, M. G.; Barkai, N.; Leibler, S.

    1999-01-01

Networks of interacting proteins orchestrate the responses of living cells to a variety of external stimuli, but how sensitive is the functioning of these protein networks to variations in their biochemical parameters? One possibility is that to achieve appropriate function, the reaction rate constants and enzyme concentrations need to be adjusted in a precise manner, and any deviation from these `fine-tuned' values ruins the network's performance. An alternative possibility is that key properties of biochemical networks are robust; that is, they are insensitive to the precise values of the biochemical parameters. Here we address this issue in experiments using chemotaxis of Escherichia coli, one of the best-characterized sensory systems. We focus on how response and adaptation to attractant signals vary with systematic changes in the intracellular concentration of the components of the chemotaxis network. We find that some properties, such as steady-state behaviour and adaptation time, show strong variations in response to varying protein concentrations. In contrast, the precision of adaptation is robust and does not vary with the protein concentrations. This is consistent with a recently proposed molecular mechanism for exact adaptation, where robustness is a direct consequence of the network's architecture.

  16. Precision Measurement.

    ERIC Educational Resources Information Center

    Radius, Marcie; And Others

    The manual provides information for precision measurement (counting of movements per minute of a chosen activity) of achievement in special education students. Initial sections give guidelines for the teacher, parent, and student to follow for various methods of charting behavior. It is explained that precision measurement is a way to measure the…

  17. Precision Medicine

    PubMed Central

    Cholerton, Brenna; Larson, Eric B.; Quinn, Joseph F.; Zabetian, Cyrus P.; Mata, Ignacio F.; Keene, C. Dirk; Flanagan, Margaret; Crane, Paul K.; Grabowski, Thomas J.; Montine, Kathleen S.; Montine, Thomas J.

    2017-01-01

    Three key elements to precision medicine are stratification by risk, detection of pathophysiological processes as early as possible (even before clinical presentation), and alignment of mechanism of action of intervention(s) with an individual's molecular driver(s) of disease. Used for decades in the management of some rare diseases and now gaining broad currency in cancer care, a precision medicine approach is beginning to be adapted to cognitive impairment and dementia. This review focuses on the application of precision medicine to address the clinical and biological complexity of two common neurodegenerative causes of dementia: Alzheimer disease and Parkinson disease. PMID:26724389

  18. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to also include systematic error contributions, which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy of ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: imaging overlay and DBO (1st-order diffraction-based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of the measurement quality metric, results in optimal overlay accuracy.

  19. Precision optical navigation guidance system

    NASA Astrophysics Data System (ADS)

    Starodubov, D.; McCormick, K.; Nolan, P.; Johnson, D.; Dellosa, M.; Volfson, L.; Fallahpour, A.; Willner, A.

    2016-05-01

    We present the new precision optical navigation guidance system approach that provides continuous, high quality range and bearing data to fixed wing aircraft during landing approach to an aircraft carrier. The system uses infrared optical communications to measure range between ship and aircraft with accuracy and precision better than 1 meter at ranges more than 7.5 km. The innovative receiver design measures bearing from aircraft to ship with accuracy and precision better than 0.5 mRad. The system provides real-time range and bearing updates to multiple aircraft at rates up to several kHz, and duplex data transmission between ship and aircraft.

  20. State of the Field: Extreme Precision Radial Velocities

    NASA Astrophysics Data System (ADS)

    Fischer, Debra A.; Anglada-Escude, Guillem; Arriagada, Pamela; Baluev, Roman V.; Bean, Jacob L.; Bouchy, Francois; Buchhave, Lars A.; Carroll, Thorsten; Chakraborty, Abhijit; Crepp, Justin R.; Dawson, Rebekah I.; Diddams, Scott A.; Dumusque, Xavier; Eastman, Jason D.; Endl, Michael; Figueira, Pedro; Ford, Eric B.; Foreman-Mackey, Daniel; Fournier, Paul; Fűrész, Gabor; Gaudi, B. Scott; Gregory, Philip C.; Grundahl, Frank; Hatzes, Artie P.; Hébrard, Guillaume; Herrero, Enrique; Hogg, David W.; Howard, Andrew W.; Johnson, John A.; Jorden, Paul; Jurgenson, Colby A.; Latham, David W.; Laughlin, Greg; Loredo, Thomas J.; Lovis, Christophe; Mahadevan, Suvrath; McCracken, Tyler M.; Pepe, Francesco; Perez, Mario; Phillips, David F.; Plavchan, Peter P.; Prato, Lisa; Quirrenbach, Andreas; Reiners, Ansgar; Robertson, Paul; Santos, Nuno C.; Sawyer, David; Segransan, Damien; Sozzetti, Alessandro; Steinmetz, Tilo; Szentgyorgyi, Andrew; Udry, Stéphane; Valenti, Jeff A.; Wang, Sharon X.; Wittenmyer, Robert A.; Wright, Jason T.

    2016-06-01

The Second Workshop on Extreme Precision Radial Velocities defined, circa 2015, the state-of-the-art Doppler precision and identified the critical path challenges for reaching 10 cm s-1 measurement precision. The presentations and discussion of key issues for instrumentation and data analysis and the workshop recommendations for achieving this bold precision are summarized here. Beginning with the High Accuracy Radial Velocity Planet Searcher spectrograph, technological advances for precision radial velocity (RV) measurements have focused on building extremely stable instruments. To reach still higher precision, future spectrometers will need to improve upon the state of the art, producing even higher fidelity spectra. This should be possible with improved environmental control, greater stability in the illumination of the spectrometer optics, better detectors, more precise wavelength calibration, and broader bandwidth spectra. Key data analysis challenges for the precision RV community include distinguishing center of mass (COM) Keplerian motion from photospheric velocities (time-correlated noise) and the proper treatment of telluric contamination. Success here is coupled to the instrument design, but also requires the implementation of robust statistical and modeling techniques. COM velocities produce Doppler shifts that affect every line identically, while photospheric velocities produce line profile asymmetries with wavelength and temporal dependencies that are different from Keplerian signals. Exoplanets are an important subfield of astronomy and there has been an impressive rate of discovery over the past two decades. However, higher precision RV measurements are required to serve as a discovery technique for potentially habitable worlds, to confirm and characterize detections from transit missions, and to provide mass measurements for other space-based missions.
The future of exoplanet science has very different trajectories depending on the precision that can

  1. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy by SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  2. Precision metrology.

    PubMed

    Jiang, X; Whitehouse, D J

    2012-08-28

This article is a summary of the Satellite Meeting, which followed on from the Discussion Meeting at the Royal Society on 'Ultra-precision engineering: from physics to manufacture', held at the Kavli Royal Society International Centre, Chicheley Hall, Buckinghamshire, UK. The meeting was restricted to 18 invited experts in various aspects of precision metrology, drawn from academia in the UK and Sweden, government institutes in the UK and Germany, and global aerospace industries. It examined and identified metrology problem areas that are, or may be, limiting future developments in precision engineering and, in particular, metrology. The Satellite Meeting was intended to produce a vision that will inspire academia and industry to address the solutions of those open-ended problems identified. The discussion covered three areas, namely the function of engineering parts, their measurement and their manufacture, as well as their interactions.

  3. An improved robust hand-eye calibration for endoscopy navigation system

    NASA Astrophysics Data System (ADS)

    He, Wei; Kang, Kumsok; Li, Yanfang; Shi, Weili; Miao, Yu; He, Fei; Yan, Fei; Yang, Huamin; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2016-03-01

Endoscopy is widely used in clinical applications, and a surgical navigation system is an extremely important way to enhance the safety of endoscopy. The key to improving the accuracy of the navigation system is to solve precisely for the positional relationship between the camera and the tracking marker. The problem can be solved by the hand-eye calibration method based on dual quaternions. However, because of tracking error and the limited motion of the endoscope, the sample motions may contain some incomplete motion samples. Those motions make the algorithm unstable and inaccurate. An advanced selection rule for sample motions is proposed in this paper to improve the stability and accuracy of the methods based on dual quaternions. By setting a motion filter to remove the incomplete motion samples, a high-precision and robust result is achieved. The experimental results show that the accuracy and stability of camera registration have been effectively improved by selecting sample motion data automatically.
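The motion-filter idea above can be sketched as a simple selection rule: before solving the dual-quaternion hand-eye calibration, discard candidate motions whose relative rotation is too small to constrain the solution. The 15-degree threshold below is an illustrative assumption, not the paper's value.

```python
# Sketch of a sample-motion filter for hand-eye calibration: keep only
# relative motions (R, t) whose rotation angle exceeds a minimum, since
# near-pure translations under-constrain the dual-quaternion solution.
import numpy as np

def rotation_angle(R):
    """Rotation angle (radians) encoded by a 3x3 rotation matrix."""
    return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

def filter_motions(motions, min_angle=np.radians(15.0)):
    """Keep only motions (R, t) whose rotation exceeds the minimum angle."""
    return [(R, t) for R, t in motions if rotation_angle(R) >= min_angle]
```

A near-identity motion (rotation angle close to zero) is rejected, while a 30-degree rotation passes the filter and contributes to the calibration.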

  4. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and retains its adjustment even under rough handling.

  5. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

A precision translator for focusing a beam of light on the end of a glass fiber which includes two tuning fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. This translator is made of simple parts and retains its adjustment even under rough handling.

  6. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

Based on the research in the area of precise ephemerides for GPS satellites, the following observations can be made pertaining to the status of, and future work needed on, orbit accuracy. There are several aspects which need to be addressed in discussing determination of precise orbits, such as force models, kinematic models, measurement models, data reduction/estimation methods, etc. Although each one of these aspects was studied at CSR in research efforts, only points pertaining to the force modeling aspect are addressed here.

  7. Precise Measurement for Manufacturing

    NASA Technical Reports Server (NTRS)

    2003-01-01

A metrology instrument known as PhaseCam supports a wide range of applications, from testing large optics to controlling factory production processes. This dynamic interferometer system enables precise measurement of three-dimensional surfaces in the manufacturing industry, delivering speed and high-resolution accuracy in even the most challenging environments. Compact and reliable, PhaseCam enables users to make interferometric measurements right on the factory floor. The system can be configured for many different applications, including mirror phasing, vacuum/cryogenic testing, motion/modal analysis, and flow visualization.

  8. Classification of LIDAR Data for Generating a High-Precision Roadway Map

    NASA Astrophysics Data System (ADS)

    Jeong, J.; Lee, I.

    2016-06-01

Generating a highly precise map is becoming increasingly important with the development of autonomous driving vehicles. A highly precise map has centimetre-level precision, unlike existing commercial maps with metre-level precision. Such a map is important for understanding road environments and making driving decisions, since robust localization is one of the critical challenges for an autonomous driving car. One source of data is a Lidar, because it provides highly dense point cloud data with three-dimensional positions, intensities, and ranges from the sensor to the target. In this paper, we focus on how to segment point cloud data from a Lidar on a vehicle and classify objects on the road for the highly precise map. In particular, we propose the combination of a feature descriptor and a classification algorithm from machine learning. Objects can be distinguished by geometrical features based on the surface normal of each point. To achieve correct classification using limited point cloud data sets, a Support Vector Machine algorithm is used. The final step is to evaluate the accuracy of the obtained results by comparing them to reference data. The results show sufficient accuracy, and the approach will be utilized to generate a highly precise road map.
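The per-point geometrical feature described above, a surface normal, is typically estimated by PCA over the k nearest neighbours: the eigenvector of the smallest eigenvalue of the local covariance. A minimal sketch with synthetic points (the SVM classifier stage, not shown, would consume features derived from such normals):

```python
# Sketch of surface-normal estimation for Lidar point classification:
# the normal at a point is the eigenvector belonging to the smallest
# eigenvalue of the covariance of its k nearest neighbours.
import numpy as np

def estimate_normal(points, idx, k=8):
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nn = points[np.argsort(d)[:k]]             # k nearest neighbours (incl. self)
    cov = np.cov((nn - nn.mean(axis=0)).T)     # 3x3 local covariance
    w, v = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    n = v[:, 0]                                # smallest-variance direction
    return n if n[2] >= 0 else -n              # orient upward consistently

# Synthetic points on a horizontal plane (z = 0) with jitter in x and y:
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(0, 1, 50),
                       rng.uniform(0, 1, 50),
                       np.zeros(50)])
normal = estimate_normal(pts, 0)
print(normal)   # near-vertical normal, as expected for a flat road surface
```

Features such as the normal's verticality (its z component) then separate road surfaces from poles, walls, and other roadside objects in the classifier.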

  9. Robust verification analysis

    NASA Astrophysics Data System (ADS)

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-01

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.
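The median-statistics idea at the core of the methodology above can be illustrated on the simplest verification task: estimating the observed order of convergence p from an error model E = C h^p, computing one estimate per adjacent pair of mesh refinements and reporting the median so that one anomalous level does not dominate. The mesh sizes and errors below are illustrative, not from the paper.

```python
# Sketch of robust (median-based) estimation of the observed order of
# convergence p from the error ansatz E = C h^p, using one estimate per
# adjacent pair of grid refinements. Data are illustrative.
import math
import statistics

h = [0.1, 0.05, 0.025, 0.0125]                 # mesh sizes
E = [4.1e-3, 1.0e-3, 2.6e-4, 9.0e-5]           # solution errors vs. reference

p_estimates = [
    math.log(E[i] / E[i + 1]) / math.log(h[i] / h[i + 1])
    for i in range(len(h) - 1)
]
p_robust = statistics.median(p_estimates)      # median, not mean
print(f"pairwise estimates: {p_estimates}, robust order: {p_robust:.2f}")
```

In the full methodology this is embedded in constrained optimization problems with expert-judgment bounds on p; the median step is what guards the final estimate against ill-behaved refinement levels.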

  10. Robust verification analysis

    SciTech Connect

    Rider, William; Witkowski, Walt; Kamm, James R.; Wildey, Tim

    2016-02-15

    We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. We demonstrate this methodology on examples from computational heat transfer, fluid dynamics and radiation transport. Our methodology is suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a flexible optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the analysis' underlying assumptions. Constraints applied in the analysis can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics). This provides self-contained, data and expert informed error estimation including uncertainties for both the solution itself and order of convergence. Our method produces high quality results for the well-behaved cases relatively consistent with existing practice. The methodology can also produce reliable results for ill-behaved circumstances predicated on appropriate expert judgment. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and ill-behaved simulations.

  11. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will eventually determine the satellite performance. A new and robust accuracy measurement method for a star tracker, based on direct astronomical observation, is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are conducted on account of the precise movements of the Earth, and the error curves of directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion has been proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and it can satisfy the stringent requirement for high-accuracy star trackers. PMID:26948412

  12. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The emphasis of this grant was focused on precision ephemerides for the Global Positioning System (GPS) satellites for geodynamics applications. During the period of this grant, major activities were in the areas of thermal force modeling, numerical integration accuracy improvement for eclipsing satellites, analysis of GIG '91 campaign data, and the Southwest Pacific campaign data analysis.

  13. RoPEUS: A New Robust Algorithm for Static Positioning in Ultrasonic Systems

    PubMed Central

    Prieto, José Carlos; Croux, Christophe; Jiménez, Antonio Ramón

    2009-01-01

    A well-known problem for precise positioning in real environments is the presence of outliers in the measurement sample. Its importance is even greater in ultrasound-based systems, since this technology needs a direct line of sight between emitters and receivers. Standard techniques for outlier detection in range-based systems do not usually employ robust algorithms, failing when multiple outliers are present. The direct application of standard robust regression algorithms fails in static positioning (where only the current measurement sample is considered) in real ultrasound-based systems, mainly due to the limited number of measurements and geometry effects. This paper presents a new robust algorithm, called RoPEUS, based on MM estimation, that follows a typical two-step strategy: 1) a high breakdown point algorithm to obtain a clean sample, and 2) a refinement algorithm to increase the accuracy of the solution. The main modifications proposed to the standard MM robust algorithm are a built-in check of partial solutions in the first step (rejecting bad geometries) and the off-line calculation of the scale of the measurements. The algorithm is tested with real samples obtained with the 3D-LOCUS ultrasound localization system in an ideal environment without obstacles. These measurements are corrupted with typical outlying patterns to numerically evaluate the algorithm performance with respect to the standard parity space algorithm. The algorithm proves to be robust under single or multiple outliers, providing similar accuracy figures in all cases. PMID:22408522
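The two-step strategy described above can be illustrated in one dimension: a high-breakdown initial estimate (the median, with the MAD as a robust scale) followed by an efficient refinement on the inlier set. This sketch is not the RoPEUS algorithm itself, which operates on 3D range geometry; the data and threshold are invented:

```python
import statistics

def robust_estimate(samples, k=3.0):
    # Step 1: high-breakdown initial estimate (median) and robust scale (MAD)
    m = statistics.median(samples)
    mad = statistics.median([abs(s - m) for s in samples]) or 1e-12
    scale = 1.4826 * mad   # consistency factor for Gaussian noise
    # Step 2: efficient refinement (mean) restricted to the inlier set
    inliers = [s for s in samples if abs(s - m) <= k * scale]
    return sum(inliers) / len(inliers)

clean = [10.02, 9.98, 10.01, 9.99, 10.00]
corrupted = clean + [14.0, 15.5]   # gross outliers, e.g. a blocked line of sight
print(round(robust_estimate(corrupted), 2))   # -> 10.0
```

A plain least-squares (mean) estimate over the corrupted sample would be dragged above 11; the two-step estimate is unaffected by the outliers.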

  14. GOCE Precise Science Orbits

    NASA Astrophysics Data System (ADS)

    Bock, Heike; Jäggi, Adrian; Meyer, Ulrich; Beutler, Gerhard; Heinze, Markus; Hugentobler, Urs

    GOCE (Gravity field and steady-state Ocean Circulation Explorer), the first ESA (European Space Agency) Earth Explorer Core Mission, is dedicated to gravity field recovery of unprecedented accuracy using data from the gradiometer, its primary science instrument. Data from the secondary instrument, the 12-channel dual-frequency GPS (Global Positioning System) receiver, are used for precise orbit determination of the satellite. These orbits are used to accurately geolocate the gradiometer observations and to provide complementary information for the long-wavelength part of the gravity field. A precise science orbit (PSO) product is provided by the GOCE High-Level Processing Facility (HPF) with a precision of about 2 cm and a 1-week latency. The reduced-dynamic and kinematic orbit determination strategies for the PSO product are presented together with results from about one year of data. The focus is on the improvement achieved by the use of empirically derived azimuth- and elevation-dependent variations of the phase center of the GOCE GPS antenna. The orbits are validated with satellite laser ranging (SLR) measurements.

  15. Robust omniphobic surfaces

    PubMed Central

    Tuteja, Anish; Choi, Wonjae; Mabry, Joseph M.; McKinley, Gareth H.; Cohen, Robert E.

    2008-01-01

    Superhydrophobic surfaces display water contact angles greater than 150° in conjunction with low contact angle hysteresis. Microscopic pockets of air trapped beneath the water droplets placed on these surfaces lead to a composite solid-liquid-air interface in thermodynamic equilibrium. Previous experimental and theoretical studies suggest that it may not be possible to form similar fully-equilibrated, composite interfaces with drops of liquids, such as alkanes or alcohols, that possess significantly lower surface tension than water (γlv = 72.1 mN/m). In this work we develop surfaces possessing re-entrant texture that can support strongly metastable composite solid-liquid-air interfaces, even with very low surface tension liquids such as pentane (γlv = 15.7 mN/m). Furthermore, we propose four design parameters that predict the measured contact angles for a liquid droplet on a textured surface, as well as the robustness of the composite interface, based on the properties of the solid surface and the contacting liquid. These design parameters allow us to produce two different families of re-entrant surfaces— randomly-deposited electrospun fiber mats and precisely fabricated microhoodoo surfaces—that can each support a robust composite interface with essentially any liquid. These omniphobic surfaces display contact angles greater than 150° and low contact angle hysteresis with both polar and nonpolar liquids possessing a wide range of surface tensions. PMID:19001270

  16. Using checklists and algorithms to improve qualitative exposure judgment accuracy.

    PubMed

    Arnold, Susan F; Stenzel, Mark; Drolet, Daniel; Ramachandran, Gurumurthy

    2016-01-01

    Most exposure assessments are conducted without the aid of robust personal exposure data and are based instead on qualitative inputs such as education and experience, training, documentation on the process chemicals, tasks and equipment, and other information. Qualitative assessments determine whether there is any follow-up, and influence the type that occurs, such as quantitative sampling, worker training, and implementing exposure and risk management measures. Accurate qualitative exposure judgments ensure appropriate follow-up that in turn ensures appropriate exposure management. Studies suggest that qualitative judgment accuracy is low. A qualitative exposure assessment Checklist tool was developed to guide the application of a set of heuristics to aid decision making. Practicing hygienists (n = 39) and novice industrial hygienists (n = 8) were recruited for a study evaluating the influence of the Checklist on exposure judgment accuracy. Participants generated 85 pre-training judgments and 195 Checklist-guided judgments. Pre-training judgment accuracy was low (33%) and not statistically significantly different from random chance. A tendency for IHs to underestimate the true exposure was observed. Exposure judgment accuracy improved significantly (p <0.001) to 63% when aided by the Checklist. Qualitative judgments guided by the Checklist tool were categorically accurate or over-estimated the true exposure by one category 70% of the time. The overall magnitude of exposure judgment precision also improved following training. Fleiss' κ, evaluating inter-rater agreement between novice assessors was fair to moderate (κ = 0.39). Cohen's weighted and unweighted κ were good to excellent for novice (0.77 and 0.80) and practicing IHs (0.73 and 0.89), respectively. Checklist judgment accuracy was similar to quantitative exposure judgment accuracy observed in studies of similar design using personal exposure measurements, suggesting that the tool could be useful in
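The inter-rater agreement statistics quoted above can be reproduced with a few lines of code. A minimal sketch of unweighted Cohen's kappa, using invented judgment data (the category labels and values are illustrative, not the study's data):

```python
def cohens_kappa(r1, r2):
    # unweighted Cohen's kappa for two raters over categorical ratings
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    cats = set(r1) | set(r2)
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe)

# invented exposure-category judgments (0-3) against the reference category
judge = [1, 2, 2, 3, 0, 1, 2, 3, 1, 2]
truth = [1, 2, 2, 3, 0, 1, 2, 2, 1, 3]
print(round(cohens_kappa(judge, truth), 2))  # -> 0.71
```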

  17. Optimized robust plasma sampling for glomerular filtration rate studies.

    PubMed

    Murray, Anthony W; Gannon, Mark A; Barnfield, Mark C; Waller, Michael L

    2012-09-01

    In the presence of abnormal fluid collection (e.g. ascites), the measurement of glomerular filtration rate (GFR) based on a small number (1-4) of plasma samples fails. This study investigated how a few samples will allow adequate characterization of plasma clearance to give a robust and accurate GFR measurement. A total of 68 nine-sample GFR tests (from 45 oncology patients) with abnormal clearance of a glomerular tracer were audited to develop a Monte Carlo model. This was used to generate 20 000 synthetic but clinically realistic clearance curves, which were sampled at the 10 time points suggested by the British Nuclear Medicine Society. All combinations comprising between four and 10 samples were then used to estimate the area under the clearance curve by nonlinear regression. The audited clinical plasma curves were all well represented pragmatically as biexponential curves. The area under the curve can be well estimated using as few as five judiciously timed samples (5, 10, 15, 90 and 180 min). Several seven-sample schedules (e.g. 5, 10, 15, 60, 90, 180 and 240 min) are tolerant to any one sample being discounted without significant loss of accuracy or precision. A research tool has been developed that can be used to estimate the accuracy and precision of any pattern of plasma sampling in the presence of 'third-space' kinetics. This could also be used clinically to estimate the accuracy and precision of GFR calculated from mistimed or incomplete sets of samples. It has been used to identify optimized plasma sampling schedules for GFR measurement.
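For a biexponential clearance model C(t) = A·exp(-a·t) + B·exp(-b·t), as used above, the area under the curve is A/a + B/b and the clearance is dose/AUC. A small numerical check with invented parameter values (not the study's data):

```python
import math

# assumed biexponential model: C(t) = A*exp(-a*t) + B*exp(-b*t), t in minutes
A, a = 5.0, 0.10   # fast (distribution) phase, invented values
B, b = 2.0, 0.01   # slow (renal clearance) phase, invented values
dose = 40.0        # injected tracer, arbitrary units

auc_exact = A / a + B / b   # analytic area under the curve over 0..infinity
gfr = dose / auc_exact      # clearance = dose / AUC

# numerical check: fine trapezoidal grid out to t = 2000 min
ts = [0.5 * i for i in range(4001)]
cs = [A * math.exp(-a * t) + B * math.exp(-b * t) for t in ts]
auc_num = sum(0.5 * (c0 + c1) * 0.5 for c0, c1 in zip(cs, cs[1:]))
print(round(auc_exact), round(auc_num))  # both close to 250
```

Sparse sampling schedules such as those evaluated in the study amount to estimating this same area from a handful of points on C(t).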

  18. Robust Optimization of Alginate-Carbopol 940 Bead Formulations

    PubMed Central

    López-Cacho, J. M.; González-R, Pedro L.; Talero, B.; Rabasco, A. M.; González-Rodríguez, M. L.

    2012-01-01

    The formulation process is a complex activity that sometimes involves making decisions about parameters or variables in order to obtain the best results in a context of high variability or uncertainty. Robust optimization tools can therefore be very useful for obtaining high quality formulations. This paper proposes the optimization of different responses through the robust Taguchi method. Each response was evaluated as a noise variable, allowing the application of Taguchi techniques to obtain a response from the point of view of the signal-to-noise ratio. An L18 Taguchi orthogonal array design was employed to investigate the effect of eight independent variables involved in the formulation of alginate-Carbopol beads. The responses evaluated were related to the drug release profile from the beads (t50% and AUC), swelling performance, encapsulation efficiency, and shape and size parameters. Confirmation tests to verify the prediction model were carried out, and the obtained results were very similar to those predicted for every profile. The results reveal that robust optimization is a very useful approach that allows greater precision and accuracy with respect to the desired value. PMID:22645438
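The Taguchi signal-to-noise ratio underlying this kind of analysis is simple to compute. For the "nominal is best" case, S/N = 10·log10(mean²/variance), so a consistent formulation scores higher than a noisy one with the same mean; the release-time data below are invented:

```python
import math

def sn_nominal_is_best(ys):
    # Taguchi "nominal is best" signal-to-noise ratio: 10*log10(mean^2 / variance)
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

# hypothetical t50% release times (min) for two candidate bead formulations
run_a = [118, 122, 120, 121]   # consistent across repeats
run_b = [100, 140, 115, 125]   # similar mean, but noisy
print(sn_nominal_is_best(run_a) > sn_nominal_is_best(run_b))  # -> True
```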

  19. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate Quantum-Electrodynamics (QED) tests of the simplest bound three-body atomic system are performed by precise laser spectroscopic measurements in atomic Helium. In this paper, we present a review of measurements between triplet states at 1083 nm (2³S-2³P) and at 389 nm (2³S-3³P). In ⁴He, such data have been used to measure the fine structure of the triplet P levels and, in turn, to determine the fine structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, the nuclear structure of the He isotopes and, in particular, the nuclear charge radius are determined by using hyperfine structure and isotope-shift measurements.

  20. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K), an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies, such as measurements of cross sections and reaction rates.
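The reported fit can be sketched from the Clausius-Clapeyron form ln P = A - B/T, anchored at the quoted point (0.0403 Torr at 87.3 K). The unit conversion and molar mass below are standard values, not taken from the paper, so this is an approximate reconstruction rather than the authors' fit:

```python
import math

# Clausius-Clapeyron form: ln(P) = A - B/T, with slope B = L*M/R
L = 82.7 * 4.184   # latent heat from the abstract, converted cal/g -> J/g
M = 48.0           # molar mass of ozone, g/mol (standard value)
R = 8.314          # gas constant, J/(mol K)
B = L * M / R      # slope in kelvin

T0, P0 = 87.3, 0.0403      # anchor: reported vapor pressure at the Ar boiling point
A = math.log(P0) + B / T0  # intercept fixed by the anchor point

def vapor_pressure(T):
    # vapor pressure in Torr at temperature T in kelvin
    return math.exp(A - B / T)

print(round(vapor_pressure(T0), 4))                 # recovers the anchor, 0.0403
print(vapor_pressure(95.0) / vapor_pressure(85.0))  # strong rise across the range
```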

  1. Quality, precision and accuracy of the maximum No. 40 anemometer

    SciTech Connect

    Obermeir, J.; Blittersdorf, D.

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  2. Tomography & Geochemistry: Precision, Repeatability, Accuracy and Joint Interpretations

    NASA Astrophysics Data System (ADS)

    Foulger, G. R.; Panza, G. F.; Artemieva, I. M.; Bastow, I. D.; Cammarano, F.; Doglioni, C.; Evans, J. R.; Hamilton, W. B.; Julian, B. R.; Lustrino, M.; Thybo, H.; Yanovskaya, T. B.

    2015-12-01

    Seismic tomography can reveal the spatial seismic structure of the mantle, but has little ability to constrain composition, phase or temperature. In contrast, petrology and geochemistry can give insights into mantle composition, but have severely limited spatial control on magma sources. For these reasons, results from these three disciplines are often interpreted jointly. Nevertheless, the limitations of each method are often underestimated, and underlying assumptions de-emphasized. Examples of the limitations of seismic tomography include its limited ability to image the three-dimensional structure of the mantle in detail or to determine the strengths of anomalies with certainty. Despite this, published seismic anomaly strengths are often unjustifiably translated directly into physical parameters. Tomography yields seismological parameters such as wave speed and attenuation, not geological or thermal parameters. Much of the mantle is poorly sampled by seismic waves, and resolution- and error-assessment methods do not express the true uncertainties. These and other problems have become highlighted in recent years as a result of multiple tomography experiments performed by different research groups in areas of particular interest, e.g., Yellowstone. The repeatability of the results is often poorer than the calculated resolutions. The ability of geochemistry and petrology to identify magma sources and locations is typically overestimated. These methods have little ability to determine source depths. Models that assign geochemical signatures to specific layers in the mantle, including the transition zone, the lower mantle, and the core-mantle boundary, are based on speculative models that cannot be verified and for which viable, less-astonishing alternatives are available.
Our knowledge of the size, distribution and location of protoliths, of the metasomatism of magma sources, of the nature of the partial-melting and melt-extraction processes, of the mixing of disparate melts, and of the re-assimilation of crust and mantle lithosphere by rising melt is poor. Interpretations of seismic tomography, of petrologic and geochemical observations, and of all three together are ambiguous, and this needs to be emphasized more in presenting interpretations so that the viability of the models can be assessed more reliably.

  3. Precision and accuracy of visual foliar injury assessments

    SciTech Connect

    Gumpertz, M.L.; Tingey, D.T.; Hogsett, W.E.

    1982-07-01

    The study compared three measures of foliar injury: (i) mean percent leaf area injured of all leaves on the plant, (ii) mean percent leaf area injured of the three most injured leaves, and (iii) the proportion of injured leaves to total number of leaves. For the first measure, the variation caused by reader biases and day-to-day variations was compared with the innate plant-to-plant variation. Bean (Phaseolus vulgaris 'Pinto'), pea (Pisum sativum 'Little Marvel'), radish (Raphanus sativus 'Cherry Belle'), and spinach (Spinacia oleracea 'Northland') plants were exposed to either 3 μL L⁻¹ SO₂ or 0.3 μL L⁻¹ ozone for 2 h. Three leaf readers visually assessed the percent injury on every leaf of each plant, while a fourth reader used a transparent grid to make an unbiased assessment for each plant. The mean leaf area injured of the three most injured leaves was highly correlated with that of all leaves on the plant only if the three most injured leaves were <100% injured. The proportion of leaves injured was not highly correlated with percent leaf area injured of all leaves on the plant for any species in this study. The largest source of variation in visual assessments was plant-to-plant variation, which ranged from 44 to 97% of the total variance, followed by variation among readers (0-32% of the variance). Except for radish exposed to ozone, the day-to-day variation accounted for <18% of the total. Reader bias in assessment of ozone injury was significant but could be adjusted for each reader by a simple linear regression (R² = 0.89-0.91) of the visual assessments against the grid assessments.
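The per-reader bias correction described in the last sentence is an ordinary linear regression of visual scores against grid scores. A minimal sketch with invented reader data (not the study's measurements):

```python
def linfit(x, y):
    # ordinary least squares for y ~ a + b*x
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# invented example: one reader's visual scores vs the unbiased grid scores
grid   = [5, 10, 20, 40, 60, 80]    # percent leaf area injured (grid method)
reader = [8, 16, 27, 50, 74, 95]    # the same plants, visually over-read

a, b = linfit(reader, grid)          # regression mapping reader scale -> grid scale
adjusted = [a + b * v for v in reader]
print(all(abs(adj - g) < 5 for adj, g in zip(adjusted, grid)))  # -> True
```

Because this reader systematically over-reads, the fitted slope is below one; applying the fit removes most of the bias.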

  4. Global positioning system measurements for crustal deformation: Precision and accuracy

    USGS Publications Warehouse

    Prescott, W.H.; Davis, J.L.; Svarc, J.L.

    1989-01-01

    Analysis of 27 repeated observations of Global Positioning System (GPS) position-difference vectors, up to 11 kilometers in length, indicates that the standard deviation of the measurements is 4 millimeters for the north component, 6 millimeters for the east component, and 10 to 20 millimeters for the vertical component. The uncertainty grows slowly with increasing vector length. At 225 kilometers, the standard deviation of the measurement is 6, 11, and 40 millimeters for the north, east, and up components, respectively. Measurements with GPS and Geodolite, an electromagnetic distance-measuring system, over distances of 10 to 40 kilometers agree within 0.2 part per million. Measurements with GPS and very long baseline interferometry of the 225-kilometer vector agree within 0.05 part per million.

  5. Accuracy and Precision of Radioactivity Quantification in Nuclear Medicine Images

    PubMed Central

    Frey, Eric C.; Humm, John L.; Ljungberg, Michael

    2012-01-01

    The ability to reliably quantify activity in nuclear medicine has a number of increasingly important applications. Dosimetry for targeted therapy treatment planning or for approval of new imaging agents requires accurate estimation of the activity in organs, tumors, or voxels at several imaging time points. Another important application is the use of quantitative metrics derived from images, such as the standard uptake value commonly used in positron emission tomography (PET), to diagnose and follow treatment of tumors. These measures require quantification of organ or tumor activities in nuclear medicine images. However, there are a number of physical, patient, and technical factors that limit the quantitative reliability of nuclear medicine images. There have been a large number of improvements in instrumentation, including the development of hybrid single-photon emission computed tomography/computed tomography and PET/computed tomography systems, and reconstruction methods, including the use of statistical iterative reconstruction methods, which have substantially improved the ability to obtain reliable quantitative information from planar, single-photon emission computed tomography, and PET images. PMID:22475429

  6. Precision and Accuracy of Intercontinental Distance Determinations Using Radio Interferometry.

    DTIC Science & Technology

    1983-07-01

    Variations of the dispersion of at least this amount occur in the Mark III system. We cannot place an upper bound on the variations of the dispersion... final two terms will be 0.002 psec and 0.020 psec when t23 = 2.0x10⁻⁶ sec/sec and v12 = 0.02 sec. The latter two values are upper bounds for Earth based... neglected in the derivations in Section 4.1. We will now analyze each of these terms and try to place upper bounds on their contributions to the

  7. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on the delivery accuracy at the meter fix. The 10-second-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
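The three DCT resolutions compared in the study differ only in how they quantize the delay. A hypothetical formatting function (the mode labels are invented, not TBFM terminology) shows why truncation behaves differently from rounding:

```python
def dct_display(delay_s, mode):
    # hypothetical delay countdown timer quantization (labels invented)
    if mode == "10s_rounded":
        return round(delay_s / 10) * 10
    if mode == "1min_rounded":
        return round(delay_s / 60) * 60
    if mode == "1min_truncated":
        return delay_s // 60 * 60
    raise ValueError(mode)

# a required delay of 119 s: truncation hides almost a full minute of it
print(dct_display(119, "10s_rounded"))    # -> 120
print(dct_display(119, "1min_rounded"))   # -> 120
print(dct_display(119, "1min_truncated")) # -> 60
```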

  8. Mixed-Precision Spectral Deferred Correction: Preprint

    SciTech Connect

    Grout, Ray W. S.

    2015-09-02

    Convergence of spectral deferred correction (SDC), where low-order time integration methods are used to construct higher-order methods through iterative refinement, can be accelerated in terms of computational effort by using mixed-precision methods. Using ideas from multi-level SDC (in turn based on FAS multigrid ideas), some of the SDC correction sweeps can use function values computed in reduced precision without adversely impacting the accuracy of the final solution. This is particularly beneficial for the performance of combustion solvers such as S3D [6] which require double precision accuracy but are performance limited by the cost of data motion.
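The underlying mixed-precision idea, computing corrections in reduced precision while accumulating residuals in full precision, can be shown with plain iterative refinement, a simpler relative of SDC. The float32 rounding helper is illustrative, not from the paper:

```python
import struct

def f32(v):
    # round a double to the nearest IEEE single-precision value
    return struct.unpack('f', struct.pack('f', v))[0]

# solve a*x = b by iterative refinement: corrections in reduced precision,
# residuals evaluated in full double precision
a, b = 3.0, 1.0
x = 0.0
for _ in range(5):
    r = b - a * x            # residual, double precision
    x += f32(r) / f32(a)     # correction sweep, single precision
print(abs(a * x - b) < 1e-12)  # -> True: full accuracy despite cheap sweeps
```

Each cheap sweep contracts the error by roughly the single-precision unit roundoff, so a handful of sweeps reaches double-precision accuracy, which is the same economics the abstract describes for SDC correction sweeps.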

  9. A consensus on protein structure accuracy in NMR?

    PubMed

    Billeter, Martin

    2015-02-03

    The precision of an NMR structure may be manipulated by calculation parameters such as calibration factors. Its accuracy is, however, a different issue. In this issue of Structure, Buchner and Güntert present "consensus structure bundles," where precision analysis allows estimation of accuracy.

  10. Precision injection molding of freeform optics

    NASA Astrophysics Data System (ADS)

    Fang, Fengzhou; Zhang, Nan; Zhang, Xiaodong

    2016-08-01

    Precision injection molding is the most efficient mass-production technology for manufacturing plastic optics. Applications of plastic optics in the fields of imaging, illumination, and concentration demonstrate a variety of complex surface forms, developing from conventional plano and spherical surfaces to aspheric and freeform surfaces. This requires high optical quality with high form accuracy and low residual stresses, which challenges both optical tool-insert machining and the precision injection molding process. The present paper reviews recent progress in mold tool machining and precision injection molding, with more emphasis on precision injection molding. The challenges and future development trends are also discussed.

  11. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. During the occasional event that more precise vascular extraction is desired or the method fails, we also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  12. Robust design of dynamic observers

    NASA Technical Reports Server (NTRS)

    Bhattacharyya, S. P.

    1974-01-01

    The two (identity) observer realizations ż = Mz + Ky and ż = Az + K(y − Cz), respectively called the open-loop and closed-loop realizations, for the linear system ẋ = Ax, y = Cx, are analyzed with respect to the requirement of robustness; i.e., the requirement that the observer continue to regulate the error x − z satisfactorily despite small variations in the observer parameters from the projected design values. The results show that the open-loop realization is never robust, that robustness requires a closed-loop implementation, and that the closed-loop realization is robust with respect to small perturbations in the gains K if and only if the observer can be built to contain an exact replica of the unstable and underdamped dynamics of the system being observed. These results clarify the stringent accuracy requirements on both models and hardware that must be met before an observer can be considered for use in a control system.
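The open- versus closed-loop distinction can be checked numerically on a scalar unstable plant. In this invented example, perturbing the closed-loop gain leaves the error dynamics stable, while the same size perturbation in the open-loop observer matrix lets the error be driven by the exponentially growing state:

```python
# scalar sketch of the two observer realizations for x' = a*x, y = x (C = 1)
a, k, eps = 1.0, 3.0, 0.05   # unstable plant, stabilizing gain, parameter error
dt, steps = 0.001, 5000      # forward-Euler integration to t = 5

def simulate(closed_loop):
    x, z = 1.0, 0.0          # true state and observer state
    for _ in range(steps):
        if closed_loop:
            # z' = a*z + (k+eps)*(y - z): the error obeys e' = (a-k-eps)*e
            dz = a * z + (k + eps) * (x - z)
        else:
            # z' = M*z + k*y with perturbed M = (a-k)+eps: the error is now
            # forced by the exponentially growing state itself
            dz = (a - k + eps) * z + k * x
        x += dt * a * x
        z += dt * dz
    return abs(x - z)

print(simulate(True) < 1e-3)   # closed loop: error still converges
print(simulate(False) > 1.0)   # open loop: error grows with the state
```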

  13. Robust Fault Detection and Isolation for Stochastic Systems

    NASA Technical Reports Server (NTRS)

    George, Jemin; Gregory, Irene M.

    2010-01-01

    This paper outlines the formulation of a robust fault detection and isolation scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves precise reconstruction of sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the robust fault detection and isolation system.

  14. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to be: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.

  15. High precision modeling for fundamental physics experiments

    NASA Astrophysics Data System (ADS)

    Rievers, Benny; Nesemann, Leo; Costea, Adrian; Andres, Michael; Stephan, Ernst P.; Laemmerzahl, Claus

    With growing experimental accuracies and high precision requirements for fundamental physics space missions, the need for accurate numerical modeling techniques is increasing. Motivated by the challenge of length stability in cavities and optical resonators, we propose the development of a high precision modeling tool for the simulation of thermomechanical effects up to a numerical precision of 10⁻²⁰. Exemplary calculations for simplified test cases demonstrate the general feasibility of high precision calculations and point out the high complexity of the task. A tool for high precision analysis of complex geometries will have to use new data types, advanced FE solver routines and implement new methods for the evaluation of numerical precision.
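The need for new data types follows from the limits of IEEE double precision, whose machine epsilon is about 2.2x10⁻¹⁶. A two-line demonstration, using Python's stdlib decimal module as a stand-in for an extended-precision type:

```python
import decimal

# standard double precision cannot resolve a relative change of 1e-20,
# which is why the abstract calls for new data types and solver routines
x = 1.0
print(x + 1e-20 == x)   # -> True: the perturbation vanishes in float64

decimal.getcontext().prec = 30   # 30 significant digits
d = decimal.Decimal(1) + decimal.Decimal("1e-20")
print(d != decimal.Decimal(1))   # -> True: extended precision retains it
```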

  16. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements used to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without relying on combusting naturally occurring materials, thereby improving analytical accuracy.

  17. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
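
    The core idea, maximizing cross-validated predictive accuracy through Monte Carlo relabeling, can be sketched in a toy form. This is an illustrative simplification, not the published KODAMA algorithm: it uses a leave-one-out 1-nearest-neighbour classifier and single-point relabel proposals.

```python
import numpy as np

def cv_accuracy(X, labels):
    """Leave-one-out 1-nearest-neighbour cross-validated accuracy."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)              # a point may not be its own neighbour
    nearest = D.argmin(axis=1)
    return float((labels[nearest] == labels).mean())

def maximize_cv_accuracy(X, n_clusters=2, n_iter=2000, seed=0):
    """Monte Carlo relabeling that never decreases cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(n_clusters, size=len(X))
    best = cv_accuracy(X, labels)
    for _ in range(n_iter):
        cand = labels.copy()
        cand[rng.integers(len(X))] = rng.integers(n_clusters)  # single relabel
        acc = cv_accuracy(X, cand)
        if acc >= best:                      # accept non-worsening proposals
            labels, best = cand, acc
    return labels, best

# Two well-separated Gaussian blobs: relabeling should make each blob coherent.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])
labels, acc = maximize_cv_accuracy(X)
```

    The accepted labelings are exactly those whose cross-validated accuracy does not fall, which is the sense in which the procedure is "driven by" cross-validation.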

  18. Robust adaptive backstepping control for piezoelectric nano-manipulating systems

    NASA Astrophysics Data System (ADS)

    Zhang, Yangming; Yan, Peng; Zhang, Zhen

    2017-01-01

    In this paper we present a systematic modeling and control approach for nano-manipulations of a two-dimensional PZT (piezoelectric transducer) actuated servo stage. The major control challenges associated with piezoelectric nano-manipulators typically include the nonlinear dynamics of hysteresis, model uncertainties, and various disturbances. The adverse effects of these complications will result in significant performance loss unless effectively eliminated. The primary focus of the paper is the ultra-high-precision control of such systems by handling various model uncertainties and disturbances simultaneously. To this end, a novel robust adaptive backstepping-like control approach is developed such that parametric uncertainties can be estimated adaptively while the nonlinear dynamics and external disturbances are treated as bounded disturbances for robust elimination. Meanwhile, the L2-gain of the closed-loop system is considered, and an H∞ optimization problem is formulated to improve the tracking accuracy. Numerical simulations and real-time experiments are finally conducted; the proposed approach significantly outperforms conventional PID methods and achieves around 1% tracking error for circular contouring tasks.

  19. Precise predictions for slepton pair production

    SciTech Connect

    Ayres Freitas; Andreas von Manteuffel

    2002-11-07

    At a future linear collider, the masses and couplings of scalar leptons can be measured with high accuracy, thus requiring precise theoretical predictions for the relevant processes. In this work, after a discussion of the expected experimental precision, the complete one-loop corrections to smuon and selectron pair production in the MSSM are presented and the effect of different contributions on the result is analyzed.

  20. High-precision arithmetic in mathematical physics

    DOE PAGES

    Bailey, David H.; Borwein, Jonathan M.

    2015-05-12

    For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation, in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
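
    The point about precision levels can be made concrete with Python's standard decimal module: an increment of 10^-20 vanishes entirely in IEEE 64-bit arithmetic, but survives once the working precision is raised above the increment's magnitude.

```python
from decimal import Decimal, getcontext

# IEEE 64-bit (double) arithmetic: adding 1e-20 to 1.0 is lost to rounding,
# since the increment is far below the ~2.2e-16 machine epsilon.
lost = (1.0 + 1e-20) - 1.0

# 30 significant decimal digits comfortably resolve the same increment.
getcontext().prec = 30
kept = (Decimal(1) + Decimal("1e-20")) - Decimal(1)
```

    Here `lost` is exactly 0.0 while `kept` is 1e-20, which is the kind of gap a 10^-20-level simulation must close with extended-precision data types.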

  1. Precise Countersinking Tool

    NASA Technical Reports Server (NTRS)

    Jenkins, Eric S.; Smith, William N.

    1992-01-01

    Tool countersinks holes precisely with only portable drill; does not require costly machine tool. Replaceable pilot stub aligns axis of tool with centerline of hole. Ensures precise cut even with imprecise drill. Designed for relatively low cutting speeds.

  2. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    NASA Astrophysics Data System (ADS)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or errors in the estimated PET PSF were introduced.
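
    Region-based GTM corrections of this family reduce, at their core, to a small linear solve: observed regional means are modeled as a mixing matrix applied to the true regional activities, and the correction inverts that matrix. A minimal sketch with an illustrative, made-up 3-region matrix (not data from the paper):

```python
import numpy as np

# Hypothetical 3-region geometric transfer matrix: entry (i, j) is the
# fraction of region j's true activity observed in region i. In practice
# these come from convolving each region mask with the PET PSF (the RSFs).
W = np.array([
    [0.85, 0.10, 0.02],
    [0.10, 0.80, 0.08],
    [0.05, 0.10, 0.90],
])

true_activity = np.array([10.0, 4.0, 1.0])
observed = W @ true_activity          # what the blurred scan reports per region

# Partial volume correction: solve the linear system for the true means.
recovered = np.linalg.solve(W, observed)
```

    The methods differ in how the matrix entries are derived (regions vs. RSFs), which is what drives the precision and robustness differences reported above.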

  3. Precision agricultural systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture is a new farming practice that has been developing since the late 1980s. It has been variously referred to as precision farming, prescription farming, and site-specific crop management, to name but a few. There are numerous definitions for precision agriculture, but the central concept...

  4. Robust Adaptive Control

    NASA Technical Reports Server (NTRS)

    Narendra, K. S.; Annaswamy, A. M.

    1985-01-01

    Several concepts and results in robust adaptive control are discussed; the presentation is organized in three parts. The first part surveys existing algorithms. Different formulations of the problem and theoretical solutions that have been suggested are reviewed here. The second part contains new results related to the role of persistent excitation in robust adaptive systems and the use of hybrid control to improve robustness. In the third part, promising new areas for future research are suggested which combine different approaches currently known.

  5. Understanding the Delayed-Keyword Effect on Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Dunlosky, John; Griffin, Thomas D.; Wiley, Jennifer

    2005-01-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M.…

  6. Accuracy Studies of a Magnetometer-Only Attitude-and-Rate-Determination System

    NASA Technical Reports Server (NTRS)

    Challa, M. (Editor); Wheeler, C. (Editor)

    1996-01-01

    A personal computer based system was recently prototyped that uses measurements from a three axis magnetometer (TAM) to estimate the attitude and rates of a spacecraft using no a priori knowledge of the spacecraft's state. Past studies using in-flight data from the Solar, Anomalous, and Magnetospheric Particles Explorer focused on the robustness of the system and demonstrated that attitude and rate estimates could be obtained to within 1.5 degrees (deg) and 0.01 deg per second (deg/sec), respectively, despite limitations in the data and in the accuracies of the truth models. This paper studies the accuracy of the Kalman filter in the system using several orbits of in-flight Earth Radiation Budget Satellite (ERBS) data and attitude and rate truth models obtained from high precision sensors to demonstrate the practical capabilities. This paper shows the following: Using telemetered TAM data, attitude accuracies of 0.2 to 0.4 deg and rate accuracies of 0.002 to 0.005 deg/sec (within ERBS attitude control requirements of 1 deg and 0.0005 deg/sec) can be obtained with minimal tuning of the filter; Replacing the TAM data in the telemetry with simulated TAM data yields corresponding accuracies of 0.1 to 0.2 deg and 0.002 to 0.005 deg/sec, thus demonstrating that the filter's accuracy can be significantly enhanced by further calibrating the TAM. Factors affecting the filter's accuracy and techniques for tuning the system's Kalman filter are also presented.

  7. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
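
    One of the common mass bias correction models the review refers to is Russell's exponential law: a reference isotope pair of known ratio fixes a bias exponent, which then corrects the ratio of interest. The sketch below uses Sr isotopes for illustration; the "measured" values are invented, while the isotope masses and the conventional 86Sr/88Sr value of 0.1194 are standard.

```python
import math

def mass_bias_factor(r_meas_ref, r_cert_ref, m_num, m_den):
    """Exponent f of Russell's exponential law from a reference isotope pair."""
    return math.log(r_cert_ref / r_meas_ref) / math.log(m_num / m_den)

def correct(r_meas, m_num, m_den, f):
    """Apply the exponential-law correction to a measured ratio."""
    return r_meas * (m_num / m_den) ** f

# Derive f by fixing the measured 86Sr/88Sr ratio to its accepted value...
f = mass_bias_factor(r_meas_ref=0.1176, r_cert_ref=0.1194,
                     m_num=85.9093, m_den=87.9056)

# ...then correct a (hypothetical) measured 87Sr/86Sr ratio with the same f.
corrected = correct(0.7098, m_num=86.9089, m_den=85.9093, f=f)
```

    By construction, applying the correction back to the reference pair reproduces the accepted ratio exactly, which is a convenient sanity check on any implementation.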

  8. Precision CW laser automatic tracking system investigated

    NASA Technical Reports Server (NTRS)

    Lang, K. T.; Lucy, R. F.; Mcgann, E. J.; Peters, C. J.

    1966-01-01

    Precision laser tracker capable of tracking a low-acceleration target to an accuracy of about 20 microradians rms is being constructed and tested. This laser tracker has the advantages of discriminating against other optical sources and of simultaneously measuring range.

  9. Automatic precision measurement of spectrograms.

    PubMed

    Palmer, B A; Sansonetti, C J; Andrew, K L

    1978-08-01

    A fully automatic comparator has been designed and implemented to determine precision wavelengths from high-resolution spectrograms. The accuracy attained is superior to that of an experienced operator using a semiautomatic comparator with a photoelectric setting device. The system consists of a comparator, slightly modified for simultaneous data acquisition from two parallel scans of the spectrogram, interfaced to a minicomputer. The software which controls the system embodies three innovations of special interest. (1) Data acquired from two parallel scans are compared and used to separate unknown from standard lines, to eliminate spurious lines, to identify blends of unknown with standard lines, to improve the accuracy of the measured positions, and to flag lines which require special examination. (2) Two classes of lines are automatically recognized and appropriate line finding methods are applied to each. This provides precision measurement for both simple and complex line profiles. (3) Wavelength determination using a least-squares fitted grating equation is supported in addition to polynomial interpolation. This is most useful in spectral regions with sparsely distributed standards. The principles and implementation of these techniques are fully described.
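
    The wavelength-determination step described in point (3), fitting a dispersion function to the standard lines by least squares and then evaluating it at the positions of unknown lines, can be sketched as follows. Here a low-order polynomial stands in for the fitted grating equation and the data are synthetic, generated from an assumed dispersion curve.

```python
import numpy as np

# Synthetic dispersion: wavelength (nm) as a smooth quadratic in comparator
# position (mm). The coefficients are illustrative, not real spectrogram data.
def dispersion(p):
    return 400.0 + 1.35 * p + 0.003 * p ** 2

pos_std = np.array([12.0, 35.0, 58.0, 82.0, 103.0])  # standard-line positions
lam_std = dispersion(pos_std)                        # their known wavelengths

# Least-squares fit of the dispersion polynomial to the standard lines...
coeffs = np.polyfit(pos_std, lam_std, deg=2)

# ...then wavelengths of unknown lines follow by evaluating the fit.
lam_unknown = float(np.polyval(coeffs, 45.0))
```

    A fitted grating equation plays the same role as the polynomial here but extrapolates better where standards are sparse, which is the advantage the abstract notes.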

  10. Robust Critical Point Detection

    SciTech Connect

    Bhatia, Harsh

    2016-07-28

    Robust Critical Point Detection is software for robustly computing critical points in a 2D or 3D vector field. The software was developed as part of the author's work at the lab as a PhD student under the Livermore Scholar Program (now called the Livermore Graduate Scholar Program).

  11. Mechanisms for Robust Cognition

    ERIC Educational Resources Information Center

    Walsh, Matthew M.; Gluck, Kevin A.

    2015-01-01

    To function well in an unpredictable environment using unreliable components, a system must have a high degree of robustness. Robustness is fundamental to biological systems and is an objective in the design of engineered systems such as airplane engines and buildings. Cognitive systems, like biological and engineered systems, exist within…

  12. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters to precision repeatability of performance. All lamps are not equal. There are a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of a precision performance lamp must include the factors of a precision lamp; what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered, not individually but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal-to-noise ratios, (4) higher manufacturing yields, (5) lower system costs, and (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics aids in system design and manufacturing to build and market more market-acceptable products.

  13. Precise Positioning with Multi-GNSS and its Advantage for Seismic Parameters Inversion

    NASA Astrophysics Data System (ADS)

    Chen, K.; Li, X.; Babeyko, A. Y.; Ge, M.

    2015-12-01

    Together with the ongoing modernization of the U.S. GPS and Russian GLONASS, the two new emerging global navigation satellite systems (BeiDou from China and Galileo from the European Union) are already running, and the multi-GNSS era is coming. Compared with a single system, multi-GNSS can significantly improve satellite visibility, optimize the spatial geometry, and reduce dilution of precision, and will be of great benefit to both scientific applications and engineering services. In this contribution, we focus mainly on its potential advantages for earthquake parameter estimation and tsunami early warning. First, we assess the precise positioning performance of multi-GNSS by an outdoor experiment on a shaking table. Three positioning methods were used to retrieve the simulated seismic signal: precise point positioning (PPP), variometric approach for displacements analysis stand-alone engine (VADASE) and temporal point positioning (TPP). In addition, with respect to VADASE and TPP, we extended the original dual-frequency model to a single-frequency model and then tested the algorithms. Accuracy, reliability, and continuity were evaluated and analyzed in detail accordingly. Our results revealed that multi-GNSS offers more precise and robust positioning results than GPS alone. Finally, as a case study, multi-GNSS data recorded during the 2014 Pisagua Earthquake were re-processed. Using co-seismic displacements from GPS and multi-GNSS, the earthquake source and the resulting tsunami were inverted.
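
    The claim that more satellites reduce dilution of precision can be made concrete: DOP is computed from the least-squares geometry matrix built from receiver-to-satellite unit vectors, and adding satellites can only shrink it. A minimal sketch (the azimuths and elevations are invented, not from the experiment):

```python
import numpy as np

def gdop(unit_vectors):
    """Geometric dilution of precision from receiver-to-satellite unit vectors."""
    G = np.hstack([np.asarray(unit_vectors), np.ones((len(unit_vectors), 1))])
    Q = np.linalg.inv(G.T @ G)        # covariance shape factor of the LS solution
    return float(np.sqrt(np.trace(Q)))

def los(az_deg, el_deg):
    """East-north-up unit line-of-sight vector from azimuth and elevation."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return [np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)]

# Four visible satellites versus the same four plus four more, as a stand-in
# for the extra satellites a multi-GNSS receiver tracks.
four = [los(0, 60), los(90, 30), los(180, 45), los(270, 30)]
eight = four + [los(45, 20), los(135, 70), los(225, 25), los(315, 50)]
```

    Since each extra satellite adds a positive-semidefinite term to G^T G, gdop(eight) is necessarily smaller than gdop(four): better geometry, smaller error amplification.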

  14. Toward Precision Healthcare: Context and Mathematical Challenges

    PubMed Central

    Colijn, Caroline; Jones, Nick; Johnston, Iain G.; Yaliraki, Sophia; Barahona, Mauricio

    2017-01-01

    Precision medicine refers to the idea of delivering the right treatment to the right patient at the right time, usually with a focus on a data-centered approach to this task. In this perspective piece, we use the term “precision healthcare” to describe the development of precision approaches that bridge from the individual to the population, taking advantage of individual-level data, but also taking the social context into account. These problems give rise to a broad spectrum of technical, scientific, policy, ethical and social challenges, and new mathematical techniques will be required to meet them. To ensure that the science underpinning “precision” is robust, interpretable and well-suited to meet the policy, ethical and social questions that such approaches raise, the mathematical methods for data analysis should be transparent, robust, and able to adapt to errors and uncertainties. In particular, precision methodologies should capture the complexity of data, yet produce tractable descriptions at the relevant resolution while preserving intelligibility and traceability, so that they can be used by practitioners to aid decision-making. Through several case studies in this domain of precision healthcare, we argue that this vision requires the development of new mathematical frameworks, both in modeling and in data analysis and interpretation. PMID:28377724

  15. [Precision and personalized medicine].

    PubMed

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms "phenotype", "endotype" and "biomarker" in order to characterize the various diseases more precisely. Using "biomarkers", a homogeneous type of a disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation to allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  16. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided, with the precision measuring means secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate, with the first plate positioned between the second plate and the third plate. A fourth plate is secured to the third plate, with the second plate positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the positions of the first plate, the second plate, the third plate, and the fourth plate relative to each other.

  17. Precision aerial application for site-specific rice crop management

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture includes different technologies that allow agricultural professional to use information management tools to optimize agriculture production. The new technologies allow aerial application applicators to improve application accuracy and efficiency, which saves time and money for...

  18. Robustness. [in space systems

    NASA Technical Reports Server (NTRS)

    Ryan, Robert

    1993-01-01

    The concept of robustness includes design simplicity, component and path redundancy, desensitization to parameter and environment variations, control of parameter variations, and punctual operations. These characteristics must be traded, together with functional concepts, materials, and fabrication approach, against the criteria of performance, cost, and reliability. The paper describes the robustness design process, which includes the following seven major coherent steps: translation of vision into requirements, definition of the desired robustness characteristics, formulation of criteria for the required robustness, concept selection, detail design, manufacturing and verification, and operations.

  19. Robust fault detection and isolation in stochastic systems

    NASA Astrophysics Data System (ADS)

    George, Jemin

    2012-07-01

    This article outlines the formulation of a robust fault detection and isolation (FDI) scheme that can precisely detect and isolate simultaneous actuator and sensor faults for uncertain linear stochastic systems. The given robust fault detection scheme based on the discontinuous robust observer approach would be able to distinguish between model uncertainties and actuator failures and therefore eliminate the problem of false alarms. Since the proposed approach involves estimating sensor faults, it can also be used for sensor fault identification and the reconstruction of true outputs from faulty sensor outputs. Simulation results presented here validate the effectiveness of the proposed robust FDI system.

  20. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres, caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26° and a mean angular difference from specialist delineations (accuracy) of 1.64°.

  1. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  2. Robust Bayesian Fluorescence Lifetime Estimation, Decay Model Selection and Instrument Response Determination for Low-Intensity FLIM Imaging

    PubMed Central

    Rowley, Mark I.; Coolen, Anthonius C. C.; Vojnovic, Borivoj; Barber, Paul R.

    2016-01-01

    We present novel Bayesian methods for the analysis of exponential decay data that exploit the evidence carried by every detected decay event and enable robust extension to advanced processing. Our algorithms are presented in the context of fluorescence lifetime imaging microscopy (FLIM), and particular attention has been paid to modeling the time-domain system (based on time-correlated single photon counting) with unprecedented accuracy. We present estimates of decay parameters for mono- and bi-exponential systems, offering up to a factor of two improvement in accuracy compared to previous popular techniques. Results of the analysis of synthetic and experimental data are presented, and areas where the superior precision of our techniques can be exploited in Förster Resonance Energy Transfer (FRET) experiments are described. Furthermore, we demonstrate two advanced processing methods: decay model selection to choose between differing models such as mono- and bi-exponential, and the simultaneous estimation of instrument and decay parameters. PMID:27355322
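
    For the mono-exponential case, the core estimation problem can be sketched in a few lines: photon arrival times are exponentially distributed, the maximum-likelihood lifetime is simply the sample mean, and a conjugate Gamma prior on the decay rate gives a closed-form Bayesian update. This is a toy sketch under those assumptions, not the authors' algorithm, which additionally models the full TCSPC instrument response.

```python
import numpy as np

rng = np.random.default_rng(0)
tau_true = 2.5                                # ns, hypothetical lifetime
arrivals = rng.exponential(tau_true, 5000)    # simulated photon arrival times

# Maximum likelihood: for a mono-exponential decay, simply the sample mean.
tau_mle = arrivals.mean()

# Conjugate Bayesian update: a Gamma(a0, b0) prior on the decay *rate* 1/tau
# yields a Gamma(a0 + n, b0 + sum(t)) posterior, and E[tau | data] = b/(a - 1).
a0, b0 = 1.0, 1.0                             # assumed weak prior
a_post = a0 + len(arrivals)
b_post = b0 + arrivals.sum()
tau_bayes = b_post / (a_post - 1.0)           # posterior-mean lifetime
```

    With thousands of counts the two estimates nearly coincide; the Bayesian machinery earns its keep at the low intensities the title refers to, where the prior and the per-event likelihood stabilize the estimate.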

  3. Robust visual tracking with contiguous occlusion constraint

    NASA Astrophysics Data System (ADS)

    Wang, Pengcheng; Qian, Weixian; Chen, Qian

    2016-02-01

    Visual tracking plays a fundamental role in video surveillance, robot vision and many other computer vision applications. In this paper, a robust visual tracking method that is motivated by the regularized ℓ1 tracker is proposed. We focus on investigating the case in which the target object is occluded. Generally, occlusion can be treated as a kind of contiguous outlier against the target object. However, the penalty function of the ℓ1 tracker is not robust for relatively dense error distributed in contiguous regions. Thus, we exploit a nonconvex penalty function and MRFs for outlier modeling, which is more likely to detect the contiguous occluded regions and recover the target appearance. For long-term tracking, a particle filter framework along with a dynamic model update mechanism is developed. Both qualitative and quantitative evaluations demonstrate robust and precise performance.

  4. Nanotechnology Based Environmentally Robust Primers

    SciTech Connect

    Barbee, T W Jr; Gash, A E; Satcher, J H Jr; Simpson, R L

    2003-03-18

    An initiator device structure consisting of an energetic metallic nano-laminate foil coated with a sol-gel derived energetic nano-composite has been demonstrated. The device structure consists of a precision sputter deposition synthesized nano-laminate energetic foil of non-toxic and non-hazardous metals along with a ceramic-based energetic sol-gel produced coating made up of non-toxic and non-hazardous components such as ferric oxide and aluminum metal. Both the nano-laminate and sol-gel technologies are versatile commercially viable processes that allow the "engineering" of properties such as mechanical sensitivity and energy output. The nano-laminate serves as the mechanically sensitive precision igniter and the energetic sol-gel functions as a low-cost, non-toxic, non-hazardous booster in the ignition train. In contrast to other energetic nanotechnologies these materials can now be safely manufactured at application required levels, are structurally robust, have reproducible and engineerable properties, and have excellent aging characteristics.

  5. Precision antenna reflector structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1985-01-01

    The assembly of the Large Precise Reflector Infrared Telescope is detailed. Also given are the specifications for the Aft Cargo Carrier and the Large Precision Reflector structure. Packaging concepts and options, stowage depth and support truss geometry are also considered. An example of a construction scenario is given.

  6. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  7. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.
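
    The randomized hypothesize-and-test loop described above can be illustrated with a stripped-down variant. Here the "shape-and-pose" model is reduced to a pure translation, which is not the paper's Bayesian shape model, but it shows the subset-sampling logic: fit on random subsets of feature points, score each hypothesis by how many points it explains, and keep the best.

```python
import numpy as np

def fit_translation(src, dst):
    """Least-squares translation between matched point subsets (toy model)."""
    return dst.mean(axis=0) - src.mean(axis=0)

def ransac_align(model_pts, detected_pts, n_hyp=200, inlier_tol=0.5, seed=0):
    """Hypothesize-and-test: fit on random subsets, keep the best-scoring fit."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(n_hyp):
        idx = rng.choice(len(model_pts), size=3, replace=False)
        t = fit_translation(model_pts[idx], detected_pts[idx])
        err = np.linalg.norm(model_pts + t - detected_pts, axis=1)
        inliers = int((err < inlier_tol).sum())
        if inliers > best_inliers:
            best_t, best_inliers = t, inliers
    return best_t, best_inliers

# 12-point shape, true translation (5, -3); 4 detections are gross outliers,
# standing in for occlusions or spurious background features.
rng = np.random.default_rng(1)
shape = rng.uniform(0, 10, (12, 2))
detected = shape + np.array([5.0, -3.0]) + rng.normal(0, 0.05, (12, 2))
detected[:4] += rng.uniform(20, 30, (4, 2))
t, inliers = ransac_align(shape, detected)
```

    Because an all-inlier subset is almost certain to be drawn within a couple of hundred hypotheses, the recovered translation is unaffected by the four gross outliers, which is exactly the failure mode of whole-shape Gaussian fitting that the paper targets.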

  8. Asynchronous RTK precise DGNSS positioning method for deriving a low-latency high-rate output

    NASA Astrophysics Data System (ADS)

    Liang, Zhang; Hanfeng, Lv; Dingjie, Wang; Yanqing, Hou; Jie, Wu

    2015-07-01

    Low-latency high-rate (1 Hz) precise real-time kinematic (RTK) positioning can be applied in high-speed scenarios such as automatic aircraft landing, precision agriculture and intelligent vehicles. The classic synchronous RTK (SRTK) precise differential GNSS (DGNSS) positioning technology, however, cannot deliver a low-latency high-rate output for the rover receiver because of long data link transmission time delays (DLTTD) from the reference receiver. To overcome the long DLTTD, this paper proposes an asynchronous real-time kinematic (ARTK) method using asynchronous observations from the two receivers. The asynchronous observation model (AOM) is developed from the undifferenced carrier phase observation equations of the two receivers at different epochs over a short baseline. Ephemeris error and atmospheric delay are the main potential error sources affecting positioning accuracy in this model, and both are analyzed theoretically. For a short DLTTD during quiet ionospheric activity, the dominant error sources degrading positioning accuracy are satellite orbital errors: the "inverted ephemeris error" and the integral of the satellite velocity error, both of which grow linearly with the DLTTD. Cycle slips in the asynchronous double-differenced carrier phase are detected by the TurboEdit method and repaired by the additional-ambiguity-parameter method. The AOM can also handle the synchronous observation model (SOM) and achieve a precise positioning solution with synchronous observations, since the SOM is only a special case of the AOM. The proposed method can not only reduce the cost of data collection and transmission, but also support transferring the reference receiver's data over a mobile phone network data link. The method avoids the data synchronization process apart from the ambiguity initialization step, which is very convenient for real-time vehicle navigation.
The static and kinematic experiment results show that this method achieves 20 Hz or even higher rate output in

  9. Landsat wildland mapping accuracy

    USGS Publications Warehouse

    Todd, William J.; Gehring, Dale G.; Haman, J. F.

    1980-01-01

    A Landsat-aided classification of ten wildland resource classes was developed for the Shivwits Plateau region of the Lake Mead National Recreation Area. Single stage cluster sampling (without replacement) was used to verify the accuracy of each class.

  10. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems however, as in the HSCT example, this robust design approach developed for efficient and comprehensive design breaks down with the problem of size - combinatorial explosion in experimentation and model building with the number of variables - and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
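    The combinatorial explosion the authors describe is easy to quantify: a full three-level factorial experiment needs 3^k runs for k variables, whereas compact response-surface designs such as the central composite design grow far more slowly. A generic illustration, not tied to the HSCT study's actual experiment sizes:

```python
def full_factorial_runs(k, levels=3):
    """Runs needed to evaluate every combination of k variables at the given levels."""
    return levels ** k

def central_composite_runs(k):
    """Runs for a central composite design: 2^k corner points, 2k axial points, 1 center."""
    return 2 ** k + 2 * k + 1

# Run counts explode with the number of design variables k.
for k in (5, 10, 20, 30):
    print(k, full_factorial_runs(k), central_composite_runs(k))
```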

  11. Trap array configuration influences estimates and precision of black bear density and abundance.

    PubMed

    Wilton, Clay M; Puckett, Emily E; Beringer, Jeff; Gardner, Beth; Eggert, Lori S; Belant, Jerrold L

    2014-01-01

    Spatial capture-recapture (SCR) models have advanced our ability to estimate population density for wide-ranging animals by explicitly incorporating individual movement. Though these models are more robust to various spatial sampling designs, few studies have empirically tested different large-scale trap configurations using SCR models. We investigated how extent of trap coverage and trap spacing affect precision and accuracy of SCR parameters, implementing models using the R package secr. We tested two trapping scenarios, one spatially extensive and one intensive, using black bear (Ursus americanus) DNA data from hair snare arrays in south-central Missouri, USA. We also examined the influence that adding a second, lower barbed-wire strand to snares had on quantity and spatial distribution of detections. We simulated trapping data to test bias in density estimates of each configuration under a range of density and detection parameter values. Field data showed that using multiple arrays with intensive snare coverage produced more detections of more individuals than extensive coverage. Consequently, density and detection parameters were more precise for the intensive design. Density was estimated as 1.7 bears per 100 km2 and was 5.5 times greater than that under extensive sampling. Abundance was 279 (95% CI = 193-406) bears in the 16,812 km2 study area. Excluding detections from the lower strand resulted in the loss of 35 detections, 14 unique bears, and the largest recorded movement between snares. All simulations showed low bias for density under both configurations. Results demonstrated that in low density populations with non-uniform distribution of population density, optimizing the tradeoff among snare spacing, coverage, and sample size is of critical importance to estimating parameters with high precision and accuracy. With limited resources, allocating available traps to multiple arrays with intensive trap spacing increased the amount of information
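    Why intensive snare spacing yields more detections can be sketched with the standard SCR half-normal detection function, g(d) = g0 · exp(−d²/2σ²). The parameter values below are illustrative, not the study's estimates:

```python
import math

def halfnormal_p(d, g0=0.2, sigma=2.0):
    """Half-normal detection probability at distance d (km) from an activity centre."""
    return g0 * math.exp(-d * d / (2 * sigma * sigma))

def expected_detections(spacing_km, n_snares=5, occasions=10):
    """Expected detections for a linear snare array centred on the activity centre."""
    centre = (n_snares - 1) / 2
    return occasions * sum(halfnormal_p(abs(i - centre) * spacing_km)
                           for i in range(n_snares))

intensive = expected_detections(spacing_km=2.0)   # snares one sigma apart
extensive = expected_detections(spacing_km=8.0)   # snares four sigma apart
```

    With snares spaced several σ apart, only the snare nearest the activity centre contributes meaningfully, so the expected detection count (and hence the precision of σ and density estimates) drops.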

  13. Robustness of the near infrared spectroscopy method determined using univariate and multivariate approach.

    PubMed

    Pojić, Milica; Mastilović, Jasna; Majcen, Nineta

    2012-10-01

    Robustness assessment is part of a method validation protocol, during which several characteristics of an analytical method (e.g. accuracy, repeatability, reproducibility, linearity, intermediate precision, measurement uncertainty) are also evaluated in order to assess its fitness for purpose. The purpose of robustness assessment of the near infrared spectroscopy (NIRS) method is to indicate which factors significantly influence the obtained results, and to point to potential problems that might occur in routine application of the method. The assessment of robustness of the NIRS method included variation of certain operational and environmental factors at three levels (-1, 0, 1), applying both univariate (one-variable-at-a-time, OVAT) and multivariate (multivariate-at-a-time, MVAT) approaches to the experimental design. The operational and environmental factors varied were the number of subsamples measured in the NIRS measurement (1), environmental temperature (2), sample temperature (3), environmental air humidity (4), instrument voltage (5) and lamp aging (6). Regardless of the applied experimental design, the external factors with a significant influence on the obtained NIRS results were identified, as were the potential problems that might occur in routine application of the method. To avoid them, every effort should be made to stabilize the instrument and sample temperature and to standardize the homogeneity and number of subsamples measured in the NIRS measurement. Moreover, the obtained results highlighted that NIRS instruments should be operated through a voltage regulator.
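    The OVAT and full-factorial (MVAT) designs differ sharply in run count. A sketch enumerating both for six factors at three levels (-1, 0, 1); the factor names are paraphrased from the abstract and the enumeration is generic, not the authors' exact design:

```python
import itertools

LEVELS = (-1, 0, 1)
FACTORS = ["subsamples", "env_temp", "sample_temp", "humidity", "voltage", "lamp_aging"]

def ovat_runs(n_factors):
    """One-variable-at-a-time: start at the centre point, then move each factor alone."""
    centre = (0,) * n_factors
    runs = [centre]
    for i in range(n_factors):
        for lev in (-1, 1):
            run = list(centre)
            run[i] = lev
            runs.append(tuple(run))
    return runs

def mvat_runs(n_factors):
    """Full three-level factorial: every combination of factor levels (detects interactions)."""
    return list(itertools.product(LEVELS, repeat=n_factors))

print(len(ovat_runs(len(FACTORS))), len(mvat_runs(len(FACTORS))))
```

    OVAT needs only 1 + 2k runs but cannot detect factor interactions; the full factorial can, at the cost of 3^k runs.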

  14. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus for enabling accurate retention of a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured in x, y repeatability, of at least 140 μm.

  15. Magnetoresistive Current Sensors for High Accuracy, High Bandwidth Current Measurement in Spacecraft Power Electronics

    NASA Astrophysics Data System (ADS)

    Slatter, Rolf; Goffin, Benoit

    2014-08-01

    The usage of magnetoresistive (MR) current sensors is increasing steadily in the field of power electronics. Current sensors must not only be accurate and dynamic, but must also be compact and robust. The MR effect is the basis for current sensors with a unique combination of precision and bandwidth in a compact package. A space-qualifiable magnetoresistive current sensor with high accuracy and high bandwidth is being jointly developed by the sensor manufacturer Sensitec and the spacecraft power electronics supplier Thales Alenia Space (TAS) Belgium. Test results for breadboards incorporating commercial-off-the-shelf (COTS) sensors are presented, as well as an application example in the electronic control and power unit for the thrust vector actuators of the Ariane 5 ME launcher.

  16. A 3-D Multilateration: A Precision Geodetic Measurement System

    NASA Technical Reports Server (NTRS)

    Escobal, P. R.; Fliegel, H. F.; Jaffe, R. M.; Muller, P. M.; Ong, K. M.; Vonroos, O. H.

    1972-01-01

    A system was designed with the capability of determining 1-cm accuracy station positions in three dimensions using pulsed laser earth satellite tracking stations coupled with strictly geometric data reduction. With this high accuracy, several crucial geodetic applications become possible, including earthquake hazards assessment, precision surveying, plate tectonics, and orbital determination.

  17. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new attitude estimation algorithm, referred to as the unscented predictive variable structure filter (UPVSF), is presented. The strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time with the ability to estimate the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; it therefore has the advantage of handling various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors than the traditional unscented Kalman filter (UKF).
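    The UT sampling strategy at the core of the UPVSF can be illustrated in the scalar case: deterministic sigma points and weights propagate a mean and variance through a nonlinearity without linearization. A minimal sketch with standard UT weights; the full filter adds prediction and the variable-structure correction on top of this:

```python
import math

def unscented_transform(mu, var, f, alpha=1.0, kappa=2.0):
    """Scalar unscented transform: propagate (mu, var) through a nonlinear function f."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigma_pts = [mu, mu + spread, mu - spread]      # 2n + 1 sigma points
    w0 = lam / (n + lam)
    wi = 1.0 / (2 * (n + lam))
    weights = [w0, wi, wi]
    y = [f(x) for x in sigma_pts]
    mean = sum(w * v for w, v in zip(weights, y))
    variance = sum(w * (v - mean) ** 2 for w, v in zip(weights, y))
    return mean, variance

# For a quadratic nonlinearity the UT mean is exact: E[x^2] = mu^2 + var.
m, v = unscented_transform(mu=3.0, var=0.5, f=lambda x: x * x)
```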

  18. A demonstration of sub-meter GPS orbit determination and high precision user positioning

    NASA Technical Reports Server (NTRS)

    Bertiger, Willy I.; Lichten, Stephen M.; Katsigris, Eugenia C.

    1988-01-01

    It was demonstrated that submeter GPS (Global Positioning System) orbits can be determined using multiday arc solutions with the current GPS constellation subset visible for about 8 h each day from North America. Submeter orbit accuracy was shown through orbit repeatability and orbit prediction. North American baselines of 1000-2000 km length can be estimated simultaneously with the GPS orbits to an accuracy of better than 1.5 parts in 10⁸ (3 cm over a 2000 km distance) with a daily precision of two parts in 10⁸ or better. The most reliable baseline solutions are obtained using the same type of receivers and antennas at each end of the baseline. Baselines greater than 1000 km in length from Florida to sites in the Caribbean region have also been determined with daily precision of 1-4 parts in 10⁸. The Caribbean sites are located well outside the fiducial tracking network and the region of optimal GPS common visibility. Thus, these results further demonstrate the robustness of the multiday arc GPS orbit solutions.
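    The fractional "parts in 10⁸" figures convert directly to length errors over a baseline; a quick check of the 3 cm over 2000 km quoted above:

```python
def baseline_error_cm(parts_in_1e8, baseline_km):
    """Convert a fractional baseline accuracy in parts in 10^8 to centimetres."""
    return parts_in_1e8 * 1e-8 * baseline_km * 1e5  # 1 km = 1e5 cm

print(baseline_error_cm(1.5, 2000))  # the 3 cm over 2000 km quoted in the abstract
```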

  19. Engineering robust intelligent robots

    NASA Astrophysics Data System (ADS)

    Hall, E. L.; Ali, S. M. Alhaj; Ghaffari, M.; Liao, X.; Cao, M.

    2010-01-01

    The purpose of this paper is to discuss the challenge of engineering robust intelligent robots. Robust intelligent robots may be considered as ones that work not only in one environment but in all types of situations and conditions. Our past work has described sensors for intelligent robots that permit adaptation to changes in the environment. We have also described the combination of these sensors with a "creative controller" that permits adaptive critic, neural network learning, and a dynamic database that permits task selection and criteria adjustment. However, the emphasis of this paper is on engineering solutions that are designed for robust operations and worst-case situations, such as day/night cameras or rain and snow solutions. This ideal model may be compared to various approaches that have been implemented on "production vehicles and equipment" using Ethernet, CAN Bus and JAUS architectures and to modern, embedded, mobile computing architectures. Many prototype intelligent robots have been developed and demonstrated in terms of scientific feasibility but few have reached the stage of a robust engineering solution. Continual innovation and improvement are still required. The significance of this comparison is that it provides some insights that may be useful in designing future robots for various manufacturing, medical, and defense applications where robust and reliable performance is essential.

  20. Robust Unit Commitment Considering Uncertain Demand Response

    DOE PAGES

    Liu, Guodong; Tomsovic, Kevin

    2014-09-28

    Although price-responsive demand response has been widely accepted as playing an important role in the reliable and economic operation of power systems, the real response from the demand side can be highly uncertain due to limited understanding of consumers' response to pricing signals. To model the behavior of consumers, the price elasticity of demand has been explored and utilized in both research and real practice. However, the price elasticity of demand is not precisely known and may vary greatly with operating conditions and types of customers. To accommodate the uncertainty of demand response, alternative unit commitment methods robust to the uncertainty of the demand response require investigation. In this paper, a robust unit commitment model to minimize the generalized social cost is proposed for the optimal unit commitment decision taking into account the uncertainty of the price elasticity of demand. By optimizing the worst case under a proper robustness level, the unit commitment solution of the proposed model is robust against all possible realizations of the modeled uncertain demand response. Numerical simulations on the IEEE Reliability Test System show the effectiveness of the method. Finally, compared to unit commitment with deterministic price elasticity of demand, the proposed robust model can reduce the average Locational Marginal Prices (LMPs) as well as the price volatility.
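    The role of uncertain elasticity can be sketched with a constant-elasticity demand curve: a robust commitment must cover the worst-case (largest) residual demand over the whole elasticity interval. A toy illustration with made-up numbers, not the paper's optimization model:

```python
def demand(price, base_price, base_demand, elasticity):
    """Constant-elasticity demand curve: Q = Q0 * (p / p0) ** eps, with eps < 0."""
    return base_demand * (price / base_price) ** elasticity

def worst_case_demand(price, base_price, base_demand, eps_lo, eps_hi, n=101):
    """Maximum demand over an elasticity uncertainty interval [eps_lo, eps_hi] --
    the quantity a robust unit commitment must be able to serve."""
    grid = [eps_lo + (eps_hi - eps_lo) * i / (n - 1) for i in range(n)]
    return max(demand(price, base_price, base_demand, e) for e in grid)

# A price rise from 40 to 60 $/MWh; elasticity known only to lie in [-0.4, -0.1].
wc = worst_case_demand(60.0, 40.0, 1000.0, -0.4, -0.1)
```

    For a price increase, the worst case is the least elastic consumer response (eps closest to zero), since demand then barely drops.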

  2. Robustness analysis of stochastic biochemical systems.

    PubMed

    Ceska, Milan; Safránek, David; Dražan, Sven; Brim, Luboš

    2014-01-01

    We propose a new framework for rigorous robustness analysis of stochastic biochemical systems based on probabilistic model checking techniques. We adapt the general definition of robustness introduced by Kitano to the class of stochastic systems modelled as continuous-time Markov chains in order to extensively analyse and compare the robustness of biological models with uncertain parameters. The framework utilises novel computational methods that enable effective evaluation of the robustness of models with respect to quantitative temporal properties and parameters such as reaction rate constants and initial conditions. We have applied the framework to gene regulation as an example of a central biological mechanism where intrinsic and extrinsic stochasticity plays a crucial role due to low numbers of DNA and RNA molecules. Using our methods we have obtained a comprehensive and precise analysis of stochastic dynamics under parameter uncertainty. Furthermore, we apply our framework to compare several variants of two-component signalling networks from the perspective of robustness with respect to intrinsic noise caused by low populations of signalling components. We have successfully extended previous studies performed on deterministic (ODE) models and showed that stochasticity may significantly affect the obtained predictions. Our case studies demonstrate that the framework can provide deeper insight into the role of key parameters in maintaining system functionality, and thus significantly contributes to formal methods in computational systems biology.
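    Kitano's definition scores a property over a space of perturbed parameters. A minimal sketch for a birth-death gene expression model, using the deterministic steady-state mean as a cheap stand-in for the full stochastic model checking the paper performs; the property, rates, and perturbation range are all illustrative:

```python
def steady_state_mean(k_production, k_degradation):
    """Steady-state mean copy number of a birth-death gene expression model."""
    return k_production / k_degradation

def robustness(property_ok, nominal_k, perturbation=0.3, n=601):
    """Kitano-style robustness: average property satisfaction over a perturbation set,
    here a uniform grid of production rates within +/- `perturbation` of nominal."""
    lo, hi = nominal_k * (1 - perturbation), nominal_k * (1 + perturbation)
    ks = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return sum(property_ok(steady_state_mean(k, 1.0)) for k in ks) / n

# Property: mean expression stays within 20% of its nominal value of 10 molecules.
R = robustness(lambda m: 8.0 <= m <= 12.0, nominal_k=10.0)
```

    The rate interval [7, 13] contains the satisfying sub-interval [8, 12], so R is about two thirds: the system tolerates most, but not all, of the modelled perturbation.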

  3. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy, composition precision). The spatial accuracy of the microscope is evaluated in this paper for the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing such a structure, composed of an oxide layer of high electrical permittivity (high-k dielectric) that separates the metal gate and the semiconductor channel of a field-effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret the inaccuracy in the depth reconstruction of these devices in APT.

  4. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line unbalance the bridge and create a voltage that is directly measurable across the bridge.

  5. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  6. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. The system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with industry-standard High Pressure Ionization Chamber detector hardware. A new highly sensitive readout circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. A new signal processing technique and dedicated data acquisition were tested together with the new readout. The system enabled data collection on a remote Linux-operated workstation connected to the detectors over a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include extremely low leakage current in the input circuit, true charge-integrating operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  7. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described that enables real-time accountability over the displacement feedback systems applied to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with the performed motion. These measurements are compared to characterize and analyze real-time mechanical and control performance during operation.

  8. Deep Coupled Integration of CSAC and GNSS for Robust PNT.

    PubMed

    Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi

    2015-09-11

    Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, GNSS cannot provide effective PNT services under physical blockage, such as in natural canyons, urban canyons, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip-scale atomic clock (CSAC) has gradually matured, and its performance is constantly improving. A deep coupled integration of CSAC and GNSS is explored in this paper to enhance PNT robustness. "Clock coasting" of the CSAC provides time synchronized with GNSS and optimizes the navigation equations. However, clock-coasting errors increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, a weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while a Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by the integration, and the integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
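    The "GNSS-corrected CSAC" direction can be sketched as a scalar Kalman filter tracking clock bias: the coasting clock drifts slowly (small process noise), while GNSS time is stable but noisy (large measurement noise). This is a toy model with illustrative numbers, not the paper's filter:

```python
import random

def kalman_clock(measurements, q=1e-4, r=1.0):
    """Scalar Kalman filter: track CSAC clock bias from noisy GNSS time measurements.
    q: clock-coasting (random walk) process noise; r: GNSS measurement noise variance."""
    x, p = measurements[0], r          # initialise from the first GNSS fix
    estimates = []
    for z in measurements[1:]:
        p = p + q                      # predict: coasting error grows over time
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the stable-but-noisy GNSS time
        p = (1 - k) * p
        estimates.append(x)
    return estimates, p

rng = random.Random(42)
true_bias = 5.0                        # constant clock bias (illustrative units)
meas = [true_bias + rng.gauss(0, 1.0) for _ in range(200)]
est, var = kalman_clock(meas)
```

    After a couple of hundred updates the posterior variance settles near sqrt(q·r), far below the raw GNSS measurement noise, which is the sense in which the noisy GNSS time disciplines the drifting CSAC.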

  10. Precision medicine in cardiology.

    PubMed

    Antman, Elliott M; Loscalzo, Joseph

    2016-10-01

    The cardiovascular research and clinical communities are ideally positioned to address the epidemic of noncommunicable causes of death, as well as advance our understanding of human health and disease, through the development and implementation of precision medicine. New tools will be needed for describing the cardiovascular health status of individuals and populations, including 'omic' data, exposome and social determinants of health, the microbiome, behaviours and motivations, patient-generated data, and the array of data in electronic medical records. Cardiovascular specialists can build on their experience and use precision medicine to facilitate discovery science and improve the efficiency of clinical research, with the goal of providing more precise information to improve the health of individuals and populations. Overcoming the barriers to implementing precision medicine will require addressing a range of technical and sociopolitical issues. Health care under precision medicine will become a more integrated, dynamic system, in which patients are no longer a passive entity on whom measurements are made, but instead are central stakeholders who contribute data and participate actively in shared decision-making. Many traditionally defined diseases have common mechanisms; therefore, elimination of a siloed approach to medicine will ultimately pave the path to the creation of a universal precision medicine environment.

  11. Numerical accuracy assessment

    NASA Astrophysics Data System (ADS)

    Boerstoel, J. W.

    1988-12-01

    A framework is provided for numerical accuracy assessment. The purpose of numerical flow simulations is formulated. This formulation concerns the classes of aeronautical configurations (boundaries), the desired flow physics (flow equations and their properties), the classes of flow conditions on flow boundaries (boundary conditions), and the initial flow conditions. Next, accuracy and economical performance requirements are defined; the final numerical flow simulation results of interest should have a guaranteed accuracy, and be produced for an acceptable FLOP-price. Within this context, the validation of numerical processes with respect to the well-known topics of consistency, stability, and convergence under mesh refinement must be done by numerical experimentation, because theory gives only partial answers. This requires careful design of test cases for numerical experimentation. Finally, the results of a few recent evaluation exercises of numerical experiments with a large number of codes on a few test cases are summarized.
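    Numerical experimentation on convergence under mesh refinement typically estimates an observed order of accuracy from solutions on three systematically refined meshes. A sketch using a manufactured solution; the formula is the standard Richardson-style estimate, not taken from this report:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio=2.0):
    """Observed order of convergence p from solutions on three meshes refined by a
    constant ratio r: p = log((f1 - f2) / (f2 - f3)) / log(r)."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(refinement_ratio)

# Manufactured example: f(h) = f_exact + C * h^2 on meshes h, h/2, h/4.
f_exact, C, h = 1.0, 0.3, 0.1
f1, f2, f3 = (f_exact + C * hh * hh for hh in (h, h / 2, h / 4))
p = observed_order(f1, f2, f3)
```

    Recovering p close to the scheme's formal order is the kind of evidence such test cases are designed to produce; a mismatch signals a consistency or implementation problem.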

  12. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high-precision results; however, real-life situations do not always let us collect 24 h of data. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of a year, over two years in succession, are used in the analysis. Our major conclusion is that a reformulation of the GPS positioning accuracy is necessary when taking seasonal effects into account, and the typical one-term accuracy formulation is expanded to a two-term one.

  13. Robustness of spatial micronetworks

    NASA Astrophysics Data System (ADS)

    McAndrew, Thomas C.; Danforth, Christopher M.; Bagrow, James P.

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.
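    The length-dependent percolation model described in this abstract is easy to prototype. Below is a minimal, hypothetical sketch (function names and parameters are ours, not the authors'): links of a small random geometric network fail with probability proportional to their length, and connectivity is tracked with a union-find structure.

```python
import math
import random

def random_geometric_graph(n, radius, rng):
    """Place n nodes uniformly in the unit square; link pairs closer than radius."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j, math.dist(pts[i], pts[j]))
             for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) < radius]
    return pts, edges

def _find(parent, x):
    """Union-find root lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def giant_component_fraction(n, edges, fail_scale, rng):
    """Fail each link with probability min(1, fail_scale * length); return the
    fraction of nodes in the largest surviving connected component."""
    parent = list(range(n))
    for i, j, length in edges:
        if rng.random() < min(1.0, fail_scale * length):
            continue  # this link failed
        ri, rj = _find(parent, i), _find(parent, j)
        if ri != rj:
            parent[ri] = rj
    sizes = {}
    for v in range(n):
        r = _find(parent, v)
        sizes[r] = sizes.get(r, 0) + 1
    return max(sizes.values()) / n
```

    Sweeping fail_scale shows the effect the abstract reports: when failure probability grows with link length, the giant component fragments faster than under uniform link failure.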

  14. Robust Control Systems.

    DTIC Science & Technology

    1981-12-01

    [OCR fragments] ... Although some papers (Refs 6 and 13) deal with robustness only in regard to parameter variations within the basic controlled system ... since these can often be neglected in actual implementation, a constant-gain, time-invariant solution ...

  15. Robustness of spatial micronetworks.

    PubMed

    McAndrew, Thomas C; Danforth, Christopher M; Bagrow, James P

    2015-04-01

    Power lines, roadways, pipelines, and other physical infrastructure are critical to modern society. These structures may be viewed as spatial networks where geographic distances play a role in the functionality and construction cost of links. Traditionally, studies of network robustness have primarily considered the connectedness of large, random networks. Yet for spatial infrastructure, physical distances must also play a role in network robustness. Understanding the robustness of small spatial networks is particularly important with the increasing interest in microgrids, i.e., small-area distributed power grids that are well suited to using renewable energy resources. We study the random failures of links in small networks where functionality depends on both spatial distance and topological connectedness. By introducing a percolation model where the failure of each link is proportional to its spatial length, we find that when failures depend on spatial distances, networks are more fragile than expected. Accounting for spatial effects in both construction and robustness is important for designing efficient microgrids and other network infrastructure.

  16. The Paradox of Abstraction: Precision Versus Concreteness.

    PubMed

    Iliev, Rumen; Axelrod, Robert

    2016-11-22

    We introduce a novel measure of abstractness based on the amount of information of a concept computed from its position in a semantic taxonomy. We refer to this measure as precision. We propose two alternative ways to measure precision, one based on the path length from a concept to the root of the taxonomic tree, and another based on the number of direct and indirect descendants. Since more information implies greater processing load, we hypothesize that nouns higher in precision will have a processing disadvantage in a lexical decision task. We contrast precision with concreteness, a common measure of abstractness based on the proportion of sensory-based information associated with a concept. Since concreteness facilitates cognitive processing, we predict that while both concreteness and precision are measures of abstractness, they will have opposite effects on performance. In two studies we found empirical support for our hypothesis. Precision and concreteness had opposite effects on latency and accuracy in a lexical decision task, and these opposite effects were observable while controlling for word length, word frequency, affective content and semantic diversity. Our results support the view that the organization of concepts includes amodal semantic structures which are independent of sensory information. They also suggest that we should distinguish between sensory-based and amount-of-information-based abstractness.
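    The two precision measures proposed in this abstract are straightforward to compute on any taxonomy. A toy sketch (the miniature taxonomy and function names below are ours, invented purely for illustration):

```python
# A hypothetical taxonomy fragment, stored as child -> parent links.
parent = {
    "entity": None,
    "object": "entity",
    "animal": "object",
    "dog": "animal",
    "poodle": "dog",
}

def depth_precision(concept):
    """Precision as path length from the concept to the root of the taxonomy."""
    depth = 0
    while parent[concept] is not None:
        concept = parent[concept]
        depth += 1
    return depth

def descendant_count(concept):
    """Direct plus indirect descendants (fewer descendants = higher precision)."""
    children = [c for c, p in parent.items() if p == concept]
    return len(children) + sum(descendant_count(c) for c in children)
```

    On this fragment, "poodle" is maximally precise by both measures: it is deepest in the tree and has no descendants, whereas "object" is shallow and subsumes several concepts.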

  17. An Anomaly Clock Detection Algorithm for a Robust Clock Ensemble

    DTIC Science & Technology

    2009-11-01

    41st Annual Precise Time and Time Interval (PTTI) Meeting. ... clocks are in phase and on frequency all the time, with the advantages of being relatively simple, robust, fully redundant, and offering improved performance. It allows ... Algorithm parameters, such as the sliding window width as a function of the time constant, and the minimum detectable levels, have been optimized ...

  18. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10^-15 for periods of 30-100 days.

  19. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√(N_sim) rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√(N_sim) limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
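    The baseline against which the paper compares is the sample precision matrix: invert the sample covariance estimated from the simulation ensemble. A pure-Python sketch for the 2-D case (our own toy illustration of that baseline, not the authors' sparse estimator) shows the slow 1/√N convergence that motivates the method:

```python
import random

def draw(n, rho, rng):
    """n samples from a 2-D Gaussian with unit variances and correlation rho."""
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
        out.append((z1, rho * z1 + (1.0 - rho ** 2) ** 0.5 * z2))
    return out

def sample_precision_2x2(samples):
    """Sample covariance of 2-D data, inverted analytically (2x2 inverse)."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    sxx = sum((x - mx) ** 2 for x, _ in samples) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in samples) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in samples) / (n - 1)
    det = sxx * syy - sxy * sxy
    return [[syy / det, -sxy / det], [-sxy / det, sxx / det]]
```

    For rho = 0.5 the true precision matrix is [[4/3, -2/3], [-2/3, 4/3]]; the estimator's error around these values shrinks only as 1/√N, which is what sparsity-exploiting estimators improve on.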

  20. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular, ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at the highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation, and searches for possible antigravity of muons and dark matter.

  1. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  2. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  3. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  4. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  5. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  6. Robustness, generality and efficiency of optimization algorithms in practical applications

    NASA Technical Reports Server (NTRS)

    Thanedar, P. B.; Arora, J. S.; Li, G. Y.; Lin, T. C.

    1990-01-01

    The theoretical foundations of two approaches, sequential quadratic programming (SQP) and optimality criteria (OC), are analyzed and compared, with emphasis on the critical importance of parameters such as accuracy, generality, robustness, efficiency, and ease of use in large-scale structural optimization. A simplified fighter wing and active control of space structures are considered along with other example problems. When applied to general system identification problems, the OC methods are shown to lose simplicity and to lack generality, accuracy, and robustness. It is concluded that the SQP method with a potential constraint strategy is a better choice than the currently prevalent mathematical programming and OC approaches.

  7. Comparing dependent robust correlations.

    PubMed

    Wilcox, Rand R

    2016-11-01

    Let r1 and r2 be two dependent estimates of Pearson's correlation. There is a substantial literature on testing H0: ρ1 = ρ2, the hypothesis that the population correlation coefficients are equal. However, it is well known that Pearson's correlation is not robust. Even a single outlier can have a substantial impact on Pearson's correlation, resulting in a misleading understanding of the strength of the association among the bulk of the points. A way of mitigating this concern is to use a correlation coefficient that guards against outliers, many of which have been proposed. But apparently there are no results on how to compare dependent robust correlation coefficients when there is heteroscedasticity. Extant results suggest that a basic percentile bootstrap will perform reasonably well. This paper reports simulation results indicating the extent to which this is true when using Spearman's rho, a Winsorized correlation or a skipped correlation.
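    A basic percentile bootstrap of the kind evaluated in this paper can be sketched in a few lines. This toy version (our own, using Spearman's rho with no tie handling) resamples whole rows, so the dependence between the two correlations is preserved; H0 is rejected when the confidence interval for the difference excludes zero:

```python
import random

def ranks(v):
    """Rank transform; ties are broken arbitrarily in this sketch."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def bootstrap_diff_ci(rows, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for rho(x, y1) - rho(x, y2), resampling rows
    (x, y1, y2) jointly so the two correlations stay dependent."""
    rng = random.Random(seed)
    n = len(rows)
    diffs = []
    for _ in range(n_boot):
        s = [rows[rng.randrange(n)] for _ in range(n)]
        x, y1, y2 = zip(*s)
        diffs.append(spearman(x, y1) - spearman(x, y2))
    diffs.sort()
    return diffs[int(alpha / 2 * n_boot)], diffs[int((1 - alpha / 2) * n_boot) - 1]
```

    Resampling triples rather than bootstrapping each correlation separately is what makes this appropriate for *dependent* correlations.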

  8. Teaching with Precision.

    ERIC Educational Resources Information Center

    Raybould, Ted; Solity, Jonathan

    1982-01-01

    Use of precision teaching principles with learning problem students involves five steps: specifying performance, recording daily behavior, charting daily behavior, recording the teaching approach, and analyzing data. The approach has been successfully implemented through consultation of school psychologists in Walsall, England. (CL)

  9. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    The prototype precision bolometer calibration bridge is a manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously, without adapters, in testing 100 and 200 ohm thin film bolometers.

  10. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  11. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  12. Robustness of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, Hawoong

    2009-03-01

    We investigated the robustness of cellular metabolism by simulating system-level computational models, and we performed the corresponding experiments to validate our predictions. We address cellular robustness within the ``metabolite'' framework by using the novel concept of the ``flux-sum,'' which is the sum of all incoming or outgoing fluxes (they are the same under the pseudo-steady state assumption). By estimating the changes of the flux-sum under various genetic and environmental perturbations, we were able to clearly decipher metabolic robustness; the flux-sum around an essential metabolite does not change much under various perturbations. We also identified the metabolites essential to cell survival, and then discovered ``acclimator'' metabolites that can control cell growth. Furthermore, this concept of ``metabolite essentiality'' should be useful in developing new metabolic engineering strategies for improved production of various bioproducts, and in designing new drugs that can fight multi-antibiotic-resistant superbacteria by knocking down the enzyme activities around an essential metabolite. Finally, we combined a regulatory network with the metabolic network to investigate its effect on the dynamic properties of cellular metabolism.
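    Given a flux vector and one row of the stoichiometric matrix, the flux-sum defined above is a one-liner. A minimal sketch of that definition (our own formulation, with a made-up toy metabolite):

```python
def flux_sum(S_row, v):
    """Flux-sum of one metabolite: half the total absolute flux through it.
    Under the pseudo-steady-state assumption the incoming and outgoing totals
    are equal, so each equals 0.5 * sum_j |S_ij * v_j|."""
    return 0.5 * sum(abs(s * vj) for s, vj in zip(S_row, v))
```

    For a toy metabolite produced by one reaction (coefficient +1, flux 2.0) and consumed by two (coefficients -1, fluxes 1.5 and 0.5), the incoming and outgoing totals are both 2.0, and so is the flux-sum.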

  13. Robustness of Interdependent Networks

    NASA Astrophysics Data System (ADS)

    Havlin, Shlomo

    2011-03-01

    In interdependent networks, when nodes in one network fail, they cause dependent nodes in other networks to also fail. This may happen recursively and can lead to a cascade of failures. In fact, a failure of a very small fraction of nodes in one network may lead to the complete fragmentation of a system of many interdependent networks. We will present a framework for understanding the robustness of interacting networks subject to such cascading failures and provide a basic analytic approach that may be useful in future studies. We present exact analytical solutions for the critical fraction of nodes that upon removal will lead to a failure cascade and to a complete fragmentation of two interdependent networks in a first order transition. Surprisingly, analyzing complex systems as a set of interdependent networks may alter a basic assumption that network theory has relied on: while for a single network a broader degree distribution of the network nodes results in the network being more robust to random failures, for interdependent networks, the broader the distribution is, the more vulnerable the networks become to random failure. We also show that reducing the coupling between the networks leads to a change from a first order percolation phase transition to a second order percolation transition at a critical point. These findings pose a significant challenge to the future design of robust networks that need to consider the unique properties of interdependent networks.
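    The cascading-failure process described here can be prototyped directly. The sketch below (our own simplified version: two random networks with one-to-one dependency links, not the authors' analytic framework) iterates mutual giant-component pruning until a fixed point is reached:

```python
import random

def er(n, k, rng):
    """Random network on n nodes with average degree ~k (repeated random pairs)."""
    adj = {i: set() for i in range(n)}
    m = n * k // 2
    while m:
        u, v = rng.randrange(n), rng.randrange(n)
        if u != v and v not in adj[u]:
            adj[u].add(v); adj[v].add(u); m -= 1
    return adj

def giant(nodes, adj):
    """Largest connected component of the subgraph induced by `nodes` (a set)."""
    seen, best = set(), set()
    for s in nodes:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(w for w in adj[u] if w in nodes and w not in comp)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(adj_a, adj_b, p, rng):
    """Remove a fraction 1-p of nodes, then alternate giant-component pruning
    between the two one-to-one coupled networks until the alive set is stable."""
    n = len(adj_a)
    alive = {i for i in range(n) if rng.random() < p}
    while True:
        alive_a = giant(alive, adj_a)          # must be connected in network A
        alive_b = giant(alive_a, adj_b)        # and its partner connected in B
        if alive_b == alive:
            return len(alive) / n
        alive = alive_b
```

    Comparing surviving fractions for mild versus severe initial damage reproduces the qualitative picture in the abstract: below a critical p, the mutually connected component collapses.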

  14. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational orbit determination (OD) produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended missions are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit, where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multi-plate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provides benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement at the 100- to 250-meter level in definitive accuracy.

  15. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, allowing the insertion of noise characters into, and the dropping of characters from, the keyword provides robustness against character segmentation errors, while substituting each keyword character with its OCR recognition candidates (or any other character) provides robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method; however, the precision rate was 64% lower.
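    Error-tolerant matching of this kind is commonly implemented as an edit-distance substring search. A minimal sketch (a generic approximate matcher tolerating substitutions, insertions, and deletions; the authors' actual method additionally exploits OCR candidate characters):

```python
def fuzzy_find(keyword, text, max_errors):
    """True if `keyword` occurs somewhere in `text` with at most `max_errors`
    character substitutions, insertions, or deletions (approximate substring
    search: a match may start at any position in the text)."""
    m = len(keyword)
    prev = list(range(m + 1))   # edit distance of keyword prefixes vs empty text
    best = prev[m]
    for ch in text:
        cur = [0] * (m + 1)     # cost 0: a match may start at this text position
        for i in range(1, m + 1):
            cur[i] = min(prev[i - 1] + (keyword[i - 1] != ch),  # match/substitute
                         prev[i] + 1,                           # extra text char
                         cur[i - 1] + 1)                        # missing text char
        best = min(best, cur[m])
        prev = cur
    return best <= max_errors
```

    With max_errors=1 this already tolerates a single OCR misrecognition such as "recogn1tion" for "recognition", at the cost of also matching near-miss words, which mirrors the recall/precision trade-off reported in the abstract.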

  16. Demons deformable registration for CBCT-guided procedures in the head and neck: Convergence and accuracy

    SciTech Connect

    Nithiananthan, S.; Brock, K. K.; Daly, M. J.; Chan, H.; Irish, J. C.; Siewerdsen, J. H.

    2009-10-15

    Purpose: The accuracy and convergence behavior of a variant of the Demons deformable registration algorithm were investigated for use in cone-beam CT (CBCT)-guided procedures of the head and neck. Online use of deformable registration for guidance of therapeutic procedures such as image-guided surgery or radiation therapy places trade-offs on accuracy and computational expense. This work describes a convergence criterion for Demons registration developed to balance these demands; the accuracy of a multiscale Demons implementation using this convergence criterion is quantified in CBCT images of the head and neck. Methods: Using an open-source "symmetric" Demons registration algorithm, a convergence criterion based on the change in the deformation field between iterations was developed to advance among multiple levels of a multiscale image pyramid in a manner that optimized accuracy and computation time. The convergence criterion was optimized in cadaver studies involving CBCT images acquired using a surgical C-arm prototype modified for 3D intraoperative imaging. CBCT-to-CBCT registration was performed and accuracy was quantified in terms of the normalized cross-correlation (NCC) and target registration error (TRE). The accuracy and robustness of the algorithm were then tested in clinical CBCT images of ten patients undergoing radiation therapy of the head and neck. Results: The cadaver model allowed optimization of the convergence factor and initial measurements of registration accuracy: Demons registration exhibited TRE=(0.8±0.3) mm and NCC=0.99 in the cadaveric head compared to TRE=(2.6±1.0) mm and NCC=0.93 with rigid registration. Similarly for the patient data, Demons registration gave mean TRE=(1.6±0.9) mm compared to rigid registration TRE=(3.6±1.9) mm, suggesting registration accuracy at or near the voxel size of the patient images (1x1x2 mm³).
The multiscale implementation based on optimal convergence criteria completed registration in ...
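    The iteration-to-iteration convergence criterion described in this abstract is generic: stop (or advance to the next pyramid level) when the mean change in the deformation field between iterations falls below a tolerance. A toy 1-D sketch of that loop (our own illustration, not the registration code):

```python
def mean_abs_change(a, b):
    """Mean element-wise change between two field iterates."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def converge(update, field, tol=1e-3, max_iters=500):
    """Iterate an update rule until the mean change in the field between
    successive iterations drops below `tol`; return the field and the
    number of iterations used."""
    for it in range(1, max_iters + 1):
        new = update(field)
        if mean_abs_change(field, new) < tol:
            return new, it
        field = new
    return field, max_iters
```

    In a multiscale scheme this loop would run once per pyramid level, with the converged field upsampled to initialize the next (finer) level.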

  17. Robust adaptive extended Kalman filtering for real time MR-thermometry guided HIFU interventions.

    PubMed

    Roujol, Sébastien; de Senneville, Baudouin Denis; Hey, Silke; Moonen, Chrit; Ries, Mario

    2012-03-01

    Real time magnetic resonance (MR) thermometry is gaining clinical importance for monitoring and guiding high intensity focused ultrasound (HIFU) ablations of tumorous tissue. The temperature information can be employed to adjust the position and the power of the HIFU system in real time and to determine the therapy endpoint. The need to resolve both the physiological motion of mobile organs and the rapid temperature variations induced by state-of-the-art high-power HIFU systems requires fast MRI-acquisition schemes, which are generally hampered by low signal-to-noise ratios (SNRs). This directly limits the precision of real time MR-thermometry and thus, in many cases, the feasibility of sophisticated control algorithms. To overcome these limitations, temporal filtering of the temperature has been suggested in the past, which generally has an adverse impact on the accuracy and latency of the filtered data. Here, we propose a novel filter that aims to improve the precision of MR-thermometry while monitoring and adapting its impact on the accuracy. For this, an adaptive extended Kalman filter using a model describing the heat transfer for acoustic heating in biological tissues was employed, together with an additional outlier rejection to address the problem of sparse, artifacted temperature points. The filter was compared to an efficient matched FIR filter and outperformed the latter in all tested cases. The filter was first evaluated on simulated data and provided in the worst case (with an approximate configuration of the model) a substantial improvement of the accuracy, by factors of 3 and 15 during heat-up and cool-down periods, respectively. The robustness of the filter was then evaluated during HIFU experiments on a phantom and in vivo in porcine kidney. The presence of strong temperature artifacts did not affect the thermal dose measurement using our filter, whereas a high measurement variation of 70% was observed with the FIR filter.
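    A heavily simplified scalar version of such a filter (our sketch: a random-walk temperature model with innovation gating, not the authors' full extended Kalman filter with its heat-transfer model) illustrates the outlier-rejection idea:

```python
def kalman_temperature(measurements, q=0.05, r=1.0, gate=3.0):
    """Scalar Kalman filter for a slowly varying temperature (random-walk model).
    q: process-noise variance, r: measurement-noise variance.
    A measurement whose innovation exceeds `gate` standard deviations of the
    innovation variance is rejected as an artifact (predict step still runs)."""
    x = measurements[0]            # crude initialisation from the first sample
    p = r
    filtered = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows
        s = p + r                  # innovation variance
        innov = z - x
        if innov * innov <= gate * gate * s:   # outlier gate
            k = p / s              # Kalman gain
            x = x + k * innov
            p = (1.0 - k) * p
        filtered.append(x)
    return filtered
```

    A single large temperature artifact fails the gate and leaves the estimate untouched, which is the behavior that protects the thermal dose measurement in the experiments above.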

  18. Guidance accuracy considerations for realtime GPS interferometry

    NASA Technical Reports Server (NTRS)

    Braasch, Michael S.; Van Graas, Frank

    1991-01-01

    During April and May of 1991, the Avionics Engineering Center at Ohio University completed the first set of realtime flight tests of a GPS interferometric attitude and heading determination system. This technique has myriad applications for aircraft and spacecraft guidance and control. However, before these applications can be further developed, a number of guidance accuracy issues must be considered. Among these are: signal derogation due to multipath and shadowing, effects of structural flexures, and system robustness during loss of phase lock. This paper addresses these issues with special emphasis on the information content of the GPS signal, and characterization and mitigation of multipath encountered while in flight.

  19. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  20. [Accuracy of HDL cholesterol measurements].

    PubMed

    Niedmann, P D; Luthe, H; Wieland, H; Schaper, G; Seidel, D

    1983-02-01

    The widespread use of different methods for the determination of HDL-cholesterol (in Europe: sodium phosphotungstic acid/MgCl2 precipitation in connection with enzymatic procedures; in the USA: heparin/MnCl2 precipitation followed by the Liebermann-Burchard method), combined with common reference values, makes it necessary to evaluate the accuracy, specificity, and precision not only of the precipitation step but also of the subsequent cholesterol determination. A high ratio of serum vs. concentrated precipitation reagent (10:1 v/v) leads to the formation of variable amounts of delta-3,5-cholestadiene. This substance is not recognized by cholesterol oxidase but leads to a 1.6-fold overestimation by the Liebermann-Burchard method. Therefore, errors in HDL-cholesterol determination should be considered, and differences of up to 30% may occur between HDL-cholesterol values determined by the different techniques (heparin/MnCl2 - Liebermann-Burchard and NaPW/MgCl2 - CHOD-PAP).

  1. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in the azimuth and range dimensions, and elongated in the cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is 1/3/213, respectively). The CR absolute 3D position, along with the associated error ellipsoid, is found to agree with the ground-truth position at the 99% confidence level. For other, non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers with real-world objects.
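
    The error-ellipsoid construction described in the abstract can be sketched directly: the eigenvectors of the 3-D variance-covariance matrix give the ellipsoid axis directions, and the square roots of the eigenvalues, scaled by a chi-square quantile, give the semi-axis lengths. The covariance values below are illustrative only (a cigar-shaped ellipsoid, tight in azimuth/range and elongated in cross-range), not numbers from the paper.

```python
import numpy as np

# Hypothetical 3-D position covariance (m^2) in an azimuth/range/cross-range
# frame: centimeter precision in the first two axes, meters in the third.
cov = np.diag([0.02 ** 2, 0.03 ** 2, 4.0 ** 2])

def error_ellipsoid(cov, chi2_q=11.345):
    """Semi-axis lengths and axis directions of a confidence ellipsoid.

    chi2_q is the chi-square quantile for 3 degrees of freedom;
    11.345 corresponds to a 99% confidence region.
    """
    vals, vecs = np.linalg.eigh(cov)       # eigen-decomposition of the VCM
    semi_axes = np.sqrt(chi2_q * vals)     # ellipsoid semi-axis lengths (m)
    return semi_axes, vecs                 # columns of vecs = axis directions

axes, directions = error_ellipsoid(cov)    # axes come out sorted ascending
```

    A scatterer is then associated with a real-world object when the object intersects this ellipsoid around the estimated position.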

  2. High-precision laser machining of ceramics

    NASA Astrophysics Data System (ADS)

    Toenshoff, Hans K.; von Alvensleben, Ferdinand; Graumann, Christoph; Willmann, Guido

    1998-09-01

    The increasing demand for highly developed ceramic materials for various applications calls for innovative machining technologies yielding high accuracy and efficiency. Problems associated with conventional, i.e. mechanical, methods include unacceptable tool wear as well as force-induced damage to ceramic components. Furthermore, the established grinding techniques often meet their limits when accurate complex 2D or 3D structures are required. In contrast to these insufficient mechanical processes, UV-laser precision machining of ceramics offers not only a valuable technological alternative but a considerable economical advantage as well. In particular, excimer lasers provide a multitude of advantages for applications in high-precision and micro technology. Within the UV wavelength range, and with pulses emitted in the nanosecond region, minimal thermal effects on ceramics and polymers are observed. Thus, the ablation geometry can be controlled precisely in the lateral and vertical directions. In this paper, the excimer laser machining technology developed at the Laser Zentrum Hannover is explained. Representing current and future industrial applications, examinations concerning the precision cutting of alumina (Al2O3) and HF-composite materials, the ablation of ferrite ceramics for precision inductors, and the structuring of SiC sealing and bearing rings are presented.

  3. Expendable Precision Laser Aimer for Shaped Charges

    SciTech Connect

    Ault, S; Kuklo, R

    2007-10-25

    Certain shaped-charge cutting operations require a precision aiming system that is operationally convenient, robust, and constructed to allow the aiming system to be left in place for last-minute alignment verification until it is expended when the charge is fired. This report describes an aiming system made from low cost doubled-Nd:YAG 532 nm laser modules of the type used in green laser pointers. Drawings and detailed procedures for constructing the aiming system are provided, as are the results of some minimal tests performed on a prototype device.

  4. Achieving metrological precision limits through postselection

    NASA Astrophysics Data System (ADS)

    Alves, G. Bié; Pimentel, A.; Hor-Meyll, M.; Walborn, S. P.; Davidovich, L.; Filho, R. L. de Matos

    2017-01-01

    Postselection strategies have been proposed with the aim of amplifying weak signals, which may help to overcome detection thresholds associated with technical noise in high-precision measurements. Here we use an optical setup to experimentally explore two different postselection protocols for the estimation of a small parameter: a weak-value amplification procedure and an alternative method that does not provide amplification but nonetheless is shown to be more robust for the sake of parameter estimation. Each technique leads approximately to the saturation of quantum limits for the estimation precision, expressed by the Cramér-Rao bound. For both situations, we show that parameter estimation is improved when the postselection statistics are considered together with the measurement device.

  5. Atomically Precise Surface Engineering for Producing Imagers

    NASA Technical Reports Server (NTRS)

    Greer, Frank (Inventor); Jones, Todd J. (Inventor); Nikzad, Shouleh (Inventor); Hoenk, Michael E. (Inventor)

    2015-01-01

    High-quality surface coatings, and techniques combining the atomic precision of molecular beam epitaxy and atomic layer deposition, to fabricate such high-quality surface coatings are provided. The coatings made in accordance with the techniques set forth by the invention are shown to be capable of forming silicon CCD detectors that demonstrate world record detector quantum efficiency (>50%) in the near and far ultraviolet (155 nm-300 nm). The surface engineering approaches used demonstrate the robustness of detector performance that is obtained by achieving atomic level precision at all steps in the coating fabrication process. As proof of concept, the characterization, materials, and exemplary devices produced are presented along with a comparison to other approaches.

  6. A passion for precision

    ScienceCinema

    None

    2016-07-12

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser: L. Alvarez-Gaume / PH-TH.
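
    The metrological power of a frequency comb rests on the comb equation: every comb tooth sits at an optical frequency nu_n = n * f_rep + f_0, so two countable radio frequencies plus an integer determine an optical frequency. A minimal sketch with invented but typical numbers:

```python
def comb_tooth(n, f_rep, f_ceo):
    """Optical frequency (Hz) of comb mode n: nu_n = n * f_rep + f_ceo."""
    return n * f_rep + f_ceo

# Hypothetical comb: 250 MHz repetition rate, 35 MHz offset frequency.
# Mode 1,560,000 then lies near 390 THz.
nu = comb_tooth(1_560_000, 250e6, 35e6)
```
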

  7. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science. Organiser: L. Alvarez-Gaume / PH-TH.

  8. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

    The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as type of soil block used (monolithic or reconstructed), drainage (by gravity or vacuum, or with a maintained water table), or weighing versus non-weighing designs. In 2004, experimental investigations were set up to assess the impact of different farming systems on the groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30-100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precision-weighing equilibrium-tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low-input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well-known shortcomings of lysimeter investigations: The lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracer experiments were performed to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters; the results show that such effects can be neglected. Precision load cells are used to continuously determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent.

  9. Principles and techniques for designing precision machines

    SciTech Connect

    Hale, Layton Carter

    1999-02-01

    This thesis is written to advance the reader's knowledge of precision-engineering principles and their application to designing machines that achieve both sufficient precision and minimum cost. It provides the concepts and tools necessary for the engineer to create new precision machine designs. Four case studies demonstrate the principles and showcase approaches and solutions to specific problems that generally have wider applications. These come from projects at the Lawrence Livermore National Laboratory in which the author participated: the Large Optics Diamond Turning Machine, Accuracy Enhancement of High-Productivity Machine Tools, the National Ignition Facility, and Extreme Ultraviolet Lithography. Although broad in scope, the topics go into sufficient depth to be useful to practicing precision engineers and often fulfill more academic ambitions. The thesis begins with a chapter that presents significant principles and fundamental knowledge from the Precision Engineering literature. Following this is a chapter that presents engineering design techniques that are general and not specific to precision machines. All subsequent chapters cover specific aspects of precision machine design. The first of these is Structural Design, guidelines and analysis techniques for achieving independently stiff machine structures. The next chapter addresses dynamic stiffness by presenting several techniques for Deterministic Damping, damping designs that can be analyzed and optimized with predictive results. Several chapters present a main thrust of the thesis, Exact-Constraint Design. A main contribution is a generalized modeling approach developed through the course of creating several unique designs. The final chapter is the primary case study of the thesis, the Conceptual Design of a Horizontal Machining Center.

  10. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.
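
    A time history file of this kind is simply a piecewise-linear model of the station clock offset versus time. A minimal sketch (the epochs and offsets below are invented) evaluates it by interpolating within the segment containing the query time:

```python
import bisect

def clock_offset(t, knots):
    """Piecewise-linear clock offset (station clock minus UTC) at time t.

    knots: (epoch, offset) pairs sorted by epoch; each adjacent pair
    defines one straight-line segment of the time history file.
    """
    epochs = [k[0] for k in knots]
    i = bisect.bisect_right(epochs, t) - 1
    i = max(0, min(i, len(knots) - 2))           # clamp to the covered span
    (t0, o0), (t1, o1) = knots[i], knots[i + 1]
    return o0 + (o1 - o0) * (t - t0) / (t1 - t0)

# Hypothetical history: offsets in microseconds at epochs in days.
history = [(0.0, 2.0), (1.0, 3.0), (3.0, 2.0)]
```
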

  11. Precision disablement aiming system

    SciTech Connect

    Monda, Mark J.; Hobart, Clinton G.; Gladwell, Thomas Scott

    2016-02-16

    A disrupter to a target may be precisely aimed by positioning a radiation source to direct radiation towards the target, and a detector is positioned to detect radiation that passes through the target. An aiming device is positioned between the radiation source and the target, wherein a mechanical feature of the aiming device is superimposed on the target in a captured radiographic image. The location of the aiming device in the radiographic image is used to aim a disrupter towards the target.

  12. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  13. Precision laser aiming system

    DOEpatents

    Ahrens, Brandon R.; Todd, Steven N.

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  14. Precision orbit determination of altimetric satellites

    NASA Technical Reports Server (NTRS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-01-01

    The ability to determine accurate global sea level variations is important to both the detection and understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements have only recently become possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well-defined terrestrial reference frame. SLR tracking in support of precision orbit determination of altimetric satellites is significant. Recent examples are the use of SLR as the primary tracking system for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long-term absolute sea level measurements. The U.S. Navy's Geosat mission is equipped only with Doppler beacons and lacks laser retroreflectors. Consequently, even the Geosat orbits computed using the available full 40-station Tranet tracking network exhibit significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  15. Robust Photon Locking

    SciTech Connect

    Bayer, T.; Wollenhaupt, M.; Sarpe-Tudoran, C.; Baumert, T.

    2009-01-16

    We experimentally demonstrate a strong-field coherent control mechanism that combines the advantages of photon locking (PL) and rapid adiabatic passage (RAP). Unlike earlier implementations of PL and RAP by pulse sequences or chirped pulses, we use shaped pulses generated by phase modulation of the spectrum of a femtosecond laser pulse with a generalized phase discontinuity. The novel control scenario is characterized by a high degree of robustness achieved via adiabatic preparation of a state of maximum coherence. Subsequent phase control allows for efficient switching among different target states. We investigate both properties by photoelectron spectroscopy on potassium atoms interacting with the intense shaped light field.

  16. Robust Kriged Kalman Filtering

    SciTech Connect

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo; Giannakis, Georgios B.

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for the presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator is put forth that jointly predicts the spatial-temporal process at unmonitored locations while identifying measurement outliers. Numerical tests are conducted on a synthetic Internet protocol (IP) network and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
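
    The core idea, fitting a smooth process while an l1 penalty isolates a sparse vector of outliers, can be illustrated in one dimension. The sketch below is not the paper's KKF: it replaces the kriging/Kalman machinery with a simple first-difference smoother and uses alternating closed-form updates (a ridge solve for the process, soft-thresholding for the outliers) on made-up data.

```python
import numpy as np

def robust_smooth(y, lam=10.0, tau=1.0, iters=50):
    """Jointly estimate a smooth signal x and sparse outliers o, y = x + o + noise.

    Minimizes ||y - x - o||^2 + lam*||D x||^2 + 2*tau*||o||_1 by alternating
    closed-form updates: a ridge solve for x, soft-thresholding for o.
    """
    n = len(y)
    D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)     # first-difference operator
    A = np.eye(n) + lam * (D.T @ D)
    o = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(A, y - o)                # smooth component
        r = y - x
        o = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)   # sparse outliers
    return x, o

t = np.linspace(0.0, 3.0, 50)
y = np.sin(t)
y[10] += 5.0                       # inject one anomalous measurement
x_hat, o_hat = robust_smooth(y)
```

    Because the l1 term makes small residuals collapse to exactly zero, only the injected measurement is flagged as an outlier.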

  17. Complexity and robustness

    PubMed Central

    Carlson, J. M.; Doyle, John

    2002-01-01

    Highly optimized tolerance (HOT) was recently introduced as a conceptual framework to study fundamental aspects of complexity. HOT is motivated primarily by systems from biology and engineering and emphasizes (i) highly structured, nongeneric, self-dissimilar internal configurations, and (ii) robust yet fragile external behavior. HOT claims that these are the most important features of complexity, that they are not accidents of evolution or artifices of engineering design, and that they are inevitably intertwined and mutually reinforcing. In the spirit of this collection, our paper contrasts HOT with alternative perspectives on complexity, drawing on real-world examples and also model systems, particularly those from self-organized criticality. PMID:11875207

  18. Robustness of Cantor diffractals.

    PubMed

    Verma, Rupesh; Sharma, Manoj Kumar; Banerjee, Varsha; Senthilkumaran, Paramasivam

    2013-04-08

    Diffractals are electromagnetic waves diffracted by a fractal aperture. In an earlier paper, we reported an important property of Cantor diffractals, that of redundancy [R. Verma et al., Opt. Express 20, 8250 (2012)]. In this paper, we report another important property, that of robustness. The question we address is: how much disorder in the Cantor grating can be accommodated while the diffractals continue to faithfully yield its fractal dimension and generator? The answer is of consequence in a number of physical problems involving fractal architecture.

  19. High-precision gauging of metal rings

    NASA Astrophysics Data System (ADS)

    Carlin, Mats; Lillekjendlie, Bjorn

    1994-11-01

    Raufoss AS designs and produces air brake fittings for trucks and buses on the international market. One of the critical components in the fittings is a small, circular metal ring, which undergoes 100% dimensional control. This article describes a low-cost, high-accuracy solution developed at SINTEF Instrumentation based on image metrology and a subpixel-resolution algorithm. The measurement system consists of a PC plug-in transputer video board, a CCD camera, telecentric optics and a machine vision strobe. We describe the measurement technique in some detail, as well as the robust statistical techniques found to be essential in the real-life environment.
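
    Subpixel measurement of this kind is often built on fitting a parabola through a peak sample and its two neighbours; the vertex of the parabola locates the feature to a fraction of a pixel. The sketch below is a generic illustration of that idea, not SINTEF's actual algorithm:

```python
def subpixel_peak(y_left, y_center, y_right):
    """Offset (pixels, in [-0.5, 0.5]) of the true peak from the center
    sample, from a parabola through three neighbouring samples."""
    denom = y_left - 2.0 * y_center + y_right
    if denom == 0.0:
        return 0.0                       # flat triple: no refinement possible
    return 0.5 * (y_left - y_right) / denom

# Synthetic response profile whose true peak sits at x = 10.3 pixels.
profile = [-(x - 10.3) ** 2 for x in range(21)]
i = profile.index(max(profile))          # coarse (whole-pixel) peak: i = 10
x_peak = i + subpixel_peak(profile[i - 1], profile[i], profile[i + 1])
```
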

  20. High Precision GPS Measurements

    DTIC Science & Technology

    2010-02-28

    troposphere delays with cm-level accuracy [15]. For example, the modified Hopfield model (MHM) has been shown to accurately calculate both the... differences between two locations near Raleigh, North Carolina: RALR and NCRD, which are part of the network of Continuously Operating Reference... Fritsche, M., R. Dietrich, A. Rulke, M. Rothacher, R. Steigenberger, "Impact of higher-order ionosphere terms on GPS-derived global network solutions"

  1. Moving Liquids with Sound: The Physics of Acoustic Droplet Ejection for Robust Laboratory Automation in Life Sciences.

    PubMed

    Hadimioglu, Babur; Stearns, Richard; Ellson, Richard

    2016-02-01

    Liquid handling instruments for life science applications based on droplet formation with focused acoustic energy, or acoustic droplet ejection (ADE), were introduced commercially more than a decade ago. While the idea of "moving liquids with sound" was known in the 20th century, the development of precise methods for acoustic dispensing to aliquot life science materials in the laboratory began in earnest in the 21st century with the adaptation of controlled "drop on demand" acoustic transfer of droplets from high-density microplates for high-throughput screening (HTS) applications. Robust ADE implementations for life science applications achieve excellent accuracy and precision by using acoustics first to sense the liquid characteristics relevant for its transfer, and then to actuate transfer of the liquid with customized application of sound energy to the given well and well fluid in the microplate. This article provides an overview of the physics behind ADE, its central role in both the acoustical and rheological aspects of robust implementation in the life science laboratory, and ADE's broad range of ejectable materials.

  2. Highly Parallel, High-Precision Numerical Integration

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2005-04-22

    This paper describes a scheme for rapidly computing numerical values of definite integrals to very high accuracy, ranging from ordinary machine precision to hundreds or thousands of digits, even for functions with singularities or infinite derivatives at endpoints. Such a scheme is of interest not only in computational physics and computational chemistry, but also in experimental mathematics, where high-precision numerical values of definite integrals can be used to numerically discover new identities. This paper discusses techniques for a parallel implementation of this scheme, then presents performance results for 1-D and 2-D test suites. Results are also given for a certain problem from mathematical physics, which features a difficult singularity, confirming a conjecture to 20,000 digit accuracy. The performance rate for this latter calculation on 1024 CPUs is 690 Gflop/s. We believe that this and one other 20,000-digit integral evaluation that we report are the highest-precision non-trivial numerical integrations performed to date.
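
    Schemes of this kind are commonly built on tanh-sinh ("double-exponential") quadrature, whose nodes cluster double-exponentially toward the endpoints so that endpoint singularities are integrated accurately; the abstract does not name the rule, so take that attribution as an assumption. A double-precision sketch of the rule on (0, 1) (the paper's implementations run the same idea at hundreds or thousands of digits, in parallel):

```python
import math

def tanh_sinh(f, n_levels=7):
    """Tanh-sinh quadrature of f over (0, 1) in double precision.

    Nodes x_k = (1 + tanh((pi/2)*sinh(k*h)))/2 cluster double-exponentially
    near both endpoints, so endpoint singularities are handled gracefully.
    """
    h = 2.0 ** -n_levels
    total = 0.0
    k = 0
    while True:
        u = (math.pi / 2.0) * math.sinh(k * h)
        w = (math.pi / 2.0) * math.cosh(k * h) / math.cosh(u) ** 2
        if k > 0 and w < 1e-17:
            break                          # remaining weights are negligible
        e = math.exp(-2.0 * u)
        x_right = 1.0 / (1.0 + e)          # node near 1, computed stably
        x_left = e / (1.0 + e)             # mirrored node near 0
        if k == 0:
            total += w * f(x_right)        # center node appears once
        else:
            total += w * (f(x_right) + f(x_left))
        k += 1
    return 0.5 * h * total                 # 0.5 from mapping (-1, 1) to (0, 1)
```

    The integrand 1/sqrt(x), singular at 0, is still integrated to roughly 10 digits at this level.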

  3. Multi-oriented windowed harmonic phase reconstruction for robust cardiac strain imaging.

    PubMed

    Cordero-Grande, Lucilio; Royuela-del-Val, Javier; Sanz-Estébanez, Santiago; Martín-Fernández, Marcos; Alberola-López, Carlos

    2016-04-01

    The purpose of this paper is to develop a method for direct estimation of the cardiac strain tensor by extending the harmonic phase reconstruction on tagged magnetic resonance images to obtain more precise and robust measurements. The extension relies on the reconstruction of the local phase of the image by means of the windowed Fourier transform and the acquisition of an overdetermined set of stripe orientations in order to avoid the phase interferences from structures outside the myocardium and the instabilities arising from the application of a gradient operator. Results have shown that increasing the number of acquired orientations provides a significant improvement in the reproducibility of the strain measurements and that the acquisition of an extended set of orientations also improves the reproducibility when compared with acquiring repeated samples from a smaller set of orientations. Additionally, biases in local phase estimation when using the original harmonic phase formulation are greatly diminished by the one here proposed. The ideas here presented allow the design of new methods for motion sensitive magnetic resonance imaging, which could simultaneously improve the resolution, robustness and accuracy of motion estimates.
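
    The windowed local-phase idea can be demonstrated in one dimension: correlating the signal against a Gaussian-windowed complex exponential at the tag frequency returns the local harmonic phase directly, with no gradient operator involved. This toy 1-D sketch stands in for the authors' 2-D windowed Fourier reconstruction:

```python
import cmath
import math

def local_phase(signal, n0, omega, sigma):
    """Local phase of `signal` near sample n0 from one windowed Fourier atom.

    Correlates the signal with a Gaussian window centered at n0 multiplied by
    exp(-i*omega*n); the argument of the sum estimates the local phase.
    """
    acc = 0j
    for n, s in enumerate(signal):
        g = math.exp(-((n - n0) ** 2) / (2.0 * sigma ** 2))
        acc += s * g * cmath.exp(-1j * omega * n)
    return cmath.phase(acc)

omega = 0.5                                             # tag frequency, rad/sample
sig = [math.cos(omega * n + 0.7) for n in range(200)]   # true local phase: 0.7
phi = local_phase(sig, n0=100, omega=omega, sigma=10.0)
```
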

  4. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423
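
    The "repeated best fit algorithm" used to align the scanned datasets is, at its core, rigid-body least-squares registration of one point set onto another. Below is a minimal sketch of a single alignment via the Kabsch (SVD) solution; the inspection software's exact algorithm is not specified in the abstract:

```python
import numpy as np

def best_fit_align(P, Q):
    """Rotation R and translation t minimizing sum ||R p_i + t - q_i||^2.

    P, Q: (n, 3) arrays of corresponding points (Kabsch algorithm).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic check: rotate/translate a tetrahedron and recover the transform.
a = 0.3
R0 = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
t0 = np.array([1.0, 2.0, 3.0])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
Q = P @ R0.T + t0
R, t = best_fit_align(P, Q)
```

    After alignment, per-point deviations from the reference dataset give the volumetric deviations reported in the study.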

  5. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
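
    The phase II refinement is the classical two-phase (double) sampling regression estimator: ground truth collected on a subsample corrects the cheaper photointerpreted error rates from phase I. A toy numeric sketch with invented cluster error proportions:

```python
def regression_estimate(photo_all, photo_sub, ground_sub):
    """Two-phase (double) sampling regression estimator of an error rate.

    photo_all:  phase-I photointerpreted error proportions (all clusters)
    photo_sub:  phase-I proportions for the clusters revisited on the ground
    ground_sub: ground-truth proportions for those same clusters
    """
    mean = lambda v: sum(v) / len(v)
    mp_all, mp_sub, mg_sub = mean(photo_all), mean(photo_sub), mean(ground_sub)
    # slope of the ground-truth rate on the photointerpreted rate (subsample)
    sxy = sum((p - mp_sub) * (g - mg_sub) for p, g in zip(photo_sub, ground_sub))
    sxx = sum((p - mp_sub) ** 2 for p in photo_sub)
    b = sxy / sxx
    return mg_sub + b * (mp_all - mp_sub)

# Invented proportions: 8 phase-I clusters, 4 revisited on the ground.
photo_all = [0.2, 0.4, 0.3, 0.5, 0.1, 0.3, 0.2, 0.4]
photo_sub = [0.2, 0.4, 0.3, 0.5]
ground_sub = [0.25, 0.45, 0.35, 0.55]
p_hat = regression_estimate(photo_all, photo_sub, ground_sub)
```
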

  6. Radiocarbon dating accuracy improved

    NASA Astrophysics Data System (ADS)

    Scientists have extended the accuracy of carbon-14 (14C) dating by correlating dates older than 8,000 years with uranium-thorium dates that span from 8,000 to 30,000 years before present (ybp, present = 1950). Edouard Bard, Bruno Hamelin, Richard Fairbanks and Alan Zindler, working at Columbia University's Lamont-Doherty Geological Observatory, dated corals from reefs off Barbados using both 14C and uranium-234/thorium-230 by thermal ionization mass spectrometry techniques. They found that the two age data sets deviated in a regular way, allowing the scientists to correlate the two sets of ages. The 14C dates were consistently younger than those determined by uranium-thorium, and the discrepancy increased to about 3,500 years at 20,000 ybp.

  7. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  8. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  9. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e⁺e⁻ and pp̄ colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  10. Evolving Robust Gene Regulatory Networks

    PubMed Central

    Noman, Nasimul; Monjo, Taku; Moscato, Pablo; Iba, Hitoshi

    2015-01-01

    Design and implementation of robust network modules is essential for construction of complex biological systems through hierarchical assembly of ‘parts’ and ‘devices’. The robustness of gene regulatory networks (GRNs) is ascribed chiefly to the underlying topology. An automated capability to design GRN topologies that exhibit robust behavior could dramatically change current practice in synthetic biology. A recent study showed that Darwinian evolution can gradually develop higher topological robustness. Accordingly, this work presents an evolutionary algorithm that simulates natural evolution in silico to identify network topologies that are robust to perturbations. We present a Monte Carlo based method for quantifying topological robustness and design a fitness approximation approach for efficient calculation of topological robustness, which is otherwise computationally very intensive. The proposed framework was verified using two classic GRN behaviors, oscillation and bistability, although it generalizes to evolving other types of responses. The algorithm identified robust GRN architectures, which were verified through various analyses and comparisons. Analysis of the results also sheds light on the relationship among robustness, cooperativity and complexity. This study also shows that nature has already evolved very robust architectures for its crucial systems; simulating this natural process can therefore be very valuable for designing robust biological systems. PMID:25616055
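
The Monte Carlo robustness measure described above can be sketched in a few lines: perturb the parameters of a candidate topology many times and count the fraction of perturbed instances that retain the target phenotype. The toggle-switch equations, parameter values and perturbation range below are illustrative assumptions, not those of the paper:

```python
import random

def simulate(a1, a2, n, x0, y0, dt=0.05, steps=2000):
    """Euler-integrate a two-gene mutual-repression toggle switch."""
    x, y = x0, y0
    for _ in range(steps):
        dx = a1 / (1.0 + y ** n) - x
        dy = a2 / (1.0 + x ** n) - y
        x, y = x + dt * dx, y + dt * dy
    return x, y

def is_bistable(a1, a2, n):
    """Bistable if opposite initial conditions settle into distinct states."""
    s1 = simulate(a1, a2, n, 4.0, 0.0)
    s2 = simulate(a1, a2, n, 0.0, 4.0)
    return abs(s1[0] - s2[0]) > 1.0

def robustness(a1=4.0, a2=4.0, n=2, trials=100, spread=0.3, seed=1):
    """Monte Carlo robustness: fraction of random parameter
    perturbations that preserve the bistable phenotype."""
    rng = random.Random(seed)
    kept = sum(
        is_bistable(a1 * (1 + rng.uniform(-spread, spread)),
                    a2 * (1 + rng.uniform(-spread, spread)), n)
        for _ in range(trials))
    return kept / trials

score = robustness()
```

A fitness-approximation scheme of the kind the paper mentions would replace most calls to `robustness` with a cheap surrogate, since each evaluation requires many integrations.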

  11. Galvanometer deflection: a precision high-speed system.

    PubMed

    Jablonowski, D P; Raamot, J

    1976-06-01

    An X-Y galvanometer deflection system capable of high precision in a random access mode of operation is described. Beam positional information in digitized form is obtained by employing a Ronchi grating with a sophisticated optical detection scheme. This information is used in a control interface to locate the beam to the required precision. The system is characterized by high accuracy at maximum speed and is designed for operation in a variable environment, with particular attention placed on thermal insensitivity.

  12. Robust stability of second-order systems

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1995-01-01

    It has been shown recently how virtual passive controllers can be designed for second-order dynamic systems to achieve robust stability. The virtual controllers were visualized as systems made up of spring, mass and damping elements. In this paper, a new approach to the same second-order dynamic systems is used, emphasizing the notion of positive realness. Necessary and sufficient conditions for positive realness are presented for scalar spring-mass-dashpot systems. For multi-input multi-output systems, we show how a mass-spring-dashpot system can be made positive real by properly choosing its output variables. In particular, sufficient conditions are shown for the system without output velocity. Furthermore, if velocity cannot be measured, then the system parameters must be precise to keep the system positive real. In practice, system parameters are not always constant and cannot be measured precisely. Therefore, to be useful, positive real systems must be robust to some degree. This can be achieved with the design presented in this paper.

  13. Extensibility of a linear rapid robust design methodology

    NASA Astrophysics Data System (ADS)

    Steinfeldt, Bradley A.; Braun, Robert D.

    2016-05-01

    The extensibility of a linear rapid robust design methodology is examined. This analysis is approached from a computational cost and accuracy perspective. The sensitivity of the solution's computational cost is examined by analysing effects such as the number of design variables, nonlinearity of the CAs, and nonlinearity of the response, in addition to several potential complexity metrics. Relative to traditional robust design methods, the linear rapid robust design methodology scaled better with the size of the problem and outperformed the traditional techniques examined. The accuracy of applying a method with linear fundamentals to nonlinear problems was examined. It is observed that if the magnitude of the nonlinearity is less than 1000 times that of the nominal linear response, the error associated with applying successive linearization will result in errors in the response of less than 10% compared to the full nonlinear error.

  14. Robust automated knowledge capture.

    SciTech Connect

    Stevens-Adams, Susan Marie; Abbott, Robert G.; Forsythe, James Chris; Trumbo, Michael Christopher Stefan; Haass, Michael Joseph; Hendrickson, Stacey M. Langfitt

    2011-10-01

    This report summarizes research conducted through the Sandia National Laboratories Robust Automated Knowledge Capture Laboratory Directed Research and Development project. The objective of this project was to advance scientific understanding of the influence of individual cognitive attributes on decision making. The project has developed a quantitative model known as RumRunner that has proven effective in predicting the propensity of an individual to shift strategies on the basis of task and experience related parameters. Three separate studies are described which have validated the basic RumRunner model. This work provides a basis for better understanding human decision making in high-consequence national security applications and, in particular, the individual characteristics that underlie adaptive thinking.

  15. Robustness in Digital Hardware

    NASA Astrophysics Data System (ADS)

    Woods, Roger; Lightbody, Gaye

    The growth in electronics has probably been the equivalent of the Industrial Revolution in the past century in terms of how much it has transformed our daily lives. There is a great dependency on technology whether it is in the devices that control travel (e.g., in aircraft or cars), our entertainment and communication systems, or our interaction with money, which has been empowered by the onset of Internet shopping and banking. Despite this reliance, there is still a danger that at some stage devices will fail within the equipment's lifetime. The purpose of this chapter is to look at the factors causing failure and address possible measures to improve robustness in digital hardware technology and specifically chip technology, giving a long-term forecast that will not reassure the reader!

  16. Robust Rocket Engine Concept

    NASA Technical Reports Server (NTRS)

    Lorenzo, Carl F.

    1995-01-01

    The potential for a revolutionary step in the durability of reusable rocket engines is made possible by the combination of several emerging technologies. The recent creation and analytical demonstration of life extending (or damage mitigating) control technology enables rapid rocket engine transients with minimum fatigue and creep damage. This technology has been further enhanced by the formulation of very simple but conservative continuum damage models. These new ideas when combined with recent advances in multidisciplinary optimization provide the potential for a large (revolutionary) step in reusable rocket engine durability. This concept has been named the robust rocket engine concept (RREC) and is the basic contribution of this paper. The concept also includes consideration of design innovations to minimize critical point damage.

  17. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10--12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment intensive and hands-on training was judged to be essential.

  18. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  19. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock decoupling material delays the shock wave in the barrel from predetonating the HE pellet before the flyer. A spherical detonation wave is formed in the HE pellet. Thus, a shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  20. A novel reagent significantly improved assay robustness in imaged capillary isoelectric focusing.

    PubMed

    Zhang, Xin; Voronov, Sergey; Mussa, Nesredin; Li, Zhengjian

    2017-03-15

    Imaged capillary isoelectric focusing (icIEF) has been used as a primary method for charge variant analysis of therapeutic antibodies and proteins [1], [9]. Proteins tend to precipitate around their pI values during focusing [14], which directly affects the reproducibility of their charge profiles. Protein concentration, focusing time and various supplementing additives are key parameters for minimizing protein precipitation and aggregation. Urea and sucrose are common additives that reduce protein aggregation, solubilize proteins in the sample matrix and thereby improve assay repeatability [15]. However, some proteins and antibodies are exceptions: we found that urea and sucrose are not sufficient for a typical fusion protein (Fusion protein A) in the icIEF assay, and high variability is observed. We report that a novel reagent, formamide, significantly improved the reproducibility of protein charge profiles. Our results show formamide is a good supplementary reagent for reducing aggregation and stabilizing proteins in isoelectric focusing. We further confirmed the method's robustness, linearity, accuracy and precision after introducing the new reagent; extremely tight pI values, significantly improved method precision and sample on-board stability were achieved with formamide. Formamide also proved equally functional to urea for multiple antibodies, making it an additional tool in icIEF method development.

  1. Precise image-guided irradiation of small animals: a flexible non-profit platform

    NASA Astrophysics Data System (ADS)

    Tillner, Falk; Thute, Prasad; Löck, Steffen; Dietrich, Antje; Fursov, Andriy; Haase, Robert; Lukas, Mathias; Rimarzig, Bernd; Sobiella, Manfred; Krause, Mechthild; Baumann, Michael; Bütof, Rebecca; Enghardt, Wolfgang

    2016-04-01

    Preclinical in vivo studies using small animals are essential to develop new therapeutic options in radiation oncology. Of particular interest are orthotopic tumour models, which better reflect the clinical situation in terms of growth patterns and microenvironmental parameters of the tumour as well as the interplay of tumours with the surrounding normal tissues. Such orthotopic models increase the technical demands and the complexity of preclinical studies as local irradiation with therapeutically relevant doses requires image-guided target localisation and accurate beam application. Moreover, advanced imaging techniques are needed for monitoring treatment outcome. We present a novel small animal image-guided radiation therapy (SAIGRT) system, which allows for precise and accurate, conformal irradiation and x-ray imaging of small animals. High accuracy is achieved by its robust construction, the precise movement of its components and a fast high-resolution flat-panel detector. Field forming and x-ray imaging is accomplished close to the animal resulting in a small penumbra and a high image quality. Feasibility for irradiating orthotopic models has been proven using lung tumour and glioblastoma models in mice. The SAIGRT system provides a flexible, non-profit academic research platform which can be adapted to specific experimental needs and therefore enables systematic preclinical trials in multicentre research networks.

  2. Making Activity Recognition Robust against Deceptive Behavior.

    PubMed

    Saeb, Sohrab; Körding, Konrad; Mohr, David C

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals.
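
The retraining loop described above, capture deceptive samples, relabel them with the true activity, and retrain, can be sketched as follows. The one-dimensional "motion energy" feature, the nearest-neighbour classifier and all numeric values are hypothetical stand-ins for the paper's smartphone features and classifier:

```python
import random

class OneNN:
    """Minimal 1-nearest-neighbour classifier on a scalar feature."""
    def fit(self, xs, ys):
        self.xs, self.ys = list(xs), list(ys)
        return self
    def predict(self, x):
        i = min(range(len(self.xs)), key=lambda j: abs(x - self.xs[j]))
        return self.ys[i]

rng = random.Random(0)
# Normal training data: sitting -> low motion energy, walking -> high.
xs = [rng.gauss(1.0, 0.2) for _ in range(40)] + \
     [rng.gauss(5.0, 0.4) for _ in range(40)]
ys = ["sitting"] * 40 + ["walking"] * 40
clf = OneNN().fit(xs, ys)

# Deceptive behaviour: shaking the phone while seated yields
# walking-like motion energy, but the true activity is sitting.
deceptive = [rng.gauss(4.0, 0.2) for _ in range(40)]
cheat_train, cheat_test = deceptive[:20], deceptive[20:]

acc_before = sum(clf.predict(x) == "sitting" for x in cheat_test) / len(cheat_test)

# Retrain with the captured deceptive samples under their true label.
clf.fit(xs + cheat_train, ys + ["sitting"] * len(cheat_train))
acc_after = sum(clf.predict(x) == "sitting" for x in cheat_test) / len(cheat_test)
```

Deceptive samples from a few subjects shift the decision region, so held-out deceptive samples are recognized correctly after retraining, mirroring the study's accuracy gain.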

  3. Biometric feature embedding using robust steganography technique

    NASA Astrophysics Data System (ADS)

    Rashid, Rasber D.; Sellahewa, Harin; Jassim, Sabah A.

    2013-05-01

    This paper is concerned with robust steganographic techniques to hide and communicate biometric data in mobile media objects like images over open networks. More specifically, the aim is to embed binarised features, extracted using discrete wavelet transforms and local binary patterns of face images, as a secret message in an image. The need for such techniques can arise in law enforcement, forensics, counter-terrorism, internet/mobile banking and border control. What differentiates this problem from normal information hiding techniques is the added requirement that there should be minimal effect on face recognition accuracy. We propose an LSB-Witness embedding technique in which the secret message is already present in the LSB plane, but instead of changing the cover image's LSB values, the second LSB plane is changed to stand as a witness/informer to the receiver during message recovery. Although this approach may affect stego quality, it eliminates the weakness of traditional LSB schemes that is exploited by LSB steganalysis techniques, such as PoV and RS steganalysis, to detect the existence of a secret message. Experimental results show that the proposed method is robust against PoV and RS attacks compared to other variants of LSB. We also discuss variants of this approach and determine capacity requirements for embedding face biometric feature vectors while maintaining face recognition accuracy.
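
A minimal sketch of the witness idea: leave the LSB plane untouched and use the second LSB to tell the receiver whether to take the LSB as-is or inverted. The exact encoding convention below is an assumption for illustration; the paper's scheme may differ in detail:

```python
def embed_lsb_witness(cover, bits):
    """Embed one message bit per pixel without touching the LSB plane:
    the 2nd LSB is set to 1 when the cover's LSB already equals the
    message bit, and to 0 when the receiver should invert the LSB.
    (Illustrative convention, not necessarily the paper's.)"""
    stego = []
    for pixel, bit in zip(cover, bits):
        witness = 1 if (pixel & 1) == bit else 0
        stego.append((pixel & ~2) | (witness << 1))
    return stego

def extract_lsb_witness(stego, n):
    """Recover bits: the LSB if the witness is set, else its complement."""
    out = []
    for pixel in stego[:n]:
        lsb, witness = pixel & 1, (pixel >> 1) & 1
        out.append(lsb if witness else 1 - lsb)
    return out

cover = [137, 52, 201, 88, 14, 233, 90, 171]   # hypothetical pixel values
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb_witness(cover, message)
recovered = extract_lsb_witness(stego, len(message))

# The LSB plane is untouched, so LSB-pair statistics used by PoV/RS
# steganalysis are unchanged.
assert all((c & 1) == (s & 1) for c, s in zip(cover, stego))
assert recovered == message
```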

  4. Making Activity Recognition Robust against Deceptive Behavior

    PubMed Central

    Saeb, Sohrab; Körding, Konrad; Mohr, David C.

    2015-01-01

    Healthcare services increasingly use activity recognition technology to track the daily activities of individuals. In some cases, this is used to provide incentives. For example, some health insurance companies offer discounts to customers who are physically active, based on the data collected from their activity tracking devices. Therefore, there is an increasing motivation for individuals to cheat, by making activity trackers detect activities that increase their benefits rather than the ones they actually do. In this study, we used a novel method to make activity recognition robust against deceptive behavior. We asked 14 subjects to attempt to trick our smartphone-based activity classifier by making it detect an activity other than the one they actually performed, for example by shaking the phone while seated to make the classifier detect walking. If they succeeded, we used their motion data to retrain the classifier, and asked them to try to trick it again. The experiment ended when subjects could no longer cheat. We found that some subjects were not able to trick the classifier at all, while others required five rounds of retraining. While classifiers trained on normal activity data predicted true activity with ~38% accuracy, training on the data gathered during the deceptive behavior increased their accuracy to ~84%. We conclude that learning the deceptive behavior of one individual helps to detect the deceptive behavior of others. Thus, we can make current activity recognition robust to deception by including deceptive activity data from a few individuals. PMID:26659118

  5. Advanced composite materials for precision segmented reflectors

    NASA Technical Reports Server (NTRS)

    Stein, Bland A.; Bowles, David E.

    1988-01-01

    The objective in the NASA Precision Segmented Reflector (PSR) project is to develop new composite material concepts for highly stable and durable reflectors with precision surfaces. The project focuses on alternate material concepts such as the development of new low coefficient of thermal expansion resins as matrices for graphite fiber reinforced composites, quartz fiber reinforced epoxies, and graphite reinforced glass. Low residual stress fabrication methods will be developed. When coupon specimens of these new material concepts have demonstrated the required surface accuracies and resistance to thermal distortion and microcracking, reflector panels will be fabricated and tested in simulated space environments. An important part of the program is the analytical modeling of environmental stability of these new composite materials concepts through constitutive equation development, modeling of microdamage in the composite matrix, and prediction of long term stability (including viscoelasticity). These analyses include both closed form and finite element solutions at the micro and macro levels.

  6. The neglected tool in the Bayesian ecologist's shed: a case study testing informative priors' effect on model accuracy

    PubMed Central

    Morris, William K; Vesk, Peter A; McCarthy, Michael A; Bunyavejchewin, Sarayudh; Baker, Patrick J

    2015-01-01

    Despite benefits for precision, ecologists rarely use informative priors. One reason that ecologists may prefer vague priors is the perception that informative priors reduce accuracy. To date, no ecological study has empirically evaluated data-derived informative priors' effects on precision and accuracy. To determine the impacts of priors, we evaluated mortality models for tree species using data from a forest dynamics plot in Thailand. Half the models used vague priors, and the remaining half had informative priors. We found precision was greater when using informative priors, but effects on accuracy were more variable. In some cases, prior information improved accuracy, while in others, it was reduced. On average, models with informative priors were no more or less accurate than models without. Our analyses provide a detailed case study on the simultaneous effect of prior information on precision and accuracy and demonstrate that when priors are specified appropriately, they lead to greater precision without systematically reducing model accuracy. PMID:25628867
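
The precision gain from informative priors can be illustrated with a conjugate beta-binomial mortality model; the counts and prior parameters below are hypothetical, not the study's data:

```python
import math

def beta_posterior(a, b, deaths, survivors):
    """Conjugate update: Beta(a, b) prior + binomial mortality data."""
    return a + deaths, b + survivors

def beta_mean_sd(a, b):
    """Posterior mean and standard deviation of a Beta(a, b)."""
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, sd

# One census: 9 deaths among 100 stems of a species.
deaths, survivors = 9, 91

# Vague prior vs an informative prior encoding ~10% annual mortality
# from earlier censuses (all values hypothetical).
mean_v, sd_v = beta_mean_sd(*beta_posterior(1, 1, deaths, survivors))
mean_i, sd_i = beta_mean_sd(*beta_posterior(10, 90, deaths, survivors))
```

Because this prior is centred near the data, the informative posterior is tighter (greater precision) without shifting the estimate much (accuracy preserved); a badly centred prior would instead trade accuracy for precision, which is the variability the study reports.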

  7. Robust 3D–2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    PubMed Central

    Otake, Yoshito; Wang, Adam S; Stayman, J Webster; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2016-01-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the
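
The multi-start strategy that underpins the framework's robustness to poor initialization can be sketched on a toy 1-D cost landscape; the cost function and search parameters below are illustrative stand-ins for the actual image-similarity metric and CMA-ES optimizer:

```python
import math
import random

def cost(t):
    """Toy 1-D registration cost: a global optimum near t = 3.14
    surrounded by local minima, standing in for an image-similarity
    landscape with deformation and content mismatch."""
    return (t - 3.0) ** 2 + 2.0 * math.sin(5.0 * t) ** 2

def local_search(t0, step=0.05, iters=400):
    """Greedy local descent from a single initialization."""
    t = t0
    for _ in range(iters):
        t = min((t - step, t, t + step), key=cost)
    return t

def multi_start(n_starts, lo=-10.0, hi=10.0, seed=0):
    """Independent local searches from random initializations; keep the
    best result, mirroring multi-start global registration."""
    rng = random.Random(seed)
    return min((local_search(rng.uniform(lo, hi)) for _ in range(n_starts)),
               key=cost)

single = local_search(-8.0)   # one poor initialization -> a local minimum
multi = multi_start(20)       # multiple starts escape it
```

Fewer starts make the search faster but more likely to keep a local minimum, which is the robustness-versus-computation-time tradeoff the experiments quantify.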

  8. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation

    NASA Astrophysics Data System (ADS)

    Otake, Yoshito; Wang, Adam S.; Webster Stayman, J.; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A. Jay; Gokaslan, Ziya L.; Siewerdsen, Jeffrey H.

    2013-12-01

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with ‘success’ defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial

  9. Robust 3D-2D image registration: application to spine interventions and vertebral labeling in the presence of anatomical deformation.

    PubMed

    Otake, Yoshito; Wang, Adam S; Webster Stayman, J; Uneri, Ali; Kleinszig, Gerhard; Vogt, Sebastian; Khanna, A Jay; Gokaslan, Ziya L; Siewerdsen, Jeffrey H

    2013-12-07

    We present a framework for robustly estimating registration between a 3D volume image and a 2D projection image and evaluate its precision and robustness in spine interventions for vertebral localization in the presence of anatomical deformation. The framework employs a normalized gradient information similarity metric and multi-start covariance matrix adaptation evolution strategy optimization with local-restarts, which provided improved robustness against deformation and content mismatch. The parallelized implementation allowed orders-of-magnitude acceleration in computation time and improved the robustness of registration via multi-start global optimization. Experiments involved a cadaver specimen and two CT datasets (supine and prone) and 36 C-arm fluoroscopy images acquired with the specimen in four positions (supine, prone, supine with lordosis, prone with kyphosis), three regions (thoracic, abdominal, and lumbar), and three levels of geometric magnification (1.7, 2.0, 2.4). Registration accuracy was evaluated in terms of projection distance error (PDE) between the estimated and true target points in the projection image, including 14 400 random trials (200 trials on the 72 registration scenarios) with initialization error up to ±200 mm and ±10°. The resulting median PDE was better than 0.1 mm in all cases, depending somewhat on the resolution of input CT and fluoroscopy images. The cadaver experiments illustrated the tradeoff between robustness and computation time, yielding a success rate of 99.993% in vertebral labeling (with 'success' defined as PDE <5 mm) using 1 718 664 ± 96 582 function evaluations computed in 54.0 ± 3.5 s on a mid-range GPU (nVidia, GeForce GTX690). Parameters yielding a faster search (e.g., fewer multi-starts) reduced robustness under conditions of large deformation and poor initialization (99.535% success for the same data registered in 13.1 s), but given good initialization (e.g., ±5 mm, assuming a robust initial run) the

  10. Robust Decision-making Applied to Model Selection

    SciTech Connect

    Hemez, Francois M.

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.
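
The info-gap trade-off described above can be sketched numerically: a model's robustness is the largest uncertainty horizon at which its worst-case prediction error still meets a tolerance. The models, data and tolerance below are hypothetical:

```python
def worst_case_error(model, p0, alpha, data):
    """Worst-case absolute prediction error when the uncertain
    parameter may lie anywhere in [p0 - alpha, p0 + alpha]."""
    worst = 0.0
    steps = 50
    for i in range(steps + 1):
        p = p0 - alpha + 2.0 * alpha * i / steps
        err = max(abs(model(p, x) - y) for x, y in data)
        worst = max(worst, err)
    return worst

def robustness(model, p0, tol, data, alpha_max=5.0):
    """Info-gap robustness: the largest uncertainty horizon alpha at
    which the worst-case error still satisfies the tolerance."""
    lo, hi = 0.0, alpha_max
    for _ in range(40):  # bisection; worst-case error grows with alpha
        mid = 0.5 * (lo + hi)
        if worst_case_error(model, p0, mid, data) <= tol:
            lo = mid
        else:
            hi = mid
    return lo

# Hypothetical calibration data and two candidate models of it.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1)]
linear = lambda p, x: p * x
quadratic = lambda p, x: p * x + 0.02 * x * x

rob_linear = robustness(linear, 2.0, tol=0.5, data=data)
rob_quadratic = robustness(quadratic, 2.0, tol=0.5, data=data)
```

In this toy example the linear model fits the data better at the nominal parameter, yet the quadratic model tolerates a slightly larger uncertainty horizon, illustrating the observation that nominal accuracy and robustness need not rank models the same way.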

  11. Visual inspection reliability for precision manufactured parts

    DOE PAGES

    See, Judi E.

    2015-09-04

    Sandia National Laboratories conducted an experiment for the National Nuclear Security Administration to determine the reliability of visual inspection of precision manufactured parts used in nuclear weapons. Although visual inspection has been extensively researched since the early 20th century, the reliability of visual inspection for nuclear weapons parts has not been addressed. In addition, the efficacy of using inspector confidence ratings to guide multiple inspections in an effort to improve overall performance accuracy is unknown. Further, the workload associated with inspection has not been documented, and newer measures of stress have not been applied.

  12. Precise and automated microfluidic sample preparation.

    SciTech Connect

    Crocker, Robert W.; Patel, Kamlesh D.; Mosier, Bruce P.; Harnett, Cindy K.

    2004-07-01

    Autonomous bio-chemical agent detectors require sample preparation involving multiplex fluid control. We have developed a portable microfluidic pump array for metering sub-microliter volumes at flowrates of 1-100 µL/min. Each pump is composed of an electrokinetic (EK) pump and high-voltage power supply with 15-Hz feedback from flow sensors. The combination of high pump fluid impedance and active control results in precise fluid metering with nanoliter accuracy. Automated sample preparation will be demonstrated by labeling proteins with fluorescamine and subsequent injection to a capillary gel electrophoresis (CGE) chip.
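
The active-control idea, a feedback loop trimming the pump drive until the flow sensor reads the setpoint, can be sketched as a simple PI loop. The plant gain, controller gains and limits below are invented for illustration, not the instrument's actual values:

```python
def ek_pump_flow(voltage, gain=0.8):
    """Hypothetical electrokinetic pump: flowrate (uL/min) roughly
    proportional to the applied high voltage (kV)."""
    return gain * voltage

def meter(target_flow, hz=15.0, seconds=10.0, kp=0.5, ki=2.0):
    """15 Hz feedback loop: a PI controller trims the pump voltage
    until the flow sensor reads the setpoint."""
    dt = 1.0 / hz
    voltage = integral = flow = 0.0
    for _ in range(int(seconds * hz)):
        flow = ek_pump_flow(voltage)            # flow-sensor reading
        err = target_flow - flow
        integral += err * dt
        voltage = kp * err + ki * integral
        voltage = max(0.0, min(10.0, voltage))  # HV supply limits
    return flow

final_flow = meter(4.0)   # settle on a 4 uL/min setpoint
```

The integral term removes steady-state error, so metered volume (flow integrated over a dispense window) can hit nanoliter-scale targets despite drifts in the effective pump gain.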

  13. The GBT precision telescope control system

    NASA Astrophysics Data System (ADS)

    Prestage, Richard M.; Constantikes, Kim T.; Balser, Dana S.; Condon, James J.

    2004-10-01

    The NRAO Robert C. Byrd Green Bank Telescope (GBT) is a 100 m diameter advanced single dish radio telescope designed for a wide range of astronomical projects with special emphasis on precision imaging. Open-loop adjustments of the active surface, and real-time corrections to pointing and focus on the basis of structural temperatures, already allow observations at frequencies up to 50 GHz. Our ultimate goal is to extend the observing frequency limit up to 115 GHz; this will require a two-dimensional tracking error better than 1.3", and an rms surface accuracy better than 210 μm. The Precision Telescope Control System project has two main components. One aspect is the continued deployment of appropriate metrology systems, including temperature sensors, inclinometers, laser rangefinders and other devices. An improved control system architecture will harness this measurement capability with the existing servo systems, to deliver the precision operation required. The second aspect is the execution of a series of experiments to identify, understand and correct the residual pointing and surface accuracy errors. These can have multiple causes, many of which depend on variable environmental conditions. A particularly novel approach is to solve simultaneously for gravitational, thermal and wind effects in the development of the telescope pointing and focus tracking models. Our precision temperature sensor system has already allowed us to compensate for thermal gradients in the antenna, which were previously responsible for the largest "non-repeatable" pointing and focus tracking errors. We are now targeting the effects of wind as the next uncompensated source of error.
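    The idea of solving simultaneously for gravitational, thermal and wind effects can be sketched as a single linear least-squares problem. The regressors and coefficients below are hypothetical stand-ins (a gravity term in the cosine of elevation, a thermal-gradient term, a quadratic wind term), not the actual GBT pointing model.

```python
import numpy as np

# Hypothetical sketch of a simultaneous fit: pointing residuals modeled as a
# linear combination of gravity, thermal and wind terms, solved in one pass.
rng = np.random.default_rng(0)
n = 500
elev = rng.uniform(0.1, 1.4, n)           # elevation, rad
dT = rng.normal(0.0, 1.0, n)              # measured thermal gradient, K
wind = rng.uniform(0.0, 10.0, n)          # wind speed, m/s

true_coef = np.array([5.0, 2.0, 0.3])     # assumed coefficients, arcsec
A = np.column_stack([np.cos(elev), dT, wind**2])
resid = A @ true_coef + rng.normal(0.0, 0.5, n)  # simulated pointing residuals

coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
print(np.round(coef, 2))                  # close to the assumed [5.0, 2.0, 0.3]
```

    Fitting all three effects together avoids attributing, say, a wind-driven error to the thermal model, which is the advantage of the simultaneous solution over correcting each effect in isolation.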

  14. Dynamics robustness of cascading systems.

    PubMed

    Young, Jonathan T; Hatakeyama, Tetsuhiro S; Kaneko, Kunihiko

    2017-03-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust against perturbations to certain parameters. Then, analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade's kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: The phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: The kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a
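    The cascade behavior described above can be sketched with a minimal simulation. The equations, rate constants and stimulus below are assumptions chosen to satisfy the two stated conditions (fast, saturating kinase activity; the slowest phosphatase not in the perturbed module), not the authors' exact model.

```python
import numpy as np

# Minimal sketch (assumed equations) of a three-stage phosphorylation cascade
# driven by a transient stimulus. Each module obeys
#   d x_i/dt = k * upstream_i * (1 - x_i) - p_i * x_i.
# Dynamics robustness: perturbing the upstream phosphatase p1 barely changes
# the duration of the downstream response x3, because the rate-limiting
# (slowest) module sits downstream.

def response_duration(p1, dt=0.001, t_end=30.0):
    x = np.zeros(3)
    k = 10.0                 # fast kinase activity, saturating each module
    p = [p1, 5.0, 0.5]       # module 3 is rate-limiting (slowest decay)
    above = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        s = 1.0 if t < 5.0 else 0.0    # stimulus on for 5 time units
        up = [s, x[0], x[1]]
        for i in range(3):             # explicit Euler step per module
            x[i] += dt * (k * up[i] * (1.0 - x[i]) - p[i] * x[i])
        if x[2] > 0.5:
            above += dt
    return above

base = response_duration(p1=5.0)
perturbed = response_duration(p1=10.0)  # two-fold phosphatase perturbation
print(round(base, 2), round(perturbed, 2))  # durations nearly identical
```

    Doubling the upstream phosphatase p1 changes the downstream response duration only marginally: the upstream change is masked along the cascade, and the duration is set by the slow third module.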

  15. Dynamics robustness of cascading systems

    PubMed Central

    Kaneko, Kunihiko

    2017-01-01

    A most important property of biochemical systems is robustness. Static robustness, e.g., homeostasis, is the insensitivity of a state against perturbations, whereas dynamics robustness, e.g., homeorhesis, is the insensitivity of a dynamic process. In contrast to the extensively studied static robustness, dynamics robustness, i.e., how a system creates an invariant temporal profile against perturbations, is little explored, despite transient dynamics being crucial for cellular fates and reported to be robust experimentally. For example, the duration of a stimulus elicits different phenotypic responses, and signaling networks process and encode temporal information. Hence, robustness in time courses will be necessary for functional biochemical networks. Based on dynamical systems theory, we uncovered a general mechanism to achieve dynamics robustness. Using a three-stage linear signaling cascade as an example, we found that the temporal profiles and response duration post-stimulus are robust against perturbations to certain parameters. Then, analyzing the linearized model, we elucidated the criteria for when signaling cascades will display dynamics robustness. We found that changes in the upstream modules are masked in the cascade, and that the response duration is mainly controlled by the rate-limiting module and the organization of the cascade’s kinetics. Specifically, we found two necessary conditions for dynamics robustness in signaling cascades: 1) Constraint on the rate-limiting process: The phosphatase activity in the perturbed module is not the slowest. 2) Constraints on the initial conditions: The kinase activity needs to be fast enough such that each module is saturated even with fast phosphatase activity and upstream changes are attenuated. We discussed the relevance of such robustness to several biological examples and the validity of the above conditions therein. Given the applicability of dynamics robustness to a variety of systems, it will provide a

  16. Robust, accurate and fast automatic segmentation of the spinal cord.

    PubMed

    De Leener, Benjamin; Kadoury, Samuel; Cohen-Adad, Julien

    2014-09-01

    Spinal cord segmentation provides measures of atrophy and facilitates group analysis via inter-subject correspondence. Automating this procedure enables studies with large throughput and minimizes user bias. Although several automatic segmentation methods exist, they are often restricted in terms of image contrast and field of view. This paper presents a new automatic segmentation method (PropSeg) optimized for robustness, accuracy and speed. The algorithm is based on the propagation of a deformable model and is divided into three parts: first, an initialization step detects the spinal cord position and orientation using a circular Hough transform on multiple axial slices rostral and caudal to the starting plane and builds an initial elliptical tubular mesh. Second, a low-resolution deformable model is propagated along the spinal cord. To deal with highly variable contrast levels between the spinal cord and the cerebrospinal fluid, the deformation is coupled with a local contrast-to-noise adaptation at each iteration. Third, a refinement process and a global deformation are applied to the propagated mesh to provide an accurate segmentation of the spinal cord. Validation was performed in 15 healthy subjects and two patients with spinal cord injury, using T1- and T2-weighted images of the entire spinal cord and multi-echo T2*-weighted images. Our method was compared against manual segmentation and against an active surface method. Results show high precision for all the MR sequences. Dice coefficients were 0.9 for the T1- and T2-weighted cohorts and 0.86 for the T2*-weighted images. The proposed method runs in less than 1 min on a normal computer and can be used to quantify morphological features such as cross-sectional area along the whole spinal cord.
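    The initialization step's circular Hough transform can be sketched on a synthetic axial slice. This is an illustrative re-implementation assuming a known radius, not the PropSeg code: edge pixels vote for candidate circle centres, and the accumulator maximum gives the cord centre.

```python
import numpy as np

# Circular Hough transform sketch: each edge pixel votes for the centres of
# all circles of the given radius that could pass through it.

def hough_circle_center(edge, radius):
    acc = np.zeros_like(edge, dtype=float)
    ys, xs = np.nonzero(edge)
    angles = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
    for y, x in zip(ys, xs):
        cy = np.round(y - radius * np.sin(angles)).astype(int)
        cx = np.round(x - radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1.0)   # unbuffered accumulation
    return np.unravel_index(np.argmax(acc), acc.shape)

# Synthetic slice: a circular "spinal cord" edge of radius 8 centred at (40, 30).
img = np.zeros((64, 64), dtype=bool)
t = np.linspace(0.0, 2.0 * np.pi, 360)
img[np.round(40 + 8 * np.sin(t)).astype(int),
    np.round(30 + 8 * np.cos(t)).astype(int)] = True

print(hough_circle_center(img, radius=8))  # near (40, 30)
```

    In PropSeg this detection is repeated on several axial slices to recover both the cord position and its orientation before the tubular mesh is built.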

  17. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to under-estimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  18. Groves model accuracy study

    NASA Astrophysics Data System (ADS)

    Peterson, Matthew C.

    1991-08-01

    The United States Air Force Environmental Technical Applications Center (USAFETAC) was tasked to review the scientific literature for studies of the Groves Neutral Density Climatology Model and compare the Groves Model with others in the 30-60 km range. The tasking included a request to investigate the merits of comparing the accuracy of the Groves Model to rocketsonde data. USAFETAC analysts found the Groves Model to be state of the art for middle-atmospheric climatological models. A review of previous comparisons with other models and with space shuttle-derived atmospheric densities found good density-versus-altitude agreement in almost all cases. A simple technique involving comparison of the model with range reference atmospheres was found to be the most economical way to compare the Groves Model with rocketsonde data; an example of this type is provided. The Groves 85 Model is used routinely in USAFETAC's Improved Point Analysis Model (IPAM). To create this model, Dr. Gerald Vann Groves produced tabulations of atmospheric density based on data derived from satellite observations and modified by rocketsonde observations. Neutral density as presented here refers to the monthly mean density in 10-degree latitude bands as a function of altitude. The Groves 85 Model zonal mean density tabulations are given in their entirety.

  19. Precision Medicine in Cancer Treatment

    Cancer.gov

    Precision medicine helps doctors select cancer treatments that are most likely to help patients based on a genetic understanding of their disease. Learn about the promise of precision medicine and the role it plays in cancer treatment.

  20. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  1. Truss Assembly and Welding by Intelligent Precision Jigging Robots

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2014-01-01

    This paper describes an Intelligent Precision Jigging Robot (IPJR) prototype that enables the precise alignment and welding of titanium space telescope optical benches. The IPJR, equipped with micron-accuracy sensors and actuators, worked in tandem with a lower-precision remote-controlled manipulator. The combined system assembled and welded a 2 m truss from stock titanium components. The calibration of the IPJR, and the difference between the predicted and as-built truss dimensions, identified additional sources of error that should be addressed in the next generation of IPJRs in 2D and 3D.

  2. Precision spectroscopy of hydrogen and femtosecond laser frequency combs.

    PubMed

    Hänsch, T W; Alnis, J; Fendel, P; Fischer, M; Gohle, C; Herrmann, M; Holzwarth, R; Kolachevsky, N; Udem, Th; Zimmermann, M

    2005-09-15

    Precision spectroscopy of the simple hydrogen atom has inspired dramatic advances in optical frequency metrology: femtosecond laser optical frequency comb synthesizers have revolutionized the precise measurement of optical frequencies, and they provide a reliable clock mechanism for optical atomic clocks. Precision spectroscopy of the hydrogen 1S-2S two-photon resonance has reached an accuracy of 1.4 parts in 10^14, and considerable future improvements are envisioned. Such laboratory experiments are setting new limits for possible slow variations of the fine structure constant α and the magnetic moment of the caesium nucleus μ_Cs in units of the Bohr magneton μ_B.

  3. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely because it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature- and pressure-regulated chamber, locked to a Zeeman-stabilized HeNe laser, we measure changes in the frequency of our diode laser to ~1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short- and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: J. Cariou and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). J. Coker et al., J. Opt. Soc. Am. B 28, 2934 (2011). J. Verges et al., Physica Scripta 25, 338 (1982). Ph. Courteille et al., Appl. Phys. B 59, 187 (1994). T. J. Scholl et al., J. Opt. Soc. Am. B 22, 1128 (2005).

  4. Mathematics for modern precision engineering.

    PubMed

    Scott, Paul J; Forbes, Alistair B

    2012-08-28

    The aim of precision engineering is the accurate control of geometry. For this reason, mathematics has a long association with precision engineering: from the calculation and correction of angular scales used in surveying and astronomical instrumentation to statistical averaging techniques used to increase precision. This study illustrates the enabling role the mathematical sciences are playing in precision engineering: modelling physical processes, instruments and complex geometries, statistical characterization of metrology systems and error compensation.

  5. Micromechanical silicon precision scale

    NASA Astrophysics Data System (ADS)

    Oja, Aarne S.; Sillanpaa, Teuvo; Seppae, H.; Kiihamaki, Jyrki; Seppala, P.; Karttunen, Jani; Riski, Kari

    2000-04-01

    A micromachined capacitive silicon scale has been designed and fabricated. It is intended for weighing masses on the order of 1 g at a resolution of about 1 ppm and below. The device consists of a micromachined SOI chip which is anodically bonded to a glass chip. The flexible electrode is formed in the SOI device layer. The other electrode is metallized on the glass and is divided into three sections. The sections are used for detecting tilting of the top electrode due to a possible off-centering of the mass load. The measuring circuit implements electrostatic force feedback and keeps the top electrode at a constant horizontal position irrespective of its mass loading. First measurements have demonstrated stability allowing measurement of 1 g masses at an accuracy of 2-3 ppm.
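    The electrostatic force-feedback principle can be checked with a back-of-envelope balance. The electrode area and gap below are assumed round numbers, not the device's actual dimensions: the servoed voltage V satisfies the parallel-plate balance eps0*A*V^2/(2*d^2) = m*g.

```python
import math

# Back-of-envelope sketch of electrostatic force feedback: the voltage on a
# parallel-plate capacitor is servoed so the electrostatic attraction
# balances the weight of the load.

eps0 = 8.854e-12          # vacuum permittivity, F/m
area = 1.0e-4             # assumed electrode area: 1 cm^2
gap = 2.0e-6              # assumed electrode gap: 2 um
m, g = 1.0e-3, 9.81       # 1 g mass

v_balance = math.sqrt(2.0 * m * g * gap**2 / (eps0 * area))
print(round(v_balance, 1))   # -> 9.4 (volts, for these assumed dimensions)
```

    Because the feedback voltage, not a deflection, is the measurand, the top electrode never moves from its null position, which is what makes the capacitive readout both linear and stable.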

  6. Robust Modeling of Stellar Triples in PHOEBE

    NASA Astrophysics Data System (ADS)

    Conroy, Kyle E.; Prsa, Andrej; Horvat, Martin; Stassun, Keivan G.

    2017-01-01

    The number of known mutually-eclipsing stellar triple and multiple systems has increased greatly during the Kepler era. These systems provide significant opportunities both to determine fundamental stellar parameters of benchmark systems to unprecedented precision and to study the dynamical interaction and formation mechanisms of stellar and planetary systems. Modeling these systems to their full potential, however, has not been feasible until recently. Most existing codes are restricted to the two-body binary case, and those that do provide N-body support for more components sacrifice precision by assuming no stellar surface distortion. We have completely redesigned and rewritten the PHOEBE binary modeling code to incorporate support for triple and higher-order systems while also robustly modeling data with Kepler precision. Here we present our approach, demonstrate several test cases based on real data, and discuss the current status of PHOEBE's support for modeling these types of systems. PHOEBE is funded in part by NSF grant #1517474.

  7. Robust relativistic bit commitment

    NASA Astrophysics Data System (ADS)

    Chakraborty, Kaushik; Chailloux, André; Leverrier, Anthony

    2016-12-01

    Relativistic cryptography exploits the fact that no information can travel faster than the speed of light in order to obtain security guarantees that cannot be achieved from the laws of quantum mechanics alone. Recently, Lunghi et al. [Phys. Rev. Lett. 115, 030502 (2015), 10.1103/PhysRevLett.115.030502] presented a bit-commitment scheme where each party uses two agents that exchange classical information in a synchronized fashion, and that is both hiding and binding. A caveat is that the commitment time is intrinsically limited by the spatial configuration of the players, and increasing this time requires the agents to exchange messages during the whole duration of the protocol. While such a solution remains computationally attractive, its practicality is severely limited in realistic settings since all communication must remain perfectly synchronized at all times. In this work, we introduce a robust protocol for relativistic bit commitment that tolerates failures of the classical communication network. This is done by adding a third agent to both parties. Our scheme provides a quadratic improvement in terms of expected sustain time compared with the original protocol, while retaining the same level of security.

  8. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair foundation.

  9. Robust Weak Measurements

    NASA Astrophysics Data System (ADS)

    Tollaksen, Jeff; Aharonov, Yakir

    2006-03-01

    We introduce a new type of weak measurement which yields a quantum average of weak values that is robust, outside the range of eigenvalues, extends the valid regime for weak measurements, and for which the probability of obtaining the pre- and post-selected ensemble is not exponentially rare. This result extends the applicability of weak values, shifts the statistical interpretation previously attributed to weak values and suggests that the weak value is a property of every pre- and post-selected ensemble. We then apply this new weak measurement to Hardy's paradox. Usually the paradox is dismissed on grounds of counterfactuality, i.e., because the paradoxical effects appear only when one considers results of experiments which do not actually take place. We suggest a new set of measurements in connection with Hardy's scheme, and show that when they are actually performed, they yield strange and surprising outcomes. More generally, we claim that counterfactual paradoxes point to a deeper structure inherent to quantum mechanics characterized by weak values (Aharonov Y, Botero A, Popescu S, Reznik B, Tollaksen J, Physics Letters A, 301 (3-4): 130-138, 2002).

  10. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 x 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics of the system and the system sensitivity. Dynamic performance was obtained using a small rocket covered with retroreflective material, launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity was checked using an efficient retroreflector mounted on a light aircraft, which was tracked out to a maximum range of 15 km, confirming the daytime sensitivity measured by other means. The system has also been used to passively track stars and the Echo I satellite. It passively tracked a +7.5 magnitude star, and the signal-to-noise ratio in this experiment indicates that it should be possible to track a +12.5 magnitude star.

  11. Robust Control Feedback and Learning

    DTIC Science & Technology

    2002-11-30

    Final Report: Robust Control, Feedback and Learning, AFOSR Grant F49620-98-1-0026, by Michael G. Safonov. Cited therein: M. G. Safonov, "Recent advances in robust control, feedback and learning," in S. O. R. Moheimani, editor, Perspectives in Robust Control, Philadelphia, PA, 2000.

  12. Robustness surfaces of complex networks

    NASA Astrophysics Data System (ADS)

    Manzano, Marc; Sahneh, Faryad; Scoglio, Caterina; Calle, Eusebi; Marzo, Jose Luis

    2014-09-01

    Although the robustness of complex networks has been extensively studied in the last decade, a unifying framework able to embrace all the proposed metrics is still lacking. In the literature there are two open issues related to this gap: (a) how to dimension several metrics to allow their summation and (b) how to weight each of the metrics. In this work we propose a solution to the two aforementioned problems by defining the R*-value and introducing the concept of the robustness surface (Ω). The rationale of our proposal is to make use of Principal Component Analysis (PCA). First, we normalize the initial robustness of a network to 1. Second, we find the most informative robustness metric under a specific failure scenario. Then we repeat the process for several failure percentages and different realizations of the failure process. Lastly, we join these values to form the robustness surface, which allows the visual assessment of network robustness variability. Results show that a network presents different robustness surfaces (i.e., dissimilar shapes) depending on the failure scenario and the set of metrics. In addition, the robustness surface allows the robustness of different networks to be compared.
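    The PCA step can be sketched on hypothetical data. Rows are realizations of a failure process, columns are candidate robustness metrics normalized to an initial value of 1; the loading pattern of the first principal component points to the metrics most informative for the scenario. The data below are simulated for illustration only, not taken from the paper.

```python
import numpy as np

# Sketch of picking the most informative robustness metric via PCA.
rng = np.random.default_rng(1)
n_runs = 50
# Simulated metric values after a fixed fraction of failures (hypothetical):
# the first two metrics track the damage level; the last two barely move.
damage = rng.uniform(0.4, 0.9, (n_runs, 1))
metrics = np.hstack([
    damage + rng.normal(0, 0.02, (n_runs, 1)),   # tracks damage closely
    damage + rng.normal(0, 0.05, (n_runs, 1)),   # tracks damage, noisier
    rng.uniform(0.7, 0.8, (n_runs, 1)),          # nearly constant
    rng.uniform(0.7, 0.8, (n_runs, 1)),          # nearly constant
])

X = metrics - metrics.mean(axis=0)               # center before PCA
_, _, vt = np.linalg.svd(X, full_matrices=False)
loadings = np.abs(vt[0])                         # first-PC loadings
print(np.round(loadings, 2))  # damage-tracking metrics dominate the component
```

    Uninformative metrics receive near-zero loadings, so repeating this per failure percentage and per realization yields the values that are stacked into the robustness surface.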

  13. Automatic Mode Transition Enabled Robust Triboelectric Nanogenerators.

    PubMed

    Chen, Jun; Yang, Jin; Guo, Hengyu; Li, Zhaoling; Zheng, Li; Su, Yuanjie; Wen, Zhen; Fan, Xing; Wang, Zhong Lin

    2015-12-22

    Although the triboelectric nanogenerator (TENG) has been proven to be a renewable and effective route for ambient energy harvesting, its robustness remains a great challenge due to the requirement of surface friction for a decent output, especially for the in-plane sliding mode TENG. Here, we present a rationally designed TENG for achieving a high output performance without compromising the device robustness by, first, converting the in-plane sliding electrification into a contact separation working mode and, second, creating an automatic transition between a contact working state and a noncontact working state. The magnet-assisted automatic transition triboelectric nanogenerator (AT-TENG) was demonstrated to effectively harness various ambient rotational motions to generate electricity with greatly improved device robustness. At a wind speed of 6.5 m/s or a water flow rate of 5.5 L/min, the harvested energy was capable of lighting up 24 spot lights (0.6 W each) simultaneously and charging a capacitor to greater than 120 V in 60 s. Furthermore, due to the rational structural design and unique output characteristics, the AT-TENG was capable not only of harvesting energy from natural bicycling and car motion but also of acting as a self-powered speedometer with ultrahigh accuracy. Given such features as structural simplicity, easy fabrication, low cost, wide applicability even in a harsh environment, and high output performance with superior device robustness, the AT-TENG offers an effective and practical approach for ambient mechanical energy harvesting as well as self-powered active sensing.

  14. Robust GPS autonomous signal quality monitoring

    NASA Astrophysics Data System (ADS)

    Ndili, Awele Nnaemeka

    The Global Positioning System (GPS), introduced by the U.S. Department of Defense in 1973, provides unprecedented world-wide navigation capabilities through a constellation of 24 satellites in global orbit, each emitting a low-power radio-frequency signal for ranging. GPS receivers track these transmitted signals, computing position to within 30 meters from range measurements made to four satellites. GPS has a wide range of applications, including aircraft, marine and land vehicle navigation. Each application places demands on GPS for various levels of accuracy, integrity, system availability and continuity of service. Radio frequency interference (RFI), which results from sources such as TV/FM harmonics, radar or Mobile Satellite Systems (MSS), presents a challenge in the use of GPS by posing a threat to the accuracy, integrity and availability of the GPS navigation solution. In order to use GPS for integrity-sensitive applications, it is therefore necessary to monitor the quality of the received signal, with the objective of promptly detecting the presence of RFI, and thus provide a timely warning of degradation of system accuracy. This presents a challenge, since the myriad kinds of RFI affect the GPS receiver in different ways. What is required, then, is a robust method of detecting GPS accuracy degradation that is effective regardless of the origin of the threat. This dissertation presents a new method of robust signal quality monitoring for GPS. Algorithms for receiver autonomous interference detection and integrity monitoring are demonstrated. Candidate test statistics are derived from fundamental receiver measurements of in-phase and quadrature correlation outputs, and the gain of the automatic gain control (AGC).
Performance of selected test statistics is evaluated in the presence of RFI: broadband interference, pulsed and non-pulsed interference, coherent CW at different frequencies; and non-RFI: GPS signal fading due to physical blockage and
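    One family of candidate test statistics from the in-phase/quadrature correlator outputs can be illustrated as follows. The statistic, threshold and noise levels are invented for the sketch (the dissertation's actual statistics are not reproduced here): broadband RFI inflates total correlator power relative to coherent signal power, so their ratio falls under interference.

```python
import numpy as np

# Hedged illustration of an I/Q-based signal quality statistic: the ratio of
# coherent power to total power in the prompt correlator outputs. Broadband
# RFI raises the noise floor, driving the ratio below a threshold that is
# set from interference-free data.

rng = np.random.default_rng(2)

def power_ratio(i_samples, q_samples):
    coherent = np.mean(i_samples) ** 2 + np.mean(q_samples) ** 2
    total = np.mean(i_samples**2 + q_samples**2)
    return coherent / total

n = 1000
amp = 3.0                                # tracked signal amplitude in I
clean_i = amp + rng.normal(0, 1, n)      # nominal: unit noise
clean_q = rng.normal(0, 1, n)
jam_i = amp + rng.normal(0, 5, n)        # broadband RFI: 5x noise floor
jam_q = rng.normal(0, 5, n)

threshold = 0.5                          # assumed, from clean statistics
print(power_ratio(clean_i, clean_q) > threshold)   # True: nominal
print(power_ratio(jam_i, jam_q) > threshold)       # False: RFI flagged
```

    Because the statistic is formed from quantities the receiver already computes while tracking, this style of monitoring needs no extra RF hardware, which is what makes it receiver-autonomous.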

  15. Impact of orbit, clock and EOP errors in GNSS Precise Point Positioning

    NASA Astrophysics Data System (ADS)

    Hackman, C.

    2012-12-01

    Precise point positioning (PPP; [1]) has gained ever-increasing usage in GNSS carrier-phase positioning, navigation and timing (PNT) since its inception in the late 1990s. In this technique, high-precision satellite clocks, satellite ephemerides and earth-orientation parameters (EOPs) are applied as fixed input by the user in order to estimate receiver/location-specific quantities such as antenna coordinates, troposphere delay and receiver-clock corrections. This is in contrast to "network" solutions, in which (typically) less-precise satellite clocks, satellite ephemerides and EOPs are used as input, and in which these parameters are estimated simultaneously with the receiver/location-specific parameters. The primary reason for increased PPP application is that it offers most of the benefits of a network solution at a smaller computing cost. In addition, the software required for PPP positioning can be simpler than that required for network solutions. Finally, PPP permits high-precision positioning of single or sparsely spaced receivers that may have few or no GNSS satellites in common view. A drawback of PPP is that the accuracy of the results depends directly on the accuracy of the supplied orbits, clocks and EOPs, since these parameters are not adjusted during the processing. In this study, we will examine the impact of orbit, EOP and satellite clock estimates on PPP solutions. Our primary focus will be the impact of these errors on station coordinates; however, the study may be extended to error propagation into receiver-clock corrections and/or troposphere estimates if time permits. Study motivation: the United States Naval Observatory (USNO) began testing PPP processing using its own predicted orbits, clocks and EOPs in Summer 2012 [2]. The results of such processing could be useful for real- or near-real-time applications should they meet accuracy/precision requirements. Understanding how errors in satellite clocks, satellite orbits and EOPs propagate

  16. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data.
This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis

  17. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve a high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor-controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high-resolution (0.075 arcsec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements with ~0.3 km/s accuracy. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
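    The air-index correction mentioned at the end of this abstract can be sketched numerically. The scaling below is a simplified Edlén-type, ideal-gas approximation with an illustrative standard-condition refractivity; it is not the paper's calibration model.

```python
# Hedged sketch: a wavelength measured in air depends on the refractive
# index n(T, P). Here the refractivity at standard conditions is scaled
# linearly with air density ~ P/T (ideal-gas approximation); the
# constants are illustrative, not the paper's.
def air_index(temp_k, pressure_kpa, n_std=1.000273):
    # n_std - 1 is the refractivity at 15 C (288.15 K) and 101.325 kPa.
    return 1.0 + (n_std - 1.0) * (pressure_kpa / 101.325) * (288.15 / temp_k)

def vacuum_wavelength(lambda_air_angstrom, temp_k, pressure_kpa):
    """Convert an in-air wavelength to its vacuum value."""
    return lambda_air_angstrom * air_index(temp_k, pressure_kpa)

# A 1 K temperature swing shifts a 6563 A line by roughly the 0.005 A
# level quoted in the abstract, which is why monitoring is needed.
l1 = vacuum_wavelength(6563.0, 288.15, 101.325)
l2 = vacuum_wavelength(6563.0, 289.15, 101.325)
print(round(l1 - l2, 4))  # ~0.0062 A
```

    A production calibration would use the full Edlén or Ciddor equations, which also account for humidity and CO2 content.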

  18. Precision Astronomy with Imperfect Deep Depletion CCDs

    NASA Astrophysics Data System (ADS)

    Stubbs, Christopher; LSST Sensor Team; PanSTARRS Team

    2014-01-01

    While thick CCDs do provide definite advantages in terms of increased quantum efficiency at wavelengths 700 nm < λ < 1.1 microns and reduced fringing from atmospheric emission lines, these devices also exhibit undesirable features that pose a challenge to the precise determination of the positions, fluxes, and shapes of astronomical objects, and to the precise extraction of features in astronomical spectra. For example, the assumptions of a perfectly rectilinear pixel grid and of an intensity-independent point spread function become increasingly invalid as we push to higher precision measurements. Many of the effects seen in these devices arise from lateral electric fields within the detector that produce charge transport anomalies that have previously been misinterpreted as quantum efficiency variations. Performing simplistic flat-fielding therefore introduces systematic errors into the image processing pipeline. One measurement challenge we face is devising a combination of calibration methods and algorithms that can distinguish genuine quantum efficiency variations from charge transport effects. These device imperfections also affect spectroscopic applications, such as line centroid determination for precision radial velocity studies. Given the scientific benefits of improving both the precision and accuracy of astronomical measurements, we need to identify, characterize, and overcome these various detector artifacts. In retrospect, many of the detector features first identified in thick CCDs also afflict measurements made with more traditional CCD detectors, albeit often at a reduced level, since the photocharge is subject to the perturbing influence of lateral electric fields for a shorter time interval. I provide a qualitative overview of the physical effects we think are responsible for the observed device properties, and provide some perspective for the work that lies ahead.

  19. EVALUATION OF METRIC PRECISION FOR A RIPARIAN FOREST SURVEY

    EPA Science Inventory

    This paper evaluates the performance of a protocol to monitor riparian forests in western Oregon based on the quality of the data obtained from a recent field survey. Precision and accuracy are the criteria used to determine the quality of 19 field metrics. The field survey con...

  20. Precise measurement of planeness.

    PubMed

    Schulz, G; Schwider, J

    1967-06-01

    Interference methods are reviewed, particularly those developed at the German Academy of Sciences in Berlin, with which the deviations of an optically flat surface from the ideal plane can be measured with a high degree of exactness. One aid to achieve this is the relative methods, which measure the differences in planeness between two surfaces. These are then used in the absolute methods, which determine the absolute planeness of a surface. This absolute determination can be effected in connection with a liquid surface, or (as done by the authors) only by suitable evaluation of relative measurements between unknown plates in various positional combinations. Experimentally, one uses two- or multiple-beam interference fringes of equal thickness(1) or of equal inclination. The fringes are observed visually, scanned, or photographed, and in part several wavelengths or curves of equal density (Äquidensiten) are employed. The survey also presents the following new methods: a relative method where, with the aid of fringes of superposition, the fringe separation is subdivided equidistantly, thus achieving an increase of measuring precision, and an absolute method which determines the deviations of a surface from ideal planeness along arbitrary central sections, without a liquid surface, from four relative interference photographs.

  1. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  2. Soviet precision timekeeping research and technology

    SciTech Connect

    Vessot, R.F.C.; Allan, D.W.; Crampton, S.J.B.; Cutler, L.S.; Kern, R.H.; McCoubrey, A.O.; White, J.D.

    1991-08-01

    This report is the result of a study of Soviet progress in precision timekeeping research and timekeeping capability during the last two decades. The study was conducted by a panel of seven US scientists who have expertise in timekeeping, frequency control, time dissemination, and the direct applications of these disciplines to scientific investigation. The following topics are addressed in this report: generation of time by atomic clocks at the present level of their technology, new and emerging technologies related to atomic clocks, time and frequency transfer technology, statistical processes involving metrological applications of time and frequency, applications of precise time and frequency to scientific investigations, supporting timekeeping technology, and a comparison of Soviet research efforts with those of the United States and the West. The number of Soviet professionals working in this field is roughly 10 times that in the United States. The Soviet Union has facilities for large-scale production of frequency standards and has concentrated its efforts on developing and producing rubidium gas cell devices (relatively compact, low-cost frequency standards of modest accuracy and stability) and atomic hydrogen masers (relatively large, high-cost standards of modest accuracy and high stability). 203 refs., 45 figs., 9 tabs.

  3. Glass ceramic ZERODUR enabling nanometer precision

    NASA Astrophysics Data System (ADS)

    Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Westerhoff, Thomas

    2014-03-01

    The IC lithography roadmap foresees manufacturing of devices with critical dimensions of < 20 nm. Overlay specifications of single-digit nanometers call for nanometer positioning accuracy, which in turn requires sub-nanometer position measurement accuracy. The glass ceramic ZERODUR® is a well-established material for critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion (CTE), at the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR® to fulfill the ever tighter CTE specifications for wafer stepper components. In this paper we present the ZERODUR® lithography roadmap for CTE metrology and tolerance. Additionally, simulation calculations based on a physical model are presented that predict the long-term CTE behavior of ZERODUR® components, to optimize the dimensional stability of precision positioning devices. CTE data of several low-thermal-expansion materials are compared regarding their temperature dependence between -50°C and +100°C. ZERODUR® TAILORED 22°C fulfills the tight CTE tolerance of +/- 10 ppb/K within the broadest temperature interval of all materials in this investigation. The data presented in this paper explicitly demonstrate the capability of ZERODUR® to enable the nanometer precision required for future generations of lithography equipment and processes.
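    A back-of-envelope calculation shows why ppb/K-level CTE tolerances matter at the nanometer scale (the numbers below are illustrative, not SCHOTT specifications):

```python
# Thermal expansion: dL = alpha * L * dT. With alpha quoted in ppb/K,
# a convenient identity is that 1 ppb of 1 m is exactly 1 nm, so
# ppb/K * m * K gives the length change in nm directly.
def length_change_nm(length_m, cte_ppb_per_k, delta_t_k):
    return cte_ppb_per_k * length_m * delta_t_k

# A hypothetical 1 m component at the +/-10 ppb/K tolerance bound,
# warmed by 0.5 K, grows by 5 nm - already comparable to a single-digit
# nanometer overlay budget.
print(length_change_nm(1.0, 10.0, 0.5))  # 5.0 nm
```

    This is why sub-nanometer position measurement demands both ultra-low-CTE material and tight thermal control.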

  4. Understanding the delayed-keyword effect on metacomprehension accuracy.

    PubMed

    Thiede, Keith W; Dunlosky, John; Griffin, Thomas D; Wiley, Jennifer

    2005-11-01

    The typical finding from research on metacomprehension is that accuracy is quite low. However, recent studies have shown robust accuracy improvements when judgments follow certain generation tasks (summarizing or keyword listing) but only when these tasks are performed at a delay rather than immediately after reading (K. W. Thiede & M. C. M. Anderson, 2003; K. W. Thiede, M. C. M. Anderson, & D. Therriault, 2003). The delayed and immediate conditions in these studies confounded the delay between reading and generation tasks with other task lags, including the lag between multiple generation tasks and the lag between generation tasks and judgments. The first 2 experiments disentangle these confounded manipulations and provide clear evidence that the delay between reading and keyword generation is the only lag critical to improving metacomprehension accuracy. The 3rd and 4th experiments show that not all delayed tasks produce improvements and suggest that delayed generative tasks provide necessary diagnostic cues about comprehension for improving metacomprehension accuracy.

  5. Robust Understanding of Statistical Variation

    ERIC Educational Resources Information Center

    Peters, Susan A.

    2011-01-01

    This paper presents a framework that captures the complexity of reasoning about variation in ways that are indicative of robust understanding and describes reasoning as a blend of design, data-centric, and modeling perspectives. Robust understanding is indicated by integrated reasoning about variation within each perspective and across…

  6. Robust, Optimal Subsonic Airfoil Shapes

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan

    2014-01-01

    A method has been developed to create an airfoil robust enough to operate satisfactorily in different environments. This method determines a robust, optimal, subsonic airfoil shape, beginning with an arbitrary initial airfoil shape, and imposes the necessary constraints on the design. Also, this method is flexible and extendible to a larger class of requirements and changes in constraints imposed.

  7. Positional and orientational referencing of multiple light sectioning systems for precision profile measurement

    NASA Astrophysics Data System (ADS)

    Tratnig, Mark; Hlobil, Helmut; Reisinger, Johann; O'Leary, Paul L.

    2005-02-01

    Precision rolled strips are often intermediate products in the manufacturing of blades. In such cases the shape and size of these strips are essential to the functionality and quality of the blade and cutting workpiece. Although precision strips are normally produced in heavily automated rolling mills, their size and shape are still inspected manually with profile gauges and microscopes. In this paper we present a measurement setup with multiple light-sectioning systems, which is suitable for the inspection of all sides of a profiled strip. It consists of three measurement heads, which are used to inspect the upper side, the lower side and the back of the blade. The heads are calibrated individually; the focus of the work here is to determine the relative position and orientation of the heads with respect to each other. A first approach was developed to reference two or more measurement heads. The calculation of the required transformations is based on the rotation of a suitable target. Due to the small depth of field, the location of the rotation axis must be pre-adjusted very precisely. To improve the accuracy and to simplify the process, a second referencing method was developed. The required target was manufactured by means of a 5-axis high-speed milling machine and features a thickness tolerance of less than 1 micron. Both the referencing method and target are presented. Additionally, we demonstrate the all-side inspection of a blade. It will be shown that the approaches allow a robust and flexible referencing of multiple measurement heads to each other.

  8. Facial symmetry in robust anthropometrics.

    PubMed

    Kalina, Jan

    2012-05-01

    Image analysis methods commonly used in forensic anthropology do not have desirable robustness properties, which can be ensured by robust statistical methods. In this paper, the face localization in images is carried out by detecting symmetric areas in the images. Symmetry is measured between two neighboring rectangular areas in the images using a new robust correlation coefficient, which down-weights regions in the face violating the symmetry. Raw images of faces without usual preliminary transformations are considered. The robust correlation coefficient based on the least weighted squares regression yields very promising results also in the localization of such faces, which are not entirely symmetric. Standard methods of statistical machine learning are applied for comparison. The robust correlation analysis can be applicable to other problems of forensic anthropology.
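    The down-weighting idea in this abstract can be illustrated with a generic weighted correlation coefficient. This is a hedged sketch in the spirit of the record, not Kalina's least weighted squares estimator; the data and weights are invented for illustration.

```python
# Illustrative weighted Pearson correlation: regions that violate the
# facial symmetry receive low weight, so one asymmetric outlier does
# not destroy the symmetry score.
def weighted_corr(x, y, w):
    """Pearson correlation with per-sample weights w >= 0."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return cov / (vx * vy) ** 0.5

# Mirrored intensity profiles with one asymmetric outlier; zeroing its
# weight restores a perfect symmetry score on the remaining samples.
left  = [10, 20, 30, 40, 200]   # the 200 violates the symmetry
right = [10, 20, 30, 40, 50]
w_all = [1, 1, 1, 1, 1]
w_rob = [1, 1, 1, 1, 0]         # down-weight the violating region
print(weighted_corr(left, right, w_rob))  # 1.0 on the symmetric part
```

    In the robust estimator itself, the weights are not fixed in advance but assigned adaptively from the ranked residuals.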

  9. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well-known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  10. Improving the precision of astrometry for space debris

    SciTech Connect

    Sun, Rongyu; Zhao, Changyin; Zhang, Xiaoxiang

    2014-03-01

    The data reduction method for optical space debris observations has many similarities with the one adopted for surveying near-Earth objects; however, due to several specific issues, the image degradation is particularly critical, which makes it difficult to obtain precise astrometry. An automatic image reconstruction method was developed to improve the astrometric precision for space debris, based on mathematical morphology operators. Variable structuring elements along multiple directions are adopted for image transformation, and then all the resultant images are stacked to obtain a final result. To investigate its efficiency, trial observations were made with Global Positioning System satellites, and the astrometric accuracy improvement was obtained by comparison with the reference positions. The results of our experiments indicate that the influence of degradation in astrometric CCD images is reduced, and the position accuracy of both objects and field stars is distinctly improved. Our technique will contribute significantly to optical data reduction and high-precision astrometry for space debris.
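    The "transform along multiple directions, then stack" scheme can be sketched with a minimal example. Grayscale opening along a line suppresses structures narrower than the structuring element in that direction; stacking the directional results keeps structure seen along any direction. The details of the paper's operator are assumptions here; this shows only the standard morphological trick.

```python
# Minimal directional grayscale morphology: 1D erosion/dilation along
# rows and columns, then a pixel-wise maximum ("stacking") of the two
# directional openings.
def erode_1d(line, k):
    r = k // 2
    return [min(line[max(0, i - r):i + r + 1]) for i in range(len(line))]

def dilate_1d(line, k):
    r = k // 2
    return [max(line[max(0, i - r):i + r + 1]) for i in range(len(line))]

def open_1d(line, k):
    # Opening = erosion followed by dilation; removes bright features
    # narrower than k along this direction.
    return dilate_1d(erode_1d(line, k), k)

def open_rows(img, k):
    return [open_1d(row, k) for row in img]

def open_cols(img, k):
    t = [list(c) for c in zip(*img)]           # transpose
    return [list(c) for c in zip(*[open_1d(col, k) for col in t])]

def stack_max(a, b):
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

# A bright horizontal streak survives the row-wise opening but is
# removed by the column-wise one; the stack preserves it.
img = [[0, 0, 0],
       [5, 5, 5],
       [0, 0, 0]]
out = stack_max(open_rows(img, 3), open_cols(img, 3))
print(out)  # [[0, 0, 0], [5, 5, 5], [0, 0, 0]]
```

    In practice one would use optimized routines (e.g. grayscale morphology from an image-processing library) with line structuring elements at many angles rather than only along rows and columns.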

  11. A Robust Biomarker

    NASA Technical Reports Server (NTRS)

    Westall, F.; Steele, A.; Toporski, J.; Walsh, M. M.; Allen, C. C.; Guidry, S.; McKay, D. S.; Gibson, E. K.; Chafetz, H. S.

    2000-01-01

    containing fossil biofilm, including the 3.5-b.y.-old carbonaceous cherts from South Africa and Australia. As a result of the unique compositional, structural and "mineralisable" properties of bacterial polymers and biofilms, we conclude that bacterial polymers and biofilms constitute a robust and reliable biomarker for life on Earth and could be a potential biomarker for extraterrestrial life.

  12. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  13. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, to extend the IPJR paradigm to building in 3D structures at micron precision are also summarized.

  14. [Precision nutrition in the era of precision medicine].

    PubMed

    Chen, P Z; Wang, H

    2016-12-06

    Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.

  15. Design of high-precision ranging system for laser fuze

    NASA Astrophysics Data System (ADS)

    Chen, Shanshan; Zhang, He; Xu, Xiaobin

    2016-10-01

    To address the problem of high-precision ranging in the circumferential-scanning-probe laser proximity fuze, a new type of pulsed laser ranging system has been designed. The laser transmitting module, laser receiving module and ranging processing module are each described. The factors affecting the ranging accuracy are discussed, and methods of improving the ranging accuracy are studied. The high-precision ranging system adopts the high-performance general-purpose microprocessor C8051FXXX as its core, and the time-interval measurement chip TDC-GP21 is used to implement the system. A PCB was fabricated to carry out the experiment. The results of the experiment show that centimeter-level ranging accuracy has been achieved. This work can serve as a reference for the design of ranging systems for circumferential-scanning-probe laser proximity fuzes.
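    The pulsed time-of-flight principle behind such a system can be sketched as follows: the time-to-digital converter measures the transmit-to-echo interval t, and range follows as d = c * t / 2, since the round trip covers twice the distance. The timing-resolution figure below is illustrative, not a TDC-GP21 datasheet value.

```python
# Pulsed time-of-flight ranging sketch (illustrative numbers).
C = 299_792_458.0  # speed of light, m/s

def range_m(interval_s):
    """Range from a measured round-trip interval: d = c * t / 2."""
    return C * interval_s / 2.0

def range_resolution_cm(tdc_resolution_s):
    """Range quantization implied by the timer's time resolution."""
    return range_m(tdc_resolution_s) * 100.0

# A ~100 ps timing resolution already corresponds to ~1.5 cm in range,
# consistent with the centimeter-level accuracy reported.
print(round(range_resolution_cm(100e-12), 2))  # ~1.5 cm
```

    This makes clear why centimeter-level laser ranging hinges on picosecond-class time-interval measurement.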

  16. Novel closed-loop approaches for precise relative navigation of widely separated GPS receivers in LEO

    NASA Astrophysics Data System (ADS)

    Tancredi, U.; Renga, A.; Grassi, M.

    2014-01-01

    achieved in a 1-day-long dataset. Results also show that approaches exploiting ionospheric delay models are more robust and precise than approaches relying on ionospheric-delay removal techniques.

  17. Precision respiratory medicine and the microbiome.

    PubMed

    Rogers, Geraint B; Wesselingh, Steve

    2016-01-01

    A decade of rapid technological advances has provided an exciting opportunity to incorporate information relating to a range of potentially important disease determinants in the clinical decision-making process. Access to highly detailed data will enable respiratory medicine to evolve from one-size-fits-all models of care, which are associated with variable clinical effectiveness and high rates of side-effects, to precision approaches, where treatment is tailored to individual patients. The human microbiome has increasingly been recognised as playing an important part in determining disease course and response to treatment. Its inclusion in precision models of respiratory medicine, therefore, is essential. Analysis of the microbiome provides an opportunity to develop novel prognostic markers for airways disease, improve definition of clinical phenotypes, develop additional guidance to aid treatment selection, and increase the accuracy of indicators of treatment effect. In this Review we propose that collaboration between researchers and clinicians is needed if respiratory medicine is to replicate the successes of precision medicine seen in other clinical specialties.

  18. Personalized Proteomics: The Future of Precision Medicine.

    PubMed

    Duarte, Trevor T; Spencer, Charles T

    2016-01-01

    Medical diagnostics and treatment has advanced from a one size fits all science to treatment of the patient as a unique individual. Currently, this is limited solely to genetic analysis. However, epigenetic, transcriptional, proteomic, posttranslational modifications, metabolic, and environmental factors influence a patient's response to disease and treatment. As more analytical and diagnostic techniques are incorporated into medical practice, the personalized medicine initiative transitions to precision medicine giving a holistic view of the patient's condition. The high accuracy and sensitivity of mass spectrometric analysis of proteomes is well suited for the incorporation of proteomics into precision medicine. This review begins with an overview of the advance to precision medicine and the current state of the art in technology and instrumentation for mass spectrometry analysis. Thereafter, it focuses on the benefits and potential uses for personalized proteomic analysis in the diagnostic and treatment of individual patients. In conclusion, it calls for a synthesis between basic science and clinical researchers with practicing clinicians to design proteomic studies to generate meaningful and applicable translational medicine. As clinical proteomics is just beginning to come out of its infancy, this overview is provided for the new initiate.

  19. Personalized Proteomics: The Future of Precision Medicine

    PubMed Central

    Duarte, Trevor T.; Spencer, Charles T.

    2016-01-01

    Medical diagnostics and treatment has advanced from a one size fits all science to treatment of the patient as a unique individual. Currently, this is limited solely to genetic analysis. However, epigenetic, transcriptional, proteomic, posttranslational modifications, metabolic, and environmental factors influence a patient’s response to disease and treatment. As more analytical and diagnostic techniques are incorporated into medical practice, the personalized medicine initiative transitions to precision medicine giving a holistic view of the patient’s condition. The high accuracy and sensitivity of mass spectrometric analysis of proteomes is well suited for the incorporation of proteomics into precision medicine. This review begins with an overview of the advance to precision medicine and the current state of the art in technology and instrumentation for mass spectrometry analysis. Thereafter, it focuses on the benefits and potential uses for personalized proteomic analysis in the diagnostic and treatment of individual patients. In conclusion, it calls for a synthesis between basic science and clinical researchers with practicing clinicians to design proteomic studies to generate meaningful and applicable translational medicine. As clinical proteomics is just beginning to come out of its infancy, this overview is provided for the new initiate. PMID:27882306

  20. Precision mass measurements of highly charged ions

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, A. A.; Bale, J. C.; Brunner, T.; Chaudhuri, A.; Chowdhury, U.; Ettenauer, S.; Frekers, D.; Gallant, A. T.; Grossheim, A.; Lennarz, A.; Mane, E.; MacDonald, T. D.; Schultz, B. E.; Simon, M. C.; Simon, V. V.; Dilling, J.

    2012-10-01

    The reputation of Penning trap mass spectrometry for accuracy and precision was established with singly charged ions (SCI); however, the achievable precision and resolving power can be extended by using highly charged ions (HCI). The TITAN facility has demonstrated these enhancements for long-lived (T1/2>=50 ms) isobars and low-lying isomers, including ^71Ge^21+, ^74Rb^8+, ^78Rb^8+, and ^98Rb^15+. The Q-value of ^71Ge enters into the neutrino cross section, and the use of HCI reduced the resolving power required to distinguish the isobars from 3 x 10^5 to 20. The precision achieved in the measurement of ^74Rb^8+, a superallowed β-emitter and candidate to test the CVC hypothesis, rivaled earlier measurements with SCI in a fraction of the time. The 111.19(22) keV isomeric state in ^78Rb was resolved from the ground state. Mass measurements of neutron-rich Rb and Sr isotopes near A = 100 aid in determining the r-process pathway. Advanced ion manipulation techniques and recent results will be presented.
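    The gain from highly charged ions can be sketched from the cyclotron relation ν_c = qB / (2πm): the frequency, and with it the achievable relative precision for a fixed observation time, grows linearly with the charge state. The field strength below is illustrative, not necessarily the TITAN value.

```python
# Why HCI help in Penning-trap mass spectrometry: nu_c = q*B/(2*pi*m)
# scales linearly with the charge state q, so a 21+ ion oscillates 21
# times faster than the singly charged ion of the same mass.
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU_KG = 1.66053906660e-27   # atomic mass unit, kg

def cyclotron_freq_hz(charge_state, b_tesla, mass_amu):
    q = charge_state * E_CHARGE
    m = mass_amu * AMU_KG
    return q * b_tesla / (2.0 * math.pi * m)

# 71Ge as a 21+ ion vs. singly charged, in a hypothetical 3.7 T field:
f_sci = cyclotron_freq_hz(1, 3.7, 71.0)
f_hci = cyclotron_freq_hz(21, 3.7, 71.0)
print(round(f_hci / f_sci))  # 21
```

    The same linear scaling underlies the reduction in required resolving power quoted for the 71Ge measurement.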

  1. Noise Robust Speech Recognition Applied to Voice-Driven Wheelchair

    NASA Astrophysics Data System (ADS)

    Sasou, Akira; Kojima, Hiroaki

    2009-12-01

    Conventional voice-driven wheelchairs usually employ headset microphones that are capable of achieving sufficient recognition accuracy, even in the presence of surrounding noise. However, such interfaces require users to wear sensors such as a headset microphone, which can be an impediment, especially for the hand-disabled. Conversely, it is well known that speech recognition accuracy drastically degrades when the microphone is placed far from the user. In this paper, we develop a noise-robust speech recognition system for a voice-driven wheelchair. This system can achieve almost the same recognition accuracy as a headset microphone without requiring the user to wear any sensors. We verified the effectiveness of our system in experiments in different environments and confirmed this level of accuracy.

  2. Centimeter-Level Robust Gnss-Aided Inertial Post-Processing for Mobile Mapping Without Local Reference Stations

    NASA Astrophysics Data System (ADS)

    Hutton, J. J.; Gopaul, N.; Zhang, X.; Wang, J.; Menon, V.; Rieck, D.; Kipka, A.; Pastor, F.

    2016-06-01

    For almost two decades, mobile mapping systems have performed their georeferencing using Global Navigation Satellite Systems (GNSS) to measure position and inertial sensors to measure orientation. In order to achieve cm-level position accuracy, a technique referred to as post-processed carrier-phase differential GNSS (DGNSS) is used. For this technique to be effective, the maximum distance to a single Reference Station should be no more than 20 km, and when using a network of Reference Stations the distance to the nearest station should be no more than about 70 km. This need to set up local Reference Stations limits productivity and increases costs, especially when mapping large areas or long linear features such as roads or pipelines. An alternative technique to DGNSS for high-accuracy positioning from GNSS is the so-called Precise Point Positioning (PPP) method. In this case, instead of differencing the rover observables with the Reference Station observables to cancel out common errors, an advanced model for every aspect of the GNSS error chain is developed and parameterized to within an accuracy of a few cm. The Trimble Centerpoint RTX positioning solution combines the methodology of PPP with advanced ambiguity resolution technology to produce cm-level accuracies without the need for local reference stations. It achieves this through a global deployment of highly redundant monitoring stations that are connected through the internet and are used to determine the precise satellite data with maximum accuracy, robustness, continuity and reliability, along with advanced algorithms and receiver and antenna calibrations. This paper presents a new post-processed realization of the Trimble Centerpoint RTX technology integrated into the Applanix POSPac MMS GNSS-Aided Inertial software for mobile mapping. 
Real-world results from over 100 airborne flights evaluated against a DGNSS network reference are presented which show that the post-processed Centerpoint RTX solution agrees with

  3. Precision coordinated control of multi-axis gantry stages.

    PubMed

    Giam, T S; Tan, K K; Huang, S

    2007-06-01

    High-precision motion control of gantry stages has found numerous applications in the manufacturing industries, where precise positioning is crucial. This paper presents a survey of existing control schemes as well as the development of enhanced schemes for the coordinated motion control of moving gantry stages. In particular, a robust control scheme is proposed which uses a feedback controller with a sliding mode to correct for the tracking error and to coordinate multiple axes to move in tandem. Simulation and experimental results illustrate and compare the performance of the control schemes presented in the paper.

  4. HIFI-C: a robust and fast method for determining NMR couplings from adaptive 3D to 2D projections.

    PubMed

    Cornilescu, Gabriel; Bahrami, Arash; Tonelli, Marco; Markley, John L; Eghbalnia, Hamid R

    2007-08-01

    We describe a novel method for the robust, rapid, and reliable determination of J couplings in multi-dimensional NMR coupling data, including small couplings from larger proteins. The method, "High-resolution Iterative Frequency Identification of Couplings" (HIFI-C) is an extension of the adaptive and intelligent data collection approach introduced earlier in HIFI-NMR. HIFI-C collects one or more optimally tilted two-dimensional (2D) planes of a 3D experiment, identifies peaks, and determines couplings with high resolution and precision. The HIFI-C approach, demonstrated here for the 3D quantitative J method, offers vital features that advance the goal of rapid and robust collection of NMR coupling data. (1) Tilted plane residual dipolar couplings (RDC) data are collected adaptively in order to offer an intelligent trade off between data collection time and accuracy. (2) Data from independent planes can provide a statistical measure of reliability for each measured coupling. (3) Fast data collection enables measurements in cases where sample stability is a limiting factor (for example in the presence of an orienting medium required for residual dipolar coupling measurements). (4) For samples that are stable, or in experiments involving relatively stronger couplings, robust data collection enables more reliable determinations of couplings in shorter time, particularly for larger biomolecules. As a proof of principle, we have applied the HIFI-C approach to the 3D quantitative J experiment to determine N-C' RDC values for three proteins ranging from 56 to 159 residues (including a homodimer with 111 residues in each subunit). A number of factors influence the robustness and speed of data collection. These factors include the size of the protein, the experimental set up, and the coupling being measured, among others. 
To exhibit a lower bound on robustness and the potential for time saving, the measurement of dipolar couplings for the N-C' vector represents a realistic

  5. Accuracy Evaluation of Electron-Probe Microanalysis as Applied to Semiconductors and Silicates

    NASA Technical Reports Server (NTRS)

    Carpenter, Paul; Armstrong, John

    2003-01-01

    An evaluation of precision and accuracy will be presented for representative semiconductor and silicate compositions. The accuracy of electron-probe analysis depends on high-precision measurements and instrumental calibration, as well as on correction algorithms and fundamental-parameter data sets. A critical assessment of correction algorithms and mass absorption coefficient data sets can be made using the alpha-factor technique. Alpha-factor analysis can also be used to identify systematic errors both in these data sets and in the microprobe standards used for calibration.

  6. Apparatus Makes Precisely Saturated Solutions

    NASA Technical Reports Server (NTRS)

    Pusey, Marc L.

    1989-01-01

    A simple laboratory apparatus establishes equilibrium conditions of temperature and concentration in solutions for use in precise measurements of saturation conditions. With this equipment, a typical measurement of the saturation concentration of a protein in solution can be established and completed within about 24 hours. A precisely saturated solution is made by passing solvent or solution slowly along a column packed with solute held at a precisely controlled temperature. If necessary, the flow is stopped for an experimentally determined interval to allow equilibrium to be established in the column.

  7. Test Expectancy Affects Metacomprehension Accuracy

    ERIC Educational Resources Information Center

    Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2011-01-01

    Background: Theory suggests that the accuracy of metacognitive monitoring is affected by the cues used to judge learning. Researchers have improved monitoring accuracy by directing attention to more appropriate cues; however, this is the first study to more directly point students to more appropriate cues using instructions regarding tests and…

  8. Robust and Accurate Seismic(acoustic) Ray Tracer

    NASA Astrophysics Data System (ADS)

    Debski, W.; Ando, M.

    Recent development of high-resolution seismic tomography, as well as the need for high-precision seismic (acoustic) source locations, calls for robust and very precise numerical methods for estimating seismic (acoustic) travel times and ray paths. Here we present a method based on a parametrisation of the ray path by a series of Chebyshev polynomials. This pseudo-spectral method, combined with an accurate Gauss-Lobatto integration procedure, allows one to reach a very high relative travel-time accuracy of δt/t ≈ 10^-7. At the same time, the use of a Genetic Algorithm based optimizer (Evolutionary Algorithm) assures an extreme robustness which allows the method to be used in complicated 3D geological structures like multi-fault areas, mines, or real engineering applications, constructions, etc.
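
The pseudo-spectral idea can be sketched as follows: the ray path is a short Chebyshev series in a parameter t ∈ [-1, 1], and the travel time is the line integral of slowness along that path. This is a minimal illustration, not the authors' implementation: a composite chord quadrature stands in for Gauss-Lobatto integration, the Genetic Algorithm search over coefficients is omitted, and all names are hypothetical.

```python
import math

def chebyshev(n, t):
    # T_n(t) via the recurrence T_0 = 1, T_1 = t, T_{k+1} = 2 t T_k - T_{k-1}
    t0, t1 = 1.0, t
    if n == 0:
        return t0
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * t * t1 - t0
    return t1

def path_point(coeffs_x, coeffs_z, t):
    # Ray path (x(t), z(t)) as Chebyshev series with the given coefficients.
    x = sum(c * chebyshev(k, t) for k, c in enumerate(coeffs_x))
    z = sum(c * chebyshev(k, t) for k, c in enumerate(coeffs_z))
    return x, z

def travel_time(coeffs_x, coeffs_z, velocity, n=400):
    # Travel time = integral of ds / v(x, z) along the parametrized path,
    # approximated by summing chord segments (the paper uses Gauss-Lobatto).
    total = 0.0
    prev = path_point(coeffs_x, coeffs_z, -1.0)
    for i in range(1, n + 1):
        t = -1.0 + 2.0 * i / n
        cur = path_point(coeffs_x, coeffs_z, t)
        seg = math.hypot(cur[0] - prev[0], cur[1] - prev[1])
        mid = ((prev[0] + cur[0]) / 2.0, (prev[1] + cur[1]) / 2.0)
        total += seg / velocity(*mid)
        prev = cur
    return total
```

In the full method, an evolutionary optimizer would perturb the Chebyshev coefficients to minimize this travel time (Fermat's principle); here the path is fixed for illustration.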

  9. Robust Registration of Dynamic Facial Sequences.

    PubMed

    Sariyanidi, Evangelos; Gunes, Hatice; Cavallaro, Andrea

    2017-04-01

    Accurate face registration is a key step for several image analysis applications. However, existing registration methods are prone to temporal drift errors or jitter among consecutive frames. In this paper, we propose an iterative rigid registration framework that estimates the misalignment with trained regressors. The input of the regressors is a robust motion representation that encodes the motion between a misaligned frame and the reference frame(s), and enables reliable performance under non-uniform illumination variations. Drift errors are reduced when the motion representation is computed from multiple reference frames. Furthermore, we use the L2 norm of the representation as a cue for performing coarse-to-fine registration efficiently. Importantly, the framework can identify registration failures and correct them. Experiments show that the proposed approach achieves significantly higher registration accuracy than the state-of-the-art techniques in challenging sequences.

  10. Robustness of airline route networks

    NASA Astrophysics Data System (ADS)

    Lordan, Oriol; Sallan, Jose M.; Escorihuela, Nuria; Gonzalez-Prieto, David

    2016-03-01

    Airlines shape their route networks by defining routes through supply and demand considerations, paying little attention to network performance indicators such as network robustness. However, the collapse of an airline network can produce high financial costs for the airline and for its entire geographical area of influence. The aim of this study is to analyze the topology and robustness of the route networks of airlines following the Low Cost Carrier (LCC) and Full Service Carrier (FSC) business models. Results show that FSC hubs are more central than LCC bases in their route networks. As a result, LCC route networks are more robust than FSC networks.
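
Robustness comparisons of this kind are commonly made by removing the most connected airports first and tracking the size of the largest surviving connected component. A minimal stdlib sketch under that assumption (toy data and hypothetical function names; this is not the paper's exact metric):

```python
from collections import deque

def largest_component(adj, removed):
    # Size of the largest connected component of the graph `adj`
    # (dict: node -> neighbor list) after deleting the nodes in `removed`.
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def robustness_profile(adj, k):
    # Remove the k highest-degree airports (hubs first) and report the
    # largest connected component remaining after each removal.
    order = sorted(adj, key=lambda u: len(adj[u]), reverse=True)
    removed, sizes = set(), []
    for u in order[:k]:
        removed.add(u)
        sizes.append(largest_component(adj, removed))
    return sizes
```

On a hub-and-spoke (FSC-like) toy network, deleting the hub shatters the graph, while a more meshed (LCC-like) network degrades gracefully, which is the qualitative finding reported above.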

  11. Simple Robust Fixed Lag Smoothing

    DTIC Science & Technology

    1988-12-02

    SIMPLE ROBUST FIXED LAG SMOOTHING, by N. D. Le and R. D. Martin. Technical Report No. 149, December 1988, Department of Statistics, GN-22. Simple Robust Fixed Lag Smoothing With Application To Radar Glint Noise. … smoothers. The emphasis here is on fixed-lag smoothing, as opposed to the use of existing robust fixed-interval smoothers (e.g., as in Martin, 1979).

  12. Precise Adaptation in Bacterial Chemotaxis through ``Assistance Neighborhoods''

    NASA Astrophysics Data System (ADS)

    Endres, Robert

    2007-03-01

    The chemotaxis network in Escherichia coli is remarkable for its sensitivity to small relative changes in the concentrations of multiple chemical signals over a broad range of ambient concentrations. Key to this sensitivity is an adaptation system that relies on methylation and demethylation (or deamidation) of specific modification sites of the chemoreceptors by the enzymes CheR and CheB, respectively. It was recently discovered that these enzymes can access five to seven receptors when tethered to a particular receptor. We show that these ``assistance neighborhoods'' (ANs) are necessary for precise and robust adaptation in a model for signaling by clusters of chemoreceptors: (1) ANs suppress fluctuations of the receptor methylation level; (2) ANs lead to robustness with respect to biochemical parameters. We predict two limits of precise adaptation at large attractant concentrations: either receptors reach full methylation and turn off, or receptors become saturated and cease to respond to attractant but retain their adapted activity.

  13. Centroid precision and orientation precision of planar localization microscopy.

    PubMed

    McGray, C; Copeland, C R; Stavis, S M; Geist, J

    2016-09-01

    The concept of localization precision, which is essential to localization microscopy, is formally extended from optical point sources to microscopic rigid bodies. Measurement functions are presented to calculate the planar pose and motion of microscopic rigid bodies from localization microscopy data. Physical lower bounds on the associated uncertainties - termed centroid precision and orientation precision - are derived analytically in terms of the characteristics of the optical measurement system and validated numerically by Monte Carlo simulations. The practical utility of these expressions is demonstrated experimentally by an analysis of the motion of a microelectromechanical goniometer indicated by a sparse constellation of fluorescent nanoparticles. Centroid precision and orientation precision, as developed here, are useful concepts due to the generality of the expressions and the widespread interest in localization microscopy for super-resolution imaging and particle tracking.
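
For the special case of independent, isotropic localization errors, the centroid of a constellation of N fiducial markers, each localized with precision σ, has precision σ/√N; this simplified version of the lower bound can be checked by Monte Carlo simulation, as in the paper's numerical validation. A sketch (hypothetical parameter values; the full expressions also cover orientation precision and the optical system's characteristics):

```python
import math
import random

def centroid_precision_mc(n_markers, sigma, trials=20000, seed=1):
    # Monte Carlo estimate of centroid precision: localize each marker
    # with Gaussian error of std-dev `sigma`, form the constellation
    # centroid, and measure the spread of the centroid over many trials.
    rng = random.Random(seed)
    centroids = []
    for _ in range(trials):
        cx = sum(rng.gauss(0.0, sigma) for _ in range(n_markers)) / n_markers
        centroids.append(cx)
    mean = sum(centroids) / trials
    return math.sqrt(sum((c - mean) ** 2 for c in centroids) / (trials - 1))
```

With nine markers at σ = 3 nm per localization, the simulated centroid precision converges to σ/√9 = 1 nm, matching the analytic bound for this idealized case.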

  14. Speed and Accuracy in Shallow and Deep Stochastic Parsing

    DTIC Science & Technology

    2004-01-01

    Abstract: This paper reports some experiments that compare the accuracy and performance of two stochastic parsing systems. The currently popular … deep linguistic grammars are too difficult to produce, lack coverage and robustness, and also have poor run-time performance. The Collins parser is… (Performing organization: Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, CA 94304.)

  15. What do we mean by accuracy in geomagnetic measurements?

    USGS Publications Warehouse

    Green, A.W.

    1990-01-01

    High accuracy is what distinguishes measurements made at the world's magnetic observatories from other types of geomagnetic measurements. High accuracy in determining the absolute values of the components of the Earth's magnetic field is essential to studying geomagnetic secular variation and processes at the core mantle boundary, as well as some magnetospheric processes. In some applications of geomagnetic data, precision (or resolution) of measurements may also be important. In addition to accuracy and resolution in the amplitude domain, it is necessary to consider these same quantities in the frequency and space domains. New developments in geomagnetic instruments and communications make real-time, high accuracy, global geomagnetic observatory data sets a real possibility. There is a growing realization in the scientific community of the unique relevance of geomagnetic observatory data to the principal contemporary problems in solid Earth and space physics. Together, these factors provide the promise of a 'renaissance' of the world's geomagnetic observatory system. ?? 1990.

  16. Cardiac output method comparison studies: the relation of the precision of agreement and the precision of method.

    PubMed

    Hapfelmeier, Alexander; Cecconi, Maurizio; Saugel, Bernd

    2016-04-01

    Cardiac output (CO) plays a crucial role in the hemodynamic management of critically ill patients treated in the intensive care unit and of surgical patients undergoing major surgery. In the field of cardiovascular dynamics, innovative techniques for CO determination are increasingly available. Therefore, the number of studies comparing these techniques with a reference, such as pulmonary artery thermodilution, is rapidly growing. There are mainly two outcomes of such method comparison studies: (1) the accuracy of agreement and (2) the precision of agreement. The precision of agreement depends on the precision of each method, i.e., the precision that the studied and the reference technique are able to achieve. We call this the "precision of method". A decomposition of variance shows that method agreement does not only depend on the precision of method but also on another important source of variability, i.e., the method's general variability about the true values. Ignoring this fact leads to false conclusions about the precision of method of the studied technique. In CO studies, serial measurements are frequently confused with repeated measurements. But since the actual CO of a subject changes from assessment to assessment, there is no real repetition of a measurement. This situation equals a scenario in which single measurements are given for multiple true values per subject. In such a case it is not possible to assess the precision of method.
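
The link between the precision of agreement and the precision of each method can be illustrated with a toy simulation: if both methods observe the same (varying) true CO with independent zero-mean errors, then Var(test − ref) = σ_ref² + σ_test², so the standard deviation of the paired differences is √(σ_ref² + σ_test²). A sketch with hypothetical noise levels (not the paper's data or notation):

```python
import math
import random

def agreement_sd(sigma_ref, sigma_test, trials=50000, seed=7):
    # Simulate paired CO readings: both methods see the same true value,
    # each corrupted by its own independent Gaussian measurement error.
    # Returns the SD of the differences ("precision of agreement").
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        true_co = rng.uniform(3.0, 8.0)  # L/min, hypothetical range
        ref = true_co + rng.gauss(0.0, sigma_ref)
        test = true_co + rng.gauss(0.0, sigma_test)
        diffs.append(test - ref)
    mean = sum(diffs) / trials
    return math.sqrt(sum((d - mean) ** 2 for d in diffs) / (trials - 1))
```

Note that the simulation holds the true CO fixed within each pair; as the abstract stresses, serial measurements at different true values cannot be treated as repetitions when estimating the precision of method.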

  17. Robust stochastic optimization for reservoir operation

    NASA Astrophysics Data System (ADS)

    Pan, Limeng; Housh, Mashor; Liu, Pan; Cai, Ximing; Chen, Xin

    2015-01-01

    Optimal reservoir operation under uncertainty is a challenging engineering problem. Application of classic stochastic optimization methods to large-scale problems is limited due to computational difficulty. Moreover, classic stochastic methods assume that the estimated distribution function or the sample inflow data accurately represents the true probability distribution, which may be invalid and the performance of the algorithms may be undermined. In this study, we introduce a robust optimization (RO) approach, Iterative Linear Decision Rule (ILDR), so as to provide a tractable approximation for a multiperiod hydropower generation problem. The proposed approach extends the existing LDR method by accommodating nonlinear objective functions. It also provides users with the flexibility of choosing the accuracy of ILDR approximations by assigning a desired number of piecewise linear segments to each uncertainty. The performance of the ILDR is compared with benchmark policies including the sampling stochastic dynamic programming (SSDP) policy derived from historical data. The ILDR solves both the single and multireservoir systems efficiently. The single reservoir case study results show that the RO method is as good as SSDP when implemented on the original historical inflows and it outperforms SSDP policy when tested on generated inflows with the same mean and covariance matrix as those in history. For the multireservoir case study, which considers water supply in addition to power generation, numerical results show that the proposed approach performs as well as in the single reservoir case study in terms of optimal value and distributional robustness.

  18. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 µF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  19. Robust Optimization of Biological Protocols

    PubMed Central

    Flaherty, Patrick; Davis, Ronald W.

    2015-01-01

    When conducting high-throughput biological experiments, it is often necessary to develop a protocol that is both inexpensive and robust. Standard approaches are either not cost-effective or arrive at an optimized protocol that is sensitive to experimental variations. We show here a novel approach that directly minimizes the cost of the protocol while ensuring the protocol is robust to experimental variation. Our approach uses a risk-averse conditional value-at-risk criterion in a robust parameter design framework. We demonstrate this approach on a polymerase chain reaction protocol and show that our improved protocol is less expensive than the standard protocol and more robust than a protocol optimized without consideration of experimental variation. PMID:26417115
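
The risk-averse conditional value-at-risk (CVaR) criterion mentioned above has a simple form: the mean of the worst (1 − α) fraction of outcomes. A minimal sketch of how it changes protocol selection (hypothetical cost samples and function names; the paper's robust parameter design framework is considerably more involved):

```python
def cvar(costs, alpha=0.9):
    # Conditional value-at-risk: mean of the worst (1 - alpha) fraction
    # of observed costs; a risk-averse summary of protocol performance.
    ranked = sorted(costs, reverse=True)
    k = max(1, int(round(len(ranked) * (1.0 - alpha))))
    return sum(ranked[:k]) / k

def robust_choice(protocols, alpha=0.9):
    # Pick the protocol whose cost is lowest in the worst-case tail,
    # rather than lowest on average (a hypothetical selection rule).
    return min(protocols, key=lambda name: cvar(protocols[name], alpha))
```

A protocol that is cheap on average but occasionally fails badly has a low mean cost yet a high CVaR; the risk-averse criterion prefers the steadier alternative, which is the kind of robustness-versus-cost tradeoff the abstract describes.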

  20. Robust Portfolio Optimization Using Pseudodistances.

    PubMed

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This is due to the unbounded influence that outliers can have on the mean return and covariance estimators that are inputs to the optimization procedure. In this paper we present robust estimators of the mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can easily be used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both the in-sample and out-of-sample performance of the proposed robust portfolios, comparing them with some other portfolios known in the literature.

  1. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available, obtained using film-based photographic methods. Precise orbits are necessary to compute the dust flux in the Earth's vicinity, and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The ideal spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.

  2. High-precision thermal and electrical characterization of thermoelectric modules

    SciTech Connect

    Kolodner, Paul

    2014-05-15

    This paper describes an apparatus for performing high-precision electrical and thermal characterization of thermoelectric modules (TEMs). The apparatus is calibrated for operation between 20 °C and 80 °C and is normally used for measurements of heat currents in the range 0–10 W. Precision thermometry based on miniature thermistor probes enables an absolute temperature accuracy of better than 0.010 °C. The use of vacuum isolation, thermal guarding, and radiation shielding, augmented by a careful accounting of stray heat leaks and uncertainties, allows the heat current through the TEM under test to be determined with a precision of a few mW. The fractional precision of all measured parameters is approximately 0.1%.

  3. Precision analysis of passive BD aided pseudolites positioning system

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Zhao, Yan

    2007-11-01

    In recent years, BD (the BeiDou positioning system), an active satellite navigation system, has been widely applied in geodetic survey, precise engineering survey and the GNC (guidance, navigation and control) systems of weapons because of its reliability and availability. However, it has several problems concerning accuracy, anti-interference and active positioning. A passive BD-aided pseudolite positioning system is introduced in detail in this paper, and the configuration and operating principle of the system are presented. In analyzing positioning precision, one of the crucial aspects to be studied is how to arrange the pseudolites to obtain good GDOP; different pseudolite arrangements are therefore discussed in this paper. The simulation results show that the VDOP (vertical dilution of precision) of BD is improved by introducing the pseudolites. The experiments indicate the validity of the methods and the improvement in positioning precision in the BD-aided pseudolite system.

  4. A novel robust speed controller scheme for PMBLDC motor.

    PubMed

    Thirusakthimurugan, P; Dananjayan, P

    2007-10-01

    The design of speed and position controllers for permanent magnet brushless DC motor (PMBLDC) drives remains an open problem in the field of motor drives. Precise speed control of a PMBLDC motor is complex due to the nonlinear coupling between winding currents and rotor speed. In addition, the nonlinearity present in the developed torque due to magnetic saturation of the rotor further complicates this issue. This paper presents a novel control scheme for the conventional PMBLDC motor drive, which aims at improving robustness by completely decoupling the design and minimizing the mutual influence between the speed and current control loops. An interesting feature of this robust control scheme is its suitability for both static and dynamic operation. The effectiveness of the proposed robust speed control scheme is verified through simulations.

  5. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Tradeoff on Phenotype Robustness in Biological Networks Part II: Ecological Networks.

    PubMed

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    In ecological networks, network robustness should be large enough to confer intrinsic robustness for tolerating intrinsic parameter fluctuations, as well as environmental robustness for resisting environmental disturbances, so that the phenotype stability of ecological networks can be maintained, thus guaranteeing phenotype robustness. However, it is difficult to analyze the network robustness of ecological systems because they are complex nonlinear partial differential stochastic systems. This paper develops a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance sensitivity in ecological networks. We found that the phenotype robustness criterion for ecological networks is: if intrinsic robustness + environmental robustness ≦ network robustness, then phenotype robustness can be maintained in spite of intrinsic parameter fluctuations and environmental disturbances. These results for robust ecological networks are similar to those for robust gene regulatory networks and evolutionary networks, even though they operate on different spatial and temporal scales.

  6. More Questions on Precision Teaching.

    ERIC Educational Resources Information Center

    Raybould, E. C.; Solity, J. E.

    1988-01-01

    Precision teaching can accelerate basic skills progress of special needs children. Issues discussed include using probes as performance tests, charting daily progress, using the charted data to modify teaching methods, determining appropriate age levels, assessing the number of students to be precision taught, and carefully allocating time. (JDD)

  7. Robust controls with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1993-01-01

    This final report summarizes the recent results obtained by the principal investigator and his coworkers on the robust stability and control of systems containing parametric uncertainty. The starting point is a generalization of Kharitonov's theorem obtained in 1989; together with its extension to the multilinear case, the singling out of extremal stability subsets, and other ramifications, it now constitutes an extensive and coherent theory of robust parametric stability, which is summarized in the results contained here.

  8. Do Fixation Cues Ensure Fixation Accuracy in Split-Fovea Studies of Word Recognition?

    ERIC Educational Resources Information Center

    Jordan, Timothy R.; Paterson, Kevin B.; Kurtev, Stoyan; Xu, Mengyun

    2009-01-01

    Many studies have claimed that hemispheric processing is split precisely at the foveal midline and so place great emphasis on the precise location at which words are fixated. These claims are based on experiments in which a variety of fixation procedures were used to ensure fixation accuracy but the effectiveness of these procedures is unclear. We…

  9. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  10. The precise temporal calibration of dinosaur origins

    NASA Astrophysics Data System (ADS)

    Marsicano, Claudia A.; Irmis, Randall B.; Mancuso, Adriana C.; Mundil, Roland; Chemale, Farid

    2016-01-01

    Dinosaurs have been major components of ecosystems for over 200 million years. Although different macroevolutionary scenarios exist to explain the Triassic origin and subsequent rise to dominance of dinosaurs and their closest relatives (dinosauromorphs), all lack critical support from a precise biostratigraphically independent temporal framework. The absence of robust geochronologic age control for comparing alternative scenarios makes it impossible to determine if observed faunal differences vary across time, space, or a combination of both. To better constrain the origin of dinosaurs, we produced radioisotopic ages for the Argentinian Chañares Formation, which preserves a quintessential assemblage of dinosaurian precursors (early dinosauromorphs) just before the first dinosaurs. Our new high-precision chemical abrasion thermal ionization mass spectrometry (CA-TIMS) U-Pb zircon ages reveal that the assemblage is early Carnian (early Late Triassic), 5- to 10-Ma younger than previously thought. Combined with other geochronologic data from the same basin, we constrain the rate of dinosaur origins, demonstrating their relatively rapid origin in a less than 5-Ma interval, thus halving the temporal gap between assemblages containing only dinosaur precursors and those with early dinosaurs. After their origin, dinosaurs only gradually dominated mid- to high-latitude terrestrial ecosystems millions of years later, closer to the Triassic-Jurassic boundary.

  11. The precise temporal calibration of dinosaur origins.

    PubMed

    Marsicano, Claudia A; Irmis, Randall B; Mancuso, Adriana C; Mundil, Roland; Chemale, Farid

    2016-01-19

    Dinosaurs have been major components of ecosystems for over 200 million years. Although different macroevolutionary scenarios exist to explain the Triassic origin and subsequent rise to dominance of dinosaurs and their closest relatives (dinosauromorphs), all lack critical support from a precise biostratigraphically independent temporal framework. The absence of robust geochronologic age control for comparing alternative scenarios makes it impossible to determine if observed faunal differences vary across time, space, or a combination of both. To better constrain the origin of dinosaurs, we produced radioisotopic ages for the Argentinian Chañares Formation, which preserves a quintessential assemblage of dinosaurian precursors (early dinosauromorphs) just before the first dinosaurs. Our new high-precision chemical abrasion thermal ionization mass spectrometry (CA-TIMS) U-Pb zircon ages reveal that the assemblage is early Carnian (early Late Triassic), 5- to 10-Ma younger than previously thought. Combined with other geochronologic data from the same basin, we constrain the rate of dinosaur origins, demonstrating their relatively rapid origin in a less than 5-Ma interval, thus halving the temporal gap between assemblages containing only dinosaur precursors and those with early dinosaurs. After their origin, dinosaurs only gradually dominated mid- to high-latitude terrestrial ecosystems millions of years later, closer to the Triassic-Jurassic boundary.

  12. The precise temporal calibration of dinosaur origins

    PubMed Central

    Marsicano, Claudia A.; Irmis, Randall B.; Mancuso, Adriana C.; Mundil, Roland; Chemale, Farid

    2016-01-01

    Dinosaurs have been major components of ecosystems for over 200 million years. Although different macroevolutionary scenarios exist to explain the Triassic origin and subsequent rise to dominance of dinosaurs and their closest relatives (dinosauromorphs), all lack critical support from a precise biostratigraphically independent temporal framework. The absence of robust geochronologic age control for comparing alternative scenarios makes it impossible to determine if observed faunal differences vary across time, space, or a combination of both. To better constrain the origin of dinosaurs, we produced radioisotopic ages for the Argentinian Chañares Formation, which preserves a quintessential assemblage of dinosaurian precursors (early dinosauromorphs) just before the first dinosaurs. Our new high-precision chemical abrasion thermal ionization mass spectrometry (CA-TIMS) U–Pb zircon ages reveal that the assemblage is early Carnian (early Late Triassic), 5- to 10-Ma younger than previously thought. Combined with other geochronologic data from the same basin, we constrain the rate of dinosaur origins, demonstrating their relatively rapid origin in a less than 5-Ma interval, thus halving the temporal gap between assemblages containing only dinosaur precursors and those with early dinosaurs. After their origin, dinosaurs only gradually dominated mid- to high-latitude terrestrial ecosystems millions of years later, closer to the Triassic–Jurassic boundary. PMID:26644579

  13. Robustness Elasticity in Complex Networks

    PubMed Central

    Matisziw, Timothy C.; Grubesic, Tony H.; Guo, Junyu

    2012-01-01

    Network robustness refers to a network’s resilience to stress or damage. Given that most networks are inherently dynamic, with changing topology, loads, and operational states, their robustness is also likely subject to change. However, in most analyses of network structure, it is assumed that interaction among nodes has no effect on robustness. To investigate the hypothesis that network robustness is not sensitive or elastic to the level of interaction (or flow) among network nodes, this paper explores the impacts of network disruption, namely arc deletion, over a temporal sequence of observed nodal interactions for a large Internet backbone system. In particular, a mathematical programming approach is used to identify exact bounds on robustness to arc deletion for each epoch of nodal interaction. Elasticity of the identified bounds relative to the magnitude of arc deletion is assessed. Results indicate that system robustness can be highly elastic to spatial and temporal variations in nodal interactions within complex systems. Further, the presence of this elasticity provides evidence that a failure to account for nodal interaction can confound characterizations of complex networked systems. PMID:22808060
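
    The paper's exact bounds come from a mathematical program over observed nodal interactions; the sketch below is a much cruder adversarial upper bound that ignores rerouting, on a hypothetical toy network (arc names and flow values are invented for illustration):

```python
from itertools import combinations

# Toy directed network: arcs with observed interaction (flow) magnitudes.
# Hypothetical data; the paper analyzes a large Internet backbone system.
arcs = {
    ("A", "B"): 40, ("B", "C"): 25, ("A", "C"): 10,
    ("C", "D"): 30, ("B", "D"): 15,
}

def worst_case_flow_lost(arcs, k):
    """Crude upper bound on interaction disrupted by deleting k arcs:
    an adversary removes the k arcs carrying the most flow.
    (Ignores rerouting, unlike the paper's exact formulation.)"""
    return max(sum(arcs[a] for a in combo)
               for combo in combinations(arcs, k))

total = sum(arcs.values())
for k in (1, 2):
    lost = worst_case_flow_lost(arcs, k)
    print(f"k={k}: worst-case flow lost {lost} of {total}")
```

    Re-running such a bound for each epoch of observed flows, as the paper does with its exact program, is what exposes the elasticity of robustness to changing nodal interactions.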

  14. Robust Long-Range Optical Tracking for Tunneling Measurement Tasks

    NASA Astrophysics Data System (ADS)

    Mossel, Annette; Gerstweiler, Georg; Vonach, Emanuel; Chmelina, Klaus; Kaufmann, Hannes

    2013-04-01

    distances between cameras (baseline) with constraints to a tunnel application scenario, (2) to evaluate robustness of unique target identification and (3) to measure accuracy of estimated 3D position. Our results prove the system's capability to continuously track static and moving targets within the whole tracking volume as soon as the target becomes visible to the stereo rig; preliminary sighting of the target can thus be omitted. Interferences are filtered and partly occluded targets can be recovered. Up to a distance of 50m with a baseline of 12m, our system provides very high precision of the 3D position estimates, with a deviation of 1cm or less along all three spatial axes. At a distance of 70m, our system still provides very high accuracy along the width and height axes, with a deviation of only several millimeters, and up to 3cm along the depth axis. These promising results enable our system to act as a measurement and monitoring system in rough indoor environments. Furthermore, it can serve as a reliable wide-area user tracking system for future mixed reality applications, e.g. tunnel simulation, training of engineers, machine control, tunnel data interpretation and inspection.
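
    The quoted deviations are consistent with the first-order stereo depth-error model σ_Z ≈ Z²·σ_d / (f·B). A minimal sketch with hypothetical camera parameters (the focal length and disparity noise below are assumptions, not values from the paper):

```python
def depth_sigma(Z, baseline, focal_px, disp_sigma_px):
    """First-order stereo depth uncertainty:
    sigma_Z = Z^2 * sigma_d / (f * B),
    with Z and B in metres, f and sigma_d in pixels."""
    return Z**2 * disp_sigma_px / (focal_px * baseline)

# Hypothetical camera: 4000 px focal length, 0.25 px disparity noise;
# 12 m baseline as in the reported setup.
for Z in (50.0, 70.0):
    sigma = depth_sigma(Z, 12.0, 4000.0, 0.25)
    print(f"Z={Z:.0f} m: sigma_Z ~ {sigma * 100:.1f} cm")
```

    Under these assumed parameters the model predicts roughly 1.3 cm at 50 m and 2.6 cm at 70 m, in line with the quadratic growth of depth error that the abstract reports.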

  15. Precision magnetic field mapping for CERN experiment NA62

    NASA Astrophysics Data System (ADS)

    Fry, John R.; Ruggiero, Giuseppe; Bergsma, Felix

    2016-12-01

    In the CERN experiment NA62, low-mass straw-tube tracking-chambers have been designed to operate in vacuum and, in conjunction with precisely mapped magnetic fields, enable the determination of the trajectories of the charged decay products of a 75 GeV/c K+ with high accuracy. This is particularly important for the crucial measurement of the branching fraction for the decay K+ → π+νν̄, which has the potential to reveal physics beyond the Standard Model. The charged particles passing through the magnetic field of a dipole magnet receive a transverse-momentum kick, ΔP_T = 270 MeV/c, which the physics requires to be determined to better than one part in a thousand. This puts stringent constraints on the required accuracy and precision of the magnetic field components at all points through which charged particles pass. Before reaching the dipole magnet the particles travel through an evacuated steel tank of length 90 m, where residual magnetic fields of typical size 50 μT modify the trajectories of the charged particles and require measurement with a precision of better than 10 μT. In this paper we describe in detail the different approaches to the measurement and analysis of the magnetic field for the two regions, the corrections to the raw data necessary to produce the final field map, and the physics validation procedures showing that the required accuracy and precision of the field maps have been achieved.
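
    As a back-of-envelope check on the quoted numbers, the transverse-momentum kick of a unit-charge particle crossing a dipole is ΔP_T [MeV/c] ≈ 299.79 · ∫B dl [T·m]. The sketch below inverts this for the 270 MeV/c kick; the resulting field integral is an illustrative estimate, not an NA62 magnet specification:

```python
def pt_kick_mev(field_integral_Tm):
    """Transverse-momentum kick (MeV/c) for a unit-charge particle
    crossing a dipole with the given field integral (T*m)."""
    return 299.792458 * field_integral_Tm

# Field integral implied by the quoted 270 MeV/c kick (illustrative).
bl = 270.0 / 299.792458
print(f"implied field integral: {bl:.3f} T*m")

# Bend angle for a 75 GeV/c track: theta ~ dP_T / P.
theta_mrad = 270.0 / 75000.0 * 1e3
print(f"bend angle at 75 GeV/c: {theta_mrad:.2f} mrad")
```

    The small-angle kick (a few mrad) is why a part-per-thousand knowledge of the field integral translates directly into the required momentum-scale accuracy.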

  16. Sources, Sinks, and Model Accuracy

    EPA Science Inventory

    Spatial demographic models are a necessary tool for understanding how to manage landscapes sustainably for animal populations. These models, therefore, must offer precise and testable predictions about animal population dynamics and how animal demographic parameters respond to ...

  17. How robust is a robust policy? A comparative analysis of alternative robustness metrics for supporting robust decision analysis.

    NASA Astrophysics Data System (ADS)

    Kwakkel, Jan; Haasnoot, Marjolijn

    2015-04-01

    In response to climate and socio-economic change, there is an increasing call in various policy domains for robust plans or policies, that is, plans or policies that perform well across a very large range of plausible futures. A wide range of alternative robustness metrics can be found in the literature, but the relative merit of these alternative conceptualizations of robustness has received less attention. Evidently, different robustness metrics can result in different plans or policies being adopted. This paper investigates the consequences of several robustness metrics for decision making, illustrated here by the design of a flood risk management plan. A fictitious case, inspired by a river reach in the Netherlands, is used. The performance of this system in terms of casualties, damages, and costs for flood and damage mitigation actions is explored over a time horizon of 100 years, accounting for uncertainties pertaining to climate change and land use change. A set of candidate policy options is specified up front, including dike raising, dike strengthening, creating more space for the river, flood-proof building, and evacuation options. The overarching aim is to design an effective flood risk mitigation strategy that is designed from the outset to be adapted over time in response to how the future actually unfolds. To this end, the plan is based on the dynamic adaptive policy pathway approach (Haasnoot, Kwakkel et al. 2013) being used in the Dutch Delta Program. The policy problem is formulated as a multi-objective robust optimization problem (Kwakkel, Haasnoot et al. 2014), which we solve using several alternative robustness metrics, including both satisficing and regret-based metrics. Satisficing robustness metrics focus on the performance of candidate plans across a large ensemble of plausible futures. Regret-based robustness metrics compare the
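
    The contrast between satisficing and regret-based metrics can be illustrated with a toy plan-by-future performance matrix; all plan names and cost values below are hypothetical, not from the study:

```python
# Toy performance matrix (rows: candidate plans, cols: plausible futures);
# lower cost is better. Values are illustrative only.
costs = {
    "raise_dikes":    [10, 12, 30, 11],
    "room_for_river": [14, 15, 16, 17],
    "floodproofing":  [ 9, 25, 22, 13],
}

def satisficing(costs, threshold):
    """Satisficing metric: fraction of futures in which a plan's
    cost stays at or under an acceptability threshold."""
    return {p: sum(c <= threshold for c in v) / len(v)
            for p, v in costs.items()}

def max_regret(costs):
    """Regret-based metric: a plan's worst-case regret, i.e. its cost
    minus the best cost achievable in that same future."""
    n = len(next(iter(costs.values())))
    best = [min(costs[p][j] for p in costs) for j in range(n)]
    return {p: max(v[j] - best[j] for j in range(n))
            for p, v in costs.items()}

print(satisficing(costs, 18))  # room_for_river meets the threshold everywhere
print(max_regret(costs))       # the ranking of the other plans flips
```

    Here raise_dikes looks second-best under satisficing yet has the worst maximum regret, which is exactly the kind of metric-dependent reversal the paper investigates.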

  18. UXO Precise Position Tracking Ranger

    DTIC Science & Technology

    2008-01-01

    magnetometer or a Geonics EM-61 electromagnetic metal detector. The initial focus was on acquiring high accuracy, fixed point navigation and large area... Is integrated with Geometrics G-858 magnetometer and Geonics EM-61 electromagnetic metal detector • Provides ~20 cm positioning accuracy (1 σ

  19. Robust electrocardiogram (ECG) beat classification using discrete wavelet transform.

    PubMed

    Minhas, Fayyaz-ul-Amir Afsar; Arif, Muhammad

    2008-05-01

    This paper presents a robust technique for the classification of six types of heartbeats through an electrocardiogram (ECG). Features extracted from the QRS complex of the ECG using a wavelet transform, along with the instantaneous RR-interval, are used for beat classification. The wavelet transform utilized for feature extraction can also be employed for QRS delineation, reducing overall system complexity because no separate feature extraction stage is required in a practical implementation of the system. Only 11 features are used for beat classification, with a classification accuracy of approximately 99.5% using a KNN classifier. Another main advantage of this method is its robustness to noise, which is illustrated through experimental results. Furthermore, principal component analysis (PCA) has been used for feature reduction, which reduces the number of features from 11 to 6 while retaining the high beat classification accuracy. Due to its reduced computational complexity (approximately 4 ms per beat with six features), simple classifier, and noise robustness (95% accuracy at a 10 dB signal-to-noise ratio), this method offers substantial advantages over previous techniques for implementation in a practical ECG analyzer.
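
    The PCA-plus-KNN stage of such a pipeline can be sketched in a few lines; the example below runs on synthetic Gaussian stand-ins for the 11 wavelet/RR features and two mock beat classes, not on real ECG data or the paper's actual features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for 11 beat features; two mock, well-separated classes.
X = np.vstack([rng.normal(0.0, 1.0, (100, 11)),
               rng.normal(2.0, 1.0, (100, 11))])
y = np.repeat([0, 1], 100)

def pca_reduce(X, k):
    """Project mean-centred data onto the top-k principal components (SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def knn_predict(Xtr, ytr, Xte, k=3):
    """Majority vote among the k nearest training beats (Euclidean)."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nearest = ytr[np.argsort(d, axis=1)[:, :k]]
    return (nearest.mean(axis=1) > 0.5).astype(int)

Z = pca_reduce(X, 6)                    # 11 features -> 6, as in the paper
idx = rng.permutation(len(y))
tr, te = idx[:150], idx[150:]
acc = (knn_predict(Z[tr], y[tr], Z[te]) == y[te]).mean()
print(f"accuracy: {acc:.2f}")
```

    With well-separated synthetic classes the toy accuracy is near-perfect; the paper's ~99.5% figure, of course, comes from its wavelet features on real beats.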

  20. Three-dimensional robust diving guidance for hypersonic vehicle

    NASA Astrophysics Data System (ADS)

    Zhu, Jianwen; Liu, Luhua; Tang, Guojian; Bao, Weimin

    2016-01-01

    A novel three-dimensional robust guidance law based on H∞ filtering and H∞ control is proposed to meet impact-accuracy and flight-direction constraints under process disturbances during the dive phase of a hypersonic vehicle. Complete three-dimensional coupled relative motion equations are established and converted into linear ones by feedback linearization to simplify the design of the guidance law. Based on the linearized equations, an H∞ filter is introduced to eliminate the measurement noise of the line-of-sight angles and to estimate the angular rates. An H∞ robust controller is then employed to design the guidance law, with the filtered information used to generate guidance commands that meet the guidance goal accurately and robustly. Simulation results for CAV-H indicate that the proposed three-dimensional equations describe the coupling characteristics more faithfully than traditional decoupled guidance, and that the proposed strategy can steer the vehicle to satisfy multiple constraints with high accuracy and robustness.