Science.gov

Sample records for accurate precise sensitive

  1. NCLscan: accurate identification of non-co-linear transcripts (fusion, trans-splicing and circular RNA) with a good balance between sensitivity and precision

    PubMed Central

    Chuang, Trees-Juen; Wu, Chan-Shuo; Chen, Chia-Ying; Hung, Li-Yuan; Chiang, Tai-Wei; Yang, Min-Yu

    2016-01-01

    Analysis of RNA-seq data often detects numerous ‘non-co-linear’ (NCL) transcripts, which comprise sequence segments that are topologically inconsistent with their corresponding DNA sequences in the reference genome. However, detection of NCL transcripts involves two major challenges: removal of false positives arising from alignment artifacts and discrimination between different types of NCL transcripts (trans-spliced, circular or fusion transcripts). Here, we developed a new NCL-transcript-detecting method (‘NCLscan’), which utilized a stepwise alignment strategy to almost completely eliminate false calls (>98% precision) without sacrificing true positives, enabling NCLscan to outperform 18 other publicly available tools (including fusion- and circular-RNA-detecting tools) in terms of sensitivity and precision, regardless of the generation strategy of the simulated dataset, the type of intragenic or intergenic NCL event, the read depth of coverage, the read length, or the expression level of the NCL transcript. Given this high accuracy, NCLscan was applied to distinguishing between trans-spliced, circular and fusion transcripts on the basis of poly(A)- and nonpoly(A)-selected RNA-seq data. We showed that circular RNAs were expressed more ubiquitously, more abundantly and less cell-type-specifically than trans-spliced and fusion transcripts. Our study thus describes a robust pipeline for the discovery of NCL transcripts, and sheds light on the fundamental biology of these non-canonical RNA events in the human transcriptome. PMID:26442529

  2. Precise and Accurate Density Determination of Explosives Using Hydrostatic Weighing

    SciTech Connect

    B. Olinger

    2005-07-01

    Precise and accurate density determination requires weight measurements in air and water using sufficiently precise analytical balances, knowledge of the densities of air and water, knowledge of thermal expansions, availability of a density standard, and a method to estimate the time to achieve thermal equilibrium with water. Density distributions in pressed explosives are inferred from the densities of elements from a central slice.
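
    The underlying relation is Archimedes' principle: the sample density follows from the balance readings in air and in water, corrected for the buoyancy of air. A minimal sketch of that standard relation, not taken from the report; the readings and fluid densities below are illustrative:

      # Hydrostatic (Archimedes) density estimate with an air-buoyancy correction.
      # Readings and fluid densities are illustrative, not values from the report.
      def hydrostatic_density(w_air_g, w_water_g, rho_water=0.99705, rho_air=0.0012):
          """Sample density in g/cm^3 from balance readings in air and in water.

          w_air_g   : balance reading with the sample in air (g)
          w_water_g : balance reading with the sample submerged in water (g)
          rho_water : water density at the measurement temperature (g/cm^3)
          rho_air   : air density (g/cm^3)
          """
          return w_air_g / (w_air_g - w_water_g) * (rho_water - rho_air) + rho_air

      # Example: a pressed piece weighing 10.000 g in air and 4.430 g in water
      print(hydrostatic_density(10.000, 4.430))  # ~1.79 g/cm^3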

  3. Precise and accurate isotopic measurements using multiple-collector ICPMS

    NASA Astrophysics Data System (ADS)

    Albarède, F.; Telouk, Philippe; Blichert-Toft, Janne; Boyet, Maud; Agranier, Arnaud; Nelson, Bruce

    2004-06-01

    New techniques of isotopic measurement using a new generation of mass spectrometers equipped with an inductively-coupled-plasma source, a magnetic mass filter, and multiple collection (MC-ICPMS) are quickly developing. These techniques are valuable because of (1) the ability of ICP sources to ionize virtually every element in the periodic table, and (2) the large sample throughput. However, because of the complex trajectories of the multiple ion beams produced in the plasma source, whether from the same or different elements, the acquisition of precise and accurate isotopic data with this type of instrument still requires a good understanding of instrumental fractionation processes, both mass-dependent and mass-independent. Although the physical processes responsible for the instrumental mass bias are still to be understood more fully, we here present a theoretical framework that allows most of the analytical limitations to high precision and accuracy to be overcome. After presenting a unifying phenomenological theory for mass-dependent fractionation in mass spectrometers, we show how this theory accounts for the techniques of standard bracketing and of isotopic normalization by a ratio of either the same or a different element, such as the use of Tl to correct mass bias on Pb. Accuracy is discussed with reference to the concept of cup efficiencies. Although these can be simply calibrated by analyzing standards, we derive a straightforward, very general method to calculate accurate isotopic ratios from dynamic measurements. In this study, we successfully applied the dynamic method to Nd and Pb as examples. We confirm that the assumption of identical mass bias for neighboring elements (notably Pb and Tl, and Yb and Lu) is both unnecessary and incorrect. We further discuss the dangers of straightforward standard-sample bracketing when chemical purification of the element to be analyzed is imperfect. Pooling runs to improve precision is acceptable provided the pooled
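
    One of the correction schemes mentioned above, standard-sample bracketing, amounts in its simplest form to interpolating the instrumental bias measured on a standard of known ratio run immediately before and after the sample, then dividing it out. A minimal sketch of that conventional scheme, not the paper's refined treatment; all ratios are invented for illustration:

      # Simple sample-standard bracketing: assume the mass bias drifts linearly in
      # time, estimate it from the bracketing standards, and divide it out.
      def bracket_correct(r_sample_meas, r_std_before, r_std_after, r_std_true):
          """Bias-corrected sample ratio under a linear-drift assumption."""
          bias = 0.5 * (r_std_before + r_std_after) / r_std_true
          return r_sample_meas / bias

      # Hypothetical run: a standard of true ratio 1.0000 reads 1.0123 and 1.0131
      print(bracket_correct(0.7581, 1.0123, 1.0131, 1.0000))  # ~0.7486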

  4. Accurate and precise calibration of AFM cantilever spring constants using laser Doppler vibrometry.

    PubMed

    Gates, Richard S; Pratt, Jon R

    2012-09-21

    Accurate cantilever spring constants are important in atomic force microscopy, both for controlling sensitive imaging and for obtaining correct nanomechanical property measurements. Conventional atomic force microscope (AFM) spring constant calibration techniques are usually performed in an AFM. They rely on significant handling and often require touching the cantilever probe tip to a surface to calibrate the optical lever sensitivity of the configuration, which can damage the tip. The thermal calibration technique developed for laser Doppler vibrometry (LDV) can be used to calibrate cantilevers without handling or touching the tip to a surface. Both flexural and torsional spring constants can be measured. Using both Euler-Bernoulli modeling and an SI-traceable electrostatic force balance technique for comparison, we demonstrate that the LDV thermal technique is capable of providing rapid calibrations with a combination of ease, accuracy and precision beyond anything previously available.
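
    The starting point of any thermal calibration is the equipartition theorem, which ties the cantilever's mean-square thermal deflection to its spring constant. A minimal sketch of that relation only; the full LDV procedure additionally fits the first-mode resonance in the power spectral density and applies mode-shape corrections, and the deflection record below is synthetic:

      import numpy as np

      KB = 1.380649e-23  # Boltzmann constant, J/K

      def spring_constant_equipartition(z_m, temperature_k=295.0):
          """k = kB*T / <z^2>, with z the thermally driven deflection in metres."""
          z = np.asarray(z_m)
          var = np.var(z - z.mean())        # mean-square thermal deflection
          return KB * temperature_k / var   # N/m

      # Synthetic record: ~0.05 nm RMS thermal motion implies k of roughly 1.6 N/m
      rng = np.random.default_rng(0)
      z_sim = rng.normal(0.0, 0.05e-9, size=200_000)
      print(spring_constant_equipartition(z_sim))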

  5. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  6. Indirect Terahertz Spectroscopy of Molecular Ions Using Highly Accurate and Precise Mid-Ir Spectroscopy

    NASA Astrophysics Data System (ADS)

    Mills, Andrew A.; Ford, Kyle B.; Kreckel, Holger; Perera, Manori; Crabtree, Kyle N.; McCall, Benjamin J.

    2009-06-01

    With the advent of Herschel and SOFIA, laboratory methods capable of providing molecular rest frequencies in the terahertz and sub-millimeter regime are increasingly important. It has so far been difficult to perform spectroscopy in this wavelength region because of the limited availability of radiation sources, optics, and detectors. Our goal is to provide accurate THz rest frequencies for molecular ions by combining previously recorded microwave transitions with combination differences obtained from high-precision mid-IR spectroscopy. We are constructing a Sensitive Resolved Ion Beam Spectroscopy setup which will harness the benefits of kinematic compression in a molecular ion beam to enable very high resolution spectroscopy. This ion beam is interrogated by continuous-wave cavity ringdown spectroscopy using a home-built, widely tunable difference-frequency laser that combines two near-IR lasers and a periodically poled lithium niobate crystal. Here, we report our efforts to optimize our ion beam spectrometer and to perform high-precision and high-accuracy frequency measurements using an optical frequency comb.
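
    The combination-difference idea is simple arithmetic: two precisely measured mid-IR transitions that share an upper level differ by exactly the spacing of their two lower rotational levels, which is the desired THz or sub-millimeter rest frequency. A toy illustration with invented numbers (not measured frequencies of any particular ion):

      # Combination differences: nu_rot = |nu_IR,1 - nu_IR,2| for two IR lines
      # sharing a common upper level. The line frequencies below are placeholders.
      def combination_difference(nu_ir_1_mhz, nu_ir_2_mhz):
          """Spacing of the two lower levels, in the same units as the inputs."""
          return abs(nu_ir_1_mhz - nu_ir_2_mhz)

      nu1 = 79_804_512.3   # hypothetical mid-IR line from lower level a (MHz)
      nu2 = 78_112_890.7   # hypothetical mid-IR line from lower level b (MHz)
      print(combination_difference(nu1, nu2), "MHz")  # ~1.69e6 MHz, i.e. ~1.69 THz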

  7. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in the earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of the combined uncertainty of the data using a few common mass bias correction models are outlined.

  8. Hydrogen atoms can be located accurately and precisely by x-ray crystallography

    PubMed Central

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M.; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-01-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A–H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A–H bond lengths with those from neutron measurements for A–H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  9. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  10. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  11. Accurate multiple network alignment through context-sensitive random walk

    PubMed Central

    2015-01-01

    Background: Comparative network analysis can provide an effective means of analyzing large-scale biological networks and gaining novel insights into their structure and organization. Global network alignment aims to predict the best overall mapping between a given set of biological networks, thereby identifying important similarities as well as differences among the networks. It has been shown that network alignment methods can be used to detect pathways or network modules that are conserved across different networks. Until now, a number of network alignment algorithms have been proposed based on different formulations and approaches, many of them focusing on pairwise alignment. Results: In this work, we propose a novel multiple network alignment algorithm based on a context-sensitive random walk model. The random walker employed in the proposed algorithm switches between two different modes, namely, an individual walk on a single network and a simultaneous walk on two networks. The switching decision is made in a context-sensitive manner by examining the current neighborhood, which is effective for quantitatively estimating the degree of correspondence between nodes that belong to different networks, in a manner that sensibly integrates node similarity and topological similarity. The resulting node correspondence scores are then used to predict the maximum expected accuracy (MEA) alignment of the given networks. Conclusions: Performance evaluation based on synthetic networks as well as real protein-protein interaction networks shows that the proposed algorithm can construct more accurate multiple network alignments compared to other leading methods. PMID:25707987

  12. Digital PCR modeling for maximal sensitivity, dynamic range and measurement precision.

    PubMed

    Majumdar, Nivedita; Wessel, Thomas; Marks, Jeffrey

    2015-01-01

    The great promise of digital PCR is the potential for unparalleled precision enabling accurate measurements for genetic quantification. A challenge associated with digital PCR experiments, when testing unknown samples, is to perform experiments at dilutions allowing the detection of one or more targets of interest at a desired level of precision. While theory states that optimal precision (Po) is achieved by targeting ~1.59 mean copies per partition (λ), and that dynamic range (R) includes the space spanning one positive (λL) to one negative (λU) result from the total number of partitions (n), these results are tempered for the practitioner seeking to construct digital PCR experiments in the laboratory. A mathematical framework is presented elucidating the relationships between precision, dynamic range, number of partitions, interrogated volume, and sensitivity in digital PCR. The impact that false reaction calls and volumetric variation have on sensitivity and precision is next considered. The resultant effects on sensitivity and precision are established via Monte Carlo simulations reflecting the real-world likelihood of encountering such scenarios in the laboratory. The simulations provide insight to the practitioner on how to adapt experimental loading concentrations to counteract any one of these conditions. The framework is augmented with a method of extending the dynamic range of digital PCR, with and without increasing n, via the use of dilutions. An example experiment demonstrating the capabilities of the framework is presented enabling detection across 3.33 logs of starting copy concentration. PMID:25806524
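
    The backbone of these relationships is Poisson statistics: the mean copies per partition follow from the observed fraction of positive partitions, and binomial error propagation gives the relative precision of that estimate. A minimal sketch of those standard relations, not the authors' full framework; the partition and positive counts below are invented:

      import math

      def copies_per_partition(n_positive, n_total):
          """Mean copies per partition, lambda = -ln(1 - p)."""
          p = n_positive / n_total
          return -math.log(1.0 - p)

      def relative_precision(n_positive, n_total):
          """Approximate relative standard error of lambda (binomial propagation)."""
          p = n_positive / n_total
          lam = -math.log(1.0 - p)
          return math.sqrt(p / (n_total * (1.0 - p))) / lam

      n_total = 20_000                            # partitions interrogated
      n_pos = 15_900                              # positives, giving lambda ~ 1.59
      lam = copies_per_partition(n_pos, n_total)
      print(lam, lam * n_total)                   # copies/partition, total copies
      print(relative_precision(n_pos, n_total))   # ~0.009, i.e. ~0.9% relative SE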

  13. Accurate and precise measurement of selenium by instrumental neutron activation analysis.

    PubMed

    Kim, In Jung; Watson, Russell P; Lindstrom, Richard M

    2011-05-01

    An accurate and precise measurement of selenium in Standard Reference Material (SRM) 3149, a primary calibration standard for the quantitative determination of selenium, has been accomplished by instrumental neutron activation analysis (INAA) in order to resolve a question arising during the certification process of the standard. Each limiting factor of the uncertainty in the activation analysis, including the sample preparation, irradiation, and γ-ray spectrometry steps, has been carefully monitored to minimize the uncertainty in the determined mass fraction. Neutron and γ-ray self-shielding within the elemental selenium INAA standards contributed most significantly to the uncertainty of the measurement. An empirical model compensating for neutron self-shielding and reducing the self-shielding uncertainty was successfully applied to these selenium standards. The mass fraction of selenium in the new lot of SRM 3149 was determined with a relative standard uncertainty of 0.6%.

  14. Accurate and precise Pb isotope ratio measurements in environmental samples by MC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Weiss, Dominik J.; Kober, Bernd; Dolgopolova, Alla; Gallagher, Kerry; Spiro, Baruch; Le Roux, Gaël; Mason, Thomas F. D.; Kylander, Malin; Coles, Barry J.

    2004-04-01

    Analytical protocols for accurate and precise Pb isotope ratio determinations in peat, lichen, vegetable, chimney dust, and ore-bearing granites using MC-ICP-MS and their application to environmental studies are presented. Acid dissolution of various matrix types was achieved using high temperature/high pressure microwave and hot plate digestion procedures. The digests were passed through a column packed with EiChrom Sr-resin employing only hydrochloric acid and one column passage. This simplified column chemistry allowed high sample throughput. Typically, internal precisions for approximately 30 ng Pb were below 100 ppm (±2σ) on all Pb ratios in all matrices. Thallium was employed to correct for mass discrimination effects and the achieved accuracy was below 80 ppm for all ratios. This involved an optimization procedure for the 205Tl/203Tl ratio using least-squares fits relative to certified NIST-SRM 981 Pb values. The long-term reproducibility (±2σ) for the NIST-SRM 981 Pb standard over a 5-month period (35 measurements) was better than 350 ppm for all ratios. Selected ore-bearing granites were measured with TIMS and MC-ICP-MS and showed good correlation (e.g., r=0.999 for 206Pb/207Pb ratios, slope=0.996, n=13). Mass bias and signal intensities of Tl spiked into natural (after matrix separation) and into synthetic samples did not differ significantly, indicating that any residual components of the complex peat and lichen matrix did not influence the mass bias correction. Environmental samples with very different matrices were analyzed during two different studies: (i) lichens, vegetables, and chimney dust around a Cu smelter in the Urals, and (ii) peat samples from an ombrotrophic bog in the Faroe Islands. The presented procedure for sample preparation, mass spectrometry, and data processing tools resulted in accurate and precise Pb isotope data that allowed the reliable differentiation and identification of Pb sources with variations as small as 0
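
    The Tl-based mass discrimination correction referred to above is conventionally implemented with the exponential fractionation law: a fractionation exponent is derived from the measured 205Tl/203Tl ratio and then applied to each Pb isotope pair. A minimal sketch of that textbook form; the paper goes further by optimizing the assumed 205Tl/203Tl value against NIST SRM 981 rather than taking a fixed value, and the measured ratios below are hypothetical:

      import math

      # Nominal isotope masses (amu) and a nominal 205Tl/203Tl reference ratio.
      M = {'Tl203': 202.9723, 'Tl205': 204.9744,
           'Pb204': 203.9730, 'Pb206': 205.9744,
           'Pb207': 206.9759, 'Pb208': 207.9767}
      TL_TRUE = 2.3889

      def mass_bias_beta(tl_measured):
          """Per-amu fractionation exponent from the measured 205Tl/203Tl ratio."""
          return math.log(tl_measured / TL_TRUE) / math.log(M['Tl205'] / M['Tl203'])

      def correct_ratio(r_measured, m_num, m_den, beta):
          """Exponential law: R_true = R_meas * (m_num/m_den)**(-beta)."""
          return r_measured * (m_num / m_den) ** (-beta)

      beta = mass_bias_beta(tl_measured=2.4105)                    # hypothetical run
      pb76 = correct_ratio(1.1712, M['Pb207'], M['Pb206'], beta)   # 207Pb/206Pb
      print(round(beta, 3), round(pb76, 5))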

  15. An inexpensive, accurate, and precise wet-mount method for enumerating aquatic viruses.

    PubMed

    Cunningham, Brady R; Brum, Jennifer R; Schwenck, Sarah M; Sullivan, Matthew B; John, Seth G

    2015-05-01

    Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the "filter mount" method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5 × 10⁷ viruses ml⁻¹. The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17 × 10⁶ to 1.37 × 10⁸ viruses ml⁻¹ when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1 × 10⁶ viruses ml⁻¹) encountered in field and laboratory samples.
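
    The enumeration itself rests on a simple proportion: the virus concentration in the sample is the known bead concentration scaled by the ratio of viruses to beads counted in the same fields, times any dilution applied. A minimal sketch of that relation; the counts and bead concentration below are invented for illustration:

      def virus_concentration(viruses_counted, beads_counted,
                              bead_conc_per_ml, dilution_factor=1.0):
          """Viruses per ml of the original sample, from the virus-to-bead ratio."""
          return (viruses_counted / beads_counted) * bead_conc_per_ml * dilution_factor

      # e.g. 412 viruses and 150 beads counted, beads added at 1.0e7 per ml
      print(virus_concentration(412, 150, 1.0e7))   # ~2.7e7 viruses per ml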

  16. An Inexpensive, Accurate, and Precise Wet-Mount Method for Enumerating Aquatic Viruses

    PubMed Central

    Cunningham, Brady R.; Brum, Jennifer R.; Schwenck, Sarah M.; Sullivan, Matthew B.

    2015-01-01

    Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the “filter mount” method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5 × 10⁷ viruses ml⁻¹. The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17 × 10⁶ to 1.37 × 10⁸ viruses ml⁻¹ when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1 × 10⁶ viruses ml⁻¹) encountered in field and laboratory samples. PMID:25710369

  17. An inexpensive, accurate, and precise wet-mount method for enumerating aquatic viruses.

    PubMed

    Cunningham, Brady R; Brum, Jennifer R; Schwenck, Sarah M; Sullivan, Matthew B; John, Seth G

    2015-05-01

    Viruses affect biogeochemical cycling, microbial mortality, gene flow, and metabolic functions in diverse environments through infection and lysis of microorganisms. Fundamental to quantitatively investigating these roles is the determination of viral abundance in both field and laboratory samples. One current, widely used method to accomplish this with aquatic samples is the "filter mount" method, in which samples are filtered onto costly 0.02-μm-pore-size ceramic filters for enumeration of viruses by epifluorescence microscopy. Here we describe a cost-effective (ca. 500-fold-lower materials cost) alternative virus enumeration method in which fluorescently stained samples are wet mounted directly onto slides, after optional chemical flocculation of viruses in samples with viral concentrations of <5 × 10⁷ viruses ml⁻¹. The concentration of viruses in the sample is then determined from the ratio of viruses to a known concentration of added microsphere beads via epifluorescence microscopy. Virus concentrations obtained by using this wet-mount method, with and without chemical flocculation, were significantly correlated with, and had precision equivalent to, those obtained by the filter mount method across concentrations ranging from 2.17 × 10⁶ to 1.37 × 10⁸ viruses ml⁻¹ when tested by using cultivated viral isolates and natural samples from marine and freshwater environments. In summary, the wet-mount method is significantly less expensive than the filter mount method and is appropriate for rapid, precise, and accurate enumeration of aquatic viruses over a wide range of viral concentrations (≥1 × 10⁶ viruses ml⁻¹) encountered in field and laboratory samples. PMID:25710369

  18. Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael

    2014-05-01

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics that reflect deficits in the employed force models. Following the analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km. Due to this constellation, the Sun almost constantly illuminates the satellite, which causes strong across-track accelerations in the plane perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and on the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
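
    For orientation, the size of the direct solar radiation pressure perturbation can be gauged with the simple "cannonball" model, whereas the work described here uses a detailed satellite macro model with per-surface optical properties. A rough sketch with illustrative area, mass and reflectivity coefficient (not actual TerraSAR-X values):

      SOLAR_FLUX = 1361.0      # W/m^2 at 1 AU
      C_LIGHT = 299_792_458.0  # m/s

      def srp_acceleration(cross_section_m2, mass_kg, c_r=1.3):
          """|a_SRP| = C_R * (A/m) * (Phi/c), in m/s^2 (cannonball approximation)."""
          return c_r * (cross_section_m2 / mass_kg) * SOLAR_FLUX / C_LIGHT

      print(srp_acceleration(10.0, 1230.0))   # ~5e-8 m/s^2 for the assumed values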

  19. Statistical precision and sensitivity of measures of dynamic gait stability.

    PubMed

    Bruijn, Sjoerd M; van Dieën, Jaap H; Meijer, Onno G; Beek, Peter J

    2009-04-15

    Recently, two methods for quantifying a system's dynamic stability have been applied to human locomotion: local stability (quantified by finite-time maximum Lyapunov exponents, λS-stride and λL-stride) and orbital stability (quantified as maximum Floquet multipliers, MaxFm). Thus far, however, it has remained unclear how many data points are required to obtain precise estimates of these measures during walking, and to what extent these estimates are sensitive to changes in walking behaviour. To resolve these issues, we collected long data series of healthy subjects (n=9) walking on a treadmill in three conditions (normal walking at 0.83 m/s (3 km/h) and 1.38 m/s (5 km/h), and walking at 1.38 m/s (5 km/h) while performing a Stroop dual task). Data series from the 0.83 and 1.38 m/s trials were submitted to a bootstrap procedure, and paired t-tests for samples of different data series lengths were performed between 0.83 and 1.38 m/s and between 1.38 m/s with and without the Stroop task. Longer data series led to more precise estimates for λS-stride, λL-stride, and MaxFm. All variables showed an effect of data series length. Thus, when estimating and comparing these variables across conditions, data series covering an equal number of strides should be analysed. λS-stride, λL-stride, and MaxFm were sensitive to the change in walking speed, while only λS-stride and MaxFm were sensitive enough to capture the modulations of walking induced by the Stroop task. Still, these modulations could only be detected when using a substantial number of strides (>150). PMID:19135478

  20. Automated End-to-End Workflow for Precise and Geo-accurate Reconstructions using Fiducial Markers

    NASA Astrophysics Data System (ADS)

    Rumpler, M.; Daftry, S.; Tscharf, A.; Prettenthaler, R.; Hoppe, C.; Mayer, G.; Bischof, H.

    2014-08-01

    Photogrammetric computer vision systems have been well established in many scientific and commercial fields during the last decades. Recent developments in image-based 3D reconstruction systems, in conjunction with the availability of affordable, high-quality digital consumer-grade cameras, have resulted in an easy way of creating visually appealing 3D models. However, many of these methods require manual steps in the processing chain, and for many photogrammetric applications (such as mapping, recurrent topographic surveys, or architectural and archaeological 3D documentation) high accuracy in a geo-coordinate system is required, which often cannot be guaranteed. Hence, in this paper we present and advocate a fully automated end-to-end workflow for precise and geo-accurate 3D reconstructions using fiducial markers. We integrate an automatic camera calibration and georeferencing method into our image-based reconstruction pipeline based on binary-coded fiducial markers as artificial, individually identifiable landmarks in the scene. Additionally, we facilitate the use of these markers in conjunction with known ground control points (GCP) in the bundle adjustment, and use an online feedback method that allows assessment of the final reconstruction quality in terms of image overlap, ground sampling distance (GSD), and completeness, and thus provides flexibility to adapt the image acquisition strategy already during image recording. An extensive set of experiments is presented which demonstrates that the workflow yields a highly accurate and geographically aligned reconstruction with an absolute point position uncertainty of about 1.5 times the ground sampling distance.

  1. Precise and accurate assessment of uncertainties in model parameters from stellar interferometry. Application to stellar diameters

    NASA Astrophysics Data System (ADS)

    Lachaume, Regis; Rabus, Markus; Jordan, Andres

    2015-08-01

    In stellar interferometry, the assumption that the observables can be treated as Gaussian, independent variables is the norm. In particular, neither the optical interferometry FITS (OIFITS) format nor the most popular fitting software in the field, LITpro, offers means to specify a covariance matrix or non-Gaussian uncertainties. Interferometric observables are, however, correlated by construction. Also, the calibration by an instrumental transfer function ensures that the resulting observables are not Gaussian, even if the uncalibrated ones happened to be so. While analytic frameworks have been published in the past, they are cumbersome and no generic implementation is available. We propose here a relatively simple way of dealing with correlated errors without the need to extend the OIFITS specification or to make Gaussian assumptions. By repeatedly picking at random which interferograms and which calibrator stars are used, and which errors are assigned to the calibrator diameters, and performing the data processing on the bootstrapped data, we derive a sampling of p(O), the multivariate probability density function (PDF) of the observables O. The results can be stored in a normal OIFITS file. Then, given a model m with parameters P predicting observables O = m(P), we can estimate the PDF of the model parameters, f(P) = p(m(P)), by using a density estimation of the observables' PDF p. With observations repeated over different baselines, on nights several days apart, and with a significant set of calibrators, systematic errors are de facto taken into account. We apply the technique to a precise and accurate assessment of stellar diameters obtained at the Very Large Telescope Interferometer with PIONIER.
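
    The procedure sketched in the abstract is essentially a nonparametric bootstrap: resample the calibrated observables many times, refit the model each time, and treat the spread of the fitted parameters as a sampling of their (possibly correlated, non-Gaussian) PDF. A generic illustration of that idea with a toy one-parameter least-squares model standing in for a visibility model; it is not the authors' pipeline:

      import numpy as np

      def fit_parameter(x, y):
          """Least-squares slope of y = p*x (stand-in for a real model fit)."""
          return np.sum(x * y) / np.sum(x * x)

      def bootstrap_parameter(x, y, n_boot=2000, seed=1):
          rng = np.random.default_rng(seed)
          n = len(x)
          samples = np.empty(n_boot)
          for i in range(n_boot):
              idx = rng.integers(0, n, size=n)   # resample observables with replacement
              samples[i] = fit_parameter(x[idx], y[idx])
          return samples                         # a sampling of the parameter PDF

      rng = np.random.default_rng(0)
      x = np.linspace(0.1, 1.0, 40)
      y = 0.8 * x + rng.normal(0, 0.05, size=40)
      p = bootstrap_parameter(x, y)
      print(p.mean(), p.std())                   # ~0.8 with its bootstrap scatter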

  2. A new direct absorption measurement for high precision and accurate measurement of water vapor in the UT/LS

    NASA Astrophysics Data System (ADS)

    Sargent, M. R.; Sayres, D. S.; Smith, J. B.; Anderson, J.

    2011-12-01

    Highly accurate and precise water vapor measurements in the upper troposphere and lower stratosphere are critical to understanding the climate feedbacks of water vapor and clouds in that region. However, the continued disagreements among water vapor measurements (~1-2 ppmv) are too large to constrain the role of different hydration and dehydration mechanisms operating in the UT/LS, with model validation dependent upon which dataset is chosen. In response to these issues, we present a new instrument for the measurement of water vapor in the UT/LS that was flown during the April 2011 MACPEX mission out of Houston, TX. The dual-axis instrument combines the heritage and validated accuracy of the Harvard Lyman-alpha instrument with a newly designed direct IR absorption instrument, the Harvard Herriott Hygrometer (HHH). The Lyman-alpha detection axis has flown aboard NASA's WB-57 and ER-2 aircraft since 1994, and provides a requisite link between the new HHH instrument and the long history of Harvard water vapor measurements. The instrument utilizes the highly sensitive Lyman-alpha photo-fragment fluorescence detection method; its accuracy has been demonstrated through rigorous laboratory calibrations and in situ diagnostic procedures. The Harvard Herriott Hygrometer employs a fiber-coupled near-IR laser with state-of-the-art electronics to measure water vapor via direct absorption in a spherical Herriott cell of 10 cm length. The instrument demonstrated in-flight precision of 0.1 ppmv (1-sec, 1-sigma) at mixing ratios as low as 5 ppmv, with accuracies of 10% based on careful laboratory calibrations and in-flight performance. We present a description of the measurement technique along with our methodology for calibration and details of the measurement uncertainties. The simultaneous utilization of radically different measurement techniques in a single duct in the new Harvard Water Vapor (HWV) instrument allows for the constraint of systematic errors inherent in each technique

  3. Fast, Accurate and Precise Mid-Sagittal Plane Location in 3D MR Images of the Brain

    NASA Astrophysics Data System (ADS)

    Bergo, Felipe P. G.; Falcão, Alexandre X.; Yasuda, Clarissa L.; Ruppert, Guilherme C. S.

    Extraction of the mid-sagittal plane (MSP) is a key step for brain image registration and asymmetry analysis. We present a fast MSP extraction method for 3D MR images, based on automatic segmentation of the brain and on heuristic maximization of the cerebro-spinal fluid within the MSP. The method is robust to severe anatomical asymmetries between the hemispheres caused by surgical procedures and lesions. The method is also accurate with respect to MSP delineations done by a specialist. The method was evaluated on 64 MR images (36 pathological, 20 healthy, 8 synthetic), and it found a precise and accurate approximation of the MSP in all of them, with a mean time of 60.0 seconds per image, a mean angular variation within the same image (precision) of 1.26° and a mean angular difference from specialist delineations (accuracy) of 1.64°.

  4. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    With the continuous development of technology, the ranging accuracy of pulsed laser range finders (LRF) keeps increasing, so the demand placed on LRF maintenance and testing is also rising. Following the guiding principle of simulating spatial distance with time delay when testing pulsed range finders, the key to distance-simulation precision lies in the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber and circuit delays, a method was proposed to improve the accuracy of the circuit delay without increasing the counting frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the proposed circuit delay method was verified by measurement with a high-sampling-rate oscilloscope. The measurement results show that the accuracy of the distance simulated by the circuit delay is improved from ±0.75 m to ±0.15 m. The accuracy of the simulated distance is thus greatly improved for testing high-precision pulsed range finders.
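
    The conversion behind simulating distance with time is the two-way time of flight: a target at range d corresponds to an echo delay t = 2d/c, so a delay error maps to a range error of c*dt/2. A short illustration of the arithmetic (the example range is arbitrary):

      C = 299_792_458.0  # speed of light, m/s

      def delay_for_range(distance_m):
          """Echo delay (s) that simulates a target at the given range."""
          return 2.0 * distance_m / C

      def range_error(delay_error_s):
          """Range error (m) caused by a given delay error."""
          return C * delay_error_s / 2.0

      print(delay_for_range(1500.0) * 1e6)   # ~10.0 microseconds for a 1.5 km target
      print(range_error(1e-9))               # a 1 ns delay error is ~0.15 m in range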

  5. Precision grid and hand motion for accurate needle insertion in brachytherapy

    SciTech Connect

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.; McLaughlin, Patrick W.; Shih, Albert J.

    2011-08-15

    Purpose: In prostate brachytherapy, a grid is used to guide a needle tip toward a preplanned location within the tissue. During insertion, the needle deflects en route, resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into a phantom were performed to test the effects of three parameters: the clearance between the grid hole and the needle, the thickness of the grid, and the needle insertion speed. A measurement apparatus consisting of two datum surfaces and a digital depth gauge was developed to quantify needle deflections. Methods: A gauge repeatability and reproducibility (GR&R) test was performed on the measurement apparatus, and it proved capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2³ factorial design (three parameters at two levels), and the analysis included averages and standard deviations along with an analysis of variance (ANOVA) to find significant single-factor and two-way interaction effects. Results: Results showed that a grid with a tight-clearance hole and a slow needle speed increased the precision and accuracy of needle insertion. The tight grid was vital to enhancing the precision and accuracy of needle insertion at both slow and fast insertion speeds; additionally, at slow speed the tight, thick grid improved needle precision and accuracy. Conclusions: In summary, the tight grid is important regardless of speed. The grid design, which shows the capability to reduce needle deflection in brachytherapy procedures, can potentially be implemented in the brachytherapy procedure.

  6. Precise and Accurate Measurements of Strong-Field Photoionization and a Transferable Laser Intensity Calibration Standard.

    PubMed

    Wallace, W C; Ghafur, O; Khurmi, C; Sainadh U, Satya; Calvert, J E; Laban, D E; Pullen, M G; Bartschat, K; Grum-Grzhimailo, A N; Wells, D; Quiney, H M; Tong, X M; Litvinyuk, I V; Sang, R T; Kielpinski, D

    2016-07-29

    Ionization of atoms and molecules in strong laser fields is a fundamental process in many fields of research, especially in the emerging field of attosecond science. So far, demonstrably accurate data have only been acquired for atomic hydrogen (H), a species that is accessible to few investigators. Here, we present measurements of the ionization yield for argon, krypton, and xenon with percent-level accuracy, calibrated using H, in a laser regime widely used in attosecond science. We derive a transferable calibration standard for laser peak intensity, accurate to 1.3%, that is based on a simple reference curve. In addition, our measurements provide a much needed benchmark for testing models of ionization in noble-gas atoms, such as the widely employed single-active electron approximation.

  7. Precise and Accurate Measurements of Strong-Field Photoionization and a Transferable Laser Intensity Calibration Standard.

    PubMed

    Wallace, W C; Ghafur, O; Khurmi, C; Sainadh U, Satya; Calvert, J E; Laban, D E; Pullen, M G; Bartschat, K; Grum-Grzhimailo, A N; Wells, D; Quiney, H M; Tong, X M; Litvinyuk, I V; Sang, R T; Kielpinski, D

    2016-07-29

    Ionization of atoms and molecules in strong laser fields is a fundamental process in many fields of research, especially in the emerging field of attosecond science. So far, demonstrably accurate data have only been acquired for atomic hydrogen (H), a species that is accessible to few investigators. Here, we present measurements of the ionization yield for argon, krypton, and xenon with percent-level accuracy, calibrated using H, in a laser regime widely used in attosecond science. We derive a transferable calibration standard for laser peak intensity, accurate to 1.3%, that is based on a simple reference curve. In addition, our measurements provide a much needed benchmark for testing models of ionization in noble-gas atoms, such as the widely employed single-active electron approximation. PMID:27517769

  8. Precise and Accurate Measurements of Strong-Field Photoionization and a Transferable Laser Intensity Calibration Standard

    NASA Astrophysics Data System (ADS)

    Wallace, W. C.; Ghafur, O.; Khurmi, C.; Sainadh U, Satya; Calvert, J. E.; Laban, D. E.; Pullen, M. G.; Bartschat, K.; Grum-Grzhimailo, A. N.; Wells, D.; Quiney, H. M.; Tong, X. M.; Litvinyuk, I. V.; Sang, R. T.; Kielpinski, D.

    2016-07-01

    Ionization of atoms and molecules in strong laser fields is a fundamental process in many fields of research, especially in the emerging field of attosecond science. So far, demonstrably accurate data have only been acquired for atomic hydrogen (H), a species that is accessible to few investigators. Here, we present measurements of the ionization yield for argon, krypton, and xenon with percent-level accuracy, calibrated using H, in a laser regime widely used in attosecond science. We derive a transferable calibration standard for laser peak intensity, accurate to 1.3%, that is based on a simple reference curve. In addition, our measurements provide a much needed benchmark for testing models of ionization in noble-gas atoms, such as the widely employed single-active electron approximation.

  9. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  10. Completing eHIFLUGCS: the Ultimate Precise and Accurate Local Baseline

    NASA Astrophysics Data System (ADS)

    Reiprich, Thomas

    2012-09-01

    Currently, the largest complete local cluster sample with full high quality X-ray coverage is HIFLUGCS. Its selection is based on the ROSAT All-Sky Survey and complete X-ray follow-up has been performed with Chandra and XMM-Newton, resulting in numerous applications in cluster physics and cosmology by several research groups. The combination of high completeness, large sample size, and high quality follow-up has been crucial for this wide applicability. Here, we propose a threefold increase in sample size with a new complete high quality sample, eHIFLUGCS. We demonstrate that this significantly increased statistics will enable substantial improvements in precision for several studies as well as qualitatively new tests.

  11. Rapid, Precise, and Accurate Counts of Symbiodinium Cells Using the Guava Flow Cytometer, and a Comparison to Other Methods

    PubMed Central

    Caruso, Carlo; Burriesci, Matthew S.; Cella, Kristen; Pringle, John R.

    2015-01-01

    In studies of both the establishment and breakdown of cnidarian-dinoflagellate symbiosis, it is often necessary to determine the number of Symbiodinium cells relative to the quantity of host tissue. Ideally, the methods used should be rapid, precise, and accurate. In this study, we systematically evaluated methods for sample preparation and storage and the counting of algal cells using the hemocytometer, a custom image-analysis program for automated counting of the fluorescent algal cells, the Coulter Counter, or the Millipore Guava flow-cytometer. We found that although other methods may have value in particular applications, for most purposes, the Guava flow cytometer provided by far the best combination of precision, accuracy, and efficient use of investigator time (due to the instrument's automated sample handling), while also allowing counts of algal numbers over a wide range and in small volumes of tissue homogenate. We also found that either of two assays of total homogenate protein provided a precise and seemingly accurate basis for normalization of algal counts to the total amount of holobiont tissue. PMID:26291447

  12. Rapid, Precise, and Accurate Counts of Symbiodinium Cells Using the Guava Flow Cytometer, and a Comparison to Other Methods.

    PubMed

    Krediet, Cory J; DeNofrio, Jan C; Caruso, Carlo; Burriesci, Matthew S; Cella, Kristen; Pringle, John R

    2015-01-01

    In studies of both the establishment and breakdown of cnidarian-dinoflagellate symbiosis, it is often necessary to determine the number of Symbiodinium cells relative to the quantity of host tissue. Ideally, the methods used should be rapid, precise, and accurate. In this study, we systematically evaluated methods for sample preparation and storage and the counting of algal cells using the hemocytometer, a custom image-analysis program for automated counting of the fluorescent algal cells, the Coulter Counter, or the Millipore Guava flow-cytometer. We found that although other methods may have value in particular applications, for most purposes, the Guava flow cytometer provided by far the best combination of precision, accuracy, and efficient use of investigator time (due to the instrument's automated sample handling), while also allowing counts of algal numbers over a wide range and in small volumes of tissue homogenate. We also found that either of two assays of total homogenate protein provided a precise and seemingly accurate basis for normalization of algal counts to the total amount of holobiont tissue. PMID:26291447

  13. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  14. Growing degree hours - a simple, accurate, and precise protocol to approximate growing heat summation for grapevines

    NASA Astrophysics Data System (ADS)

    Gu, S.

    2016-08-01

    Despite its low accuracy and consistency, growing degree days (GDD) has been widely used to approximate growing heat summation (GHS) for regional classification and phenological prediction. GDD is usually calculated from the mean of daily minimum and maximum temperatures (GDDmm) above a growing base temperature (Tgb). To determine approximation errors and accuracy, daily and cumulative GDDmm were compared to GDD based on daily average temperature (GDDavg), growing degree hours (GDH) based on hourly temperatures, and growing degree minutes (GDM) based on minute-by-minute temperatures. Finite error, due to the difference between measured and true temperatures above Tgb, is large in GDDmm but is negligible in GDDavg, GDH, and GDM, depending only upon the number of measured temperatures used for the daily approximation. Hidden negative error, due to temperatures below Tgb being averaged over approximation intervals larger than the measuring interval, is large in GDDmm and GDDavg but is negligible in GDH and GDM. Both GDH and GDM improve GHS approximation accuracy over GDDmm or GDDavg by summing multiple integration rectangles to reduce both finite and hidden negative errors. GDH is proposed as the standardized GHS approximation protocol, providing adequate accuracy and high precision independent of Tgb while requiring only simple data recording and processing.
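
    The distinction between the protocols is easiest to see in code: GDD from the daily min/max mean can be dragged down by below-base night temperatures hidden inside the average, while GDH clips every hour at the base temperature before summing. A minimal sketch with an invented base temperature and temperature trace (dividing the hourly sum by 24 so the two are in the same degree-day units):

      def gdd_minmax(t_min, t_max, t_base=10.0):
          """Classic GDD from the daily min/max mean, clipped at zero."""
          return max((t_min + t_max) / 2.0 - t_base, 0.0)

      def gdh(hourly_temps, t_base=10.0):
          """Growing degree hours over a day, expressed in degree-day units."""
          return sum(max(t - t_base, 0.0) for t in hourly_temps) / 24.0

      # A day with a cold night: the below-base hours pull the min/max mean down,
      # so GDDmm underestimates the heat actually accumulated in the warm afternoon.
      hourly = [4, 3, 2, 2, 3, 5, 8, 11, 14, 17, 19, 20,
                20, 19, 18, 16, 14, 12, 10, 8, 7, 6, 5, 4]
      print(gdd_minmax(min(hourly), max(hourly)))   # 1.0
      print(gdh(hourly))                            # ~2.9, no hidden negative error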

  15. Extracting Accurate and Precise Topography from LROC Narrow Angle Camera Stereo Observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Speyerer, E. J.; Robinson, M. S.; LROC Team

    2016-06-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that acquire meter-scale imaging. Stereo observations are acquired by imaging from two or more orbits, including at least one off-nadir slew. Digital terrain models (DTMs) generated from the stereo observations are controlled to Lunar Orbiter Laser Altimeter (LOLA) elevation profiles. With current processing methods, DTMs have absolute accuracies commensurate with the uncertainties of the LOLA profiles (~10 m horizontally and ~1 m vertically) and relative horizontal and vertical precisions better than the pixel scale of the DTMs (2 to 5 m). The NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. Overlapping DTMs of single stereo sets were also combined to form larger-area DTM mosaics, enabling detailed characterization of large geomorphic features and providing a key resource for future exploration planning. Currently, two percent of the lunar surface is imaged in NAC stereo, and continued acquisition of stereo observations will serve to strengthen our knowledge of the Moon and the geologic processes that occur on all the terrestrial planets.

  16. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    SciTech Connect

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-09-21

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1997)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor–liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard

  17. Accurate and precise determination of critical properties from Gibbs ensemble Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Dinpajooh, Mohammadhasan; Bai, Peng; Allan, Douglas A.; Siepmann, J. Ilja

    2015-09-01

    Since the seminal paper by Panagiotopoulos [Mol. Phys. 61, 813 (1987)], the Gibbs ensemble Monte Carlo (GEMC) method has been the most popular particle-based simulation approach for the computation of vapor-liquid phase equilibria. However, the validity of GEMC simulations in the near-critical region has been questioned because rigorous finite-size scaling approaches cannot be applied to simulations with fluctuating volume. Valleau [Mol. Simul. 29, 627 (2003)] has argued that GEMC simulations would lead to a spurious overestimation of the critical temperature. More recently, Patel et al. [J. Chem. Phys. 134, 024101 (2011)] opined that the use of analytical tail corrections would be problematic in the near-critical region. To address these issues, we perform extensive GEMC simulations for Lennard-Jones particles in the near-critical region varying the system size, the overall system density, and the cutoff distance. For a system with N = 5500 particles, potential truncation at 8σ and analytical tail corrections, an extrapolation of GEMC simulation data at temperatures in the range from 1.27 to 1.305 yields Tc = 1.3128 ± 0.0016, ρc = 0.316 ± 0.004, and pc = 0.1274 ± 0.0013 in excellent agreement with the thermodynamic limit determined by Potoff and Panagiotopoulos [J. Chem. Phys. 109, 10914 (1998)] using grand canonical Monte Carlo simulations and finite-size scaling. Critical properties estimated using GEMC simulations with different overall system densities (0.296 ≤ ρt ≤ 0.336) agree to within the statistical uncertainties. For simulations with tail corrections, data obtained using rcut = 3.5σ yield Tc and pc that are higher by 0.2% and 1.4% than simulations with rcut = 5 and 8σ but still with overlapping 95% confidence intervals. In contrast, GEMC simulations with a truncated and shifted potential show that rcut = 8σ is insufficient to obtain accurate results. Additional GEMC simulations for hard-core square-well particles with various ranges of the

  18. Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation

    NASA Astrophysics Data System (ADS)

    Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.

    2016-06-01

    Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 wn) corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings, but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 wn, whereas for higher levels the agreement is on the order of 0.01 - 0.1 wn, closely rivaling the uncertainties on the experimental data. As method development for calculating these various corrections progresses it becomes necessary for the uncertainties on the experimental data to be improved in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which have arisen from low lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow for probing higher energies on the potential energy surface, but through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson Phys. Rev. A (2013), 88, 032506 O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár Phil. Trans. R. Soc. A (2012), 370, 5014 J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall J. Chem. Phys. (2013), 139, 164201 A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall J. Molec. Spectrosc. (2015), 317, 71-73.

  19. Double Precision Differential/Algebraic Sensitivity Analysis Code

    1995-06-02

    DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
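
    As an illustration of direct (concurrent) parametric sensitivity analysis, here is a minimal Python sketch that integrates a forward sensitivity equation alongside a stiff state equation. It is a simplified stand-in: it uses SciPy's Radau integrator on an explicit ODE, whereas DDASAC itself handles fully implicit DAE systems F(t, y, y', p) = 0; the model and parameter are hypothetical.

```python
# Forward (direct) parametric sensitivity integrated concurrently with the state.
# Toy stiff model: y' = -k*y + 1; sensitivity s = dy/dk obeys s' = -k*s - y.
import numpy as np
from scipy.integrate import solve_ivp

k = 50.0  # stiff rate parameter (hypothetical)

def augmented(t, z):
    y, s = z                   # state and its sensitivity to k
    dy = -k * y + 1.0          # state equation
    ds = -k * s - y            # sensitivity equation, d/dt (dy/dk)
    return [dy, ds]

sol = solve_ivp(augmented, (0.0, 1.0), [0.0, 0.0],
                method="Radau", rtol=1e-8, atol=1e-10)
print("y(1) =", sol.y[0, -1], " dy/dk at t=1 =", sol.y[1, -1])
```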

  20. Fluorescence polarization immunoassays for rapid, accurate, and sensitive determination of mycotoxins

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...

  1. A new sensor system for accurate and precise determination of sediment dynamics and position.

    NASA Astrophysics Data System (ADS)

    Maniatis, Georgios; Hoey, Trevor; Sventek, Joseph; Hodge, Rebecca

    2014-05-01

    Sediment transport processes control many significant geomorphological changes. Consequently, sediment transport dynamics are studied across a wide range of scales, leading to application of a variety of conceptually different mathematical descriptions (models) and data acquisition techniques (sensing). For river sediment transport processes both Eulerian and Lagrangian formulations are used. Data are gathered using a very wide range of sensing techniques that are not always compatible with the conceptual formulation applied. We are concerned with small to medium sediment grain-scale motion in gravel-bed rivers, and other coarse-grained environments, and: a) are developing a customised environmental sensor capable of providing coherent data that reliably record the motion; and, b) provide a mathematical framework in which these data can be analysed and interpreted, this being compatible with current stochastic approaches to sediment transport theory. Here we present results from three different aspects of the above developmental process. Firstly, we present a requirement analysis for the sensor based on the state of the art of the existing technologies. We focus on the factors that enhance data coherence and representativeness, extending the common practice for optimization which is based exclusively on electronics/computing related criteria. This analysis leads to formalization of a method that permits accurate control of the physical properties of the sensor using contemporary rapid prototyping techniques [Maniatis et al. 2013]. Secondly, the first results are presented from a series of entrainment experiments in a 5 × 0.8 m flume in which a prototype sensor was deployed to monitor entrainment dynamics under increasing flow conditions (0.037 m³ s⁻¹). The sensor was enclosed in an idealized spherical case (111 mm diameter) and placed on a constructed bed of hemispheres of the same diameter. We measured 3-axial inertial acceleration (as a measure of flow stress

  2. Accurate noncontact calibration of colloidal probe sensitivities in atomic force microscopy.

    PubMed

    Chung, Koo-Hyun; Shaw, Gordon A; Pratt, Jon R

    2009-06-01

    The absolute force sensitivities of colloidal probes comprised of atomic force microscope, or AFM, cantilevers with microspheres attached to their distal ends are measured. The force sensitivities are calibrated through reference to accurate electrostatic forces, the realizations of which are described in detail. Furthermore, the absolute accuracy of a common AFM force calibration scheme, known as the thermal noise method, is evaluated. It is demonstrated that the thermal noise method can be applied with great success to colloidal probe calibration in air and in liquid to yield force measurements with relative standard uncertainties below 5%. Techniques to combine the electrostatics-based determination of the AFM force sensitivity with measurements of the colloidal probe's thermal noise spectrum to compute noncontact estimates of the displacement sensitivity and spring constant are also developed.
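
    In its simplest form, the thermal noise method referred to above rests on the equipartition theorem: the cantilever spring constant follows from the thermally driven mean-square deflection. The expression below is the idealized textbook form; corrections for the cantilever mode shape and the deflection-sensitivity calibration, which the paper addresses, are omitted here.

```latex
\tfrac{1}{2}\,k\,\langle x^{2}\rangle = \tfrac{1}{2}\,k_{B}T
\quad\Longrightarrow\quad
k = \frac{k_{B}T}{\langle x^{2}\rangle}
```

    Here ⟨x²⟩ is obtained by integrating the measured thermal noise power spectrum of the freely fluctuating cantilever, and k_B T is the thermal energy at the measurement temperature.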

  3. Accurate noncontact calibration of colloidal probe sensitivities in atomic force microscopy

    SciTech Connect

    Chung, Koo-Hyun; Shaw, Gordon A.; Pratt, Jon R.

    2009-06-15

    The absolute force sensitivities of colloidal probes comprised of atomic force microscope, or AFM, cantilevers with microspheres attached to their distal ends are measured. The force sensitivities are calibrated through reference to accurate electrostatic forces, the realizations of which are described in detail. Furthermore, the absolute accuracy of a common AFM force calibration scheme, known as the thermal noise method, is evaluated. It is demonstrated that the thermal noise method can be applied with great success to colloidal probe calibration in air and in liquid to yield force measurements with relative standard uncertainties below 5%. Techniques to combine the electrostatics-based determination of the AFM force sensitivity with measurements of the colloidal probe's thermal noise spectrum to compute noncontact estimates of the displacement sensitivity and spring constant are also developed.

  4. A hydrogen gas-water equilibration method produces accurate and precise stable hydrogen isotope ratio measurements in nutrition studies.

    PubMed

    Wong, William W; Clarke, Lucinda L

    2012-11-01

    Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to hydrogen gas (H(2)) for mass spectrometric analysis are labor intensive, require special reagent, and involve memory effect and potential isotope fractionation. The objective of this study was to determine the accuracy and precision of a platinum catalyzed H(2)-water equilibration method for stable hydrogen isotope ratio measurements. Time to reach isotopic equilibrium, day-to-day and week-to-week reproducibility, accuracy, and precision of stable hydrogen isotope ratio measurements by the H(2)-water equilibration method were assessed using a Thermo DELTA V Advantage continuous-flow isotope ratio mass spectrometer. It took 3 h to reach isotopic equilibrium. The day-to-day and week-to-week measurements on water and urine samples with natural abundance and enriched levels of deuterium were highly reproducible. The method was accurate to within 2.8‰ and reproducible to within 4.0‰ based on analysis of international references. All the outcome variables, whether in urine samples collected in 10 doubly labeled water studies or plasma samples collected in 26 body water studies, did not differ from those obtained using the reference zinc reduction method. The method produced highly accurate estimation of ad libitum energy intakes, body composition, and water turnover rates. The method greatly reduces the analytical cost and could easily be adopted by laboratories equipped with a continuous-flow isotope ratio mass spectrometer.
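
    For reference, stable hydrogen isotope ratios in such studies are conventionally reported in delta notation relative to the VSMOW standard, which is what the per-mil (‰) accuracy and reproducibility figures above refer to:

```latex
\delta^{2}\mathrm{H} = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{VSMOW}}} - 1\right)\times 1000\ \text{‰},
\qquad R = {}^{2}\mathrm{H}/{}^{1}\mathrm{H}
```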

  5. A Hydrogen Gas-Water Equilibration Method Produces Accurate and Precise Stable Hydrogen Isotope Ratio Measurements in Nutrition Studies12

    PubMed Central

    Wong, William W.; Clarke, Lucinda L.

    2012-01-01

    Stable hydrogen isotope methodology is used in nutrition studies to measure growth, breast milk intake, and energy requirement. Isotope ratio MS is the best instrumentation to measure the stable hydrogen isotope ratios in physiological fluids. Conventional methods to convert physiological fluids to hydrogen gas (H2) for mass spectrometric analysis are labor intensive, require special reagent, and involve memory effect and potential isotope fractionation. The objective of this study was to determine the accuracy and precision of a platinum catalyzed H2-water equilibration method for stable hydrogen isotope ratio measurements. Time to reach isotopic equilibrium, day-to-day and week-to-week reproducibility, accuracy, and precision of stable hydrogen isotope ratio measurements by the H2-water equilibration method were assessed using a Thermo DELTA V Advantage continuous-flow isotope ratio mass spectrometer. It took 3 h to reach isotopic equilibrium. The day-to-day and week-to-week measurements on water and urine samples with natural abundance and enriched levels of deuterium were highly reproducible. The method was accurate to within 2.8‰ and reproducible to within 4.0‰ based on analysis of international references. All the outcome variables, whether in urine samples collected in 10 doubly labeled water studies or plasma samples collected in 26 body water studies, did not differ from those obtained using the reference zinc reduction method. The method produced highly accurate estimation of ad libitum energy intakes, body composition, and water turnover rates. The method greatly reduces the analytical cost and could easily be adopted by laboratories equipped with a continuous-flow isotope ratio mass spectrometer. PMID:23014490

  6. High Fidelity Non-Gravitational Force Models for Precise and Accurate Orbit Determination of TerraSAR-X

    NASA Astrophysics Data System (ADS)

    Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph

    Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with a very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. Basically, the precise reconstruction of the satellite’s trajectory is based on the Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a proper analysis of the orbit quality, systematics in the orbit products have been identified, which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun. Therefore, the direct SRP has an effect on the lateral stability of the determined orbit. The indirect effect of the solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight, which is reflected by the illuminated Earth surface in the visible, and the emission of the Earth body in the infrared spectra. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
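
    As a point of reference, the direct solar radiation pressure acceleration in the simplest "cannonball" approximation takes the form below; the detailed macro model described in the abstract instead sums panel-wise contributions with individual absorption, diffuse, and specular reflection coefficients, so this expression is only an orienting sketch with generic symbols.

```latex
\mathbf{a}_{\mathrm{SRP}} = -\,C_{R}\,\frac{A}{m}\,\frac{\Phi_{\odot}}{c}\left(\frac{1\,\mathrm{AU}}{r_{\odot}}\right)^{2}\hat{\mathbf{e}}_{\odot}
```

    Here Φ⊙ ≈ 1361 W/m² is the solar flux at 1 AU, A/m the effective area-to-mass ratio, C_R the radiation pressure coefficient, r⊙ the Sun–satellite distance, and ê⊙ the unit vector from the satellite toward the Sun.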

  7. A case study of the sensitivity to LFV operators with precision measurements and the LHC

    NASA Astrophysics Data System (ADS)

    Cai, Yi; Schmidt, Michael A.

    2016-02-01

    We compare the sensitivity of precision measurements of lepton flavour observables to the reach of the LHC in a case study of lepton-flavour violating operators of dimension six with two leptons and two quarks. For light quarks precision measurements always yield the more stringent constraints. The LHC complements precision measurements for operators with heavier quarks. Competitive limits can already be set on the cutoff scale Λ > 600-800 GeV for operators with right-handed τ leptons using the LHC run 1 data.

  8. 3'READS+, a sensitive and accurate method for 3' end sequencing of polyadenylated RNA.

    PubMed

    Zheng, Dinghai; Liu, Xiaochuan; Tian, Bin

    2016-10-01

    Sequencing of the 3' end of poly(A)(+) RNA identifies cleavage and polyadenylation sites (pAs) and measures transcript expression. We previously developed a method, 3' region extraction and deep sequencing (3'READS), to address mispriming issues that often plague 3' end sequencing. Here we report a new version, named 3'READS+, which has vastly improved accuracy and sensitivity. Using a special locked nucleic acid oligo to capture poly(A)(+) RNA and to remove the bulk of the poly(A) tail, 3'READS+ generates RNA fragments with an optimal number of terminal A's that balance data quality and detection of genuine pAs. With improved RNA ligation steps for efficiency, the method shows much higher sensitivity (over two orders of magnitude) compared to the previous version. Using 3'READS+, we have uncovered a sizable fraction of previously overlooked pAs located next to or within a stretch of adenylate residues in human genes and more accurately assessed the frequency of alternative cleavage and polyadenylation (APA) in HeLa cells (∼50%). 3'READS+ will be a useful tool to accurately study APA and to analyze gene expression by 3' end counting, especially when the amount of input total RNA is limited. PMID:27512124

  9. 3'READS+, a sensitive and accurate method for 3' end sequencing of polyadenylated RNA.

    PubMed

    Zheng, Dinghai; Liu, Xiaochuan; Tian, Bin

    2016-10-01

    Sequencing of the 3' end of poly(A)(+) RNA identifies cleavage and polyadenylation sites (pAs) and measures transcript expression. We previously developed a method, 3' region extraction and deep sequencing (3'READS), to address mispriming issues that often plague 3' end sequencing. Here we report a new version, named 3'READS+, which has vastly improved accuracy and sensitivity. Using a special locked nucleic acid oligo to capture poly(A)(+) RNA and to remove the bulk of the poly(A) tail, 3'READS+ generates RNA fragments with an optimal number of terminal A's that balance data quality and detection of genuine pAs. With improved RNA ligation steps for efficiency, the method shows much higher sensitivity (over two orders of magnitude) compared to the previous version. Using 3'READS+, we have uncovered a sizable fraction of previously overlooked pAs located next to or within a stretch of adenylate residues in human genes and more accurately assessed the frequency of alternative cleavage and polyadenylation (APA) in HeLa cells (∼50%). 3'READS+ will be a useful tool to accurately study APA and to analyze gene expression by 3' end counting, especially when the amount of input total RNA is limited.

  10. Guided resonances on lithium niobate for extremely small electric field detection investigated by accurate sensitivity analysis.

    PubMed

    Qiu, Wentao; Ndao, Abdoulaye; Lu, Huihui; Bernal, Maria-Pilar; Baida, Fadi Issam

    2016-09-01

    We present a theoretical study of guided resonances (GR) on a thin-film lithium niobate rectangular lattice photonic crystal by band diagram calculations and 3D Finite Difference Time Domain (FDTD) transmission investigations which cover a broad range of parameters. A photonic crystal with an active zone as small as 13 μm × 13 μm × 0.7 μm can be easily designed to obtain a resonance Q value on the order of 1000. These resonances are then employed in electric field (E-field) sensing applications exploiting the electro-optic (EO) effect of lithium niobate. A local field factor, calculated for each FDTD cell, is proposed to accurately estimate the sensitivity of the GR-based E-field sensor. The local field factor yields good agreement between simulations and reported experimental data, thereby providing a valuable method for optimizing the GR structure to obtain high sensitivities. When these resonances are combined with a sub-picometer optical spectrum analyzer and a high-field-enhancement antenna design, an E-field probe with a sensitivity of 50 μV/m could be achieved. The results of our simulations could also be exploited in other EO-based applications, such as EEG (electroencephalography) or ECG (electrocardiography) probes and E-field frequency detectors whose probes are effectively 'invisible' to the field being detected. PMID:27607627
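
    The sensing mechanism relies on the Pockels effect of lithium niobate: to first order, the extraordinary-index change produced by the field inside each FDTD cell can be written as below, where f_loc stands for the local field factor introduced in the abstract (the symbols are generic, not the paper's notation).

```latex
\Delta n_{e} = -\tfrac{1}{2}\, n_{e}^{3}\, r_{33}\, f_{\mathrm{loc}}\, E_{\mathrm{ext}}
```

    With n_e the extraordinary refractive index and r_33 ≈ 30 pm/V the largest electro-optic coefficient of LiNbO3, the index change shifts the guided-resonance wavelength, which is then read out on the optical spectrum analyzer.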

  11. Phantom instabilities in adiabatically driven systems: dynamical sensitivity to computational precision.

    PubMed

    Jafri, Haider Hasan; Singh, Thounaojam Umeshkanta; Ramaswamy, Ramakrishna

    2012-09-01

    We study the robustness of dynamical phenomena in adiabatically driven nonlinear mappings with skew-product structure. Deviations from true orbits are observed when computations are performed with inadequate numerical precision for monotone, periodic, or quasiperiodic driving. The effect of slow modulation is to "freeze" orbits in long intervals of purely contracting or purely expanding dynamics in the phase space. When computations are carried out with low precision, numerical errors build up phantom instabilities which ultimately force trajectories to depart from the true motion. Thus, the dynamics observed with finite precision computation shows sensitivity to numerical precision: the minimum accuracy required to obtain "true" trajectories is proportional to an internal timescale that can be defined for the adiabatic system.

  12. High-precision topography measurement through accurate in-focus plane detection with hybrid digital holographic microscope and white light interferometer module.

    PubMed

    Liżewski, Kamil; Tomczewski, Sławomir; Kozacki, Tomasz; Kostencka, Julianna

    2014-04-10

    High-precision topography measurement of micro-objects using interferometric and holographic techniques can be realized provided that the in-focus plane of an imaging system is very accurately determined. Therefore, in this paper we propose an accurate technique for in-focus plane determination, which is based on coherent and incoherent light. The proposed method consists of two major steps. First, a calibration of the imaging system with an amplitude object is performed with a common autofocusing method using coherent illumination, which allows for accurate localization of the in-focus plane position. In the second step, the position of the detected in-focus plane with respect to the imaging system is measured with white light interferometry. The obtained distance is used to accurately adjust a sample with the precision required for the measurement. The experimental validation of the proposed method is given for measurement of high-numerical-aperture microlenses with subwavelength accuracy.

  13. Accurate calculation of control-augmented structural eigenvalue sensitivities using reduced-order models

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1989-01-01

    A method is presented for generating mode shapes for model order reduction in a way that leads to accurate calculation of eigenvalue derivatives and eigenvalues for a class of control augmented structures. The method is based on treating degrees of freedom where control forces act or masses are changed in a manner analogous to that used for boundary degrees of freedom in component mode synthesis. It is especially suited for structures controlled by a small number of actuators and/or tuned by a small number of concentrated masses whose positions are predetermined. A control augmented multispan beam with closely spaced natural frequencies is used for numerical experimentation. A comparison with reduced-order eigenvalue sensitivity calculations based on the normal modes of the structure shows that the method presented produces significant improvements in accuracy.
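
    For orientation, the eigenvalue sensitivities that such reduced-order models must reproduce take the standard first-order form below, written here for a control-augmented (generally non-self-adjoint) state matrix A(p) with right and left eigenvectors; this is generic textbook notation, not the paper's formulation.

```latex
A(p)\,\phi_{i} = \lambda_{i}\,\phi_{i}, \qquad \psi_{i}^{H} A(p) = \lambda_{i}\,\psi_{i}^{H},
\qquad
\frac{\partial \lambda_{i}}{\partial p} = \frac{\psi_{i}^{H}\,\dfrac{\partial A}{\partial p}\,\phi_{i}}{\psi_{i}^{H}\phi_{i}}
```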

  14. Toward Sensitive and Accurate Analysis of Antibody Biotherapeutics by Liquid Chromatography Coupled with Mass Spectrometry

    PubMed Central

    An, Bo; Zhang, Ming

    2014-01-01

    Remarkable methodological advances in the past decade have expanded the application of liquid chromatography coupled with mass spectrometry (LC/MS) analysis of biotherapeutics. Currently, LC/MS represents a promising alternative or supplement to the traditional ligand binding assay (LBA) in the pharmacokinetic, pharmacodynamic, and toxicokinetic studies of protein drugs, owing to the rapid and cost-effective method development, high specificity and reproducibility, low sample consumption, the capacity of analyzing multiple targets in one analysis, and the fact that a validated method can be readily adapted across various matrices and species. While promising, technical challenges associated with sensitivity, sample preparation, method development, and quantitative accuracy need to be addressed to enable full utilization of LC/MS. This article introduces the rationale and technical challenges of LC/MS techniques in biotherapeutics analysis and summarizes recently developed strategies to alleviate these challenges. Applications of LC/MS techniques on quantification and characterization of antibody biotherapeutics are also discussed. We speculate that despite the highly attractive features of LC/MS, it will not fully replace traditional assays such as LBA in the foreseeable future; instead, the forthcoming trend is likely the conjunction of biochemical techniques with versatile LC/MS approaches to achieve accurate, sensitive, and unbiased characterization of biotherapeutics in highly complex pharmaceutical/biologic matrices. Such combinations will constitute powerful tools to tackle the challenges posed by the rapidly growing needs for biotherapeutics development. PMID:25185260

  15. High precision, high sensitivity distributed displacement and temperature measurements using OFDR-based phase tracking

    NASA Astrophysics Data System (ADS)

    Gifford, Dawn K.; Froggatt, Mark E.; Kreger, Stephen T.

    2011-05-01

    Optical Frequency Domain Reflectometry is used to measure distributed displacement and temperature change with very high sensitivity and precision by measuring the phase change of an optical fiber sensor as a function of distance with high spatial resolution and accuracy. A fiber containing semi-continuous Bragg gratings was used as the sensor. The effective length change, or displacement, in the fiber caused by small temperature changes was measured as a function of distance with a precision of 2.4 nm and a spatial resolution of 1.5 mm. The temperature changes calculated from this displacement were measured with a precision of 0.001 °C with an effective sensor gauge length of 12 cm. These results demonstrate that the employed method of continuously tracking the phase change along the length of the fiber sensor enables high resolution distributed measurements that can be used to detect very small displacements, temperature changes, or strains.
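
    The displacement figure quoted above follows from the tracked optical phase. In reflectometry the round-trip phase accumulated over a fiber segment gives, to first order and with generic symbols (assuming the refractive index of the fiber and neglecting strain-optic corrections):

```latex
\phi = \frac{4\pi n L}{\lambda}
\quad\Longrightarrow\quad
\Delta L = \frac{\lambda\,\Delta\phi}{4\pi n}
```

    At λ ≈ 1550 nm and n ≈ 1.47, one radian of phase corresponds to roughly 84 nm of effective length change, so a phase resolution of a few tens of milliradians is consistent with the few-nanometre displacement precision reported.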

  16. Assignment of Calibration Information to Deeper Phylogenetic Nodes is More Effective in Obtaining Precise and Accurate Divergence Time Estimates.

    PubMed

    Mello, Beatriz; Schrago, Carlos G

    2014-01-01

    Divergence time estimation has become an essential tool for understanding macroevolutionary events. Molecular dating aims to obtain reliable inferences, which, within a statistical framework, means jointly increasing the accuracy and precision of estimates. Bayesian dating methods exhibit the property of a linear relationship between uncertainty and estimated divergence dates. This relationship occurs even if the number of sites approaches infinity and places a limit on the maximum precision of node ages. However, how the placement of calibration information may affect the precision of divergence time estimates remains an open question. In this study, relying on simulated and empirical data, we investigated how the location of calibration within a phylogeny affects the accuracy and precision of time estimates. We found that calibration priors set at median and deep phylogenetic nodes were associated with higher precision values compared to analyses involving calibration at the shallowest node. The results were independent of the tree symmetry. An empirical mammalian dataset produced results that were consistent with those generated by the simulated sequences. Assigning time information to the deeper nodes of a tree is crucial to guarantee the accuracy and precision of divergence times. This finding highlights the importance of the appropriate choice of outgroups in molecular dating. PMID:24855333

  17. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial?

    PubMed Central

    El-Amrawy, Fatema

    2015-01-01

    Objectives The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Methods Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data were recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, the Xiaomi Mi Band offered the best value for its price. Conclusions The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039
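
    A minimal sketch of the accuracy and precision metrics described above, assuming accuracy is taken as 100% minus the mean absolute percent error against the instructed step count and precision as the coefficient of variation of the repeated trials; the study's exact definitions may differ, and the example data are synthetic.

```python
# Accuracy and precision (CV) of a step-count tracker from repeated trials.
import numpy as np

def tracker_metrics(recorded_counts, true_count):
    counts = np.asarray(recorded_counts, dtype=float)
    mean = counts.mean()
    sd = counts.std(ddof=1)
    accuracy = 100.0 * (1.0 - np.mean(np.abs(counts - true_count)) / true_count)
    cv = 100.0 * sd / mean            # precision as coefficient of variation, %
    return mean, sd, accuracy, cv

# Example: 40 repeated 500-step walks recorded by a hypothetical tracker
rng = np.random.default_rng(0)
trials = rng.normal(490, 25, size=40)
print(tracker_metrics(trials, true_count=500))
```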

  18. Precise and accurate measurement of U and Th isotopes via ICP-MS using a single solution

    NASA Astrophysics Data System (ADS)

    Mertz-Kraus, R.; Sharp, W. D.; Ludwig, K. R.

    2012-04-01

    U-series isotope measurements by ICP-MS commonly utilize separate runs for U and Th and standard-sample bracketing to determine correction factors for mass fractionation and ion counter yields. Here we present an approach where all information necessary to calculate an age (aside from background/baseline levels) is determined while analyzing a single solution containing both U and Th. This internally calibrated procedure should reduce any bias caused by distinct behavior of sample versus standard solutions during analysis and offers advantages including simplicity of operation, calculation of preliminary ages in real time, and simplified analysis of errors and their sources. Hellstrom (2003) developed a single-solution, internally-calibrated technique for an ICP-MS with multiple ion counters, but to our knowledge no such technique is available for an ICP-MS with a single ion counter. We use a Thermo Neptune Plus multi-collector ICP-MS with eight movable Faraday cups and a fixed center cup/ion counter equipped with a high abundance-sensitivity filter (RPQ). We use Faraday cups to measure all masses except 230 and 234, which are measured on the ion counter with the RPQ detuned (i.e., Suppressor voltage = 9950 V). 238U is maintained in a cup throughout the analysis to avoid reflections and is used to normalize signal instabilities related to sample introduction. Each analysis has a three-part structure in which 1) background/baseline levels, 2) sample composition, and 3) peak tails are determined sequentially. In step 1, multiplier dark noise/Faraday baselines plus background intensities at each mass are determined while aspirating running solution. During sample measurement in step 2, ion counter yields for Th and U are determined using signals of 300-400 kcps for 229Th and 233U by measuring 229Th/238U and 233U/238U ratios first with the minor masses on the ion counter and then with both masses in cups. Mass bias can be determined using the 233U/236U ratio of the spike
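
    For context, one common way such mass bias is quantified and applied is the exponential law, written here for the 233U/236U ratio of the spike; the notation is generic and not necessarily the exact formulation used by the authors.

```latex
R_{\mathrm{meas}} = R_{\mathrm{true}}\left(\frac{m_{233}}{m_{236}}\right)^{\beta}
\quad\Longrightarrow\quad
\beta = \frac{\ln\!\left(R_{\mathrm{meas}}/R_{\mathrm{true}}\right)}{\ln\!\left(m_{233}/m_{236}\right)}
```

    The exponent β determined from the spike ratio is then used to correct the other measured U and Th isotope ratios for instrumental mass fractionation.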

  19. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. Thus the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming. It did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  20. Polyallelic structural variants can provide accurate, highly informative genetic markers focused on diagnosis and therapeutic targets: Accuracy vs. Precision.

    PubMed

    Roses, A D

    2016-02-01

    Structural variants (SVs) include all insertions, deletions, and rearrangements in the genome, with several common types of nucleotide repeats including single sequence repeats, short tandem repeats, and insertion-deletion length variants. Polyallelic SVs provide highly informative markers for association studies with well-phenotyped cohorts. SVs can influence gene regulation by affecting epigenetics, transcription, splicing, and/or translation. Accurate assays of polyallelic SV loci are required to define the range and allele frequency of variable length alleles. PMID:26517180

  1. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity

    PubMed Central

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    Finding fungicides effective against Fusarium is a key step in chemical plant protection and in choosing appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the usage of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective methods for detecting Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacterial identification and the evaluation of the ability of bacteria to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the approach by comparing the traditional hole-plate and MT2 microplate assays. In the present study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension for each isolate was standardized to 75% transmittance. The traditional hole-plate method was used as a control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  2. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity.

    PubMed

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    Finding fungicides effective against Fusarium is a key step in chemical plant protection and in choosing appropriate chemical agents. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming and potentially environmentally harmful due to the usage of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate and effective methods for detecting Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog™) method is traditionally used for bacterial identification and the evaluation of the ability of bacteria to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the approach by comparing the traditional hole-plate and MT2 microplate assays. In the present study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim) and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. The particular concentrations of each fungicide were loaded into MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension for each isolate was standardized to 75% transmittance. The traditional hole-plate method was used as a control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  3. Pharmacogenomics of platinum-based chemotherapy sensitivity in NSCLC: toward precision medicine.

    PubMed

    Yin, Ji-Ye; Li, Xi; Zhou, Hong-Hao; Liu, Zhao-Qian

    2016-08-01

    Lung cancer is one of the leading causes of cancer-related death in the world. Platinum-based chemotherapy is the first-line treatment for non-small-cell lung cancer (NSCLC); however, the therapeutic efficacy varies remarkably among individuals. A large number of pharmacogenomics studies have aimed to identify genetic variations that can be used to predict platinum response. Those studies are leading NSCLC treatment into the new era of precision medicine. In the current review, we provide a comprehensive update on the main recent findings on genetic variations that can be used to predict platinum sensitivity in NSCLC patients.

  4. Pharmacogenomics of platinum-based chemotherapy sensitivity in NSCLC: toward precision medicine.

    PubMed

    Yin, Ji-Ye; Li, Xi; Zhou, Hong-Hao; Liu, Zhao-Qian

    2016-08-01

    Lung cancer is one of the leading causes of cancer-related death in the world. Platinum-based chemotherapy is the first-line treatment for non-small-cell lung cancer (NSCLC); however, the therapeutic efficacy varies remarkably among individuals. A large number of pharmacogenomics studies have aimed to identify genetic variations that can be used to predict platinum response. Those studies are leading NSCLC treatment into the new era of precision medicine. In the current review, we provide a comprehensive update on the main recent findings on genetic variations that can be used to predict platinum sensitivity in NSCLC patients. PMID:27462924

  5. MASS MEASUREMENTS BY AN ACCURATE AND SENSITIVE SELECTED ION RECORDING TECHNIQUE

    EPA Science Inventory

    Trace-level components of mixtures were successfully identified or confirmed by mass spectrometric accurate mass measurements, made at high resolution with selected ion recording, using GC and LC sample introduction. Measurements were made at 20 000 or 10 000 resolution, respecti...

  6. Technique for determination of accurate heat capacities of volatile, powdered, or air-sensitive samples using relaxation calorimetry

    NASA Astrophysics Data System (ADS)

    Marriott, Robert A.; Stancescu, Maria; Kennedy, Catherine A.; White, Mary Anne

    2006-09-01

    We introduce a four-step technique for the accurate determination of the heat capacity of volatile or air-sensitive samples using relaxation calorimetry. The samples are encapsulated in a hermetically sealed differential scanning calorimetry pan, in which there is an internal layer of Apiezon N grease to assist thermal relaxation. Using the Quantum Design physical property measurement system to investigate benzoic acid and copper standards, we find that this method can lead to heat capacity determinations accurate to ±2% over the temperature range of 1–300 K, even for very small samples (e.g., <10 mg and contributing ca. 20% to the total heat capacity).

  7. Towards an Accurate and Precise Chronology for the Colonization of Australia: The Example of Riwi, Kimberley, Western Australia

    PubMed Central

    Balme, Jane; O’Connor, Sue; Whitau, Rose

    2016-01-01

    An extensive series of 44 radiocarbon (14C) and 37 optically stimulated luminescence (OSL) ages have been obtained from the site of Riwi, south central Kimberley (NW Australia). As one of the earliest known Pleistocene sites in Australia, with archaeologically sterile sediment beneath deposits containing occupation, the chronology of the site is important in renewed debates surrounding the colonization of Sahul. Charcoal is preserved throughout the sequence and within multiple discrete hearth features. Prior to 14C dating, charcoal has been pretreated with both acid-base-acid (ABA) and acid base oxidation-stepped combustion (ABOx-SC) methods at multiple laboratories. Ages are consistent between laboratories and also between the two pretreatment methods, suggesting that contamination is easily removed from charcoal at Riwi and the Pleistocene ages are likely to be accurate. Whilst some charcoal samples recovered from outside hearth features are identified as outliers within a Bayesian model, all ages on charcoal within hearth features are consistent with stratigraphy. OSL dating has been undertaken using single quartz grains from the sandy matrix. The majority of samples show De distributions that are well-bleached but that also include evidence for mixing as a result of post-depositional bioturbation of the sediment. The results of the two techniques are compared and evaluated within a Bayesian model. Consistency between the two methods is good, and we demonstrate human occupation at this site from 46.4–44.6 cal kBP (95.4% probability range). Importantly, the lowest archaeological horizon at Riwi is underlain by sterile sediments which have been dated by OSL making it possible to demonstrate the absence of human occupation for between 0.9–5.2 ka (68.2% probability range) prior to occupation. PMID:27655174

  8. Accurate and Precise Bottom Water Paleotemperatures from Aragonitic Benthic Foraminiferal Li/Mg: Calibration, Theory, and Application

    NASA Astrophysics Data System (ADS)

    Marchitto, T. M., Jr.; Valley, S.; Lynch-Stieglitz, J.

    2015-12-01

    While great progress has been made in reconstructing past sea surface temperatures, reliable bottom water paleotemperature measurements are not routinely available. We suggest that Li/Mg ratios in biogenic aragonites, particularly in the cosmopolitan benthic foraminifer Hoeglundina elegans, have the potential to bridge this gap. Core top calibration shows that H. elegans Li/Mg decreases by 5.5% per °C (r2 = 0.91), with a relationship that is nearly identical to that displayed by a wide range of corals (r2 = 0.95). The fact that such disparate organisms behave so similarly suggests to us that thermodynamics are shining through the 'vital effects' that so often plague paleoceanographic proxies. We hypothesize that Ca2+ pumping causes Li/Ca and Mg/Ca ratios in the organisms' calcification pools to decline, while Li/Mg remains constant. Rayleigh fractionation has the opposite effect on calcification pool Li/Ca and Mg/Ca (they rise), while Li/Mg still remains essentially constant. Hence any environmental influences on Ca2+ pumping and/or Rayleigh fractionation, such as seawater carbonate chemistry, have no measurable effects on aragonite Li/Mg. Our first downcore test of the Li/Mg proxy is performed in core KNR166-2-26JPC from 546 m water depth in the Florida Straits. Benthic foraminiferal δ18O was previously used to document decreased seawater density during both Heinrich Stadial 1 (HS1) and the Younger Dryas (YD), consistent with flattening of isopycnals across the Florida Current caused by slowdown of the AMOC. Here we show striking agreement between H. elegans Li/Mg and ice-volume-corrected δ18O temperatures since ~17 ka (in both absolute values and temporal changes), confirming that bottom waters abruptly warmed during HS1 and the YD. The YD, which is better-resolved, was ~2°C warmer than the Holocene. Li/Mg indicates that Last Glacial Maximum bottom waters were ~2-3°C, or ~5°C colder than during the Holocene. If these glacial temperatures are accurate, they
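
    The quoted sensitivity of 5.5% per °C implies an exponential calibration of roughly the following form, from which bottom water temperature is recovered by inversion; the pre-exponential constant A stands in for the core-top calibration intercept and is not a value taken from the paper.

```latex
\mathrm{Li/Mg} = A\,e^{-0.055\,T}
\quad\Longrightarrow\quad
T = -\frac{1}{0.055}\,\ln\!\left(\frac{\mathrm{Li/Mg}}{A}\right)
```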

  9. How accurate and precise are limited sampling strategies in estimating exposure to mycophenolic acid in people with autoimmune disease?

    PubMed

    Abd Rahman, Azrin N; Tett, Susan E; Staatz, Christine E

    2014-03-01

    Mycophenolic acid (MPA) is a potent immunosuppressant agent, which is increasingly being used in the treatment of patients with various autoimmune diseases. Dosing to achieve a specific target MPA area under the concentration-time curve from 0 to 12 h post-dose (AUC12) is likely to lead to better treatment outcomes in patients with autoimmune disease than a standard fixed-dose strategy. This review summarizes the available published data around concentration monitoring strategies for MPA in patients with autoimmune disease and examines the accuracy and precision of methods reported to date using limited concentration-time points to estimate MPA AUC12. A total of 13 studies were identified that assessed the correlation between single time points and MPA AUC12 and/or examined the predictive performance of limited sampling strategies in estimating MPA AUC12. The majority of studies investigated mycophenolate mofetil (MMF) rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation of MPA. Correlations between MPA trough concentrations and MPA AUC12 estimated by full concentration-time profiling ranged from 0.13 to 0.94 across ten studies, with the highest associations (r² = 0.90–0.94) observed in lupus nephritis patients. Correlations were generally higher in autoimmune disease patients compared with renal allograft recipients and higher after MMF compared with EC-MPS intake. Four studies investigated use of a limited sampling strategy to predict MPA AUC12 determined by full concentration-time profiling. Three studies used a limited sampling strategy consisting of a maximum combination of three sampling time points with the latest sample drawn 3-6 h after MMF intake, whereas the remaining study tested all combinations of sampling times. MPA AUC12 was best predicted when three samples were taken at pre-dose and at 1 and 3 h post-dose with a mean bias and imprecision of 0.8 and 22.6 % for multiple linear regression analysis and of -5.5 and 23.0 % for
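
    For reference, a limited sampling strategy of the kind evaluated above typically takes a multiple linear regression form, with bias and imprecision reported as mean (absolute) percentage prediction errors; the expressions below are generic, and the regression coefficients are study-specific rather than values from this review.

```latex
\widehat{\mathrm{AUC}}_{12} = \beta_{0} + \beta_{1} C_{0} + \beta_{2} C_{1} + \beta_{3} C_{3}
```

```latex
\mathrm{bias} = \frac{1}{n}\sum_{i}\frac{\widehat{\mathrm{AUC}}_{i} - \mathrm{AUC}_{i}}{\mathrm{AUC}_{i}}\times 100\%,
\qquad
\mathrm{imprecision} = \frac{1}{n}\sum_{i}\left|\frac{\widehat{\mathrm{AUC}}_{i} - \mathrm{AUC}_{i}}{\mathrm{AUC}_{i}}\right|\times 100\%
```

    Here C_0, C_1, and C_3 denote the pre-dose, 1 h, and 3 h post-dose concentrations mentioned in the abstract.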

  10. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags

    PubMed Central

    Wee, Eugene J.H.; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current approaches based on fluorescence, such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR), have limitations in multiplex detection and sensitivity, and require expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, and the results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet with the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research. PMID:27446486

  11. Fast and accurate sensitivity analysis of IMPT treatment plans using Polynomial Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Perkó, Zoltán; van der Voort, Sebastian R.; van de Water, Steven; Hartman, Charlotte M. H.; Hoogeman, Mischa; Lathouwers, Danny

    2016-06-01

    The highly conformal planned dose distribution achievable in intensity modulated proton therapy (IMPT) can be severely compromised by uncertainties in patient setup and proton range. While several robust optimization approaches have been presented to address this issue, appropriate methods to accurately estimate the robustness of treatment plans are still lacking. To fill this gap, we present Polynomial Chaos Expansion (PCE) techniques which are easily applicable and create a meta-model of the dose engine by approximating the dose in every voxel with multidimensional polynomials. This Polynomial Chaos (PC) model can be built relatively cheaply in an automated fashion, and subsequently it can be used to perform comprehensive robustness analysis. We adapted PC to provide, among other quantities, the expected dose, the dose variance, accurate probability distributions of dose-volume histogram (DVH) metrics (e.g. minimum tumor or maximum organ dose), exact bandwidths of DVHs, and to separate the effects of random and systematic errors. We present the outcome of our verification experiments based on 6 head-and-neck (HN) patients, and exemplify the usefulness of PCE by comparing a robust and a non-robust treatment plan for a selected HN case. The results suggest that PCE is highly valuable for both research and clinical applications.
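
    A minimal one-dimensional sketch of the Polynomial Chaos idea, assuming a single standard-normal setup-error variable and a toy dose model (the paper builds multidimensional PC surrogates of the full dose engine for every voxel). Coefficients are obtained by projection with Gauss-Hermite quadrature, after which the expected dose and dose variance follow directly from the coefficients; all names and numbers below are illustrative.

```python
# 1-D Polynomial Chaos surrogate with probabilists' Hermite polynomials He_n,
# which are orthogonal under the standard normal weight: E[He_m He_n] = n! δ_mn.
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def dose(shift):
    # Toy dose response to a Gaussian setup shift (hypothetical model)
    return 60.0 * np.exp(-0.5 * (shift / 2.5) ** 2)

order = 6
x, w = He.hermegauss(30)              # Gauss-Hermite nodes/weights, weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)          # normalize so the weights integrate the N(0,1) density

# Projection: c_n = E[dose(X) He_n(X)] / n!
coeffs = np.array([
    np.sum(w * dose(x) * He.hermeval(x, [0] * n + [1])) / factorial(n)
    for n in range(order + 1)
])

mean_dose = coeffs[0]                                          # E[dose]
var_dose = sum(coeffs[n] ** 2 * factorial(n) for n in range(1, order + 1))
print(f"expected dose = {mean_dose:.2f}, dose std = {np.sqrt(var_dose):.2f}")
```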

  12. Digital PCR methods improve detection sensitivity and measurement precision of low abundance mtDNA deletions

    PubMed Central

    Belmonte, Frances R.; Martin, James L.; Frescura, Kristin; Damas, Joana; Pereira, Filipe; Tarnopolsky, Mark A.; Kaufman, Brett A.

    2016-01-01

    Mitochondrial DNA (mtDNA) mutations are a common cause of primary mitochondrial disorders, and have also been implicated in a broad collection of conditions, including aging, neurodegeneration, and cancer. Prevalent among these pathogenic variants are mtDNA deletions, which show a strong bias for the loss of sequence in the major arc between, but not including, the heavy and light strand origins of replication. Because individual mtDNA deletions can accumulate focally, occur with multiple mixed breakpoints, and in the presence of normal mtDNA sequences, methods that detect broad-spectrum mutations with enhanced sensitivity and limited costs have both research and clinical applications. In this study, we evaluated semi-quantitative and digital PCR-based methods of mtDNA deletion detection using double-stranded reference templates or biological samples. Our aim was to describe key experimental assay parameters that will enable the analysis of low levels or small differences in mtDNA deletion load during disease progression, with limited false-positive detection. We determined that the digital PCR method significantly improved mtDNA deletion detection sensitivity through absolute quantitation, improved precision and reduced assay standard error. PMID:27122135
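
    A small sketch of the absolute quantitation underlying digital PCR, assuming template molecules distribute across partitions according to Poisson statistics; the function names and the deletion-load definition (deletion copies over total mtDNA copies) are illustrative rather than taken from the paper.

```python
# Poisson-corrected copy estimates from digital PCR partition counts.
import math

def copies_per_partition(positive, total):
    """Mean template molecules per partition, corrected for multiple occupancy."""
    p = positive / total
    return -math.log(1.0 - p)

def deletion_load(pos_del, pos_wt, total_partitions):
    lam_del = copies_per_partition(pos_del, total_partitions)
    lam_wt = copies_per_partition(pos_wt, total_partitions)
    return lam_del / (lam_del + lam_wt)

# Example: 120 deletion-positive and 14,500 wild-type-positive droplets of 18,000
print(f"deletion load = {deletion_load(120, 14500, 18000):.4%}")
```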

  13. Design and operation of a highly sensitive and accurate laser calorimeter for low-absorption materials

    NASA Astrophysics Data System (ADS)

    Kawate, Etsuo; Hanssen, Leonard M.; Kaplan, Simon G.; Datla, Raju V.

    1998-10-01

    This work surveys techniques to measure the absorption coefficient of low absorption materials. A laser calorimeter is being developed with a sensitivity goal of (1 ± 0.2) × 10⁻⁵ cm⁻¹ with one watt of laser power using a CO2 laser (9 μm to 11 μm), a CO laser (5 μm to 8 μm), a He-Ne laser (3.39 μm), and a pumped OPO tunable laser (2 μm to 4 μm) in the infrared region. Much attention has been given to the requirements for high sensitivity and to sources of systematic error including stray light. Our laser calorimeter is capable of absolute electrical calibration. Preliminary results for the absorption coefficient of highly transparent potassium chloride (KCl) samples are reported.

  14. SOAP3-dp: Fast, Accurate and Sensitive GPU-Based Short Read Aligner

    PubMed Central

    Zhu, Xiaoqian; Wu, Edward; Lee, Lap-Kei; Lin, Haoxiang; Zhu, Wenjuan; Cheung, David W.; Ting, Hing-Fung; Yiu, Siu-Ming; Peng, Shaoliang; Yu, Chang; Li, Yingrui; Li, Ruiqiang; Lam, Tak-Wah

    2013-01-01

    To tackle the exponentially increasing throughput of Next-Generation Sequencing (NGS), most of the existing short-read aligners can be configured to favor speed in trade of accuracy and sensitivity. SOAP3-dp, through leveraging the computational power of both CPU and GPU with optimized algorithms, delivers high speed and sensitivity simultaneously. Compared with widely adopted aligners including BWA, Bowtie2, SeqAlto, CUSHAW2, GEM and GPU-based aligners BarraCUDA and CUSHAW, SOAP3-dp was found to be two to tens of times faster, while maintaining the highest sensitivity and lowest false discovery rate (FDR) on Illumina reads with different lengths. Transcending its predecessor SOAP3, which does not allow gapped alignment, SOAP3-dp by default tolerates alignment similarity as low as 60%. Real data evaluation using human genome demonstrates SOAP3-dp's power to enable more authentic variants and longer Indels to be discovered. Fosmid sequencing shows a 9.1% FDR on newly discovered deletions. SOAP3-dp natively supports BAM file format and provides the same scoring scheme as BWA, which enables it to be integrated into existing analysis pipelines. SOAP3-dp has been deployed on Amazon-EC2, NIH-Biowulf and Tianhe-1A. PMID:23741504

  15. A sensitive and accurate atomic magnetometer based on free spin precession

    NASA Astrophysics Data System (ADS)

    Grujić, Zoran D.; Koss, Peter A.; Bison, Georg; Weis, Antoine

    2015-05-01

    We present a laser-based atomic magnetometer that allows inferring the modulus of a magnetic field from the free Larmor precession of spin-oriented Cs vapour atoms. The detection of free spin precession (FSP) is not subject to systematic readout errors that occur in phase feedback-controlled magnetometers in which the spin precession is actively driven by an oscillating field or a modulation of light parameters, such as frequency, amplitude, or polarization. We demonstrate that an FSP magnetometer can achieve a ˜200 fT/√Hz sensitivity (<100 fT/√Hz in the shot-noise limit) and an absolute accuracy at the same level.
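
    The field modulus follows directly from the measured free-precession frequency through the Larmor relation f = γ|B|, with γ expressed in Hz per nT. A minimal conversion is sketched below; the value of roughly 3.5 Hz/nT for the Cs ground state is quoted from memory and should be treated as approximate.

```python
GAMMA_CS_HZ_PER_NT = 3.5           # approximate Cs ground-state gyromagnetic ratio, Hz per nT

def field_from_fsp_frequency(larmor_hz, gamma_hz_per_nt=GAMMA_CS_HZ_PER_NT):
    """Magnetic field modulus (nT) from the free-spin-precession frequency."""
    return larmor_hz / gamma_hz_per_nt

# Example: a ~7 kHz Larmor frequency corresponds to a ~2 uT field
print(field_from_fsp_frequency(7000.0))   # -> ~2000 nT
```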

  16. Assessment of assay sensitivity and precision in a malaria antibody ELISA.

    PubMed

    Rajasekariah, G Halli R; Kay, Graeme E; Russell, Natrice V; Smithyman, Anthony M

    2003-01-01

    Many types of ELISA-based immunodiagnostic test kits are commercially available for specific indications. These kits provide the necessary assay components, reagents, and guidelines to perform the assay under designated optimal conditions. With these kits, any unknown or test sample can be assessed as negative or positive based on the results of referral calibrator (Ref+ve and Ref-ve) samples. It is essential to provide end-users with reliable test kits backed by adequate quality control analysis, so the kit must be checked for any variations in its performance. While developing a malaria antibody ELISA test kit, we optimized assay conditions with chequer-board analyses and developed an assay protocol. Kits were taken at random from the assembly line and evaluated by operators new to the test kit, with assays performed according to the guidelines provided. Serially diluted sera showed a clear discriminatory signal between negative and positive samples. A cut-off value (COV) is determined by evaluating the Ref-ve calibrator in replicate antigen-coated wells from 6 different plates; this COV is used to determine the signal-to-noise (S/N) ratio of test samples. Besides the Ref-ve and Ref+ve calibrators, additional field serum samples are tested with the kit. Several performance indices, such as the mean, standard deviation and %CV, are calculated, and the inter- and intra-assay variations determined. Assay precision is determined with large and small replicate samples. In addition, assays are performed concurrently in triplicate, duplicate and single wells, and the results are analyzed for assay variation. Different plate areas are identified in antigen-coated 96-well plates and tested blind to detect any positional variation. The S/N ratio is found to be a very effective tool in determining assay sensitivity. The %CV was within 10-15%. Variations seen in the assays are found to be due to operator errors and not to kit reagents. These
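
    A common way to implement the cut-off value (COV) and the variation statistics described here is to take the mean of the negative-calibrator replicates plus a multiple of their standard deviation, and to express precision as a percent coefficient of variation. The sketch below assumes that convention (mean + 2 SD); the exact rule used in the kit is not stated in the abstract, and the OD values are illustrative.

```python
import statistics

def cutoff_value(neg_ods, k=2.0):
    """COV from replicate negative-calibrator ODs: mean + k * SD (k is an assumption)."""
    return statistics.mean(neg_ods) + k * statistics.stdev(neg_ods)

def percent_cv(values):
    """Percent coefficient of variation for replicate measurements."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

neg = [0.08, 0.09, 0.07, 0.10, 0.08, 0.09]      # illustrative negative-calibrator ODs
cov = cutoff_value(neg)
sample_od = 0.62
print("S/N ratio:", sample_od / cov)             # ratio > 1 suggests a positive sample
print("%CV of negatives:", percent_cv(neg))
```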

  17. Determination of the biotin content of select foods using accurate and sensitive HPLC/avidin binding

    PubMed Central

    Staggs, C.G.; Sealey, W.M.; McCabe, B.J.; Teague, A.M.; Mock, D.M.

    2006-01-01

    Assessing dietary biotin content, biotin bioavailability, and resulting biotin status are crucial in determining whether biotin deficiency is teratogenic in humans. Accuracy in estimating dietary biotin is limited both by data gaps in food composition tables and by inaccuracies in published data. The present study applied sensitive and specific analytical techniques to determine values for biotin content in a select group of foods. Total biotin content of 87 foods was determined using acid hydrolysis and the HPLC/avidin-binding assay. These values are consistent with published values in that meat, fish, poultry, egg, dairy, and some vegetables are relatively rich sources of biotin. However, these biotin values disagreed substantially with published values for many foods. Assay values varied between 247 times greater than published values for a given food to as much as 36% less than the published biotin value. Among 51 foods assayed for which published values were available, only seven agreed within analytical variability (±20%). We conclude that published values for biotin content of foods are likely to be inaccurate. PMID:16648879

  18. Correlated cryo-fluorescence and cryo-electron microscopy with high spatial precision and improved sensitivity.

    PubMed

    Schorb, Martin; Briggs, John A G

    2014-08-01

    Performing fluorescence microscopy and electron microscopy on the same sample allows fluorescent signals to be used to identify and locate features of interest for subsequent imaging by electron microscopy. To carry out such correlative microscopy on vitrified samples appropriate for structural cryo-electron microscopy it is necessary to perform fluorescence microscopy at liquid-nitrogen temperatures. Here we describe an adaptation of a cryo-light microscopy stage to permit use of high-numerical aperture objectives. This allows high-sensitivity and high-resolution fluorescence microscopy of vitrified samples. We describe and apply a correlative cryo-fluorescence and cryo-electron microscopy workflow together with a fiducial bead-based image correlation procedure. This procedure allows us to locate fluorescent bacteriophages in cryo-electron microscopy images with an accuracy on the order of 50 nm, based on their fluorescent signal. It will allow the user to precisely and unambiguously identify and locate objects and events for subsequent high-resolution structural study, based on fluorescent signals.
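
    Fiducial-bead correlation of this kind typically amounts to fitting a coordinate transform from beads visible in both imaging modes and then mapping fluorescent spots into the electron-microscopy frame. The sketch below fits a least-squares similarity transform (rotation, isotropic scale, translation) and reports the residual registration error; this is a generic registration recipe under that assumption, not necessarily the authors' exact procedure, and the bead coordinates are made up.

```python
import numpy as np

def fit_similarity(fm, em):
    """Least-squares similarity transform mapping fluorescence coordinates (fm)
    onto EM coordinates (em); fm and em are (n, 2) arrays of matched beads."""
    mu_f, mu_e = fm.mean(axis=0), em.mean(axis=0)
    fc, ec = fm - mu_f, em - mu_e
    u, s, vt = np.linalg.svd(fc.T @ ec)           # Kabsch/Umeyama solution
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    rot = vt.T @ np.diag([1.0, d]) @ u.T
    scale = (s * [1.0, d]).sum() / (fc ** 2).sum()
    trans = mu_e - scale * rot @ mu_f
    return scale, rot, trans

def apply_transform(points, scale, rot, trans):
    return scale * points @ rot.T + trans

# Matched fiducial coordinates (illustrative, in nm) from both modalities
beads_fm = np.array([[100.0, 200.0], [800.0, 250.0], [450.0, 900.0], [120.0, 760.0]])
beads_em = np.array([[1020.0, 2050.0], [8030.0, 2410.0], [4490.0, 9030.0], [1150.0, 7630.0]])
s, r, t = fit_similarity(beads_fm, beads_em)
residual = apply_transform(beads_fm, s, r, t) - beads_em
print("RMS registration error:", np.sqrt((residual ** 2).sum(axis=1).mean()))
```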

  19. Sensitivity Analysis for Characterizing the Accuracy and Precision of JEM/SMILES Mesospheric O3

    NASA Astrophysics Data System (ADS)

    Esmaeili Mahani, M.; Baron, P.; Kasai, Y.; Murata, I.; Kasaba, Y.

    2011-12-01

    The main purpose of this study is to evaluate the Superconducting sub-Millimeter Limb Emission Sounder (SMILES) measurements of mesospheric ozone, O3. As a first step, the error due to the impact of Mesospheric Temperature Inversions (MTIs) on ozone retrieval has been determined. The impacts of other parameters, such as pressure variability and solar events, on mesospheric O3 will also be investigated. Ozone is known to be important because the stratospheric O3 layer protects life on Earth by absorbing harmful UV radiation. In the mesosphere, however, the short lifetime of O3 allows its chemistry to be studied without the complications of heterogeneous processes and dynamical variations. Mesospheric ozone is produced by the photo-dissociation of O2 and the subsequent reaction of O with O2. Diurnal and semi-diurnal variations of mesospheric ozone are associated with variations in solar activity. The amplitude of the diurnal variation increases from a few percent at an altitude of 50 km to about 80 percent at 70 km. Despite the apparent simplicity of this situation, significant disagreements exist between the predictions from existing models and observations, which need to be resolved. SMILES is a highly sensitive radiometer with a precision of a few to several tens of percent from the upper troposphere to the mesosphere. SMILES, developed by the Japanese Aerospace eXploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), is located at the Japanese Experiment Module (JEM) on the International Space Station (ISS). SMILES successfully measured the vertical distributions and the diurnal variations of various atmospheric species in the latitude range of 38°S to 65°N from October 2009 to April 2010. A sensitivity analysis is being conducted to investigate the expected precision and accuracy of the mesospheric O3 profiles (from 50 to 90 km height) due to the impact of Mesospheric Temperature

  20. Graphene fluorescence switch-based cooperative amplification: a sensitive and accurate method to detect microRNA.

    PubMed

    Liu, Haiyun; Li, Lu; Wang, Qian; Duan, Lili; Tang, Bo

    2014-06-01

    MicroRNAs (miRNAs) play significant roles in a diverse range of biological processes and have been regarded as biomarkers and therapeutic targets in cancer treatment. Sensitive and accurate detection of miRNAs is crucial for better understanding their roles in cancer cells and further validating their function in clinical diagnosis. Here, we developed a stable, sensitive, and specific miRNA detection method based on cooperative amplification, combining graphene oxide (GO) fluorescence switch-based circular exponential amplification with multimolecule labeling by SYBR Green I (SG). First, the target miRNA is adsorbed on the surface of GO, which protects the miRNA from enzymatic digestion. Next, the miRNA hybridizes with a partial hairpin probe and then acts as a primer to initiate a strand displacement reaction to form a complete duplex. Finally, under the action of a nicking enzyme, universal DNA fragments are released and used as triggers to initiate the next reaction cycle, constituting a new circular exponential amplification. In the proposed strategy, a small amount of target miRNA can be converted to a large number of stable DNA triggers, leading to remarkable amplification of the target. Moreover, compared with labeling at a 1:1 stoichiometric ratio, multimolecule binding of the intercalating dye SG to double-stranded DNA (dsDNA) induces significant enhancement of the fluorescence signal and further improves the detection sensitivity. The extraordinary fluorescence quenching of GO used here guarantees a high signal-to-noise ratio. Owing to the protection of the target miRNA by GO, the cooperative amplification, and the low fluorescence background, sensitive and accurate detection of miRNAs has been achieved. The strategy proposed here will offer a new approach for reliable quantification of miRNAs in medical research and early clinical diagnostics. PMID:24823448

  1. Fast MS/MS acquisition without dynamic exclusion enables precise and accurate quantification of proteome by MS/MS fragment intensity

    PubMed Central

    Zhang, Shen; Wu, Qi; Shan, Yichu; Zhao, Qun; Zhao, Baofeng; Weng, Yejing; Sui, Zhigang; Zhang, Lihua; Zhang, Yukui

    2016-01-01

    Most current proteomic studies use data-dependent acquisition with dynamic exclusion to identify and quantify the peptides generated by the digestion of biological samples. Although dynamic exclusion permits more identifications and a higher chance of finding low-abundance proteins, the stochastic and irreproducible precursor ion selection it causes limits quantification capability, especially for MS/MS-based quantification. This is because a peptide is usually triggered for fragmentation only once due to dynamic exclusion, so the fragment ions used for quantification reflect the peptide abundance at only that time point. Here, we propose a strategy of fast MS/MS acquisition without dynamic exclusion to enable precise and accurate quantification of the proteome by MS/MS fragment intensity. The results showed proteome identification efficiency comparable to traditional data-dependent acquisition with dynamic exclusion, and better quantitative accuracy and reproducibility for both label-free and isobaric-labeling-based quantification. It provides new insights to fully explore the potential of modern mass spectrometers. This strategy was applied to the relative quantification of two human disease cell lines, showing great promise for quantitative proteomic applications. PMID:27198003

  2. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda.

    PubMed

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne Mq; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

    Accurate, sensitive, rapid, and easy-to-perform diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip, with 20,944 microchambers (105 μm width and 50 μm depth), was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2 nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039-2.3438% by linear regression analysis (R² = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125
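
    The comparison reported here is an ordinary least-squares regression of chip-derived parasitemia against microscopy, summarized by R². A minimal version of that check is sketched below with made-up paired values; the actual patient data are not reproduced.

```python
import numpy as np

# Illustrative paired parasitemia values (%) from microscopy and the chip system
microscopy = np.array([0.004, 0.05, 0.2, 0.9, 1.5, 2.3])
chip = np.array([0.005, 0.048, 0.21, 0.88, 1.52, 2.28])

slope, intercept = np.polyfit(microscopy, chip, 1)       # linear regression
predicted = slope * microscopy + intercept
ss_res = ((chip - predicted) ** 2).sum()
ss_tot = ((chip - chip.mean()) ** 2).sum()
r_squared = 1.0 - ss_res / ss_tot
print(f"chip = {slope:.3f} * microscopy + {intercept:.4f}, R^2 = {r_squared:.4f}")
```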

  3. Application of a cell microarray chip system for accurate, highly sensitive, and rapid diagnosis for malaria in Uganda

    PubMed Central

    Yatsushiro, Shouki; Yamamoto, Takeki; Yamamura, Shohei; Abe, Kaori; Obana, Eriko; Nogami, Takahiro; Hayashi, Takuya; Sesei, Takashi; Oka, Hiroaki; Okello-Onen, Joseph; Odongo-Aginya, Emmanuel I.; Alai, Mary Auma; Olia, Alex; Anywar, Dennis; Sakurai, Miki; Palacpac, Nirianne MQ; Mita, Toshihiro; Horii, Toshihiro; Baba, Yoshinobu; Kataoka, Masatoshi

    2016-01-01

    Accurate, sensitive, rapid, and easy-to-perform diagnosis is necessary to prevent the spread of malaria. A cell microarray chip system including a push column for the recovery of erythrocytes and a fluorescence detector was employed for malaria diagnosis in Uganda. The chip, with 20,944 microchambers (105 μm width and 50 μm depth), was made of polystyrene. For the analysis, 6 μl of whole blood was employed, and leukocytes were practically removed by filtration through SiO2 nano-fibers in a column. Regular formation of an erythrocyte monolayer in each microchamber was observed following dispersion of an erythrocyte suspension in a nuclear staining dye, SYTO 21, onto the chip surface and washing. About 500,000 erythrocytes were analyzed in a total of 4675 microchambers, and malaria parasite-infected erythrocytes could be detected in 5 min by using the fluorescence detector. The percentage of infected erythrocytes in each of 41 patients was determined. Accurate and quantitative detection of the parasites could be performed. A good correlation between examinations via optical microscopy and by our chip system was demonstrated over the parasitemia range of 0.0039–2.3438% by linear regression analysis (R2 = 0.9945). Thus, we showed the potential of this chip system for the diagnosis of malaria. PMID:27445125

  4. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: sensitivity and specificity analysis.

    PubMed

    Kapp, Eugene A; Schütz, Frédéric; Connolly, Lisa M; Chakel, John A; Meza, Jose E; Miller, Christine A; Fenyo, David; Eng, Jimmy K; Adkins, Joshua N; Omenn, Gilbert S; Simpson, Richard J

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X!Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X!Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
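
    The "consensus scoring" idea at the end of the abstract, requiring agreement between at least two search engines before accepting a peptide, can be prototyped as a simple intersection of identification lists. The sketch below assumes each engine's output has already been reduced to a mapping from spectrum to peptide after score filtering; the identifiers and sequences are hypothetical.

```python
# Hypothetical per-engine results: {spectrum_id: peptide_sequence} after
# filtering each engine at its own score threshold.
mascot = {"scan_0012": "LVNEVTEFAK", "scan_0045": "AEFVEVTK", "scan_0101": "YLYEIAR"}
xtandem = {"scan_0012": "LVNEVTEFAK", "scan_0045": "AEFVEVTK", "scan_0088": "QTALVELVK"}

def consensus_ids(*engine_results):
    """Keep (spectrum, peptide) pairs reported identically by every engine."""
    pair_sets = [set(res.items()) for res in engine_results]
    return set.intersection(*pair_sets)

for scan, peptide in sorted(consensus_ids(mascot, xtandem)):
    print(scan, peptide)        # identifications supported by both engines
```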

  5. An evaluation, comparison, and accurate benchmarking of several publicly available MS/MS search algorithms: Sensitivity and Specificity analysis.

    SciTech Connect

    Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard

    2005-08-01

    MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Due to their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens used in the HUPO Plasma Proteome Project, we have evaluated five search algorithms with respect to their sensitivity and specificity, and have also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified most peptides based on a specified FP rate. The rescoring algorithm, PeptideProphet, enhanced the overall performance of the SEQUEST algorithm, as well as provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search as demonstrated in this study based on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.

  6. Highly sensitive and accurate screening of 40 dyes in soft drinks by liquid chromatography-electrospray tandem mass spectrometry.

    PubMed

    Feng, Feng; Zhao, Yansheng; Yong, Wei; Sun, Li; Jiang, Guibin; Chu, Xiaogang

    2011-06-15

    A method combining solid phase extraction with high performance liquid chromatography-electrospray ionization tandem mass spectrometry was developed for the highly sensitive and accurate screening of 40 dyes, most of which are banned in foods. Electrospray ionization tandem mass spectrometry was used to identify and quantify a large number of dyes for the first time, and demonstrated greater accuracy and sensitivity than conventional liquid chromatography-ultraviolet/visible methods. The limits of detection at a signal-to-noise ratio of 3 are 0.0001-0.01 mg/L for most of the dyes, except for Tartrazine, Amaranth, New Red and Ponceau 4R, with detection limits of 0.5, 0.25, 0.125 and 0.125 mg/L, respectively. When this method was applied to the screening of dyes in soft drinks, recoveries ranged from 91.1 to 105%. The method has been successfully applied to the screening of illegal dyes in commercial soft drink samples, making it valuable for ensuring food safety.

  7. Accurate and precise quantification of atmospheric nitrate in streams draining land of various uses by using triple oxygen isotopes as tracers

    NASA Astrophysics Data System (ADS)

    Tsunogai, Urumu; Miyauchi, Takanori; Ohyama, Takuya; Komatsu, Daisuke D.; Nakagawa, Fumiko; Obata, Yusuke; Sato, Keiichi; Ohizumi, Tsuyoshi

    2016-06-01

    Land use in a catchment area has significant impacts on nitrate eluted from the catchment, including atmospheric nitrate deposited onto the catchment area and remineralised nitrate produced within the catchment area. Although the stable isotopic compositions of nitrate eluted from a catchment can be a useful tracer to quantify the land use influences on the sources and behaviour of the nitrate, it is best to determine these for the remineralised portion of the nitrate separately from the unprocessed atmospheric nitrate to obtain a more accurate and precise quantification of the land use influences. In this study, we determined the spatial distribution and seasonal variation of stable isotopic compositions of nitrate for more than 30 streams within the same watershed, the Lake Biwa watershed in Japan, in order to use 17O excess (Δ17O) of nitrate as an additional tracer to quantify the mole fraction of atmospheric nitrate accurately and precisely. The stable isotopic compositions, including Δ17O of nitrate, in precipitation (wet deposition; n = 196) sampled at the Sado-seki monitoring station were also determined for 3 years. The deposited nitrate showed large 17O excesses similar to those already reported for midlatitudes: Δ17O values ranged from +18.6 to +32.4 ‰ with a 3-year average of +26.3 ‰. However, nitrate in each inflow stream showed small annual average Δ17O values ranging from +0.5 to +3.1 ‰, which corresponds to mole fractions of unprocessed atmospheric nitrate to total nitrate from (1.8 ± 0.3) to (11.8 ± 1.8) % respectively, with an average for all inflow streams of (5.1 ± 0.5) %. Although the annual average Δ17O values tended to be smaller in accordance with the increase in annual average stream nitrate concentration from 12.7 to 106.2 µmol L-1, the absolute concentrations of unprocessed atmospheric nitrate were almost stable at (2.3 ± 1.1) µmol L-1 irrespective of the changes in population density and land use in each catchment area
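
    Because remineralised nitrate carries essentially no 17O excess, the atmospheric mole fraction follows from two-endmember mixing of Δ17O: f_atm = Δ17O(stream) / Δ17O(deposition), assuming Δ17O of the remineralised component is zero. The sketch below reproduces that arithmetic with the average values quoted in the abstract.

```python
def atmospheric_nitrate_fraction(d17o_stream, d17o_deposition=26.3):
    """Mole fraction of unprocessed atmospheric nitrate from its 17O excess (permil),
    assuming remineralised nitrate has Delta17O = 0."""
    return d17o_stream / d17o_deposition

# Annual average stream values from the abstract (+0.5 to +3.1 permil); lower
# Delta17O values were observed at higher stream nitrate concentrations.
for d17o, conc_umol_per_l in [(0.5, 106.2), (3.1, 12.7)]:
    f = atmospheric_nitrate_fraction(d17o)
    print(f"Delta17O = {d17o:+.1f} permil -> {100 * f:.1f}% atmospheric, "
          f"~{f * conc_umol_per_l:.1f} umol/L unprocessed atmospheric nitrate")
```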

  8. A New, Rapid, Precise and Sensitive Method for Chlorine Stable Isotope Analysis of Chlorinated Aliphatic Hydrocarbons

    NASA Astrophysics Data System (ADS)

    van Acker, M. R.; Shahar, A.; Young, E. D.; Coleman, M. L.

    2005-12-01

    Chlorinated aliphatic hydrocarbons (CAH) are recognized as common groundwater contaminants. Because of their physico-chemical properties, their lifespan in groundwater is on the order of decades (Pankow and Cherry, 1996). Stable isotopes can play a role in determining the rate and extent of CAH attenuation (Slater, 2003). The use of chlorine has been hampered by current time-consuming and insensitive analytical methods. We present a new analytical procedure to measure chlorine stable isotope values using a gas chromatograph coupled to a multi-collector inductively coupled plasma mass spectrometer (GC-MC-ICP-MS). The GC has a Porapack Q packed column. The carrier gas was helium and the temperature was held constant at 160°C. The GC was coupled to the MC-ICP-MS by heated stainless steel tubing. Our high-resolution spectra showed that 37Cl is free of its main interference, 36Ar-H, over a range of 0.004 amu. Two pure CAHs, trichloroethene (TCE) and tetrachloroethene (PCE), were used for zero-enrichment (sample relative to itself) and standard-sample difference measurements. Integrations and background corrections of transient signals were performed using Microsoft Excel after import of the raw data from the MC-ICP-MS acquisition software. Zero-enrichment tests with TCE and PCE yielded δ37Cl of -0.04±0.16‰ and -0.03±0.17‰, respectively, for sample injections of 0.02 to 0.12 microliters. Accuracy was tested by injecting 0.24 microliters of a 50/50 mixture of TCE and PCE of known isotopic compositions, as the difference between the two solvents was of paramount interest. The δ37Cl(TCE) value of PCE was -1.99±0.16‰. A highly satisfactory comparison with the conventional method is shown by published values for TCE and PCE, -2.04±0.12‰ and -0.30±0.14‰, respectively (Jendrzejewski et al., 2001), giving a δ37Cl(TCE) value for PCE of -2.34±0.18‰. These tests of the GC-MC-ICP-MS method showed that we can obtain reproducible and accurate Cl isotope values using an

  9. Sensitive, accurate and rapid detection of trace aliphatic amines in environmental samples with ultrasonic-assisted derivatization microextraction using a new fluorescent reagent for high performance liquid chromatography.

    PubMed

    Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao

    2014-07-25

    A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), is synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1: methylamine to C12: dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl, catalyst: none, molar ratio: 4.3, time: 8 min and temperature: 80°C), a micro amount of sample (40 μL; 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. The interfering substances (aromatic amines, aliphatic alcohols and phenols) show derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further identified from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations for impurities. With this UDME-HPLC-FD-MS method, the accuracy (-0.73-2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications to environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples. PMID:24925451

  10. Stealth surface modification of surface-enhanced Raman scattering substrates for sensitive and accurate detection in protein solutions.

    PubMed

    Sun, Fang; Ella-Menye, Jean-Rene; Galvan, Daniel David; Bai, Tao; Hung, Hsiang-Chieh; Chou, Ying-Nien; Zhang, Peng; Jiang, Shaoyi; Yu, Qiuming

    2015-03-24

    Reliable surface-enhanced Raman scattering (SERS) based biosensing in complex media is impeded by nonspecific protein adsorptions. Because of the near-field effect of SERS, it is challenging to modify SERS-active substrates using conventional nonfouling materials without introducing interference from their SERS signals. Herein, we report a stealth surface modification strategy for sensitive, specific and accurate detection of fructose in protein solutions using SERS by forming a mixed self-assembled monolayer (SAM). The SAM consists of a short zwitterionic thiol, N,N-dimethyl-cysteamine-carboxybetaine (CBT), and a fructose probe 4-mercaptophenylboronic acid (4-MPBA). The specifically designed and synthesized CBT not only resists protein fouling effectively, but also has very weak Raman activity compared to 4-MPBA. Thus, the CBT SAM provides a stealth surface modification to SERS-active substrates. The surface compositions of mixed SAMs were investigated using X-ray photoelectron spectroscopy (XPS) and SERS, and their nonfouling properties were studied with a surface plasmon resonance (SPR) biosensor. The mixed SAM with a surface composition of 94% CBT demonstrated a very low bovine serum albumin (BSA) adsorption (∼3 ng/cm(2)), and moreover, only the 4-MPBA signal appeared in the SERS spectrum. With the use of this surface-modified SERS-active substrate, quantification of fructose over clinically relevant concentrations (0.01-1 mM) was achieved. Partial least-squares regression (PLS) analysis showed that the detection sensitivity and accuracy were maintained for the measurements in 1 mg/mL BSA solutions. This stealth surface modification strategy provides a novel route to introduce nonfouling property to SERS-active substrates for SERS biosensing in complex media.
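
    Partial least-squares (PLS) calibration of SERS spectra against known fructose concentrations, as used here, can be prototyped with scikit-learn. The sketch below uses synthetic spectra purely to show the workflow; the number of latent components and the cross-validation split are assumptions to be tuned, not the authors' settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
conc = np.repeat(np.array([0.01, 0.05, 0.1, 0.5, 1.0]), 6)        # mM, training standards
spectra = np.outer(conc, np.sin(np.linspace(0, 3, 200))) \
          + 0.02 * rng.standard_normal((conc.size, 200))           # synthetic SERS spectra

pls = PLSRegression(n_components=3)                                # assumed component count
pred = cross_val_predict(pls, spectra, conc, cv=5)
rmse_cv = np.sqrt(np.mean((pred.ravel() - conc) ** 2))
print(f"cross-validated RMSE: {rmse_cv:.3f} mM")
```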

  11. Determination of accurate electron chiral asymmetries in fenchone and camphor in the VUV range: sensitivity to isomerism and enantiomeric purity.

    PubMed

    Nahon, Laurent; Nag, Lipsa; Garcia, Gustavo A; Myrgorodska, Iuliia; Meierhenrich, Uwe; Beaulieu, Samuel; Wanie, Vincent; Blanchet, Valérie; Géneaux, Romain; Powis, Ivan

    2016-05-14

    Photoelectron circular dichroism (PECD) manifests itself as an intense forward/backward asymmetry in the angular distribution of photoelectrons produced from randomly-oriented enantiomers by photoionization with circularly-polarized light (CPL). As a sensitive probe of both photoionization dynamics and of the chiral molecular potential, PECD attracts much interest, especially with the recent performance of related experiments with visible and VUV laser sources. Here we report, by use of quasi-perfect CPL VUV synchrotron radiation and a double imaging photoelectron/photoion coincidence (i(2)PEPICO) spectrometer, new and very accurate values of the corresponding asymmetries for showcase chiral isomers: camphor and fenchone. These data have additionally been normalized to the absolute enantiopurity of the sample as measured by a chromatographic technique. They can therefore be used as benchmarking data for new PECD experiments, as well as for theoretical models. In particular, we found, especially for the outermost orbital of both molecules, good agreement with CMS-Xα PECD modeling over the whole VUV range. We also report a spectacular sensitivity of PECD to isomerism for slow electrons, showing large and opposite asymmetries when comparing R-camphor to R-fenchone (respectively -10% and +16% around 10 eV). In the course of this study, we could also assess the analytical potential of PECD. Indeed, the accuracy of the data we provide is such that a limited departure from perfect enantiopurity in the sample we purchased could be detected and estimated in excellent agreement with the analysis performed in parallel via a chromatographic technique, establishing a new standard of accuracy, in the ±1% range, for enantiomeric excess measurement via PECD. The i(2)PEPICO technique allows correlating PECD measurements to specific parent ion masses, which would allow its application to the analysis of complex mixtures. PMID:27095534
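
    The quantities at stake here are the forward/backward photoelectron asymmetry and, for analytics, the enantiomeric excess inferred by comparing a measured asymmetry with that of an enantiopure reference, assuming the asymmetry scales linearly with ee. A schematic calculation is sketched below; the counts and the reference asymmetry are illustrative, not data from the paper.

```python
def fb_asymmetry(n_forward, n_backward):
    """Forward/backward asymmetry of the photoelectron angular distribution."""
    return (n_forward - n_backward) / (n_forward + n_backward)

def enantiomeric_excess(asym_sample, asym_pure):
    """ee estimate assuming the PECD asymmetry scales linearly with ee."""
    return asym_sample / asym_pure

# Illustrative numbers only
a_sample = fb_asymmetry(56000, 47000)
print(f"asymmetry = {100 * a_sample:.1f}%")
print(f"ee estimate = {100 * enantiomeric_excess(a_sample, asym_pure=0.10):.1f}%")
```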

  12. Near real time, accurate, and sensitive microbiological safety monitoring using an all-fibre spectroscopic fluorescence system

    NASA Astrophysics Data System (ADS)

    Vanholsbeeck, F.; Swift, S.; Cheng, M.; Bogomolny, E.

    2013-11-01

    Enumeration of microorganisms is an essential microbiological task for many industrial sectors and research fields. Various tests for the detection and counting of microorganisms are used today. However, most current methods to enumerate bacteria require either long incubation times for limited accuracy, or complicated protocols along with bulky equipment. We have developed an accurate, all-fibre spectroscopic system to measure the fluorescence signal in situ. In this paper, we examine the potential of this setup for near real time bacteria enumeration in aquatic environments. The concept is based on the well-known phenomenon that the fluorescence quantum yields of some nucleic acid stains significantly increase upon binding with the nucleic acids of microorganisms; the fluorescence signal increase can be correlated to the amount of nucleic acid present in the sample. In addition, we have used GFP-labeled organisms. Our results show that we are able to detect a wide range of bacteria concentrations without dilution or filtration (1-10⁸ CFU/ml) using the different optical probes we designed. This high sensitivity is due to efficient light delivery with an appropriate collection volume and in situ fluorescence detection, as well as the use of a sensitive CCD spectrometer. By monitoring the laser power, we can account for laser fluctuations while measuring the fluorescence signal, which also improves the system accuracy. A synchronized laser shutter allows us to achieve a high SNR with minimal integration time, thereby reducing the photobleaching effect. In summary, we conclude that our optical setup may offer a robust method for near real time bacterial detection in aquatic environments.

  13. Precision and sensitivity of a test for vegetable fat adulteration of milk fat.

    PubMed

    Fox, J R; Duthie, A H; Wulff, S

    1988-03-01

    A test for routine screening of Mozzarella cheese and butter for vegetable fat adulteration is described. Fat is extracted and saponified. The potassium salts of the fatty acids are measured through direct gas chromatographic analysis. A ratio, calculated from the concentrations of butyric and oleic acids, is used to evaluate the purity of a sample. The test offers good precision and can detect less than 10% partially hydrogenated vegetable fat.

  14. An in-line micro-pyrolysis system to remove contaminating organic species for precise and accurate water isotope analysis by spectroscopic techniques

    NASA Astrophysics Data System (ADS)

    Panetta, R. J.; Hsiao, G.

    2011-12-01

    Trace levels of organic contaminants such as short alcohols and terpenoids have been shown to cause spectral interference in water isotope analysis by spectroscopic techniques. The result is degraded precision and accuracy in both δD and δ18O for samples such as beverages, plant extracts or slightly contaminated waters. An initial approach offered by manufacturers is post-processing software that analyzes spectral features to identify and flag contaminated samples. However, it is impossible for this software to accurately reconstruct the water isotope signature, thus it is primarily a metric for data quality. Here, we describe a novel in-line pyrolysis system (Micro-Pyrolysis Technology, MPT) placed just prior to the inlet of a cavity ring-down spectroscopy (CRDS) analyzer that effectively removes interfering organic molecules without altering the isotope values of the water. Following injection of the water sample, N2 carrier gas passes the sample through a micro-pyrolysis tube heated with multiple high temperature elements in an oxygen-free environment. The temperature is maintained above the thermal decomposition threshold of most organic compounds (≤900 °C), but well below that of water (~2000 °C). The main products of the pyrolysis reaction are non-interfering species such as elemental carbon and H2 gas. To test the efficacy and applicability of the system, waters of known isotopic composition were spiked with varying amounts of common interfering alcohols (methanol, ethanol, propanol, hexanol, trans-2-hexenol, cis-3-hexanol up to 5% v/v) and common soluble plant terpenoids (carveol, linalool, geraniol, prenol). Spiked samples with no treatment to remove the organics show strong interfering absorption peaks that adversely affect the δD and δ18O values. However, with the MPT in place, all interfering absorption peaks are removed and the water absorption spectrum is fully restored. As a consequence, the δD and δ18O values also return to their original

  15. A novel light tracing system with high-precision and high-sensitivity sensors setup

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Wu, Pin Yi; Tsai, Jen Min; Tseng, Yu Hung; Chen, Hsin-Hung; Hwang, Jiann-Lih

    2013-11-01

    This paper presents a novel light-source tracing system comprising a light-tracing board with four photo-sensors at different incline angles disposed on its four edges; the angles are adjustable according to the movement range of the light source to achieve light tracing. The system uses a four-edge-sensor algorithm with a servo motor at each site to improve sensing sensitivity. The light-intensity measurements are fed back to a programmable logic controller through a wireless transceiver module, and after a proportional-integral-derivative (PID) operation the system determines the position of the light source. In normal mode, the light-source movement range is large, so the incline angles of the sensors are set large to obtain a wide detection angle. In locking mode, the incline angle of the sensing plane decreases, so the measurement range is reduced and the sensitivity is higher.
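
    The tracking loop described here feeds the imbalance between opposing edge sensors through a PID controller to drive the board toward the light source. A bare-bones discrete PID of the kind a PLC would run is sketched below; the gains, time step, and error-signal definition are placeholders, not the values used in the paper.

```python
class PID:
    """Minimal discrete PID controller (gains are illustrative placeholders)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.1)
# error = normalized imbalance between opposing edge sensors; drive the servo until it vanishes
for left, right in [(520, 300), (480, 350), (430, 400), (410, 405)]:
    error = (left - right) / (left + right)
    print(f"servo command: {pid.update(error):+.3f}")
```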

  16. Precise and Sensitive Lithium Isotope Ratios by Quadrupole ICP-MS

    NASA Astrophysics Data System (ADS)

    Misra, S.; Froelich, P. N.

    2008-05-01

    We present a new method for the determination of 7Li/6Li with low Li consumption (<0.3 ng/analysis), high column yields (>99.99%), high Isotope Ratio precision (< ±0.9‰, 2σ), and low blank (<500 fg/ml). We optimize for analyses of natural carbonates (foraminifera) containing 1-2 ppm-Li. Measurements are done with a single collector Quadrupole ICP-MS (Agilent 7500cs) using cold plasma (600W) to eliminate doubly-charged 12C2+ and 14N2+, soft extraction to maximize and stabilize the Li-signal, peak jumping and pulse detection of both Li isotopes, with Standard-Sample-Standard bracketing. Li solutions of 0.2 to 0.4 ppb concentration were analyzed using PFA micro-concentric nebulizer (uptake rate = 200μl/min) for a period of ~3 min/sample. The long-term external precision is ±0.9‰ (2σ) for LSVEC Li standards and ±1.5‰ (2σ) for forams, comparable to MC-ICP-MS methods. The key improvement of our method is the small Li-mass requirement (<0.3 ng/quintuplicate) compared to other ICP- MS methods (3-40 ng/analysis). Li isotope measurements in forams are limited not only by low Li concentrations but also by instrument-induced fractionation effects, matrix effects and incomplete column recovery and fractionation of Li during column separations to remove alkali and alkaline earth elements. Column separations remove these ions but induce potentially large Li-isotope fractionation in elution peaks, from +100‰ in the leading edge (ca. 1% Li load) to -100‰ in the trailing edge (ca. 1% Li load). Thus tiny incomplete Li- recoveries during column separations can result in large unrecognized column-induced fractionation of eluted Li. These factors conspire to require a column method with both 100% recovery and quantitative separation of matrix elements. We refined a single step ion chromatographic method to quantitatively recover (>99.99% yield) and separate Li from all matrix elements using small volume resin (2ml / 3.4meq AG50W-X8) and low elution volume (6ml of 0.5N
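
    Standard-sample-standard bracketing corrects instrumental mass bias by referencing each sample ratio to the mean of the standard ratios measured immediately before and after it. The delta calculation is sketched below; the 7Li/6Li ratio values are placeholders, not measured data.

```python
def delta7li(r_sample, r_std_before, r_std_after):
    """delta7Li (permil) by standard-sample-standard bracketing against LSVEC."""
    r_reference = 0.5 * (r_std_before + r_std_after)
    return ((r_sample / r_reference) - 1.0) * 1000.0

# Illustrative measured 7Li/6Li ratios (not real data)
print(f"{delta7li(12.44, 12.07, 12.09):+.2f} permil")
```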

  17. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that
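
    A compact way to see what Bit Grooming does is to manipulate the IEEE-754 bit patterns directly: keep enough mantissa bits for the requested number of significant digits, then zero the remaining bits in every other value and set them to one in the rest, so the quantization errors of opposite sign tend to cancel in means. The sketch below is a simplified illustration of that idea; it ignores special values such as zeros and NaNs and does not reproduce the exact bit budget of the NCO implementation.

```python
import numpy as np

def bit_groom(values, nsd):
    """Quantize float32 values to roughly nsd significant decimal digits by
    alternately shaving (zeroing) and setting (to one) low-order mantissa bits."""
    keep = int(np.ceil(nsd * np.log2(10))) + 1       # mantissa bits to preserve (rough rule)
    zero_bits = max(23 - keep, 0)                    # float32 has a 23-bit mantissa
    shave_mask = np.uint32((0xFFFFFFFF << zero_bits) & 0xFFFFFFFF)
    set_mask = np.uint32((1 << zero_bits) - 1)

    bits = np.ascontiguousarray(values, dtype=np.float32).view(np.uint32).copy()
    bits[0::2] &= shave_mask                         # shave even-indexed values
    bits[1::2] |= set_mask                           # set odd-indexed values
    return bits.view(np.float32)

data = np.linspace(270.0, 310.0, 8, dtype=np.float32)   # e.g. temperatures in K
groomed = bit_groom(data, nsd=3)
print(groomed)
print("mean error:", float(groomed.mean() - data.mean()))
```

    The quantized low bits compress far better under DEFLATE than the original pseudo-random trailing bits, which is where the storage savings reported above come from.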

  18. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic

  19. Mars-GRAM: Increasing the Precision of Sensitivity Studies at Large Optical Depths

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, C. G.; Badger, Andrew M.

    2010-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM) is an engineering-level atmospheric model widely used for diverse mission applications. Mars-GRAM's perturbation modeling capability is commonly used, in a Monte-Carlo mode, to perform high fidelity engineering end-to-end simulations for entry, descent, and landing (EDL). It has been discovered during the Mars Science Laboratory (MSL) site selection process that Mars-GRAM, when used for sensitivity studies for MapYear=0 and large optical depth values such as tau=3, is less than realistic. A comparison study between Mars atmospheric density estimates from Mars-GRAM and measurements by Mars Global Surveyor (MGS) has been undertaken for locations of varying latitudes, Ls, and LTST on Mars. The preliminary results from this study have validated the Thermal Emission Spectrometer (TES) limb data. From the surface to 80 km altitude, Mars-GRAM is based on the NASA Ames Mars General Circulation Model (MGCM). MGCM results that were used for Mars-GRAM with MapYear=0 were from a MGCM run with a fixed value of tau=3 for the entire year at all locations. This has resulted in an imprecise atmospheric density at all altitudes. To solve this pressure-density problem, density factor values were determined for tau=.3, 1 and 3 that will adjust the input values of MGCM MapYear 0 pressure and density to achieve a better match of Mars-GRAM MapYear 0 with TES observations for MapYears 1 and 2 at comparable dust loading. The addition of these density factors to Mars-GRAM will improve the results of the sensitivity studies done for large optical depths.

  20. Sampling Strategies in Antimicrobial Resistance Monitoring: Evaluating How Precision and Sensitivity Vary with the Number of Animals Sampled per Farm

    PubMed Central

    Yamamoto, Takehisa; Hayama, Yoko; Hidano, Arata; Kobayashi, Sota; Muroga, Norihiko; Ishikawa, Kiyoyasu; Ogura, Aki; Tsutsui, Toshiyuki

    2014-01-01

    Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5–97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm. PMID:24466335
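
    The bootstrap evaluation described here can be reproduced in outline: resample farms with replacement, then resample the chosen number of animals per farm, and summarize both the spread of the prevalence estimate and the probability of seeing at least one resistant isolate. The sketch below runs on synthetic farm-level data with one isolate per animal, a simplification of the study's five isolates per pig; the original data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: resistance status (0/1) of one isolate per animal, 10 animals on each of 30 farms
farms = [rng.binomial(1, p, size=10) for p in rng.beta(0.5, 8.0, size=30)]

def bootstrap_strategy(farms, animals_per_farm, total_animals=12, n_rep=10000):
    """Bootstrap the prevalence estimate for a given number of animals sampled per farm."""
    n_farms = total_animals // animals_per_farm
    prev, any_resistant = [], []
    for _ in range(n_rep):
        chosen = rng.choice(len(farms), size=n_farms, replace=True)
        sample = np.concatenate([rng.choice(farms[i], size=animals_per_farm, replace=True)
                                 for i in chosen])
        prev.append(sample.mean())
        any_resistant.append(sample.any())
    return np.std(prev), np.mean(any_resistant)

for k in (1, 2, 3, 4, 6):
    sd, detect = bootstrap_strategy(farms, k)
    print(f"{k} animal(s)/farm: SD of prevalence = {sd:.3f}, P(detect >=1 resistant) = {detect:.3f}")
```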

  1. Sampling strategies in antimicrobial resistance monitoring: evaluating how precision and sensitivity vary with the number of animals sampled per farm.

    PubMed

    Yamamoto, Takehisa; Hayama, Yoko; Hidano, Arata; Kobayashi, Sota; Muroga, Norihiko; Ishikawa, Kiyoyasu; Ogura, Aki; Tsutsui, Toshiyuki

    2014-01-01

    Because antimicrobial resistance in food-producing animals is a major public health concern, many countries have implemented antimicrobial monitoring systems at a national level. When designing a sampling scheme for antimicrobial resistance monitoring, it is necessary to consider both cost effectiveness and statistical plausibility. In this study, we examined how sampling scheme precision and sensitivity can vary with the number of animals sampled from each farm, while keeping the overall sample size constant to avoid additional sampling costs. Five sampling strategies were investigated. These employed 1, 2, 3, 4 or 6 animal samples per farm, with a total of 12 animals sampled in each strategy. A total of 1,500 Escherichia coli isolates from 300 fattening pigs on 30 farms were tested for resistance against 12 antimicrobials. The performance of each sampling strategy was evaluated by bootstrap resampling from the observational data. In the bootstrapping procedure, farms, animals, and isolates were selected randomly with replacement, and a total of 10,000 replications were conducted. For each antimicrobial, we observed that the standard deviation and 2.5-97.5 percentile interval of resistance prevalence were smallest in the sampling strategy that employed 1 animal per farm. The proportion of bootstrap samples that included at least 1 isolate with resistance was also evaluated as an indicator of the sensitivity of the sampling strategy to previously unidentified antimicrobial resistance. The proportion was greatest with 1 sample per farm and decreased with larger samples per farm. We concluded that when the total number of samples is pre-specified, the most precise and sensitive sampling strategy involves collecting 1 sample per farm.

  2. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure for feedback controllers was described and the considerations for the selection of the design parameters were given. The frequency-domain properties of single-input single-output systems using state-feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance that compares favorably with that of a much more complicated digital aided tracker. Parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that arise from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.
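
    As a concrete starting point, the classical way to obtain a state-feedback gain that trades off state against control covariances is the linear-quadratic regulator, solved through the algebraic Riccati equation. The sketch below does this for a toy double-integrator pointing axis with assumed weights; it is a generic illustration of state-feedback synthesis, not the controller designed in the report.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy double-integrator pointing axis: x = [angle, rate], u = torque command
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([100.0, 1.0])       # weights on pointing error and rate (assumed)
R = np.array([[0.1]])           # weight on control effort (assumed)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # state-feedback gain, u = -K x
closed_loop_poles = np.linalg.eigvals(A - B @ K)
print("gain K:", K)
print("closed-loop poles:", closed_loop_poles)
```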

  3. Precise extension-mode resonant sensor with uniform and repeatable sensitivity for detection of ppm-level ammonia

    NASA Astrophysics Data System (ADS)

    Yu, Feng; Yu, Haitao; Xu, Pengcheng; Li, Xinxin

    2014-04-01

    This paper presents a micromechanical resonant gas sensor, operated in an extensional bulk mode, for high-accuracy sensing of ultra-low-concentration chemical vapors. The designed and fabricated gravimetric resonant microsensor exhibits both a high Q-factor of 11157 in air and a high mass sensitivity of 10.16 Hz pg⁻¹. More importantly, both theoretical analysis and sensing experiments have verified that such an extensional resonance mode is technically advantageous for sensing accuracy (e.g., output signal linearity in terms of gas concentration and reproducibility of sensitivity), which can be attributed to the uniform vibration amplitude at each point of the gas-adsorbing area. Loaded with -COOH functionalized mesoporous-silica nanoparticles as the sensing material, which feature an ultra-large surface area, the chemical sensing micro-resonator demonstrates precise detection of ppm-level ammonia vapor. The limit of detection is as low as 1 ppm and the response time of the sensor is as short as 10 s.

  4. Effective theory for the nonrigid rotor in an electromagnetic field: Toward accurate and precise calculations of E2 transitions in deformed nuclei

    DOE PAGES

    Coello Pérez, Eduardo A.; Papenbrock, Thomas F.

    2015-07-27

    In this paper, we present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. In addition, we study the faint interband transitions within the effective theory and focus on the E2 transitions from the 02+ band (the “β band”) to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.

  5. Effective theory for the nonrigid rotor in an electromagnetic field: Toward accurate and precise calculations of E 2 transitions in deformed nuclei

    NASA Astrophysics Data System (ADS)

    Coello Pérez, E. A.; Papenbrock, T.

    2015-07-01

    We present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. We also study the faint interband transitions within the effective theory and focus on the E 2 transitions from the 02+ band (the "β band") to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.

  6. Effective theory for the nonrigid rotor in an electromagnetic field: Toward accurate and precise calculations of E2 transitions in deformed nuclei

    SciTech Connect

    Coello Pérez, Eduardo A.; Papenbrock, Thomas F.

    2015-07-27

    In this paper, we present a model-independent approach to electric quadrupole transitions of deformed nuclei. Based on an effective theory for axially symmetric systems, the leading interactions with electromagnetic fields enter as minimal couplings to gauge potentials, while subleading corrections employ gauge-invariant nonminimal couplings. This approach yields transition operators that are consistent with the Hamiltonian, and the power counting of the effective theory provides us with theoretical uncertainty estimates. We successfully test the effective theory in homonuclear molecules that exhibit a large separation of scales. For ground-state band transitions of rotational nuclei, the effective theory describes data well within theoretical uncertainties at leading order. To probe the theory at subleading order, data with higher precision would be valuable. For transitional nuclei, next-to-leading-order calculations and the high-precision data are consistent within the theoretical uncertainty estimates. In addition, we study the faint interband transitions within the effective theory and focus on the E2 transitions from the 02+ band (the “β band”) to the ground-state band. Here the predictions from the effective theory are consistent with data for several nuclei, thereby proposing a solution to a long-standing challenge.

  7. KLY5 Kappabridge: High sensitivity susceptibility and anisotropy meter precisely decomposing in-phase and out-of-phase components

    NASA Astrophysics Data System (ADS)

    Pokorny, Petr; Pokorny, Jiri; Chadima, Martin; Hrouda, Frantisek; Studynka, Jan; Vejlupek, Josef

    2016-04-01

    The KLY5 Kappabridge enables, in addition to standard measurement of in-phase magnetic susceptibility and its anisotropy, precise and calibrated measurement of out-of-phase susceptibility and its anisotropy. The phase angle is measured in "absolute" terms, i.e. without any residual phase error. The measured value of the out-of-phase susceptibility is independent of both the magnitude of the complex susceptibility and the intensity of the driving magnetic field. The precise decomposition of the complex susceptibility into in-phase and out-of-phase components is verified using pure gadolinium oxide, whose out-of-phase susceptibility is presumably zero. Outstanding sensitivity in the measurement of weak samples is achieved by a newly developed drift-compensation routine together with the latest generation of electronics. In rocks, soils, and environmental materials, where it usually arises from viscous relaxation, the out-of-phase susceptibility can substitute for the more laborious frequency-dependent susceptibility routinely used in magnetic granulometry. Another new feature is measurement of the anisotropy of out-of-phase magnetic susceptibility (opAMS), which is performed simultaneously and automatically with the standard (in-phase) AMS measurement. The opAMS enables direct determination of the magnetic sub-fabrics of minerals that show non-zero out-of-phase susceptibility, whether due to viscous relaxation (ultrafine grains of magnetite or maghemite), weak-field hysteresis (titanomagnetite, hematite, pyrrhotite), or eddy currents (in conductive minerals). Using the 3D rotator, the instrument measures both the AMS and opAMS with only one insertion of the specimen into the specimen holder. In addition, fully automated measurement of the field variation of the AMS and opAMS is possible. In conjunction with the CS-4 Furnace and CS-L Cryostat, the instrument can also measure the temperature variation of magnetic susceptibility.

  8. Accurate control of multishelled ZnO hollow microspheres for dye-sensitized solar cells with high efficiency.

    PubMed

    Dong, Zhenghong; Lai, Xiaoyong; Halpert, Jonathan E; Yang, Nailiang; Yi, Luoxin; Zhai, Jin; Wang, Dan; Tang, Zhiyong; Jiang, Lei

    2012-02-21

    A series of multishelled ZnO hollow microspheres with controlled shell number and inter-shell spacing has been successfully prepared by a simple carbonaceous-microsphere templating method. Their large surface area and complex multishelled hollow structure allow them to load sufficient dye and to reflect light multiple times, enhancing light harvesting and yielding a conversion efficiency of up to 5.6% when used in dye-sensitized solar cells. PMID:22266874

  9. Distribution of nuclear bomb Pu in Nishiyama area, Nagasaki, estimated by accurate and precise determination of 240Pu/239Pu ratio in soils.

    PubMed

    Yoshida, S; Muramatsu, Y; Yamazaki, S; Ban-Nai, T

    2007-01-01

    Plutonium isotopes in forest soils collected in the Nishiyama area, Nagasaki, were successfully determined by high-resolution inductively coupled plasma mass spectrometry after treatment with a microwave decomposition system. The (240)Pu/(239)Pu atom ratios observed in the Nishiyama samples were clearly lower than the range of the global fallout. The low ratios (minimum 0.032) indicate the influence of the plutonium weapon detonated in 1945. Since the area is also contaminated by global fallout, the (240)Pu/(239)Pu atom ratio can be a more sensitive indicator of bomb-derived Pu than the Pu activity concentration.

  10. A Sensor Array Using Multi-functional Field-effect Transistors with Ultrahigh Sensitivity and Precision for Bio-monitoring

    PubMed Central

    Kim, Do-Il; Quang Trung, Tran; Hwang, Byeong-Ung; Kim, Jin-Su; Jeon, Sanghun; Bae, Jihyun; Park, Jong-Jin; Lee, Nae-Eung

    2015-01-01

    Mechanically adaptive electronic skins (e-skins) emulate tactition and thermoception by cutaneous mechanoreceptors and thermoreceptors in human skin, respectively. When exposed to multiple stimuli including mechanical and thermal stimuli, discerning and quantifying precise sensing signals from sensors embedded in e-skins are critical. In addition, different detection modes for mechanical stimuli, rapidly adapting (RA) and slowly adapting (SA) mechanoreceptors in human skin are simultaneously required. Herein, we demonstrate the fabrication of a highly sensitive, pressure-responsive organic field-effect transistor (OFET) array enabling both RA- and SA- mode detection by adopting easily deformable, mechano-electrically coupled, microstructured ferroelectric gate dielectrics and an organic semiconductor channel. We also demonstrate that the OFET array can separate out thermal stimuli for thermoreception during quantification of SA-type static pressure, by decoupling the input signals of pressure and temperature. Specifically, we adopt piezoelectric-pyroelectric coupling of highly crystalline, microstructured poly(vinylidene fluoride-trifluoroethylene) gate dielectric in OFETs with stimuli to allow monitoring of RA- and SA-mode responses to dynamic and static forcing conditions, respectively. This approach enables us to apply the sensor array to e-skins for bio-monitoring of humans and robotics. PMID:26223845

  11. A Sensor Array Using Multi-functional Field-effect Transistors with Ultrahigh Sensitivity and Precision for Bio-monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Do-Il; Quang Trung, Tran; Hwang, Byeong-Ung; Kim, Jin-Su; Jeon, Sanghun; Bae, Jihyun; Park, Jong-Jin; Lee, Nae-Eung

    2015-07-01

    Mechanically adaptive electronic skins (e-skins) emulate tactition and thermoception by cutaneous mechanoreceptors and thermoreceptors in human skin, respectively. When exposed to multiple stimuli including mechanical and thermal stimuli, discerning and quantifying precise sensing signals from sensors embedded in e-skins are critical. In addition, different detection modes for mechanical stimuli, rapidly adapting (RA) and slowly adapting (SA) mechanoreceptors in human skin are simultaneously required. Herein, we demonstrate the fabrication of a highly sensitive, pressure-responsive organic field-effect transistor (OFET) array enabling both RA- and SA- mode detection by adopting easily deformable, mechano-electrically coupled, microstructured ferroelectric gate dielectrics and an organic semiconductor channel. We also demonstrate that the OFET array can separate out thermal stimuli for thermoreception during quantification of SA-type static pressure, by decoupling the input signals of pressure and temperature. Specifically, we adopt piezoelectric-pyroelectric coupling of highly crystalline, microstructured poly(vinylidene fluoride-trifluoroethylene) gate dielectric in OFETs with stimuli to allow monitoring of RA- and SA-mode responses to dynamic and static forcing conditions, respectively. This approach enables us to apply the sensor array to e-skins for bio-monitoring of humans and robotics.

  12. A Sensor Array Using Multi-functional Field-effect Transistors with Ultrahigh Sensitivity and Precision for Bio-monitoring.

    PubMed

    Kim, Do-Il; Trung, Tran Quang; Hwang, Byeong-Ung; Kim, Jin-Su; Jeon, Sanghun; Bae, Jihyun; Park, Jong-Jin; Lee, Nae-Eung

    2015-01-01

    Mechanically adaptive electronic skins (e-skins) emulate tactition and thermoception by cutaneous mechanoreceptors and thermoreceptors in human skin, respectively. When exposed to multiple stimuli including mechanical and thermal stimuli, discerning and quantifying precise sensing signals from sensors embedded in e-skins are critical. In addition, different detection modes for mechanical stimuli, rapidly adapting (RA) and slowly adapting (SA) mechanoreceptors in human skin are simultaneously required. Herein, we demonstrate the fabrication of a highly sensitive, pressure-responsive organic field-effect transistor (OFET) array enabling both RA- and SA- mode detection by adopting easily deformable, mechano-electrically coupled, microstructured ferroelectric gate dielectrics and an organic semiconductor channel. We also demonstrate that the OFET array can separate out thermal stimuli for thermoreception during quantification of SA-type static pressure, by decoupling the input signals of pressure and temperature. Specifically, we adopt piezoelectric-pyroelectric coupling of highly crystalline, microstructured poly(vinylidene fluoride-trifluoroethylene) gate dielectric in OFETs with stimuli to allow monitoring of RA- and SA-mode responses to dynamic and static forcing conditions, respectively. This approach enables us to apply the sensor array to e-skins for bio-monitoring of humans and robotics. PMID:26223845

  13. Novel, Precise, Accurate Ion-Pairing Method to Determine the Related Substances of the Fondaparinux Sodium Drug Substance: Low-Molecular-Weight Heparin

    PubMed Central

    Deshpande, Amol A.; Madhavan, P.; Deshpande, Girish R.; Chandel, Ravi Kumar; Yarbagi, Kaviraj M.; Joshi, Alok R.; Moses Babu, J.; Murali Krishna, R.; Rao, I. M.

    2016-01-01

    Fondaparinux sodium is a synthetic low-molecular-weight heparin (LMWH). This medication is an anticoagulant (blood thinner) prescribed for the treatment of pulmonary embolism and for the prevention and treatment of deep vein thrombosis. Its determination in the presence of related impurities was studied and validated by a novel ion-pair HPLC method. The separation of the drug and its degradation products was achieved with a polymer-based PLRP-S column (250 mm × 4.6 mm; 5 μm) in gradient elution mode. A mixture of 100 mM n-hexylamine and 100 mM acetic acid in water was used as the buffer solution. Mobile phase A and mobile phase B were prepared by mixing the buffer and acetonitrile in the ratios of 90:10 (v/v) and 20:80 (v/v), respectively. Mobile phases were delivered in isocratic mode (2% B for 0–5 min) followed by gradient mode (2–85% B in 5–60 min). An evaporative light scattering detector (ELSD) was coupled to the LC system to detect the separated components. The drug was further subjected to stress studies covering acidic, basic, oxidative, photolytic, and thermal degradation as per ICH guidelines; it was found to be labile under acid hydrolysis, base hydrolysis, and oxidation, while stable under neutral, thermal, and photolytic degradation conditions. The method provided linear responses over the concentration range of the LOQ to 0.30% for each impurity with respect to the analyte concentration of 12.5 mg/mL, and regression analysis showed a correlation coefficient (r²) of more than 0.99 for all the impurities. The LOD and LOQ for fondaparinux were found to be 1.4 µg/mL and 4.1 µg/mL, respectively. The developed ion-pair method was validated as per ICH guidelines with respect to accuracy, selectivity, precision, linearity, and robustness. PMID:27110496

  14. Novel, Precise, Accurate Ion-Pairing Method to Determine the Related Substances of the Fondaparinux Sodium Drug Substance: Low-Molecular-Weight Heparin.

    PubMed

    Deshpande, Amol A; Madhavan, P; Deshpande, Girish R; Chandel, Ravi Kumar; Yarbagi, Kaviraj M; Joshi, Alok R; Moses Babu, J; Murali Krishna, R; Rao, I M

    2016-01-01

    Fondaparinux sodium is a synthetic low-molecular-weight heparin (LMWH). This medication is an anticoagulant (blood thinner) prescribed for the treatment of pulmonary embolism and for the prevention and treatment of deep vein thrombosis. Its determination in the presence of related impurities was studied and validated by a novel ion-pair HPLC method. The separation of the drug and its degradation products was achieved with a polymer-based PLRP-S column (250 mm × 4.6 mm; 5 μm) in gradient elution mode. A mixture of 100 mM n-hexylamine and 100 mM acetic acid in water was used as the buffer solution. Mobile phase A and mobile phase B were prepared by mixing the buffer and acetonitrile in the ratios of 90:10 (v/v) and 20:80 (v/v), respectively. Mobile phases were delivered in isocratic mode (2% B for 0-5 min) followed by gradient mode (2-85% B in 5-60 min). An evaporative light scattering detector (ELSD) was coupled to the LC system to detect the separated components. The drug was further subjected to stress studies covering acidic, basic, oxidative, photolytic, and thermal degradation as per ICH guidelines; it was found to be labile under acid hydrolysis, base hydrolysis, and oxidation, while stable under neutral, thermal, and photolytic degradation conditions. The method provided linear responses over the concentration range of the LOQ to 0.30% for each impurity with respect to the analyte concentration of 12.5 mg/mL, and regression analysis showed a correlation coefficient (r²) of more than 0.99 for all the impurities. The LOD and LOQ for fondaparinux were found to be 1.4 µg/mL and 4.1 µg/mL, respectively. The developed ion-pair method was validated as per ICH guidelines with respect to accuracy, selectivity, precision, linearity, and robustness. PMID:27110496

  15. Sensitive and accurate identification of protein–DNA binding events in ChIP-chip assays using higher order derivative analysis

    PubMed Central

    Barrett, Christian L.; Cho, Byung-Kwan

    2011-01-01

    Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
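
    A minimal sketch of the general idea described above (smooth the probe-level signal, then call peaks from sign changes of its derivatives) is given below; the function, its parameters, and the synthetic data are illustrative assumptions, not the authors' parameter-free implementation.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def call_binding_peaks(signal, sigma=2.0, min_height=1.0):
            """Toy derivative-based peak caller for a probe-level ChIP-chip track.

            Smooths the signal, then marks probes where the first derivative
            changes sign from + to - (local maximum) and the second derivative
            is negative (sharp curvature), keeping peaks above min_height.
            Parameters are illustrative, not the published method's defaults.
            """
            smooth = gaussian_filter1d(np.asarray(signal, dtype=float), sigma=sigma)
            d1 = np.gradient(smooth)
            d2 = np.gradient(d1)
            peaks = []
            for i in range(1, len(smooth) - 1):
                if d1[i - 1] > 0 >= d1[i] and d2[i] < 0 and smooth[i] >= min_height:
                    peaks.append(i)
            return peaks

        # Example on synthetic data: one enriched region centered on probe 50
        track = np.exp(-0.5 * ((np.arange(100) - 50) / 3.0) ** 2) * 5
        print(call_binding_peaks(track))   # prints a single peak index near 50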

  16. Determination of Baylisascaris schroederi infection in wild giant pandas by an accurate and sensitive PCR/CE-SSCP method.

    PubMed

    Zhang, Wenping; Yie, Shangmian; Yue, Bisong; Zhou, Jielong; An, Renxiong; Yang, Jiangdong; Chen, Wangli; Wang, Chengdong; Zhang, Liang; Shen, Fujun; Yang, Guangyou; Hou, Rong; Zhang, Zhihe

    2012-01-01

    It has been recognized that other than habitat loss, degradation and fragmentation, the infection of the roundworm Baylisascaris schroederi (B. schroederi) is one of the major causes of death in wild giant pandas. However, the prevalence and intensity of the parasite infection has been inconsistently reported through a method that uses sedimentation-floatation followed by a microscope examination. This method fails to accurately determine infection because there are many bamboo residues and/or few B. schroederi eggs in the examined fecal samples. In the present study, we adopted a method that uses PCR and capillary electrophoresis combined with a single-strand conformation polymorphism analysis (PCR/CE-SSCP) to detect B. schroederi infection in wild giant pandas at a nature reserve, and compared it to the traditional microscope approach. The PCR specifically amplified a single band of 279-bp from both fecal samples and positive controls, which was confirmed by sequence analysis to correspond to the mitochondrial COII gene of B. schroederi. Moreover, it was demonstrated that the amount of genomic DNA was linearly correlated with the peak area of the CE-SSCP analysis. Thus, our adopted method can reliably detect the infectious prevalence and intensity of B. schroederi in wild giant pandas. The prevalence of B. schroederi was found to be 54% in the 91 fecal samples examined, and 48% in the fecal samples of 31 identified individual giant pandas. Infectious intensities of the 91 fecal samples were detected to range from 2.8 to 959.2 units/gram, and from 4.8 to 959.2 units/gram in the fecal samples of the 31 identified giant pandas. For comparison, by using the traditional microscope method, the prevalence of B. schroederi was found to be only 33% in the 91 fecal samples, 32% in the fecal samples of the 31 identified giant pandas, and no reliable infectious intensity was observed. PMID:22911871

  17. Determination of Baylisascaris schroederi Infection in Wild Giant Pandas by an Accurate and Sensitive PCR/CE-SSCP Method

    PubMed Central

    Zhang, Wenping; Yie, Shangmian; Yue, Bisong; Zhou, Jielong; An, Renxiong; Yang, Jiangdong; Chen, Wangli; Wang, Chengdong; Zhang, Liang; Shen, Fujun; Yang, Guangyou; Hou, Rong; Zhang, Zhihe

    2012-01-01

    It has been recognized that other than habitat loss, degradation and fragmentation, the infection of the roundworm Baylisascaris schroederi (B. schroederi) is one of the major causes of death in wild giant pandas. However, the prevalence and intensity of the parasite infection has been inconsistently reported through a method that uses sedimentation-floatation followed by a microscope examination. This method fails to accurately determine infection because there are many bamboo residues and/or few B. schroederi eggs in the examined fecal samples. In the present study, we adopted a method that uses PCR and capillary electrophoresis combined with a single-strand conformation polymorphism analysis (PCR/CE-SSCP) to detect B. schroederi infection in wild giant pandas at a nature reserve, and compared it to the traditional microscope approach. The PCR specifically amplified a single band of 279-bp from both fecal samples and positive controls, which was confirmed by sequence analysis to correspond to the mitochondrial COII gene of B. schroederi. Moreover, it was demonstrated that the amount of genomic DNA was linearly correlated with the peak area of the CE-SSCP analysis. Thus, our adopted method can reliably detect the infectious prevalence and intensity of B. schroederi in wild giant pandas. The prevalence of B. schroederi was found to be 54% in the 91 fecal samples examined, and 48% in the fecal samples of 31 identified individual giant pandas. Infectious intensities of the 91 fecal samples were detected to range from 2.8 to 959.2 units/gram, and from 4.8 to 959.2 units/gram in the fecal samples of the 31 identified giant pandas. For comparison, by using the traditional microscope method, the prevalence of B. schroederi was found to be only 33% in the 91 fecal samples, 32% in the fecal samples of the 31 identified giant pandas, and no reliable infectious intensity was observed. PMID:22911871

  18. RapMap: a rapid, sensitive and accurate tool for mapping RNA-seq reads to transcriptomes

    PubMed Central

    Srivastava, Avi; Sarkar, Hirak; Gupta, Nitish; Patro, Rob

    2016-01-01

    Motivation: The alignment of sequencing reads to a transcriptome is a common and important step in many RNA-seq analysis tasks. When aligning RNA-seq reads directly to a transcriptome (as is common in the de novo setting or when a trusted reference annotation is available), care must be taken to report the potentially large number of multi-mapping locations per read. This can pose a substantial computational burden for existing aligners, and can considerably slow downstream analysis. Results: We introduce a novel concept, quasi-mapping, and an efficient algorithm implementing this approach for mapping sequencing reads to a transcriptome. By attempting only to report the potential loci of origin of a sequencing read, and not the base-to-base alignment by which it derives from the reference, RapMap—our tool implementing quasi-mapping—is capable of mapping sequencing reads to a target transcriptome substantially faster than existing alignment tools. The algorithm we use to implement quasi-mapping uses several efficient data structures and takes advantage of the special structure of shared sequence prevalent in transcriptomes to rapidly provide highly-accurate mapping information. We demonstrate how quasi-mapping can be successfully applied to the problems of transcript-level quantification from RNA-seq reads and the clustering of contigs from de novo assembled transcriptomes into biologically meaningful groups. Availability and implementation: RapMap is implemented in C++11 and is available as open-source software, under GPL v3, at https://github.com/COMBINE-lab/RapMap. Contact: rob.patro@cs.stonybrook.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307617
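
    The quasi-mapping idea, reporting candidate transcripts of origin without computing base-to-base alignments, can be illustrated with a deliberately simplified k-mer lookup. The sketch below is not RapMap's algorithm or data structures (which are far more efficient); the function names and toy transcripts are assumptions for illustration only.

        from collections import defaultdict

        def build_kmer_index(transcripts, k=15):
            """Map each k-mer to the set of transcripts containing it (toy index)."""
            index = defaultdict(set)
            for name, seq in transcripts.items():
                for i in range(len(seq) - k + 1):
                    index[seq[i:i + k]].add(name)
            return index

        def quasi_map(read, index, k=15):
            """Return candidate transcripts of origin by intersecting k-mer hits.

            Unlike a full aligner, no base-to-base alignment is produced; we only
            report which transcripts are consistent with the read's k-mers.
            """
            candidates = None
            for i in range(0, len(read) - k + 1, k):   # sample non-overlapping k-mers
                hits = index.get(read[i:i + k], set())
                candidates = hits if candidates is None else candidates & hits
                if not candidates:                     # stop once no transcript remains consistent
                    break
            return candidates or set()

        transcripts = {"tx1": "ACGTACGTACGTACGTACGTTTTT", "tx2": "ACGTACGTACGTACGTACGTCCCC"}
        index = build_kmer_index(transcripts)
        print(quasi_map("ACGTACGTACGTACGTACGT", index))   # both transcripts share this prefix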

  19. Determination of Baylisascaris schroederi infection in wild giant pandas by an accurate and sensitive PCR/CE-SSCP method.

    PubMed

    Zhang, Wenping; Yie, Shangmian; Yue, Bisong; Zhou, Jielong; An, Renxiong; Yang, Jiangdong; Chen, Wangli; Wang, Chengdong; Zhang, Liang; Shen, Fujun; Yang, Guangyou; Hou, Rong; Zhang, Zhihe

    2012-01-01

    It has been recognized that other than habitat loss, degradation and fragmentation, the infection of the roundworm Baylisascaris schroederi (B. schroederi) is one of the major causes of death in wild giant pandas. However, the prevalence and intensity of the parasite infection has been inconsistently reported through a method that uses sedimentation-floatation followed by a microscope examination. This method fails to accurately determine infection because there are many bamboo residues and/or few B. schroederi eggs in the examined fecal samples. In the present study, we adopted a method that uses PCR and capillary electrophoresis combined with a single-strand conformation polymorphism analysis (PCR/CE-SSCP) to detect B. schroederi infection in wild giant pandas at a nature reserve, and compared it to the traditional microscope approach. The PCR specifically amplified a single band of 279-bp from both fecal samples and positive controls, which was confirmed by sequence analysis to correspond to the mitochondrial COII gene of B. schroederi. Moreover, it was demonstrated that the amount of genomic DNA was linearly correlated with the peak area of the CE-SSCP analysis. Thus, our adopted method can reliably detect the infectious prevalence and intensity of B. schroederi in wild giant pandas. The prevalence of B. schroederi was found to be 54% in the 91 fecal samples examined, and 48% in the fecal samples of 31 identified individual giant pandas. Infectious intensities of the 91 fecal samples were detected to range from 2.8 to 959.2 units/gram, and from 4.8 to 959.2 units/gram in the fecal samples of the 31 identified giant pandas. For comparison, by using the traditional microscope method, the prevalence of B. schroederi was found to be only 33% in the 91 fecal samples, 32% in the fecal samples of the 31 identified giant pandas, and no reliable infectious intensity was observed.

  20. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose Next-generation sequencing (NGS) based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations, optic atrophy and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, reproducibility as well as the clinical sensitivity of the test. Results The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion The data suggest that based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400

  1. Towards precision medicine.

    PubMed

    Ashley, Euan A

    2016-08-16

    There is great potential for genome sequencing to enhance patient care through improved diagnostic sensitivity and more precise therapeutic targeting. To maximize this potential, genomics strategies that have been developed for genetic discovery - including DNA-sequencing technologies and analysis algorithms - need to be adapted to fit clinical needs. This will require the optimization of alignment algorithms, attention to quality-coverage metrics, tailored solutions for paralogous or low-complexity areas of the genome, and the adoption of consensus standards for variant calling and interpretation. Global sharing of this more accurate genotypic and phenotypic data will accelerate the determination of causality for novel genes or variants. Thus, a deeper understanding of disease will be realized that will allow its targeting with much greater therapeutic precision. PMID:27528417

  2. Liquid Hybridization and Solid Phase Detection: A Highly Sensitive and Accurate Strategy for MicroRNA Detection in Plants and Animals.

    PubMed

    Li, Fosheng; Mei, Lanju; Zhan, Cheng; Mao, Qiang; Yao, Min; Wang, Shenghua; Tang, Lin; Chen, Fang

    2016-01-01

    MicroRNAs (miRNAs) play important roles in nearly every aspect of biology, including physiological, biochemical, developmental and pathological processes. Therefore, a highly sensitive and accurate method of detection of miRNAs has great potential in research on theory and application, such as the clinical approach to medicine, animal and plant production, as well as stress response. Here, we report a strategic method to detect miRNAs from multicellular organisms, which mainly includes liquid hybridization and solid phase detection (LHSPD); it has been verified in various species and is much more sensitive than traditional biotin-labeled Northern blots. By using this strategy and chemiluminescent detection with digoxigenin (DIG)-labeled or biotin-labeled oligonucleotide probes, as low as 0.01-0.25 fmol [for DIG-CDP Star (disodium 2-chloro-5-(4-methoxyspiro{1,2-dioxetane-3,2'-(5'-chloro)tricyclo[3.3.1.13,7]decan}-4-yl)phenyl phosphate) system], 0.005-0.1 fmol (for biotin-CDP Star system), or 0.05-0.5 fmol (for biotin-luminol system) of miRNA can be detected and one-base difference can be distinguished between miRNA sequences. Moreover, LHSPD performed very well in the quantitative analysis of miRNAs, and the whole process can be completed within about 9 h. The strategy of LHSPD provides an effective solution for rapid, accurate, and sensitive detection and quantitative analysis of miRNAs in plants and animals. PMID:27598139

  3. Liquid Hybridization and Solid Phase Detection: A Highly Sensitive and Accurate Strategy for MicroRNA Detection in Plants and Animals

    PubMed Central

    Li, Fosheng; Mei, Lanju; Zhan, Cheng; Mao, Qiang; Yao, Min; Wang, Shenghua; Tang, Lin; Chen, Fang

    2016-01-01

    MicroRNAs (miRNAs) play important roles in nearly every aspect of biology, including physiological, biochemical, developmental and pathological processes. Therefore, a highly sensitive and accurate method of detection of miRNAs has great potential in research on theory and application, such as the clinical approach to medicine, animal and plant production, as well as stress response. Here, we report a strategic method to detect miRNAs from multicellular organisms, which mainly includes liquid hybridization and solid phase detection (LHSPD); it has been verified in various species and is much more sensitive than traditional biotin-labeled Northern blots. By using this strategy and chemiluminescent detection with digoxigenin (DIG)-labeled or biotin-labeled oligonucleotide probes, as low as 0.01–0.25 fmol [for DIG-CDP Star (disodium 2-chloro-5-(4-methoxyspiro{1,2-dioxetane-3,2′-(5′-chloro)tricyclo[3.3.1.13,7]decan}-4-yl)phenyl phosphate) system], 0.005–0.1 fmol (for biotin-CDP Star system), or 0.05–0.5 fmol (for biotin-luminol system) of miRNA can be detected and one-base difference can be distinguished between miRNA sequences. Moreover, LHSPD performed very well in the quantitative analysis of miRNAs, and the whole process can be completed within about 9 h. The strategy of LHSPD provides an effective solution for rapid, accurate, and sensitive detection and quantitative analysis of miRNAs in plants and animals. PMID:27598139

  4. A single procedure for the accurate and precise quantification of the rare earth elements, Sc, Y, Th and Pb in dust and peat for provenance tracing in climate and environmental studies.

    PubMed

    Ferrat, Marion; Weiss, Dominik J; Strekopytov, Stanislav

    2012-05-15

    The geochemical provenancing of atmospheric dust deposited in terrestrial archives such as peat bogs using trace elements is central to the study of atmospheric deposition over the continents and at the heart of many climate and environmental studies. The use of a single digestion method on all sample types involved in such a study (dust archive and sources) minimizes the contribution of the total analytical error when comparing sample compositions and attributing a source to the deposited dust. To date, this factor has limited progress in geographical areas where the compositional variations between the sources and within the archive are small. Here, seven microwave and hot plate digestion methods were tested on rock, soil and plant reference materials to establish a single method optimizing precision and accuracy for all sample types. The best results were obtained with a hot plate closed-vessel digestion with 2 ml HF and 0.5 ml HNO3 for 0.1 g of sample, which allowed the precise, accurate and low-blank quantification of the trace elements La-Yb, Sc, Y, Th and Pb by ICP-MS. This method was tested in a climate study in central Asia, and temporal changes in the dominant dust source were for the first time successfully linked to changes in atmospheric circulation patterns above this region.

  5. Development and validation of a sensitive solid phase extraction/hydrophilic interaction liquid chromatography/mass spectrometry method for the accurate determination of glucosamine in dog plasma.

    PubMed

    Hubert, C; Houari, S; Lecomte, F; Houbart, V; De Bleye, C; Fillet, M; Piel, G; Rozet, E; Hubert, Ph

    2010-05-01

    A sensitive and accurate LC/MS method was developed for monitoring glucosamine (GLcN) concentrations in dog plasma. Relatively low plasma concentrations of GLcN were expected, ranging from 50 to 1000 ng/mL. Liquid chromatography coupled to single-quadrupole mass spectrometry (LC/MS) was selected, providing the selectivity and sensitivity needed for this application. Additionally, a solid phase extraction (SPE) step was performed to reduce matrix and ion suppression effects. Owing to the ionisable character of the compound of interest, a mixed-mode strong cation exchange (Plexa PCX) disposable extraction cartridge (DEC) was selected. The separation was carried out on a Zorbax SB-CN column (5 µm, 4.6 mm i.d. × 250 mm) under hydrophilic interaction liquid chromatography (HILIC) conditions, with a mobile phase of methanol and 5 mM ammonium hydrogen carbonate buffer at pH 7.5 (95/5, v/v). Detection was performed at m/z 180.0 and 417.0 for GLcN and the internal standard (IS), respectively. Reliability of the results was demonstrated through validation of the method using an accuracy-profile approach, which manages the risk associated with routine use of the method: each future result is thus guaranteed to fall within the ±30% acceptance limits with a probability of at least 90%. Successful application to a preliminary pharmacokinetic study illustrated the usefulness of the method for preclinical studies.

  6. IrisPlex: a sensitive DNA tool for accurate prediction of blue and brown eye colour in the absence of ancestry information.

    PubMed

    Walsh, Susan; Liu, Fan; Ballantyne, Kaye N; van Oven, Mannis; Lao, Oscar; Kayser, Manfred

    2011-06-01

    A new era of 'DNA intelligence' is arriving in forensic biology, due to the impending ability to predict externally visible characteristics (EVCs) from biological material such as those found at crime scenes. EVC prediction from forensic samples, or from body parts, is expected to help concentrate police investigations towards finding unknown individuals, at times when conventional DNA profiling fails to provide informative leads. Here we present a robust and sensitive tool, termed IrisPlex, for the accurate prediction of blue and brown eye colour from DNA in future forensic applications. We used the six currently most eye colour-informative single nucleotide polymorphisms (SNPs) that previously revealed prevalence-adjusted prediction accuracies of over 90% for blue and brown eye colour in 6168 Dutch Europeans. The single multiplex assay, based on SNaPshot chemistry and capillary electrophoresis, both widely used in forensic laboratories, displays high levels of genotyping sensitivity with complete profiles generated from as little as 31pg of DNA, approximately six human diploid cell equivalents. We also present a prediction model to correctly classify an individual's eye colour, via probability estimation solely based on DNA data, and illustrate the accuracy of the developed prediction test on 40 individuals from various geographic origins. Moreover, we obtained insights into the worldwide allele distribution of these six SNPs using the HGDP-CEPH samples of 51 populations. Eye colour prediction analyses from HGDP-CEPH samples provide evidence that the test and model presented here perform reliably without prior ancestry information, although future worldwide genotype and phenotype data shall confirm this notion. As our IrisPlex eye colour prediction test is capable of immediate implementation in forensic casework, it represents one of the first steps forward in the creation of a fully individualised EVC prediction system for future use in forensic DNA intelligence.

  7. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. In many cases, however, the colored solution turns colorless with time, which casts doubt on the genuineness of the bribe case before the judiciary. Until now, no method has been available for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be the most sensitive and accurate method, capable of detecting and quantifying trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/µL (equivalently, ng/mL), and the calibration curve showed good linearity (r² = 0.9974). PMID:23106487

  8. Rapid, Sensitive, and Accurate Evaluation of Drug Resistant Mutant (NS5A-Y93H) Strain Frequency in Genotype 1b HCV by Invader Assay.

    PubMed

    Yoshimi, Satoshi; Ochi, Hidenori; Murakami, Eisuke; Uchida, Takuro; Kan, Hiromi; Akamatsu, Sakura; Hayes, C Nelson; Abe, Hiromi; Miki, Daiki; Hiraga, Nobuhiko; Imamura, Michio; Aikata, Hiroshi; Chayama, Kazuaki

    2015-01-01

    Daclatasvir and asunaprevir dual oral therapy is expected to achieve high sustained virological response (SVR) rates in patients with HCV genotype 1b infection. However, the presence of the NS5A-Y93H substitution at baseline has been shown to be an independent predictor of treatment failure for this regimen. Using the Invader assay, we developed a system to rapidly and accurately detect the presence of mutant strains and evaluate the proportion of patients harboring a pre-treatment Y93H mutation. This assay system, consisting of nested PCR followed by an Invader reaction with well-designed primers and probes, attained a high overall assay success rate of 98.9% among a total of 702 Japanese HCV genotype 1b patients. Even in serum samples with low HCV titers, more than half of the samples could be successfully assayed. Our assay system achieved a lower detection limit for the Y93H proportion than direct sequencing, and Y93H frequencies obtained by this method correlated well with those of deep-sequencing analysis (r = 0.85, P < 0.001). The proportion of patients with the mutant strain estimated by this assay was 23.6% (164/694). Interestingly, patients with the Y93H mutant strain showed significantly lower ALT levels (p = 8.8 × 10⁻⁴), higher serum HCV RNA levels (p = 4.3 × 10⁻⁷), and lower HCC risk (p = 6.9 × 10⁻³) than those with the wild-type strain. Because the method is both sensitive and rapid, the NS5A-Y93H mutant strain detection system established in this study may provide important pre-treatment information valuable not only for treatment decisions but also for prediction of disease progression in HCV genotype 1b patients. PMID:26083687

  9. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA were then compared to the BMC data from micro-CT to assess accuracy. Intra-class correlation coefficients for bone mineral density (BMD) with the new method ranged from 0.743 to 0.945, and Levene's test showed significantly lower variance in the data generated by the new method; both findings verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC at all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method.

  10. Guidelines for Dual Energy X-Ray Absorptiometry Analysis of Trabecular Bone-Rich Regions in Mice: Improved Precision, Accuracy, and Sensitivity for Assessing Longitudinal Bone Changes.

    PubMed

    Shi, Jiayu; Lee, Soonchul; Uyeda, Michael; Tanjaya, Justine; Kim, Jong Kil; Pan, Hsin Chuan; Reese, Patricia; Stodieck, Louis; Lin, Andy; Ting, Kang; Kwak, Jin Hee; Soo, Chia

    2016-05-01

    Trabecular bone is frequently studied in osteoporosis research because changes in trabecular bone are the most common cause of osteoporotic fractures. Dual energy X-ray absorptiometry (DXA) analysis specific to trabecular bone-rich regions is crucial to longitudinal osteoporosis research. The purpose of this study is to define a novel method for accurately analyzing trabecular bone-rich regions in mice via DXA. This method will be utilized to analyze scans obtained from the International Space Station in an upcoming study of microgravity-induced bone loss. Thirty 12-week-old BALB/c mice were studied. The novel method was developed by preanalyzing trabecular bone-rich sites in the distal femur, proximal tibia, and lumbar vertebrae via high-resolution X-ray imaging followed by DXA and micro-computed tomography (micro-CT) analyses. The key DXA steps described by the novel method were (1) proper mouse positioning, (2) region of interest (ROI) sizing, and (3) ROI positioning. The precision of the new method was assessed by reliability tests and a 14-week longitudinal study. The bone mineral content (BMC) data from DXA were then compared to the BMC data from micro-CT to assess accuracy. Intra-class correlation coefficients for bone mineral density (BMD) with the new method ranged from 0.743 to 0.945, and Levene's test showed significantly lower variance in the data generated by the new method; both findings verified its consistency. With the new method, a Bland-Altman plot displayed good agreement between DXA BMC and micro-CT BMC at all sites, and the two were strongly correlated at the distal femur and proximal tibia (r=0.846, p<0.01; r=0.879, p<0.01, respectively). The results suggest that the novel method for site-specific analysis of trabecular bone-rich regions in mice via DXA yields more precise, accurate, and repeatable BMD measurements than the conventional method. PMID:26956416
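
    The Bland-Altman style agreement check described above can be reproduced in a few lines; the helper below and the values fed to it are placeholders for illustration, not the study's data or code.

        import numpy as np

        def bland_altman(dxa_bmc, microct_bmc):
            """Return the mean bias and 95% limits of agreement between two BMC measures."""
            a, b = np.asarray(dxa_bmc, float), np.asarray(microct_bmc, float)
            diff = a - b
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)
            return bias, (bias - half_width, bias + half_width)

        # Placeholder values (mg), not taken from the paper
        dxa =      [21.0, 23.5, 19.8, 22.1, 20.4]
        micro_ct = [20.6, 23.9, 19.5, 22.4, 20.1]
        print(bland_altman(dxa, micro_ct))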

  11. A Sensitive Identification of Warm Debris Disks in the Solar Neighborhood through Precise Calibration of Saturated WISE Photometry

    NASA Astrophysics Data System (ADS)

    Patel, Rahul I.; Metchev, Stanimir A.; Heinze, Aren

    2014-05-01

    We present a sensitive search for WISE W3 (12 μm) and W4 (22 μm) excesses from warm optically thin dust around Hipparcos main sequence stars within 75 pc from the Sun. We use contemporaneously measured photometry from WISE, remove sources of contamination, and derive and apply corrections to saturated fluxes to attain optimal sensitivity to >10 μm excesses. We use data from the WISE All-Sky Survey Catalog rather than the AllWISE release because we find that its saturated photometry is better behaved, allowing us to detect small excesses even around saturated stars in WISE. Our new discoveries increase by 45% the number of stars with warm dusty excesses and expand the number of known debris disks (with excess at any wavelength) within 75 pc by 29%. We identify 220 Hipparcos debris disk host stars, 108 of which are new detections at any wavelength. We present the first measurement of a 12 μm and/or 22 μm excess for 10 stars with previously known cold (50-100 K) disks. We also find five new stars with small but significant W3 excesses, adding to the small population of known exozodi, and we detect evidence for a W2 excess around HIP 96562 (F2V), indicative of tenuous hot (780 K) dust. As a result of our WISE study, the number of debris disks with known 10-30 μm excesses within 75 pc (379) has now surpassed the number of disks with known >30 μm excesses (289, with 171 in common), even if the latter have been found to have a higher occurrence rate in unbiased samples.

  12. Precision and sensitivity of the measurement of 15N enrichment in D-alanine from bacterial cell walls using positive/negative ion mass spectrometry

    NASA Technical Reports Server (NTRS)

    Tunlid, A.; Odham, G.; Findlay, R. H.; White, D. C.

    1985-01-01

    Sensitive detection of cellular components from specific groups of microbes can be utilized as 'signatures' in the examination of microbial consortia from soils, sediments or biofilms. Utilizing capillary gas chromatography/mass spectrometry and stereospecific derivatizing agents, D-alanine, a component localized in the prokaryotic (bacterial) cell wall, can be detected reproducibly. Enrichments of D-[15N]alanine in E. coli grown with [15N]ammonia can be determined with a precision of 1.0 atom%. Chemical ionization with methane gas and the detection of negative ions (M - HF)- and (M - F or M + H - HF)- formed from the heptafluorobutyryl D-2-butanol ester of D-alanine allowed as little as 8 pg (90 fmol) to be detected reproducibly. This method can be utilized to define metabolic activity in terms of 15N incorporation at the level of 10³-10⁴ cells, as a function of the 15N/14N ratio.
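
    For orientation, the atom percent enrichment quoted above follows from the measured isotope ratio by the usual relation (general isotope arithmetic, not a formula from this paper):

        \text{atom\%}\ ^{15}\mathrm{N} \;=\; 100 \times \frac{R}{1+R}, \qquad R = \frac{[^{15}\mathrm{N}]}{[^{14}\mathrm{N}]} .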

  13. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  14. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  15. High Dynamics and Precision Optical Measurement Using a Position Sensitive Detector (PSD) in Reflection-Mode: Application to 2D Object Tracking over a Smart Surface

    PubMed Central

    Ivan, Ioan Alexandru; Ardeleanu, Mihai; Laurent, Guillaume J.

    2012-01-01

    When used with a single, high-contrast object or a laser spot, position-sensing (or position-sensitive) detectors (PSDs) have a series of advantages over classical camera sensors, including good positioning accuracy at fast response times and very simple signal-conditioning circuits. To test the performance of this kind of sensor for microrobotics, we made a comparative analysis between a precise but slow video camera and a custom-made fast PSD system applied to the tracking of a diffuse-reflectivity object transported by a pneumatic microconveyor called Smart-Surface. Until now, the fast system dynamics have prevented full control of the smart surface by visual servoing, short of using a very expensive high-frame-rate camera. We built and tested a custom, low-cost PSD-based embedded circuit, optically connected with a camera to a single objective by means of a beam splitter. A stroboscopic light source enhanced the resolution. The results showed good linearity and a fast (over 500 frames per second) response time, which will enable future closed-loop control using the PSD. PMID:23223078

  16. High dynamics and precision optical measurement using a position sensitive detector (PSD) in reflection-mode: application to 2D object tracking over a Smart Surface.

    PubMed

    Ivan, Ioan Alexandru; Ardeleanu, Mihai; Laurent, Guillaume J

    2012-01-01

    When used with a single, high-contrast object or a laser spot, position-sensing (or position-sensitive) detectors (PSDs) have a series of advantages over classical camera sensors, including good positioning accuracy at fast response times and very simple signal-conditioning circuits. To test the performance of this kind of sensor for microrobotics, we made a comparative analysis between a precise but slow video camera and a custom-made fast PSD system applied to the tracking of a diffuse-reflectivity object transported by a pneumatic microconveyor called Smart-Surface. Until now, the fast system dynamics have prevented full control of the smart surface by visual servoing, short of using a very expensive high-frame-rate camera. We built and tested a custom, low-cost PSD-based embedded circuit, optically connected with a camera to a single objective by means of a beam splitter. A stroboscopic light source enhanced the resolution. The results showed good linearity and a fast (over 500 frames per second) response time, which will enable future closed-loop control using the PSD. PMID:23223078
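
    For context, the position of a light spot on a one-dimensional (duo-lateral) PSD follows from the two electrode photocurrents by the generic textbook relation below; sign and axis conventions vary between devices, and this is not a formula quoted from the paper:

        x \;=\; \frac{L}{2}\,\frac{I_2 - I_1}{I_1 + I_2} ,

    where I_1 and I_2 are the electrode photocurrents and L is the active length; a two-dimensional (tetra-lateral) device applies the analogous ratio on each axis.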

  17. Self-assessment of pain and discomfort in patients with temporomandibular disorders: a comparison of five different scales with respect to their precision and sensitivity as well as their capacity to register memory of pain and discomfort.

    PubMed

    Magnusson, T; List, T; Helkimo, M

    1995-08-01

    Five different scales of self-assessment of pain were tested in patients with temporomandibular disorders. The precision and sensitivity and the capacity to register memory of pain and discomfort were compared for each of the five scales. The behaviour rating scale was found to be superior to the other four scales in respect of precision and sensitivity to pain and discomfort and when recording the memory of these two variables. This scale was also considered by the patients to be the most relevant and the simplest to understand. From these results, the behaviour rating scale can be recommended when measuring pain and discomfort in patients with temporomandibular disorders.

  18. Direct quantification of lycopene in products derived from thermally processed tomatoes: optothermal window as a selective, sensitive, and accurate analytical method without the need for preparatory steps.

    PubMed

    Bicanic, Dane; Swarts, Jan; Luterotti, Svjetlana; Pietraperzia, Giangaetano; Dóka, Otto; de Rooij, Hans

    2004-09-01

    The concept of the optothermal window (OW) is proposed as a reliable analytical tool to rapidly determine the concentration of lycopene in a large variety of commercial tomato products in an extremely simple way (the determination is achieved without the need for pretreatment of the sample). The OW is a relative technique as the information is deduced from the calibration curve that relates the OW data (i.e., the product of the absorption coefficient β and the thermal diffusion length μ) with the lycopene concentration obtained from spectrophotometric measurements. The accuracy of the method has been ascertained with a high correlation coefficient (R = 0.98) between the OW data and results acquired from the same samples by means of the conventional extraction spectrophotometric method. The intrinsic precision of the OW method is quite high (better than 1%), whereas the repeatability of the determination (RSD = 0.4-9.5%, n = 3-10) is comparable to that of spectrophotometry.
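
    The thermal diffusion length μ entering the OW signal βμ has the standard definition below (α is the sample's thermal diffusivity, ω = 2πf the angular modulation frequency); this is general photothermal theory rather than a result of the cited work:

        \mu \;=\; \sqrt{\frac{2\alpha}{\omega}} \;=\; \sqrt{\frac{\alpha}{\pi f}} .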

  19. Assessing the beginning to end-of-mission sensitivity change of the PREcision MOnitor Sensor total solar irradiance radiometer (PREMOS/PICARD)

    NASA Astrophysics Data System (ADS)

    Ball, William T.; Schmutz, Werner; Fehlmann, André; Finsterle, Wolfgang; Walter, Benjamin

    2016-08-01

    The switching of the total solar irradiance (TSI) backup radiometer (PREMOS-B) to a primary role for 2 weeks at the end of the PICARD mission provides a unique opportunity to test the fundamental hypothesis of radiometer experiments in space, which is that the sensitivity change of instruments due to the space environment is identical for the same instrument type as a function of solar-exposure time of the instruments. We verify this hypothesis for the PREMOS TSI radiometers within the PREMOS experiment on the PICARD mission. We confirm that the sensitivity change of the backup instrument, PREMOS-B, is similar to that of the identically-constructed primary radiometer, PREMOS-A. The extended exposure of the backup instrument at the end of the mission allows for the assessment, with an uncertainty estimate, of the sensitivity change of the primary radiometer from the beginning of the PICARD mission compared to the end, and of the degradation of the backup over the mission. We correct six sets of PREMOS-B observations connecting October 2011 with February 2014, using six ratios from simultaneous PREMOS-A and PREMOS-B exposures during the first days of PREMOS-A operation in 2010. These ratios are then used, without indirect estimates or assumptions, to evaluate the stability of SORCE/TIM and SOHO/VIRGO TSI measurements, which have both operated for more than a decade and now show different trends over the time span of the PICARD mission, namely from 2010 to 2014. We find that by February 2014 relative to October 2011 PREMOS-B supports the SORCE/TIM TSI time evolution, which in May 2014 relative to October 2011 is ~0.11 W m⁻², or ~84 ppm, higher than SOHO/VIRGO. Such a divergence between SORCE/TIM and SOHO/VIRGO over this period is a significant fraction of the estimated decline of 0.2 W m⁻² between the solar minima of 1996 and 2008, and questions the reliability of that estimated trend. Extrapolating the uncertainty indicated by the disagreement of SORCE/TIM and PREMOS
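
    Tying the backup radiometer to the primary with ratios from overlapping exposures, as described above, amounts to a simple rescaling; the sketch below is a schematic illustration with placeholder names and values, not the PREMOS processing code.

        import numpy as np

        def tie_backup_to_primary(tsi_backup, tsi_primary_overlap, tsi_backup_overlap):
            """Scale backup-radiometer TSI onto the primary radiometer's scale.

            The correction factor is the mean primary/backup ratio over simultaneous
            (overlapping) exposures; it is then applied to all backup observations.
            """
            ratio = np.mean(np.asarray(tsi_primary_overlap) / np.asarray(tsi_backup_overlap))
            return np.asarray(tsi_backup) * ratio

        # Placeholder numbers in W m^-2, not actual PREMOS measurements
        primary_overlap = [1361.10, 1361.15, 1361.08]
        backup_overlap  = [1360.60, 1360.66, 1360.59]
        print(tie_backup_to_primary([1360.70, 1360.72], primary_overlap, backup_overlap))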

  20. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion.

    PubMed

    Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines. PMID:26587699

  1. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion

    PubMed Central

    Otero-Millan, Jorge; Roberts, Dale C.; Lasker, Adrian; Zee, David S.; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines. PMID:26587699

  2. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion.

    PubMed

    Otero-Millan, Jorge; Roberts, Dale C; Lasker, Adrian; Zee, David S; Kheradmand, Amir

    2015-01-01

    Torsional eye movements are rotations of the eye around the line of sight. Measuring torsion is essential to understanding how the brain controls eye position and how it creates a veridical perception of object orientation in three dimensions. Torsion is also important for diagnosis of many vestibular, neurological, and ophthalmological disorders. Currently, there are multiple devices and methods that produce reliable measurements of horizontal and vertical eye movements. Measuring torsion, however, noninvasively and reliably has been a longstanding challenge, with previous methods lacking real-time capabilities or suffering from intrusive artifacts. We propose a novel method for measuring eye movements in three dimensions using modern computer vision software (OpenCV) and concepts of iris recognition. To measure torsion, we use template matching of the entire iris and automatically account for occlusion of the iris and pupil by the eyelids. The current setup operates binocularly at 100 Hz with noise <0.1° and is accurate within 20° of gaze to the left, to the right, and up and 10° of gaze down. This new method can be widely applicable and fill a gap in many scientific and clinical disciplines.

  3. Precision and Recall for Regression

    NASA Astrophysics Data System (ADS)

    Torgo, Luis; Ribeiro, Rita

    Cost-sensitive prediction is a key task in many real-world applications. Most existing research in this area deals with classification problems. This paper addresses a related regression problem: the prediction of rare extreme values of a continuous variable. These values are often regarded as outliers and removed from subsequent analysis. However, for many applications (e.g. in finance, meteorology, biology, etc.) these are the key values that we want to accurately predict. Any learning method obtains models by optimizing some preference criteria. In this paper we propose new evaluation criteria that are better suited to these applications. We describe a generalization for regression of the concepts of precision and recall often used in classification. Using these new evaluation metrics we are able to focus the evaluation of predictive models on the cases that really matter for these applications. Our experiments indicate the advantages of using these new measures when comparing predictive models in the context of our target applications.
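
    A simplified, hypothetical illustration of the idea (not the paper's exact formulation): a relevance function marks rare extreme values as the cases that matter, and precision and recall are computed only over those relevant cases, with a prediction counting as a hit when it is both flagged as relevant and close to the true value.

    ```python
    # Simplified illustration of precision/recall for regression: phi(y) in
    # [0, 1] encodes relevance of extreme values; all numbers are invented.
    import numpy as np

    def phi(y, low, high):
        """Toy relevance function: 1 for extremes, 0 in the 'normal' range."""
        return ((y < low) | (y > high)).astype(float)

    def precision_recall_regression(y_true, y_pred, low, high, tol):
        y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
        rel_true = phi(y_true, low, high)      # which cases really matter
        rel_pred = phi(y_pred, low, high)      # which cases the model flags
        hit = (np.abs(y_true - y_pred) <= tol).astype(float)
        # precision: of the flagged cases, how many were relevant and close
        precision = (rel_pred * rel_true * hit).sum() / max(rel_pred.sum(), 1e-9)
        # recall: of the truly relevant cases, how many were flagged and close
        recall = (rel_true * rel_pred * hit).sum() / max(rel_true.sum(), 1e-9)
        return precision, recall

    # Example: rare extremes are values below 0 or above 10
    p, r = precision_recall_regression(
        y_true=[0.5, 12.0, -3.0, 5.0], y_pred=[0.4, 11.5, 4.0, 5.2],
        low=0.0, high=10.0, tol=1.0)
    print(f"precision={p:.2f} recall={r:.2f}")
    ```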

  4. Watch the Children: Precision Referring

    ERIC Educational Resources Information Center

    Hiltbrunner, Curtis L.; Vasa, Stanley F.

    1974-01-01

    The Precision Referral Form (PRF) is described as a quick, accurate and easy instrument that enables teachers to communicate learning and behavior problems of students to resource or ancillary personnel and to pinpoint students' behaviors. (GW)

  5. High-Precision Tungsten Isotopic Analysis by Multicollection Negative Thermal Ionization Mass Spectrometry Based on Simultaneous Measurement of W and (18)O/(16)O Isotope Ratios for Accurate Fractionation Correction.

    PubMed

    Trinquier, Anne; Touboul, Mathieu; Walker, Richard J

    2016-02-01

    Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng.
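
    For orientation, the sketch below shows a generic exponential-law internal normalization of a W isotope ratio. It is illustrative only: it uses nominal literature masses and a commonly quoted 186W/184W normalizing value, and it omits the per-integration oxide-interference correction that is the subject of the paper.

    ```python
    # Generic exponential-law mass-bias correction (illustrative sketch only).
    # MASS values and R_NORM_TRUE are nominal numbers used for illustration.
    import math

    MASS = {182: 181.9482, 183: 182.9502, 184: 183.9509, 186: 185.9544}
    R_NORM_TRUE = 0.92767      # assumed 'true' 186W/184W used for normalization

    def exp_law_correct(r_meas, num, den, r_norm_meas, norm=(186, 184)):
        """Correct a measured ratio num/den using the exponential law."""
        beta = math.log(R_NORM_TRUE / r_norm_meas) / math.log(MASS[norm[0]] / MASS[norm[1]])
        return r_meas * (MASS[num] / MASS[den]) ** beta

    # Example: correct a measured 182W/184W given the measured 186W/184W
    r182_corr = exp_law_correct(r_meas=0.86480, num=182, den=184, r_norm_meas=0.92600)
    print(f"fractionation-corrected 182W/184W = {r182_corr:.5f}")
    ```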

  6. Using hyperspectral data in precision farming applications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision farming practices such as variable rate applications of fertilizer and agricultural chemicals require accurate field variability mapping. This chapter investigated the value of hyperspectral remote sensing in providing useful information for five applications of precision farming: (a) Soil...

  7. A simple, sensitive, and accurate alcohol electrode

    SciTech Connect

    Verduyn, C.; Scheffers, W.A.; Van Dijken, J.P.

    1983-04-01

    The construction and performance of an enzyme electrode that specifically detects lower primary aliphatic alcohols in aqueous solutions are described. The electrode consists of a commercial Clark-type oxygen electrode on which alcohol oxidase (E.C. 1.1.3.13) and catalase were immobilized. The decrease in electrode current is linearly proportional to ethanol concentrations between 1 and 25 ppm. The response of the electrode remains constant during 400 assays over a period of two weeks. The response time is between 1 and 2 min. Assembly of the electrode takes less than 1 h.
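
    Since the abstract states that the current decrease is linear in ethanol concentration over 1-25 ppm, a reading can be converted back to concentration with an ordinary linear calibration. The sketch below uses invented calibration points purely for illustration.

    ```python
    # Hedged sketch of a linear calibration of the kind implied by the
    # abstract; the calibration points below are made up for illustration.
    import numpy as np

    cal_ppm  = np.array([1.0, 5.0, 10.0, 25.0])   # known ethanol standards
    cal_drop = np.array([0.8, 4.1, 8.2, 20.3])    # measured current drop (nA)

    slope, intercept = np.polyfit(cal_ppm, cal_drop, 1)  # least-squares line

    def ethanol_ppm(current_drop_na):
        """Invert the calibration line to estimate ethanol concentration."""
        return (current_drop_na - intercept) / slope

    print(f"{ethanol_ppm(12.0):.1f} ppm")   # a 12 nA drop -> about 14.7 ppm
    ```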

  8. Portable high precision pressure transducer system

    DOEpatents

    Piper, T.C.; Morgan, J.P.; Marchant, N.J.; Bolton, S.M.

    1994-04-26

    A high precision pressure transducer system is described for checking the reliability of a second pressure transducer system used to monitor the level of a fluid confined in a holding tank. Since the response of the pressure transducer is temperature sensitive, it is continually housed in a battery-powered oven which is configured to provide a temperature-stable environment at a specified temperature for an extended period of time. Further, a high precision temperature-stabilized oscillator and counter are coupled to a single-board computer to accurately determine the pressure transducer oscillation frequency and convert it to an applied pressure. All of the components are powered by batteries which, during periods of availability of line power, are charged by an on-board battery charger. The pressure readings are output to a line printer and a vacuum fluorescent display. 2 figures.
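
    The frequency-to-pressure conversion described above can be pictured as counting oscillator cycles over a gate time and mapping the resulting frequency through a calibration curve. The sketch below is a hypothetical software analogue; the gate time and polynomial coefficients are invented, not taken from the patent.

    ```python
    # Hedged sketch of the frequency-to-pressure conversion: count oscillator
    # cycles over a gate time, then map frequency to pressure through a
    # calibration polynomial. All constants below are invented.

    GATE_TIME_S = 1.0                       # counter gate time (s)
    CAL = (-1.0e4, 0.75, -2.0e-7)           # hypothetical c0 + c1*f + c2*f^2 (kPa)

    def frequency_hz(cycle_count, gate_time_s=GATE_TIME_S):
        """Frequency from a simple cycle count over a known gate time."""
        return cycle_count / gate_time_s

    def pressure_kpa(freq_hz):
        """Map transducer frequency to applied pressure via the calibration."""
        c0, c1, c2 = CAL
        return c0 + c1 * freq_hz + c2 * freq_hz ** 2

    counts = 14_800                          # cycles counted in one gate period
    print(f"{pressure_kpa(frequency_hz(counts)):.1f} kPa")
    ```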

  9. Portable high precision pressure transducer system

    DOEpatents

    Piper, Thomas C.; Morgan, John P.; Marchant, Norman J.; Bolton, Steven M.

    1994-01-01

    A high precision pressure transducer system for checking the reliability of a second pressure transducer system used to monitor the level of a fluid confined in a holding tank. Since the response of the pressure transducer is temperature sensitive, it is continually housed in a battery-powered oven which is configured to provide a temperature-stable environment at a specified temperature for an extended period of time. Further, a high precision temperature-stabilized oscillator and counter are coupled to a single-board computer to accurately determine the pressure transducer oscillation frequency and convert it to an applied pressure. All of the components are powered by batteries which, during periods of availability of line power, are charged by an on-board battery charger. The pressure readings are output to a line printer and a vacuum fluorescent display.

  10. Portable high precision pressure transducer system

    NASA Astrophysics Data System (ADS)

    Piper, T. C.; Morgan, J. P.; Marchant, N. J.; Bolton, S. M.

    A high precision pressure transducer system for checking the reliability of a second pressure transducer system used to monitor the level of a fluid confined in a holding tank is presented. Since the response of the pressure transducer is temperature sensitive, it is continually housed in a battery-powered oven which is configured to provide a temperature-stable environment at a specified temperature for an extended period of time. Further, a high precision temperature-stabilized oscillator and counter are coupled to a single-board computer to accurately determine the pressure transducer oscillation frequency and convert it to an applied pressure. All of the components are powered by batteries which, during periods of availability of line power, are charged by an on-board battery charger. The pressure readings are output to a line printer and a vacuum fluorescent display.

  11. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  12. Serial measurement of hFABP and high-sensitivity troponin I post-PCI in STEMI: how fast and accurate can myocardial infarct size and no-reflow be predicted?

    PubMed

    Uitterdijk, André; Sneep, Stefan; van Duin, Richard W B; Krabbendam-Peters, Ilona; Gorsse-Bakker, Charlotte; Duncker, Dirk J; van der Giessen, Willem J; van Beusekom, Heleen M M

    2013-10-01

    The objective of this study was to compare heart-specific fatty acid binding protein (hFABP) and high-sensitivity troponin I (hsTnI) via serial measurements to identify early time points to accurately quantify infarct size and no-reflow in a preclinical swine model of ST-elevated myocardial infarction (STEMI). Myocardial necrosis, usually confirmed by hsTnI or TnT, takes several hours of ischemia before plasma levels rise in the absence of reperfusion. We evaluated the fast marker hFABP compared with hsTnI to estimate infarct size and no-reflow upon reperfused (2 h occlusion) and nonreperfused (8 h occlusion) STEMI in swine. In STEMI (n = 4) and STEMI + reperfusion (n = 8) induced in swine, serial blood samples were taken for hFABP and hsTnI and compared with triphenyl tetrazolium chloride and thioflavin-S staining for infarct size and no-reflow at the time of euthanasia. hFABP increased faster than hsTnI upon occlusion (82 ± 29 vs. 180 ± 73 min, P < 0.05) and increased immediately upon reperfusion while hsTnI release was delayed 16 ± 3 min (P < 0.05). Peak hFABP and hsTnI reperfusion values were reached at 30 ± 5 and 139 ± 21 min, respectively (P < 0.05). Infarct size (containing 84 ± 0.6% no-reflow) correlated well with area under the curve for hFABP (r(2) = 0.92) but less for hsTnI (r(2) = 0.53). At 50 and 60 min reperfusion, hFABP correlated best with infarct size (r(2) = 0.94 and 0.93) and no-reflow (r(2) = 0.96 and 0.94) and showed high sensitivity for myocardial necrosis (2.3 ± 0.6 and 0.4 ± 0.6 g). hFABP rises faster and correlates better with infarct size and no-reflow than hsTnI in STEMI + reperfusion when measured early after reperfusion. The highest sensitivity detecting myocardial necrosis, 0.4 ± 0.6 g at 60 min postreperfusion, provides an accurate and early measurement of infarct size and no-reflow.

  13. How Accurately can we Calculate Thermal Systems?

    SciTech Connect

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  14. Precision electron polarimetry

    SciTech Connect

    Chudakov, Eugene A.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at ~300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  15. Precision electron polarimetry

    NASA Astrophysics Data System (ADS)

    Chudakov, E.

    2013-11-01

    A new generation of precise Parity-Violating experiments will require sub-percent accuracy of electron beam polarimetry. Compton polarimetry can provide such accuracy at high energies, but at a few hundred MeV the small analyzing power limits the sensitivity. Møller polarimetry provides a high analyzing power independent of the beam energy, but is limited by the properties of the polarized targets commonly used. Options for precision polarimetry at 300 MeV will be discussed, in particular a proposal to use ultra-cold atomic hydrogen traps to provide a 100%-polarized electron target for Møller polarimetry.

  16. Precision translator

    DOEpatents

    Reedy, R.P.; Crawford, D.W.

    1982-03-09

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  17. Precision translator

    DOEpatents

    Reedy, Robert P.; Crawford, Daniel W.

    1984-01-01

    A precision translator for focusing a beam of light on the end of a glass fiber, which includes two tuning-fork-like members rigidly connected to each other. Each member has two prongs whose separation is adjusted by a screw, thereby adjusting the orthogonal positioning of a glass fiber attached to one of the members. The translator is made of simple parts and is capable of holding its adjustment even under rough handling.

  18. Facile realization of efficient blocking from ZnO/TiO2 mismatch interface in dye-sensitized solar cells and precise microscopic modeling adapted by circuit analysis

    NASA Astrophysics Data System (ADS)

    Ameri, Mohsen; Samavat, Feridoun; Mohajerani, Ezeddin; Fathollahi, Mohammad-Reza

    2016-06-01

    In the present research, the effect of ZnO-based blocking layers on the operational features of TiO2-based dye-sensitized solar cells is investigated. A facile solution-based coating method is applied to prepare an interfacial, highly transparent ZnO compact blocking layer (CBL) to enhance the efficiency of dye-sensitized solar cells. Different precursor molar concentrations were tested to find the optimum concentration. Optical and electrical measurements were carried out to confirm the operation of the CBLs. Morphological characterizations were performed by scanning electron microscopy (SEM) and atomic force microscopy (AFM) to investigate the structure of the compact layers. We have also developed a set of modeling procedures to extract the effective electrical parameters, including the parasitic resistances and charge-carrier profiles, to investigate the effect of CBLs on dye-sensitized solar cell (DSSC) performance. The adopted modeling approach should establish a versatile framework for the diagnosis of DSSCs and facilitate the exploration of critical factors influencing device performance.
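
    As background to the circuit-analysis parameter extraction mentioned above, the sketch below evaluates a generic one-diode equivalent-circuit model of a solar cell, the usual starting point when fitting series and shunt (parasitic) resistances to an I-V curve. All parameter values are invented, and the authors' own microscopic circuit model is more detailed than this.

    ```python
    # Generic one-diode equivalent-circuit model (illustrative parameters only).
    import math

    Q_OVER_K = 11604.52           # e/k_B in K/V
    T = 298.0                     # cell temperature (K)
    VT = T / Q_OVER_K             # thermal voltage, ~0.0257 V

    def cell_current(v, i_ph=0.015, i_0=1e-8, n=2.0, r_s=10.0, r_sh=5e3):
        """Solve I = Iph - I0*(exp((V+I*Rs)/(n*VT)) - 1) - (V+I*Rs)/Rsh by bisection."""
        def residual(i):
            v_j = v + i * r_s                      # junction voltage
            return i_ph - i_0 * math.expm1(v_j / (n * VT)) - v_j / r_sh - i
        lo, hi = -0.05, i_ph + 0.05                # bracket the root
        for _ in range(80):                        # residual is monotone in i
            mid = 0.5 * (lo + hi)
            if residual(mid) > 0:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    for v in (0.0, 0.3, 0.6, 0.7):
        print(f"V = {v:.2f} V -> I = {1e3 * cell_current(v):.2f} mA")
    ```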

  19. A new sample preparation and separation combination for precise, accurate, rapid, and simultaneous determination of vitamins B1, B2, B3, B5, B6, B7, and B9 in infant formula and related nutritionals by LC-MS/MS.

    PubMed

    Cellar, Nicholas A; McClure, Sean C; Salvati, Louis M; Reddy, Todime M

    2016-08-31

    An improved method was developed for simultaneous determination of the fortified forms of thiamine (B1), riboflavin (B2), nicotinamide and nicotinic acid (B3), pantothenic acid (B5), pyridoxine (B6), biotin (B7), and folic acid (B9) in infant formulas and related nutritionals. The method employed a simple, effective, and rapid sample preparation followed by liquid chromatography tandem mass spectrometry (LC-MS/MS). It improved upon previous methodologies by offering facile and rugged sample preparation with improved chromatographic conditions, which culminated in a highly accurate and precise method for water-soluble vitamin determination in a wide range of formulas. The method was validated over six days in ten unique matrices with two analysts and on instruments in two different labs. Intermediate precision averaged 3.4 ± 2.6% relative standard deviation and over-spike recovery averaged 100.2 ± 2.4% (n = 160). Due to refinements in sample preparation, the method had high sample throughput capacity. PMID:27506358

  20. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to assure that capitation bids are based upon accurate costs rather than simple averages. PMID:8788799
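
    A toy numerical contrast of the two approaches compared in the article, with invented figures: aggregate (ratio-of-cost-to-treatment) costing spreads overhead and nursing cost evenly across treatments, while activity-based costing assigns them according to the resources each treatment actually consumes.

    ```python
    # Toy illustration with invented numbers: aggregate vs activity-based costing.
    treatments = {
        # name: (nursing_minutes, supply_cost_dollars)
        "simple_dressing": (10, 5.0),
        "complex_wound_care": (90, 60.0),
    }
    nursing_rate_per_min = 1.0      # hypothetical fully loaded nursing cost
    overhead_total = 200.0          # hypothetical general overhead to allocate

    total_cost = sum(mins * nursing_rate_per_min + supplies
                     for mins, supplies in treatments.values()) + overhead_total
    aggregate_cost = total_cost / len(treatments)        # same cost per treatment

    total_minutes = sum(mins for mins, _ in treatments.values())
    for name, (mins, supplies) in treatments.items():
        abc_cost = (mins * nursing_rate_per_min + supplies
                    + overhead_total * mins / total_minutes)   # overhead by activity
        print(f"{name}: aggregate ${aggregate_cost:.2f} vs ABC ${abc_cost:.2f}")
    ```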

  1. Self-assembly of DNA and cell-adhesive proteins onto pH-sensitive inorganic crystals for precise and efficient transgene delivery.

    PubMed

    Chowdhury, E H

    2008-01-01

    Intracellular delivery of a functional gene or a gene-silencing DNA or RNA sequence is expected to be a powerful tool for treating critical human diseases very precisely and effectively. One of the major hurdles to the successful delivery of a nucleic acid with nanoparticles is transport across the plasma membrane. The existence of various and numerous cell surface receptors with the potential to be internalized by cells upon ligand binding points to ways of overcoming this barrier by targeting the nanoparticles to a specific receptor. This review covers current progress on utilizing cell adhesion molecules as targeting receptors for transgene delivery, with a special focus on the design of bio-functionalized inorganic nanocrystals using both naturally occurring and genetically engineered cell-adhesive proteins for high-efficiency transfection of embryonic stem cells. Self-assembly of both DNA and cell-adhesive proteins, such as fibronectin and E-cadherin-Fc, into the growing nanocrystals of carbonate apatite leads to their high-affinity interactions with fibronectin-specific integrins and E-cadherin on the embryonic stem cell surface and accelerates transgene delivery for subsequent expression. While apatite nanoparticles alone were very inefficient in transfecting embryonic stem cells, fibronectin-anchored particles and, to a more significant extent, fibronectin- and E-cadherin-Fc-associated particles dramatically enhanced transgene delivery, with a value notably higher than that of a commercially available lipofection system. Activation of protein kinase C (PKC) dramatically enhances transgene expression, probably by up-regulating both integrin and E-cadherin. Thus, this newly established bio-functional hybrid gene carrier should promote and facilitate the development of stem cell-based therapy in regenerative medicine.

  2. Precision spectroscopy of Helium

    SciTech Connect

    Cancio, P.; Giusfredi, G.; Mazzotti, D.; De Natale, P.; De Mauro, C.; Krachmalnicoff, V.; Inguscio, M.

    2005-05-05

    Accurate Quantum-Electrodynamics (QED) tests of the simplest bound three body atomic system are performed by precise laser spectroscopic measurements in atomic Helium. In this paper, we present a review of measurements between triplet states at 1083 nm (23S-23P) and at 389 nm (23S-33P). In 4He, such data have been used to measure the fine structure of the triplet P levels and, then, to determine the fine structure constant when compared with equally accurate theoretical calculations. Moreover, the absolute frequencies of the optical transitions have been used for Lamb-shift determinations of the levels involved with unprecedented accuracy. Finally, determination of the He isotopes nuclear structure and, in particular, a measurement of the nuclear charge radius, are performed by using hyperfine structure and isotope-shift measurements.

  3. Modified algesimeter provides accurate depth measurements

    NASA Technical Reports Server (NTRS)

    Turner, D. P.

    1966-01-01

    Algesimeter which incorporates a standard sensory needle with a sensitive micrometer, measures needle point depth penetration in pain tolerance research. This algesimeter provides an inexpensive, precise instrument with assured validity of recordings in those biomedical areas with a requirement for repeated pain detection or ascertaining pain sensitivity.

  4. Precision and power grip priming by observed grasping.

    PubMed

    Vainio, Lari; Tucker, Mike; Ellis, Rob

    2007-11-01

    The coupling of hand grasping stimuli and the subsequent grasp execution was explored in normal participants. Participants were asked to respond with their right or left hand to the accuracy of an observed (dynamic) grasp while they were holding precision or power grasp response devices in their hands (e.g., precision device/right hand; power device/left hand). The observed hand was making either accurate or inaccurate precision or power grasps, and participants signalled the accuracy of the observed grip by making one or the other response depending on instructions. Responses were made faster when they matched the observed grip type. The two grasp types differed in their sensitivity to the end-state (i.e., accuracy) of the observed grip. The end-state influenced the power grasp congruency effect more than the precision grasp effect when the observed hand was performing the grasp without any goal object (Experiments 1 and 2). However, the end-state also influenced the precision grip congruency effect (Experiment 3) when the action was object-directed. The data are interpreted as behavioural evidence of the automatic imitation coding of the observed actions. The study suggests that, in goal-oriented imitation coding, the context of an action (e.g., being object-directed) is a more important factor in coding precision grips than power grips.

  5. High-torque precision stepping drive

    NASA Technical Reports Server (NTRS)

    Kaspareck, W. E.

    1968-01-01

    Stepping drive has been designed for precise incremental angular positioning of scale models of spacecraft about a horizontal axis in order to accurately measure antenna receiving and transmitting characteristics. Positioning is insured by spring-loaded, self-locking plungers.

  6. High precision anatomy for MEG.

    PubMed

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-02-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were <1.5 mm between sessions. We corroborated these scalp-based estimates by looking at the MEG data recorded over a 6-month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  7. High precision anatomy for MEG☆

    PubMed Central

    Troebinger, Luzia; López, José David; Lutti, Antoine; Bradbury, David; Bestmann, Sven; Barnes, Gareth

    2014-01-01

    Precise MEG estimates of neuronal current flow are undermined by uncertain knowledge of the head location with respect to the MEG sensors. This is either due to head movements within the scanning session or systematic errors in co-registration to anatomy. Here we show how such errors can be minimized using subject-specific head-casts produced using 3D printing technology. The casts fit the scalp of the subject internally and the inside of the MEG dewar externally, reducing within session and between session head movements. Systematic errors in matching to MRI coordinate system are also reduced through the use of MRI-visible fiducial markers placed on the same cast. Bootstrap estimates of absolute co-registration error were of the order of 1 mm. Estimates of relative co-registration error were < 1.5 mm between sessions. We corroborated these scalp based estimates by looking at the MEG data recorded over a 6 month period. We found that the between session sensor variability of the subject's evoked response was of the order of the within session noise, showing no appreciable noise due to between-session movement. Simulations suggest that the between-session sensor level amplitude SNR improved by a factor of 5 over conventional strategies. We show that at this level of coregistration accuracy there is strong evidence for anatomical models based on the individual rather than canonical anatomy; but that this advantage disappears for errors of greater than 5 mm. This work paves the way for source reconstruction methods which can exploit very high SNR signals and accurate anatomical models; and also significantly increases the sensitivity of longitudinal studies with MEG. PMID:23911673

  8. Drilling Precise Orifices and Slots

    NASA Technical Reports Server (NTRS)

    Richards, C. W.; Seidler, J. E.

    1983-01-01

    Reaction control thrustor injector requires precisely machined orifices and slots. Tooling setup consists of rotary table, numerical control system and torque sensitive drill press. Components used to drill oxidizer orifices. Electric discharge machine drills fuel-feed orifices. Device automates production of identical parts so several are completed in less time than previously.

  9. High precision triangular waveform generator

    DOEpatents

    Mueller, Theodore R.

    1983-01-01

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.
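
    A software analogue of the behaviour described above (not the patented circuit): a triangular waveform whose ascending and descending ramp rates and voltage limits are programmed independently, mimicking the integration of two switched constant-current sources.

    ```python
    # Minimal sketch: asymmetric triangle wave with independently programmed slopes.
    def triangle_wave(v_low, v_high, up_rate, down_rate, dt, n_samples):
        """Yield n_samples of an asymmetric triangle between v_low and v_high.

        up_rate / down_rate are the ascending / descending slopes in volts per
        second; dt is the sample interval in seconds.
        """
        v, rising = v_low, True
        for _ in range(n_samples):
            yield v
            if rising:
                v += up_rate * dt
                if v >= v_high:
                    v, rising = v_high, False
            else:
                v -= down_rate * dt
                if v <= v_low:
                    v, rising = v_low, True

    # Example: ramp up at 1 V/ms and down at 4 V/ms between 0 and 2 V
    samples = list(triangle_wave(0.0, 2.0, 1000.0, 4000.0, 1e-5, 300))
    ```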

  10. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  11. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema where accuracy degenerates to second-order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
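
    For context, the sketch below implements a standard monotonicity-limited cubic Hermite interpolant (a Fritsch-Carlson-style slope limiter), i.e. the kind of second-order-near-extrema baseline the abstract refers to; the paper's higher-order, median-based algorithms are not reproduced here.

    ```python
    # Baseline sketch: monotonicity-limited cubic Hermite interpolation.
    import numpy as np

    def monotone_cubic(x, y, xq):
        x, y = np.asarray(x, float), np.asarray(y, float)
        h = np.diff(x)
        delta = np.diff(y) / h                      # secant slopes
        d = np.zeros_like(y)                        # node slopes
        d[1:-1] = 0.5 * (delta[:-1] + delta[1:])
        d[0], d[-1] = delta[0], delta[-1]
        for i in range(1, len(x) - 1):              # enforce monotonicity
            if delta[i - 1] * delta[i] <= 0.0:
                d[i] = 0.0
            else:
                d[i] = np.sign(delta[i]) * min(abs(d[i]),
                                               3 * min(abs(delta[i - 1]), abs(delta[i])))
        d[0] = np.sign(delta[0]) * min(abs(d[0]), 3 * abs(delta[0]))
        d[-1] = np.sign(delta[-1]) * min(abs(d[-1]), 3 * abs(delta[-1]))
        # evaluate the cubic Hermite pieces at the query points
        idx = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)
        t = (np.asarray(xq, float) - x[idx]) / h[idx]
        h00, h10 = 2 * t**3 - 3 * t**2 + 1, t**3 - 2 * t**2 + t
        h01, h11 = -2 * t**3 + 3 * t**2, t**3 - t**2
        return (h00 * y[idx] + h10 * h[idx] * d[idx]
                + h01 * y[idx + 1] + h11 * h[idx] * d[idx + 1])

    print(monotone_cubic([0, 1, 2, 3], [0, 1, 1, 2], [0.5, 1.5, 2.5]))
    ```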

  12. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.

  13. Preparation and accurate measurement of pure ozone.

    PubMed

    Janssen, Christof; Simone, Daniela; Guinet, Mickaël

    2011-03-01

    Preparation of high purity ozone as well as precise and accurate measurement of its pressure are metrological requirements that are difficult to meet due to ozone decomposition occurring in pressure sensors. The most stable and precise transducer heads are heated and, therefore, prone to accelerated ozone decomposition, limiting measurement accuracy and compromising purity. Here, we describe a vacuum system and a method for ozone production, suitable to accurately determine the pressure of pure ozone by avoiding the problem of decomposition. We use an inert gas in a particularly designed buffer volume and can thus achieve high measurement accuracy and negligible degradation of ozone with purities of 99.8% or better. The high degree of purity is ensured by comprehensive compositional analyses of ozone samples. The method may also be applied to other reactive gases. PMID:21456766

  14. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe at both very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique both to avoid the use of approximate computational codes and to allow the application of new, more precise analysis

  15. SPLASH: Accurate OH maser positions

    NASA Astrophysics Data System (ADS)

    Walsh, Andrew; Gomez, Jose F.; Jones, Paul; Cunningham, Maria; Green, James; Dawson, Joanne; Ellingsen, Simon; Breen, Shari; Imai, Hiroshi; Lowe, Vicki; Jones, Courtney

    2013-10-01

    The hydroxyl (OH) 18 cm lines are powerful and versatile probes of diffuse molecular gas that may trace a largely unstudied component of the Galactic ISM. SPLASH (the Southern Parkes Large Area Survey in Hydroxyl) is a large, unbiased and fully-sampled survey of OH emission, absorption and masers in the Galactic Plane that will achieve sensitivities an order of magnitude better than previous work. In this proposal, we request ATCA time to follow up OH maser candidates. This will give us accurate (~10") positions of the masers, which can be compared to other maser positions from HOPS, MMB and MALT-45 and will provide full polarisation measurements towards a sample of OH masers that have not been observed in MAGMO.

  16. Environment Assisted Precision Magnetometry

    NASA Astrophysics Data System (ADS)

    Cappellaro, P.; Goldstein, G.; Maze, J. R.; Jiang, L.; Hodges, J. S.; Sorensen, A. S.; Lukin, M. D.

    2010-03-01

    We describe a method to enhance the sensitivity of magnetometry and achieve nearly Heisenberg-limited precision measurement using a novel class of entangled states. An individual qubit is used to sense the dynamics of surrounding ancillary qubits, which are in turn affected by the external field to be measured. The resulting sensitivity enhancement is determined by the number of ancillas strongly coupled to the sensor qubit; it does not depend on the exact values of the couplings (allowing the use of disordered systems) and is resilient to decoherence. As a specific example we consider electronic spins in the solid state, where the ancillary system is associated with the surrounding spin bath. The conventional approach has been to consider these spins only as a source of decoherence and to adopt decoupling schemes to mitigate their effects. Here we describe novel control techniques that transform the environment spins into a resource used to amplify the sensor spin response to weak external perturbations, while maintaining the beneficial effects of dynamical decoupling sequences. We discuss specific applications to improve magnetic sensing with diamond nano-crystals, using one Nitrogen-Vacancy center spin coupled to Nitrogen electronic spins.

  17. Tube dimpling tool assures accurate dip-brazed joints

    NASA Technical Reports Server (NTRS)

    Beuyukian, C. S.; Heisman, R. M.

    1968-01-01

    Portable, hand-held dimpling tool assures accurate brazed joints between tubes of different diameters. Prior to brazing, the tool performs precise dimpling and nipple forming and also provides control and accurate measuring of the height of nipples and depth of dimples so formed.

  18. Accurate Feeding of Nanoantenna by Singular Optics for Nanoscale Translational and Rotational Displacement Sensing.

    PubMed

    Xi, Zheng; Wei, Lei; Adam, A J L; Urbach, H P; Du, Luping

    2016-09-01

    Identifying subwavelength objects and displacements is of crucial importance in optical nanometrology. We show in this Letter that nanoantennas with subwavelength structures can be excited precisely by incident beams with singularity. This accurate feeding beyond the diffraction limit can lead to dynamic control of the unidirectional scattering in the far field. The combination of the field discontinuity of the incoming singular beam with the rapid phase variation near the antenna leads to remarkable sensitivity of the far-field scattering to the displacement at a scale much smaller than the wavelength. This Letter introduces a far-field deep subwavelength position detection method based on the interaction of singular optics with nanoantennas.

  19. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  20. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong, G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on delivery accuracy at the meter fix. The 10-second rounded and 1-minute rounded DCT resolutions resulted in more accurate delivery than the 1-minute truncated resolution and were preferred by the controllers. Using the speeds the controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
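
    To make the resolution conditions concrete, the toy example below shows how a raw delay value might appear under the three DCT resolutions compared in the study; the mapping is my reading of the condition labels, not the study's software.

    ```python
    # Small illustration (my interpretation, not the study's code) of how a raw
    # TBFM delay might be shown under the three DCT resolutions.
    def dct_display(delay_s):
        ten_s_rounded = round(delay_s / 10.0) * 10            # 10 s, rounded
        one_min_rounded = round(delay_s / 60.0) * 60          # 1 min, rounded
        one_min_truncated = int(delay_s // 60) * 60           # 1 min, truncated
        return ten_s_rounded, one_min_rounded, one_min_truncated

    for raw in (95, 118, 149):
        print(raw, "->", dct_display(raw))
    # 95 s -> (100, 120, 60): truncation can understate the delay by nearly a minute
    ```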

  1. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  2. Simple and accurate optical height sensor for wafer inspection systems

    NASA Astrophysics Data System (ADS)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method for projecting a laser spot onto the wafer surface obliquely and detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating for the measurement error due to surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected onto the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the slit images in both directions of the captured image. Pattern-related measurement error was reduced by applying this coordinate averaging to the multiple-slit-projection method. Accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and at ±0.1 mm from the reference height in a simple configuration.
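
    A hedged sketch of the coordinate-averaging step described above: locate the slit images in the captured frame, average their centroid coordinates in both directions, and convert the displacement of that mean position from a reference into a height change. The incidence angle, magnification and pixel size below are assumptions for illustration, not values from the paper.

    ```python
    # Sketch of slit-centroid averaging for an oblique-incidence height sensor.
    import numpy as np

    PIXEL_UM = 5.0            # pixel pitch at the sensor (um), assumed
    MAGNIFICATION = 2.0       # imaging magnification, assumed
    INCIDENCE_DEG = 70.0      # angle of incidence from normal, assumed

    def mean_slit_centroid(image, threshold):
        """Average (x, y) of all bright slit pixels, intensity-weighted."""
        ys, xs = np.nonzero(image > threshold)
        w = image[ys, xs].astype(float)
        return np.average(xs, weights=w), np.average(ys, weights=w)

    def height_change_um(image, reference_centroid, threshold=128):
        cx, cy = mean_slit_centroid(image, threshold)
        # displacement along the plane of incidence (here taken as the x axis)
        dx_um = (cx - reference_centroid[0]) * PIXEL_UM / MAGNIFICATION
        # common triangulation relation: image shift = 2 * height * sin(incidence)
        return dx_um / (2.0 * np.sin(np.radians(INCIDENCE_DEG)))
    ```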

  3. Accurate Optical Reference Catalogs

    NASA Astrophysics Data System (ADS)

    Zacharias, N.

    2006-08-01

    Current and near future all-sky astrometric catalogs on the ICRF are reviewed with the emphasis on reference star data at optical wavelengths for user applications. The standard error of a Hipparcos Catalogue star position is now about 15 mas per coordinate. For the Tycho-2 data it is typically 20 to 100 mas, depending on magnitude. The USNO CCD Astrograph Catalog (UCAC) observing program was completed in 2004 and reductions toward the final UCAC3 release are in progress. This all-sky reference catalogue will have positional errors of 15 to 70 mas for stars in the 10 to 16 mag range, with a high degree of completeness. Proper motions for the about 60 million UCAC stars will be derived by combining UCAC astrometry with available early epoch data, including yet unpublished scans of the complete set of AGK2, Hamburg Zone astrograph and USNO Black Birch programs. Accurate positional and proper motion data are combined in the Naval Observatory Merged Astrometric Dataset (NOMAD) which includes Hipparcos, Tycho-2, UCAC2, USNO-B1, NPM+SPM plate scan data for astrometry, and is supplemented by multi-band optical photometry as well as 2MASS near infrared photometry. The Milli-Arcsecond Pathfinder Survey (MAPS) mission is currently being planned at USNO. This is a micro-satellite to obtain 1 mas positions, parallaxes, and 1 mas/yr proper motions for all bright stars down to about 15th magnitude. This program will be supplemented by a ground-based program to reach 18th magnitude on the 5 mas level.

  4. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as determining the camera relative exposure value (REV) and the threshold in the histogram, have hindered accurate computation of gap fraction. Here we propose a novel method that enables gap fraction to be measured accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image, which is corrected for scattering effects by canopies using a sky image reconstructed from the raw-format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REVs of 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies over a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the novel method yielded accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
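
    A minimal sketch of the basic DCP computation, for orientation only: gap fraction as the share of sky pixels after a simple threshold. The paper's actual contribution, the scattering correction using a sky image reconstructed from the raw file, is deliberately omitted.

    ```python
    # Basic gap-fraction computation: fraction of pixels classified as sky.
    import numpy as np

    def gap_fraction(blue_channel, threshold):
        """Fraction of pixels classified as sky (value above the threshold)."""
        blue = np.asarray(blue_channel, dtype=float)
        return float((blue > threshold).mean())

    # Example with a synthetic 4x4 'image': 5 bright sky pixels out of 16
    toy = np.array([[250, 240,  30,  20],
                    [235,  25,  28, 230],
                    [ 22,  26,  24,  29],
                    [ 30, 245,  27,  23]])
    print(gap_fraction(toy, threshold=200))   # -> 0.3125
    ```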

  5. Precision performance lamp technology

    NASA Astrophysics Data System (ADS)

    Bell, Dean A.; Kiesa, James E.; Dean, Raymond A.

    1997-09-01

    A principal function of a lamp is to produce light output with designated spectra, intensity, and/or geometric radiation patterns. The function of a precision performance lamp is to go beyond these parameters and into the precision repeatability of performance. All lamps are not equal. There are a variety of incandescent lamps, from the vacuum incandescent indicator lamp to the precision lamp of a blood analyzer. In the past the definition of a precision lamp was described in terms of wattage, light center length (LCL), filament position, and/or spot alignment. This paper presents a new view of precision lamps through the discussion of a new segment of lamp design, which we term precision performance lamps. The definition of precision performance lamps includes (must include) the factors of a precision lamp. But what makes a precision lamp a precision performance lamp is the manner in which the design factors of amperage, mscp (mean spherical candlepower), efficacy (lumens/watt), and life are considered: not individually, but collectively. There is a statistical bias in a precision performance lamp for each of these factors, taken individually and as a whole. When properly considered, the results can be dramatic for the system design engineer, the system production manager, and the system end-user. It can be shown that for the lamp user, the use of precision performance lamps can translate to: (1) ease of system design, (2) simplification of electronics, (3) superior signal to noise ratios, (4) higher manufacturing yields, (5) lower system costs, (6) better product performance. The factors mentioned above are described along with their interdependent relationships. It is statistically shown how the benefits listed above are achievable. Examples are provided to illustrate how proper attention to precision performance lamp characteristics actually aids in system product design and manufacturing to build and market more market-acceptable products in the

  6. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  7. Accurate Guitar Tuning by Cochlear Implant Musicians

    PubMed Central

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  8. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  9. Precision Imaging with Adaptive Optics Aperture Masking Interferometry

    NASA Astrophysics Data System (ADS)

    Martinache, F.; Lloyd, J. P.; Tuthill, P.; Woodruff, H. C.; ten Brummelaar, T.; Turner, N.

    2005-12-01

    Adaptive Optics (AO) enables sensitive diffraction-limited imaging from the ground on large telescopes. Much of the promise of AO has yet to be fully realised, due to the difficulties imposed by the complicated, unstable and unknown PSF. At the highest resolutions (inside the PSF), AO has yet to demonstrate its full potential for improvement over speckle techniques. The most precise astronomical speckle imaging observations have resulted from non-redundant pupil masking. We are developing a technique to solve the problem of PSF characterization in AO imaging by combining the image-reconstruction heritage of sparse pupil sampling from astronomical interferometry with the long coherence times available after AO correction. Masking the output pupil of the AO system with a non-redundant array can provide self-calibrated imaging. Further calibration of the MTF can be provided with AO wavefront sensor telemetry data. With a precision-calibrated PSF, reliable, well-posed deconvolution is possible. High-SNR data and accurate MTF calibration, provided by the combination of non-redundant masking and AO system telemetry, allow super-resolution. AEOS provides a unique capability to explore the dynamic range and imaging precision of this technique at visible wavelengths. The NSF/AFOSR program has funded an instrument to explore these new imaging techniques at AEOS. ZOR/AO (Zero Optical Redundance with Adaptive Optics) is presently under construction, to be deployed at AEOS in 2005.

  10. Proprioceptive precision is impaired in Ehlers-Danlos syndrome.

    PubMed

    Clayton, Holly A; Jones, Stephanie A H; Henriques, Denise Y P

    2015-01-01

    It has been suggested that people with Ehlers-Danlos syndrome (EDS), or other similar connective tissue disorders, may have proprioceptive impairments, the reason for which is still unknown. We recently found that EDS patients were less precise than healthy controls when estimating their felt hand's position relative to visible peripheral reference locations, and that this deficit was positively correlated with the severity of joint hypermobility. We further explore proprioceptive abilities in EDS by having patients localize their non-dominant left hand at a greater number of workspace locations than in our previous study. Additionally, we explore the relationship between chronic pain and proprioceptive sensitivity. We found that, although patients were just as accurate as controls, they were not as precise. Patients showed twice as much scatter as controls at all locations, but the degree of scatter did not positively correlate with chronic pain scores. This further supports the idea that a proprioceptive impairment pertaining to precision is present in EDS, but may not relate to the magnitude of chronic pain.

  11. Proprioceptive precision is impaired in Ehlers-Danlos syndrome.

    PubMed

    Clayton, Holly A; Jones, Stephanie A H; Henriques, Denise Y P

    2015-01-01

    It has been suggested that people with Ehlers-Danlos syndrome (EDS), or other similar connective tissue disorders, may have proprioceptive impairments, the reason for which is still unknown. We recently found that EDS patients were less precise than healthy controls when estimating their felt hand's position relative to visible peripheral reference locations, and that this deficit was positively correlated with the severity of joint hypermobility. We further explore proprioceptive abilities in EDS by having patients localize their non-dominant left hand at a greater number of workspace locations than in our previous study. Additionally, we explore the relationship between chronic pain and proprioceptive sensitivity. We found that, although patients were just as accurate as controls, they were not as precise. Patients showed twice as much scatter as controls at all locations, but the degree of scatter did not positively correlate with chronic pain scores. This further supports the idea that a proprioceptive impairment pertaining to precision is present in EDS, but may not relate to the magnitude of chronic pain. PMID:26180743

  12. Precision volume measuring system

    SciTech Connect

    Klevgard, P.A.

    1984-11-01

    An engineering study was undertaken to calibrate and certify a precision volume measurement system that uses the ideal gas law and precise pressure measurements (of low-pressure helium) to ratio a known to an unknown volume. The constant-temperature, computer-controlled system was tested for thermodynamic instabilities, for precision (0.01%), and for bias (0.01%). Ratio scaling was used to optimize the quartz crystal pressure transducer calibration.
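
    A minimal sketch of the underlying gas-expansion arithmetic, assuming a simple isothermal single-expansion scheme (the actual system and its correction terms are more involved, and the numbers below are made up for illustration): helium at pressure P1 confined to a known reference volume is expanded into the evacuated unknown volume, and the pressure ratio gives the volume ratio.

        # Hedged illustration: isothermal single-expansion volume ratioing.
        # P1 * V_known = P2 * (V_known + V_unknown)  =>  V_unknown = V_known * (P1/P2 - 1)
        V_known = 100.000      # cm^3, calibrated reference volume (assumed)
        P1 = 120.000           # kPa, helium pressure confined to V_known (assumed)
        P2 = 80.000            # kPa, pressure after expansion into the unknown volume (assumed)

        V_unknown = V_known * (P1 / P2 - 1.0)
        print(f"V_unknown = {V_unknown:.3f} cm^3")   # 50.000 cm^3 for these numbers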

  13. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2005-03-08

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  14. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2013-04-02

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  15. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2007-03-20

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  16. Sensing technologies for precision specialty crop production

    Technology Transfer Automated Retrieval System (TEKTRAN)

    With the advances in electronic and information technologies, various sensing systems have been developed for specialty crop production around the world. Accurate information concerning the spatial variability within fields is very important for precision farming of specialty crops. However, this va...

  17. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided with the precision measuring means secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate with the first plate being positioned between the second plate and the third plate. A fourth plate is secured to the third plate with the second plate being positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the positions of the first, second, third, and fourth plates relative to each other.

  18. A Precise Lunar Photometric Function

    NASA Astrophysics Data System (ADS)

    McEwen, A. S.

    1996-03-01

    The Clementine multispectral dataset will enable compositional mapping of the entire lunar surface at a resolution of ~100-200 m, but a highly accurate photometric normalization is needed to achieve challenging scientific objectives such as mapping petrographic or elemental compositions. The goal of this work is to normalize the Clementine data to an accuracy of 1% for the UVVIS images (0.415, 0.75, 0.9, 0.95, and 1.0 micrometers) and 2% for NIR images (1.1, 1.25, 1.5, 2.0, 2.6, and 2.78 micrometers), consistent with radiometric calibration goals. The data will be normalized to R30, the reflectance expected at an incidence angle (i) and phase angle (alpha) of 30 degrees and an emission angle (e) of 0 degrees, matching the photometric geometry of lunar samples measured at the reflectance laboratory (RELAB) at Brown University. The focus here is on the precision of the normalization, not the putative physical significance of the photometric function parameters. The 2% precision achieved is significantly better than the ~10% precision of a previous normalization.
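
    A hedged sketch of what normalizing to R30 means operationally, assuming a simple limb-darkening model times a phase function (the Lommel-Seeliger term and the flat phase polynomial below are illustrative stand-ins, not the fitted Clementine photometric function): the observed reflectance is multiplied by the ratio of the model at the standard geometry (i, e, alpha) = (30, 0, 30) to the model at the observed geometry.

        import math

        # Illustration only: photometric normalization to (i, e, alpha) = (30, 0, 30).
        def lommel_seeliger(i_deg, e_deg):
            mu0 = math.cos(math.radians(i_deg))
            mu = math.cos(math.radians(e_deg))
            return mu0 / (mu0 + mu)

        def normalize_to_r30(reflectance, i_deg, e_deg, alpha_deg, phase_poly=lambda a: 1.0):
            model_obs = lommel_seeliger(i_deg, e_deg) * phase_poly(alpha_deg)
            model_std = lommel_seeliger(30.0, 0.0) * phase_poly(30.0)
            return reflectance * model_std / model_obs

        # Example with made-up observation geometry and reflectance:
        print(normalize_to_r30(0.12, i_deg=55.0, e_deg=10.0, alpha_deg=45.0))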

  19. Precision Teaching: An Introduction.

    ERIC Educational Resources Information Center

    West, Richard P.; And Others

    1990-01-01

    Precision teaching is introduced as a method of helping students develop fluency or automaticity in the performance of academic skills. Precision teaching involves being aware of the relationship between teaching and learning, measuring student performance regularly and frequently, and analyzing the measurements to develop instructional and…

  20. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  1. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  2. Toward precision medicine in neurological diseases.

    PubMed

    Tan, Lin; Jiang, Teng; Tan, Lan; Yu, Jin-Tai

    2016-03-01

    Technological development has paved the way for accelerated genomic discovery and is bringing precision medicine into view. The goal of precision medicine is to deliver optimally targeted and timed interventions tailored to an individual's molecular drivers of disease. Neurological diseases are promisingly suited models for precision medicine because of the rapidly expanding genetic knowledge base, phenotypic classification, the development of biomarkers and the potential modifying treatments. Moving forward, it is crucial that these integrated research platforms provide analysis both for accurate personal genome interpretation and for gene and drug discovery. Here we describe our vision of how precision medicine can bring greater clarity to the clinical and biological complexity of neurological diseases. PMID:27127757

  3. The Use of Accurate Mass Tags for High-Throughput Microbial Proteomics

    SciTech Connect

    Smith, Richard D. ); Anderson, Gordon A. ); Lipton, Mary S. ); Masselon, Christophe D. ); Pasa Tolic, Ljiljana ); Shen, Yufeng ); Udseth, Harold R. )

    2002-08-01

    We describe and demonstrate a global strategy that extends the sensitivity, dynamic range, comprehensiveness, and throughput of proteomic measurements based upon the use of peptide accurate mass tags (AMTs) produced by global protein enzymatic digestion. The two-stage strategy exploits Fourier transform-ion cyclotron resonance (FT-ICR) mass spectrometry to validate peptide AMTs for a specific organism, tissue or cell type from potential mass tags identified using conventional tandem mass spectrometry (MS/MS) methods, providing greater confidence in identifications as well as the basis for subsequent measurements without the need for MS/MS, and thus with greater sensitivity and increased throughput. A single high resolution capillary liquid chromatography separation combined with high sensitivity, high resolution and accurate FT-ICR measurements has been shown capable of characterizing peptide mixtures of significantly more than 10^5 components with mass accuracies of ~1 ppm, sufficient for broad protein identification using AMTs. Other attractions of the approach include the broad and relatively unbiased proteome coverage, the capability for exploiting stable isotope labeling methods to realize high precision for relative protein abundance measurements, and the projected potential for study of mammalian proteomes when combined with additional sample fractionation. Using this strategy, in our first application we have been able to identify AMTs for 60% of the potentially expressed proteins in the organism Deinococcus radiodurans.

  4. Precision volume measurement system.

    SciTech Connect

    Fischer, Erin E.; Shugard, Andrew D.

    2004-11-01

    A new precision volume measurement system based on a Kansas City Plant (KCP) design was built to support the volume measurement needs of the Gas Transfer Systems (GTS) department at Sandia National Labs (SNL) in California. An engineering study was undertaken to verify or refute KCP's claims of 0.5% accuracy. The study assesses the accuracy and precision of the system. The system uses the ideal gas law and precise pressure measurements (of low-pressure helium) in a temperature and computer controlled environment to ratio a known volume to an unknown volume.

  5. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    A precision liquid level sensor utilizes a balanced bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  6. Precision Measurement in Biology

    NASA Astrophysics Data System (ADS)

    Quake, Stephen

    Is biology a quantitative science like physics? I will discuss the role of precision measurement in both physics and biology, and argue that in fact both fields can be tied together by the use and consequences of precision measurement. The elementary quanta of biology are twofold: the macromolecule and the cell. Cells are the fundamental unit of life, and macromolecules are the fundamental elements of the cell. I will describe how precision measurements have been used to explore the basic properties of these quanta, and more generally how the quest for higher precision almost inevitably leads to the development of new technologies, which in turn catalyze further scientific discovery. In the 21st century, there are no remaining experimental barriers to biology becoming a truly quantitative and mathematical science.

  7. Precision displacement reference system

    DOEpatents

    Bieg, Lothar F.; Dubois, Robert R.; Strother, Jerry D.

    2000-02-22

    A precision displacement reference system is described, which enables real time accountability over the applied displacement feedback system to precision machine tools, positioning mechanisms, motion devices, and related operations. As independent measurements of tool location are taken by a displacement feedback system, a rotating reference disk compares feedback counts with performed motion. These measurements are compared to characterize and analyze real time mechanical and control performance during operation.

  8. Precision moisture generation and measurement.

    SciTech Connect

    Thornberg, Steven Michael; White, Michael I.; Irwin, Adriane Nadine

    2010-03-01

    In many industrial processes, gaseous moisture is undesirable as it can lead to metal corrosion, polymer degradation, and other materials aging processes. However, generating and measuring precise moisture concentrations is challenging due to the need to cover a broad concentration range (parts-per-billion to percent) and the affinity of moisture for a wide range of surfaces and materials. This document will discuss the techniques employed by the Mass Spectrometry Laboratory of the Materials Reliability Department at Sandia National Laboratories to generate and measure known gaseous moisture concentrations. This document highlights the use of a chilled mirror and primary standard humidity generator for the characterization of aluminum oxide moisture sensors. The data presented show an excellent correlation between the frost points measured by the two instruments, and thus provide an accurate and reliable platform for characterizing moisture sensors and performing other moisture-related experiments.

  9. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  10. Line gas sampling system ensures accurate analysis

    SciTech Connect

    Not Available

    1992-06-01

    Tremendous changes in the natural gas business have resulted in new approaches to the way natural gas is measured. Electronic flow measurement has altered the business forever, with developments in instrumentation and a new sensitivity to the importance of proper natural gas sampling techniques. This paper reports that YZ Industries Inc., Snyder, Texas, combined its 40 years of sampling experience with the latest in microprocessor-based technology to develop the KynaPak 2000 series, the first on-line natural gas sampling system that is both compact and extremely accurate. For accurate analysis, the composition of the sampled gas must be representative of the whole and related to flow. When it is, measurement and sampling techniques are effectively married: gas volumes are accurately accounted for and adjustments to composition can be made.

  11. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that together with a pre-calibrated camera enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  12. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing
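
    A quick arithmetic check of the reported accuracy figure, using only numbers quoted in the abstract above: with equally sized groups, overall accuracy is simply the average of sensitivity and specificity.

        # Equal group sizes (66 patients, 66 controls), so accuracy = mean(sensitivity, specificity).
        sensitivity, specificity = 0.88, 0.82
        n_patients = n_controls = 66

        true_pos = sensitivity * n_patients
        true_neg = specificity * n_controls
        accuracy = (true_pos + true_neg) / (n_patients + n_controls)
        print(f"accuracy = {accuracy:.0%}")   # 85%, matching the quoted value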

  13. Estimating sparse precision matrices

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Nikhil; White, Martin; Zhou, Harrison H.; O'Connell, Ross

    2016-08-01

    We apply a method recently introduced to the statistical literature to directly estimate the precision matrix from an ensemble of samples drawn from a corresponding Gaussian distribution. Motivated by the observation that cosmological precision matrices are often approximately sparse, the method allows one to exploit this sparsity of the precision matrix to more quickly converge to an asymptotic 1/√N_sim rate while simultaneously providing an error model for all of the terms. Such an estimate can be used as the starting point for further regularization efforts which can improve upon the 1/√N_sim limit above, and incorporating such additional steps is straightforward within this framework. We demonstrate the technique with toy models and with an example motivated by large-scale structure two-point analysis, showing significant improvements in the rate of convergence. For the large-scale structure example, we find errors on the precision matrix which are factors of 5 smaller than for the sample precision matrix for thousands of simulations or, alternatively, convergence to the same error level with more than an order of magnitude fewer simulations.
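
    For readers unfamiliar with the general idea, here is a hedged, generic illustration of exploiting sparsity when estimating a precision matrix. It uses scikit-learn's L1-penalized GraphicalLasso rather than the specific estimator introduced in the paper, and the toy dimensions, penalty, and ground truth are assumptions.

        import numpy as np
        from sklearn.covariance import GraphicalLasso

        rng = np.random.default_rng(0)
        dim, n_sim = 30, 200

        # Sparse ground-truth precision matrix (tridiagonal) and its covariance.
        true_prec = np.eye(dim) + 0.4 * (np.eye(dim, k=1) + np.eye(dim, k=-1))
        true_cov = np.linalg.inv(true_prec)

        # Draw "simulations" from the corresponding Gaussian.
        samples = rng.multivariate_normal(np.zeros(dim), true_cov, size=n_sim)

        # Naive estimate: invert the sample covariance (noisy at modest n_sim).
        sample_prec = np.linalg.inv(np.cov(samples, rowvar=False))

        # Sparsity-exploiting estimate.
        sparse_prec = GraphicalLasso(alpha=0.05).fit(samples).precision_

        for name, est in [("sample", sample_prec), ("graphical lasso", sparse_prec)]:
            err = np.linalg.norm(est - true_prec) / np.linalg.norm(true_prec)
            print(f"{name:16s} relative error: {err:.3f}")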

  14. Expressing precision and bias in calorimetry

    SciTech Connect

    Hauck, Danielle K; Croft, Stephen; Bracken, David S

    2010-01-01

    The calibration and calibration verification of a nuclear calorimeter represents a substantial investment of time in part because a single calorimeter measurement takes of the order of 2 to 24 h to complete. The time to complete a measurement generally increases with the size of the calorimeter measurement well. It is therefore important to plan the sequence of measurements rather carefully so as to cover the dynamic range and achieve the required accuracy within a reasonable time frame. This work will discuss how calibrations and their verification have been done in the past and what we consider to be good general practice in this regard. A proposed approach to calibration and calibration verification is presented which, in the final analysis, makes use of all the available data - both calibration and verification collectively - in order to obtain the best (in a best fit sense) possible calibration. The combination of sample variance and percent recovery is traditionally taken as sufficient to capture the random (precision) and systematic (bias) contributions to the uncertainty in a calorimetric assay. These terms have been defined as well as formulated for a basic calibration. It has been traditional to assume that sensitivity is a linear function of power. However, the availability of computer power and statistical packages should be utilized to fit the response function as accurately as possible using whatever functions are deemed most suitable. Allowing for more flexibility in the response function fit will enable the calibration to be updated according to the results from regular validation measurements throughout the year. In a companion paper to be published elsewhere we plan to discuss alternative fitting functions.

  15. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  16. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  17. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate, and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the subarcminute range which is considerably smaller than the field-of-view of conventional imaging telescope systems. Here we present how this subarcminute accuracy can be achieved with very cheap MEMS sensors and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how can a complete system design be implemented in order to be a part of a telescope control system.

  18. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec(2), the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10(-10) W/m(2); the ultimate nighttime sensitivity is approximately 3 x 10(-14) W/m(2). Experimental tests were performed to evaluate both the dynamic characteristics of this system and the system sensitivity. Dynamic performance of the system was obtained using a small rocket covered with retroreflective material launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft. This aircraft was tracked out to a maximum range of 15 km, which confirmed the daytime sensitivity of the system measured by other means. The system has also been used to passively track stars and the Echo I satellite. It passively tracked a +7.5 magnitude star, and the signal-to-noise ratio in this experiment indicates that it should be possible to track a +12.5 magnitude star.
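
    A back-of-the-envelope illustration of the proportionality quoted above (the implied constant is our inference from the quoted figures, not a number from the paper): if 0.6 rad/s^2 of target angular acceleration produces about 0.1 mrad of lag, the implied acceleration constant of the tracking loop is roughly 6000 s^-2, and the lag at other accelerations scales accordingly.

        # Inferred from the quoted figures; illustrative only.
        alpha_ref = 0.6          # rad/s^2, quoted angular acceleration
        err_ref = 0.1e-3         # rad, quoted tracking error at that acceleration

        k_accel = alpha_ref / err_ref
        print(f"implied acceleration constant ~ {k_accel:.0f} 1/s^2")

        alpha_new = 1.2          # rad/s^2, a hypothetical faster target
        print(f"predicted lag at {alpha_new} rad/s^2: {alpha_new / k_accel * 1e3:.2f} mrad")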

  19. Precision gap particle separator

    DOEpatents

    Benett, William J.; Miles, Robin; Jones, II., Leslie M.; Stockton, Cheryl

    2004-06-08

    A system for separating particles entrained in a fluid includes a base with a first channel and a second channel. A precision gap connects the first channel and the second channel. The precision gap is of a size that allows small particles to pass from the first channel into the second channel and prevents large particles from passing from the first channel into the second channel. A cover is positioned over the base, the first channel, the precision gap, and the second channel. An input port directs the fluid containing the entrained particles into the first channel. An output port directs the large particles out of the first channel. A port connected to the second channel directs the small particles out of the second channel.

  20. How Physics Got Precise

    SciTech Connect

    Kleppner, Daniel

    2005-01-19

    Although the ancients knew the length of the year to about ten parts per million, it was not until the end of the 19th century that precision measurements came to play a defining role in physics. Eventually such measurements made it possible to replace human-made artifacts for the standards of length and time with natural standards. For a new generation of atomic clocks, time keeping could be so precise that the effects of the local gravitational potentials on the clock rates would be important. This would force us to re-introduce an artifact into the definition of the second - the location of the primary clock. I will describe some of the events in the history of precision measurements that have led us to this pleasing conundrum, and some of the unexpected uses of atomic clocks today.

  1. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  2. Platform Precision Autopilot Overview and Mission Performance

    NASA Technical Reports Server (NTRS)

    Strovers, Brian K.; Lee, James A.

    2009-01-01

    The Platform Precision Autopilot is an instrument landing system-interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Unmanned Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.

  3. Precision Heating Process

    NASA Technical Reports Server (NTRS)

    1992-01-01

    A heat sealing process was developed by SEBRA based on technology that originated in work with NASA's Jet Propulsion Laboratory. The project involved connecting and transferring blood and fluids between sterile plastic containers while maintaining a closed system. SEBRA markets the PIRF Process to manufacturers of medical catheters. It is a precisely controlled method of heating thermoplastic materials in a mold to form or weld catheters and other products. The process offers advantages in fast, precise welding or shape forming of catheters as well as applications in a variety of other industries.

  4. Precision Nova operations

    SciTech Connect

    Ehrlich, R.B.; Miller, J.L.; Saunders, R.L.; Thompson, C.E.; Weiland, T.L.; Laumann, C.W.

    1995-09-01

    To improve the symmetry of x-ray drive on indirectly driven ICF capsules, we have increased the accuracy of operating procedures and diagnostics on the Nova laser. Precision Nova operations includes routine precision power balance to within 10% rms in the "foot" and 5% rms in the peak of shaped pulses, beam synchronization to within 10 ps rms, and pointing of the beams onto targets to within 35 {mu}m rms. We have also added a "fail-safe chirp" system to avoid Stimulated Brillouin Scattering (SBS) in optical components during high energy shots.

  5. Attaining the Photometric Precision Required by Future Dark Energy Projects

    SciTech Connect

    Stubbs, Christopher

    2013-01-21

    This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere, 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high precision silicon photodiode, and 3) using the self-consistency of the spectrum of stars to achieve precise color calibrations.

  6. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate determination of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085
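
    As a rough guide to how a lineshape measurement constrains the Boltzmann constant, here is a schematic calculation under the pure Doppler (Gaussian) approximation; the actual experiment fits a far more complete lineshape model, and the transition, temperature, and width below are assumed illustrative values rather than the paper's data.

        import math

        # Schematic Doppler-broadening thermometry (Gaussian approximation only).
        # Assumed illustrative numbers: Cs D1 line near 894.6 nm, cell at 296.00 K.
        c = 299_792_458.0          # m/s
        m_cs = 2.20695e-25         # kg, mass of 133Cs
        nu0 = c / 894.593e-9       # Hz, optical carrier frequency
        T = 296.00                 # K, independently measured cell temperature

        def boltzmann_from_fwhm(delta_nu_fwhm):
            """Invert delta_nu = nu0 * sqrt(8 * kB * T * ln2 / (m * c^2)) for kB."""
            return (delta_nu_fwhm / nu0) ** 2 * m_cs * c ** 2 / (8 * T * math.log(2))

        # Hypothetical measured Doppler FWHM of ~358.2 MHz:
        print(boltzmann_from_fwhm(358.2e6))   # ~1.38e-23 J/K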

  7. Accurate adiabatic correction in the hydrogen molecule

    NASA Astrophysics Data System (ADS)

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-01

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10-12 at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10-7 cm-1, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  8. Accurate adiabatic correction in the hydrogen molecule

    SciTech Connect

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10{sup −12} at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H{sub 2}, HD, HT, D{sub 2}, DT, and T{sub 2} has been determined. For the ground state of H{sub 2} the estimated precision is 3 × 10{sup −7} cm{sup −1}, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels.

  9. Accurate adiabatic correction in the hydrogen molecule.

    PubMed

    Pachucki, Krzysztof; Komasa, Jacek

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10(-12) at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H2, HD, HT, D2, DT, and T2 has been determined. For the ground state of H2 the estimated precision is 3 × 10(-7) cm(-1), which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present day theoretical predictions for the rovibrational levels. PMID:25494728

  10. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András.; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
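
    A minimal sketch of why a 3-axis accelerometer can serve as an attitude sensor, and of the spherical constraint mentioned above (the function and numbers are illustrative assumptions, not the authors' calibration procedure): at rest the sensor measures only gravity, so the reading must lie on a sphere of radius 1 g, and its direction gives the tilt of the mount.

        import numpy as np

        # Illustrative only: tilt angle from a static 3-axis accelerometer reading.
        def tilt_from_accel(ax, ay, az):
            a = np.array([ax, ay, az], dtype=float)
            a /= np.linalg.norm(a)                     # enforce the |a| = 1 g spherical constraint
            # angle between the sensor z-axis and the local vertical
            return np.degrees(np.arccos(np.clip(a[2], -1.0, 1.0)))

        print(f"tilt = {tilt_from_accel(0.01, 0.50, 0.866):.1f} deg")   # ~30 deg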

  11. High-precision triangular-waveform generator

    DOEpatents

    Mueller, T.R.

    1981-11-14

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at the current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  12. Precision orbit determination of altimetric satellites

    NASA Astrophysics Data System (ADS)

    Shum, C. K.; Ries, John C.; Tapley, Byron D.

    1994-11-01

    The ability to determine accurate global sea level variations is important to both detection and understanding of changes in climate patterns. Sea level variability occurs over a wide spectrum of temporal and spatial scales, and precise global measurements are only recently possible with the advent of spaceborne satellite radar altimetry missions. One of the inherent requirements for accurate determination of absolute sea surface topography is that the altimetric satellite orbits be computed with sub-decimeter accuracy within a well defined terrestrial reference frame. SLR tracking in support of precision orbit determination of altimetric satellites is significant. Recent examples are the use of SLR as the primary tracking systems for TOPEX/Poseidon and for ERS-1 precision orbit determination. The current radial orbit accuracy for TOPEX/Poseidon is estimated to be around 3-4 cm, with geographically correlated orbit errors around 2 cm. The significance of the SLR tracking system is its ability to allow altimetric satellites to obtain absolute sea level measurements and thereby provide a link to other altimetry measurement systems for long-term sea level studies. SLR tracking allows the production of precise orbits which are well centered in an accurate terrestrial reference frame. With proper calibration of the radar altimeter, these precise orbits, along with the altimeter measurements, provide long term absolute sea level measurements. The U.S. Navy's Geosat mission is equipped with only Doppler beacons and lacks laser retroreflectors. As a result, Geosat orbits, even those computed using the full 40-station Tranet tracking network, show significant north-south shifts with respect to the IERS terrestrial reference frame. The resulting Geosat sea surface topography will be tilted accordingly, making interpretation of long-term sea level variability studies difficult.

  13. High-Precision Computation and Mathematical Physics

    SciTech Connect

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
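
    A small, hedged illustration of why software-based high precision matters, using the mpmath package (chosen here for convenience; it is not necessarily one of the packages surveyed in the paper): the near-integer structure of exp(pi*sqrt(163)), a staple of experimental mathematics, is invisible at 64-bit precision but obvious at 50 digits.

        from mpmath import mp, mpf, exp, pi, sqrt
        import math

        mp.dps = 50                                    # 50 significant decimal digits

        x_hi = exp(pi * sqrt(mpf(163)))                # high-precision value
        x_double = math.exp(math.pi * math.sqrt(163))  # ordinary 64-bit value

        print(x_hi)       # 262537412640768743.99999999999925007... (the run of nines is resolved)
        print(x_double)   # ~2.62537412640769e+17 -- the fine structure is lost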

  14. Iterative Precise Conductivity Measurement with IDEs.

    PubMed

    Hubálek, Jaromír

    2015-05-22

    The paper presents a new approach in the field of precise electrolytic conductivity measurements with planar thin- and thick-film electrodes. This novel measuring method was developed for measurement with comb-like electrodes called interdigitated electrodes (IDEs). Correction characteristics over a wide range of specific conductivities were determined from an interface impedance characterization of the thick-film IDEs. The local maximum of the capacitive part of the interface impedance is used for corrections to get linear responses. The measuring frequency was determined at a wide range of measured conductivity. An iteration mode of measurements was suggested to precisely measure the conductivity at the right frequency in order to achieve a highly accurate response. The method takes precise conductivity measurements in concentration ranges from 10(-6) to 1 M without electrode cell replacement.
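
    A hedged sketch of what such an iteration loop might look like in control software (measure_conductivity and best_frequency_for are hypothetical stand-ins for the instrument readout and the IDE correction characteristic; they are not part of the published method's code): measure at a starting frequency, use the result to pick the frequency appropriate to that conductivity range, and repeat until the choice stabilizes.

        # Hypothetical iteration skeleton; both callables are stand-ins.
        def iterate_conductivity(measure_conductivity, best_frequency_for,
                                 f_start_hz=10_000.0, rel_tol=0.01, max_iter=10):
            f = f_start_hz
            sigma = measure_conductivity(f)
            for _ in range(max_iter):
                f_new = best_frequency_for(sigma)      # frequency suited to this conductivity range
                sigma_new = measure_conductivity(f_new)
                if abs(f_new - f) <= rel_tol * f:      # frequency choice has settled
                    return sigma_new, f_new
                f, sigma = f_new, sigma_new
            return sigma, f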

  15. Precision bolometer bridge

    NASA Technical Reports Server (NTRS)

    White, D. R.

    1968-01-01

    The prototype precision bolometer calibration bridge is a manually balanced device for indicating dc bias and balance with either dc or ac power. An external galvanometer is used with the bridge for null indication, and the circuitry monitors voltage and current simultaneously without adapters in testing 100 and 200 ohm thin film bolometers.

  16. Precision liquid level sensor

    DOEpatents

    Field, M.E.; Sullivan, W.H.

    1985-01-29

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge. 2 figs.

  17. Precision physics at LHC

    SciTech Connect

    Hinchliffe, I.

    1997-05-01

    In this talk the author gives a brief survey of some physics topics that will be addressed by the Large Hadron Collider currently under construction at CERN. Instead of discussing the reach of this machine for new physics, the author gives examples of the types of precision measurements that might be made if new physics is discovered.

  18. Precision in Stereochemical Terminology

    ERIC Educational Resources Information Center

    Wade, Leroy G., Jr.

    2006-01-01

    An analysis is presented of relatively new terminology that has been given multiple definitions, often resulting in students learning principles that are actually false, using as an example the term stereogenic atom introduced by Mislow and Siegel. The Mislow terminology would be useful in some cases if it were used precisely and correctly, but it is…

  19. High Precision Astrometry

    NASA Astrophysics Data System (ADS)

    Riess, Adam

    2012-10-01

    This program uses the enhanced astrometric precision enabled by spatial scanning to calibrate remaining obstacles to reaching <40 microarcsecond astrometry (<1 millipixel) with WFC3/UVIS by 1) improving geometric distortion, 2) calibrating the effect of breathing on astrometry, 3) calibrating the effect of CTE on astrometry, and 4) characterizing the boundaries and orientations of the WFC3 lithograph cells.

  20. Precision liquid level sensor

    DOEpatents

    Field, Michael E.; Sullivan, William H.

    1985-01-01

    A precision liquid level sensor utilizes a balanced R. F. bridge, each arm including an air dielectric line. Changes in liquid level along one air dielectric line imbalance the bridge and create a voltage which is directly measurable across the bridge.

  1. Accurate Feeding of Nanoantenna by Singular Optics for Nanoscale Translational and Rotational Displacement Sensing.

    PubMed

    Xi, Zheng; Wei, Lei; Adam, A J L; Urbach, H P; Du, Luping

    2016-09-01

    Identifying subwavelength objects and displacements is of crucial importance in optical nanometrology. We show in this Letter that nanoantennas with subwavelength structures can be excited precisely by incident beams with singularity. This accurate feeding beyond the diffraction limit can lead to dynamic control of the unidirectional scattering in the far field. The combination of the field discontinuity of the incoming singular beam with the rapid phase variation near the antenna leads to remarkable sensitivity of the far-field scattering to the displacement at a scale much smaller than the wavelength. This Letter introduces a far-field deep subwavelength position detection method based on the interaction of singular optics with nanoantennas. PMID:27661688

  2. Precision Environmental Radiation Monitoring System

    SciTech Connect

    Vladimir Popov, Pavel Degtiarenko

    2010-07-01

    A new precision low-level environmental radiation monitoring system has been developed and tested at Jefferson Lab. This system provides environmental radiation measurements with accuracy and stability of the order of 1 nGy/h in an hour, roughly corresponding to approximately 1% of the natural cosmic background at sea level. An advanced electronic front-end has been designed and produced for use with the industry-standard High Pressure Ionization Chamber detector hardware. A new highly sensitive readout electronic circuit was designed to measure charge from the virtually suspended ionization chamber ion-collecting electrode. A new signal processing technique and dedicated data acquisition were tested together with the new readout. The system enabled data collection in a remote Linux-operated computer workstation, which was connected to the detectors using a standard telephone cable line. The data acquisition algorithm is built around a continuously running 24-bit resolution, 192 kHz sampling analog-to-digital converter. The major features of the design include: extremely low leakage current in the input circuit, true charge-integrating mode operation, and relatively fast response to intermediate radiation changes. These features allow the device to operate as an environmental radiation monitor at the perimeters of radiation-generating installations in densely populated areas, as well as in other monitoring and security applications requiring high precision and long-term stability. Initial system evaluation results are presented.

  3. Making sense of high sensitivity troponin assays and their role in clinical care.

    PubMed

    Daniels, Lori B

    2014-04-01

    Cardiac troponin assays have an established and undisputed role in the diagnosis and risk stratification of patients with acute myocardial infarction. As troponin assays get more sensitive and more precise, the number of potential uses has rapidly expanded, but the use of this test has also become more complicated and controversial. Highly sensitive troponin assays can now detect troponin levels in most individuals, but accurate interpretation of these levels requires a clear understanding of the assay in the context of the clinical scenario. This paper provides a practical and up-to-date overview of the uses of highly sensitive troponin assays for diagnosis, prognosis, and risk stratification in clinical practice.

  4. A passion for precision

    ScienceCinema

    None

    2016-07-12

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  5. Precision synchrotron radiation detectors

    SciTech Connect

    Levi, M.; Rouse, F.; Butler, J.; Jung, C.K.; Lateur, M.; Nash, J.; Tinsman, J.; Wormser, G.; Gomez, J.J.; Kent, J.

    1989-03-01

    Precision detectors to measure synchrotron radiation beam positions have been designed and installed as part of beam energy spectrometers at the Stanford Linear Collider (SLC). The distance between pairs of synchrotron radiation beams is measured absolutely to better than 28 /mu/m on a pulse-to-pulse basis. This contributes less than 5 MeV to the error in the measurement of SLC beam energies (approximately 50 GeV). A system of high-resolution video cameras viewing precisely-aligned fiducial wire arrays overlaying phosphorescent screens has achieved this accuracy. Also, detectors of synchrotron radiation using the charge developed by the ejection of Compton-recoil electrons from an array of fine wires are being developed. 4 refs., 5 figs., 1 tab.

  6. A passion for precision

    SciTech Connect

    2010-05-19

    For more than three decades, the quest for ever higher precision in laser spectroscopy of the simple hydrogen atom has inspired many advances in laser, optical, and spectroscopic techniques, culminating in femtosecond laser optical frequency combs as perhaps the most precise measuring tools known to man. Applications range from optical atomic clocks and tests of QED and relativity to searches for time variations of fundamental constants. Recent experiments are extending frequency comb techniques into the extreme ultraviolet. Laser frequency combs can also control the electric field of ultrashort light pulses, creating powerful new tools for the emerging field of attosecond science.

  7. Ultra precision machining

    NASA Astrophysics Data System (ADS)

    Debra, Daniel B.; Hesselink, Lambertus; Binford, Thomas

    1990-05-01

    A number of fields require, or can take advantage of, very high precision in machining. For example, further development of high-energy lasers and x-ray astronomy depends critically on the manufacture of lightweight reflective metal optical components. To be fabricated with machine tools, these optical components will be made of metal with a mirror-quality surface finish. By mirror-quality surface finish, it is meant that the dimensional tolerances are on the order of 0.02 microns and the surface roughness on the order of 0.07. These accuracy targets fall in the category of ultra-precision machining. They cannot be achieved by a simple extension of conventional machining processes and techniques. They require single-crystal diamond tools, special attention to vibration isolation, special isolation of machine metrology, and on-line correction of imperfections in the motion of the machine carriages along their ways.

  8. Ultrasonic precision optical grinding technology

    NASA Astrophysics Data System (ADS)

    Cahill, Michael J.; Bechtold, Michael J.; Fess, Edward; Wolfs, Frank L.; Bechtold, Rob

    2015-10-01

    As optical geometries become more precise and complex and a wider range of materials is used, the manufacturing processes become more critical. This is especially true for grinding, the preparatory stage for polishing. Slow processing speeds, accelerated tool wear, and poor surface quality are often detriments in manufacturing glass and hard ceramics. The quality of the ground surface greatly influences the polishing process and the resulting finished product. Through extensive research and development, OptiPro Systems has introduced an ultrasonic-assisted grinding technology, OptiSonic, which has numerous advantages over traditional grinding processes. OptiSonic utilizes a custom tool holder designed to produce oscillations in line with the rotating spindle. A newly developed software package called IntelliSonic is integral to this platform. IntelliSonic automatically characterizes the tool and continuously adjusts the output frequency for optimal cutting while in contact with the part. This helps maintain a highly consistent process under changing load conditions for a more accurate surface. Tests with a wide variety of instruments have shown a reduction in tool wear and an increase in surface quality while allowing processing speeds to be increased. OptiSonic has proven to be an enabling technology for overcoming the difficulties seen in grinding glass and hard optical ceramics, with advantages over the standard CNC grinding process evident in reduced tool wear, better surface quality, and reduced cycle times due to increased feed rates. These benefits can be seen across numerous applications within the precision optics industry.

  9. Ultra-Precision Optics

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Under a Joint Sponsored Research Agreement with Goddard Space Flight Center, SEMATECH, Inc., the Silicon Valley Group, Inc. and Tinsley Laboratories, known as SVG-Tinsley, developed an Ultra-Precision Optics Manufacturing System for space and microlithographic applications. Continuing improvements in optics manufacture will be able to meet unique NASA requirements and the production needs of the lithography industry for many years to come.

  10. Precise clock synchronization protocol

    NASA Astrophysics Data System (ADS)

    Luit, E. J.; Martin, J. M. M.

    1993-12-01

    A distributed clock synchronization protocol is presented which achieves a very high precision without the need for very frequent resynchronizations. The protocol tolerates failures of the clocks: clocks may be too slow or too fast, exhibit omission failures and report inconsistent values. Synchronization takes place in synchronization rounds as in many other synchronization protocols. At the end of each round, clock times are exchanged between the clocks. Each clock applies a convergence function (CF) to the values obtained. This function estimates the difference between its clock and an average clock and corrects its clock accordingly. Clocks are corrected for drift relative to this average clock during the next synchronization round. The protocol is based on the assumption that clock reading errors are small with respect to the required precision of synchronization. It is shown that the CF resynchronizes the clocks with high precision even when relatively large clock drifts are possible. It is also shown that the drift-corrected clocks remain synchronized until the end of the next synchronization round. The stability of the protocol is proven.
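
    The round structure described above can be illustrated with the following sketch, in which a clock applies a fault-tolerant averaging convergence function to the exchanged readings and then corrects its offset and its drift for the next round. This is a generic sketch under stated assumptions, not the paper's actual convergence function or failure model.

```python
# Minimal sketch (assumptions, not the paper's protocol): one synchronization
# round with a trimmed-mean convergence function (CF) and drift correction.
def convergence_function(readings, f):
    """Estimate the 'average clock' from exchanged readings, discarding the
    f largest and f smallest values to tolerate up to f faulty clocks."""
    trimmed = sorted(readings)[f:len(readings) - f]
    return sum(trimmed) / len(trimmed)

def resynchronize(own_time, peer_times, round_length, drift_correction, f=1):
    """Return (offset correction, updated drift correction) for this round."""
    average_clock = convergence_function(peer_times + [own_time], f)
    offset = average_clock - own_time
    # Apply the offset now, and spread a matching rate adjustment over the next
    # round so the clock tracks the average clock between resynchronizations.
    drift_correction += offset / round_length
    return offset, drift_correction

# Example: a clock reading 100.0 s, peers reading slightly different values.
print(resynchronize(100.0, [100.4, 99.9, 100.2, 103.0], round_length=60.0,
                    drift_correction=0.0))
```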

  11. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-07-01

    The Large Electron-Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results was the precise measurement of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while SUSY provides an excellent candidate for dark matter. In addition, SUSY removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  12. Precision Experiments at LEP

    NASA Astrophysics Data System (ADS)

    de Boer, W.

    2015-09-01

    The Large Electron Positron Collider (LEP) established the Standard Model (SM) of particle physics with unprecedented precision, including all its radiative corrections. These led to predictions for the masses of the top quark and Higgs boson, which were beautifully confirmed later on. After these precision measurements the Nobel Prize in Physics was awarded in 1999 jointly to 't Hooft and Veltman "for elucidating the quantum structure of electroweak interactions in physics". Another hallmark of the LEP results was the precise measurement of the gauge coupling constants, which excluded unification of the forces within the SM, but allowed unification within the supersymmetric extension of the SM. This increased the interest in Supersymmetry (SUSY) and Grand Unified Theories, especially since the SM has no candidate for the elusive dark matter, while Supersymmetry provides an excellent candidate for dark matter. In addition, Supersymmetry removes the quadratic divergencies of the SM and predicts the Higgs mechanism from radiative electroweak symmetry breaking with a SM-like Higgs boson having a mass below 130 GeV in agreement with the Higgs boson discovery at the LHC. However, the predicted SUSY particles have not been found either because they are too heavy for the present LHC energy and luminosity or Nature has found alternative ways to circumvent the shortcomings of the SM.

  13. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    SciTech Connect

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.; Larson, T. P.

    2013-08-01

    their uncertainties and the precision of the ridge-to-mode correction schemes, through a detailed assessment of the sensitivity of the model to its input set. The precision of the ridge-to-mode correction is indicative of any possible residual systematic biases in the inferred mode characteristics. In our conclusions, we address how to further improve these estimates, and the implications for other data sets, like GONG+ and HMI.

  14. Spectroscopic Method for Fast and Accurate Group A Streptococcus Bacteria Detection.

    PubMed

    Schiff, Dillon; Aviv, Hagit; Rosenbaum, Efraim; Tischler, Yaakov R

    2016-02-16

    Rapid and accurate detection of pathogens is paramount to human health. Spectroscopic techniques have been shown to be viable methods for detecting various pathogens. Enhanced methods of Raman spectroscopy can discriminate unique bacterial signatures; however, many of these require precise conditions and do not have in vivo replicability. Common biological detection methods such as rapid antigen detection tests have high specificity but do not have high sensitivity. Here we developed a new method of bacteria detection that is both highly specific and highly sensitive by combining the specificity of antibody staining and the sensitivity of spectroscopic characterization. Bacteria samples, treated with a fluorescent antibody complex specific to Streptococcus pyogenes, were volumetrically normalized according to their Raman bacterial signal intensity and characterized for fluorescence, eliciting a positive result for samples containing Streptococcus pyogenes and a negative result for those without. The normalized fluorescence intensity of the Streptococcus pyogenes gave a signal that is up to 16.4 times higher than that of other bacteria samples for bacteria stained in solution and up to 12.7 times higher in solid state. This method can be very easily replicated for other bacteria species using suitable antibody-dye complexes. In addition, this method shows viability for in vivo detection as it requires minute amounts of bacteria, low laser excitation power, and short integration times in order to achieve high signal.
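
    The normalization step described above can be illustrated with the following sketch, which scores each sample by its antibody fluorescence divided by its Raman bacterial signal and compares the score to negative controls. The threshold factor and variable names are assumptions for illustration, not values from the study.

```python
# Minimal sketch (assumptions, not the published protocol): score a sample by
# fluorescence normalized to its Raman bacterial signal and compare to controls.
def normalized_fluorescence(fluorescence, raman_intensity):
    """Fluorescence per unit of Raman bacterial signal (volumetric normalization)."""
    return fluorescence / raman_intensity

def is_positive(sample, negative_controls, factor=5.0):
    """`sample` and each control are (fluorescence, raman_intensity) pairs.
    The factor of 5 is an illustrative cutoff; the paper reports signals up to
    16.4x higher for S. pyogenes than for other bacteria stained in solution."""
    baseline = sum(normalized_fluorescence(f, r) for f, r in negative_controls)
    baseline /= len(negative_controls)
    return normalized_fluorescence(*sample) > factor * baseline
```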

  15. Spectroscopic Method for Fast and Accurate Group A Streptococcus Bacteria Detection.

    PubMed

    Schiff, Dillon; Aviv, Hagit; Rosenbaum, Efraim; Tischler, Yaakov R

    2016-02-16

    Rapid and accurate detection of pathogens is paramount to human health. Spectroscopic techniques have been shown to be viable methods for detecting various pathogens. Enhanced methods of Raman spectroscopy can discriminate unique bacterial signatures; however, many of these require precise conditions and do not have in vivo replicability. Common biological detection methods such as rapid antigen detection tests have high specificity but do not have high sensitivity. Here we developed a new method of bacteria detection that is both highly specific and highly sensitive by combining the specificity of antibody staining and the sensitivity of spectroscopic characterization. Bacteria samples, treated with a fluorescent antibody complex specific to Streptococcus pyogenes, were volumetrically normalized according to their Raman bacterial signal intensity and characterized for fluorescence, eliciting a positive result for samples containing Streptococcus pyogenes and a negative result for those without. The normalized fluorescence intensity of the Streptococcus pyogenes gave a signal that is up to 16.4 times higher than that of other bacteria samples for bacteria stained in solution and up to 12.7 times higher in solid state. This method can be very easily replicated for other bacteria species using suitable antibody-dye complexes. In addition, this method shows viability for in vivo detection as it requires minute amounts of bacteria, low laser excitation power, and short integration times in order to achieve high signal. PMID:26752013

  16. Precision electroweak measurements

    SciTech Connect

    Demarteau, M.

    1996-11-01

    Recent electroweak precision measurements from e+e- and p-pbar colliders are presented. Some emphasis is placed on the recent developments in the heavy flavor sector. The measurements are compared to predictions from the Standard Model of electroweak interactions. All results are found to be consistent with the Standard Model. The indirect constraint on the top quark mass from all measurements is in excellent agreement with the direct m_t measurements. Using the world's electroweak data in conjunction with the current measurement of the top quark mass, the constraints on the Higgs mass are discussed.

  17. Precision Robotic Assembly Machine

    ScienceCinema

    None

    2016-07-12

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  18. Instrument Attitude Precision Control

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan

    2004-01-01

    A novel approach is presented in this paper to analyze attitude precision and control for an instrument gimbaled to a spacecraft subject to an internal disturbance caused by a moving component inside the instrument. Nonlinear differential equations of motion for some sample cases are derived and solved analytically to gain insight into the influence of the disturbance on the attitude pointing error. A simple control law is developed to eliminate the instrument pointing error caused by the internal disturbance. Several cases are presented to demonstrate and verify the concept presented in this paper.

  19. Precision Robotic Assembly Machine

    SciTech Connect

    2009-08-14

    The world's largest laser system is the National Ignition Facility (NIF), located at Lawrence Livermore National Laboratory. NIF's 192 laser beams are amplified to extremely high energy, and then focused onto a tiny target about the size of a BB, containing frozen hydrogen gas. The target must be perfectly machined to incredibly demanding specifications. The Laboratory's scientists and engineers have developed a device called the "Precision Robotic Assembly Machine" for this purpose. Its unique design won a prestigious R&D-100 award from R&D Magazine.

  20. Precision Pointing System Development

    SciTech Connect

    BUGOS, ROBERT M.

    2003-03-01

    The development of precision pointing systems has been underway in Sandia's Electronic Systems Center for over thirty years. Important areas of emphasis are synthetic aperture radars and optical reconnaissance systems. Most applications are in the aerospace arena, with host vehicles including rockets, satellites, and manned and unmanned aircraft. Systems have been used on defense-related missions throughout the world. Presently in development are pointing systems with accuracy goals in the nanoradian regime. Future activity will include efforts to dramatically reduce system size and weight through measures such as the incorporation of advanced materials and MEMS inertial sensors.

  1. Precision mass measurements

    NASA Astrophysics Data System (ADS)

    Gläser, M.; Borys, M.

    2009-12-01

    Mass as a physical quantity and its measurement are described. After some historical remarks, a short summary of the concept of mass in classical and modern physics is given. Principles and methods of mass measurements, for example as energy measurement or as measurement of weight forces and forces caused by acceleration, are discussed. Precision mass measurement by comparing mass standards using balances is described in detail. Measurement of atomic masses relative to 12C is briefly reviewed, as well as experiments and recent discussions of a future new definition of the kilogram, the SI unit of mass.

  2. Isara 400 ultra-precision CMM

    NASA Astrophysics Data System (ADS)

    Spaan, H. A. M.; Widdershoven, I.

    2011-10-01

    This paper presents the realization of the Isara 400 ultra-precision 3D coordinate measuring machine, which features a measuring volume of 400 × 400 × 100 mm and a traceable measurement uncertainty better than 50 nm. In order to achieve these challenging specifications, specific calibration strategies need to be applied, such as the calibration of the system's mirror table. In addition, a newly developed ultra-precision tactile probe system is described, featuring a probe tip radius of 35 μm; results of the 3D sensitivity calibration of this probe are presented. Finally, results of measuring the full hemisphere of a SiN ultra-precision master ball in 3D are presented, showing a repeatability of 7.9 nm rms.

  3. Needs and challenges in precision wear measurement

    SciTech Connect

    Blau, P.J.

    1996-01-10

    Accurate, precise wear measurements are a key element both in solving current wear problems and in basic wear research. Applications range from assessing the durability of micro-scale components to accurate screening of surface treatments and thin solid films. The need to distinguish small differences in wear rate presents formidable problems for those who are developing new materials and surface treatments. Methods for measuring wear in ASTM standard test methods are discussed. Errors in using alternate methods of wear measurement on the same test specimen are also described. Human judgemental factors are a concern in common methods for wear measurement, and an experiment involving measurement of a wear scar by ten different people is described. Precision in wear measurement is limited both by the capabilities of the measuring instruments and by the nonuniformity of the wear process. A method of measuring wear using nano-scale indentations is discussed. Current and future prospects for incorporating advanced, higher-precision wear measurement methods into standards are considered.

  4. Accurate documentation and wound measurement.

    PubMed

    Hampton, Sylvie

    This article, part 4 in a series on wound management, addresses the sometimes routine yet crucial task of documentation. Clear and accurate records of a wound enable its progress to be determined so the appropriate treatment can be applied. Thorough records mean any practitioner picking up a patient's notes will know when the wound was last checked, how it looked and what dressing and/or treatment was applied, ensuring continuity of care. Documenting every assessment also has legal implications, demonstrating due consideration and care of the patient and the rationale for any treatment carried out. Part 5 in the series discusses wound dressing characteristics and selection.

  5. Precision flyer initiator

    DOEpatents

    Frank, Alan M.; Lee, Ronald S.

    1998-01-01

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock-decoupling material delays the shock wave in the barrel, preventing it from predetonating the HE pellet before the flyer arrives. Thus, the shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes it, and a spherical detonation wave is formed in the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices.

  6. Precision flyer initiator

    DOEpatents

    Frank, A.M.; Lee, R.S.

    1998-05-26

    A precision flyer initiator forms a substantially spherical detonation wave in a high explosive (HE) pellet. An explosive driver, such as a detonating cord, a wire bridge circuit or a small explosive, is detonated. A flyer material is sandwiched between the explosive driver and an end of a barrel that contains an inner channel. A projectile or "flyer" is sheared from the flyer material by the force of the explosive driver and projected through the inner channel. The flyer then strikes the HE pellet, which is supported above a second end of the barrel by a spacer ring. A gap or shock-decoupling material delays the shock wave in the barrel, preventing it from predetonating the HE pellet before the flyer arrives. Thus, the shock wave traveling through the barrel fails to reach the HE pellet before the flyer strikes it, and a spherical detonation wave is formed in the HE pellet. The precision flyer initiator can be used in mining devices, well-drilling devices and anti-tank devices. 10 figs.

  7. Precision Joining Center

    SciTech Connect

    Powell, J.W.; Westphal, D.A.

    1991-08-01

    A workshop to obtain input from industry on the establishment of the Precision Joining Center (PJC) was held on July 10-12, 1991. The PJC is a center for training Joining Technologists in advanced joining techniques and concepts in order to promote the competitiveness of US industry. The center will be established as part of the DOE Defense Programs Technology Commercialization Initiative, and operated by EG&G Rocky Flats in cooperation with the American Welding Society and the Colorado School of Mines Center for Welding and Joining Research. The overall objectives of the workshop were to validate the need for a Joining Technologist to fill the gap between the welding operator and the welding engineer, and to assure that the PJC will train individuals to satisfy that need. The consensus of the workshop participants was that the Joining Technologist is a necessary position in industry, and is currently used, with some variation, by many companies. It was agreed that the PJC core curriculum, as presented, would produce a Joining Technologist of value to industries that use precision joining techniques. The advantage of the PJC would be to train the Joining Technologist much more quickly and more completely. The proposed emphasis of the PJC curriculum on equipment-intensive and hands-on training was judged to be essential.

  8. Precision measurements in supersymmetry

    SciTech Connect

    Feng, J.L.

    1995-05-01

    Supersymmetry is a promising framework in which to explore extensions of the standard model. If candidates for supersymmetric particles are found, precision measurements of their properties will then be of paramount importance. The prospects for such measurements and their implications are the subject of this thesis. If charginos are produced at the LEP II collider, they are likely to be one of the few available supersymmetric signals for many years. The author considers the possibility of determining fundamental supersymmetry parameters in such a scenario. The study is complicated by the dependence of observables on a large number of these parameters. He proposes a straightforward procedure for disentangling these dependences and demonstrates its effectiveness by presenting a number of case studies at representative points in parameter space. In addition to determining the properties of supersymmetric particles, precision measurements may also be used to establish that newly-discovered particles are, in fact, supersymmetric. Supersymmetry predicts quantitative relations among the couplings and masses of superparticles. The author discusses tests of such relations at a future e+e- linear collider, using measurements that exploit the availability of polarizable beams. Stringent tests of supersymmetry from chargino production are demonstrated in two representative cases, and fermion and neutralino processes are also discussed.

  9. Radiotherapy in the Era of Precision Medicine.

    PubMed

    Yard, Brian; Chie, Eui Kyu; Adams, Drew J; Peacock, Craig; Abazeed, Mohamed E

    2015-10-01

    Current predictors of radiation response are largely limited to clinical and histopathologic parameters, and extensive systematic analyses of the correlation between radiation sensitivity and genomic parameters remain lacking. In the era of precision medicine, the lack of -omic determinants of radiation response has hindered the personalization of radiation delivery to the unique characteristics of each patient's cancer and impeded the discovery of new therapies that can be administered concurrently with radiation therapy. The cataloging of the -omic determinants of radiation sensitivity of cancer has great potential in enhancing efficacy and limiting toxicity in the context of a new approach to precision radiotherapy. Herein, we review concepts and data that contribute to the delineation of the radiogenomic landscape of cancer.

  10. Precise adaptation in chemotaxis through "assistance neighborhoods"

    NASA Astrophysics Data System (ADS)

    Endres, Robert; Wingreen, Ned

    2006-03-01

    The chemotaxis network in Escherichia coli is remarkable for its sensitivity to small relative changes in the concentrations of multiple chemical signals over a broad range of ambient concentrations. Key to this sensitivity is an adaptation system that relies on methylation and demethylation/deamidation of specific modification sites of the chemoreceptors by the enzymes CheR and CheB, respectively. These enzymes can access 5-7 receptors once tethered to a particular receptor. Based on these "assistance neighborhoods", we present a model for precise adaptation of mixed clusters of two-state chemoreceptors. In agreement with experiment, the response of adapted cells to addition/removal of attractant scales with the free-energy change at fixed ligand affinity. Our model further predicts two possible limits of precise adaptation: either the response to further addition of attractant stops through saturation of the receptors, or receptors fully methylate before they saturate and therefore stop adapting.
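
    As an illustration of the kind of two-state receptor model referred to above, the sketch below integrates a minimal MWC-type cluster with methylation feedback (CheR acting on inactive receptors, CheB on active ones), so that activity returns to the same value after a step of attractant. All parameter values are illustrative assumptions, not those used in the paper.

```python
# Minimal sketch (illustrative parameters, not the paper's model details):
# precise adaptation of a cluster of two-state (MWC) chemoreceptors.
import numpy as np

N = 6                      # receptors per allosteric cluster
K_off, K_on = 0.02, 0.5    # ligand dissociation constants, inactive/active (mM)
alpha, m0 = 2.0, 1.0       # free energy per methyl group (kT) and offset
k_R, k_B = 0.1, 0.2        # CheR / CheB rates (1/s)

def activity(m, L):
    """Cluster activity from the MWC free energy (in units of kT)."""
    f = N * (alpha * (m0 - m) + np.log((1 + L / K_off) / (1 + L / K_on)))
    return 1.0 / (1.0 + np.exp(f))

# CheR methylates inactive receptors, CheB demethylates active ones, so the
# steady-state activity is k_R / (k_R + k_B) regardless of the ambient ligand
# level -- precise adaptation. (Real receptors have a bounded number of
# methylation sites, which produces the saturation limits discussed above.)
dt, m, L = 0.01, 1.0, 0.0
for step in range(200_000):
    if step == 50_000:
        L = 0.1            # step addition of attractant at t = 500 s
    a = activity(m, L)
    m += dt * (k_R * (1 - a) - k_B * a)

print("adapted activity:", activity(m, L), "expected:", k_R / (k_R + k_B))
```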

  11. Which Method Is Most Precise; Which Is Most Accurate? An Undergraduate Experiment

    ERIC Educational Resources Information Center

    Jordan, A. D.

    2007-01-01

    A simple experiment, the determination of the density of a liquid by several methods, is presented. Since the concept of density is a familiar one, the experiment is suitable for the introductory laboratory period of a first- or second-year course in physical or analytical chemistry. The main objective of the experiment is to familiarize students…

  12. Precision signal power measurement

    NASA Technical Reports Server (NTRS)

    Winkelstein, R.

    1972-01-01

    Accurate estimation of signal power is an important Deep Space Network (DSN) consideration. Ultimately, spacecraft power and weight is saved if no reserve transmitter power is needed to compensate for inaccurate measurements. Spectral measurement of the received signal has proved to be an effective method of estimating signal power over a wide dynamic range. Furthermore, on-line spectral measurements provide an important diagnostic tool for examining spacecraft anomalies. Prototype equipment installed at a 64-m-diameter antenna site has been successfully used to make measurements of carrier power and sideband symmetry of telemetry signals received from the Mariner Mars 1971 spacecraft.

  13. High-Precision Dispensing of Nanoliter Biofluids on Glass Pedestal Arrays for Ultrasensitive Biomolecule Detection.

    PubMed

    Chen, Xiaoxiao; Liu, Yang; Xu, QianFeng; Zhu, Jing; Poget, Sébastien F; Lyons, Alan M

    2016-05-01

    Precise dispensing of nanoliter droplets is necessary for the development of sensitive and accurate assays, especially when the availability of the source solution is limited. Conventional approaches are limited by imprecise positioning, large shear forces, surface tension effects, and high costs. To address the need for precise and economical dispensing of nanoliter volumes, we developed a new approach where the dispensed volume is dependent on the size and shape of defined surface features, thus freeing the dispensing process from pumps and fine-gauge needles requiring accurate positioning. The surface we fabricated, called a nanoliter droplet virtual well microplate (nVWP), achieves high-precision dispensing (better than ±0.5 nL or ±1.6% at 32 nL) of 20-40 nL droplets using a small source drop (3-10 μL) on isolated hydrophilic glass pedestals (500 μm on a side) bonded to arrays of polydimethylsiloxane conical posts. The sharp 90° edge of the glass pedestal pins the solid-liquid-vapor triple contact line (TCL), preventing wetting of the glass sidewalls while keeping the fluid from receding from the edge. This edge creates a sufficiently large energy barrier such that microliter water droplets can be poised on the glass pedestals, exhibiting contact angles greater than 150°. This approach relieves the stringent mechanical alignment tolerances required for conventional dispensing techniques, shifting the control of dispensed volume to the area circumscribed by the glass edge. The effects of glass surface chemistry and dispense velocity on droplet volume were studied using optical microscopy and high-speed video. Functionalization of the glass pedestal surface enabled the selective adsorption of specific peptides and proteins from synthetic and natural biomolecule mixtures, such as venom. We further demonstrate how the nVWP dispensing platform can be used for a variety of assays, including sensitive detection of proteins and peptides by fluorescence

  14. High precision laser forming for microactuation

    NASA Astrophysics Data System (ADS)

    Folkersma, Ger K. G. P.; Römer, G. R. B. E.; Brouwer, D. M.; Huis in't Veld, A. J.

    2014-03-01

    For assembly of micro-devices, such as photonic devices, the precision alignment of components is often critical for their performance. Laser forming, also known as laser adjusting, can be used to create an integrated microactuator to align the components with sub-micron precision after bonding. In this paper a so-called three-bridge planar manipulator was used to study the laser-material interaction and the thermal and mechanical behavior of the laser forming mechanism. A 3-D Finite Element Method (FEM) model and experiments are used to identify the optimal parameter settings for a high precision actuator. The primary goal of this paper is to investigate how precisely the maximum occurring temperature and the resulting displacement are predicted by a 3-D FEM model, by comparing with experimental results. A secondary goal is to investigate the resolution and range of motion of the mechanism. With the experimental setup we measure the displacement and surface temperature in real time. The time-dependent heat transfer FEM models match closely with experimental results; however, the structural model can deviate by more than 100% in absolute displacement. Experimentally, a positioning resolution of 0.1 μm was achieved, with a total stroke exceeding 20 μm. A spread of 10% in the temperature cycles between several experiments was found, which was attributed to a spread in the surface absorptivity. Combined with geometric tolerances, the spread in displacement can be as large as 20%. This implies that feedback control of the laser power, in combination with iterative learning during positioning, is required for high precision alignment. Even though the FEM models deviate substantially from the experiments, the 3-D FEM model predicts the trend in deformation accurately enough to be used for design optimization of high precision 3-D actuators using laser adjusting.

  15. Light leptonic new physics at the precision frontier

    NASA Astrophysics Data System (ADS)

    Le Dall, Matthias

    2016-06-01

    Precision probes of new physics are often interpreted through their indirect sensitivity to short-distance scales. In this proceedings contribution, we focus on the question of which precision observables, at current sensitivity levels, allow for an interpretation via either short-distance new physics or consistent models of long-distance new physics, weakly coupled to the Standard Model. The electroweak scale is chosen to set the dividing line between these scenarios. In particular, we find that inverse see-saw models of neutrino mass allow for light new physics interpretations of most precision leptonic observables, such as lepton universality, lepton flavor violation, but not for the electron EDM.

  16. Precision Joining Center

    NASA Technical Reports Server (NTRS)

    Powell, John W.

    1991-01-01

    The establishment of a Precision Joining Center (PJC) is proposed. The PJC will be a cooperatively operated center with participation from U.S. private industry, the Colorado School of Mines, and various government agencies, including the Department of Energy's Nuclear Weapons Complex (NWC). The PJC's primary mission will be as a training center for advanced joining technologies. This will accomplish the following objectives: (1) it will provide an effective mechanism to transfer joining technology from the NWC to private industry; (2) it will provide a center for testing new joining processes for the NWC and private industry; and (3) it will provide highly trained personnel to support advance joining processes for the NWC and private industry.

  17. Precision laser cutting

    SciTech Connect

    Kautz, D.D.; Anglin, C.D.; Ramos, T.J.

    1990-01-19

    Many materials that are otherwise difficult to fabricate can be cut precisely with lasers. This presentation discusses the advantages and limitations of laser cutting for refractory metals, ceramics, and composites. Cutting in these materials was performed with a 400-W, pulsed Nd:YAG laser. Important cutting parameters such as beam power, pulse waveforms, cutting gases, travel speed, and laser coupling are outlined. The effects of process parameters on cut quality are evaluated. Three variables are used to determine the cut quality: kerf width, slag adherence, and metallurgical characteristics of recast layers and heat-affected zones around the cuts. Results indicate that ductile materials with good coupling characteristics (such as stainless steel alloys and tantalum) cut well. Materials lacking one or both of these properties (such as tungsten and ceramics) are difficult to cut without proper part design, stress relief, or coupling aids. 3 refs., 2 figs., 1 tab.

  18. Precision Spectroscopy of Tellurium

    NASA Astrophysics Data System (ADS)

    Coker, J.; Furneaux, J. E.

    2013-06-01

    Tellurium (Te_2) is widely used as a frequency reference, largely due to the fact that it has an optical transition roughly every 2-3 GHz throughout a large portion of the visible spectrum. Although a standard atlas encompassing over 5200 cm^{-1} already exists [1], Doppler broadening present in that work buries a significant portion of the features [2]. More recent studies of Te_2 exist which do not exhibit Doppler broadening, such as Refs. [3-5], and each covers different parts of the spectrum. This work adds to that knowledge a few hundred transitions in the vicinity of 444 nm, measured with high precision in order to improve measurement of the spectroscopic constants of Te_2's excited states. Using a Fabry-Perot cavity in a shock-absorbing, temperature- and pressure-regulated chamber, locked to a Zeeman-stabilized HeNe laser, we measure changes in frequency of our diode laser to ~1 MHz precision. This diode laser is scanned over 1000 GHz for use in a saturated-absorption spectroscopy cell filled with Te_2 vapor. Details of the cavity and its short- and long-term stability are discussed, as well as spectroscopic properties of Te_2. References: [1] J. Cariou and P. Luc, Atlas du spectre d'absorption de la molecule de tellure, Laboratoire Aime-Cotton (1980). [2] J. Coker et al., J. Opt. Soc. Am. B 28, 2934 (2011). [3] J. Verges et al., Physica Scripta 25, 338 (1982). [4] Ph. Courteille et al., Appl. Phys. B 59, 187 (1994). [5] T.J. Scholl et al., J. Opt. Soc. Am. B 22, 1128 (2005).

  19. Accurate single-trial detection of movement intention made possible using adaptive wavelet transform.

    PubMed

    Chamanzar, Alireza; Malekmohammadi, Alireza; Bahrani, Masih; Shabany, Mahdi

    2015-01-01

    The outlook of brain-computer interfacing (BCI) is very bright. The real-time, accurate detection of a motor movement task is critical in BCI systems. The poor signal-to-noise ratio (SNR) of EEG signals and the ambiguity of noise generator sources in the brain render this task quite challenging. In this paper, we demonstrate a novel algorithm for precise detection of the onset of a motor movement through identification of event-related-desynchronization (ERD) patterns. Using an adaptive matched filter technique implemented with an optimized continuous wavelet transform and an appropriate choice of basis, we can detect single-trial ERDs. Moreover, we use a maximum-likelihood (ML) electrooculography (EOG) artifact removal method to remove eye-related artifacts and significantly improve the detection performance. We have applied this technique to our locally recorded Emotiv® data set of 6 healthy subjects, where an average detection selectivity of 85 ± 6% and sensitivity of 88 ± 7.7% are achieved, with a temporal precision in the range of -1250 to 367 ms in onset detection of single trials.
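
    As a rough sketch of how a wavelet-based ERD detector of this kind can work, the snippet below estimates mu-band power with a Morlet wavelet and flags the onset when the power envelope drops below a fraction of its pre-movement baseline. The sampling rate, band, and threshold are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (assumed parameters, not the published pipeline): detect the
# onset of event-related desynchronization (ERD) as a drop in mu-band power.
import numpy as np

def morlet(fs, f0=10.0, n_cycles=7):
    """Complex Morlet wavelet centred at f0 Hz."""
    sigma_t = n_cycles / (2 * np.pi * f0)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, 1.0 / fs)
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))

def detect_erd_onset(eeg, fs=128, baseline_s=2.0, drop=0.5):
    """Return the sample index where band power first falls below `drop` times
    the baseline power (estimated from the first `baseline_s` seconds), or
    None if no ERD is found."""
    power = np.abs(np.convolve(eeg, morlet(fs), mode="same")) ** 2
    n_base = int(baseline_s * fs)
    baseline = power[:n_base].mean()
    below = np.where(power < drop * baseline)[0]
    candidates = below[below >= n_base]
    return int(candidates[0]) if candidates.size else None
```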

  20. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.

    PubMed

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-05-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis.
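
    For reference, the F-measure used in this benchmarking is the harmonic mean of precision and sensitivity (recall); the snippet below computes it. The recall value shown is an illustrative placeholder, not a figure reported in the paper.

```python
# Minimal sketch: the F-measure as the harmonic mean of precision and recall.
def f_measure(precision, recall):
    return 2 * precision * recall / (precision + recall)

# 0.845 is the precision reported for COSMOS; 0.90 is an illustrative recall.
print(f_measure(0.845, 0.90))
```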

  1. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260

  2. Accurate thickness measurement of graphene

    NASA Astrophysics Data System (ADS)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  3. Accurate thickness measurement of graphene.

    PubMed

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-03-29

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  4. Precision tropopause turbulence measurements

    NASA Astrophysics Data System (ADS)

    Otten, Leonard John, III; Jones, Al; Black, Don G.; Lane, Joshua; Hugo, Ron; Beyer, Jeffery; Roggemann, Michael C.

    2000-11-01

    Limited samples of the turbulence structure in the tropopause suggest that conventional models for atmospheric turbulence may not apply through this portion of the atmosphere. This paper discusses the instrumentation requirements, design and calibration of a balloon-borne sensor suite designed to accurately measure the distribution and spectral spatial character of the index of refraction fluctuations through the tropopause. The basis for the data system is a 16-bit dynamic range, high data rate sample-and-hold instrumentation package. Calibration and characterization of the constant-current anemometers used in the measurements show them to have a frequency response greater than 170 Hz at the -3 dB point and sufficient resolution to measure a Cn^2 of 1 x 10^-19 cm^(-2/3). A novel technique was developed that integrates more than 20 signals into two time-correlated telemetry streams. The entire system has been assembled for a flight in the late summer of 2000.

  5. Oxygen-Enhanced MRI Accurately Identifies, Quantifies, and Maps Tumor Hypoxia in Preclinical Cancer Models.

    PubMed

    O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2016-02-15

    There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic.
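
    As an illustration of the Oxy-R definition given above, the sketch below computes the fraction of perfused tumor voxels showing no positive ΔR1 under oxygen challenge. The array names and the perfusion mask are assumptions; this is not the study's analysis code.

```python
# Minimal sketch (assumed inputs, not the study's code): the Oxy-R fraction as
# the fraction of perfused tumor voxels with no positive delta-R1 on oxygen
# challenge, used here as a surrogate biomarker of the hypoxic fraction.
import numpy as np

def oxy_r_fraction(delta_r1, tumor_mask, perfused_mask):
    """delta_r1: voxel-wise change in R1 (1/s) on oxygen challenge.
    tumor_mask, perfused_mask: boolean arrays of the same shape."""
    voxels = tumor_mask & perfused_mask
    refractory = voxels & (delta_r1 <= 0)   # no positive delta-R1
    return refractory.sum() / voxels.sum()
```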

  6. CLASS2: accurate and efficient splice variant annotation from RNA-seq reads

    PubMed Central

    Song, Li; Sabunciyan, Sarven; Florea, Liliana

    2016-01-01

    Next generation sequencing of cellular RNA is making it possible to characterize genes and alternative splicing in unprecedented detail. However, designing bioinformatics tools to accurately capture splicing variation has proven difficult. Current programs can find major isoforms of a gene but miss lower abundance variants, or are sensitive but imprecise. CLASS2 is a novel open source tool for accurate genome-guided transcriptome assembly from RNA-seq reads based on the model of splice graph. An extension of our program CLASS, CLASS2 jointly optimizes read patterns and the number of supporting reads to score and prioritize transcripts, implemented in a novel, scalable and efficient dynamic programming algorithm. When compared against reference programs, CLASS2 had the best overall accuracy and could detect up to twice as many splicing events with precision similar to the best reference program. Notably, it was the only tool to produce consistently reliable transcript models for a wide range of applications and sequencing strategies, including ribosomal RNA-depleted samples. Lightweight and multi-threaded, CLASS2 requires <3GB RAM and can analyze a 350 million read set within hours, and can be widely applied to transcriptomics studies ranging from clinical RNA sequencing, to alternative splicing analyses, and to the annotation of new genomes. PMID:26975657

  7. Precision of archerfish C-starts is fully temperature compensated.

    PubMed

    Krupczynski, Philipp; Schuster, Stefan

    2013-09-15

    Hunting archerfish precisely adapt their predictive C-starts to the initial movement of dislodged prey so that turn angle and initial speed are matched to the place and time of the later point of catch. The high accuracy and the known target point of the starts allow a sensitive straightforward assay of how temperature affects the underlying circuits. Furthermore, archerfish face rapid temperature fluctuations in their mangrove biotopes that could compromise performance. Here, we show that after a brief acclimation period the function of the C-starts was fully maintained over a range of operating temperatures: (i) full responsiveness was maintained at all temperatures, (ii) at all temperatures the fish selected accurate turns and were able to do so over the full angular range, (iii) at all temperatures speed attained immediately after the end of the C-start was matched - with equal accuracy - to 'virtual speed', i.e. the ratio of remaining distance to the future landing point and remaining time. While precision was fully temperature compensated, C-start latency was not and increased by about 4 ms per 1°C cooling. Also, kinematic aspects of the C-start were only partly temperature compensated. Above 26°C, the duration of the two major phases of the C-start were temperature compensated. At lower temperatures, however, durations increased similar to latency. Given the accessibility of the underlying networks, the archerfish predictive start should be an excellent model to assay the degree of plasticity and functional stability of C-start motor patterns. PMID:23737557

  8. [Application of precision medicine in obesity and metabolic disease surgery].

    PubMed

    Wang, Cunchuan; Gao, Zhiguang

    2016-01-01

    U.S. President Obama called for a new initiative to fund precision medicine during his State of the Union Address on January 20th, 2015, marking the entry of medicine into a new era. The meaning of "precision medicine" is very similar to the concept of precision obesity and metabolic disease surgery, which was proposed by the author in early August 2011. Nowadays, obesity and metabolic disease surgery has been transformed from open surgery to laparoscopic surgery, and from an extensive mode to a precision mode. The key value concept is to minimize postoperative complications, minimize postoperative hospital stay and obtain the best weight-loss effect through accurate preoperative assessment, delicate operation, excellent postoperative management and scientific follow-up. Precision obesity and metabolic disease surgery has considerable room for development in the future. PMID:26797833

  9. High-speed precision weighing of pharmaceutical capsules

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Pernuš, Franjo; Likar, Boštjan

    2009-11-01

    In this paper, we present a cost-effective method for fast and accurate in-line weighing of hard gelatin capsules based on an optimized capacitance sensor and real-time processing of the capsule capacitance profile resulting from 5000 capacitance measurements per second. First, the effect of the shape and size of the capacitive sensor on the sensitivity and stability of the measurements was investigated in order to optimize the performance of the system. The method was tested on two types of hard gelatin capsules weighing from 50 mg to 650 mg. The results showed that the capacitance profile was exceptionally well correlated with the capsule weight, with the correlation coefficient exceeding 0.999. The mean precision of the measurements was in the range from 1 mg to 3 mg, depending on the size of the capsule, and was significantly smaller than the 5% weight tolerances usually used by the pharmaceutical industry. Therefore, the method was found feasible for weighing pharmaceutical hard gelatin capsules as long as certain conditions are met regarding the capsule fill properties and environment stability. The proposed measurement system can be calibrated by using only two or three sets of capsules with known weight. However, for most applications it is sufficient to use only empty and nominally filled capsules for calibration. Finally, a practical application of the proposed method showed that a single system is capable of weighing around 75 000 capsules per hour, while using multiple systems could easily increase the inspection rate to meet almost any requirements.
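
    As a sketch of the calibration idea described above (only two or three sets of capsules of known weight are needed), the snippet below fits a least-squares line from a capacitance-profile feature to capsule weight and uses it for weighing. The feature choice and variable names are illustrative assumptions, not the published implementation.

```python
# Minimal sketch (assumptions, not the published system): two-point or
# multi-point calibration of a capacitance-profile feature against capsules
# of known weight, then weighing new capsules from that feature.
import numpy as np

def profile_feature(capacitance_profile, baseline):
    """Integrate the baseline-corrected capacitance profile of one capsule
    as it passes the sensor (5000 samples/s in the published system)."""
    return float(np.sum(capacitance_profile - baseline))

def calibrate(features, known_weights_mg):
    """Least-squares line mapping the profile feature to weight (mg); empty
    and nominally filled capsules are sufficient for most applications."""
    slope, intercept = np.polyfit(features, known_weights_mg, 1)
    return slope, intercept

def weigh(capacitance_profile, baseline, slope, intercept):
    return slope * profile_feature(capacitance_profile, baseline) + intercept
```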

  10. Precision engineering for astronomy: historical origins and the future revolution in ground-based astronomy.

    PubMed

    Cunningham, Colin; Russell, Adrian

    2012-08-28

    Since the dawn of civilization, the human race has pushed technology to the limit to study the heavens in ever-increasing detail. As astronomical instruments have evolved from those built by Tycho Brahe in the sixteenth century, through Galileo and Newton in the seventeenth, to the present day, astronomers have made ever more precise measurements. To do this, they have pushed the art and science of precision engineering to extremes. Some of the critical steps are described in the evolution of precision engineering from the first telescopes to the modern generation telescopes and ultra-sensitive instruments that need a combination of precision manufacturing, metrology and accurate positioning systems. In the future, precision-engineered technologies such as those emerging from the photonics industries may enable future progress in enhancing the capabilities of instruments, while potentially reducing the size and cost. In the modern era, there has been a revolution in astronomy leading to ever-increasing light-gathering capability. Today, the European Southern Observatory (ESO) is at the forefront of this revolution, building observatories on the ground that are set to transform our view of the universe. At an elevation of 5000 m in the Atacama Desert of northern Chile, the Atacama Large Millimetre/submillimetre Array (ALMA) is nearing completion. The ALMA is the most powerful radio observatory ever and is being built by a global partnership from Europe, North America and East Asia. In the optical/infrared part of the spectrum, the latest project for ESO is even more ambitious: the European Extremely Large Telescope, a giant 40 m class telescope that will also be located in Chile and which will give the most detailed view of the universe so far.

  11. Precision engineering for astronomy: historical origins and the future revolution in ground-based astronomy.

    PubMed

    Cunningham, Colin; Russell, Adrian

    2012-08-28

    Since the dawn of civilization, the human race has pushed technology to the limit to study the heavens in ever-increasing detail. As astronomical instruments have evolved from those built by Tycho Brahe in the sixteenth century, through Galileo and Newton in the seventeenth, to the present day, astronomers have made ever more precise measurements. To do this, they have pushed the art and science of precision engineering to extremes. Some of the critical steps are described in the evolution of precision engineering from the first telescopes to the modern generation telescopes and ultra-sensitive instruments that need a combination of precision manufacturing, metrology and accurate positioning systems. In the future, precision-engineered technologies such as those emerging from the photonics industries may enable future progress in enhancing the capabilities of instruments, while potentially reducing the size and cost. In the modern era, there has been a revolution in astronomy leading to ever-increasing light-gathering capability. Today, the European Southern Observatory (ESO) is at the forefront of this revolution, building observatories on the ground that are set to transform our view of the universe. At an elevation of 5000 m in the Atacama Desert of northern Chile, the Atacama Large Millimetre/submillimetre Array (ALMA) is nearing completion. The ALMA is the most powerful radio observatory ever and is being built by a global partnership from Europe, North America and East Asia. In the optical/infrared part of the spectrum, the latest project for ESO is even more ambitious: the European Extremely Large Telescope, a giant 40 m class telescope that will also be located in Chile and which will give the most detailed view of the universe so far. PMID:22802494

  12. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. MG data are still far from individually precise, partly because of the rarity and heterogeneity of the disease. In this review, we provide basic insights into MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment, based on the literature and our previous studies. Subgroups and quantitative traits of MG are discussed in terms of data precision. The role of disease registries and the scientific basis of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  13. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  14. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work, extending the IPJR paradigm to building 3D structures at micron precision, are also summarized.

  15. Few-Nucleon Charge Radii and a Precision Isotope Shift Measurement in Helium

    NASA Astrophysics Data System (ADS)

    Hassan Rezaeian, Nima; Shiner, David

    2015-10-01

    Recent improvements in atomic theory and experiment provide a valuable method to precisely determine few-nucleon charge radii, complementing the more direct scattering approaches, and providing sensitive tests of few-body nuclear theory. Some puzzles with respect to this method exist, particularly in the muonic and electronic measurements of the proton radius, known as the proton puzzle. Perhaps this puzzle will also exist in nuclear size measurements in helium. Muonic helium measurements are ongoing while our new electronic results will be discussed here. We measured precisely the isotope shift of the 2³S - 2³P transitions in ³He and ⁴He. The result is almost an order of magnitude more accurate than previously measured values. To achieve this accuracy, we implemented various experimental techniques. We used a tunable laser frequency discriminator and electro-optic modulation technique to precisely control the frequency and intensity. We select and stabilize the intensity of the required sideband and eliminate unused sidebands. The technique uses a MEMS fiber switch (ts = 10 ms) and several temperature stabilized narrow band (3 GHz) fiber gratings. A beam with both species of helium is achieved using a custom fiber laser for simultaneous optical pumping. A servo-controlled retro-reflected laser beam eliminates Doppler effects. Careful detection design and software are essential for unbiased data collection. Our new results will be compared to previous measurements.

  16. [Contrast sensitivity in glaucoma].

    PubMed

    Bartos, D

    1989-05-01

    The author reports results of contrast sensitivity examinations using the Cambridge low-contrast lattice test supplied by Clement Clarke International Ltd in patients with open-angle glaucoma and ocular hypertension. In glaucoma patients a statistically significant decrease in contrast sensitivity was observed. Among patients with ocular hypertension, contrast sensitivity was decreased in those who also showed corresponding changes of the visual field and the optic disc. The main advantages of the Cambridge low-contrast lattice test were the simplicity, rapidity and precision of its performance. PMID:2743444

  17. Sensitive indirect spectrophotometric determination of isoniazid

    NASA Astrophysics Data System (ADS)

    Safavi, A.; Karimi, M. A.; Hormozi Nezhad, M. R.; Kamali, R.; Saghir, N.

    2004-03-01

    A simple, rapid, sensitive and accurate indirect spectrophotometric method for the microdetermination of isoniazid (INH) in pure form and pharmaceutical formulations is developed. The procedure is based on the reaction of copper(II) with isoniazid in the presence of neocuproine (NC). In the presence of neocuproine, copper(II) is reduced easily by isoniazid to a Cu(I)-neocuproine complex, which shows an absorption maximum at 454 nm. By measuring the absorbance of the complex at this wavelength, isoniazid can be determined in the range 0.3-3.5 μg ml⁻¹. This method was applied to the determination of isoniazid in pharmaceutical formulation and enabled the determination of the isoniazid in microgram quantities (0.3-3.5 μg ml⁻¹). The results obtained for the assay of pharmaceutical preparations compared well with those obtained by the official method and demonstrated good accuracy and precision.

  18. Sensitive indirect spectrophotometric determination of isoniazid.

    PubMed

    Safavi, A; Karimi, M A; Hormozi Nezhad, M R; Kamali, R; Saghir, N

    2004-03-01

    A simple, rapid, sensitive and accurate indirect spectrophotometric method for the microdetermination of isoniazid (INH) in pure form and pharmaceutical formulations is developed. The procedure is based on the reaction of copper(II) with isoniazid in the presence of neocuproine (NC). In the presence of neocuproine, copper(II) is reduced easily by isoniazid to a Cu(I)-neocuproine complex, which shows an absorption maximum at 454 nm. By measuring the absorbance of the complex at this wavelength, isoniazid can be determined in the range 0.3-3.5 μg ml⁻¹. This method was applied to the determination of isoniazid in pharmaceutical formulation and enabled the determination of the isoniazid in microgram quantities (0.3-3.5 μg ml⁻¹). The results obtained for the assay of pharmaceutical preparations compared well with those obtained by the official method and demonstrated good accuracy and precision.
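
    The quantitation step described here is a standard external calibration: the absorbance of the Cu(I)-neocuproine complex at 454 nm is converted to isoniazid concentration through a linear (Beer-Lambert) calibration curve. A minimal sketch, using hypothetical absorbance readings within the stated 0.3-3.5 μg ml⁻¹ range (not data from the paper):

```python
import numpy as np

# Hypothetical calibration standards (ug/mL) and absorbances at 454 nm;
# these values are illustrative, not taken from the paper.
conc_std = np.array([0.3, 1.0, 2.0, 3.0, 3.5])
abs_std = np.array([0.045, 0.148, 0.297, 0.449, 0.521])

# Least-squares linear fit: A = slope * C + intercept (Beer-Lambert behaviour)
slope, intercept = np.polyfit(conc_std, abs_std, 1)

def concentration(absorbance):
    """Convert a measured absorbance to isoniazid concentration (ug/mL)."""
    return (absorbance - intercept) / slope

sample_abs = 0.210
print(f"Estimated concentration: {concentration(sample_abs):.2f} ug/mL")
```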

  19. Pitch evaluation of high-precision gratings

    NASA Astrophysics Data System (ADS)

    Lu, Yancong; Zhou, Changhe; Wei, Chunlong; Jia, Wei; Xiang, Xiansong; Li, Yanyang; Yu, Junjie; Li, Shubin; Wang, Jin; Liu, Kun; Wei, Shengbin

    2014-11-01

    Optical encoders and laser interferometers are the two primary solutions in nanometer metrology. As the precision of encoders depends on the uniformity of the grating pitch, it is essential to evaluate the pitch accurately. We use a CCD image sensor to acquire a grating image and evaluate the pitch with high precision, applying a digital image correlation technique to filter out noise. We propose three methods for determining the grating pitch from the peak positions of the correlation coefficients. Numerical simulation indicated that the average deviation from the true pitch and the pitch variation are less than 0.02 pixel and 0.1 pixel, respectively, for all three methods when the ideal grating image is corrupted with salt-and-pepper, speckle and Gaussian noise. Experimental results demonstrated that the method measures the grating pitch accurately; for example, our home-made grating with a 20 μm period shows 475 nm peak-to-valley pitch nonuniformity with a 40 nm standard deviation over a 35 mm range. Another measurement showed a home-made grating with 40 nm peak-to-valley nonuniformity and a 10 nm standard deviation. This work verifies that our laboratory can fabricate high-accuracy gratings of practical interest for optical encoders.
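
    The correlation-peak idea behind the pitch evaluation can be illustrated with a toy one-dimensional example: build a synthetic grating profile, compute its autocorrelation, and take the mean spacing of the correlation peaks as the pitch estimate. This is only a schematic of the approach, not the three methods proposed in the paper.

```python
import numpy as np

# Synthetic 1D grating profile: period of 25.0 pixels, with a little noise.
true_pitch_px = 25.0
x = np.arange(2000)
profile = 0.5 + 0.5 * np.cos(2 * np.pi * x / true_pitch_px)
profile += np.random.default_rng(0).normal(0, 0.02, x.size)

# Autocorrelation of the zero-mean profile.
p = profile - profile.mean()
acf = np.correlate(p, p, mode="full")[p.size - 1:]

# Correlation peaks: local maxima of the autocorrelation.
peaks = np.where((acf[1:-1] > acf[:-2]) & (acf[1:-1] > acf[2:]))[0] + 1

# Pitch estimate: mean spacing between successive correlation peaks.
pitch_est = np.mean(np.diff(peaks[:40]))
print(f"Estimated pitch: {pitch_est:.2f} px (true {true_pitch_px} px)")
```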

  20. Precision Teaching, Frequency-Building, and Ballet Dancing

    ERIC Educational Resources Information Center

    Lokke, Gunn E. H.; Lokke, Jon A.; Arntzen, Erick

    2008-01-01

    This article reports the effectiveness of a brief intervention aimed at achieving fluency in basic ballet moves in a 9-year-old Norwegian girl by use of frequency-building and Precision Teaching procedures. One nonfluent ballet move was pinpointed, and instructional and training procedures designed to increase the frequency of accurate responding…

  1. Rapid and precise determination of ATP using a modified photometer

    USGS Publications Warehouse

    Shultz, David J.; Stephens, Doyle W.

    1980-01-01

    An inexpensive delay timer was designed to modify a commercially available ATP photometer which allows a disposable tip pipette to be used for injecting either enzyme or sample into the reaction cuvette. The disposable tip pipette is as precise and accurate as a fixed-needle syringe but eliminates the problem of sample contamination and decreases analytical time. (USGS)

  2. How dim is dim? Precision of the celestial compass in moonlight and sunlight

    PubMed Central

    Dacke, M.; Byrne, M. J.; Baird, E.; Scholtz, C. H.; Warrant, E. J.

    2011-01-01

    Prominent in the sky, but not visible to humans, is a pattern of polarized skylight formed around both the Sun and the Moon. Dung beetles are, at present, the only animal group known to use the much dimmer polarization pattern formed around the Moon as a compass cue for maintaining travel direction. However, the Moon is not visible every night and the intensity of the celestial polarization pattern gradually declines as the Moon wanes. Therefore, for nocturnal orientation on all moonlit nights, the absolute sensitivity of the dung beetle's polarization detector may limit the precision of this behaviour. To test this, we studied the straight-line foraging behaviour of the nocturnal ball-rolling dung beetle Scarabaeus satyrus to establish when the Moon is too dim—and the polarization pattern too weak—to provide a reliable cue for orientation. Our results show that celestial orientation is as accurate during crescent Moon as it is during full Moon. Moreover, this orientation accuracy is equal to that measured for diurnal species that orient under the 100 million times brighter polarization pattern formed around the Sun. This indicates that, in nocturnal species, the sensitivity of the optical polarization compass can be greatly increased without any loss of precision. PMID:21282173

  3. Digital encoding of cellular mRNAs enabling precise and absolute gene expression measurement by single-molecule counting.

    PubMed

    Fu, Glenn K; Wilhelmy, Julie; Stern, David; Fan, H Christina; Fodor, Stephen P A

    2014-03-18

    We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification.
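
    The core quantitation step, counting distinct molecular barcodes per gene target rather than raw sequencing reads, can be sketched in a few lines. The reads below are invented for illustration and this is not the authors' pipeline.

```python
from collections import defaultdict

# Each sequenced read is reduced to (gene_target, molecular_barcode).
# These example reads are invented for illustration.
reads = [
    ("GAPDH", "AACGT"), ("GAPDH", "AACGT"), ("GAPDH", "TTGCA"),
    ("CD3E",  "GGATC"), ("CD3E",  "GGATC"), ("CD3E",  "GGATC"),
    ("CD3E",  "CATGA"),
]

barcodes_per_gene = defaultdict(set)
for gene, barcode in reads:
    barcodes_per_gene[gene].add(barcode)

# The number of distinct barcodes approximates the number of starting
# mRNA molecules, independent of PCR amplification depth.
for gene, barcodes in barcodes_per_gene.items():
    print(gene, len(barcodes), "molecules")
```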

  4. Digital Encoding of Cellular mRNAs Enabling Precise and Absolute Gene Expression Measurement by Single-Molecule Counting

    PubMed Central

    2014-01-01

    We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification. PMID:24579851

  5. New High Precision Linelist of H_3^+

    NASA Astrophysics Data System (ADS)

    Hodges, James N.; Perry, Adam J.; Markus, Charles; Jenkins, Paul A., II; Kocheril, G. Stephen; McCall, Benjamin J.

    2014-06-01

    As the simplest polyatomic molecule, H_3^+ serves as an ideal benchmark for theoretical predictions of rovibrational energy levels. By strictly ab initio methods, the current accuracy of theoretical predictions is limited to an impressive one hundredth of a wavenumber, which has been accomplished by consideration of relativistic, adiabatic, and non-adiabatic corrections to the Born-Oppenheimer PES. More accurate predictions rely on a treatment of quantum electrodynamic effects, which have improved the accuracies of vibrational transitions in molecular hydrogen to a few MHz. High precision spectroscopy is of the utmost importance for extending the frontiers of ab initio calculations, as improved precision and accuracy enable more rigorous testing of calculations. Additionally, measuring rovibrational transitions of H_3^+ can be used to predict its forbidden rotational spectrum. Though the existing data can be used to determine rotational transition frequencies, the uncertainties are prohibitively large. Acquisition of rovibrational spectra with smaller experimental uncertainty would enable a spectroscopic search for the rotational transitions. The technique Noise Immune Cavity Enhanced Optical Heterodyne Velocity Modulation Spectroscopy, or NICE-OHVMS has been previously used to precisely and accurately measure transitions of H_3^+, CH_5^+, and HCO^+ to sub-MHz uncertainty. A second module for our optical parametric oscillator has extended our instrument's frequency coverage from 3.2-3.9 μm to 2.5-3.9 μm. With extended coverage, we have improved our previous linelist by measuring additional transitions. O. L. Polyansky, et al. Phil. Trans. R. Soc. A (2012), 370, 5014--5027. J. Komasa, et al. J. Chem. Theor. Comp. (2011), 7, 3105--3115. C. M. Lindsay, B. J. McCall, J. Mol. Spectrosc. (2001), 210, 66--83. J. N. Hodges, et al. J. Chem. Phys. (2013), 139, 164201.

  6. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rational for future innovations. PMID:24962141

  7. Accurate radio and optical positions for southern radio sources

    NASA Technical Reports Server (NTRS)

    Harvey, Bruce R.; Jauncey, David L.; White, Graeme L.; Nothnagel, Axel; Nicolson, George D.; Reynolds, John E.; Morabito, David D.; Bartel, Norbert

    1992-01-01

    Accurate radio positions with a precision of about 0.01 arcsec are reported for eight compact extragalactic radio sources south of -45-deg declination. The radio positions were determined using VLBI at 8.4 GHz on the 9589 km Tidbinbilla (Australia) to Hartebeesthoek (South Africa) baseline. The sources were selected from the Parkes Catalogue to be strong, flat-spectrum radio sources with bright optical QSO counterparts. Optical positions of the QSOs were also measured from the ESO B Sky Survey plates with respect to stars from the Perth 70 Catalogue, to an accuracy of about 0.19 arcsec rms. These radio and optical positions are as precise as any presently available in the far southern sky. A comparison of the radio and optical positions confirms the estimated optical position errors and shows that there is overall agreement at the 0.1-arcsec level between the radio and Perth 70 optical reference frames in the far south.

  8. Precise Radio-Telescope Measurements Advance Frontier Gravitational Physics

    NASA Astrophysics Data System (ADS)

    2009-09-01

    Scientists using a continent-wide array of radio telescopes have made an extremely precise measurement of the curvature of space caused by the Sun's gravity, and their technique promises a major contribution to a frontier area of basic physics. "Measuring the curvature of space caused by gravity is one of the most sensitive ways to learn how Einstein's theory of General Relativity relates to quantum physics. Uniting gravity theory with quantum theory is a major goal of 21st-Century physics, and these astronomical measurements are a key to understanding the relationship between the two," said Sergei Kopeikin of the University of Missouri. Kopeikin and his colleagues used the National Science Foundation's Very Long Baseline Array (VLBA) radio-telescope system to measure the bending of light caused by the Sun's gravity to an accuracy of 0.03 percent. With further observations, the scientists say their precision technique can make the most accurate measure ever of this phenomenon. Bending of starlight by gravity was predicted by Albert Einstein when he published his theory of General Relativity in 1916. According to relativity theory, the strong gravity of a massive object such as the Sun produces curvature in the nearby space, which alters the path of light or radio waves passing near the object. The phenomenon was first observed during a solar eclipse in 1919. Though numerous measurements of the effect have been made over the intervening 90 years, the problem of merging General Relativity and quantum theory has required ever more accurate observations. Physicists describe the space curvature and gravitational light-bending as a parameter called "gamma." Einstein's theory holds that gamma should equal exactly 1.0. "Even a value that differs by one part in a million from 1.0 would have major ramifications for the goal of uniting gravity theory and quantum theory, and thus in predicting the phenomena in high-gravity regions near black holes," Kopeikin said. To make

  9. Distinguishing between the success and precision of recollection.

    PubMed

    Harlow, Iain M; Yonelinas, Andrew P

    2016-01-01

    Recollection reflects the retrieval of complex qualitative information about prior events. Recently, Harlow and Donaldson developed a method for separating the probability of recollection success from the precision of the mnemonic information retrieved. In the current study, we ask if these properties are separable on the basis of subjective reports: are participants aware of these two aspects of recollection, and can they reliably report on them? Participants studied words paired with a location on a circle outline, and at test recalled the location for a given word as accurately as possible. Additionally, participants provided separate subjective ratings of recollection confidence and recollection precision. The results indicated that participants either recollected the target location with considerable (but variable) precision or retrieved no accurate location information at all. Importantly, recollection confidence reliably predicted whether locations were recollected, while precision ratings instead reflected the precision of the locations retrieved. The results demonstrate the experimental separability of recollection success and precision, and highlight the importance of disentangling these two different aspects of recollection when examining episodic memory.
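
    The pattern described, responses that are either precise but variable around the target or uninformative guesses, is commonly modelled as a mixture of a von Mises distribution centred on the target and a uniform distribution over the circle, with the mixture weight mapping onto recollection success and the concentration onto precision. The short simulation below illustrates that standard mixture model; it is not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 1000
p_success = 0.6          # probability of successful recollection
kappa = 8.0              # precision (concentration) of recollected responses

success = rng.random(n_trials) < p_success
errors = np.where(
    success,
    rng.vonmises(0.0, kappa, n_trials),      # precise responses around target
    rng.uniform(-np.pi, np.pi, n_trials),    # uninformative guesses
)

# The spread of the "success" component reflects precision; the mixture
# weight p_success reflects the probability of recollection success.
print("mean |error| (rad):", np.abs(errors).mean().round(3))
```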

  10. COMPUTER SIMULATIONS OF WAVEGUIDE WINDOW AND COUPLER IRIS FOR PRECISION MATCHING

    SciTech Connect

    Lee, Sung-Woo; Kang, Yoon W; Shin, Ki; Vassioutchenko, Alexandre V

    2011-01-01

    A tapered ridge waveguide iris input coupler and a waveguide ceramic disk window are used on each of the six drift tube linac (DTL) cavities in the Spallation Neutron Source (SNS). The coupler design employs a rapidly tapered double-ridge waveguide to reduce the cross section to a smaller, low-impedance transmission-line section that can couple easily to the DTL tank. Impedance matching is done by adjusting the dimensions of the thin slit aperture between the ridges, which is the coupling element responsible for power delivery to the cavity. Since the coupling is sensitive to dimensional changes of the aperture, it requires careful tuning for precise matching. Accurate RF simulation using the latest 3-D EM codes is desirable to support this tuning for maintenance and spare manufacturing. Simulations are performed for the complete system, with the ceramic window and the coupling iris on the cavity, to capture the mutual interaction between the components as a whole.

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  16. MEASUREMENT AND PRECISION, EXPERIMENTAL VERSION.

    ERIC Educational Resources Information Center

    Harvard Univ., Cambridge, MA. Harvard Project Physics.

    This document is an experimental version of a programed text on measurement and precision. Part I contains 24 frames dealing with precision and significant figures encountered in various mathematical computations and measurements. Part II begins with a brief section on experimental data, covering such points as (1) establishing the zero point, (2)…

  17. More Questions on Precision Teaching.

    ERIC Educational Resources Information Center

    Raybould, E. C.; Solity, J. E.

    1988-01-01

    Precision teaching can accelerate basic skills progress of special needs children. Issues discussed include using probes as performance tests, charting daily progress, using the charted data to modify teaching methods, determining appropriate age levels, assessing the number of students to be precision taught, and carefully allocating time. (JDD)

  18. Precision Teaching: Discoveries and Effects.

    ERIC Educational Resources Information Center

    Lindsley, Ogden R.

    1992-01-01

    This paper defines precision teaching; describes its monitoring methods by displaying a standard celeration chart and explaining charting conventions; points out precision teaching's roots in laboratory free-operant conditioning; discusses its learning tactics and performance principles; and describes its effectiveness in producing learning gains.…

  19. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    SciTech Connect

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades are individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented.

  20. Accurate method of modeling cluster scaling relations in modified gravity

    NASA Astrophysics Data System (ADS)

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the X-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  1. System and method for high precision isotope ratio destructive analysis

    SciTech Connect

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  2. High-precision positioning of radar scatterers

    NASA Astrophysics Data System (ADS)

    Dheenathayalan, Prabu; Small, David; Schubert, Adrian; Hanssen, Ramon F.

    2016-05-01

    Remote sensing radar satellites cover wide areas and provide spatially dense measurements, with millions of scatterers. Knowledge of the precise position of each radar scatterer is essential to identify the corresponding object and interpret the estimated deformation. The absolute position accuracy of synthetic aperture radar (SAR) scatterers in a 2D radar coordinate system, after compensating for atmosphere and tidal effects, is in the order of centimeters for TerraSAR-X (TSX) spotlight images. However, the absolute positioning in 3D and its quality description are not well known. Here, we exploit time-series interferometric SAR to enhance the positioning capability in three dimensions. The 3D positioning precision is parameterized by a variance-covariance matrix and visualized as an error ellipsoid centered at the estimated position. The intersection of the error ellipsoid with objects in the field is exploited to link radar scatterers to real-world objects. We demonstrate the estimation of scatterer position and its quality using 20 months of TSX stripmap acquisitions over Delft, the Netherlands. Using trihedral corner reflectors (CR) for validation, the accuracy of absolute positioning in 2D is about 7 cm. In 3D, an absolute accuracy of up to ~66 cm is realized, with a cigar-shaped error ellipsoid having centimeter precision in azimuth and range dimensions, and elongated in cross-range dimension with a precision in the order of meters (the ratio of the ellipsoid axis lengths is about 1:3:213). The CR absolute 3D position, along with the associated error ellipsoid, is found to be accurate and agree with the ground truth position at a 99% confidence level. For other non-CR coherent scatterers, the error ellipsoid concept is validated using 3D building models. In both cases, the error ellipsoid not only serves as a quality descriptor, but can also help to associate radar scatterers to real-world objects.
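
    The error-ellipsoid description used here follows from the eigendecomposition of the 3D variance-covariance matrix of the estimated position, and a ground-truth point (e.g., a corner reflector) can be tested against a chi-square confidence region. The covariance values below are invented purely to show the mechanics, loosely mimicking the cigar shape described.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical 3D position covariance (m^2): tight in azimuth/range,
# elongated in cross-range.
cov = np.diag([0.03**2, 0.09**2, 6.4**2])
est_pos = np.array([0.0, 0.0, 0.0])      # estimated scatterer position (m)
truth = np.array([0.02, -0.05, 1.5])     # hypothetical ground-truth offset (m)

# Ellipsoid semi-axes at 99% confidence: sqrt(eigenvalue * chi2 quantile).
vals, vecs = np.linalg.eigh(cov)
k99 = chi2.ppf(0.99, df=3)
semi_axes = np.sqrt(vals * k99)
print("99% error-ellipsoid semi-axes (m):", np.round(semi_axes, 3))

# Consistency test: Mahalanobis distance of the truth w.r.t. the estimate.
d = truth - est_pos
m2 = d @ np.linalg.inv(cov) @ d
print("Within 99% region:", m2 <= k99)
```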

  3. Raman Fingerprints of Atomically Precise Graphene Nanoribbons

    PubMed Central

    2016-01-01

    Bottom-up approaches allow the production of ultranarrow and atomically precise graphene nanoribbons (GNRs) with electronic and optical properties controlled by the specific atomic structure. Combining Raman spectroscopy and ab initio simulations, we show that GNR width, edge geometry, and functional groups all influence their Raman spectra. The low-energy spectral region below 1000 cm–1 is particularly sensitive to edge morphology and functionalization, while the D peak dispersion can be used to uniquely fingerprint the presence of GNRs and differentiates them from other sp2 carbon nanostructures. PMID:26907096

  4. Precision frequency measurements with interferometric weak values

    NASA Astrophysics Data System (ADS)

    Starling, David J.; Dixon, P. Ben; Jordan, Andrew N.; Howell, John C.

    2010-12-01

    We demonstrate an experiment which utilizes a Sagnac interferometer to measure a change in optical frequency of 129 ± 7 kHz/Hz with only 2 mW of continuous-wave, single-mode input power. We describe the measurement of a weak value and show how even higher-frequency sensitivities may be obtained over a bandwidth of several nanometers. This technique has many possible applications, such as precision relative frequency measurements and laser locking without the use of atomic lines.

  5. Accurate Mass Measurements in Proteomics

    SciTech Connect

    Liu, Tao; Belov, Mikhail E.; Jaitly, Navdeep; Qian, Weijun; Smith, Richard D.

    2007-08-01

    proteins can also be extensively modified by PTMs [26-31] or by their interactions with other biomolecules or small molecules [32,33]. Thus, it is highly desirable that proteins, the primary functional macromolecules involved in almost all biological activities, can be studied directly and systematically to determine their diverse properties and interplay. Such proteome-wide analysis is expected to provide a wealth of biological information, such as sequence, quantity, PTMs, interactions, activities, subcellular distribution and structure of proteins, which is critical to the comprehensive understanding of the biological systems. However, the de novo analysis of proteins isolated from cells, tissues or bodily fluids poses significant challenges due to the tremendous complexity and depth of the proteome, which necessitates high-throughput and highly sensitive analytical techniques. It is therefore not surprising that mass spectrometry (MS) has become an indispensable technology for proteome analysis.

  6. A robust and accurate formulation of molecular and colloidal electrostatics

    NASA Astrophysics Data System (ADS)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.
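
    The singularity-subtraction idea can be illustrated on the simpler Laplace problem; this is a generic sketch of the principle only, not the authors' full Debye-Hückel formulation, which also removes the weakly singular single-layer term by subtracting a second known solution. For a field point $x_0$ on the surface $S$, a conventional boundary integral equation reads

$$ c(x_0)\,\phi(x_0) + \int_S \phi(x)\,\frac{\partial G(x,x_0)}{\partial n}\,\mathrm{d}S(x) = \int_S G(x,x_0)\,\frac{\partial \phi(x)}{\partial n}\,\mathrm{d}S(x). $$

    Substituting the constant solution $\phi \equiv 1$ gives the identity $c(x_0) + \int_S \partial G/\partial n\,\mathrm{d}S = 0$; multiplying this identity by $\phi(x_0)$ and subtracting it from the equation above yields

$$ \int_S \bigl[\phi(x) - \phi(x_0)\bigr]\,\frac{\partial G(x,x_0)}{\partial n}\,\mathrm{d}S(x) = \int_S G(x,x_0)\,\frac{\partial \phi(x)}{\partial n}\,\mathrm{d}S(x), $$

    in which the solid-angle term $c(x_0)$ has disappeared and the double-layer integrand vanishes as $x \to x_0$, so ordinary quadrature suffices.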

  7. A robust and accurate formulation of molecular and colloidal electrostatics.

    PubMed

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y C

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics. PMID:27497538

  8. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
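
    The core idea, treating the sensitivity equation as a differential equation in the design variable and integrating it in closed form, can be shown on a one-degree-of-freedom case: for a tip deflection u(h) = c/h^3 (bending stiffness proportional to the cube of the section height h), the sensitivity relation du/dh = -(3/h)u integrates to u = u0(h0/h)^3, whereas the linear Taylor expansion degrades rapidly away from the baseline. This scalar example is my own illustration, not one of the beam cases from the report; in this toy case the closed-form solution happens to reproduce the exact response.

```python
import numpy as np

# One-DOF illustration: cantilever tip deflection u(h) = c / h**3,
# where h is the section height (design variable). Values are arbitrary.
c = 8.0
h0 = 1.0
u0 = c / h0**3
du_dh0 = -3.0 * c / h0**4          # sensitivity at the baseline design

for h in [1.0, 1.1, 1.3, 1.5]:
    exact = c / h**3
    taylor = u0 + du_dh0 * (h - h0)          # linear Taylor approximation
    # DEB-style: solve du/dh = -(3/h) u in closed form -> u = u0*(h0/h)**3
    deb = u0 * (h0 / h)**3
    print(f"h={h:.1f}  exact={exact:.3f}  Taylor={taylor:.3f}  DEB={deb:.3f}")
```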

  9. Measuring and balancing dynamic unbalance of precision centrifuge

    NASA Astrophysics Data System (ADS)

    Yang, Yafei; Huo, Xin

    2008-10-01

    A precision centrifuge is used to test and calibrate accelerometer model parameters. Dynamic unbalance perturbs the centrifuge and degrades the accuracy of accelerometer testing and calibration. By analyzing the causes of dynamic unbalance, the influences of static unbalance and couple unbalance on a precision centrifuge are derived. Measuring and balancing the static unbalance is considered the key to resolving the dynamic unbalance problem of a disk-type precision centrifuge. Measurement methods and calculation formulas for the static unbalance are given, and the balancing principle and procedure are provided. The correctness and effectiveness of the method are confirmed by experiments on a device under tuning, yielding an accurate and efficient way of measuring and balancing the dynamic unbalance of this precision centrifuge.

  10. Precision measurement of cosmic magnification from 21 cm emitting galaxies

    SciTech Connect

    Zhang, Pengjie; Pen, Ue-Li; /Canadian Inst. Theor. Astrophys.

    2005-04-01

    We show how precision lensing measurements can be obtained through the lensing magnification effect in high redshift 21 cm emission from galaxies. Normally, cosmic magnification measurements have been seriously complicated by galaxy clustering. With precise redshifts obtained from the 21 cm emission-line wavelength, one can correlate galaxies at different source planes, or exclude close pairs, to eliminate such contamination. We provide forecasts for future surveys, specifically the SKA and CLAR. SKA can achieve percent precision on the dark matter power spectrum and the galaxy dark matter cross correlation power spectrum, while CLAR can measure an accurate cross correlation power spectrum. The neutral hydrogen fraction was most likely significantly higher at high redshifts, which substantially increases the number of observable galaxies, so that CLAR can also measure the dark matter lensing power spectrum. SKA can also allow precise measurement of the lensing bispectrum.

  11. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available from the photographic film method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity and to estimate the ejection time of the meteoroids accurately by comparison with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and temporal resolution at the same time and the computational cost of finding the meteor position, are illustrated.

  12. Accurate Method for Determining Adhesion of Cantilever Beams

    SciTech Connect

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  13. Precision Agriculture. Reaping the Benefits of Technological Growth. Resources in Technology.

    ERIC Educational Resources Information Center

    Hadley, Joel F.

    1998-01-01

    Technological innovations have revolutionized farming. Using precision farming techniques, farmers get an accurate picture of a field's attributes, such as soil properties, yield rates, and crop characteristics through the use of Differential Global Positioning Satellite hardware. (JOW)

  14. A novel high-sensitivity FBG pressure sensor

    NASA Astrophysics Data System (ADS)

    Yao, Zhenhua; Fu, Tao; Leng, Jinsong

    2007-07-01

    A novel pressure sensor based on a fiber Bragg grating (FBG) is designed in this paper. It works accurately not only in normal environments but also in water and petrol, where conventional sensors cannot operate normally. The principle of the sensor is introduced, and two experiments are performed: in one the sensor is held flat in a gas-tight chamber pressurized by an air compressor, and in the other the sensor is immersed in liquid. Analysis of the resulting data demonstrates that the sensor possesses high sensitivity, linearity, precision and repeatability; the experimental linearity and sensitivity approach 0.99858 and 5.35×10⁻³ MPa⁻¹, respectively. The use of the sensor to measure the liquid volume in a tank is also discussed.
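
    If the quoted sensitivity is read as a relative Bragg-wavelength shift per unit pressure (an assumption on my part, since the abstract gives only the value in MPa⁻¹), converting a measured wavelength shift to pressure is a one-line calculation; the wavelength values below are hypothetical.

```python
# Assumed interpretation: sensitivity S = (delta_lambda / lambda0) per MPa.
S = 5.35e-3          # MPa^-1, value quoted in the abstract
lambda0 = 1550.00    # nm, hypothetical Bragg wavelength at zero pressure
delta_lambda = 0.83  # nm, hypothetical measured wavelength shift

pressure_mpa = (delta_lambda / lambda0) / S
print(f"Pressure: {pressure_mpa:.2f} MPa")
```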

  15. Precision Measurements at the ILC

    SciTech Connect

    Nelson, T.K.; /SLAC

    2006-12-06

    With relatively low backgrounds and a well-determined initial state, the proposed International Linear Collider (ILC) would provide a precision complement to the LHC experiments at the energy frontier. Completely and precisely exploring the discoveries of the LHC with such a machine will be critical in understanding the nature of those discoveries and what, if any, new physics they represent. The unique ability to form a complete picture of the Higgs sector is a prime example of the probative power of the ILC and represents a new era in precision physics.

  16. Precision Instrument and Equipment Repairers.

    ERIC Educational Resources Information Center

    Wyatt, Ian

    2001-01-01

    Explains the job of precision instrument and equipment repairers, who work on cameras, medical equipment, musical instruments, watches and clocks, and industrial measuring devices. Discusses duties, working conditions, employment and earnings, job outlook, and skills and training. (JOW)

  17. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient σ⁰. In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of σ⁰, and precision of the estimated forest biomass.
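
    As a rough illustration of the kind of terrain-related area normalization discussed here, the sketch below applies a simple first-order correction of the backscattering coefficient for the local incidence angle derived from a DEM. This is a generic, textbook-style correction with invented values; it is not the specific AIRSAR calibration chain (antenna-pattern correction, aircraft attitude handling, layover/shadow masking) described by the authors.

```python
import numpy as np

def sigma0_corrected_db(sigma0_db, theta_local_deg, theta_ref_deg):
    """First-order area normalization of sigma0 (dB) from the DEM-derived
    local incidence angle to a reference incidence angle. Illustrative only;
    real processing also handles the antenna pattern and layover/shadow."""
    num = np.sin(np.radians(theta_local_deg))
    den = np.sin(np.radians(theta_ref_deg))
    return sigma0_db + 10.0 * np.log10(num / den)

# Invented example values for one resolution cell.
print(sigma0_corrected_db(sigma0_db=-8.5, theta_local_deg=32.0, theta_ref_deg=45.0))
```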

  18. Accurate Runout Measurement for HDD Spinning Motors and Disks

    NASA Astrophysics Data System (ADS)

    Jiang, Quan; Bi, Chao; Lin, Song

    As hard disk drive (HDD) areal density increases, the track width becomes smaller and smaller, and so does the allowable non-repeatable runout. The HDD industry therefore needs more accurate, higher-resolution runout measurements of spinning spindle motors and media platters in both the axial and radial directions. This paper introduces a new system for precisely measuring the runout of HDD spinning disks and motors by synchronously acquiring the rotor position signal and the displacements in the axial or radial direction. To minimize the synchronization error between the rotor position and the displacement signal, a high-resolution counter is adopted instead of the conventional phase-locked loop method. With a laser Doppler vibrometer and proper signal processing, the proposed system can measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It provides an effective and accurate means to measure the runout of high areal density HDDs, in particular next-generation HDDs such as patterned-media HDDs and HAMR HDDs.

  19. Precision GPS ephemerides and baselines

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Based on the research in the area of precise ephemerides for GPS satellites, the following observations can be made concerning the status of orbit accuracy and the future work needed. Several aspects need to be addressed in discussing the determination of precise orbits, such as force models, kinematic models, measurement models, and data reduction/estimation methods. Although each of these aspects was studied in research efforts at CSR, only points pertaining to force modeling are addressed here.

  20. Smart and precise alignment of optical systems

    NASA Astrophysics Data System (ADS)

    Langehanenberg, Patrik; Heinisch, Josef; Stickler, Daniel

    2013-09-01

    For the assembly of any kind of optical system, the precise centration of every single element is of particular importance. Classically, the precise alignment of optical components is based on centering all components to an external axis (usually a high-precision rotary spindle axis). The main drawback of this time-consuming process is that it is significantly sensitive to misalignments of the reference (e.g., housing) axis. To simplify the process, in this contribution we present a novel alignment strategy for the TRIOPTICS OptiCentric® instrument family that directly aligns two elements with respect to each other by measuring the first element's axis and using it as the alignment reference, without the detour of an external reference. Depending on the optical design, any axis in the system can be chosen as the target axis. For alignment to a barrel, this axis is measured using a distance sensor (e.g., the classically used dial indicator). Instead of fine alignment, the obtained data are used to calculate the axis orientation within the setup. Alternatively, the axis of an optical element (a single lens or a group of lenses), whose orientation is measured with the standard OptiCentric MultiLens concept, can be used as a reference. In the instrument's software, the decentering of the element being adjusted relative to the calculated axis is displayed in real time and indicated by a target mark that can be used for manual alignment. In addition, the obtained information can also be applied for active, fully automated alignment of lens assemblies with the help of motorized actuators.

  1. Towards precision medicine in epilepsy surgery

    PubMed Central

    Jin, Pingping; Wu, Dongyan; Li, Xiaoxuan

    2016-01-01

    Up to a third of all patients with epilepsy are refractory to medical therapy, despite the introduction over the last two decades of new antiepileptic drugs (AEDs) with considerable advantages in safety and tolerability. It is widely accepted that epilepsy surgery is a highly effective therapeutic option in a selected subset of patients with refractory focal seizures. There is no doubt that accurate localization of the epileptogenic zone (EZ) is crucial to the success of resective surgery for intractable epilepsy. The pre-surgical evaluation requires a multimodality approach wherein each modality provides unique and complementary information. Accurate localization of the EZ remains challenging, especially in patients with normal MRI findings. Substantial progress has been made in pre-surgical assessment methods in recent years, which has widened the applicability of surgical treatment for children and adults with refractory seizures. Advances in neuroimaging, including voxel-based morphometric MRI analysis, multimodality techniques and computer-aided subtraction ictal SPECT co-registered to MRI, have improved our ability to identify subtle structural and metabolic lesions causing focal seizures. Observations from animal models of epilepsy and from pre-surgical patients have consistently found a strong correlation between high-frequency oscillations (HFOs) and epileptogenic brain tissue, suggesting that HFOs could be a potential biomarker of the EZ. Since SEEG emphasizes the study of the spatiotemporal dynamics of seizure discharges, accounting for their dynamic, multidirectional spatiotemporal organization, it has greatly deepened our understanding of the anatomo-electro-clinical profile of seizures. In this review, we focus on state-of-the-art pre-surgical investigations that contribute to precision medicine. Furthermore, advances also provide the opportunity to achieve minimal side effects and

  2. High sensitivity, homogeneous particle-based immunoassay for thyrotropin (Multipact).

    PubMed

    Wilkins, T A; Brouwers, G; Mareschal, J C; Cambiaso, C L

    1988-09-01

    We describe the first homogeneous, nonradioactive, high-sensitivity assay for human thyrotropin (TSH). The assay is based on particle immunoassay techniques, wherein 800-nm particles form the basis for the immunochemistry, delivery, and the detection technologies, respectively. Our assay also is the first to involve the use of fragmented monoclonal antibodies (to eliminate serum interferences) covalently coupled to particles without loss of their binding properties. Assays are performed in a semiautomated mode with use of a new modular system (Multipact). Equilibrium is reached in less than 2 h. Precision profile, sensitivity, and clinical studies indicate that the assay is accurate, has good precision at low concentrations, and that detection-limit characteristics compare well with those of a leading commercial high-sensitivity immunoradiometric assay (IRMA) for TSH. Dilution characteristics were satisfactory down to the assay's detection limit for a range of clinical samples. Correlation studies vs a reference IRMA method yielded the regression equation, present method = 0.976 (IRMA) + 0.002 milli-int. unit/L (r = 0.98), for 223 samples with TSH concentrations in the range 0 to 30 milli-int. units/L. For 40 samples with TSH less than or equal to 1.0 milli-int. unit/L it was: present method = 0.94 (IRMA) + 0.005 milli-int. unit/L (r = 0.96). PMID:3416423

  3. Precise radial velocities in the near infrared

    NASA Astrophysics Data System (ADS)

    Redman, Stephen L.

    Since the first detection of a planet outside our Solar System by Wolszczan & Frail (1992), over 500 exoplanets have been found to date, none of which resemble the Earth. Most of these planets were discovered by measuring the radial velocity (hereafter, RV) of the host star, which wobbles under the gravitational influence of any existing planetary companions. However, this method has yet to achieve the sub-m/s precision necessary to detect an Earth-mass planet in the Habitable Zone (the region around a star that can support liquid water; hereafter, HZ) (Kasting et al. 1993) around a Solar-type star. Even though Kepler (Borucki et al. 2010) has announced several Earth-sized HZ candidates, these targets will be exceptionally difficult to confirm with current astrophysical spectrographs (Borucki et al. 2011). The fastest way to discover and confirm potentially habitable Earth-mass planets is to observe stars with lower masses - in particular, late M dwarfs. While M dwarfs are readily abundant, comprising some 70% of the local stellar population, their low optical luminosity presents a formidable challenge to current optical RV instruments. By observing in the near-infrared (hereafter, NIR), where the flux from M dwarfs peaks, we can potentially reach low RV precisions with significantly less telescope time than would be required by a comparable optical instrument. However, NIR precision RV measurements are a relatively new idea and replete with challenges: IR arrays, unlike CCDs, are sensitive to the thermal background; modal noise is a bigger issue in the NIR than in the optical; and the NIR currently lacks the calibration sources like the very successful thorium-argon (hereafter, ThAr) hollow-cathode lamp and Iodine gas cell of the optical. The PSU Pathfinder (hereafter, Pathfinder) was designed to explore these technical issues with the intention of mitigating these problems for future NIR high-resolution spectrographs, such as the Habitable-Zone Planet Finder (HZPF

  4. High precision radial velocities with GIANO spectra

    NASA Astrophysics Data System (ADS)

    Carleo, I.; Sanna, N.; Gratton, R.; Benatti, S.; Bonavita, M.; Oliva, E.; Origlia, L.; Desidera, S.; Claudi, R.; Sissa, E.

    2016-06-01

    Radial velocities (RV) measured from near-infrared (NIR) spectra are a potentially excellent tool to search for extrasolar planets around cool or active stars. High resolution infrared (IR) spectrographs now available are reaching the high precision of visible instruments, with a constant improvement over time. GIANO is an infrared echelle spectrograph at the Telescopio Nazionale Galileo (TNG) and it is a powerful tool to provide high resolution spectra for accurate RV measurements of exoplanets and for chemical and dynamical studies of stellar or extragalactic objects. No other high spectral resolution IR instrument has GIANO's capability to cover the entire NIR wavelength range (0.95-2.45 μm) in a single exposure. In this paper we describe the ensemble of procedures that we have developed to measure high precision RVs on GIANO spectra acquired during the Science Verification (SV) run, using the telluric lines as wavelength reference. We used the Cross Correlation Function (CCF) method to determine the velocity for both the star and the telluric lines. For this purpose, we constructed two suitable digital masks that include about 2000 stellar lines, and a similar number of telluric lines. The method is applied to various targets with different spectral type, from K2V to M8 stars. We reached different precisions mainly depending on the H-magnitudes: for H ~ 5 we obtain an rms scatter of ~10 m s^-1, while for H ~ 9 the standard deviation increases to ~50-80 m s^-1. The corresponding theoretical error expectations are ~4 m s^-1 and ~30 m s^-1, respectively. Finally we provide the RVs measured with our procedure for the targets observed during GIANO Science Verification.
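
    As an illustration of the CCF approach described above (not the GIANO pipeline itself), the following minimal Python sketch cross-correlates a continuum-normalised spectrum with a binary line mask over a velocity grid and refines the peak with a parabolic fit; the mask line list, mask width and velocity grid are placeholder assumptions. Applying the same procedure to a telluric mask gives the instrumental zero point, and the stellar RV is the difference of the two velocities, which is the essence of using telluric lines as the wavelength reference.

        import numpy as np

        C_KMS = 299792.458  # speed of light, km/s

        def ccf(wave, flux, mask_centers, mask_width=0.02, v_grid=None):
            """Toy cross-correlation of a normalised spectrum with a binary line mask."""
            if v_grid is None:
                v_grid = np.arange(-30.0, 30.0, 0.25)        # km/s, placeholder grid
            out = np.zeros_like(v_grid)
            for i, v in enumerate(v_grid):
                shifted = mask_centers * (1.0 + v / C_KMS)    # Doppler-shift the mask lines
                for c in shifted:
                    sel = np.abs(wave - c) < mask_width / 2.0
                    out[i] += np.sum(1.0 - flux[sel])         # absorption depth inside the mask
            return v_grid, out

        def rv_from_ccf(v_grid, ccf_values):
            """Radial velocity from a parabolic refinement of the CCF peak."""
            k = int(np.argmax(ccf_values))
            if 0 < k < len(ccf_values) - 1:                   # 3-point parabola through the peak
                y0, y1, y2 = ccf_values[k - 1], ccf_values[k], ccf_values[k + 1]
                k = k + 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
            return float(np.interp(k, np.arange(len(v_grid)), v_grid))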

  5. Error bounds from extra precise iterative refinement

    SciTech Connect

    Demmel, James; Hida, Yozo; Kahan, William; Li, Xiaoye S.; Mukherjee, Soni; Riedy, E. Jason

    2005-02-07

    We present the design and testing of an algorithm for iterative refinement of the solution of linear equations, where the residual is computed with extra precision. This algorithm was originally proposed in the 1960s [6, 22] as a means to compute very accurate solutions to all but the most ill-conditioned linear systems of equations. However, two obstacles have until now prevented its adoption in standard subroutine libraries like LAPACK: (1) There was no standard way to access the higher precision arithmetic needed to compute residuals, and (2) it was unclear how to compute a reliable error bound for the computed solution. The completion of the new BLAS Technical Forum Standard [5] has recently removed the first obstacle. To overcome the second obstacle, we show how a single application of iterative refinement can be used to compute an error bound in any norm at small cost, and use this to compute both an error bound in the usual infinity norm, and a componentwise relative error bound. We report extensive test results on over 6.2 million matrices of dimension 5, 10, 100, and 1000. As long as a normwise (resp. componentwise) condition number computed by the algorithm is less than 1/(max{10, √n} · ε_w), the computed normwise (resp. componentwise) error bound is at most 2 max{10, √n} · ε_w, and indeed bounds the true error. Here, n is the matrix dimension and ε_w is the single-precision roundoff error. For worse conditioned problems, we get similarly small correct error bounds in over 89.4% of cases.
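
    A minimal sketch of the underlying idea, with float32 standing in for the working precision and float64 for the extra-precise residual; the paper's LAPACK-oriented implementation reuses a single LU factorization and derives rigorous normwise and componentwise bounds, so the names and stopping tolerance below are illustrative only.

        import numpy as np

        def iterative_refinement(A, b, max_iters=10):
            """Solve Ax = b in working precision (float32) and refine the solution
            using residuals computed in higher precision (float64)."""
            A_w = A.astype(np.float32)
            x = np.linalg.solve(A_w, b.astype(np.float32)).astype(np.float64)
            eps_w = np.finfo(np.float32).eps
            for _ in range(max_iters):
                # extra-precise residual: the key ingredient discussed above
                r = b.astype(np.float64) - A.astype(np.float64) @ x
                dx = np.linalg.solve(A_w, r.astype(np.float32)).astype(np.float64)
                x += dx
                # the size of the last correction gives a cheap normwise error estimate,
                # in the spirit of the error bound computed by the algorithm
                if np.linalg.norm(dx) <= eps_w * np.linalg.norm(x):
                    break
            return x, np.linalg.norm(dx) / np.linalg.norm(x)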

  6. No galaxy left behind: accurate measurements with the faintest objects in the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Suchyta, E.; Huff, E. M.; Aleksić, J.; Melchior, P.; Jouvel, S.; MacCrann, N.; Ross, A. J.; Crocce, M.; Gaztanaga, E.; Honscheid, K.; Leistedt, B.; Peiris, H. V.; Rykoff, E. S.; Sheldon, E.; Abbott, T.; Abdalla, F. B.; Allam, S.; Banerji, M.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Rosell, A. Carnero; Kind, M. Carrasco; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; James, D. J.; Jarvis, M.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Miller, C. J.; Miquel, R.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Percival, W. J.; Reil, K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.; Zhang, Y.; DES Collaboration

    2016-03-01

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of detectable stars or galaxies. We have implemented our proposal in BALROG, software which embeds fake objects in real imaging to accurately characterize measurement biases. We demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for their sensitivity to depth and seeing variations. Using the synthetic galaxies as randoms in the Landy-Szalay estimator suppresses the effects of variable survey selection by at least two orders of magnitude. With this correction, our measured angular clustering is found to be in excellent agreement with that of a matched sample from much deeper, higher resolution space-based Cosmological Evolution Survey (COSMOS) imaging; over angular scales of 0.004° < θ < 0.2°, we find a best-fitting scaling amplitude between the DES and COSMOS measurements of 1.00 ± 0.09. We expect this methodology to be broadly useful for extending measurements' statistical reach in a variety of upcoming imaging surveys.
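
    For reference, a toy flat-sky version of the Landy-Szalay estimator used above; in the paper the "randoms" are the Balrog-injected synthetic galaxies and pair counting is done on the celestial sphere, so the positions, bins, and k-d-tree shortcut below are illustrative assumptions only.

        import numpy as np
        from scipy.spatial import cKDTree

        def pair_counts(p1, p2, bins):
            """Pair counts per separation bin (flat-sky, small-angle approximation).
            `bins` must be increasing with bins[0] > 0 so that self-pairs drop out."""
            counts = cKDTree(p1).count_neighbors(cKDTree(p2), bins)  # cumulative, ordered pairs
            return np.diff(counts).astype(float)

        def landy_szalay(data, randoms, bins):
            """w(theta) = (DD - 2 DR + RR) / RR, with each pair count normalised by
            the total number of ordered pairs in that catalogue combination."""
            nd, nr = len(data), len(randoms)
            dd = pair_counts(data, data, bins) / (nd * (nd - 1))
            dr = pair_counts(data, randoms, bins) / (nd * nr)
            rr = pair_counts(randoms, randoms, bins) / (nr * (nr - 1))
            return (dd - 2.0 * dr + rr) / rr

        # usage with hypothetical (RA, Dec) arrays in degrees over a small patch:
        # theta_bins = np.linspace(0.004, 0.2, 15)
        # w = landy_szalay(np.column_stack([ra_d, dec_d]),
        #                  np.column_stack([ra_r, dec_r]), theta_bins)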

  7. No Galaxy Left Behind: Accurate Measurements with the Faintest Objects in the Dark Energy Survey

    DOE PAGES

    Suchyta, E.

    2016-01-27

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of stars or galaxies detectable in an imaging survey. We have implemented our proposal in Balrog, a software package which embeds fake objects in real imaging in order to accurately characterize measurement biases. We demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a wide variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for their sensitivity to depth and seeing variations. Using the synthetic galaxies as randoms in the standard Landy-Szalay correlation function estimator suppresses the effects of variable survey selection by at least two orders of magnitude. With this correction, our measured angular clustering is found to be in excellent agreement with that of a matched sample drawn from much deeper, higher-resolution space-based COSMOS imaging; over angular scales of 0.004° < θ < 0.2°, we find a best-fit scaling amplitude between the DES and COSMOS measurements of 1.00 ± 0.09. We expect this methodology to be broadly useful for extending the statistical reach of measurements in a wide variety of upcoming imaging surveys.

  8. Accurate alignment of optical axes of a biplate using a spectroscopic Mueller matrix ellipsometer.

    PubMed

    Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Zhang, Chuanwei; Li, Weiqi; Liu, Shiyuan

    2016-05-20

    The biplate that consists of two single wave plates made from birefringent materials with their fast axes oriented perpendicular to each other is one of the most commonly used retarders in many optical systems. The internal alignment of the optical axes of the two single wave plates is a key procedure in the fabrication and application of a biplate to reduce the spurious artifacts of oscillations in polarization properties due to the misalignment error and to improve the accuracy and precision of the systems using such biplates. In this paper, we propose a method to accurately align the axes of an arbitrary biplate by minimizing the oscillations in the characteristic parameter spectra of the biplate detected by a spectroscopic Mueller matrix ellipsometer (MME). We derived analytical relations between the characteristic parameters and the misalignment error in the biplate, which helps us to analyze the sensitivity of the characteristic parameters to the misalignment error and to evaluate the alignment accuracy quantitatively. Experimental results performed on a house-developed MME demonstrate that the alignment accuracy of the proposed method is better than 0.01° in aligning the optical axes of a quartz biplate.

  9. Precise Restraightening of Bent Studs

    NASA Technical Reports Server (NTRS)

    Boardman, R. E.

    1982-01-01

    Special tool quickly bends studs back into shape accurately and safely by force applied by hydraulic ram, with deflection being measured by dial indicator. Ram and indicator can be interchanged for straightening in reverse direction.

  10. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, G.; Horton, K.A.; Elias, T.; Garbeil, H.; Mouginis-Mark, P. J.; Sutton, A.J.; Harris, A.J.L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s^-1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements. © Springer-Verlag 2006.
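
    The core of the dual-spectrometer idea reduces to a time-lag estimate, sketched below under the simplifying assumptions of two equal-length, evenly sampled, simultaneous SO2 column time series and a plume moving directly along the line joining the instruments; the published FLYSPEC analysis is more involved, so this is only a schematic.

        import numpy as np

        def plume_speed(so2_upwind, so2_downwind, dt_s, separation_m):
            """Plume speed from the lag that maximises the cross-correlation of two
            SO2 column time series (same length, sampled every dt_s seconds) recorded
            by spectrometers a known distance apart."""
            a = np.asarray(so2_upwind, float) - np.mean(so2_upwind)
            b = np.asarray(so2_downwind, float) - np.mean(so2_downwind)
            xcorr = np.correlate(b, a, mode="full")      # peaks at positive lag if b trails a
            lags_s = np.arange(-len(a) + 1, len(a)) * dt_s
            lag = lags_s[np.argmax(xcorr)]
            if lag <= 0:
                raise ValueError("no positive lag found; check instrument order or data window")
            return separation_m / lag                    # m/s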

  11. Precision and Disclosure in Text and Voice Interviews on Smartphones.

    PubMed

    Schober, Michael F; Conrad, Frederick G; Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L; Johnston, Michael; Vickers, Lucas; Yan, H Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data (fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.

  12. Precision and Disclosure in Text and Voice Interviews on Smartphones

    PubMed Central

    Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L.; Johnston, Michael; Vickers, Lucas; Yan, H. Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data—fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information—than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey. PMID:26060991

  13. Precision and Disclosure in Text and Voice Interviews on Smartphones.

    PubMed

    Schober, Michael F; Conrad, Frederick G; Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L; Johnston, Michael; Vickers, Lucas; Yan, H Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data (fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information) than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey. PMID:26060991

  14. Improving precision of forage yield trials: A case study

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  15. Kinematic precision of gear trains

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1983-01-01

    Kinematic precision is affected by errors which are the result of either intentional adjustments or accidental defects in manufacturing and assembly of gear trains. A method for the determination of kinematic precision of gear trains is described. The method is based on the exact kinematic relations for the contact point motions of the gear tooth surfaces under the influence of errors. An approximate method is also explained. Example applications of the general approximate methods are demonstrated for gear trains consisting of involute (spur and helical) gears, circular arc (Wildhaber-Novikov) gears, and spiral bevel gears. Gear noise measurements from a helicopter transmission are presented and discussed with relation to the kinematic precision theory. Previously announced in STAR as N82-32733

  16. Kinematic precision of gear trains

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1982-01-01

    Kinematic precision is affected by errors which are the result of either intentional adjustments or accidental defects in manufacturing and assembly of gear trains. A method for the determination of kinematic precision of gear trains is described. The method is based on the exact kinematic relations for the contact point motions of the gear tooth surfaces under the influence of errors. An approximate method is also explained. Example applications of the general approximate methods are demonstrated for gear trains consisting of involute (spur and helical) gears, circular arc (Wildhaber-Novikov) gears, and spiral bevel gears. Gear noise measurements from a helicopter transmission are presented and discussed with relation to the kinematic precision theory.

  17. Precise Orbit Determination for ALOS

    NASA Technical Reports Server (NTRS)

    Nakamura, Ryo; Nakamura, Shinichi; Kudo, Nobuo; Katagiri, Seiji

    2007-01-01

    The Advanced Land Observing Satellite (ALOS) has been developed to contribute to the fields of mapping, precise regional land coverage observation, disaster monitoring, and resource surveying. Because the mounted sensors need high geometrical accuracy, precise orbit determination for ALOS is essential for satisfying the mission objectives. ALOS therefore carries a GPS receiver and a Laser Reflector (LR) for Satellite Laser Ranging (SLR). This paper deals with the precise orbit determination experiments for ALOS using the Global and High Accuracy Trajectory determination System (GUTS) and the evaluation of the orbit determination accuracy with SLR data. The results show that, even though the GPS receiver loses lock on GPS signals more frequently than expected, the GPS-based orbit is consistent with the SLR-based orbit. Considering the 1-sigma error, an orbit determination accuracy of a few decimeters (peak-to-peak) was achieved.

  18. Precision cleaning apparatus and method

    DOEpatents

    Schneider, T.W.; Frye, G.C.; Martin, S.J.

    1998-01-13

    A precision cleaning apparatus and method are disclosed. The precision cleaning apparatus includes a cleaning monitor further comprising an acoustic wave cleaning sensor such as a quartz crystal microbalance (QCM), a flexural plate wave (FPW) sensor, a shear horizontal acoustic plate mode (SH--APM) sensor, or a shear horizontal surface acoustic wave (SH--SAW) sensor; and measurement means connectable to the sensor for measuring in-situ one or more electrical response characteristics that vary in response to removal of one or more contaminants from the sensor and a workpiece located adjacent to the sensor during cleaning. Methods are disclosed for precision cleaning of one or more contaminants from a surface of the workpiece by means of the cleaning monitor that determines a state of cleanliness and any residual contamination that may be present after cleaning; and also for determining an effectiveness of a cleaning medium for removing one or more contaminants from a workpiece. 11 figs.

  19. Precision cleaning apparatus and method

    DOEpatents

    Schneider, Thomas W.; Frye, Gregory C.; Martin, Stephen J.

    1998-01-01

    A precision cleaning apparatus and method. The precision cleaning apparatus includes a cleaning monitor further comprising an acoustic wave cleaning sensor such as a quartz crystal microbalance (QCM), a flexural plate wave (FPW) sensor, a shear horizontal acoustic plate mode (SH--APM) sensor, or a shear horizontal surface acoustic wave (SH--SAW) sensor; and measurement means connectable to the sensor for measuring in-situ one or more electrical response characteristics that vary in response to removal of one or more contaminants from the sensor and a workpiece located adjacent to the sensor during cleaning. Methods are disclosed for precision cleaning of one or more contaminants from a surface of the workpiece by means of the cleaning monitor that determines a state of cleanliness and any residual contamination that may be present after cleaning; and also for determining an effectiveness of a cleaning medium for removing one or more contaminants from a workpiece.

  20. Validation and Generalization of a Method for Precise Size Measurements of Metal Nanoclusters on Supports

    SciTech Connect

    Reed, B W; Morgan, D G; Okamoto, N L; Kulkarni, A; Gates, B C; Browning, N D

    2008-09-16

    We recently described a data analysis method for precise (~0.1 Å random error in the mean for a 200 kV instrument with a 3 Å FWHM probe size) size measurements of small clusters of heavy metal atoms on supports as imaged in a scanning transmission electron microscope, including an experimental demonstration using clusters that were primarily triosmium or decaosmium. The method is intended for low signal-to-noise ratio images of radiation-sensitive samples. We now present a detailed analysis, including a generalization to address issues of particle anisotropy and biased orientation distributions. In the future, this analysis should enable extraction of shape as well as size information, up to the noise-defined limit of information present in the image. We also present results from an extensive series of simulations designed to determine the method's range of applicability and expected performance in realistic situations. The simulations reproduce the experiments quite accurately, enabling a correction of systematic errors so that only the ~0.1 Å random error remains. The results are very stable over a wide range of parameters. We introduce a variation on the method with improved precision and stability relative to the original version, while also showing how simple diagnostics can test whether the results are reliable in any particular instance.

  1. Platform Precision Autopilot Overview and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Lin, V.; Strovers, B.; Lee, J.; Beck, R.

    2008-01-01

    The Platform Precision Autopilot is an instrument landing system interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Uninhabited Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.

  2. Microbiopsy/precision cutting devices

    DOEpatents

    Krulevitch, Peter A.; Lee, Abraham P.; Northrup, M. Allen; Benett, William J.

    1999-01-01

    Devices for performing tissue biopsy on a small scale (microbiopsy). By reducing the size of the biopsy tool and removing only a small amount of tissue or other material in a minimally invasive manner, the risks, costs, injury and patient discomfort associated with traditional biopsy procedures can be reduced. By using micromachining and precision machining capabilities, it is possible to fabricate small biopsy/cutting devices from silicon. These devices can be used in one of four ways: 1) intravascularly, 2) extravascularly, 3) by vessel puncture, and 4) externally. Additionally, the devices may be used in precision surgical cutting.

  3. ELECTROWEAK PHYSICS AND PRECISION STUDIES.

    SciTech Connect

    MARCIANO, W.

    2005-10-24

    The utility of precision electroweak measurements for predicting the Standard Model Higgs mass via quantum loop effects is discussed. Current values of m_W, sin²θ_W(m_Z) in the MS-bar scheme, and m_t imply a relatively light Higgs which is below the direct experimental bound but possibly consistent with Supersymmetry expectations. The existence of Supersymmetry is further suggested by a 2σ discrepancy between experiment and theory for the muon anomalous magnetic moment. Constraints from precision studies on other types of "New Physics" are also briefly described.

  4. Precision Manipulation with Cooperative Robots

    NASA Technical Reports Server (NTRS)

    Stroupe, Ashley; Huntsberger, Terry; Okon, Avi; Aghzarian, Hrand

    2005-01-01

    This work addresses several challenges of cooperative transport and precision manipulation. Precision manipulation requires a rigid grasp, which places a hard constraint on the relative rover formation that must be accommodated, even though the rovers cannot directly observe their relative poses. Additionally, rovers must jointly select appropriate actions based on all available sensor information. Lastly, rovers cannot act on independent sensor information, but must fuse information to move jointly; the methods for fusing information must be determined.

  5. PRECISION RADIAL VELOCITIES WITH CSHELL

    SciTech Connect

    Crockett, Christopher J.; Prato, L.; Mahmud, Naved I.; Johns-Krull, Christopher M.; Jaffe, Daniel T.; Beichman, Charles A. E-mail: lprato@lowell.edu E-mail: cmj@rice.edu

    2011-07-10

    Radial velocity (RV) identification of extrasolar planets has historically been dominated by optical surveys. Interest in expanding exoplanet searches to M dwarfs and young stars, however, has motivated a push to improve the precision of near-infrared RV techniques. We present our methodology for achieving 58 m s^-1 precision in the K band on the M0 dwarf GJ 281 using the CSHELL spectrograph at the 3 m NASA Infrared Telescope Facility. We also demonstrate our ability to recover the known 4 M_Jup exoplanet Gl 86 b and discuss the implications for success in detecting planets around 1-3 Myr old T Tauri stars.

  6. Microbiopsy/precision cutting devices

    DOEpatents

    Krulevitch, P.A.; Lee, A.P.; Northrup, M.A.; Benett, W.J.

    1999-07-27

    Devices are disclosed for performing tissue biopsy on a small scale (microbiopsy). By reducing the size of the biopsy tool and removing only a small amount of tissue or other material in a minimally invasive manner, the risks, costs, injury and patient discomfort associated with traditional biopsy procedures can be reduced. By using micromachining and precision machining capabilities, it is possible to fabricate small biopsy/cutting devices from silicon. These devices can be used in one of four ways: (1) intravascularly, (2) extravascularly, (3) by vessel puncture, and (4) externally. Additionally, the devices may be used in precision surgical cutting. 6 figs.

  7. Precision agriculture and food security.

    PubMed

    Gebbers, Robin; Adamchuk, Viacheslav I

    2010-02-12

    Precision agriculture comprises a set of technologies that combines sensors, information systems, enhanced machinery, and informed management to optimize production by accounting for variability and uncertainties within agricultural systems. Adapting production inputs site-specifically within a field and individually for each animal allows better use of resources to maintain the quality of the environment while improving the sustainability of the food supply. Precision agriculture provides a means to monitor the food production chain and manage both the quantity and quality of agricultural produce.

  8. Optimization of precision localization microscopy using CMOS camera technology

    NASA Astrophysics Data System (ADS)

    Fullerton, Stephanie; Bennett, Keith; Toda, Eiji; Takahashi, Teruo

    2012-02-01

    Light microscopy imaging is being transformed by the application of computational methods that permit the detection of spatial features below the optical diffraction limit. Successful localization microscopy (STORM, dSTORM, PALM, PhILM, etc.) relies on the precise position detection of fluorescence emitted by single molecules using highly sensitive cameras with rapid acquisition speeds. Electron multiplying CCD (EM-CCD) cameras are the current standard detector for these applications. Here, we challenge the notion that EM-CCD cameras are the best choice for precision localization microscopy and demonstrate, through simulated and experimental data, that certain CMOS detector technology achieves better localization precision of single molecule fluorophores. It is well-established that localization precision is limited by system noise. Our findings show that the two overlooked noise sources relevant for precision localization microscopy are the shot noise of the background light in the sample and the excess noise from electron multiplication in EM-CCD cameras. At low light conditions (< 200 photons/fluorophore) with no optical background, EM-CCD cameras are the preferred detector. However, in practical applications, optical background noise is significant, creating conditions where CMOS performs better than EM-CCD. Furthermore, the excess noise of EM-CCD is equivalent to reducing the information content of each photon detected which, in localization microscopy, reduces the precision of the localization. Thus, new CMOS technology with 100fps, <1.3 e- read noise and high QE is the best detector choice for super resolution precision localization microscopy.
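
    The trade-off described above can be illustrated with a Thompson-style (Thompson et al. 2002) back-of-the-envelope localisation-error model. Treating EM-CCD multiplication noise as an excess-noise factor F² ≈ 2 on all shot-noise terms is a common simplification rather than the paper's analysis, and every number below is a placeholder rather than a value from the paper.

        import numpy as np

        def localization_sigma_nm(n_photons, psf_sigma_nm, pixel_nm, bg_photons_px,
                                  read_noise_e, excess_noise_factor=1.0):
            """Thompson-style estimate of single-molecule localisation error (nm).
            Excess noise (EM-CCD) inflates shot-noise variances by F^2; read noise
            (relevant for CMOS) enters through the per-pixel background variance."""
            F2 = excess_noise_factor ** 2
            s2 = psf_sigma_nm ** 2 + pixel_nm ** 2 / 12.0     # PSF variance + pixelation term
            b2 = F2 * bg_photons_px + read_noise_e ** 2       # per-pixel background variance
            var = F2 * s2 / n_photons \
                  + 8.0 * np.pi * s2 ** 2 * b2 / (pixel_nm ** 2 * n_photons ** 2)
            return float(np.sqrt(var))

        # illustrative comparison at 150 detected photons with 10 background photons/pixel
        em_ccd = localization_sigma_nm(150, 130.0, 160.0, 10.0, read_noise_e=0.1,
                                       excess_noise_factor=np.sqrt(2.0))
        cmos = localization_sigma_nm(150, 130.0, 160.0, 10.0, read_noise_e=1.3)
        print(f"EM-CCD ~{em_ccd:.0f} nm vs CMOS ~{cmos:.0f} nm")

    With these placeholder inputs the optical background term dominates, and the unit excess-noise factor of the CMOS detector outweighs its larger read noise, which is the qualitative point made above.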

  9. On the very accurate numerical evaluation of the Generalized Fermi-Dirac Integrals

    NASA Astrophysics Data System (ADS)

    Mohankumar, N.; Natarajan, A.

    2016-10-01

    We present a new and very accurate algorithm for the evaluation of the Generalized Fermi-Dirac Integral with a relative error less than 10^-20. The method involves Double Exponential, Trapezoidal and Gauss-Legendre quadratures. For the residue correction of the Gauss-Legendre scheme, a simple and precise continued fraction algorithm is used.
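
    A hedged illustration, not the authors' algorithm (which adds a continued-fraction residue correction to a Gauss-Legendre scheme), of how the generalized Fermi-Dirac integral can be evaluated to high precision with mpmath, whose default adaptive quadrature is the tanh-sinh (double-exponential) rule; the precision setting and example arguments are placeholders.

        from mpmath import mp, inf, exp, sqrt, quad

        mp.dps = 30   # working precision in significant digits; raise as needed

        def generalized_fermi_dirac(k, eta, theta):
            """F_k(eta, theta) = integral_0^inf x^k * sqrt(1 + theta*x/2) / (exp(x - eta) + 1) dx."""
            integrand = lambda x: x**k * sqrt(1 + theta * x / 2) / (exp(x - eta) + 1)
            # split the interval at the Fermi edge x = eta, where the integrand changes character
            points = [0, eta, inf] if eta > 0 else [0, inf]
            return quad(integrand, points)

        # example: F_{1/2}(eta = 10, theta = 0.1); values chosen for illustration only
        print(generalized_fermi_dirac(0.5, 10, 0.1))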

  10. Evaluation of the Voigt function to arbitrary precision

    NASA Astrophysics Data System (ADS)

    Boyer, W.; Lynas-Gray, A. E.

    2014-11-01

    Accurate and rapid Voigt function evaluations are an essential component of synthetic stellar spectrum calculations and the development of improved algorithms continues to be a priority. Multiprecision arithmetic was applied to obtain Voigt functions evaluated to 56 digits, which could be extended to arbitrarily high precision if required. While the technique cannot be used in practical applications, it provides results against which fast routines may be benchmarked for accuracy.
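
    As a hedged sketch of the same idea (not the authors' code), mpmath can evaluate the Voigt function through the Faddeeva function w(z) = exp(-z²) erfc(-iz) at whatever precision is requested; arbitrary-precision arithmetic absorbs the overflow and cancellation that defeat this naive formula in double precision. The precision setting and test arguments below are placeholders.

        from mpmath import mp, mpc, exp, erfc, re

        mp.dps = 60   # ~60 significant digits, comparable to the 56-digit reference values

        def voigt_H(a, u):
            """Voigt function H(a, u) = Re[w(u + i*a)], with the Faddeeva function
            w(z) = exp(-z^2) * erfc(-i*z), evaluated in arbitrary precision."""
            z = mpc(u, a)
            return re(exp(-z**2) * erfc(-mpc(0, 1) * z))

        # example evaluation at a placeholder damping parameter and frequency offset
        print(voigt_H(0.001, 5.0))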

  11. Sensitivity studies for a space-based methane lidar mission

    NASA Astrophysics Data System (ADS)

    Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.

    2011-10-01

    Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantify the emissions at the Earth's surface in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurement along the sub-satellite track with a precision of about 1% over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. In contrast to actual passive remote sensors, measurements in Polar Regions will be possible and biases due to aerosol layers and thin

  12. Sensitivity studies for a space-based methane lidar mission

    NASA Astrophysics Data System (ADS)

    Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.

    2011-06-01

    Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantify the emissions at the Earth's surface in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in Polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurement along the sub-satellite track with a precision of about 1 % over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. In contrast to actual passive remote sensors, measurements in Polar Regions will be possible and biases due to aerosol layers and thin

  13. Precision Cleaning - Path to Premier

    NASA Technical Reports Server (NTRS)

    Mackler, Scott E.

    2008-01-01

    ITT Space Systems Division's new Precision Cleaning facility provides critical cleaning and packaging of aerospace flight hardware and optical payloads to meet customer performance requirements. The Precision Cleaning Path to Premier Project was a 2007 capital project and is a key element in the approved Premier Resource Management - Integrated Supply Chain Footprint Optimization Project. Formerly, precision cleaning was located offsite in a leased building. A new facility equipped with modern precision cleaning equipment including advanced process analytical technology and improved capabilities was designed and built after outsourcing solutions were investigated and found lacking in ability to meet quality specifications and schedule needs. SSD cleans parts that can range in size from a single threaded fastener all the way up to large composite structures. Materials that can be processed include optics, composites, metals and various high performance coatings. We are required to provide verification to our customers that we have met their particulate and molecular cleanliness requirements and we have that analytical capability in this new facility. The new facility footprint is approximately half the size of the former leased operation and provides double the amount of throughput. Process improvements and new cleaning equipment are projected to increase 1st pass yield from 78% to 98%, avoiding $300K+/yr in rework costs. Cost avoidance of $350K/yr will result from elimination of rent, IT services, transportation, and decreased utility costs. Savings due to reduced staff are expected to net $400-500K/yr.

  14. Sensor fusion for precision agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Information-based management of crop production systems known as precision agriculture relies on different sensor technologies aimed at characterization of spatial heterogeneity of a cropping environment. Remote and proximal sensing systems have been deployed to obtain high-resolution data pertainin...

  15. Precision Machining Technology. Curriculum Guide.

    ERIC Educational Resources Information Center

    Idaho State Dept. of Education, Boise. Div. of Vocational Education.

    This curriculum guide was developed from a Technical Committee Report prepared with the assistance of industry personnel and containing a Task List which is the basis of the guide. It presents competency-based program standards for courses in precision machining technology and is part of the Idaho Vocational Curriculum Guide Project, a cooperative…

  16. Precision Efficacy Analysis for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.

    When multiple linear regression is used to develop a prediction model, sample size must be large enough to ensure stable coefficients. If the derivation sample size is inadequate, the model may not predict well for future subjects. The precision efficacy analysis for regression (PEAR) method uses a cross-validity approach to select sample sizes…

  17. Precision Tests of Electroweak Interactions

    SciTech Connect

    Akhundov, Arif

    2008-04-21

    The status of the precision tests of the electroweak interactions is reviewed in this paper. An emphasis is put on the Standard Model analysis based on measurements at LEP/SLC and the Tevatron. The results of the measurements of the electroweak mixing angle in the NuTeV experiment and the future prospects are discussed.

  18. Spin and precision electroweak physics

    SciTech Connect

    Marciano, W.J.

    1993-12-31

    A perspective on fundamental parameters and precision tests of the Standard Model is given. Weak neutral current reactions are discussed with emphasis on those processes involving (polarized) electrons. The role of electroweak radiative corrections in determining the top quark mass and probing for "new physics" is described.

  19. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  20. Precision Subsampling System for Mars Surface Missions

    NASA Technical Reports Server (NTRS)

    Mahaffy, P. R.; Paulsen, G.; Mellerowicz, B.; ten Kate, I. L.; Conrad, P.; Corrigan, C. M.; Li, X.

    2012-01-01

    The ability to analyze heterogeneous rock samples at fine spatial scales would represent a powerful addition to our planetary in situ analytical toolbox. This is particularly true for Mars, where the signatures of past environments and, potentially, habitability are preserved in chemical and morphological variations across sedimentary layers and among mineral phases in a given rock specimen. On Earth, microbial life often associates with surfaces at the interface of chemical nutrients, and ultimately retains sub-millimeter to millimeter-scale layer confinement in fossilization. On Mars, and possibly other bodies, trace chemical markers (elemental, organic/molecular, isotopic, chiral, etc.) and fine-scale morphological markers (e.g., micro-fossils) may be too subtle, degraded, or ambiguous to be detected, using miniaturized instrumentation, without some concentration or isolation. This is because (i) instrument sensitivity may not be high enough to detect trace markers in bulk averages; and (ii) instrument selectivity may not be sufficient to distinguish such markers from interfering/counteracting signals from the bulk. Moreover, from a fundamental chemostratigraphic perspective there would be a great benefit to assessing specific chemical and stable isotopic gradients, over millimeter-to-centimeter scales and beyond, with higher precision than currently possible in situ. We have developed a precision subsampling system (PSS) that addresses this need while remaining relatively flexible to a variety of instruments that may take advantage of the capability on future missions. The PSS is relevant to a number of possible lander/rover missions, especially Mars Sample Return. Our specific PSS prototype is undergoing testing under Mars ambient conditions, on a variety of natural analog rocks and rock drill cores, using a set of complementary flight-compatible measurement techniques. The system is available for testing with other contact instruments that may benefit from

  1. Climate Sensitivity, Sea Level, and Atmospheric Carbon Dioxide

    NASA Technical Reports Server (NTRS)

    Hansen, James; Sato, Makiko; Russell, Gary; Kharecha, Pushker

    2013-01-01

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3+/-1deg C for a 4 W/sq m CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3-4deg C for a 4 W/sq m CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change.

  2. Climate sensitivity, sea level and atmospheric carbon dioxide

    PubMed Central

    Hansen, James; Sato, Makiko; Russell, Gary; Kharecha, Pushker

    2013-01-01

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3±1°C for a 4 W m−2 CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3–4°C for a 4 W m−2 CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change. PMID:24043864

  3. Highly damped kinematic coupling for precision instruments

    DOEpatents

    Hale, Layton C.; Jensen, Steven A.

    2001-01-01

    A highly damped kinematic coupling for precision instruments. The kinematic coupling provides support while exerting essentially no influence on its natural shape, with such influences coming, for example, from manufacturing tolerances, temperature changes, or ground motion. The coupling uses three ball-cone constraints, each combined with a released flexural degree of freedom. This arrangement provides higher load capacity and stiffness, and can also significantly reduce the friction level, in proportion to the ball radius divided by the distance between the ball and the hinge axis. The blade flexures somewhat reduce the stiffness of the coupling and provide an ideal location to apply constrained-layer damping, which is accomplished by attaching a viscoelastic layer and a constraining layer on opposite sides of each of the blade flexures. The three identical ball-cone flexures provide a damped coupling mechanism to kinematically support the projection optics system of the extreme ultraviolet lithography (EUVL) system, or other load-sensitive apparatus.

  4. Accurate fluorescence quantum yield determination by fluorescence correlation spectroscopy.

    PubMed

    Kempe, Daryan; Schöne, Antonie; Fitter, Jörg; Gabba, Matteo

    2015-04-01

    Here, we present a comparative method for the accurate determination of fluorescence quantum yields (QYs) by fluorescence correlation spectroscopy. By exploiting the high sensitivity of single-molecule spectroscopy, we obtain the QYs of samples in the microliter range and at (sub)nanomolar concentrations. Additionally, in combination with fluorescence lifetime measurements, our method allows the quantification of both static and collisional quenching constants. Thus, besides being simple and fast, our method opens up the possibility to photophysically characterize labeled biomolecules under application-relevant conditions and with low sample consumption, which is often important in single-molecule studies.
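
    One plausible reduction of such a comparative, FCS-based measurement, assuming identical excitation intensity, wavelength, and detection efficiency for sample and reference (an assumption, not a statement of the paper's protocol): the correlation amplitude G(0) gives the mean number of molecules in the focal volume, the count rate per molecule gives the molecular brightness, and brightness ratios translate into quantum-yield ratios once extinction coefficients are accounted for. All names and numbers below are illustrative.

        def molecular_brightness(count_rate_khz, g0):
            """Counts per molecule (kHz) from an FCS curve: for an ideal, background-free
            measurement the amplitude G(0) equals 1/N, the inverse mean occupancy."""
            n_molecules = 1.0 / g0
            return count_rate_khz / n_molecules

        def relative_quantum_yield(qy_ref, brightness_sample, brightness_ref,
                                   eps_sample, eps_ref):
            """Comparative estimate: brightness scales with (extinction coefficient x QY)
            under identical excitation and detection, so
            QY_sample ~= QY_ref * (B_sample / B_ref) * (eps_ref / eps_sample)."""
            return qy_ref * (brightness_sample / brightness_ref) * (eps_ref / eps_sample)

        # hypothetical numbers: reference dye of known QY 0.91, equal extinction coefficients
        b_ref = molecular_brightness(count_rate_khz=70.0, g0=0.5)   # 35 kHz per molecule
        b_smp = molecular_brightness(count_rate_khz=56.0, g0=0.5)   # 28 kHz per molecule
        print(relative_quantum_yield(0.91, b_smp, b_ref, eps_sample=1.0, eps_ref=1.0))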

  5. High-precision dosimetry for radiotherapy using the optically stimulated luminescence technique and thin Al2O3:C dosimeters

    NASA Astrophysics Data System (ADS)

    Yukihara, E. G.; Yoshimura, E. M.; Lindstrom, T. D.; Ahmad, S.; Taylor, K. K.; Mardirossian, G.

    2005-12-01

    The potential of using the optically stimulated luminescence (OSL) technique with aluminium oxide (Al2O3:C) dosimeters for a precise and accurate estimation of absorbed doses delivered by high-energy photon beams was investigated. This study demonstrates the high reproducibility of the OSL measurements and presents a preliminary determination of the depth-dose curve in water for a 6 MV photon beam from a linear accelerator. The uncertainty of a single OSL measurement, estimated from the variance of a large sample of dosimeters irradiated with the same dose, was 0.7%. In the depth-dose curve obtained using the OSL technique, the difference between the measured and expected doses was <=0.7% for depths between 1.5 and 10 cm, and 1.1% for a depth of 15 cm. The readout procedure includes a normalization of the response of the dosimeter with respect to a reference dose in order to eliminate variations in the dosimeter mass, dosimeter sensitivity, and the reader's sensitivity. This may be relevant for quality assurance programmes, since it simplifies the requirements in terms of personnel training to achieve the precision and accuracy necessary for radiotherapy applications. We concluded that the OSL technique has the potential to be reliably incorporated in quality assurance programmes and dose verification.

  6. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and therapeutic actions; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead) along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between each of the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). Paired t-test demonstrated an acceptable precision with forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membranes, oral (P=1.00), and axillary (P=1.00) methods. Sensitivity and specificity of both the left and right tympanic measurements were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. It is recommended to use the tympanic method (right and left) for assessing a patient’s body temperature in the intensive care unit because of its high accuracy and acceptable precision.

  7. Accuracy and precision of four common peripheral temperature measurement methods in intensive care patients

    PubMed Central

    Asadian, Simin; Khatony, Alireza; Moradi, Gholamreza; Abdi, Alireza; Rezaei, Mansour

    2016-01-01

    Introduction An accurate determination of body temperature in critically ill patients is a fundamental requirement for initiating the proper process of diagnosis and appropriate therapeutic action; therefore, the aim of the study was to assess the accuracy and precision of four noninvasive peripheral methods of temperature measurement compared to the central nasopharyngeal measurement. Methods In this observational prospective study, 237 patients were recruited from the intensive care unit of Imam Ali Hospital of Kermanshah. The patients’ body temperatures were measured by four peripheral methods (oral, axillary, tympanic, and forehead), along with a standard central nasopharyngeal measurement. After data collection, the results were analyzed by paired t-test, kappa coefficient, and receiver operating characteristic curve, using Statistical Package for the Social Sciences, version 19, software. Results There was a significant correlation between all the peripheral methods and the central measurement (P<0.001). Kappa coefficients showed good agreement between the temperatures of the right and left tympanic membranes and the standard central nasopharyngeal measurement (88%). The paired t-test demonstrated acceptable precision for the forehead (P=0.132), left (P=0.18) and right (P=0.318) tympanic membrane, oral (P=1.00), and axillary (P=1.00) methods. The sensitivity and specificity of both the left and right tympanic membranes were higher than those of the other methods. Conclusion The tympanic and forehead methods had the highest and lowest accuracy for measuring body temperature, respectively. The tympanic method (right and left) is recommended for assessing a patient’s body temperature in intensive care units because of its high accuracy and acceptable precision. PMID:27621673
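
    The accuracy (mean bias) and precision (spread of paired differences) reported above can be computed for any peripheral-versus-core series with a paired comparison. The sketch below is a generic illustration with made-up readings, not the study's data or code.

      import numpy as np
      from scipy.stats import ttest_rel

      def agreement_stats(peripheral, core):
          """Paired comparison of a peripheral method against the core reference."""
          peripheral = np.asarray(peripheral, dtype=float)
          core = np.asarray(core, dtype=float)
          diff = peripheral - core
          bias = diff.mean()                 # mean difference (accuracy)
          sd_diff = diff.std(ddof=1)         # spread of differences (precision)
          t_stat, p_value = ttest_rel(peripheral, core)
          return bias, sd_diff, p_value

      # Hypothetical example: tympanic vs. nasopharyngeal readings (degrees C)
      tympanic = [36.9, 37.4, 38.1, 36.7, 37.8]
      nasopharyngeal = [37.0, 37.5, 38.0, 36.8, 37.9]
      print(agreement_stats(tympanic, nasopharyngeal))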

  8. Accurate oscillator strengths for interstellar ultraviolet lines of Cl I

    NASA Technical Reports Server (NTRS)

    Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.

    1993-01-01

    Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for 1088, 1097 A represent the first laboratory measurements for these lines; the values are f(1088)=0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 A in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.

  9. Accurate object tracking system by integrating texture and depth cues

    NASA Astrophysics Data System (ADS)

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminative texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that complement the texture information to cope with both appearance variations and background clutter. Moreover, in order to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system achieves the best success rate and more accurate tracking results than other well-known algorithms.

  10. Accurate atom counting in mesoscopic ensembles.

    PubMed

    Hume, D B; Stroescu, I; Joos, M; Muessel, W; Strobel, H; Oberthaler, M K

    2013-12-20

    Many cold atom experiments rely on precise atom number detection, especially in the context of quantum-enhanced metrology where effects at the single particle level are important. Here, we investigate the limits of atom number counting via resonant fluorescence detection for mesoscopic samples of trapped atoms. We characterize the precision of these fluorescence measurements beginning from the single-atom level up to more than one thousand. By investigating the primary noise sources, we obtain single-atom resolution for atom numbers as high as 1200. This capability is an essential prerequisite for future experiments with highly entangled states of mesoscopic atomic ensembles.

  11. Context Sensitive Modeling of Cancer Drug Sensitivity

    PubMed Central

    Chen, Bo-Juen; Litvin, Oren; Ungar, Lyle; Pe’er, Dana

    2015-01-01

    Recent screening of drug sensitivity in large panels of cancer cell lines provides a valuable resource towards developing algorithms that predict drug response. Since more samples provide increased statistical power, most approaches to prediction of drug sensitivity pool multiple cancer types together without distinction. However, pan-cancer results can be misleading due to the confounding effects of tissues or cancer subtypes. On the other hand, independent analysis for each cancer-type is hampered by small sample size. To balance this trade-off, we present CHER (Contextual Heterogeneity Enabled Regression), an algorithm that builds predictive models for drug sensitivity by selecting predictive genomic features and deciding which ones should—and should not—be shared across different cancers, tissues and drugs. CHER provides significantly more accurate models of drug sensitivity than comparable elastic-net-based models. Moreover, CHER provides better insight into the underlying biological processes by finding a sparse set of shared and type-specific genomic features. PMID:26274927
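
    CHER itself is not specified in this record, but the elastic-net baseline it is compared against can be set up per cancer type with scikit-learn. The sketch below is a generic baseline with placeholder data shapes, not the authors' implementation.

      import numpy as np
      from sklearn.linear_model import ElasticNetCV

      def fit_baseline(X, y):
          """Elastic-net baseline for drug-response prediction.

          X: (samples x genomic features) matrix for one cancer type or tissue.
          y: drug-sensitivity scores (e.g., IC50) for the same samples.
          """
          model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5)
          model.fit(X, y)
          selected = np.flatnonzero(model.coef_)   # features retained by the sparse fit
          return model, selected

      # Hypothetical usage with random data standing in for real features and responses
      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 200))
      y = X[:, :3] @ np.array([1.0, -0.5, 0.8]) + rng.normal(scale=0.3, size=60)
      model, selected = fit_baseline(X, y)
      print(len(selected), "features selected")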

  12. Establishment of chondroitin B lyase-based analytical methods for sensitive and quantitative detection of dermatan sulfate in heparin.

    PubMed

    Wu, Jingjun; Ji, Yang; Su, Nan; Li, Ye; Liu, Xinxin; Mei, Xiang; Zhou, Qianqian; Zhang, Chong; Xing, Xin-hui

    2016-06-25

    Dermatan sulfate (DS) is one of the hardest impurities to remove from heparin products due to their high structural similarity. The development of a sensitive and feasible method for quantitative detection of DS in heparin is essential to ensure the clinical safety of heparin pharmaceuticals. In the current study, based on the substrate specificity of chondroitin B lyase, ultraviolet spectrophotometric and strong anion-exchange high-performance liquid chromatographic methods were established for detection of DS in heparin. The former method facilitated analysis in heparin with DS concentrations greater than 0.1 mg mL(-1) at 232 nm, with good linearity, precision and recovery. The latter method allowed sensitive and accurate detection of DS at concentrations lower than 0.1 mg mL(-1), exhibiting good linearity, precision and recovery. The linear range of DS detection using the latter method was between 0.01 and 0.5 mg mL(-1).
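
    Linearity and recovery figures of the kind quoted above come from a straight-line calibration fit. The sketch below shows the general calculation with invented calibration points; the concentrations, responses and spike values are placeholders, not the paper's data.

      import numpy as np

      # Hypothetical calibration data: DS concentration (mg/mL) vs. detector response
      conc = np.array([0.01, 0.05, 0.1, 0.2, 0.5])
      resp = np.array([0.8, 4.1, 8.3, 16.2, 40.5])      # arbitrary units

      slope, intercept = np.polyfit(conc, resp, 1)
      predicted = slope * conc + intercept
      r_squared = 1 - np.sum((resp - predicted) ** 2) / np.sum((resp - resp.mean()) ** 2)
      print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r_squared:.4f}")

      # Recovery check for a spiked sample of known concentration
      spiked_true, spiked_resp = 0.1, 8.5
      recovery = (spiked_resp - intercept) / slope / spiked_true * 100
      print(f"recovery = {recovery:.1f}%")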

  13. Precision measurement of transition matrix elements via light shift cancellation.

    PubMed

    Herold, C D; Vaidya, V D; Li, X; Rolston, S L; Porto, J V; Safronova, M S

    2012-12-14

    We present a method for accurate determination of atomic transition matrix elements at the 10(-3) level. Measurements of the ac Stark (light) shift around "magic-zero" wavelengths, where the light shift vanishes, provide precise constraints on the matrix elements. We make the first measurement of the 5s - 6p matrix elements in rubidium by measuring the light shift around the 421 and 423 nm zeros through diffraction of a condensate off a sequence of standing wave pulses. In conjunction with existing theoretical and experimental data, we find 0.3235(9)ea(0) and 0.5230(8)ea(0) for the 5s - 6p(1/2) and 5s - 6p(3/2) elements, respectively, an order of magnitude more accurate than the best theoretical values. This technique can provide needed, accurate matrix elements for many atoms, including those used in atomic clocks, tests of fundamental symmetries, and quantum information. PMID:23368314

  14. Highly accurate isotope measurements of surface material on planetary objects in situ

    NASA Astrophysics Data System (ADS)

    Riedo, Andreas; Neuland, Maike; Meyer, Stefan; Tulej, Marek; Wurz, Peter

    2013-04-01

    Studies of isotope variations in solar system objects are of particular interest and importance. Highly accurate isotope measurements provide insight into geochemical processes, constrain the time of formation of planetary material (crystallization ages) and can be robust tracers of pre-solar events and processes. A detailed understanding of the chronology of the early solar system and dating of planetary materials require precise and accurate measurements of isotope ratios, e.g. lead, and of trace element abundances. However, such measurements are extremely challenging and, until now, they have never been attempted in space research. Our group designed a highly miniaturized and self-optimizing laser ablation time-of-flight mass spectrometer for space flight for sensitive and accurate measurements of the elemental and isotopic composition of extraterrestrial materials in situ. Current studies were performed by using UV radiation for ablation and ionization of sample material. High spatial resolution is achieved by focusing the laser beam to a spot of about 20 μm on the sample surface. The instrument supports a dynamic range of at least 8 orders of magnitude and a mass resolution m/Δm of up to 800-900, measured at the iron peak. We developed a measurement procedure, which will be discussed in detail, that for the first time allows the instrument to measure the isotope distributions of elements, e.g. Ti, Pb, etc., with an accuracy and precision at the per mill and sub-per-mill level, comparable to well-known and accepted measurement techniques such as TIMS, SIMS and LA-ICP-MS. Together with this measurement procedure, the present instrument performance offers in situ measurements of 207Pb/206Pb ages with an accuracy in the range of tens of millions of years. Furthermore, and in contrast to other space instrumentation, our instrument can measure all elements present in the sample above 10 ppb concentration, which offers versatile applications
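
    For context, the 207Pb/206Pb age mentioned above follows from the standard radiogenic-lead relation (the general formula, not anything instrument-specific):

      \left(\frac{{}^{207}\mathrm{Pb}}{{}^{206}\mathrm{Pb}}\right)_{\mathrm{rad}} = \frac{1}{137.88}\,\frac{e^{\lambda_{235} t} - 1}{e^{\lambda_{238} t} - 1}

    which is solved numerically for the age t, with λ235 and λ238 the decay constants of 235U and 238U, and 137.88 the conventional present-day 238U/235U abundance ratio.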

  15. Precision machining of steel decahedrons

    NASA Technical Reports Server (NTRS)

    Abernathy, W. J.; Sealy, J. R.

    1972-01-01

    Production of highly accurate decahedron prisms from hardened stainless steel is discussed. Prism is used to check angular alignment of mounting pads of strapdown inertial guidance system. Accuracies obtainable using recommended process and details of operation are described. Photographic illustration of production device is included.

  16. Precision measurement with atom interferometry

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2015-05-01

    Development of atom interferometry and its application in precision measurement are reviewed in this paper. The principle, features and the implementation of atom interferometers are introduced, the recent progress of precision measurement with atom interferometry, including determination of gravitational constant and fine structure constant, measurement of gravity, gravity gradient and rotation, test of weak equivalence principle, proposal of gravitational wave detection, and measurement of quadratic Zeeman shift are reviewed in detail. Determination of gravitational redshift, new definition of kilogram, and measurement of weak force with atom interferometry are also briefly introduced. Project supported by the National Basic Research Program of China (Grant No. 2010CB832805) and the National Natural Science Foundation of China (Grant No. 11227803).

  17. High Precision Isotopic Reference Material Program

    NASA Astrophysics Data System (ADS)

    Mann, J. L.; Vocke, R. D.

    2007-12-01

    Recent developments in thermal ionization and inductively coupled plasma multicollector mass spectrometers have led to "high precision" isotope ratio measurements with uncertainties approaching a few parts in 10^6. These new measurement capabilities have revolutionized the study of isotopic variations in nature by increasing the number of elements showing natural variations by almost a factor of two, and new research areas are actively opening up in climate change, health, ecology, geology and forensic studies. Because the isotopic applications are impacting very diverse fields, there is at present little effective coordination between research laboratories over reference materials and the values to apply to those materials. NIST had originally developed the techniques for producing accurate isotopic characterizations, culminating in the NIST Isotopic SRM series. The values on existing materials, however, are insufficiently precise and, in some cases, may be isotopically heterogeneous. A new generation of isotopic standards is urgently needed and will directly affect the quality and scope of emergent applications and ensure that the results being derived from these diverse fields are comparable. A series of new isotopic reference materials similar to the NIST 3100 single element solution series is being designed for this purpose and twelve elements have been selected as having the most pressing need. In conjunction with other expert users and National Metrology Institutes, an isotopic characterization of the respective 12 selected ampoules from the NIST single element solution series is currently underway. In this presentation the preliminary results of this screening will be discussed as well as the suitability of these materials in terms of homogeneity and purity, long term stability and availability, and isotopic relevance. Approaches to value assignment will also be discussed.

  18. Ultra-rare Disease and Genomics-Driven Precision Medicine

    PubMed Central

    Lee, Sangmoon

    2016-01-01

    Since next-generation sequencing (NGS) technique was adopted into clinical practices, revolutionary advances in diagnosing rare genetic diseases have been achieved through translating genomic medicine into precision or personalized management. Indeed, several successful cases of molecular diagnosis and treatment with personalized or targeted therapies of rare genetic diseases have been reported. Still, there are several obstacles to be overcome for wider application of NGS-based precision medicine, including high sequencing cost, incomplete variant sensitivity and accuracy, practical complexities, and a shortage of available treatment options. PMID:27445646

  19. Method for grinding precision components

    DOEpatents

    Ramanath, Srinivasan; Kuo, Shih Yee; Williston, William H.; Buljan, Sergej-Tomislav

    2000-01-01

    A method for precision cylindrical grinding of hard brittle materials, such as ceramics or glass and composites comprising ceramics or glass, provides material removal rates as high as 19-380 cm³/min/cm. The abrasive tools used in the method comprise a strong, lightweight wheel core bonded to a continuous rim of abrasive segments containing superabrasive grain in a dense metal bond matrix.

  20. Green Solvents for Precision Cleaning

    NASA Technical Reports Server (NTRS)

    Grandelli, Heather; Maloney, Phillip; DeVor, Robert; Surma, Jan; Hintze, Paul

    2013-01-01

    Aerospace machinery used in liquid oxygen (LOX) fuel systems must be precision cleaned to achieve a very low level of non-volatile residue (<1 mg per 0.1 m²), especially flammable residue. Traditionally, chlorofluorocarbons (CFCs) have been used in the precision cleaning of LOX systems, specifically CFC 113 (C2Cl3F3). CFCs are known to cause ozone depletion and, in 1987, were banned by the Montreal Protocol due to health, safety and environmental concerns. This has now led to the development of new processes in the precision cleaning of aerospace components. An ideal solvent replacement is non-flammable, environmentally benign, non-corrosive, inexpensive, effective and evaporates completely, leaving no residue. Highlighted is a green precision cleaning process, which is contaminant removal using supercritical carbon dioxide as the environmentally benign solvent. In this process, the contaminant is dissolved in carbon dioxide, and the parts are recovered at the end of the cleaning process completely dry and ready for use. Typical contaminants of aerospace components include hydrocarbon greases, hydraulic fluids, silicone fluids and greases, fluorocarbon fluids and greases and fingerprint oil. Metallic aerospace components range from small nuts and bolts to much larger parts, such as butterfly valves 18 in. in diameter. A fluorinated grease, Krytox, is investigated as a model contaminant in these preliminary studies, and aluminum coupons are employed as a model aerospace component. Preliminary studies are presented in which the experimental parameters are optimized for removal of Krytox from aluminum coupons in a stirred-batch process. The experimental conditions investigated are temperature, pressure, exposure time and impeller speed. Temperatures of 308 - 423 K, pressures in the range of 8.3 - 41.4 MPa, exposure times between 5 - 60 min and impeller speeds of 0 - 1000 rpm were investigated. Preliminary results showed up to 86% cleaning efficiency with the

  1. Fit to Electroweak Precision Data

    SciTech Connect

    Erler, Jens

    2006-11-17

    A brief review of electroweak precision data from LEP, SLC, the Tevatron, and low energies is presented. The global fit to all data including the most recent results on the masses of the top quark and the W boson reinforces the preference for a relatively light Higgs boson. I will also give an outlook on future developments at the Tevatron Run II, CEBAF, the LHC, and the ILC.

  2. Precision linear ramp function generator

    DOEpatents

    Jatko, W. Bruce; McNeilly, David R.; Thacker, Louis H.

    1986-01-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  3. Precision linear ramp function generator

    DOEpatents

    Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.

    1984-08-01

    A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.

  4. Gage tests tube flares quickly and accurately

    NASA Technical Reports Server (NTRS)

    Griffin, F. D.

    1966-01-01

    Flared tube gage with a test cone that is precisely made with a tapering surface to complement the tube flare is capable of determining the accuracy of a tube flare efficiently and economically. This device should improve the speed, efficiency, and accuracy of tube flare inspections.

  5. Attaining m s-1 level intrinsic Doppler precision with RHEA, a low-cost single-mode spectrograph

    NASA Astrophysics Data System (ADS)

    Feger, Tobias; Ireland, Michael J.; Schwab, Christian; Bento, Joao; Bacigalupo, Carlos; Coutts, David W.

    2016-08-01

    We present RHEA, a compact and inexpensive single-mode spectrograph which is built to exploit the capabilities of modest-sized telescopes in an economic way. The instrument is fed by up to seven optical waveguides with the aim of achieving an efficient and modal-noise-free unit, suitable for attaining extreme Doppler precision. The cross-dispersed layout features a wavelength coverage from 430-650 nm, with a spectral resolution of R ~ 75,000. When coupled to small telescopes using fast tip/tilt control, our instrument is well-suited to sensitive spectroscopy. Example science cases are accurate radial velocity studies of low- to intermediate-mass giant stars with the purpose of searching for giant planets and using asteroseismology to simultaneously measure the host star parameters. In this paper we describe the final instrument design and present first results from testing the internal stability.

  6. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1982-01-01

    The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth for geometry I gears, it is more desirable to shim the gear axially; for geometry II gears, shim the pinion axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion.

  7. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1983-01-01

    The kinematic errors in spiral bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when trying to adjust the bearing contact pattern between the gear teeth for geometry I gears, it is more desirable to shim the gear axially; for geometry II gears, shim the pinion axially; (3) the kinematic accuracy of spiral bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is most crucial for the gear, and less so for the pinion. Previously announced in STAR as N82-30552

  8. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users tend to be larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the direction of user similarity and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of CF algorithms. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the new algorithm outperforms state-of-the-art CF algorithms in accuracy. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms yields accurate and diverse recommendations. This work suggests that the direction of user similarity is an important factor for improving personalized recommendation performance.
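
    The published HDCF weighting is not reproduced here; as a loose illustration of direction-dependent user similarity, the sketch below normalizes the co-rating overlap by the degree of the target user only, so that sim(u→v) and sim(v→u) differ. The normalization is an assumption for illustration, not the authors' formula.

      import numpy as np

      def directed_similarity(R):
          """Direction-dependent user similarity for a binary rating matrix R (users x items).

          sim[u, v] = |items(u) ∩ items(v)| / degree(v)   (illustrative choice only)
          """
          R = np.asarray(R, dtype=float)
          overlap = R @ R.T                  # co-rated item counts between user pairs
          degree = R.sum(axis=1)             # number of items rated by each user
          with np.errstate(divide="ignore", invalid="ignore"):
              sim = np.where(degree > 0, overlap / degree, 0.0)
          return sim                         # sim[u, v] is normalized by the degree of v

      R = np.array([[1, 1, 0, 1],
                    [1, 0, 0, 0],
                    [1, 1, 1, 1]])
      print(directed_similarity(R))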

  9. Modified chemiluminescent NO analyzer accurately measures NOX

    NASA Technical Reports Server (NTRS)

    Summers, R. L.

    1978-01-01

    Installation of molybdenum nitric oxide (NO)-to-higher oxides of nitrogen (NOx) converter in chemiluminescent gas analyzer and use of air purge allow accurate measurements of NOx in exhaust gases containing as much as thirty percent carbon monoxide (CO). Measurements using conventional analyzer are highly inaccurate for NOx if as little as five percent CO is present. In modified analyzer, molybdenum has high tolerance to CO, and air purge substantially quenches NOx destruction. In test, modified chemiluminescent analyzer accurately measured NO and NOx concentrations for over 4 months with no degradation in performance.

  10. Validation of GOMOS ozone precision estimates in the stratosphere

    NASA Astrophysics Data System (ADS)

    Sofieva, V. F.; Tamminen, J.; Kyrölä, E.; Laeng, A.; von Clarmann, T.; Dalaudier, F.; Hauchecorne, A.; Bertaux, J.-L.; Barrot, G.; Blanot, L.; Fussen, D.; Vanhellemont, F.

    2014-07-01

    Accurate information about uncertainties is required in nearly all data analyses, e.g., inter-comparisons, data assimilation, combined use. Validation of precision estimates (viz., the random component of estimated uncertainty) is important for remote sensing measurements, which provide the information about atmospheric parameters by solving an inverse problem. For the Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument, this is a real challenge, due to the dependence of the signal-to-noise ratio (and thus precision estimates) on stellar properties, small number of self-collocated measurements, and growing noise as a function of time due to instrument aging. The estimated ozone uncertainties are small in the stratosphere for bright star occultations, which complicates validation of precision values, given the natural ozone variability. In this paper, we discuss different methods for geophysical validation of precision estimates and their applicability to GOMOS data. We propose a simple method for validation of GOMOS precision estimates for ozone in the stratosphere. This method is based on comparisons of differences in sample variance with differences in uncertainty estimates for measurements from different stars selected in a region of small natural variability. For GOMOS, the difference in sample variances for different stars at tangent altitudes 25-45 km is well explained by the difference in squared precisions, if the stars are not dim. Since this is observed for several stars, and since normalized χ2 is close to 1 for these occultations in the stratosphere, we conclude that the GOMOS precision estimates are realistic in occultations of sufficiently bright stars. For dim stars, errors are overestimated due to improper accounting of the dark charge correction uncertainty in the error budget. The proposed method can also be applied to stratospheric ozone data from other instruments, including multi-instrument analyses.
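
    The validation idea described above can be written down compactly: for two stars observed in a region of small natural variability, the difference of the sample variances of the retrievals should match the difference of their mean squared reported precisions. A minimal sketch with synthetic numbers (illustrative only, not GOMOS data):

      import numpy as np

      def variance_vs_precision(ozone_a, sigma_a, ozone_b, sigma_b):
          """Compare differences in sample variance with differences in squared precision.

          ozone_a/b : ozone retrievals for two stars at one tangent altitude
          sigma_a/b : reported per-profile precision estimates
          """
          dvar = np.var(ozone_a, ddof=1) - np.var(ozone_b, ddof=1)
          dsig2 = np.mean(np.square(sigma_a)) - np.mean(np.square(sigma_b))
          return dvar, dsig2    # similar values support realistic precision estimates

      rng = np.random.default_rng(1)
      true_ozone = 4.0e12                                       # arbitrary units
      a = true_ozone + rng.normal(scale=0.05e12, size=200)      # bright star: small noise
      b = true_ozone + rng.normal(scale=0.15e12, size=200)      # dimmer star: larger noise
      print(variance_vs_precision(a, 0.05e12 * np.ones(200), b, 0.15e12 * np.ones(200)))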

  11. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  13. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature. PMID:16662241

  14. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
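
    As a toy illustration of the idea (not the paper's implementation), take a single spring with static displacement u(k) = f/k. The sensitivity equation du/dk = -u/k can be read as a differential equation in the design variable and solved in closed form, whereas the linear Taylor step only extrapolates the baseline slope:

      import numpy as np

      f, k0 = 10.0, 2.0            # load and baseline stiffness (illustrative values)
      u0 = f / k0                  # baseline displacement
      dudk0 = -u0 / k0             # sensitivity at the baseline design

      def taylor(k):               # linear Taylor series approximation
          return u0 + dudk0 * (k - k0)

      def deb(k):                  # closed-form solution of du/dk = -u/k, i.e. u = u0 * k0 / k
          return u0 * k0 / k

      def exact(k):
          return f / k

      for k in np.linspace(2.0, 4.0, 5):
          print(f"k={k:.2f}  exact={exact(k):.3f}  DEB={deb(k):.3f}  Taylor={taylor(k):.3f}")

    In this toy case the DEB form recovers the exact response, while the Taylor estimate degrades as the perturbation grows.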

  15. Sensitive skin.

    PubMed

    Misery, L; Loser, K; Ständer, S

    2016-02-01

    Sensitive skin is a clinical condition defined by the self-reported facial presence of different sensory perceptions, including tightness, stinging, burning, tingling, pain and pruritus. Sensitive skin may occur in individuals with normal skin, with skin barrier disturbance, or as a part of the symptoms associated with facial dermatoses such as rosacea, atopic dermatitis and psoriasis. Although experimental studies are still pending, the symptoms of sensitive skin suggest the involvement of cutaneous nerve fibres and neuronal, as well as epidermal, thermochannels. Many individuals with sensitive skin report worsening symptoms due to environmental factors. It is thought that this might be attributed to the thermochannel TRPV1, as it typically responds to exogenous, endogenous, physical and chemical stimuli. Barrier disruptions and immune mechanisms may also be involved. This review summarizes current knowledge on the epidemiology, potential mechanisms, clinics and therapy of sensitive skin. PMID:26805416

  16. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa; hence, boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  17. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.

  18. Emulation workbench for position sensitive gaseous scintillation detectors

    NASA Astrophysics Data System (ADS)

    Pereira, L.; Margato, L. M. S.; Morozov, A.; Solovov, V.; Fraga, F. A. F.

    2015-12-01

    Position sensitive detectors based on gaseous scintillation proportional counters with Anger-type readout are being used in several research areas such as neutron detection, the search for dark matter and neutrinoless double beta decay. Design and optimization of such detectors are complex and time-consuming tasks. Simulations, while being a powerful tool, strongly depend on the light transfer models and demand accurate knowledge of many parameters, which are often not available. Here we describe an alternative approach based on the experimental evaluation of a detector using an isotropic point-like light source with precisely controllable light emission properties, installed on a 3D positioning system. The results obtained with the developed setup under validation conditions, when the scattered light is strongly suppressed, show good agreement with simulations.

  19. Can Appraisers Rate Work Performance Accurately?

    ERIC Educational Resources Information Center

    Hedge, Jerry W.; Laue, Frances J.

    The ability of individuals to make accurate judgments about others is examined and literature on this subject is reviewed. A wide variety of situational factors affects the appraisal of performance. It is generally accepted that the purpose of the appraisal influences the accuracy of the appraiser. The instrumentation, or tools, available to the…

  20. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

    Thoriated-tungsten is pointed accurately and quickly by using sodium nitrite. Point produced is smooth and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces time and cost of preparing tungsten electrodes.

  1. Efficient and accurate modelling of quantum nanostructures

    NASA Astrophysics Data System (ADS)

    Ayad, Marina; Obayya, Salah S. A.; Swillam, Mohamed A.

    2016-03-01

    An efficient sensitivity analysis approach for quantum nanostructures is proposed. The imaginary time propagation (ITP) method is utilized to solve the time-dependent Schrödinger equation (TDSE). Using this method, an extraction of all the modes and their sensitivity with respect to all the design parameters has been performed with minimal computational effort. The sensitivity analysis is performed using the Adjoint Variable Method (AVM), and the results are comparable to those obtained using the Central Finite Difference Method (CFD) applied directly at the response level.
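
    As a self-contained illustration of imaginary-time propagation (a generic one-dimensional harmonic oscillator, not the nanostructure model of the paper), the ground state can be found by repeatedly applying exp(-H dτ) in split steps and renormalizing:

      import numpy as np

      # 1-D grid and harmonic potential in dimensionless units (hbar = m = omega = 1)
      n, L = 512, 20.0
      x = np.linspace(-L / 2, L / 2, n, endpoint=False)
      dx = x[1] - x[0]
      k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
      V = 0.5 * x ** 2

      dtau = 0.01
      psi = np.exp(-x ** 2)                                   # arbitrary symmetric start
      for _ in range(5000):
          psi = psi * np.exp(-0.5 * V * dtau)                                 # half potential step
          psi = np.fft.ifft(np.exp(-0.5 * k ** 2 * dtau) * np.fft.fft(psi))   # kinetic step
          psi = psi * np.exp(-0.5 * V * dtau)                                 # half potential step
          psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)                  # renormalize

      h_psi = np.fft.ifft(0.5 * k ** 2 * np.fft.fft(psi)) + V * psi
      energy = np.real(np.sum(np.conj(psi) * h_psi) * dx)
      print(energy)    # approaches the exact ground-state energy 0.5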

  2. Method and apparatus for precision laser micromachining

    DOEpatents

    Chang, Jim; Warner, Bruce E.; Dragon, Ernest P.

    2000-05-02

    A method and apparatus for micromachining and microdrilling which results in a machined part of superior surface quality is provided. The system uses a near diffraction limited, high repetition rate, short pulse length, visible wavelength laser. The laser is combined with a high speed precision tilting mirror and suitable beam shaping optics, thus allowing a large amount of energy to be accurately positioned and scanned on the workpiece. As a result of this system, complicated, high resolution machining patterns can be achieved. A cover plate may be temporarily attached to the workpiece. Then as the workpiece material is vaporized during the machining process, the vapors condense on the cover plate rather than the surface of the workpiece. In order to eliminate cutting rate variations as the cutting direction is varied, a randomly polarized laser beam is utilized. A rotating half-wave plate is used to achieve the random polarization. In order to correctly locate the focus at the desired location within the workpiece, the position of the focus is first determined by monitoring the speckle size while varying the distance between the workpiece and the focussing optics. When the speckle size reaches a maximum, the focus is located at the first surface of the workpiece. After the location of the focus has been determined, it is repositioned to the desired location within the workpiece, thus optimizing the quality of the machined area.

  3. Axion Bounds from Precision Cosmology

    SciTech Connect

    Raffelt, G. G.; Hamann, J.; Hannestad, S.; Mirizzi, A.; Wong, Y. Y. Y.

    2010-08-30

    Depending on their mass, axions produced in the early universe can leave different imprints in cosmic structures. If axions have masses in the eV-range, they contribute a hot dark matter fraction, allowing one to constrain m_a in analogy to neutrinos. In the more favored scenario where axions play the role of cold dark matter and if reheating after inflation does not restore the Peccei-Quinn symmetry, the axion field provides isocurvature fluctuations that are severely constrained by precision cosmology. There remains a small sliver in parameter space where isocurvature fluctuations could still show up in future probes.

  4. An Arbitrary Precision Computation Package

    2003-06-14

    This package permits a scientist to perform computations using an arbitrarily high level of numeric precision (the equivalent of hundreds or even thousands of digits), by making only minor changes to conventional C++ or Fortran-90 source code. This software takes advantage of certain properties of IEEE floating-point arithmetic, together with advanced numeric algorithms, custom data types and operator overloading. Also included in this package is the "Experimental Mathematician's Toolkit", which incorporates many of these facilities into an easy-to-use interactive program.
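
    The package itself targets C++ and Fortran-90; as a loose analogue of the same idea, Python's standard-library decimal module also raises the working precision far beyond hardware doubles through a custom numeric type and operator overloading:

      from decimal import Decimal, getcontext

      getcontext().prec = 50             # work with 50 significant digits

      # Compute sqrt(2) to high precision with Newton's iteration
      x = Decimal(2)
      guess = Decimal(1)
      for _ in range(10):                # quadratic convergence: digits roughly double per step
          guess = (guess + x / guess) / 2
      print(guess)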

  5. Precision ozone vapor pressure measurements

    NASA Technical Reports Server (NTRS)

    Hanson, D.; Mauersberger, K.

    1985-01-01

    The vapor pressure above liquid ozone has been measured with high accuracy over a temperature range of 85 to 95 K. At the boiling point of liquid argon (87.3 K) an ozone vapor pressure of 0.0403 Torr was obtained with an accuracy of ±0.7 percent. A least-squares fit of the data provided the Clausius-Clapeyron equation for liquid ozone; a latent heat of 82.7 cal/g was calculated. High-precision vapor pressure data are expected to aid research in atmospheric ozone measurements and in many laboratory ozone studies such as measurements of cross sections and reaction rates.
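
    For reference, the Clausius-Clapeyron form used in such fits relates vapor pressure and temperature as (the standard relation; the paper's fitted coefficients are not reproduced here):

      \ln P = -\frac{\Delta H_{\mathrm{vap}}}{R}\,\frac{1}{T} + C

    so that a least-squares fit of ln P against 1/T yields the latent heat of vaporization from the slope.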

  6. Motion and gravity effects in the precision of quantum clocks

    PubMed Central

    Lindkvist, Joel; Sabín, Carlos; Johansson, Göran; Fuentes, Ivette

    2015-01-01

    We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions. PMID:25988238
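
    The "ultimate bound" referred to above is the quantum Cramér-Rao bound; for M independent repetitions it takes the standard form

      \Delta t \ge \frac{1}{\sqrt{M\,F_Q}}

    where F_Q is the quantum Fisher information of the clock state with respect to the elapsed time.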

  7. High precision spectroscopy and imaging in THz frequency range

    NASA Astrophysics Data System (ADS)

    Vaks, Vladimir L.

    2014-03-01

    Application of microwave methods to the development of the THz frequency range has resulted in the elaboration of high-precision THz spectrometers based on nonstationary effects. The spectrometers' characteristics (spectral resolution and sensitivity) meet the requirements for high-precision analysis. The gas analyzers, based on the high-precision spectrometers, have been successfully applied for analytical investigations of gas impurities in high-purity substances. These investigations can be carried out both in an absorption cell and in a reactor. The devices can be used for ecological monitoring and for detecting the components of chemical weapons and explosives in the atmosphere. A major field for THz investigations is medical application. Using the THz spectrometers developed, one can detect markers for some diseases in exhaled air.

  8. Motion and gravity effects in the precision of quantum clocks.

    PubMed

    Lindkvist, Joel; Sabín, Carlos; Johansson, Göran; Fuentes, Ivette

    2015-05-19

    We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions.

  9. Climate Sensitivity

    SciTech Connect

    Lindzen, Richard

    2011-11-09

    Warming observed thus far is entirely consistent with low climate sensitivity. However, the result is ambiguous because the sources of climate change are numerous and poorly specified. Model predictions of substantial warming are dependent on positive feedbacks associated with upper level water vapor and clouds, but models are notably inadequate in dealing with clouds and the impacts of clouds and water vapor are intimately intertwined. Various approaches to measuring sensitivity based on the physics of the feedbacks will be described. The results thus far point to negative feedbacks. Problems with these approaches as well as problems with the concept of climate sensitivity will be described.

  10. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    NASA Astrophysics Data System (ADS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-08-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. The results are shown for the ground state and some excited states; moreover, we have all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n=100.

  11. Precision-Guaranteed Quantum Tomography

    NASA Astrophysics Data System (ADS)

    Sugiyama, Takanori; Turner, Peter S.; Murao, Mio

    2013-10-01

    Quantum state tomography is currently the standard tool for verifying that a state prepared in the lab is close to an ideal target state, but up to now there have been no rigorous methods for evaluating the precision of the state preparation in tomographic experiments. We propose a new estimator for quantum state tomography, and prove that the (always physical) estimates will be close to the true prepared state with a high probability. We derive an explicit formula for evaluating how high the probability is for an arbitrary finite-dimensional system and explicitly give the one- and two-qubit cases as examples. This formula applies for any informationally complete sets of measurements, arbitrary finite number of data sets, and general loss functions including the infidelity, the Hilbert-Schmidt, and the trace distances. Using the formula, we can evaluate not only the difference between the estimated and prepared states, but also the difference between the prepared and target states. This is the first result directly applicable to the problem of evaluating the precision of estimation and preparation in quantum tomographic experiments.

  12. Precision Metrology Using Weak Measurements

    NASA Astrophysics Data System (ADS)

    Zhang, Lijian; Datta, Animesh; Walmsley, Ian A.

    2015-05-01

    Weak values and measurements have been proposed as a means to achieve dramatic enhancements in metrology based on the greatly increased range of possible measurement outcomes. Unfortunately, the very large values of measurement outcomes occur with highly suppressed probabilities. This raises three vital questions in weak-measurement-based metrology. Namely, (Q1) Does postselection enhance the measurement precision? (Q2) Does weak measurement offer better precision than strong measurement? (Q3) Is it possible to beat the standard quantum limit or to achieve the Heisenberg limit with weak measurement using only classical resources? We analyze these questions for two prototypical, and generic, measurement protocols and show that while the answers to the first two questions are negative for both protocols, the answer to the last is affirmative for measurements with phase-space interactions, and negative for configuration space interactions. Our results, particularly the ability of weak measurements to perform at par with strong measurements in some cases, are instructive for the design of weak-measurement-based protocols for quantum metrology.

  13. Precision experiments in electroweak interactions

    SciTech Connect

    Swartz, M.L.

    1990-03-01

    The electroweak theory of Glashow, Weinberg, and Salam (GWS) has become one of the twin pillars upon which our understanding of all particle physics phenomena rests. It is a brilliant achievement that qualitatively and quantitatively describes all of the vast quantity of experimental data that have been accumulated over some forty years. Note that the word quantitatively must be qualified. The low energy limiting cases of the GWS theory, Quantum Electrodynamics and the V-A Theory of Weak Interactions, have withstood rigorous testing. The high energy synthesis of these ideas, the GWS theory, has not yet been subjected to comparably precise scrutiny. The recent operation of a new generation of proton-antiproton (p{bar p}) and electron-positron (e{sup +}e{sup {minus}}) colliders has made it possible to produce and study large samples of the electroweak gauge bosons W{sup {plus minus}} and Z{sup 0}. We expect that these facilities will enable very precise tests of the GWS theory to be performed in the near future. In keeping with the theme of this Institute, Physics at the 100 GeV Mass Scale, these lectures will explore the current status and the near-future prospects of these experiments.

  14. Antihydrogen production and precision experiments

    SciTech Connect

    Nieto, M.M.; Goldman, T.; Holzscheiter, M.H.

    1996-12-31

    The study of CPT invariance with the highest achievable precision in all particle sectors is of fundamental importance for physics. Equally important is the question of the gravitational acceleration of antimatter. In recent years, impressive progress has been achieved in capturing antiprotons in specially designed Penning traps, in cooling them to energies of a few milli-electron volts, and in storing them for hours in a small volume of space. Positrons have been accumulated in large numbers in similar traps, and low energy positron or positronium beams have been generated. Finally, steady progress has been made in trapping and cooling neutral atoms. Thus the ingredients to form antihydrogen at rest are at hand. Once antihydrogen atoms have been captured at low energy, spectroscopic methods can be applied to interrogate their atomic structure with extremely high precision and compare it to its normal matter counterpart, the hydrogen atom. Especially the 1S-2S transition, with a lifetime of the excited state of 122 msec and thereby a natural linewidth of 5 parts in 10^16, offers in principle the possibility to directly compare matter and antimatter properties at a level of 1 part in 10^16.

  15. Precision Mass Measurements at CARIBU

    NASA Astrophysics Data System (ADS)

    Lascar, D.; van Schelt, J.; Savard, G.; Caldwell, S.; Chaudhuri, A.; Clark, J. A.; Levand, A. F.; Li, G.; Sternberg, M.; Sun, T.; Zabransky, B. J.; Segel, R.; Sharma, K.

    2010-02-01

    Neutron separation energies (Sn) are essential inputs to models of explosive r-process nucleosynthesis. However, for nuclei farther from stability, the precision of Sn decreases as production decreases and observation of those nuclei becomes more difficult. Many of the most critical inputs to the models are based on extrapolations from measurements of masses closer to stability than the predicted r-process path. Measuring masses that approach and lie on the predicted r-process path will further constrain the systematic uncertainties in these extrapolated values. The Canadian Penning Trap Mass Spectrometer (CPT) at Argonne National Laboratory (ANL) has measured the masses of more than 160 nuclei to high precision. A recent move to the CAlifornium Rare Isotope Breeder Upgrade (CARIBU) at ANL has given the CPT unique access to weakly produced nuclei that cannot be easily reached via proton-induced fission of ^238U. CARIBU will eventually use a 1 Ci source of ^252Cf to produce these nuclei. Installation of the CPT at CARIBU as well as the first CPT mass measurements of neutron-rich nuclei at CARIBU will be discussed.

  16. Precise Adaptation in Bacterial Chemotaxis through ``Assistance Neighborhoods''

    NASA Astrophysics Data System (ADS)

    Endres, Robert

    2007-03-01

    The chemotaxis network in Escherichia coli is remarkable for its sensitivity to small relative changes in the concentrations of multiple chemical signals over a broad range of ambient concentrations. Key to this sensitivity is an adaptation system that relies on methylation and demethylation (or deamidation) of specific modification sites of the chemoreceptors by the enzymes CheR and CheB, respectively. It was recently discovered that these enzymes can access five to seven receptors when tethered to a particular receptor. We show that these ``assistance neighborhoods'' (ANs) are necessary for precise and robust adaptation in a model for signaling by clusters of chemoreceptors: (1) ANs suppress fluctuations of the receptor methylation level; (2) ANs lead to robustness with respect to biochemical parameters. We predict two limits of precise adaptation at large attractant concentrations: either receptors reach full methylation and turn off, or receptors become saturated and cease to respond to attractant but retain their adapted activity.
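
    The kind of precise adaptation analysed here can be illustrated with a deliberately generic methylation-feedback sketch (it ignores receptor clustering and assistance neighbourhoods entirely, and all rate constants and the activity function below are illustrative assumptions, not the paper's model): CheR methylates inactive receptors at a fixed rate while CheB demethylates active ones, so the adapted activity returns to kR/(kR+kB) regardless of the ambient attractant level.

    ```python
    import numpy as np

    def activity(m, L, K=10.0):
        # toy receptor activity: raised by methylation m, lowered by attractant L
        return 1.0 / (1.0 + np.exp(-(2.0 * m - 4.0 - np.log(1.0 + L / K))))

    def adapted_activity(L, m=2.0, kR=0.1, kB=0.2, dt=0.01, T=500.0):
        for _ in range(int(T / dt)):
            a = activity(m, L)
            m += dt * (kR * (1.0 - a) - kB * a)   # methylate inactive, demethylate active
        return activity(m, L)

    for L in [0.0, 1.0, 10.0, 100.0]:
        print(f"attractant {L:6.1f}  ->  adapted activity {adapted_activity(L):.3f}")
    ```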

  17. PRECISE DOPPLER MONITORING OF BARNARD'S STAR

    SciTech Connect

    Choi, Jieun; Marcy, Geoffrey W.; Howard, Andrew W.; Isaacson, Howard; McCarthy, Chris; Fischer, Debra A.; Johnson, John A.; Wright, Jason T.

    2013-02-20

    We present 248 precise Doppler measurements of Barnard's Star (Gl 699), the second nearest star system to Earth, obtained from Lick and Keck Observatories during the 25 years between 1987 and 2012. The early precision was 20 m s⁻¹, improving to 2 m s⁻¹ during the last 8 years, constituting the most extensive and sensitive search for Doppler signatures of planets around this stellar neighbor. We carefully analyze the 136 Keck radial velocities spanning 8 years by first applying a periodogram analysis to search for nearly circular orbits. We find no significant periodic Doppler signals with amplitudes above ~2 m s⁻¹, setting firm upper limits on the minimum mass (M sin i) of any planets with orbital periods from 0.1 to 1000 days. Using a Monte Carlo analysis for circular orbits, we determine that planetary companions to Barnard's Star with masses above 2 M⊕ and periods below 10 days would have been detected. Planets with periods up to 2 years and masses above 10 M⊕ (0.03 M_Jup) are also ruled out. A similar analysis allowing for eccentric orbits yields comparable mass limits. The habitable zone of Barnard's Star appears to be devoid of roughly Earth-mass planets or larger, save for face-on orbits. Previous claims of planets around the star by van de Kamp are strongly refuted. The radial velocity of Barnard's Star increases with time at 4.515 ± 0.002 m s⁻¹ yr⁻¹, consistent with the predicted geometrical effect, secular acceleration, which exchanges transverse for radial components of velocity.
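
    The quoted secular acceleration can be checked from kinematics alone, since it is just v_t²/d expressed as a radial-velocity drift; a minimal sketch (the proper motion and parallax below are approximate literature values, not numbers from the paper):

    ```python
    mu = 10.37        # total proper motion, arcsec/yr (approximate)
    parallax = 0.547  # arcsec (approximate)

    d_pc = 1.0 / parallax            # distance, pc
    v_t = 4.74 * mu * d_pc           # transverse velocity, km/s
    d_km = d_pc * 3.086e13           # distance, km

    dvr_dt = v_t**2 / d_km * 3.156e7 * 1e3   # m/s per year
    print(f"{dvr_dt:.2f} m/s/yr")            # ~4.5, cf. the measured 4.515 +/- 0.002
    ```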

  18. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  19. Gluten Sensitivity

    MedlinePlus

    Gluten is a protein found in wheat, rye, and barley. It is found mainly in foods but ... products like medicines, vitamins, and supplements. People with gluten sensitivity have problems with gluten. It is different ...

  20. Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Hansen, J.

    2007-12-01

    Discussion of climate sensitivity requires careful definition of forcings, feedbacks and response times; indeed, foggy definitions have produced flawed assessments of climate sensitivity. The best information available on climate sensitivity comes from insightful interpretation of the Earth's history aided by quantitative information from climate models and understanding of climate processes. Climate sensitivity is a strong function of time scale, in part because of the nature of climate feedbacks. Unfortunately for humanity, the preponderance of feedbacks on the century time scale appears to be positive. The chief implication is the need for a sharp reversal in the trend of human-made climate forcing, if we are to avoid creating a planet that is dramatically different from the one on which civilization developed.

  1. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
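
    The adjoint-variable step referred to here can be sketched on a generic discretized problem K(p)u = f with a scalar response J = cᵀu: one extra (adjoint) solve yields the design sensitivity without differencing the whole analysis. This is only a minimal illustration of the adjoint idea, not the paper's domain formulation for built-up structures.

    ```python
    import numpy as np

    def K(p):       # toy stiffness matrix depending on a design parameter p
        return np.array([[2.0 + p, -1.0],
                         [-1.0,     2.0]])

    def dK_dp(p):   # its derivative with respect to p
        return np.array([[1.0, 0.0],
                         [0.0, 0.0]])

    f = np.array([1.0, 0.5])
    c = np.array([0.0, 1.0])     # response J(u) = c^T u
    p = 0.3

    u = np.linalg.solve(K(p), f)            # primal solve
    lam = np.linalg.solve(K(p).T, c)        # adjoint solve  K^T lam = dJ/du
    dJ_dp = -lam @ (dK_dp(p) @ u)           # dJ/dp = -lam^T (dK/dp) u

    # finite-difference check
    eps = 1e-6
    dJ_fd = (c @ np.linalg.solve(K(p + eps), f) - c @ u) / eps
    print(dJ_dp, dJ_fd)
    ```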

  2. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    PubMed

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  3. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Among profile, helix and tooth thickness, pitch is one of the most important parameters in an involute gear measurement evaluation. In principle, coordinate measuring machines (CMM) and CNC-controlled gear measuring machines as a variant of a CMM are suited for these kinds of gear measurements. Now the Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
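
    The closure idea can be made concrete with a toy simulation (purely schematic: additive errors and N equally spaced relative positions are assumed; this is not the institutes' actual procedures). Re-measuring the artifact at every rotated position shifts the gear's own deviations while the instrument's systematic errors stay fixed, so averaging over positions separates the two contributions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 36
    gear = rng.normal(0.0, 1.0, N);  gear -= gear.mean()    # artifact pitch deviations
    instr = rng.normal(0.0, 0.5, N); instr -= instr.mean()  # instrument systematic errors

    # measurement with the artifact rotated by k positions: m_k[i] = gear[(i+k) % N] + instr[i]
    m = np.array([np.roll(gear, -k) + instr for k in range(N)])

    instr_est = m.mean(axis=0)        # gear deviations average out over a full closure
    gear_est = m[0] - instr_est       # artifact deviations recovered by subtraction

    print(np.max(np.abs(instr_est - instr)), np.max(np.abs(gear_est - gear)))  # ~1e-15
    ```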

  4. Accurate modeling of parallel scientific computations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.; Townsend, James C.

    1988-01-01

    Scientific codes are usually parallelized by partitioning a grid among processors. To achieve top performance it is necessary to partition the grid so as to balance workload and minimize communication/synchronization costs. This problem is particularly acute when the grid is irregular, changes over the course of the computation, and is not known until load time. Critical mapping and remapping decisions rest on the ability to accurately predict performance, given a description of a grid and its partition. This paper discusses one approach to this problem, and illustrates its use on a one-dimensional fluids code. The models constructed are shown to be accurate, and are used to find optimal remapping schedules.

  5. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and real mask information, including mask ebeam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model, enabling its implementation. The study covers the different effects that should be embedded in the mask model as well as the experiments required to model them.

  6. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-10-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  7. Accurate maser positions for MALT-45

    NASA Astrophysics Data System (ADS)

    Jordan, Christopher; Bains, Indra; Voronkov, Maxim; Lo, Nadia; Jones, Paul; Muller, Erik; Cunningham, Maria; Burton, Michael; Brooks, Kate; Green, James; Fuller, Gary; Barnes, Peter; Ellingsen, Simon; Urquhart, James; Morgan, Larry; Rowell, Gavin; Walsh, Andrew; Loenen, Edo; Baan, Willem; Hill, Tracey; Purcell, Cormac; Breen, Shari; Peretto, Nicolas; Jackson, James; Lowe, Vicki; Longmore, Steven

    2013-04-01

    MALT-45 is an untargeted survey, mapping the Galactic plane in CS (1-0), Class I methanol masers, SiO masers and thermal emission, and high frequency continuum emission. After obtaining images from the survey, a number of masers were detected, but without accurate positions. This project seeks to resolve each maser and its environment, with the ultimate goal of placing the Class I methanol maser into a timeline of high mass star formation.

  8. Manufacturing Precise, Lightweight Paraboloidal Mirrors

    NASA Technical Reports Server (NTRS)

    Hermann, Frederick Thomas

    2006-01-01

    A process for fabricating a precise, diffraction-limited, ultra-lightweight, composite-material (matrix/fiber) paraboloidal telescope mirror has been devised. Unlike the traditional process of fabrication of heavier glass-based mirrors, this process involves a minimum of manual steps and subjective judgment. Instead, this process involves objectively controllable, repeatable steps; hence, this process is better suited for mass production. Other processes that have been investigated for fabrication of precise composite-material lightweight mirrors have resulted in print-through of fiber patterns onto reflecting surfaces, and have not provided adequate structural support for maintenance of stable, diffraction-limited surface figures. In contrast, this process does not result in print-through of the fiber pattern onto the reflecting surface and does provide a lightweight, rigid structure capable of maintaining a diffraction-limited surface figure in the face of changing temperature, humidity, and air pressure. The process consists mainly of the following steps: 1. A precise glass mandrel is fabricated by conventional optical grinding and polishing. 2. The mandrel is coated with a release agent and covered with layers of a carbon-fiber composite material. 3. The outer surface of the outer layer of the carbon-fiber composite material is coated with a surfactant chosen to provide for the proper flow of an epoxy resin to be applied subsequently. 4. The mandrel as thus covered is mounted on a temperature-controlled spin table. 5. The table is heated to a suitable temperature and spun at a suitable speed as the epoxy resin is poured onto the coated carbon-fiber composite material. 6. The surface figure of the optic is monitored and adjusted by use of traditional Ronchi, Foucault, and interferometric optical measurement techniques while the speed of rotation and the temperature are adjusted to obtain the desired figure. The proper selection of surfactant, speed of rotation
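
    The spin-casting step (5 and 6 above) relies on the textbook fact that the free surface of a liquid rotating at angular rate ω is a paraboloid; matching z = ω²r²/(2g) to z = r²/(4f) gives the focal length directly from the spin rate, f = g/(2ω²). For example, a 1 m focal length corresponds to ω = √(g/2f) ≈ 2.2 rad/s, roughly 21 rpm (added for context; the paper's actual process parameters are not quoted here).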

  9. The Precision Field Lysimeter Concept

    NASA Astrophysics Data System (ADS)

    Fank, J.

    2009-04-01

    The understanding and interpretation of leaching processes have improved significantly during the past decades. Unlike laboratory experiments, which are mostly performed under very controlled conditions (e.g. homogeneous, uniform packing of pre-treated test material, saturated steady-state flow conditions, and controlled uniform hydraulic conditions), lysimeter experiments generally simulate actual field conditions. Lysimeters may be classified according to different criteria such as the type of soil block used (monolithic or reconstructed), the drainage (by gravity, by vacuum, or with a maintained water table), or weighing versus non-weighing designs. In 2004, experimental investigations were set up to assess the impact of different farming systems on groundwater quality of the shallow floodplain aquifer of the river Mur in Wagna (Styria, Austria). The sediment is characterized by a thin layer (30 - 100 cm) of sandy Dystric Cambisol and underlying gravel and sand. Three precisely weighing equilibrium tension block lysimeters have been installed in agricultural test fields to compare water flow and solute transport under (i) organic farming, (ii) conventional low input farming and (iii) extensification by mulching grass. Specific monitoring equipment is used to reduce the well known shortcomings of lysimeter investigations: The lysimeter core is excavated as an undisturbed monolithic block (circular, 1 m2 surface area, 2 m depth) to prevent destruction of the natural soil structure and pore system. Tracer experiments were carried out to investigate the occurrence of artificial preferential flow and transport along the walls of the lysimeters. The results show that such effects can be neglected. Precisely weighing load cells are used to constantly determine the weight loss of the lysimeter due to evaporation and transpiration and to measure different forms of precipitation. The accuracy of the weighing apparatus is 0.05 kg, or 0.05 mm water equivalent

  10. Accurate Molecular Polarizabilities Based on Continuum Electrostatics

    PubMed Central

    Truchon, Jean-François; Nicholls, Anthony; Iftimie, Radu I.; Roux, Benoît; Bayly, Christopher I.

    2013-01-01

    A novel approach for representing the intramolecular polarizability as a continuum dielectric is introduced to account for molecular electronic polarization. It is shown, using a finite-difference solution to the Poisson equation, that the Electronic Polarization from Internal Continuum (EPIC) model yields accurate gas-phase molecular polarizability tensors for a test set of 98 challenging molecules composed of heteroaromatics, alkanes and diatomics. The electronic polarization originates from a high intramolecular dielectric that produces polarizabilities consistent with B3LYP/aug-cc-pVTZ and experimental values when surrounded by vacuum dielectric. In contrast to other approaches to model electronic polarization, this simple model avoids the polarizability catastrophe and accurately calculates molecular anisotropy with the use of very few fitted parameters and without resorting to auxiliary sites or anisotropic atomic centers. On average, the unsigned errors in the average polarizability and the anisotropy compared to B3LYP are 2% and 5%, respectively. The correlation between the polarizability components from B3LYP and this approach leads to an R² of 0.990 and a slope of 0.999. Even the F2 anisotropy, shown to be a difficult case for existing polarizability models, can be reproduced within 2% error. In addition to providing new parameters for a rapid method directly applicable to the calculation of polarizabilities, this work extends the widely used Poisson equation to areas where accurate molecular polarizabilities matter. PMID:23646034

  11. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models. PMID:27111139
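
    For context (the standard pulsed-field-gradient relation, not a result of this paper): in the narrow-pulse limit, a gradient pair of amplitude g, duration δ and separation Δ imprints a phase proportional to the mean displacement, so the voxel-averaged velocity is read off as

        φ = γ g δ Δ v̄   ⟹   v̄ = φ / (γ g δ Δ),

    which is why an asymmetric displacement distribution inside a voxel (diffusion combined with slow flow) biases the phase-derived velocity.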

  12. Accurate phase-shift velocimetry in rock

    NASA Astrophysics Data System (ADS)

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  13. Accurate phase-shift velocimetry in rock.

    PubMed

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R; Holmes, William M

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  14. Simulations of thermally transferred OSL signals in quartz: Accuracy and precision of the protocols for equivalent dose evaluation

    NASA Astrophysics Data System (ADS)

    Pagonis, Vasilis; Adamiec, Grzegorz; Athanassas, C.; Chen, Reuven; Baker, Atlee; Larsen, Meredith; Thompson, Zachary

    2011-06-01

    Thermally-transferred optically stimulated luminescence (TT-OSL) signals in sedimentary quartz have been the subject of several recent studies, due to the potential shown by these signals to increase the range of luminescence dating by an order of magnitude. Based on these signals, a single aliquot protocol termed the ReSAR protocol has been developed and tested experimentally. This paper presents extensive numerical simulations of this ReSAR protocol. The purpose of the simulations is to investigate several aspects of the ReSAR protocol which are believed to cause difficulties during application of the protocol. Furthermore, several modified versions of the ReSAR protocol are simulated, and their relative accuracy and precision are compared. The simulations are carried out using a recently published kinetic model for quartz, consisting of 11 energy levels. One hundred random variants of the natural samples were generated by keeping the transition probabilities between energy levels fixed, while allowing simultaneous random variations of the concentrations of the 11 energy levels. The relative intrinsic accuracy and precision of the protocols are simulated by calculating the equivalent dose (ED) within the model, for a given natural burial dose of the sample. The complete sequence of steps undertaken in several versions of the dating protocols is simulated. The relative intrinsic precision of these techniques is estimated by fitting Gaussian probability functions to the resulting simulated distribution of ED values. New simulations are presented for commonly used OSL sensitivity tests, consisting of successive cycles of sample irradiation with the same dose, followed by measurements of the sensitivity corrected L/T signals. We investigate several experimental factors which may be affecting both the intrinsic precision and intrinsic accuracy of the ReSAR protocol. The results of the simulation show that the four different published versions of the ReSAR protocol can
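
    The way intrinsic accuracy and precision are quantified here — fitting a Gaussian to the distribution of recovered equivalent doses — can be sketched generically as follows (the ED values below are synthetic placeholders; the 11-level kinetic model itself is not reproduced):

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    burial_dose = 100.0                         # Gy, "true" natural dose in the simulation
    ed = rng.normal(101.5, 4.0, 100)            # EDs recovered from 100 simulated variants

    mu, sigma = norm.fit(ed)                    # Gaussian fit to the ED distribution
    print(f"intrinsic accuracy  {100.0 * (mu - burial_dose) / burial_dose:+.1f} %")
    print(f"intrinsic precision {100.0 * sigma / mu:.1f} %")
    ```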

  15. Role of telecommunications in precision agriculture

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Precision agriculture has been made possible by the confluence of several technologies: geographic positioning systems, geographic information systems, image analysis software, low-cost microcomputer-based variable rate controller/recorders, and precision tractor guidance systems. While these techn...

  16. A Comparison of the Astrometric Precision and Accuracy of Double Star Observations with Two Telescopes

    NASA Astrophysics Data System (ADS)

    Alvarez, Pablo; Fishbein, Amos E.; Hyland, Michael W.; Kight, Cheyne L.; Lopez, Hairold; Navarro, Tanya; Rosas, Carlos A.; Schachter, Aubrey E.; Summers, Molly A.; Weise, Eric D.; Hoffman, Megan A.; Mires, Robert C.; Johnson, Jolyon M.; Genet, Russell M.; White, Robin

    2009-01-01

    Using a manual Meade 6" Newtonian telescope and a computerized Meade 10" Schmidt-Cassegrain telescope, students from Arroyo Grande High School measured the well-known separation and position angle of the bright visual double star Albireo. The precision and accuracy of the observations from the two telescopes were compared to each other and to published values of Albireo taken as the standard. It was hypothesized that the larger, computerized telescope would be both more precise and more accurate.

  17. Precision orbit determination for Topex

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Schutz, B. E.; Ries, J. C.; Shum, C. K.

    1990-01-01

    The ability of radar altimeters to measure the distance from a satellite to the ocean surface with a precision of the order of 2 cm imposes unique requirements for the orbit determination accuracy. The orbit accuracy requirements will be especially demanding for the joint NASA/CNES Ocean Topography Experiment (Topex/Poseidon). For this mission, a radial orbit accuracy of 13 centimeters will be required for a mission period of three to five years. This is an order of magnitude improvement in the accuracy achieved during any previous satellite mission. This investigation considers the factors which limit the orbit accuracy for the Topex mission. Particular error sources which are considered include the geopotential, the radiation pressure and the atmospheric drag model.

  18. Precision cosmology and the landscape

    SciTech Connect

    Bousso, Raphael; Bousso, Raphael

    2006-10-01

    After reviewing the cosmological constant problem -- why is Lambda not huge? -- I outline the two basic approaches that had emerged by the late 1980s, and note that each made a clear prediction. Precision cosmological experiments now indicate that the cosmological constant is nonzero. This result strongly favors the environmental approach, in which vacuum energy can vary discretely among widely separated regions in the universe. The need to explain this variation from first principles constitutes an observational constraint on fundamental theory. I review arguments that string theory satisfies this constraint, as it contains a dense discretuum of metastable vacua. The enormous landscape of vacua calls for novel, statistical methods of deriving predictions, and it prompts us to reexamine our description of spacetime on the largest scales. I discuss the effects of cosmological dynamics, and I speculate that weighting vacua by their entropy production may allow for prior-free predictions that do not resort to explicitly anthropic arguments.

  19. Manufacturing Ultra-Precision Meso-scale Products by Coining

    SciTech Connect

    Seugling, R M; Davis, P J; Rickens, K; Osmer, J; Brinksmeier, E

    2010-02-18

    A method for replicating ultra-precision, meso-scale features onto a near-net-shape metallic blank has been demonstrated. The 'coining' technology can be used to imprint a wide range of features and/or profiles into two opposing surfaces. The instrumented system provides the ability to measure and control the product thickness and total thickness variation (TTV). The coining mechanism relies on kinematic principles to accurately and efficiently produce ultra-precision work pieces without the production of by-products such as machining chips or grinding swarf, while preserving surface finish, material structure and overall form. Coining has been developed as a niche process for manufacturing difficult-to-machine, millimeter-size components made from materials that may present hazardous conditions. In the case described in this paper a refractory metal part, tantalum (Ta), was produced with a 4 µm peak-to-valley, 50 µm spatial-wavelength sine wave coined into the surface of a 50 µm blank. This technique shows promise for use on ductile materials that cannot be precision machined with conventional single crystal diamond tooling and/or have strict requirements on subsurface damage, surface impurities and grain structure. As a production process, it can be used to reduce manufacturing costs where large numbers of ultra-precision, repetitive designs are required and to produce parts out of hazardous materials without generating added waste.

  20. Improving the Precision of the Half Life of 34Ar

    NASA Astrophysics Data System (ADS)

    Iacob, V. E.; Hardy, J. C.; Bencomo, M.; Chen, L.; Horvat, V.; Nica, N.; Park, H. I.

    2016-03-01

    Currently, precise ft-values measured for superallowed 0⁺ → 0⁺ β transitions provide the most accurate value for Vud, the up-down quark mixing element of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. This enables the most demanding test of CKM unitarity, one of the pillars of the Standard Model. Further improvements in precision are possible if the ft values for pairs of mirror 0⁺ → 0⁺ transitions can be measured with 0.1% precision or better. The decays of 34Ar and 34Cl are members of such a mirror pair, but so far the former is not known with sufficient precision. Since our 2006 publication of the half-life of 34Ar, we have improved significantly our acquisition and analysis techniques, adding refinements that have led to increased accuracy. The 34Cl half-life is about twice that of 34Ar. This obscures the 34Ar contribution to the decay in measurements such as ours, which detected the decay positrons and were thus unable to differentiate between the parent and daughter decays. We report here two experiments aiming to improve the precision of the 34Ar half-life: the first detected positrons as in our earlier measurement, but with improved controls; the second measured γ rays in coincidence with positrons, thus achieving a clear separation of 34Ar decay from 34Cl.

  1. Accuracy-precision trade-off in visual orientation constancy.

    PubMed

    De Vrijer, M; Medendorp, W P; Van Gisbergen, J A M

    2009-02-09

    Using the subjective visual vertical task (SVV), previous investigations on the maintenance of visual orientation constancy during lateral tilt have found two opposite bias effects in different tilt ranges. The SVV typically shows accurate performance near upright but severe undercompensation at tilts beyond 60 deg (A-effect), frequently with slight overcompensation responses (E-effect) in between. Here we investigate whether a Bayesian spatial-perception model can account for this error pattern. The model interprets A- and E-effects as the drawback of a computational strategy, geared at maintaining visual stability with optimal precision at small tilt angles. In this study, we test whether these systematic errors can be seen as the consequence of a precision-accuracy trade-off when combining a veridical but noisy signal about eye orientation in space with the visual signal. To do so, we used a psychometric approach to assess both precision and accuracy of the SVV in eight subjects laterally tilted at 9 different tilt angles (-120 degrees to 120 degrees). Results show that SVV accuracy and precision worsened with tilt angle, according to a pattern that could be fitted quite adequately by the Bayesian model. We conclude that spatial vision essentially follows the rules of Bayes' optimal observer theory.
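
    A minimal sketch of the trade-off being tested (a generic Bayesian estimator with a prior for upright and a tilt signal whose noise is assumed to grow with tilt; the noise values are illustrative and this is not the authors' fitted model): pulling the noisy tilt estimate toward the prior improves precision near upright but produces A-effect-like undercompensation that grows with tilt angle.

    ```python
    sigma_prior = 15.0                                 # deg, prior "the head is usually upright"
    sigma_meas = lambda tilt: 5.0 + 0.1 * abs(tilt)    # assumed tilt-dependent sensor noise

    for tilt in [0, 30, 60, 90, 120]:
        sp2, sm2 = sigma_prior**2, sigma_meas(tilt)**2
        est = sp2 * tilt / (sp2 + sm2)                 # posterior mean: weighted toward 0 deg
        print(f"tilt {tilt:4d} deg -> estimated tilt {est:6.1f} deg, "
              f"undercompensation {tilt - est:5.1f} deg")
    ```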

  2. Precise tuning of barnacle leg length to coastal wave action.

    PubMed Central

    Arsenault, D. J.; Marchinko, K. B.; Palmer, A. R.

    2001-01-01

    Both spatial and temporal variation in environmental conditions can favour intraspecific plasticity in animal form. But how precise is such environmental modulation? Individual Balanus glandula Darwin, a common northeastern Pacific barnacle, produce longer feeding legs in still water than in moving water. We report here that, on the west coast of Vancouver Island, Canada, the magnitude and the precision of this phenotypic variation is impressive. First, the feeding legs of barnacles from protected bays were nearly twice as long (for the same body mass) as those from open ocean shores. Second, leg length varied surprisingly precisely with wave exposure: the average maximum velocities of breaking waves recorded in situ explained 95.6-99.5% of the variation in average leg length observed over a threefold range of wave exposure. The decline in leg length with increasing wave action was less than predicted due to simple scaling, perhaps due to changes in leg shape or material properties. Nonetheless, the precision of this relationship reveals a remarkably close coupling between growth environment and adult form, and suggests that between-population differences in barnacle leg length may be used for estimating differences in average wave exposure easily and accurately in studies of coastal ecology. PMID:11600079

  3. Slow Control System for the NIFFTE High Precision TPC

    NASA Astrophysics Data System (ADS)

    Thornton, Remington

    2010-11-01

    The Neutron Induced Fission Fragment Tracking Experiment (NIFFTE) has designed a Time Projection Chamber (TPC) to measure neutron-induced fission cross-sections of the major actinides to sub-1% precision over a wide incident neutron energy range. These measurements are necessary to design the next generation of nuclear power plants. In order to achieve our high precision goals, an accurate and efficient slow control system must be implemented. Custom software has been created to control the hardware through the Maximum Integration Data Acquisition System (MIDAS). This includes reading room and device temperatures, setting the high voltage power supplies, and reading voltages. From hardware to software, an efficient design has been implemented and tested. This poster will present the setup and data from this slow control system.

  4. Ultra-precise particle velocities in pulsed supersonic beams

    SciTech Connect

    Christen, Wolfgang

    2013-07-14

    We describe an improved experimental method for the generation of cold, directed particle bunches, and the highly accurate determination of their velocities in a pulsed supersonic beam, allowing for high-resolution experiments on atoms, molecules, and clusters. It is characterized by a pulsed high pressure jet source with high brilliance and optimum repeatability, a flight distance of a few metres that can be varied with a setting tolerance of 50 µm, and a precision in the mean flight time of particles of better than 10⁻⁴. The technique achieves unmatched accuracies in particle velocities and kinetic energies and also permits the reliable determination of enthalpy changes with very high precision.

  5. Perspective on precision machining, polishing, and optical requirements

    SciTech Connect

    Sanger, G.M.

    1981-08-18

    While precision machining has been applied to the manufacture of optical components for a considerable period, thinking about the process has, in general, been restricted to producing only the accurate shapes required. The purpose of this paper is to show how optical components must be considered from an optical (functional) point of view and that the manufacturing process must be selected on that basis. To fill out this perspective, simplistic examples of how optical components are specified with respect to form and finish are given, a comparison between optical polishing and precision machining is made, and some thoughts on which technique should be selected for a specific application are presented. A short discussion of future trends related to accuracy, materials, and tools is included.

  6. Experimental evaluation of active-member control of precision structures

    NASA Technical Reports Server (NTRS)

    Fanson, James; Blackwood, Gary; Chu, Cheng-Chih

    1989-01-01

    The results of closed loop experiments that use piezoelectric active-members to control the flexible motion of a precision truss structure are described. These experiments are directed toward the development of high-performance structural systems as part of the Control/Structure Interaction (CSI) program at JPL. The focus of CSI activity at JPL is to develop the technology necessary to accurately control both the shape and vibration levels in the precision structures from which proposed large space-based observatories will be built. Structural error budgets for these types of structures will likely be in the sub-micron regime; optical tolerances will be even tighter. In order to achieve system level stability and local positioning at this level, it is generally expected that some form of active control will be required.

  7. PRECISION SPECTROPHOTOMETRY AT THE LEVEL OF 0.1%

    SciTech Connect

    Yan Renbin

    2011-11-15

    Accurate relative spectrophotometry is critical for many science applications. Small wavelength-scale residuals in the flux calibration can significantly impact the measurements of weak emission and absorption features in the spectra. Using Sloan Digital Sky Survey data, we demonstrate that the average spectra of carefully selected red-sequence galaxies can be used as a spectroscopic standard to improve the relative spectrophotometry precision to 0.1% on small wavelength scales (from a few to hundreds of Angstroms). We achieve this precision by comparing stacked spectra across tiny redshift intervals. The redshift intervals must be small enough that any systematic stellar population evolution is minimized and is less than the spectrophotometric uncertainty. This purely empirical technique does not require any theoretical knowledge of true galaxy spectra. It can be applied to all large spectroscopic galaxy redshift surveys that sample a large number of galaxies in a uniform population.

  8. Towards High Precision Deuteron Polarimetry

    SciTech Connect

    Silva e Silva, M. da

    2009-08-04

    A finite electric dipole moment (EDM) in any fundamental system would constitute a signal for new physics. The deuteron presents itself as an optimal candidate both experimentally and theoretically. A new storage ring technique is being developed for which a small change in the vertical polarization would be a signal of a non-zero EDM. A novel polarimeter concept is under investigation. Besides being highly efficient, this polarimeter should continuously monitor the beam polarization, guaranteeing optimal sensitivity. Detailed studies on systematic error control, in addition to the measurement of cross sections and analyzing powers, were carried out at KVI-Groningen in The Netherlands. Measurements were conducted at COSY-Juelich in Germany yielding high efficiencies. The (statistics limited) ability to track changes in polarization at the level of a few hundred parts-per-million has been demonstrated. Further studies and developments to meet the final goal of sub-part-per-million sensitivity are in progress.

  9. Programming supramolecular biohybrids as precision therapeutics.

    PubMed

    Ng, David Yuen Wah; Wu, Yuzhou; Kuan, Seah Ling; Weil, Tanja

    2014-12-16

    CONSPECTUS: Chemical programming of macromolecular structures to instill a set of defined chemical properties designed to behave in a sequential and precise manner is a characteristic vision for creating next generation nanomaterials. In this context, biopolymers such as proteins and nucleic acids provide an attractive platform for the integration of complex chemical design due to their sequence specificity and geometric definition, which allows accurate translation of chemical functionalities to biological activity. Coupled with the advent of amino acid specific modification techniques, "programmable" areas of a protein chain become exclusively available for any synthetic customization. We envision that chemically reprogrammed hybrid proteins will bridge the vital link to overcome the limitations of synthetic and biological materials, providing a unique strategy for tailoring precision therapeutics. In this Account, we present our work toward the chemical design of protein- derived hybrid polymers and their supramolecular responsiveness, while summarizing their impact and the advancement in biomedicine. Proteins, in their native form, represent the central framework of all biological processes and are an unrivaled class of macromolecular drugs with immense specificity. Nonetheless, the route of administration of protein therapeutics is often vastly different from Nature's biosynthesis. Therefore, it is imperative to chemically reprogram these biopolymers to direct their entry and activity toward the designated target. As a consequence of the innate structural regularity of proteins, we show that supramolecular interactions facilitated by stimulus responsive chemistry can be intricately designed as a powerful tool to customize their functions, stability, activity profiles, and transportation capabilities. From another perspective, a protein in its denatured, unfolded form serves as a monodispersed, biodegradable polymer scaffold decorated with functional side

  10. Precision grinding process development for brittle materials

    SciTech Connect

    Blaedel, K L; Davis, P J; Piscotty, M A

    1999-04-01

    High performance, brittle materials are the materials of choice for many of today's engineering applications. This paper describes three separate precision grinding processes developed at Lawrence Livermore National Laboratory to machine precision ceramic components. Included in the discussion of the precision processes is a variety of grinding wheel dressing, truing and profiling techniques.

  11. 21 CFR 872.3165 - Precision attachment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Precision attachment. 872.3165 Section 872.3165...) MEDICAL DEVICES DENTAL DEVICES Prosthetic Devices § 872.3165 Precision attachment. (a) Identification. A precision attachment or preformed bar is a device made of austenitic alloys or alloys containing 75...

  12. High-precision photometry for K2 Campaign 1

    NASA Astrophysics Data System (ADS)

    Huang, C. X.; Penev, K.; Hartman, J. D.; Bakos, G. Á.; Bhatti, W.; Domsa, I.; de Val-Borro, M.

    2015-12-01

    The two-reaction-wheel K2 mission promises, and has delivered, new discoveries in the stellar and exoplanet fields. However, due to the loss of accurate pointing, it also brings new challenges for the data reduction processes. In this paper, we describe a new reduction pipeline for extracting high-precision photometry from the K2 data set, and present public light curves for the K2 Campaign 1 target pixel data set. Key to our reduction is the derivation of global astrometric solutions from the target stamps, from which accurate centroids are passed on for high-precision photometry extraction. We extract target light curves for sources from a combined UCAC4 and EPIC catalogue - this includes not only primary targets of K2 Campaign 1, but also any other stars that happen to fall on the pixel stamps. We provide the raw light curves, and the products of various detrending processes aimed at removing different types of systematics. Our astrometric solutions achieve a median residual of ˜0.127 arcsec. For bright stars, our best 6.5 h precision for raw light curves is ˜20 parts per million (ppm). For our detrended light curves, the best 6.5 h precision achieved is ˜15 ppm. We show that our detrended light curves have fewer systematic effects (or trends, or red-noise) than light curves produced by other groups from the same observations. Example light curves of transiting planets and a Cepheid variable candidate are also presented. We make all light curves public, including the raw and detrended photometry, at http://k2.hatsurveys.org.
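
    The "6.5 h precision" figure of merit quoted above is commonly estimated by binning a normalised light curve into 6.5-hour windows and taking the scatter of the bin means; a generic sketch of that estimate (not necessarily the pipeline's exact definition):

    ```python
    import numpy as np

    def precision_6p5hr(time_days, flux):
        """Scatter of 6.5-hour bin means, in parts per million."""
        flux = flux / np.median(flux)
        bins = np.floor((time_days - time_days.min()) / (6.5 / 24.0)).astype(int)
        means = [flux[bins == b].mean() for b in np.unique(bins) if (bins == b).sum() > 3]
        return 1e6 * np.std(means)

    # white-noise example at the ~29.4-minute K2 long cadence
    t = np.arange(0.0, 80.0, 29.4 / 60.0 / 24.0)
    f = 1.0 + np.random.default_rng(0).normal(0.0, 100e-6, t.size)
    print(f"{precision_6p5hr(t, f):.0f} ppm")   # ~100/sqrt(13) ~ 28 ppm
    ```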

  13. Patient-Specific Orbital Implants: Development and Implementation of Technology for More Accurate Orbital Reconstruction.

    PubMed

    Podolsky, Dale J; Mainprize, James G; Edwards, Glenn P; Antonyshyn, Oleh M

    2016-01-01

    Fracture of the orbital floor is commonly seen in facial trauma. Accurate anatomical reconstruction of the orbital floor contour is challenging. The authors demonstrate a novel method to more precisely reconstruct the orbital floor on a 50-year-old female who sustained an orbital floor fracture following a fall. Results of the reconstruction show excellent reapproximation of the native orbital floor contour and complete resolution of her enophthalmos and facial asymmetry. PMID:26674886

  14. ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME

    SciTech Connect

    Minesaki, Yukitaka

    2013-03-15

    We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.

  15. Accurate Orbital Integration of the General Three-body Problem Based on the d'Alembert-type Scheme

    NASA Astrophysics Data System (ADS)

    Minesaki, Yukitaka

    2013-03-01

    We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.

  16. Precision cosmology and the density of baryons in the universe.

    PubMed

    Kaplinghat, M; Turner, M S

    2001-01-15

    Big-bang nucleosynthesis (BBN) and cosmic microwave background (CMB) anisotropy measurements give independent, accurate measurements of the baryon density and can test the framework of the standard cosmology. Early CMB data are consistent with the long-standing conclusion from BBN that baryons constitute a small fraction of matter in the Universe, but may indicate a slightly higher value for the baryon density. We clarify precisely what the two methods determine and point out that differing values for the baryon density can indicate either an inconsistency or physics beyond the standard models of cosmology and particle physics. We discuss other signatures of the new physics in CMB anisotropy.

  17. Precise measurements of primordial power spectrum with 21 cm fluctuations

    SciTech Connect

    Kohri, Kazunori; Oyama, Yoshihiko; Sekiguchi, Toyokazu; Takahashi, Tomo E-mail: oyamayo@post.kek.jp E-mail: tomot@cc.saga-u.ac.jp

    2013-10-01

    We discuss the issue of how precisely we can measure the primordial power spectrum by using future observations of 21 cm fluctuations and the cosmic microwave background (CMB). For this purpose, we investigate projected constraints on the quantities characterizing the primordial power spectrum: the spectral index n_s, its running α_s and even its higher order running β_s. We show that future 21 cm observations in combination with CMB would accurately measure the above-mentioned observables of the primordial power spectrum. We also discuss the implications for some explicit inflationary models.
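
    For reference, the observables named here enter through the usual expansion of the primordial curvature power spectrum about a pivot scale k_*:

        ln P(k) = ln A_s + (n_s - 1) ln(k/k_*) + (α_s/2) ln²(k/k_*) + (β_s/6) ln³(k/k_*),
        with α_s ≡ dn_s/d ln k and β_s ≡ dα_s/d ln k.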

  18. Arbitrary precision composite pulses for NMR quantum computing.

    PubMed

    Alway, William G; Jones, Jonathan A

    2007-11-01

    We discuss the implementation of arbitrary precision composite pulses developed using the methods of Brown et al. [K.R. Brown, A.W. Harrow, I.L. Chuang, Arbitrarily accurate composite pulse sequences, Phys. Rev. A 70 (2004) 052318]. We give explicit results for pulse sequences designed to tackle both the simple case of pulse length errors and the more complex case of off-resonance errors. The results are developed in the context of NMR quantum computation, but could be applied more widely.
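
    As an illustration of the kind of suppression involved (a sketch using the well-known BB1 sequence for pulse-length errors; the paper's arbitrary-precision sequences are generalisations of this idea and are not reproduced here): with every pulse area scaled by 1+ε, a bare θ pulse has infidelity of order ε², whereas following it with π_φ, 2π_{3φ}, π_φ at φ = arccos(−θ/4π) removes the error to much higher order.

    ```python
    import numpy as np

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

    def pulse(theta, phi):
        """Rotation by theta about an axis at angle phi in the xy-plane."""
        axis = np.cos(phi) * X + np.sin(phi) * Y
        return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

    def infidelity(U, V):
        return 1.0 - abs(np.trace(U.conj().T @ V)) / 2.0

    theta = np.pi / 2
    target = pulse(theta, 0.0)
    phi1 = np.arccos(-theta / (4.0 * np.pi))          # BB1 phase

    for eps in [0.01, 0.05, 0.10]:
        s = 1.0 + eps                                  # fractional pulse-length error
        naive = pulse(s * theta, 0.0)
        bb1 = (pulse(s * np.pi, phi1) @ pulse(s * 2 * np.pi, 3 * phi1) @
               pulse(s * np.pi, phi1) @ pulse(s * theta, 0.0))
        print(f"eps={eps:.2f}  naive {infidelity(target, naive):.1e}  "
              f"BB1 {infidelity(target, bb1):.1e}")
    ```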

  19. MC Kernel: Broadband Waveform Sensitivity Kernels for Seismic Tomography

    NASA Astrophysics Data System (ADS)

    Stähler, Simon C.; van Driel, Martin; Auer, Ludwig; Hosseini, Kasra; Sigloch, Karin; Nissen-Meyer, Tarje

    2016-04-01

    We present MC Kernel, a software implementation to calculate seismic sensitivity kernels on arbitrary tetrahedral or hexahedral grids across the whole observable seismic frequency band. Seismic sensitivity kernels are the basis for seismic tomography, since they map measurements to model perturbations. Their calculation over the whole frequency range was so far only possible with approximative methods (Dahlen et al. 2000). Fully numerical methods were restricted to the lower frequency range (usually below 0.05 Hz, Tromp et al. 2005). With our implementation, it is possible to compute accurate sensitivity kernels for global tomography across the observable seismic frequency band. These kernels rely on wavefield databases computed via AxiSEM (www.axisem.info), and thus on spherically symmetric models. The advantage is that frequencies up to 0.2 Hz and higher can be accessed. Since the usage of irregular, adapted grids is an integral part of regularisation in seismic tomography, MC Kernel works in an inversion-grid-centred fashion: a Monte-Carlo integration method is used to project the kernel onto each basis function, which allows the desired precision of the kernel estimation to be controlled. It also means that the code concentrates calculation effort on regions of interest without prior assumptions on the kernel shape. The code makes extensive use of redundancies in calculating kernels for different receivers or frequency-pass-bands for one earthquake, to facilitate its usage in large-scale global seismic tomography.

  20. W and Z precision physics

    NASA Astrophysics Data System (ADS)

    Josa, M. Isabel

    2013-11-01

    Recent results on W and Z physics from LHC experiments are presented. Measurements reviewed include total W and Z cross sections, the W lepton charge asymmetry, and Z differential cross sections. Production of Z bosons is studied as a function of rapidity, transverse momentum and angular variables. The Drell-Yan differential distribution with the dilepton mass and the double differential distribution with the dilepton mass and rapidity are shown. Finally, measurements of several electroweak observables, the forward-backward Drell-Yan asymmetry and the weak mixing angle sin θW, are also presented. The measurements are compared with the most accurate theoretical predictions and modern parton distribution functions. A general agreement is observed.

  1. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS (and WFPC2 parallel) observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical (or "Einstein") timescale of each microlensing event, rather than an effective ("FWHM") timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius and relative velocities. Velocities are known from kinematics, and the cross-section is directly proportional to the (unknown) lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo (for the same number of microlensing events) due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database (about 350 nights). For the whole survey (and a delta-function mass distribution) the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
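
    For context, the "Einstein" timescale referred to is t_E = R_E / v_rel, with the Einstein radius

        R_E = sqrt( (4 G M / c²) · D_L (D_S − D_L) / D_S ),

    where D_L and D_S are the lens and source distances; since the geometry and kinematics in the M31 configuration are comparatively well constrained, a measured t_E translates into a lens mass roughly as M ∝ t_E² (a standard relation, added here for context).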

  2. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate, and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope steepening technique, which has no effect at smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one intermediate state model is employed. A modification for this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely, Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical at smooth regions, and yield high resolution at discontinuities.
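
    The median-function trick mentioned for the monotonicity constraint can be sketched with a generic limited-slope reconstruction (only the flavour of the idea, not the paper's exact constraint): median(0, a, b) equals minmod(a, b), so a one-line median gives a slope that vanishes at extrema and otherwise picks the smaller one-sided difference.

    ```python
    import numpy as np

    def median3(a, b, c):
        return np.maximum(np.minimum(a, b), np.minimum(np.maximum(a, b), c))

    def limited_slopes(u):
        """Monotonicity-limited slopes for a 1-D array of cell averages."""
        d_minus = np.diff(u, prepend=u[0])   # u_i - u_{i-1}
        d_plus = np.diff(u, append=u[-1])    # u_{i+1} - u_i
        return median3(np.zeros_like(u), d_minus, d_plus)   # = minmod(d_minus, d_plus)

    u = np.array([0.0, 0.0, 0.1, 0.5, 1.0, 1.0, 0.6, 0.0])
    s = limited_slopes(u)
    u_face_left, u_face_right = u - 0.5 * s, u + 0.5 * s    # reconstructed interface values
    print(s)
    ```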

  3. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2016-07-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water, since its temperature is known. Initially the thermometers are at ambient temperature; they are then suddenly immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new design of thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheathed thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through the wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertial thermometer model. By comparing the results, it was demonstrated that the new thermometer allows the fluid temperature to be obtained much faster and with higher accuracy in comparison to the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible thanks to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
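
    The first-order correction mentioned amounts to inverting the sensor's transfer function: for a time constant τ, the fluid temperature is T_fluid ≈ T_ind + τ dT_ind/dt (a second-order model adds a term in the second derivative). A minimal sketch with an assumed τ, not the thermometer's calibrated value:

    ```python
    import numpy as np

    tau = 3.0                                   # assumed first-order time constant, s
    t = np.linspace(0.0, 30.0, 301)             # s

    # indicated temperature of an ideal first-order sensor plunged into 100 C water at t=0
    T_ind = 100.0 + (20.0 - 100.0) * np.exp(-t / tau)

    T_corrected = T_ind + tau * np.gradient(T_ind, t)   # first-order inertia correction
    print(T_ind[10], T_corrected[10])                   # ~42.7 C indicated vs ~100 C recovered
    ```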

  4. Cellular signalling effects in high precision radiotherapy

    NASA Astrophysics Data System (ADS)

    McMahon, Stephen J.; McGarry, Conor K.; Butterworth, Karl T.; Jain, Suneil; O'Sullivan, Joe M.; Hounsell, Alan R.; Prise, Kevin M.

    2015-06-01

    Radiotherapy is commonly planned on the basis of physical dose received by the tumour and surrounding normal tissue, with margins added to address the possibility of geometric miss. However, recent experimental evidence suggests that intercellular signalling results in a given cell’s survival also depending on the dose received by neighbouring cells. A model of radiation-induced cell killing and signalling was used to analyse how this effect depends on dose and margin choices. Effective Uniform Doses were calculated for model tumours in both idealised cases with no delivery uncertainty and more realistic cases incorporating geometric uncertainty. In highly conformal irradiation, a lack of signalling from outside the target leads to reduced target cell killing, equivalent to under-dosing by up to 10% compared to large uniform fields. This effect is significantly reduced when higher doses per fraction are considered, both increasing the level of cell killing and reducing margin sensitivity. These effects may limit the achievable biological precision of techniques such as stereotactic radiotherapy even in the absence of geometric uncertainties, although it is predicted that larger fraction sizes reduce the relative contribution of cell signalling driven effects. These observations may contribute to understanding the efficacy of hypo-fractionated radiotherapy.
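
    As a rough illustration of the dose-summary metric referred to above, the sketch below computes a generalised Equivalent Uniform Dose from a voxel dose distribution using the standard power-law form; the exponent and the dose array are illustrative assumptions, and this generic gEUD expression is not the paper's cell-signalling model.

        import numpy as np

        def generalized_eud(doses, a):
            # gEUD = (mean(d_i ** a)) ** (1/a); strongly negative a emphasises cold spots.
            doses = np.asarray(doses, dtype=float)
            return np.mean(doses ** a) ** (1.0 / a)

        # Illustrative: a tumour covered at 60 Gy except a small under-dosed rim
        doses = np.concatenate((np.full(950, 60.0), np.full(50, 54.0)))
        print(generalized_eud(doses, a=-10.0))   # below 60 Gy, reflecting the cold spot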

  5. A new approach to compute accurate velocity of meteors

    NASA Astrophysics Data System (ADS)

    Egal, Auriane; Gural, Peter; Vaubaillon, Jeremie; Colas, Francois; Thuillot, William

    2016-10-01

    The CABERNET project was designed to push the limits of meteoroid orbit measurements by improving the determination of meteor velocities. Indeed, despite the development of camera networks dedicated to the observation of meteors, there is still an important discrepancy between the measured orbits of meteoroids and theoretical expectations. The gap between the observed and theoretical semi-major axes of the orbits is especially significant; an accurate determination of meteoroid orbits therefore largely depends on the computation of the pre-atmospheric velocities. It is thus imperative to establish how the precision of the velocity measurements can be increased. In this work, we analyse the different methods currently used to compute meteor velocities and trajectories. They are based on the intersecting planes method developed by Ceplecha (1987), the least squares method of Borovicka (1990), and the multi-parameter fitting (MPF) method published by Gural (2012). In order to compare the performance of these techniques objectively, we have simulated realistic meteors ('fakeors') reproducing the measurement errors of several camera networks. Some fakeors are built following the propagation models studied by Gural (2012), while others are created by numerical integration using the Borovicka et al. (2007) model. Different optimization techniques have also been investigated in order to pick the most suitable one for solving the MPF, and the influence of the trajectory geometry on the result is also presented. We present here the results of an improved implementation of the multi-parameter fitting that allows an accurate orbit computation of meteors with CABERNET. The comparison of the different velocity computations suggests that, although the MPF is by far the best method for solving the trajectory and the velocity of a meteor, the ill-conditioning of the cost functions used can lead to large estimation errors for noisy
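
    To make the multi-parameter fitting idea concrete, the sketch below (Python) fits a simple along-track propagation model with an exponential deceleration term, of the general kind studied by Gural (2012), to noisy position-versus-time samples; the exact model form, parameter values and noise level are illustrative assumptions, not the CABERNET implementation. The strong correlation between the deceleration parameters is what makes the cost function ill-conditioned for noisy data, as noted above.

        import numpy as np
        from scipy.optimize import least_squares

        def model(params, t):
            # Illustrative along-track distance with exponential deceleration:
            #   d(t) = v0*t - a1*(exp(a2*t) - 1)
            v0, a1, a2 = params
            return v0 * t - a1 * (np.exp(a2 * t) - 1.0)

        def fit_velocity(t, d_obs, guess=(30000.0, 1.0, 5.0)):
            # Least-squares fit of (v0, a1, a2); v0 is the pre-deceleration speed in m/s.
            res = least_squares(lambda p: model(p, t) - d_obs, x0=guess,
                                bounds=([1000.0, 0.0, 0.0], [80000.0, 1.0e4, 50.0]))
            return res.x

        # Synthetic example: a ~35 km/s meteor observed for 0.5 s at 30 frames per second
        t = np.arange(0.0, 0.5, 1.0 / 30.0)
        d_obs = model((35000.0, 2.0, 8.0), t) + np.random.normal(0.0, 20.0, t.size)
        v0_fit, a1_fit, a2_fit = fit_velocity(t, d_obs)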

  6. Efficient and Accurate Indoor Localization Using Landmark Graphs

    NASA Astrophysics Data System (ADS)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error of less than 2.5 meters, which outperforms the other two DR-based methods.
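
    A minimal sketch of the landmark-graph idea is given below (Python; the node names, edge attributes and matching rule are assumptions chosen for illustration): nodes are landmarks, directed edges carry the accessible path length and walking heading, and a detected landmark event is matched against the outgoing edges of the last confirmed landmark to reset the dead-reckoned position.

        from dataclasses import dataclass, field

        @dataclass
        class Edge:
            to: str              # destination landmark
            length_m: float      # accessible path length
            heading_deg: float   # walking heading along the edge

        @dataclass
        class LandmarkGraph:
            nodes: dict = field(default_factory=dict)   # name -> (x, y) position
            edges: dict = field(default_factory=dict)   # name -> list of outgoing Edges

            def add_node(self, name, x, y):
                self.nodes[name] = (x, y)
                self.edges.setdefault(name, [])

            def add_edge(self, a, b, length_m, heading_deg):
                self.edges[a].append(Edge(b, length_m, heading_deg))

            def match(self, current, walked_m, heading_deg, tol_m=3.0, tol_deg=25.0):
                # Return the landmark most consistent with the dead-reckoned step,
                # or None if no outgoing edge matches within tolerance.
                best, best_err = None, float("inf")
                for e in self.edges.get(current, []):
                    d_len = abs(e.length_m - walked_m)
                    d_hdg = abs(e.heading_deg - heading_deg)
                    err = d_len / tol_m + d_hdg / tol_deg
                    if d_len <= tol_m and d_hdg <= tol_deg and err < best_err:
                        best, best_err = e.to, err
                return best

        # Illustrative corridor: door -> turn -> staircase
        g = LandmarkGraph()
        g.add_node("door_101", 0.0, 0.0)
        g.add_node("turn_A", 12.0, 0.0)
        g.add_node("stairs_B", 12.0, 8.0)
        g.add_edge("door_101", "turn_A", 12.0, 90.0)
        g.add_edge("turn_A", "stairs_B", 8.0, 0.0)
        print(g.match("door_101", walked_m=11.4, heading_deg=88.0))   # -> "turn_A"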

  7. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
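
    For the constant-coefficient one-dimensional case, the weighted-average idea mentioned above leads to the classical fourth-order compact (Numerov-type) stencil for u'' + k^2 u = f, in which the undifferentiated term and the source are averaged over the stencil rather than taken pointwise. The sketch below (Python) assembles and solves this scheme on a uniform grid with Dirichlet boundaries; the test problem is an illustrative assumption.

        import numpy as np

        def solve_helmholtz_1d(k, f, x, uL, uR):
            # Interior equation of the compact scheme on a uniform grid of spacing h:
            #   (u[j-1] - 2u[j] + u[j+1])/h**2 + k**2*(u[j-1] + 10*u[j] + u[j+1])/12
            #       = (f[j-1] + 10*f[j] + f[j+1])/12
            n, h = x.size, x[1] - x[0]
            A, b = np.zeros((n, n)), np.zeros(n)
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = uL, uR
            off  = 1.0 / h**2 + k**2 / 12.0
            diag = -2.0 / h**2 + 10.0 * k**2 / 12.0
            for j in range(1, n - 1):
                A[j, j - 1], A[j, j], A[j, j + 1] = off, diag, off
                b[j] = (f[j - 1] + 10.0 * f[j] + f[j + 1]) / 12.0
            return np.linalg.solve(A, b)

        # Illustrative check against the exact solution u(x) = sin(k*x), i.e. f = 0
        k = 5.3
        x = np.linspace(0.0, 1.0, 81)
        u = solve_helmholtz_1d(k, np.zeros_like(x), x, uL=0.0, uR=np.sin(k))
        err = np.max(np.abs(u - np.sin(k * x)))   # decreases at fourth order in h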

  8. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature), written between 1349 and 1350.

  9. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  10. Accurate density functional thermochemistry for larger molecules.

    SciTech Connect

    Raghavachari, K.; Stefanov, B. B.; Curtiss, L. A.; Lucent Tech.

    1997-06-20

    Density functional methods are combined with isodesmic bond separation reaction energies to yield accurate thermochemistry for larger molecules. Seven different density functionals are assessed for the evaluation of heats of formation, ΔHf°(298 K), for a test set of 40 molecules composed of H, C, O and N. The use of bond separation energies results in a dramatic improvement in the accuracy of all the density functionals. The B3-LYP functional has the smallest mean absolute deviation from experiment (1.5 kcal/mol).
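
    As a numerical illustration of the isodesmic bond-separation approach (not the paper's own worked example), consider ethanol and its bond separation reaction CH3CH2OH + CH4 -> CH3CH3 + CH3OH: the computed reaction enthalpy is combined with experimental heats of formation of the small reference molecules to predict the heat of formation of the target. The reference values below are approximate literature numbers and the DFT reaction enthalpy is a placeholder.

        # Isodesmic bond-separation estimate of a heat of formation (kcal/mol).
        # Reaction:  CH3CH2OH + CH4  ->  CH3CH3 + CH3OH
        dHf_exp = {"CH4": -17.8, "C2H6": -20.0, "CH3OH": -48.0}   # approx. experimental values

        dH_rxn_dft = 6.0   # placeholder DFT reaction enthalpy (products - reactants)

        # dH_rxn = [dHf(C2H6) + dHf(CH3OH)] - [dHf(ethanol) + dHf(CH4)], rearranged:
        dHf_ethanol = dHf_exp["C2H6"] + dHf_exp["CH3OH"] - dHf_exp["CH4"] - dH_rxn_dft
        print(dHf_ethanol)   # about -56 kcal/mol with these inputs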

  11. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material. PMID:11366835

  12. Universality: Accurate Checks in Dyson's Hierarchical Model

    NASA Astrophysics Data System (ADS)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10⁻⁸ and Δ = 0.4259469 ± 10⁻⁷, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
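
    The leading-exponent estimate quoted above rests on linear fitting of the susceptibility's divergence, χ(β) ~ C(βc - β)^(-γ) as β approaches βc, so that log χ is linear in log(βc - β). A minimal sketch of such a fit follows (Python); the synthetic data, the βc value and the noise level are assumptions for illustration only, not the hierarchical-model calculation itself.

        import numpy as np

        def fit_gamma(beta, chi, beta_c):
            # chi ~ C * (beta_c - beta)**(-gamma)  =>  straight line in log-log coordinates
            slope, _ = np.polyfit(np.log(beta_c - beta), np.log(chi), 1)
            return -slope

        # Synthetic data with gamma = 1.2991 and tiny multiplicative noise
        rng = np.random.default_rng(0)
        beta_c = 1.179                                   # illustrative value only
        beta = beta_c - np.logspace(-6.0, -2.0, 40)
        chi = 2.3 * (beta_c - beta) ** (-1.2991) * (1.0 + 1.0e-6 * rng.standard_normal(40))
        print(fit_gamma(beta, chi, beta_c))              # ~1.2991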

  13. Precision Adjustable Liquid Regulator (ALR)

    NASA Astrophysics Data System (ADS)

    Meinhold, R.; Parker, M.

    2004-10-01

    A passive mechanical regulator has been developed for the control of fuel or oxidizer flow to a 450N class bipropellant engine for use on commercial and interplanetary spacecraft. There are several potential benefits to the propulsion system, depending on mission requirements and spacecraft design. This system design enables more precise control of main engine mixture ratio and inlet pressure, and simplifies the pressurization system by transferring the function of main engine flow rate control from the pressurization/propellant tank assemblies to a single component, the ALR. This design can also reduce the thermal control requirements on the propellant tanks, avoid costly qualification testing of bipropellant engines for missions with more stringent requirements, and reduce the overall propulsion system mass and power usage. In order to realize these benefits, the ALR must meet stringent design requirements. The main advantage of this regulator over other units available in the market is that it can regulate about its nominal set point to within +/-0.85% and change its regulation set point in flight by +/-4% about that nominal point. The set point change is handled actively via a stepper motor driven actuator, which converts rotary into linear motion to adjust the spring preload acting on the regulator. Once adjusted to a particular set point, the actuator remains in its final position unpowered, and the regulator passively maintains outlet pressure. The very precise outlet regulation pressure is possible due to new technology developed by Moog, Inc., which reduces typical regulator mechanical hysteresis to near zero. The ALR requirements specified an outlet pressure set point range from 225 to 255 psi, and the equivalent water flow rates required were in the 0.17 lb/sec range. The regulation output pressure is maintained at +/-2 psi about the set point over a differential pressure (ΔP) of 20 to over 100 psid. Maximum upstream system pressure was specified at 320 psi
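
    As a rough sketch of the set-point mechanics described above, a generic spring-loaded regulator force balance (not Moog's design data) relates the regulated outlet pressure to the spring preload force divided by the effective sensing area, so the stepper-driven preload change maps directly to a set-point shift. All numbers below are illustrative assumptions.

        # Generic force balance for a spring-loaded pressure regulator (illustrative only).
        k_spring = 800.0        # lbf/in, spring rate (assumed)
        area_eff = 0.35         # in^2, effective sensing area (assumed)
        lead = 0.05             # in/rev, actuator screw lead (assumed)
        steps_per_rev = 200     # stepper resolution (assumed)

        def set_point_psi(preload_in):
            # Outlet set point ~ spring preload force / effective sensing area
            return k_spring * preload_in / area_eff

        dp_per_step = k_spring * (lead / steps_per_rev) / area_eff
        print(set_point_psi(0.105))   # ~240 psi for ~0.105 in of preload
        print(dp_per_step)            # ~0.57 psi of set-point change per stepper step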

  14. Ultralow thermal sensitivity of phase and propagation delay in hollow core optical fibres

    PubMed Central

    Slavík, Radan; Marra, Giuseppe; Fokoua, Eric Numkam; Baddela, Naveen; Wheeler, Natalie V.; Petrovich, Marco; Poletti, Francesco; Richardson, David J.

    2015-01-01

    Propagation time through an optical fibre changes with the environment, e.g., a change in temperature alters the fibre length and its refractive index. These changes have negligible impact in many key fibre applications, e.g., telecommunications, however, they can be detrimental in many others. Examples are fibre-based interferometry (e.g., for precise measurement and sensing) and fibre-based transfer and distribution of accurate time and frequency. Here we show through two independent experiments that hollow-core photonic bandgap fibres have a significantly smaller sensitivity to temperature variations than traditional solid-core fibres. The 18 times improvement observed, over 3 times larger than previously reported, makes them the most environmentally insensitive fibre technology available and a promising candidate for many next-generation fibre systems applications that are sensitive to drifts in optical phase or absolute propagation delay. PMID:26490424
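
    For orientation, the thermal coefficient of delay behind this sensitivity can be estimated per unit length from (1/c)(dn/dT + n·α): for standard solid-core silica fibre the thermo-optic term dominates and gives roughly 40 ps per km per kelvin, so an ~18-fold reduction corresponds to a few ps/(km·K). The sketch below evaluates this expression; the material constants are typical textbook values, and the hollow-core figure is simply the solid-core result scaled by the reported factor rather than a measured value.

        # Thermal coefficient of propagation delay per unit length: (1/c) * (dn/dT + n*alpha)
        c = 2.998e8         # m/s
        n = 1.468           # group index of standard single-mode fibre (approx.)
        dn_dT = 1.06e-5     # 1/K, thermo-optic coefficient of silica (approx.)
        alpha = 5.5e-7      # 1/K, thermal expansion coefficient of silica (approx.)

        tcd_solid = (dn_dT + n * alpha) / c       # seconds per metre per kelvin
        print(tcd_solid * 1e15)                   # ~38 ps/(km*K), i.e. ~38 fs/(m*K)
        print(tcd_solid * 1e15 / 18.0)            # ~2 ps/(km*K) at the reported 18x improvement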

  15. Ultralow thermal sensitivity of phase and propagation delay in hollow core optical fibres

    NASA Astrophysics Data System (ADS)

    Slavík, Radan; Marra, Giuseppe; Fokoua, Eric Numkam; Baddela, Naveen; Wheeler, Natalie V.; Petrovich, Marco; Poletti, Francesco; Richardson, David J.

    2015-10-01

    Propagation time through an optical fibre changes with the environment, e.g., a change in temperature alters the fibre length and its refractive index. These changes have negligible impact in many key fibre applications, e.g., telecommunications, however, they can be detrimental in many others. Examples are fibre-based interferometry (e.g., for precise measurement and sensing) and fibre-based transfer and distribution of accurate time and frequency. Here we show through two independent experiments that hollow-core photonic bandgap fibres have a significantly smaller sensitivity to temperature variations than traditional solid-core fibres. The 18 times improvement observed, over 3 times larger than previously reported, makes them the most environmentally insensitive fibre technology available and a promising candidate for many next-generation fibre systems applications that are sensitive to drifts in optical phase or absolute propagation delay.

  16. New fabrication technique for highly sensitive qPlus sensor with well-defined spring constant.

    PubMed

    Labidi, Hatem; Kupsta, Martin; Huff, Taleana; Salomons, Mark; Vick, Douglas; Taucer, Marco; Pitters, Jason; Wolkow, Robert A

    2015-11-01

    A new technique for the fabrication of a highly sensitive qPlus sensor for atomic force microscopy (AFM) is described. A focused ion beam was used to cut a sharp micro-tip from an electrochemically etched tungsten wire and weld it onto a bare quartz tuning fork. The resulting qPlus sensor exhibits a high resonance frequency and quality factor, allowing increased force-gradient sensitivity. Its spring constant can be determined precisely, which allows accurate quantitative AFM measurements. The sensor is shown to be very stable and can undergo the usual UHV tip cleaning, including e-beam and field evaporation, as well as in situ STM tip treatment. Preliminary results of STM and AFM atomic-resolution imaging at 4.5 K of the silicon Si(111)-7×7 surface are presented.
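
    For reference, the spring constant that the abstract says can be determined precisely is commonly estimated for a rectangular quartz tuning-fork prong from the beam formula k = E·w·t³/(4·L³). The sketch below evaluates it (Python); the dimensions are typical tuning-fork values chosen for illustration, not those of the sensor described in the paper.

        # Beam spring constant of a qPlus-style quartz tuning-fork prong: k = E*w*t**3 / (4*L**3)
        E_quartz = 78.7e9   # Pa, Young's modulus of quartz (approx.)
        w = 0.13e-3         # m, prong width (illustrative)
        t = 0.214e-3        # m, prong thickness in the oscillation direction (illustrative)
        L = 2.4e-3          # m, free prong length (illustrative)

        k = E_quartz * w * t**3 / (4.0 * L**3)
        print(round(k))     # ~1800 N/m for these dimensions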

  17. A simple high-precision Jacob's staff design for the high-resolution stratigrapher

    USGS Publications Warehouse

    Elder, W.P.

    1989-01-01

    The new generation of high-resolution stratigraphic research depends upon detailed bed-by-bed analysis to enhance regional correlation potential. The standard Jacob's staff is not an efficient and precise tool for measuring thin-bedded strata. The high-precision Jacob's staff design presented and illustrated in this paper meets the qualifications required of such an instrument. The prototype of this simple design consists of a sliding bracket that holds a Brunton-type compass at right angles to a ruled-off staff. This instrument provides rapid and accurate measurement of both thick- and thin-bedded sequences, thus decreasing field time and increasing stratigraphic precision. -Author
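
    As a small worked example of the measurement geometry (not taken from the paper): with the staff held vertical over level ground, each vertical interval corresponds to a true stratigraphic thickness of h·cos(dip), whereas with the compass-equipped staff inclined perpendicular to bedding each staff length reads true thickness directly. A sketch with an illustrative dip:

        import math

        def true_thickness_vertical_staff(vertical_interval_m, dip_deg):
            # True stratigraphic thickness for a vertical measurement over beds
            # dipping at dip_deg (level ground assumed).
            return vertical_interval_m * math.cos(math.radians(dip_deg))

        # 1.5 m staff increment held vertical, beds dipping 24 degrees
        print(true_thickness_vertical_staff(1.5, 24.0))   # ~1.37 m of section per increment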

  18. Analysis of photo-pattern sensitivity in patients with Pokemon-related symptoms.

    PubMed

    Funatsuka, Makoto; Fujita, Michinari; Shirakawa, Seigo; Oguni, Hirokazu; Osawa, Makiko

    2003-01-01

    This study was designed to analyze photo-pattern sensitivity in patients who developed acute neurologic symptoms associated with watching an animated television program, "Pokemon." The 18 patients (13 females and five males) underwent electroencephalograms and photo-pattern stimulation testing, including special stimulation test batteries (strobe-pattern test and cathode ray tube-pattern test). Photo-pattern sensitivity was confirmed in 16 patients with and without seizure episodes. The strobe-pattern test including a white flickering light test (with eyes open, closed, and open or closed), and the cathode ray tube-pattern test each induced a photo-paroxysmal response in more than 80% of patients. However, with the eyes closed only, as is common in Japan, the photo-paroxysmal response induction rate with a white flickering light stimulus was significantly lower (43%). In the cathode ray tube-pattern test, higher spatial frequencies produced higher rates of photo-paroxysmal response induction. It was demonstrated that underlying photo-pattern sensitivity is more accurately investigated by our method than by standard intermittent photic stimulation alone. By characterizing underlying photo-pattern sensitivity and identifying predisposing factors more precisely, we can develop better guidelines for prevention of a second "Pokemon" incident. According to the results of the present cathode ray tube-pattern test, pattern sensitivity (especially spatial resolution) appears to also be involved in Pokemon-related symptoms, in addition to chromatic sensitivity. PMID:12657417

  19. Analysis of photo-pattern sensitivity in patients with Pokemon-related symptoms.

    PubMed

    Funatsuka, Makoto; Fujita, Michinari; Shirakawa, Seigo; Oguni, Hirokazu; Osawa, Makiko

    2003-01-01

    This study was designed to analyze photo-pattern sensitivity in patients who developed acute neurologic symptoms associated with watching an animated television program, "Pokemon." The 18 patients (13 females and five males) underwent electroencephalograms and photo-pattern stimulation testing, including special stimulation test batteries (strobe-pattern test and cathode ray tube-pattern test). Photo-pattern sensitivity was confirmed in 16 patients with and without seizure episodes. The strobe-pattern test including a white flickering light test (with eyes open, closed, and open or closed), and the cathode ray tube-pattern test each induced a photo-paroxysmal response in more than 80% of patients. However, with the eyes closed only, as is common in Japan, the photo-paroxysmal response induction rate with a white flickering light stimulus was significantly lower (43%). In the cathode ray tube-pattern test, higher spatial frequencies produced higher rates of photo-paroxysmal response induction. It was demonstrated that underlying photo-pattern sensitivity is more accurately investigated by our method than by standard intermittent photic stimulation alone. By characterizing underlying photo-pattern sensitivity and identifying predisposing factors more precisely, we can develop better guidelines for prevention of a second "Pokemon" incident. According to the results of the present cathode ray tube-pattern test, pattern sensitivity (especially spatial resolution) appears to also be involved in Pokemon-related symptoms, in addition to chromatic sensitivity.

  20. PRECISION POINTING OF IBEX-Lo OBSERVATIONS

    SciTech Connect

    Hlond, M.; Bzowski, M.; Moebius, E.; Kucharek, H.; Heirtzler, D.; Schwadron, N. A.; O'Neill, M. E.; Clark, G.; Crew, G. B.; Fuselier, S.; McComas, D. J.

    2012-02-01

    Post-launch boresight of the IBEX-Lo instrument on board the Interstellar Boundary Explorer (IBEX) is determined based on IBEX-Lo Star Sensor observations. Accurate information on the boresight of the neutral gas camera is essential for precise determination of interstellar gas flow parameters. Utilizing spin-phase information from the spacecraft attitude control system (ACS), positions of stars observed by the Star Sensor during two years of IBEX measurements were analyzed and compared with positions obtained from a star catalog. No statistically significant differences were observed beyond those expected from the pre-launch uncertainty in the Star Sensor mounting. Based on the star observations and their positions in the spacecraft reference system, pointing of the IBEX satellite spin axis was determined and compared with the pointing obtained from the ACS. Again, no statistically significant deviations were observed. We conclude that no systematic correction for boresight geometry is needed in the analysis of IBEX-Lo observations to determine neutral interstellar gas flow properties. A stack-up of uncertainties in attitude knowledge shows that the instantaneous IBEX-Lo pointing is determined to within ~0.1° in both spin angle and elevation using either the Star Sensor or the ACS. Further, the Star Sensor can be used to independently determine the spacecraft spin axis. Thus, Star Sensor data can be used reliably to correct the spin phase when the Star Tracker (used by the ACS) is disabled by bright objects in its field of view. The Star Sensor can also determine the spin axis during most orbits and thus provides redundancy for the Star Tracker.