Sample records for accurate precise sensitive

  1. Accurate, Sensitive, and Precise Multiplexed Proteomics Using the Complement Reporter Ion Cluster

    DOE PAGES

    Sonnett, Matthew; Yeung, Eyan; Wuhr, Martin

    2018-03-09

    Quantitative analysis of proteomes across multiple time points, organelles, and perturbations is essential for understanding both fundamental biology and disease states. The development of isobaric tags (e.g., TMT) has enabled the simultaneous measurement of peptide abundances across several different conditions. These multiplexed approaches are promising in principle because of advantages in throughput and measurement quality. However, in practice existing multiplexing approaches suffer from key limitations. In its simple implementation (TMT-MS2), measurements are distorted by chemical noise, leading to poor measurement accuracy. The current state of the art (TMT-MS3) addresses this, but requires specialized quadrupole-iontrap-Orbitrap instrumentation. The complement reporter ion approach (TMTc) produces high-accuracy measurements and is compatible with many more instruments, such as quadrupole-Orbitraps, but the required deconvolution of the TMTc cluster leads to poor measurement precision. Here, we introduce TMTc+, which adds modeling of the MS2 isolation step to the deconvolution algorithm. The resulting measurements are comparable in precision to TMT-MS3/MS2. The improved duty cycle and lower filtering requirements make TMTc+ more sensitive than TMT-MS3 and comparable with TMT-MS2. At the same time, unlike TMT-MS2, TMTc+ is exquisitely able to distinguish signal from chemical noise, even outperforming TMT-MS3. Lastly, we compare TMTc+ to quantitative label-free proteomics of total HeLa lysate and find that TMTc+ quantifies 7.8k versus 3.9k proteins in a 5-plex sample, while the median coefficient of variation improves from 13% to 4%. TMTc+ thus advances quantitative proteomics by enabling accurate, sensitive, and precise multiplexed experiments on more commonly used instruments.
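
    As background for the deconvolution step mentioned in this abstract, the sketch below shows a generic non-negative least-squares deconvolution of an overlapping reporter ion cluster. It is a minimal illustration only, not the authors' TMTc+ algorithm (which additionally models the MS2 isolation step); the impurity matrix and channel abundances are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical convolution matrix: column j is the isotopic envelope that one
# unit of channel j contributes to the observed complement reporter ion cluster.
A = np.array([
    [0.92, 0.05, 0.00, 0.00, 0.00],
    [0.07, 0.90, 0.06, 0.00, 0.00],
    [0.01, 0.05, 0.89, 0.06, 0.00],
    [0.00, 0.00, 0.05, 0.90, 0.07],
    [0.00, 0.00, 0.00, 0.04, 0.93],
])

true_abundances = np.array([1.0, 0.5, 2.0, 0.0, 1.5])
observed = A @ true_abundances + np.random.default_rng(1).normal(0, 0.01, 5)

# Non-negative least squares recovers per-channel abundances from the cluster.
estimated, _ = nnls(A, observed)
print(np.round(estimated, 2))
```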

  2. Accurate, Sensitive, and Precise Multiplexed Proteomics Using the Complement Reporter Ion Cluster

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonnett, Matthew; Yeung, Eyan; Wuhr, Martin

    Quantitative analysis of proteomes across multiple time points, organelles, and perturbations is essential for understanding both fundamental biology and disease states. The development of isobaric tags (e.g., TMT) has enabled the simultaneous measurement of peptide abundances across several different conditions. These multiplexed approaches are promising in principle because of advantages in throughput and measurement quality. However, in practice existing multiplexing approaches suffer from key limitations. In its simple implementation (TMT-MS2), measurements are distorted by chemical noise, leading to poor measurement accuracy. The current state of the art (TMT-MS3) addresses this, but requires specialized quadrupole-iontrap-Orbitrap instrumentation. The complement reporter ion approach (TMTc) produces high-accuracy measurements and is compatible with many more instruments, such as quadrupole-Orbitraps, but the required deconvolution of the TMTc cluster leads to poor measurement precision. Here, we introduce TMTc+, which adds modeling of the MS2 isolation step to the deconvolution algorithm. The resulting measurements are comparable in precision to TMT-MS3/MS2. The improved duty cycle and lower filtering requirements make TMTc+ more sensitive than TMT-MS3 and comparable with TMT-MS2. At the same time, unlike TMT-MS2, TMTc+ is exquisitely able to distinguish signal from chemical noise, even outperforming TMT-MS3. Lastly, we compare TMTc+ to quantitative label-free proteomics of total HeLa lysate and find that TMTc+ quantifies 7.8k versus 3.9k proteins in a 5-plex sample, while the median coefficient of variation improves from 13% to 4%. TMTc+ thus advances quantitative proteomics by enabling accurate, sensitive, and precise multiplexed experiments on more commonly used instruments.

  3. High spatial precision nano-imaging of polarization-sensitive plasmonic particles

    NASA Astrophysics Data System (ADS)

    Liu, Yunbo; Wang, Yipei; Lee, Somin Eunice

    2018-02-01

    Precise polarimetric imaging of polarization-sensitive nanoparticles is essential for resolving their accurate spatial positions beyond the diffraction limit. However, conventional technologies currently suffer from beam deviation errors which cannot be corrected beyond the diffraction limit. To overcome this issue, we experimentally demonstrate a spatially stable nano-imaging system for polarization-sensitive nanoparticles. In this study, we show that by integrating a voltage-tunable imaging variable polarizer with optical microscopy, we are able to suppress beam deviation errors. We expect that this nano-imaging system should allow for acquisition of accurate positional and polarization information from individual nanoparticles in applications where real-time, high precision spatial information is required.

  4. Precise and accurate isotope ratio measurements by ICP-MS.

    PubMed

    Becker, J S; Dietze, H J

    2000-09-01

    The precise and accurate determination of isotope ratios by inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) is important for quite different application fields (e.g. for isotope ratio measurements of stable isotopes in nature, especially for the investigation of isotope variation in nature or age dating, for determining isotope ratios of radiogenic elements in the nuclear industry, quality assurance of fuel material, for reprocessing plants, nuclear material accounting and radioactive waste control, for tracer experiments using stable isotopes or long-lived radionuclides in biological or medical studies). Thermal ionization mass spectrometry (TIMS), which used to be the dominant analytical technique for precise isotope ratio measurements, is being increasingly replaced for isotope ratio measurements by ICP-MS due to its excellent sensitivity, precision and good accuracy. Instrumental progress in ICP-MS was achieved by the introduction of the collision cell interface in order to dissociate many disturbing argon-based molecular ions, thermalize the ions and neutralize the disturbing argon ions of the plasma gas (Ar+). The application of the collision cell in ICP-QMS results in higher ion transmission, improved sensitivity and better precision of isotope ratio measurements compared to quadrupole ICP-MS without the collision cell [e.g., for 235U/238U ≈ 1 (10 µg L^-1 uranium), 0.07% relative standard deviation (RSD) vs. 0.2% RSD in short-term measurements (n = 5)]. A significant instrumental improvement for ICP-MS is the multicollector device (MC-ICP-MS), which provides better precision of isotope ratio measurements (with a precision of up to 0.002% RSD). CE- and HPLC-ICP-MS are used for the separation of isobaric interferences of long-lived radionuclides and stable isotopes in the determination of spallation nuclide abundances in an irradiated tantalum target.

  5. Advances in Multicollector ICPMS for precise and accurate isotope ratio measurements of Uranium isotopes

    NASA Astrophysics Data System (ADS)

    Bouman, C.; Lloyd, N. S.; Schwieters, J.

    2011-12-01

    The accurate and precise determination of uranium isotopes is challenging because of the large dynamic range posed by the U isotope abundances and the limited available sample material. Various mass spectrometric techniques are used for the measurement of U isotopes, of which TIMS is the most accepted and accurate. Multicollector inductively coupled plasma mass spectrometry (MC-ICPMS) can offer higher productivity than TIMS, but is traditionally limited by low efficiency of sample utilisation. This contribution discusses progress in MC-ICPMS for detecting 234U, 235U, 236U and 238U in various uranium reference materials from IRMM and NBL. The Thermo Scientific NEPTUNE Plus with Jet Interface offers a modified dry plasma ICP interface using a large interface pump combined with a special set of sample and skimmer cones, giving ultimate sensitivity for all elements across the mass range. For uranium, an ion yield of >3% was reported previously [1]. The NEPTUNE Plus also offers Multi Ion Counting using discrete dynode electron multipliers as well as two high-abundance-sensitivity filters to discriminate against peak tailing effects on 234U and 236U originating from the major uranium beams. These improvements in sensitivity and dynamic range allow accurate measurements of 234U, 235U and 236U abundances on very small samples and at low concentration. In our approach, the minor isotopes 234U and 236U were detected on ion counters with high-abundance-sensitivity filters, whereas 235U and 238U were detected on Faraday cups, using a high-gain current amplifier (10^12 Ω) for 235U. Precisions and accuracies for 234U and 236U were down to ~1%; for 235U, sub-permil levels were reached.
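
    For context on the Faraday cup measurement mentioned above, the standard conversion from amplifier output voltage to ion beam current and ion arrival rate is sketched below. Only the 10^12 Ω feedback resistance comes from the record; the 50 mV signal level is an invented example.

```python
E_CHARGE = 1.602176634e-19      # elementary charge, C

def faraday_signal(voltage_V, resistance_ohm=1e12):
    """Ion beam current (A) and equivalent ion arrival rate (ions/s)
    for a Faraday cup read through a high-gain current amplifier."""
    current = voltage_V / resistance_ohm          # Ohm's law
    return current, current / E_CHARGE

current, rate = faraday_signal(0.050)             # assumed 50 mV 235U signal
print(f"{current:.1e} A  ->  {rate:.2e} ions/s")  # 5.0e-14 A -> ~3.1e5 ions/s
```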

  6. Digital PCR Modeling for Maximal Sensitivity, Dynamic Range and Measurement Precision

    PubMed Central

    Majumdar, Nivedita; Wessel, Thomas; Marks, Jeffrey

    2015-01-01

    The great promise of digital PCR is the potential for unparalleled precision enabling accurate measurements for genetic quantification. A challenge associated with digital PCR experiments, when testing unknown samples, is to perform experiments at dilutions allowing the detection of one or more targets of interest at a desired level of precision. While theory states that optimal precision (Po) is achieved by targeting ~1.59 mean copies per partition (λ), and that dynamic range (R) includes the space spanning one positive (λL) to one negative (λU) result from the total number of partitions (n), these results are tempered for the practitioner seeking to construct digital PCR experiments in the laboratory. A mathematical framework is presented elucidating the relationships between precision, dynamic range, number of partitions, interrogated volume, and sensitivity in digital PCR. The impact that false reaction calls and volumetric variation have on sensitivity and precision is next considered. The resultant effects on sensitivity and precision are established via Monte Carlo simulations reflecting the real-world likelihood of encountering such scenarios in the laboratory. The simulations provide insight to the practitioner on how to adapt experimental loading concentrations to counteract any one of these conditions. The framework is augmented with a method of extending the dynamic range of digital PCR, with and without increasing n, via the use of dilutions. An example experiment demonstrating the capabilities of the framework is presented enabling detection across 3.33 logs of starting copy concentration. PMID:25806524
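
    The quoted optimum of ~1.59 mean copies per partition follows from standard Poisson statistics of digital PCR. The sketch below, which is not the paper's full framework and assumes an arbitrary partition count, reproduces that value by minimising the relative uncertainty of the estimated λ.

```python
import numpy as np

def dpcr_cv(lam, n):
    """Relative uncertainty (CV) of the Poisson estimate of lambda
    (mean copies per partition) for a digital PCR run with n partitions."""
    p = 1.0 - np.exp(-lam)                    # expected fraction of positive partitions
    sd_lam = np.sqrt(p / ((1.0 - p) * n))     # delta-method SD of lambda-hat
    return sd_lam / lam

lams = np.linspace(0.05, 5.0, 1000)
cv = dpcr_cv(lams, n=20000)                   # assumed 20,000 partitions
print(f"lambda minimising CV: {lams[np.argmin(cv)]:.2f}")   # ~1.59
```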

  7. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

    For many decades the accurate and precise determination of isotope ratios has remained a very strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
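
    One of the common mass bias correction models the review alludes to is the exponential (Russell) law. A minimal sketch of how it is applied with an internal reference ratio of known value is given below; the Sr masses and ratio values are illustrative, not taken from the review.

```python
import math

def exponential_mass_bias_correction(r_meas, masses, r_ref_meas, r_ref_true, ref_masses):
    """Correct a measured isotope ratio using the exponential (Russell) law.
    masses / ref_masses are (numerator, denominator) isotope mass pairs."""
    beta = math.log(r_ref_true / r_ref_meas) / math.log(ref_masses[0] / ref_masses[1])
    return r_meas * (masses[0] / masses[1]) ** beta

# Illustrative Sr example: correct 87Sr/86Sr using the canonical 88Sr/86Sr = 8.375209.
corrected = exponential_mass_bias_correction(
    r_meas=0.71000, masses=(86.9088775, 85.9092607),
    r_ref_meas=8.320, r_ref_true=8.375209, ref_masses=(87.9056125, 85.9092607))
print(round(corrected, 5))
```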

  8. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    PubMed

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  9. Hydrogen atoms can be located accurately and precisely by x-ray crystallography

    PubMed Central

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M.; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-01-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A–H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A–H bond lengths with those from neutron measurements for A–H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors. PMID:27386545

  10. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    The aim was to develop a more precise and accurate acupoint location method and to identify a procedure for assessing whether an acupoint had been correctly located. On the face, acupoint locations obtained from different acupuncture experts were combined to derive the most precise and accurate acupoint location values, based on a consistency information fusion algorithm applied within a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, systematic error affected the general weight calculation. First, we corrected each expert's own systematic acupoint location error to obtain a rational quantification of the degree of consistency support for each expert's acupoint locations and pointwise variable-precision fusion results, raising each expert's acupoint location fusion error to pointwise variable precision. Then, we made more effective use of the measured characteristics of the different experts' acupoint locations to improve the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the acupuncture experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  11. Accurate Filtering of Privacy-Sensitive Information in Raw Genomic Data.

    PubMed

    Decouchant, Jérémie; Fernandes, Maria; Völp, Marcus; Couto, Francisco M; Esteves-Veríssimo, Paulo

    2018-04-13

    Sequencing thousands of human genomes has enabled breakthroughs in many areas, among them precision medicine, the study of rare diseases, and forensics. However, mass collection of such sensitive data entails enormous risks if not protected to the highest standards. In this article, we take the position that post-alignment privacy is not enough and that data should be automatically protected as early as possible in the genomics workflow, ideally immediately after the data are produced. We show that a previous approach for filtering short reads cannot extend to long reads and present a novel filtering approach that classifies raw genomic data (i.e., data whose location and content are not yet determined) into privacy-sensitive (i.e., more affected by a successful privacy attack) and non-privacy-sensitive information. Such a classification allows the fine-grained and automated adjustment of protective measures to mitigate the possible consequences of exposure, in particular when relying on public clouds. We present the first filter that can be applied indistinctly to reads of any length, i.e., making it usable with any recent or future sequencing technology. The filter is accurate, in the sense that it detects all known sensitive nucleotides except those located in highly variable regions (fewer than 10 nucleotides remain undetected per genome, instead of 100,000 in previous works). It has far fewer false positives than previously known methods (10% instead of 60%) and can detect sensitive nucleotides despite sequencing errors (86% detected instead of 56% with 2% of mutations). Finally, practical experiments demonstrate high performance, both in terms of throughput and memory consumption. Copyright © 2018. Published by Elsevier Inc.

  12. Precise and accurate assay of pregnenolone and five other neurosteroids in monkey brain tissue by LC-MS/MS.

    PubMed

    Dury, Alain Y; Ke, Yuyong; Labrie, Fernand

    2016-09-01

    A series of steroids present in the brain have been named "neurosteroids" following the possibility of their role in the central nervous system impairments such as anxiety disorders, depression, premenstrual dysphoric disorder (PMDD), addiction, or even neurodegenerative disorders such as Alzheimer's and Parkinson's diseases. Study of their potential role requires a sensitive and accurate assay of their concentration in the monkey brain, the closest model to the human. We have thus developed a robust, precise and accurate liquid chromatography-tandem mass spectrometry method for the assay of pregnenolone, pregnanolone, epipregnanolone, allopregnanolone, epiallopregnanolone, and androsterone in the cynomolgus monkey brain. The extraction method includes a thorough sample cleanup using protein precipitation and phospholipid removal, followed by hexane liquid-liquid extraction and a Girard T ketone-specific derivatization. This method opens the possibility of investigating the potential implication of these six steroids in the most suitable animal model for neurosteroid-related research. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Statistical analysis of an RNA titration series evaluates microarray precision and sensitivity on a whole-array basis

    PubMed Central

    Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K

    2006-01-01

    Background Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209

  14. Working memory recall precision is a more sensitive index than span.

    PubMed

    Zokaei, Nahid; Burnett Heyes, Stephanie; Gorgoraptis, Nikos; Budhdeo, Sanjay; Husain, Masud

    2015-09-01

    Delayed adjustment tasks have recently been developed to examine working memory (WM) precision, that is, the resolution with which items maintained in memory are recalled. However, despite their emerging use in experimental studies of healthy people, evaluation of patient populations is sparse. We first investigated the validity of adjustment tasks, comparing precision with classical span measures of memory across the lifespan in 114 people. Second, we asked whether precision measures can potentially provide a more sensitive measure of WM than traditional span measures. Specifically, we tested this hypothesis examining WM in a group with early, untreated Parkinson's disease (PD) and its modulation by subsequent treatment on dopaminergic medication. Span measures correlated with precision across the lifespan: in children, young, and elderly participants. However, they failed to detect changes in WM in PD patients, either pre- or post-treatment initiation. By contrast, recall precision was sensitive enough to pick up such changes. PD patients pre-medication were significantly impaired compared to controls, but improved significantly after 3 months of being established on dopaminergic medication. These findings suggest that precision methods might provide a sensitive means to investigate WM and its modulation by interventions in clinical populations. © 2014 The Authors Journal of Neuropsychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.

  15. Mass spectrometry in Earth sciences: the precise and accurate measurement of time.

    PubMed

    Schaltegger, Urs; Wotzlaw, Jörn-Frederik; Ovtcharova, Maria; Chiaradia, Massimo; Spikings, Richard

    2014-01-01

    Precise determination of the isotopic compositions of a variety of elements is a widely applied tool in the Earth sciences. Isotope ratios are used to quantify rates of geological processes that occurred during the previous 4.5 billion years, and also at the present time. An outstanding application is geochronology, which utilizes the production of radiogenic daughter isotopes by the radioactive decay of parent isotopes. Geochronological tools, involving isotopic analysis of selected elements from the smallest volumes of minerals by thermal ionization mass spectrometry, provide precise and accurate measurements of time throughout the geological history of our planet over nine orders of magnitude, from the accretion of the proto-planetary disk to the timing of the last glaciation. This article summarizes the recent efforts of the Isotope Geochemistry, Geochronology and Thermochronology research group at the University of Geneva to advance the U-Pb geochronological tool to achieve unprecedented precision and accuracy, and presents two examples of its application to two significant open questions in Earth sciences: what are the triggers and timescales of volcanic supereruptions, and what were the causes of mass extinctions in the geological past, driven by global climatic and environmental deterioration?
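
    The geochronological principle summarised here, accumulation of a radiogenic daughter by decay of its parent, reduces to the standard decay age equation. A minimal 206Pb*/238U example is sketched below; the decay constant is the commonly cited Jaffey et al. value, and the input ratio is invented.

```python
import math

LAMBDA_238U = 1.55125e-10   # 238U decay constant in yr^-1 (Jaffey et al., 1971)

def u_pb_age(pb206_u238):
    """Age in years from a radiogenic 206Pb*/238U ratio: t = ln(1 + D/P) / lambda."""
    return math.log(1.0 + pb206_u238) / LAMBDA_238U

print(f"{u_pb_age(0.5) / 1e9:.2f} Gyr")   # a 206Pb*/238U ratio of 0.5 gives ~2.61 Gyr
```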

  16. The Rényi divergence enables accurate and precise cluster analysis for localisation microscopy.

    PubMed

    Staszowska, Adela D; Fox-Roberts, Patrick; Hirvonen, Liisa M; Peddie, Christopher J; Collinson, Lucy M; Jones, Gareth E; Cox, Susan

    2018-06-01

    Clustering analysis is a key technique for quantitatively characterising structures in localisation microscopy images. To build up accurate information about biological structures, it is critical that the quantification is both accurate (close to the ground truth) and precise (has small scatter and is reproducible). Here we describe how the Rényi divergence can be used for cluster radius measurements in localisation microscopy data. We demonstrate that the Rényi divergence can operate with high levels of background and provides results which are more accurate than Ripley's functions, Voronoi tessellation or DBSCAN. Data supporting this research will be made accessible via a web link. Software codes developed for this work can be accessed via http://coxphysics.com/Renyi_divergence_software.zip. Implemented in C++. Correspondence and requests for materials can also be addressed to the corresponding author: adela.staszowska@gmail.com or susan.cox@kcl.ac.uk. Supplementary data are available at Bioinformatics online.
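
    For reference, the quantity underlying the method is the Rényi divergence between two distributions. The snippet below implements only the standard textbook definition for discrete distributions; the paper's cluster-radius estimator built on it is not reproduced here.

```python
import numpy as np

def renyi_divergence(p, q, alpha=2.0):
    """Standard Renyi divergence D_alpha(P || Q), alpha > 0 and alpha != 1,
    for two discrete probability distributions given as arrays."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

print(renyi_divergence([0.7, 0.2, 0.1], [0.4, 0.4, 0.2]))
```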

  17. Integrated multi-ISE arrays with improved sensitivity, accuracy and precision

    NASA Astrophysics Data System (ADS)

    Wang, Chunling; Yuan, Hongyan; Duan, Zhijuan; Xiao, Dan

    2017-03-01

    Increasing use of ion-selective electrodes (ISEs) in the biological and environmental fields has generated demand for high-sensitivity ISEs. However, improving the sensitivities of ISEs remains a challenge because of the limit of the Nernstian slope (59.2/n mV). Here, we present a universal ion detection method using an electronic integrated multi-electrode system (EIMES) that bypasses the Nernstian slope limit of 59.2/n mV, thereby enabling substantial enhancement of the sensitivity of ISEs. The results reveal that the response slope is greatly increased from 57.2 to 1711.3 mV, from 57.3 to 564.7 mV and from 57.7 to 576.2 mV by electronically integrating 30 Cl- electrodes, 10 F- electrodes and 10 glass pH electrodes, respectively. Thus, a tiny change in the ion concentration can be monitored, and correspondingly, the accuracy and precision are substantially improved. The EIMES is suited to all types of potentiometric sensors and may pave the way for monitoring of various ions with high accuracy and precision because of its high sensitivity.
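
    The reported super-Nernstian slopes are consistent with the integrated system summing the EMFs of the individual electrodes in series; this is a back-of-the-envelope reading of the abstract, not the authors' stated derivation, and the arithmetic is sketched below.

```python
# Single-electrode (near-Nernstian) slopes quoted in the abstract, mV per decade.
single_slopes = {"Cl-": 57.2, "F-": 57.3, "pH glass": 57.7}
n_electrodes = {"Cl-": 30, "F-": 10, "pH glass": 10}

for ion, slope in single_slopes.items():
    combined = n_electrodes[ion] * slope      # assumption: EMFs add in series
    print(f"{ion}: {n_electrodes[ion]} x {slope} = {combined:.1f} mV/decade")
# Cl-: 30 x 57.2 = 1716 mV vs. 1711.3 mV reported; F-: 573 vs. 564.7; pH: 577 vs. 576.2
```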

  18. Accurate, precise, and efficient theoretical methods to calculate anion-π interaction energies in model structures.

    PubMed

    Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei

    2015-01-13

    A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), the local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functionals). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For the 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta-GGA functionals.

  19. Calibration of gyro G-sensitivity coefficients with FOG monitoring on precision centrifuge

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Yang, Yanqiang; Li, Baoguo; Liu, Ming

    2017-07-01

    The advantages of mechanical gyros, such as high precision, endurance and reliability, make them widely used as the core parts of inertial navigation systems (INS) utilized in the fields of aeronautics, astronautics and underground exploration. In a high-g environment, the accuracy of gyros is degraded. Therefore, the calibration and compensation of the gyro G-sensitivity coefficients is essential when the INS operates in a high-g environment. A precision centrifuge with a counter-rotating platform is the typical equipment for calibrating the gyro, as it can generate large centripetal acceleration and keep the angular rate close to zero; however, its performance is seriously restricted by the angular perturbation in the high-speed rotating process. To reduce the dependence on the precision of the centrifuge and counter-rotating platform, an effective calibration method for the gyro g-sensitivity coefficients under fiber-optic gyroscope (FOG) monitoring is proposed herein. The FOG can efficiently compensate spindle error and improve the anti-interference ability. Harmonic analysis is performed for data processing. Simulations show that the gyro G-sensitivity coefficients can be efficiently estimated to up to 99% of the true value and compensated using a lookup table or fitting method. Repeated tests indicate that the G-sensitivity coefficients can be correctly calibrated when the angular rate accuracy of the precision centrifuge is as low as 0.01%. Verification tests are performed to demonstrate that the attitude errors can be decreased from 0.36° to 0.08° in 200 s. The proposed measuring technology is generally applicable in engineering, as it can reduce the accuracy requirements for the centrifuge and the environment.

  20. Transthoracic echocardiography: an accurate and precise method for estimating cardiac output in the critically ill patient.

    PubMed

    Mercado, Pablo; Maizel, Julien; Beyls, Christophe; Titeca-Beauport, Dimitri; Joris, Magalie; Kontar, Loay; Riviere, Antoine; Bonef, Olivier; Soupison, Thierry; Tribouilloy, Christophe; de Cagny, Bertrand; Slama, Michel

    2017-06-09

    % yielded a sensitivity of 88% and specificity of 66% for detecting a ΔCO-PAC of more than 10%. In critically ill mechanically ventilated patients, CO-TTE is an accurate and precise method for estimating CO. Furthermore, CO-TTE can accurately track variations in CO.

  21. Digital PCR: A Sensitive and Precise Method for KIT D816V Quantification in Mastocytosis.

    PubMed

    Greiner, Georg; Gurbisz, Michael; Ratzinger, Franz; Witzeneder, Nadine; Simonitsch-Klupp, Ingrid; Mitterbauer-Hohendanner, Gerlinde; Mayerhofer, Matthias; Müllauer, Leonhard; Sperr, Wolfgang R; Valent, Peter; Hoermann, Gregor

    2018-03-01

    The analytically sensitive detection of KIT D816V in blood and bone marrow is important for diagnosing systemic mastocytosis (SM). Additionally, precise quantification of the KIT D816V variant allele fraction (VAF) is relevant clinically because it helps to predict multilineage involvement and prognosis in cases of advanced SM. Digital PCR (dPCR) is a promising new method for sensitive detection and accurate quantification of somatic mutations. We performed a validation study of dPCR for KIT D816V on 302 peripheral blood and bone marrow samples from 156 patients with mastocytosis for comparison with melting curve analysis after peptide nucleic acid-mediated PCR clamping (clamp-PCR) and allele-specific quantitative real-time PCR (qPCR). dPCR showed a limit of detection of 0.01% VAF with a mean CV of 8.5% and identified the mutation in 90% of patients compared with 70% for clamp-PCR (P < 0.001). Moreover, dPCR for KIT D816V was highly concordant with qPCR without systematic deviation of results, and confirmed the clinical value of KIT D816V VAF measurements. Thus, patients with advanced SM showed a significantly higher KIT D816V VAF (median, 2.43%) compared with patients with indolent SM (median, 0.14%; P < 0.001). Moreover, dPCR confirmed the prognostic significance of a high KIT D816V VAF regarding survival (P < 0.001). dPCR for KIT D816V provides a high degree of precision and sensitivity combined with the potential for interlaboratory standardization, which is crucial for the implementation of KIT D816V allele burden measurement. Thus, dPCR is suitable as a new method for KIT D816V testing in patients with mastocytosis. © 2017 American Association for Clinical Chemistry.

  22. Fluorescence polarization immunoassays for rapid, accurate, and sensitive determination of mycotoxins

    USDA-ARS?s Scientific Manuscript database

    Analytical methods for the determination of mycotoxins in foods are commonly based on chromatographic techniques (GC, HPLC or LC-MS). Although these methods permit a sensitive and accurate determination of the analyte, they require skilled personnel and are time-consuming, expensive, and unsuitable ...

  23. NCLscan: accurate identification of non-co-linear transcripts (fusion, trans-splicing and circular RNA) with a good balance between sensitivity and precision.

    PubMed

    Chuang, Trees-Juen; Wu, Chan-Shuo; Chen, Chia-Ying; Hung, Li-Yuan; Chiang, Tai-Wei; Yang, Min-Yu

    2016-02-18

    Analysis of RNA-seq data often detects numerous 'non-co-linear' (NCL) transcripts, which comprise sequence segments that are topologically inconsistent with their corresponding DNA sequences in the reference genome. However, detection of NCL transcripts involves two major challenges: removal of false positives arising from alignment artifacts and discrimination between different types of NCL transcripts (trans-spliced, circular or fusion transcripts). Here, we developed a new NCL-transcript-detecting method ('NCLscan'), which utilized a stepwise alignment strategy to almost completely eliminate false calls (>98% precision) without sacrificing true positives, enabling NCLscan to outperform 18 other publicly available tools (including fusion- and circular-RNA-detecting tools) in terms of sensitivity and precision, regardless of the generation strategy of the simulated dataset, type of intragenic or intergenic NCL event, read depth of coverage, read length or expression level of the NCL transcript. With this high accuracy, NCLscan was applied to distinguishing between trans-spliced, circular and fusion transcripts on the basis of poly(A)- and nonpoly(A)-selected RNA-seq data. We showed that circular RNAs were expressed more ubiquitously, more abundantly and less cell-type-specifically than trans-spliced and fusion transcripts. Our study thus describes a robust pipeline for the discovery of NCL transcripts, and sheds light on the fundamental biology of these non-canonical RNA events in the human transcriptome. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
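
    The two figures of merit used throughout this comparison, precision and sensitivity, are the standard detection metrics. A minimal computation from true-positive, false-positive and false-negative counts is shown below; the counts themselves are hypothetical.

```python
def precision_and_sensitivity(tp, fp, fn):
    """Precision = TP / (TP + FP); sensitivity (recall) = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical counts for NCL-transcript calls against a simulated truth set.
prec, sens = precision_and_sensitivity(tp=980, fp=20, fn=120)
print(f"precision = {prec:.3f}, sensitivity = {sens:.3f}")   # 0.980, 0.891
```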

  24. Therapeutic Drug Monitoring of Phenytoin by Simple, Rapid, Accurate, Highly Sensitive and Novel Method and Its Clinical Applications.

    PubMed

    Shaikh, Abdul S; Guo, Ruichen

    2017-01-01

    Phenytoin has very challenging pharmacokinetic properties. To prevent its toxicity and ensure efficacy, continuous therapeutic monitoring is required, yet it is hard to find a simple, accurate, rapid, easily available, economical and highly sensitive assay in a single method for therapeutic monitoring of phenytoin. The present study is directed towards establishing and validating a simpler, rapid, accurate, highly sensitive, novel and environmentally friendly liquid chromatography/mass spectrometry (LC/MS) method for offering rapid and reliable TDM results of phenytoin in epileptic patients to physicians and clinicians for making immediate and rational decisions. Twenty-seven epileptic patients with uncontrolled seizures or suspected of non-compliance or phenytoin toxicity were selected and referred for TDM of phenytoin by neurologists of Qilu Hospital, Jinan, China. The LC/MS assay was used for the therapeutic monitoring of phenytoin on an Agilent 1100 LC/MS system. The mobile phase was a mixture of 5 mM ammonium acetate and methanol (35:65, v/v). A Diamonsil C18 column (150 mm x 4.6 mm, 5 µm) was used for the separation of the analytes in plasma, and samples were prepared by a simple one-step protein precipitation method. The technique was validated according to the guidelines of the International Conference on Harmonisation (ICH). The calibration curve demonstrated good linearity within the 0.2-20 µg/mL concentration range, with the linearity equation y = 0.0667855x + 0.00241785 and a correlation coefficient (R2) of 0.99928. The specificity, recovery, linearity, accuracy, precision and stability results were within the accepted limits. The lower limit of quantitation (LLOQ) was 0.2 µg/mL, which is 12.5 times lower than the currently available enzyme-multiplied immunoassay technique (EMIT) for measurement of phenytoin in epilepsy patients. A rapid, simple, economical, precise, highly sensitive and novel LC/MS assay has thus been established and validated for the therapeutic drug monitoring of phenytoin.

  25. Precisely and Accurately Inferring Single-Molecule Rate Constants

    PubMed Central

    Kinz-Thompson, Colin D.; Bailey, Nevette A.; Gonzalez, Ruben L.

    2017-01-01

    The kinetics of biomolecular systems can be quantified by calculating the stochastic rate constants that govern the biomolecular state versus time trajectories (i.e., state trajectories) of individual biomolecules. To do so, the experimental signal versus time trajectories (i.e., signal trajectories) obtained from observing individual biomolecules are often idealized to generate state trajectories by methods such as thresholding or hidden Markov modeling. Here, we discuss approaches for idealizing signal trajectories and calculating stochastic rate constants from the resulting state trajectories. Importantly, we provide an analysis of how the finite length of signal trajectories restrict the precision of these approaches, and demonstrate how Bayesian inference-based versions of these approaches allow rigorous determination of this precision. Similarly, we provide an analysis of how the finite lengths and limited time resolutions of signal trajectories restrict the accuracy of these approaches, and describe methods that, by accounting for the effects of the finite length and limited time resolution of signal trajectories, substantially improve this accuracy. Collectively, therefore, the methods we consider here enable a rigorous assessment of the precision, and a significant enhancement of the accuracy, with which stochastic rate constants can be calculated from single-molecule signal trajectories. PMID:27793280
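
    As a concrete illustration of how finite trajectory length limits precision, the sketch below estimates a single exponential rate constant from a limited set of dwell times and reports a Bayesian credible interval. It uses a generic exponential/Gamma model with a Jeffreys prior, not the specific inference machinery discussed in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k_true = 2.0                                       # hypothetical rate constant, s^-1
dwells = rng.exponential(1.0 / k_true, size=50)    # 50 dwell times from one trajectory

# Exponential dwell-time likelihood with a Jeffreys (1/k) prior gives
# k | data ~ Gamma(shape=N, rate=sum of dwell times).
N, T = len(dwells), dwells.sum()
posterior = stats.gamma(a=N, scale=1.0 / T)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"k_hat = {N / T:.2f} s^-1, 95% credible interval [{lo:.2f}, {hi:.2f}]")
# Fewer dwell times (a shorter trajectory) widen this interval.
```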

  26. Toward accurate and precise estimates of lion density.

    PubMed

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 (posterior SD 1.310) lions >1 year old per 100 km^2, and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and policy.

  27. Precision manometer gauge

    DOEpatents

    McPherson, Malcolm J.; Bellman, Robert A.

    1984-01-01

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  28. Precision manometer gauge

    DOEpatents

    McPherson, M.J.; Bellman, R.A.

    1982-09-27

    A precision manometer gauge which locates a zero height and a measured height of liquid using an open tube in communication with a reservoir adapted to receive the pressure to be measured. The open tube has a reference section carried on a positioning plate which is moved vertically with machine tool precision. Double scales are provided to read the height of the positioning plate accurately, the reference section being inclined for accurate meniscus adjustment, and means being provided to accurately locate a zero or reference position.

  29. Is digital photography an accurate and precise method for measuring range of motion of the hip and knee?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2017-09-07

    Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
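
    The accuracy and precision definitions used in this study (difference from the motion-capture reference, and proportion of measurements within a tolerance) can be computed directly. A small sketch with made-up angle measurements follows.

```python
import numpy as np

def accuracy_and_precision(measured, reference, tolerance_deg):
    """Accuracy: mean absolute difference from the reference (degrees).
    Precision: fraction of measurements within the stated tolerance."""
    measured = np.asarray(measured, dtype=float)
    diff = np.abs(measured - reference)
    return diff.mean(), np.mean(diff <= tolerance_deg)

# Hypothetical hip-abduction readings against a motion-capture reference of 45 deg.
acc, prec = accuracy_and_precision([44, 47, 43, 49, 45, 52], reference=45.0, tolerance_deg=5.0)
print(f"accuracy = {acc:.1f} deg, precision = {prec:.0%} within 5 deg")
```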

  30. Sensitivity, accuracy, and precision issues in opto-electronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Yokum, Jeffrey S.; Pryputniewicz, Ryszard J.

    2002-06-01

    Sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography based on fiber optics and high-spatial- and high-digital-resolution cameras, are discussed in this paper. It is shown that sensitivity, accuracy, and precision depend on both the effective determination of optical phase and the effective characterization of the illumination-observation conditions. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gages, demonstrating the applicability of quantitative optical metrology techniques to satisfy constantly increasing needs for the study and development of emerging technologies.

  31. Standardizing a simpler, more sensitive and accurate tail bleeding assay in mice

    PubMed Central

    Liu, Yang; Jennings, Nicole L; Dart, Anthony M; Du, Xiao-Jun

    2012-01-01

    AIM: To optimize the experimental protocols for a simple, sensitive and accurate bleeding assay. METHODS: Bleeding assay was performed in mice by tail tip amputation, immersing the tail in saline at 37 °C, continuously monitoring bleeding patterns and measuring bleeding volume from changes in the body weight. Sensitivity and extent of variation of bleeding time and bleeding volume were compared in mice treated with the P2Y receptor inhibitor prasugrel at various doses or in mice deficient of FcRγ, a signaling protein of the glycoprotein VI receptor. RESULTS: We described details of the bleeding assay with the aim of standardizing this commonly used assay. The bleeding assay detailed here was simple to operate and permitted continuous monitoring of bleeding pattern and detection of re-bleeding. We also reported a simple and accurate way of quantifying bleeding volume from changes in the body weight, which correlated well with chemical assay of hemoglobin levels (r2 = 0.990, P < 0.0001). We determined by tail bleeding assay the dose-effect relation of the anti-platelet drug prasugrel from 0.015 to 5 mg/kg. Our results showed that the correlation of bleeding time and volume was unsatisfactory and that compared with the bleeding time, bleeding volume was more sensitive in detecting a partial inhibition of platelet’s haemostatic activity (P < 0.01). Similarly, in mice with genetic disruption of FcRγ as a signaling molecule of P-selectin glycoprotein ligand-1 leading to platelet dysfunction, both increased bleeding volume and repeated bleeding pattern defined the phenotype of the knockout mice better than that of a prolonged bleeding time. CONCLUSION: Determination of bleeding pattern and bleeding volume, in addition to bleeding time, improved the sensitivity and accuracy of this assay, particularly when platelet function is partially inhibited. PMID:24520531
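
    The weight-to-volume conversion implied here requires a value for blood density, which the abstract does not give; the one-liner below assumes roughly 1.05 g/mL, which is an assumption of this sketch rather than a figure from the study.

```python
def bleeding_volume_mL(weight_loss_g, blood_density_g_per_mL=1.05):
    """Convert the body-weight change during tail bleeding to a blood volume.
    The density value is an assumption, not taken from the study."""
    return weight_loss_g / blood_density_g_per_mL

print(f"{bleeding_volume_mL(0.21):.2f} mL")   # a 0.21 g weight loss is ~0.20 mL
```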

  32. High Sensitive Precise 3D Accelerometer for Solar System Exploration with Unmanned Spacecrafts

    NASA Astrophysics Data System (ADS)

    Savenko, Y. V.; Demyanenko, P. O.; Zinkovskiy, Y. F.

    Several space and geophysical tasks require highly sensitive, precise accelerometers with sensitivities on the order of 10^-13 g. These tasks include inertial navigation on Earth and in space; gravimetry near the Earth and in space; geology; geophysics; seismology; and others. Accelerometers (gravimeters and gradientometers) with the required sensitivity are not currently available; the best accelerometers in the world fall short by 4-5 orders of magnitude. A new class of fiber-optic sensors (FOS) with light-pulse modulation has been developed. These sensors have a very high threshold sensitivity and a wide (up to 10 orders of magnitude) dynamic range, and can serve as the basis for measurement units of physical quantities, such as 3D super-high-sensitivity precise accelerometers of linear acceleration suitable for the most demanding requirements. The principle of operation of the FOS is organically combined with digital signal processing. This makes it possible to reduce the accelerometer hardware by using an ordinary airborne or spaceborne computer; to correct the influence of natural, design and technological drawbacks of the FOS on the measured results; to neutralise the influence of extraordinary situations arising during use of the FOS; and to decrease the influence of internal and external destabilising factors on the FOS, such as oscillation of the environmental temperature and instability of the pendulum cycle frequency of the sensitive element of the accelerometer. We conducted a quantitative estimation of the precision capabilities of an analogue FOS within fiber-optic measuring devices (FOMD), for an elementary FOMD with an analogue FOS built on the modern element base of fiber optics (FO), under the following assumptions: absolute parameter stability of the devices in the FOS measuring path; a single transmission band of the registration path; and the maximum possible radiated power launched into the optical fiber (OF). Even under such idealized assumptions, the calculated value reached, in the limit, a minimum inaccuracy of

  33. Is digital photography an accurate and precise method for measuring range of motion of the shoulder and elbow?

    PubMed

    Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C

    2018-03-01

    Accurate measurements of shoulder and elbow motion are required for the management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, shoulder flexion/abduction/internal rotation/external rotation and elbow flexion/extension were measured using visual estimation, goniometry, and digital photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard (motion capture analysis), while precision was defined by the proportion of measurements within the authors' definition of clinical significance (10° for all motions except for elbow extension, where 5° was used). Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although statistically significant differences were found in measurement accuracy between the three techniques, none of these differences met the authors' definition of clinical significance. Precision of the measurements was significantly higher for both digital photography (shoulder abduction [93% vs. 74%, p < 0.001], shoulder internal rotation [97% vs. 83%, p = 0.001], and elbow flexion [93% vs. 65%, p < 0.001]) and goniometry (shoulder abduction [92% vs. 74%, p < 0.001] and shoulder internal rotation [94% vs. 83%, p = 0.008]) than visual estimation. Digital photography was more precise than goniometry for measurements of elbow flexion only [93% vs. 76%, p < 0.001]. There was no clinically significant difference in measurement accuracy between the three techniques for shoulder and elbow motion. Digital photography showed higher measurement precision compared to visual estimation for shoulder abduction, shoulder internal rotation, and elbow flexion.

  34. Portable high precision pressure transducer system

    DOEpatents

    Piper, Thomas C.; Morgan, John P.; Marchant, Norman J.; Bolton, Steven M.

    1994-01-01

    A high precision pressure transducer system for checking the reliability of a second pressure transducer system used to monitor the level of a fluid confined in a holding tank. Since the response of the pressure transducer is temperature sensitive, it is continually housed in a battery-powered oven which is configured to provide a temperature-stable environment at a specified temperature for an extended period of time. Further, a high precision temperature-stabilized oscillator and counter are coupled to a single-board computer to accurately determine the pressure transducer oscillation frequency and convert it to an applied pressure. All of the components are powered by the batteries, which during periods of availability of line power are charged by an on-board battery charger. The pressure readings are transmitted to a line printer and a vacuum fluorescent display.

  15. Portable high precision pressure transducer system

    DOEpatents

    Piper, T.C.; Morgan, J.P.; Marchant, N.J.; Bolton, S.M.

    1994-04-26

    A high precision pressure transducer system is described for checking the reliability of a second pressure transducer system used to monitor the level of a fluid confined in a holding tank. Since the response of the pressure transducer is temperature sensitive, it is continually housed in a battery-powered oven which is configured to provide a temperature-stable environment at a specified temperature for an extended period of time. Further, a high precision temperature-stabilized oscillator and counter are coupled to a single-board computer to accurately determine the pressure transducer oscillation frequency and convert it to an applied pressure. All of the components are powered by batteries which, during periods when line power is available, are charged by an on-board battery charger. The pressure readings are transmitted to a line printer and a vacuum fluorescent display. 2 figures.

  16. Technologies That Enable Accurate and Precise Nano- to Milliliter-Scale Liquid Dispensing of Aqueous Reagents Using Acoustic Droplet Ejection.

    PubMed

    Sackmann, Eric K; Majlof, Lars; Hahn-Windgassen, Annett; Eaton, Brent; Bandzava, Temo; Daulton, Jay; Vandenbroucke, Arne; Mock, Matthew; Stearns, Richard G; Hinkson, Stephen; Datwani, Sammy S

    2016-02-01

    Acoustic liquid handling uses high-frequency acoustic signals that are focused on the surface of a fluid to eject droplets with high accuracy and precision for various life science applications. Here we present a multiwell source plate, the Echo Qualified Reservoir (ER), which can acoustically transfer over 2.5 mL of fluid per well in 25-nL increments using an Echo 525 liquid handler. We demonstrate two Labcyte technologies, Dynamic Fluid Analysis (DFA) methods and a high-voltage (HV) grid, that are required to maintain accurate and precise fluid transfers from the ER at this volume scale. DFA methods were employed to dynamically assess the energy requirements of the fluid and adjust the acoustic ejection parameters to maintain a constant velocity droplet. Furthermore, we demonstrate that the HV grid enhances droplet velocity and coalescence at the destination plate. These technologies enabled 5-µL transfers per destination well to a 384-well plate, with accuracy and precision values better than 4%. Last, we used the ER and Echo 525 liquid handler to perform a quantitative polymerase chain reaction (qPCR) assay to demonstrate an application that benefits from the flexibility and larger volume capabilities of the ER. © 2015 Society for Laboratory Automation and Screening.

  17. Tau-independent Phase Analysis: A Novel Method for Accurately Determining Phase Shifts.

    PubMed

    Tackenberg, Michael C; Jones, Jeff R; Page, Terry L; Hughey, Jacob J

    2018-06-01

    Estimations of period and phase are essential in circadian biology. While many techniques exist for estimating period, comparatively few methods are available for estimating phase. Current approaches to analyzing phase often vary between studies and are sensitive to coincident changes in period and the stage of the circadian cycle at which the stimulus occurs. Here we propose a new technique, tau-independent phase analysis (TIPA), for quantifying phase shifts in multiple types of circadian time-course data. Through comprehensive simulations, we show that TIPA is both more accurate and more precise than the standard actogram approach. TIPA is computationally simple and therefore will enable accurate and reproducible quantification of phase shifts across multiple subfields of chronobiology.

  18. Evaluation of the precision of contrast sensitivity function assessment on a tablet device

    PubMed Central

    Dorr, Michael; Lesmes, Luis A.; Elze, Tobias; Wang, Hui; Lu, Zhong-Lin; Bex, Peter J.

    2017-01-01

    The contrast sensitivity function (CSF) relates the visibility of a spatial pattern to both its size and contrast, and is therefore a more comprehensive assessment of visual function than acuity, which only determines the smallest resolvable pattern size. Because of the additional dimension of contrast, estimating the CSF can be more time-consuming. Here, we compare two methods for rapid assessment of the CSF that were implemented on a tablet device. For a single-trial assessment, we asked 63 myopes and 38 emmetropes to tap the peak of a “sweep grating” on the tablet’s touch screen. For a more precise assessment, subjects performed 50 trials of the quick CSF method in a 10-AFC letter recognition task. Tests were performed with and without optical correction, and in monocular and binocular conditions; one condition was measured twice to assess repeatability. Results show that both methods are highly correlated; using both common and novel measures for test-retest repeatability, however, the quick CSF delivers more precision with testing times of under three minutes. Further analyses show how a population prior can improve convergence rate of the quick CSF, and how the multi-dimensional output of the quick CSF can provide greater precision than scalar outcome measures. PMID:28429773

  19. Advanced bioanalytics for precision medicine.

    PubMed

    Roda, Aldo; Michelini, Elisa; Caliceti, Cristiana; Guardigli, Massimo; Mirasoli, Mara; Simoni, Patrizia

    2018-01-01

    Precision medicine is a new paradigm that combines diagnostic, imaging, and analytical tools to produce accurate diagnoses and therapeutic interventions tailored to the individual patient. This approach stands in contrast to the traditional "one size fits all" concept, according to which researchers develop disease treatments and preventions for an "average" patient without considering individual differences. The "one size fits all" concept has led to many ineffective or inappropriate treatments, especially for pathologies such as Alzheimer's disease and cancer. Now, precision medicine is receiving massive funding in many countries, thanks to its social and economic potential in terms of improved disease prevention, diagnosis, and therapy. Bioanalytical chemistry is critical to precision medicine. This is because identifying an appropriate tailored therapy requires researchers to collect and analyze information on each patient's specific molecular biomarkers (e.g., proteins, nucleic acids, and metabolites). In other words, precision diagnostics is not possible without precise bioanalytical chemistry. This Trend article highlights some of the most recent advances, including massive analysis of multilayer omics, and new imaging technique applications suitable for implementing precision medicine. Graphical abstract: Precision medicine combines bioanalytical chemistry, molecular diagnostics, and imaging tools for performing accurate diagnoses and selecting optimal therapies for each patient.

  20. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags

    PubMed Central

    Wee, Eugene J.H.; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current fluorescence-based approaches such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR) are limited in multiplex detection and sensitivity, and require expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, where results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet at the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research. PMID:27446486

  1. Simple, Sensitive and Accurate Multiplex Detection of Clinically Important Melanoma DNA Mutations in Circulating Tumour DNA with SERS Nanotags.

    PubMed

    Wee, Eugene J H; Wang, Yuling; Tsao, Simon Chang-Hao; Trau, Matt

    2016-01-01

    Sensitive and accurate identification of specific DNA mutations can influence clinical decisions. However, accurate diagnosis from limiting samples such as circulating tumour DNA (ctDNA) is challenging. Current fluorescence-based approaches such as quantitative PCR (qPCR) and, more recently, droplet digital PCR (ddPCR) are limited in multiplex detection and sensitivity, and require expensive specialized equipment. Herein we describe an assay capitalizing on the multiplexing and sensitivity benefits of surface-enhanced Raman spectroscopy (SERS) with the simplicity of standard PCR to address the limitations of current approaches. This proof-of-concept method could reproducibly detect as few as 0.1% (10 copies, CV < 9%) of target sequences, thus demonstrating the high sensitivity of the method. The method was then applied to specifically detect three important melanoma mutations in multiplex. Finally, the PCR/SERS assay was used to genotype cell lines and ctDNA from serum samples, where results were subsequently validated with ddPCR. With ddPCR-like sensitivity and accuracy yet at the convenience of standard PCR, we believe this multiplex PCR/SERS method could find wide applications in both diagnostics and research.

  2. A robust statistical estimation (RoSE) algorithm jointly recovers the 3D location and intensity of single molecules accurately and precisely

    NASA Astrophysics Data System (ADS)

    Mazidi, Hesam; Nehorai, Arye; Lew, Matthew D.

    2018-02-01

    In single-molecule (SM) super-resolution microscopy, the complexity of a biological structure, high molecular density, and a low signal-to-background ratio (SBR) may lead to imaging artifacts without a robust localization algorithm. Moreover, engineered point spread functions (PSFs) for 3D imaging pose difficulties due to their intricate features. We develop a Robust Statistical Estimation algorithm, called RoSE, that enables joint estimation of the 3D location and photon counts of SMs accurately and precisely using various PSFs under conditions of high molecular density and low SBR.

  3. A sensitive and accurate method for the determination of perfluoroalkyl and polyfluoroalkyl substances in human serum using a high performance liquid chromatography-online solid phase extraction-tandem mass spectrometry.

    PubMed

    Yu, Chang Ho; Patel, Bhupendra; Palencia, Marilou; Fan, Zhihua Tina

    2017-01-13

    A selective, sensitive, and accurate analytical method for the measurement of perfluoroalkyl and polyfluoroalkyl substances (PFASs) in human serum, utilizing LC-MS/MS (liquid chromatography-tandem mass spectrometry), was developed and validated according to the Centers for Disease Control and Prevention (CDC) guidelines for biological sample analysis. Tests were conducted to determine the optimal analytical column, mobile phase composition and pH, gradient program, and cleaning procedure. The final analytical column selected for analysis was an extra densely bonded silica-packed reverse-phase column (Agilent XDB-C8, 3.0 × 100 mm, 3.5 μm). Mobile phase A was an aqueous buffer solution containing 10 mM ammonium acetate (pH = 4.3). Mobile phase B was a mixture of methanol and acetonitrile (1:1, v/v). The gradient program consisted of a fast elution (%B from 40 to 65%) between 1.0 and 1.5 min, followed by a slow elution (%B from 65 to 80%) from 1.5 to 7.5 min. The cleanup procedures were augmented by cleaning with (1) various solvents (isopropyl alcohol, methanol, acetonitrile, and reverse osmosis-purified water); (2) extensive washing steps for the autosampler and solid phase extraction (SPE) cartridge; and (3) a post-analysis cleaning step for the whole system. Under the above conditions, the resolution and sensitivity were significantly improved. Twelve target PFASs were baseline-separated (2.5-7.0 min) within a 10-min acquisition time. The limits of detection (LODs) were 0.01 ng/mL or lower for all of the target compounds, making this method 5 times more sensitive than previously published methods. The newly developed method was validated in the linear range of 0.01-50 ng/mL, and the accuracy (recovery between 80 and 120%) and precision (RSD < 20%) were acceptable at three spiked levels (0.25, 2.5, and 25 ng/mL). The method development and validation results demonstrated that this method was precise, accurate, and robust, with high throughput (∼10 min
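
    The acceptance criteria quoted above (recovery between 80 and 120% and RSD < 20% at each spiked level) are simple to compute; the Python sketch below uses hypothetical replicate results at one spiked level rather than data from the study.

      import numpy as np

      spiked = 2.5                                            # ng/mL, one of the three spiked levels
      measured = np.array([2.31, 2.55, 2.42, 2.60, 2.48])     # hypothetical replicate results (ng/mL)

      recovery = measured.mean() / spiked * 100               # percent recovery (accuracy)
      rsd = measured.std(ddof=1) / measured.mean() * 100      # relative standard deviation (precision), %

      print(f"recovery = {recovery:.1f}% (accept 80-120%), RSD = {rsd:.1f}% (accept < 20%)")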

  4. OPTIMA: sensitive and accurate whole-genome alignment of error-prone genomic maps by combinatorial indexing and technology-agnostic statistical analysis.

    PubMed

    Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan

    2016-01-01

    Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200 %) and precise in their alignments (nearly 99 % precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.

  5. A comparison of force and acoustic emission sensors in monitoring precision cylindrical grinding; Technical Digest

    NASA Astrophysics Data System (ADS)

    Marsh, Eric R.; Couey, Jeremiah A.; Knapp, Byron R.; Vallance, R. R.

    2005-05-01

    Aerostatic spindles are used in precision grinding applications requiring high stiffness and very low error motions (5 to 25 nm). Forces generated during precision grinding are small and present challenges for accurate and reliable process monitoring. These challenges are met by incorporating non-contact displacement sensors into an aerostatic spindle that are calibrated to measure grinding forces from rotor motion. Four experiments compare this force-sensing approach to acoustic emission (AE) in detecting workpiece contact, process monitoring with small depths of cut, detecting workpiece defects, and evaluating abrasive wheel wear/loading. Results indicate that force measurements are preferable to acoustic emission in precision grinding since the force sensor offers improved contact sensitivity, higher resolution, and is capable of detecting events occurring within a single revolution of the grinding wheel.

  6. Precision Timing of PSR J0437-4715: An Accurate Pulsar Distance, a High Pulsar Mass, and a Limit on the Variation of Newton's Gravitational Constant

    NASA Astrophysics Data System (ADS)

    Verbiest, J. P. W.; Bailes, M.; van Straten, W.; Hobbs, G. B.; Edwards, R. T.; Manchester, R. N.; Bhat, N. D. R.; Sarkissian, J. M.; Jacoby, B. A.; Kulkarni, S. R.

    2008-05-01

    Analysis of 10 years of high-precision timing data on the millisecond pulsar PSR J0437-4715 has resulted in a model-independent kinematic distance based on an apparent orbital period derivative, Ṗ_b, determined at the 1.5% level of precision (D_k = 157.0 +/- 2.4 pc), making it one of the most accurate stellar distance estimates published to date. The discrepancy between this measurement and a previously published parallax distance estimate is attributed to errors in the DE200 solar system ephemerides. The precise measurement of Ṗ_b allows a limit on the variation of Newton's gravitational constant, |Ġ/G| <= 23 × 10^-12 yr^-1. We also constrain any anomalous acceleration along the line of sight to the pulsar to |a⊙/c| <= 1.5 × 10^-18 s^-1 at 95% confidence, and derive a pulsar mass, m_psr = 1.76 +/- 0.20 M⊙, one of the highest estimates so far obtained.

  7. Using hyperspectral data in precision farming applications

    USDA-ARS?s Scientific Manuscript database

    Precision farming practices such as variable rate applications of fertilizer and agricultural chemicals require accurate field variability mapping. This chapter investigated the value of hyperspectral remote sensing in providing useful information for five applications of precision farming: (a) Soil...

  8. Precision grid and hand motion for accurate needle insertion in brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGill, Carl S.; Schwartz, Jonathon A.; Moore, Jason Z.

    2011-08-15

    Purpose: In prostate brachytherapy, a grid is used to guide a needle tip toward a preplanned location within the tissue. During insertion, the needle deflects en route, resulting in target misplacement. In this paper, 18-gauge needle insertion experiments into phantom were performed to test the effects of three parameters: the clearance between the grid hole and the needle, the thickness of the grid, and the needle insertion speed. A measurement apparatus consisting of two datum surfaces and a digital depth gauge was developed to quantify needle deflections. Methods: A gauge repeatability and reproducibility (GR&R) test was performed on the measurement apparatus, and it proved to be capable of measuring a 2 mm tolerance from the target. Replicated experiments were performed on a 2^3 factorial design (three parameters at two levels), and the analysis included averages and standard deviations along with an analysis of variance (ANOVA) to find significant single and two-way interaction factors. Results: Results showed that a grid with a tight-clearance hole and a slow needle speed increased the precision and accuracy of needle insertion. The tight grid was vital for enhancing the precision and accuracy of needle insertion at both slow and fast insertion speeds; additionally, at slow speed the tight, thick grid further improved needle precision and accuracy. Conclusions: In summary, the tight grid is important regardless of speed. The grid design, which is capable of reducing needle deflection, can potentially be implemented in the brachytherapy procedure.

  9. Using Precision in STEM Language: A Qualitative Look

    ERIC Educational Resources Information Center

    Capraro, Mary M.; Bicer, Ali; Grant, Melva R.; Lincoln, Yvonna S.

    2017-01-01

    Teachers need to develop a variety of pedagogical strategies that can encourage precise and accurate communication--an extremely important 21st century skill. Precision with STEM oral language is essential. Emphasizing oral communication with precise language in combination with increased spatial skills with modeling can improve the chances of…

  10. Sensitive and accurate identification of protein–DNA binding events in ChIP-chip assays using higher order derivative analysis

    PubMed Central

    Barrett, Christian L.; Cho, Byung-Kwan

    2011-01-01

    Immuno-precipitation of protein–DNA complexes followed by microarray hybridization is a powerful and cost-effective technology for discovering protein–DNA binding events at the genome scale. It is still an unresolved challenge to comprehensively, accurately and sensitively extract binding event information from the produced data. We have developed a novel strategy composed of an information-preserving signal-smoothing procedure, higher order derivative analysis and application of the principle of maximum entropy to address this challenge. Importantly, our method does not require any input parameters to be specified by the user. Using genome-scale binding data of two Escherichia coli global transcription regulators for which a relatively large number of experimentally supported sites are known, we show that ∼90% of known sites were resolved to within four probes, or ∼88 bp. Over half of the sites were resolved to within two probes, or ∼38 bp. Furthermore, we demonstrate that our strategy delivers significant quantitative and qualitative performance gains over available methods. Such accurate and sensitive binding site resolution has important consequences for accurately reconstructing transcriptional regulatory networks, for motif discovery, for furthering our understanding of local and non-local factors in protein–DNA interactions and for extending the usefulness horizon of the ChIP-chip platform. PMID:21051353
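
    The sketch below is only a generic illustration of locating candidate peaks in a probe-level signal with a smoothed derivative; it uses synthetic data and omits the information-preserving smoothing and maximum-entropy steps that are specific to the authors' method.

      import numpy as np
      from scipy.signal import savgol_filter

      # Synthetic log-ratio signal along consecutive tiling-array probes,
      # with one simulated binding event around probe 100.
      rng = np.random.default_rng(0)
      signal = rng.normal(0.0, 0.1, 200)
      signal[90:110] += np.hanning(20)

      smooth = savgol_filter(signal, window_length=11, polyorder=3)           # smoothed signal
      deriv1 = savgol_filter(signal, window_length=11, polyorder=3, deriv=1)  # first derivative

      # Candidate binding events: the first derivative crosses zero from positive
      # to negative and the smoothed signal is well above background there.
      crossings = np.where((deriv1[:-1] > 0) & (deriv1[1:] <= 0))[0]
      peaks = [i for i in crossings if smooth[i] > 3 * signal.std()]
      print("candidate binding-event probes:", peaks)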

  11. An Image-Based Algorithm for Precise and Accurate High Throughput Assessment of Drug Activity against the Human Parasite Trypanosoma cruzi

    PubMed Central

    Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H.; Hansen, Michael A. E.

    2014-01-01

    We present a customized high-content (image-based) and high-throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, the number of infected cells, the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allows quantification of both compound activity against the parasites and compound cytotoxicity, thus eliminating the need for an additional toxicity assay and reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T. cruzi: Benznidazole and Nifurtimox. We also checked the performance of the cell detection against manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity. PMID:24503652
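
    The per-image statistics listed above follow directly from per-cell parasite counts. A minimal Python sketch with hypothetical counts (not output from the published algorithm):

      import numpy as np

      # Hypothetical parasite counts per segmented host cell in one image
      # (0 means the cell is uninfected).
      parasites_per_cell = np.array([0, 0, 3, 0, 7, 1, 0, 0, 4, 0])

      total_cells = parasites_per_cell.size
      infected_cells = np.count_nonzero(parasites_per_cell)
      total_parasites = parasites_per_cell.sum()
      infection_ratio = infected_cells / total_cells
      parasites_per_infected = total_parasites / infected_cells

      print(total_cells, infected_cells, total_parasites,
            round(infection_ratio, 2), round(parasites_per_infected, 2))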

  12. A Wearable and Highly Sensitive Graphene Strain Sensor for Precise Home-Based Pulse Wave Monitoring.

    PubMed

    Yang, Tingting; Jiang, Xin; Zhong, Yujia; Zhao, Xuanliang; Lin, Shuyuan; Li, Jing; Li, Xinming; Xu, Jianlong; Li, Zhihong; Zhu, Hongwei

    2017-07-28

    A wealth of medical information about cardiovascular properties can be gathered from pulse waveforms. Therefore, it is desirable to design a smart pulse monitoring device that achieves noninvasive, real-time acquisition of cardiovascular parameters. Most current pulse sensors are bulky or insufficiently sensitive. In this work, a graphene-based skin-like sensor is explored for pulse wave sensing with the features of easy use and wearing comfort. Moreover, adjusting the substrate stiffness and interfacial bonding achieves an optimal balance between sensor linearity and signal sensitivity, and enables measurement of the beat-to-beat radial arterial pulse. Compared with existing bulky and nonportable clinical instruments, this highly sensitive and soft sensing patch not only provides a primary sensor interface to human skin but also can objectively and accurately detect subtle pulse signal variations in real time, such as pulse waveforms at different ages and pre- and post-exercise, thus presenting a promising solution for home-based pulse monitoring.

  13. Accurate time delay technology in simulated test for high precision laser range finder

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Xiao, Wenjian; Wang, Weiming; Xue, Mingxi

    2015-10-01

    As technology advances, the ranging accuracy of pulsed laser range finders (LRFs) keeps improving, and so does the demand for their maintenance and testing. In simulated testing of pulsed range finders, which rests on the principle of representing spatial distance by time delay, the precision of the simulated distance is determined by the adjustable time delay. By analyzing and comparing the advantages and disadvantages of fiber delay and circuit delay, a method was proposed to improve the accuracy of the circuit delay without increasing the counting frequency of the circuit. A high-precision controllable delay circuit was designed by combining an internal delay circuit with an external delay circuit that compensates the delay error in real time, thereby increasing the circuit delay accuracy. The accuracy of the proposed circuit delay method was measured with a high-sampling-rate oscilloscope. The results show that the accuracy of the distance simulated by the circuit delay improves from +/- 0.75 m to +/- 0.15 m, a substantial improvement for simulated testing of high-precision pulsed range finders.

  14. Fully automated laboratory and field-portable goniometer used for performing accurate and precise multiangular reflectance measurements

    NASA Astrophysics Data System (ADS)

    Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily

    2017-10-01

    Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.

  15. Precision muon physics

    NASA Astrophysics Data System (ADS)

    Gorringe, T. P.; Hertzog, D. W.

    2015-09-01

    The muon is playing a unique role in sub-atomic physics. Studies of muon decay both determine the overall strength and establish the chiral structure of weak interactions, as well as setting extraordinary limits on charged-lepton-flavor-violating processes. Measurements of the muon's anomalous magnetic moment offer singular sensitivity to the completeness of the standard model and the predictions of many speculative theories. Spectroscopy of muonium and muonic atoms gives unmatched determinations of fundamental quantities including the magnetic moment ratio μμ /μp, lepton mass ratio mμ /me, and proton charge radius rp. Also, muon capture experiments are exploring elusive features of weak interactions involving nucleons and nuclei. We will review the experimental landscape of contemporary high-precision and high-sensitivity experiments with muons. One focus is the novel methods and ingenious techniques that achieve such precision and sensitivity in recent, present, and planned experiments. Another focus is the uncommonly broad and topical range of questions in atomic, nuclear and particle physics that such experiments explore.

  16. Polarization mode beating techniques for high-sensitivity intracavity sensing

    NASA Astrophysics Data System (ADS)

    Rosales-Garcia, Andrea

    Several industries, including semiconductor, space, defense, medical, chemical and homeland security, demand precise and accurate measurements at the nanometer and sub-nanometer scale. Optical interferometers have been widely investigated for their dynamic range, non-contact operation and high precision. Although commercially available interferometers can have sub-nanometer resolution, their practical accuracy remains in the nanometer range. The fast development of nanotechnology requires more sensitive, reliable, compact and lower cost alternatives than those in existence. This work demonstrates a compact, versatile, accurate and cost-effective fiber laser sensor based on intracavity polarization mode beating (PMB) techniques for monitoring intracavity phase changes with very high sensitivity. Fiber resonators support two orthogonal polarization modes that can behave as two independent lasing channels within the cavity. The fiber laser incorporates an intracavity polarizing beamsplitter that allows the polarization modes to be adjusted independently. Heterodyne detection of the laser output produces a beating (PMB) signal whose frequency is a function of the phase difference between the polarization modes. The optical phase difference is transferred from the optical frequency to a much lower frequency, so electronic methods can be used to obtain very precise measurements. Upon changing the path length of one mode, changes in the PMB frequency can be effectively measured. Furthermore, since the polarization modes share the same cavity, the PMB technique provides a simple means of suppressing common-mode noise and laser source instabilities. Frequency changes of the PMB signal are evaluated as a function of displacement, intracavity pressure and air density. Refractive index changes of 10^-9 and sub-nanometer displacement measurements are readily attained. Increased refractive index sensitivity and sub-picometer displacement resolution can be reached owing to the

  17. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    NASA Astrophysics Data System (ADS)

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components into which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases; while precise aerial position control is sufficient for the block configuration, precise position and attitude control is required for corridor mapping.

  18. Precision Muonium Spectroscopy

    NASA Astrophysics Data System (ADS)

    Jungmann, Klaus P.

    2016-09-01

    The muonium atom is the purely leptonic bound state of a positive muon and an electron. It has a lifetime of 2.2 µs. The absence of any known internal structure provides for precision experiments to test fundamental physics theories and to determine accurate values of fundamental constants. In particular ground state hyperfine structure transitions can be measured by microwave spectroscopy to deliver the muon magnetic moment. The frequency of the 1s-2s transition in the hydrogen-like atom can be determined with laser spectroscopy to obtain the muon mass. With such measurements fundamental physical interactions, in particular quantum electrodynamics, can also be tested at highest precision. The results are important input parameters for experiments on the muon magnetic anomaly. The simplicity of the atom enables further precise experiments, such as a search for muonium-antimuonium conversion for testing charged lepton number conservation and searches for possible antigravity of muons and dark matter.

  19. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.

  20. Real-Time and Accurate Identification of Single Oligonucleotide Photoisomers via an Aerolysin Nanopore.

    PubMed

    Hu, Zheng-Li; Li, Zi-Yuan; Ying, Yi-Lun; Zhang, Junji; Cao, Chan; Long, Yi-Tao; Tian, He

    2018-04-03

    Identification of the configuration of a photoresponsive oligonucleotide plays an important role in the ingenious design of DNA nanomolecules and nanodevices. Due to the limited resolution and sensitivity of present methods, it remains a challenge to determine the accurate configuration of photoresponsive oligonucleotides, much less to describe their photoconversion process precisely. Here, we used an aerolysin (AeL) nanopore-based confined space for real-time determination and quantification of the absolute cis/trans configuration of each azobenzene-modified oligonucleotide (Azo-ODN) with single-molecule resolution. The two completely separated current distributions with narrow peak widths at half height (<0.62 pA) are assigned to the cis- and trans-Azo-ODN isomers, respectively. Due to the high current sensitivity, each isomer of Azo-ODN could be unambiguously identified, which gives accurate photostationary conversion values of 82.7% for trans-to-cis under UV irradiation and 82.5% for cis-to-trans under visible irradiation. Further real-time kinetic evaluation reveals that the photoresponsive rate constants of Azo-ODN from trans-to-cis and from cis-to-trans are 0.43 and 0.20 min^-1, respectively. This study will promote the sophisticated design of photoresponsive ODNs to achieve an efficient and applicable photocontrollable process.

  1. Digital encoding of cellular mRNAs enabling precise and absolute gene expression measurement by single-molecule counting.

    PubMed

    Fu, Glenn K; Wilhelmy, Julie; Stern, David; Fan, H Christina; Fodor, Stephen P A

    2014-03-18

    We present a new approach for the sensitive detection and accurate quantitation of messenger ribonucleic acid (mRNA) gene transcripts in single cells. First, the entire population of mRNAs is encoded with molecular barcodes during reverse transcription. After amplification of the gene targets of interest, molecular barcodes are counted by sequencing or scored on a simple hybridization detector to reveal the number of molecules in the starting sample. Since absolute quantities are measured, calibration to standards is unnecessary, and many of the relative quantitation challenges such as polymerase chain reaction (PCR) bias are avoided. We apply the method to gene expression analysis of minute sample quantities and demonstrate precise measurements with sensitivity down to sub single-cell levels. The method is an easy, single-tube, end point assay utilizing standard thermal cyclers and PCR reagents. Accurate and precise measurements are obtained without any need for cycle-to-cycle intensity-based real-time monitoring or physical partitioning into multiple reactions (e.g., digital PCR). Further, since all mRNA molecules are encoded with molecular barcodes, amplification can be used to generate more material for multiple measurements and technical replicates can be carried out on limited samples. The method is particularly useful for small sample quantities, such as single-cell experiments. Digital encoding of cellular content preserves true abundance levels and overcomes distortions introduced by amplification.
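
    The core idea of counting molecular barcodes rather than sequencing reads can be illustrated in a few lines of Python; the gene names and barcode sequences below are hypothetical.

      from collections import defaultdict

      # Hypothetical (gene, molecular barcode) pairs recovered from sequencing reads.
      reads = [
          ("GAPDH", "ACGTTG"), ("GAPDH", "ACGTTG"),   # the same molecule amplified twice
          ("GAPDH", "TTGACA"),
          ("ACTB", "CCATGA"), ("ACTB", "GGTACC"), ("ACTB", "GGTACC"),
      ]

      barcodes = defaultdict(set)
      for gene, barcode in reads:
          barcodes[gene].add(barcode)

      # Counting unique barcodes instead of reads removes PCR amplification bias:
      # each barcode corresponds to one original mRNA molecule.
      counts = {gene: len(bcs) for gene, bcs in barcodes.items()}
      print(counts)   # {'GAPDH': 2, 'ACTB': 2}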

  2. Precisely Controlled Ultrathin Conjugated Polymer Films for Large Area Transparent Transistors and Highly Sensitive Chemical Sensors.

    PubMed

    Khim, Dongyoon; Ryu, Gi-Seong; Park, Won-Tae; Kim, Hyunchul; Lee, Myungwon; Noh, Yong-Young

    2016-04-13

    A uniform ultrathin polymer film is deposited over a large area with molecular-level precision by the simple wire-wound bar-coating method. The bar-coated ultrathin films not only exhibit high transparency of up to 90% in the visible wavelength range but also high charge carrier mobility with a high degree of percolation through the uniformly covered polymer nanofibrils. They are capable of realizing highly sensitive multigas sensors and represent the first successful report of ethylene detection using a sensor based on organic field-effect transistors. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. High-Precision Timing of Several Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Ferdman, R. D.; Stairs, I. H.; Backer, D. C.; Ramachandran, R.; Demorest, P.; Nice, D. J.; Lyne, A. G.; Kramer, M.; Lorimer, D.; McLaughlin, M.; Manchester, D.; Camilo, F.; D'Amico, N.; Possenti, A.; Burgay, M.; Joshi, B. C.; Freire, P. C.

    2004-12-01

    The highest precision pulsar timing is achieved by reproducing as accurately as possible the pulse profile as emitted by the pulsar, in high signal-to-noise observations. The best profile reconstruction can be accomplished with several-bit voltage sampling and coherent removal of the dispersion suffered by pulsar signals as they traverse the interstellar medium. The Arecibo Signal Processor (ASP) and its counterpart the Green Bank Astronomical Signal Processor (GASP) are flexible, state-of-the-art wide-bandwidth observing systems, built primarily for high-precision long-term timing of millisecond and binary pulsars. ASP and GASP are in use at the 300-m Arecibo telescope in Puerto Rico and the 100-m Green Bank Telescope in Green Bank, West Virginia, respectively, taking advantage of the enormous sensitivities of these telescopes. These instruments result in high-precision science through 4 and 8-bit sampling and perform coherent dedispersion on the incoming data stream in real or near-real time. This is done using a network of personal computers, over an observing bandwidth of 64 to 128 MHz, in each of two polarizations. We present preliminary results of timing and polarimetric observations with ASP/GASP for several pulsars, including the recently-discovered relativistic double-pulsar binary J0737-3039. These data are compared to simultaneous observations with other pulsar instruments, such as the new "spigot card" spectrometer on the GBT and the Princeton Mark IV instrument at Arecibo, the precursor timing system to ASP. We also briefly discuss several upcoming observations with ASP/GASP.

  4. A link prediction approach to cancer drug sensitivity prediction.

    PubMed

    Turki, Turki; Wei, Zhi

    2017-10-03

    Predicting the response to a drug for cancer patients based on genomic information is an important problem in modern clinical oncology. The problem persists in part because many available drug sensitivity prediction algorithms do not take advantage of better-quality cancer cell lines or of new feature representations, both of which lead to more accurate prediction of drug responses. By accurately predicting drug responses in cancer, oncologists gain a more complete understanding of the effective treatments for each patient, which is a core goal of precision medicine. In this paper, we model cancer drug sensitivity prediction as a link prediction problem, which is shown to be an effective technique. We evaluate our proposed link prediction algorithms and compare them with an existing drug sensitivity prediction approach based on clinical trial data. The experimental results based on the clinical trial data show the stability of our link prediction algorithms, which yield the highest area under the ROC curve (AUC) and are statistically significant. We propose a link prediction approach to obtain new feature representations. Compared with an existing approach, the results show that incorporating the new feature representations into the link prediction algorithms significantly improves performance.
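
    A sketch of how such link-prediction scores might be evaluated by the area under the ROC curve (AUC); the labels and scores below are placeholders, not the clinical trial data analyzed in the paper.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      # Hypothetical sensitivity labels (1 = responder) and predicted link scores
      # for ten patient/drug pairs.
      y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
      y_score = np.array([0.91, 0.20, 0.75, 0.62, 0.33, 0.48, 0.85, 0.15, 0.40, 0.70])

      print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")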

  5. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting: Calibrating GPS with TWSTFT for Accurate Time Transfer, Z. Jiang and ... primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short). 83% of UTC time links are...

  6. High-precision measurement of the X-ray Cu Kα spectrum

    PubMed Central

    Mendenhall, Marcus H.; Henins, Albert; Hudson, Lawrence T.; Szabo, Csilla I.; Windover, Donald; Cline, James P.

    2017-01-01

    The structure of the X-ray emission lines of the Cu Kα complex has been remeasured on a newly commissioned instrument, in a manner directly traceable to the Système Internationale definition of the meter. In this measurement, the region from 8000 eV to 8100 eV has been covered with a highly precise angular scale, and well-defined system efficiency, providing accurate wavelengths and relative intensities. This measurement updates the standard multi-Lorentzian-fit parameters from Härtwig, Hölzer, et al., and is in modest disagreement with their results for the wavelength of the Kα1 line when compared via quadratic fitting of the peak top; the intensity ratio of Kα1 to Kα2 agrees within the combined error bounds. However, the position of the fitted top of Kα1 is very sensitive to the fit parameters, so it is not believed to be a robust value to quote without further qualification. We also provide accurate intensity and wavelength information for the so-called Kα3,4 “satellite” complex. Supplementary data is provided which gives the entire shape of the spectrum in this region, allowing it to be used directly in cases where simplified, multi-Lorentzian fits to it are not sufficiently accurate. PMID:28757682
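
    As a simplified illustration of multi-Lorentzian fitting of an X-ray doublet (the published parameterization uses more components and carefully characterized instrument data), the following Python sketch fits two Lorentzians to synthetic placeholder data.

      import numpy as np
      from scipy.optimize import curve_fit

      def lorentzian(x, amp, center, hwhm):
          # hwhm is the half width at half maximum
          return amp * hwhm**2 / ((x - center)**2 + hwhm**2)

      def two_lorentzians(x, a1, c1, w1, a2, c2, w2):
          return lorentzian(x, a1, c1, w1) + lorentzian(x, a2, c2, w2)

      # Synthetic Cu Kalpha-like doublet (energies in eV); placeholder parameters only.
      x = np.linspace(8010, 8070, 600)
      y = two_lorentzians(x, 100, 8047.8, 1.2, 51, 8027.9, 1.4)
      y += np.random.default_rng(1).normal(0, 0.5, x.size)

      p0 = (90, 8048, 1.0, 45, 8028, 1.0)                  # rough initial guesses
      popt, _ = curve_fit(two_lorentzians, x, y, p0=p0)
      print("fitted line centers (eV):", popt[1], popt[4])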

  7. Attaining the Photometric Precision Required by Future Dark Energy Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stubbs, Christopher

    2013-01-21

    This report outlines our progress towards achieving the high-precision astronomical measurements needed to derive improved constraints on the nature of the Dark Energy. Our approach to obtaining higher precision flux measurements has three basic components: 1) determination of the optical transmission of the atmosphere; 2) mapping out the instrumental photon sensitivity function vs. wavelength, calibrated by referencing the measurements to the known sensitivity curve of a high precision silicon photodiode; and 3) using the self-consistency of the spectra of stars to achieve precise color calibrations.

  8. Precision medicine for psychopharmacology: a general introduction.

    PubMed

    Shin, Cheolmin; Han, Changsu; Pae, Chi-Un; Patkar, Ashwin A

    2016-07-01

    Precision medicine is an emerging medical model that can provide accurate diagnoses and tailored therapeutic strategies for patients based on data pertaining to genes, microbiomes, environment, family history and lifestyle. Here, we provide basic information about precision medicine and newly introduced concepts, such as the precision medicine ecosystem and big data processing, and omics technologies including pharmacogenomics, pharmacometabolomics, pharmacoproteomics, pharmacoepigenomics, connectomics and exposomics. The authors review the current state of omics in psychiatry and the future direction of psychopharmacology as it moves towards precision medicine. Expert commentary: Advances in precision medicine have been facilitated by achievements in multiple fields, including large-scale biological databases, powerful methods for characterizing patients (such as genomics, proteomics, metabolomics, diverse cellular assays, and even social networks and mobile health technologies), and computer-based tools for analyzing large amounts of data.

  9. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, subjective choices, such as determining the camera relative exposure value (REV) and the threshold in the histogram, have hindered accurate computation of gap fraction. Here we propose a novel method that enables gap fraction to be measured accurately by DCP during daytime under various sky conditions. The method computes gap fraction from a single unsaturated DCP raw image that is corrected for scattering effects by canopies, together with a sky image reconstructed from the raw-format image. To test the sensitivity of the method-derived gap fraction to diverse REVs, solar zenith angles, and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The method showed little variation of gap fraction across different REVs, in both dense and sparse canopies and across a diverse range of solar zenith angles. A perforated-panel experiment, used to test the accuracy of the estimated gap fraction, confirmed that the method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful for monitoring LAI precisely and for validating satellite remote sensing LAI products efficiently.
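
    Once a sky/canopy classification is available, gap fraction is simply the fraction of sky pixels in the image. A minimal Python sketch with a synthetic single-channel image and an arbitrary fixed threshold; the paper's method instead derives the classification from a scattering-corrected raw image, so the threshold below is purely illustrative.

      import numpy as np

      # Synthetic single-channel canopy image with values 0-255; in practice this
      # would come from the corrected raw DCP image.
      rng = np.random.default_rng(0)
      image = rng.integers(0, 256, size=(480, 640))

      threshold = 128                                   # illustrative sky/canopy threshold
      sky_pixels = np.count_nonzero(image > threshold)  # pixels classified as sky (gaps)
      gap_fraction = sky_pixels / image.size

      print(f"gap fraction = {gap_fraction:.3f}")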

  10. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260
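
    The F-measure used in the performance comparison is the harmonic mean of precision and sensitivity (recall). In the short Python sketch below, only the 84.5% and 70.4% precision figures come from the abstract; the sensitivity values are placeholders chosen to illustrate the calculation.

      def f_measure(precision, sensitivity):
          """Harmonic mean of precision and sensitivity (recall)."""
          return 2 * precision * sensitivity / (precision + sensitivity)

      # Precision figures from the abstract; sensitivities are hypothetical.
      print(round(f_measure(0.845, 0.80), 3))   # COSMOS-like case
      print(round(f_measure(0.704, 0.70), 3))   # next-best method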

  11. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial?

    PubMed Central

    El-Amrawy, Fatema

    2015-01-01

    Objectives: The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Methods: Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data was recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers that support heart rate monitoring and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. Results: The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band showed the best package compared to its price. Conclusions: The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure. PMID:26618039

  12. Are Currently Available Wearable Devices for Activity Tracking and Heart Rate Monitoring Accurate, Precise, and Medically Beneficial?

    PubMed

    El-Amrawy, Fatema; Nounou, Mohamed Ismail

    2015-10-01

    The new wave of wireless technologies, fitness trackers, and body sensor devices can have great impact on healthcare systems and the quality of life. However, there have not been enough studies to prove the accuracy and precision of these trackers. The objective of this study was to evaluate the accuracy, precision, and overall performance of seventeen wearable devices currently available compared with direct observation of step counts and heart rate monitoring. Each participant in this study used three accelerometers at a time, running the three corresponding applications of each tracker on an Android or iOS device simultaneously. Each participant was instructed to walk 200, 500, and 1,000 steps. Each set was repeated 40 times. Data was recorded after each trial, and the mean step count, standard deviation, accuracy, and precision were estimated for each tracker. Heart rate was measured by all trackers (if applicable), which support heart rate monitoring, and compared to a positive control, the Onyx Vantage 9590 professional clinical pulse oximeter. The accuracy of the tested products ranged between 79.8% and 99.1%, while the coefficient of variation (precision) ranged between 4% and 17.5%. MisFit Shine showed the highest accuracy and precision (along with Qualcomm Toq), while Samsung Gear 2 showed the lowest accuracy, and Jawbone UP showed the lowest precision. However, Xiaomi Mi band showed the best package compared to its price. The accuracy and precision of the selected fitness trackers are reasonable and can indicate the average level of activity and thus average energy expenditure.
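
    A minimal Python sketch of how accuracy and precision (coefficient of variation) might be computed from repeated trials of a single tracker; the step counts are hypothetical, and the exact accuracy definition used in the study may differ.

      import numpy as np

      # Hypothetical step counts reported by one tracker over repeated 500-step walks.
      counts = np.array([492, 508, 497, 511, 489, 503])
      true_steps = 500

      accuracy = 100 * (1 - np.mean(np.abs(counts - true_steps)) / true_steps)  # percent
      cv = 100 * counts.std(ddof=1) / counts.mean()                             # coefficient of variation, %

      print(f"accuracy = {accuracy:.1f}%, precision (CV) = {cv:.1f}%")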

  13. Precise predictions for V+jets dark matter backgrounds

    NASA Astrophysics Data System (ADS)

    Lindert, J. M.; Pozzorini, S.; Boughezal, R.; Campbell, J. M.; Denner, A.; Dittmaier, S.; Gehrmann-De Ridder, A.; Gehrmann, T.; Glover, N.; Huss, A.; Kallweit, S.; Maierhöfer, P.; Mangano, M. L.; Morgan, T. A.; Mück, A.; Petriello, F.; Salam, G. P.; Schönherr, M.; Williams, C.

    2017-12-01

    High-energy jets recoiling against missing transverse energy (MET) are powerful probes of dark matter at the LHC. Searches based on large MET signatures require a precise control of the Z(νν̄)+jet background in the signal region. This can be achieved by taking accurate data in control regions dominated by Z(ℓ⁺ℓ⁻)+jet, W(ℓν)+jet and γ+jet production, and extrapolating to the Z(νν̄)+jet background by means of precise theoretical predictions. In this context, recent advances in perturbative calculations open the door to significant sensitivity improvements in dark matter searches. In this spirit, we present a combination of state-of-the-art calculations for all relevant V+jets processes, including throughout NNLO QCD corrections and NLO electroweak corrections supplemented by Sudakov logarithms at two loops. Predictions at parton level are provided together with detailed recommendations for their usage in experimental analyses based on the reweighting of Monte Carlo samples. Particular attention is devoted to the estimate of theoretical uncertainties in the framework of dark matter searches, where subtle aspects such as correlations across different V+jet processes play a key role. The anticipated theoretical uncertainty in the Z(νν̄)+jet background is at the few percent level up to the TeV range.
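
    The recommended reweighting of Monte Carlo samples amounts to scaling event weights by the ratio of the precise theoretical prediction to the Monte Carlo prediction in bins of an observable such as the vector-boson transverse momentum. A schematic Python sketch with placeholder correction factors, not the values provided with the paper:

      import numpy as np

      # Placeholder per-bin correction factors: ratio of the precise prediction to
      # the Monte Carlo prediction in bins of boson pT (GeV).
      pt_edges = np.array([100, 200, 300, 500, 1000])
      k_factor = np.array([1.12, 1.08, 1.02, 0.95])

      # Monte Carlo events, each with a boson pT and an event weight.
      event_pt = np.array([150.0, 250.0, 420.0, 800.0])
      event_weights = np.ones_like(event_pt)

      bins = np.digitize(event_pt, pt_edges) - 1        # bin index for each event
      reweighted = event_weights * k_factor[bins]       # reweighted event weights
      print(reweighted)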

  14. Precise and accurate in situ Pb-Pb dating of apatite, monazite, and sphene by laser ablation multiple-collector ICP-MS

    NASA Astrophysics Data System (ADS)

    Willigers, B. J. A.; Baker, J. A.; Krogstad, E. J.; Peate, D. W.

    2002-03-01

    age data does, however, show a small discrepancy between the LA-MC-ICP-MS and TIMS ages (~1% younger). High-resolution mass scans of the sphene during ablation clearly showed several small and as yet unidentified isobaric interferences that overlap with the 207Pb peak at the resolution conditions for measurement of isotope ratios. These might account for the age discrepancy between the LA-MC-ICP-MS and TIMS sphene ages. LA-MC-ICP-MS is a rapid, accurate, and precise method for in situ determination of Pb isotope ratios that can be used for geochronological studies in a manner similar to an ion microprobe, albeit currently at a somewhat degraded spatial resolution. Further modifications to the LA-MC-ICP-MS system, such as improved sensitivity, ion transmission, and LA methodology, may lead to this type of instrument becoming the method of choice for many types of in situ Pb isotope dating.

  15. How dim is dim? Precision of the celestial compass in moonlight and sunlight

    PubMed Central

    Dacke, M.; Byrne, M. J.; Baird, E.; Scholtz, C. H.; Warrant, E. J.

    2011-01-01

    Prominent in the sky, but not visible to humans, is a pattern of polarized skylight formed around both the Sun and the Moon. Dung beetles are, at present, the only animal group known to use the much dimmer polarization pattern formed around the Moon as a compass cue for maintaining travel direction. However, the Moon is not visible every night and the intensity of the celestial polarization pattern gradually declines as the Moon wanes. Therefore, for nocturnal orientation on all moonlit nights, the absolute sensitivity of the dung beetle's polarization detector may limit the precision of this behaviour. To test this, we studied the straight-line foraging behaviour of the nocturnal ball-rolling dung beetle Scarabaeus satyrus to establish when the Moon is too dim—and the polarization pattern too weak—to provide a reliable cue for orientation. Our results show that celestial orientation is as accurate during crescent Moon as it is during full Moon. Moreover, this orientation accuracy is equal to that measured for diurnal species that orient under the 100 million times brighter polarization pattern formed around the Sun. This indicates that, in nocturnal species, the sensitivity of the optical polarization compass can be greatly increased without any loss of precision. PMID:21282173

  16. Light leptonic new physics at the precision frontier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Dall, Matthias, E-mail: mledall@uvic.ca

    2016-06-21

    Precision probes of new physics are often interpreted through their indirect sensitivity to short-distance scales. In this proceedings contribution, we focus on the question of which precision observables, at current sensitivity levels, allow for an interpretation via either short-distance new physics or consistent models of long-distance new physics, weakly coupled to the Standard Model. The electroweak scale is chosen to set the dividing line between these scenarios. In particular, we find that inverse see-saw models of neutrino mass allow for light new physics interpretations of most precision leptonic observables, such as lepton universality, lepton flavor violation, but not for the electron EDM.

  17. Precision laser aiming system

    DOEpatents

    Ahrens, Brandon R [Albuquerque, NM; Todd, Steven N [Rio Rancho, NM

    2009-04-28

    A precision laser aiming system comprises a disrupter tool, a reflector, and a laser fixture. The disrupter tool, the reflector and the laser fixture are configurable for iterative alignment and aiming toward an explosive device threat. The invention enables a disrupter to be quickly and accurately set up, aligned, and aimed in order to render safe or to disrupt a target from a standoff position.

  18. Atomic Precision Plasma Processing - Modeling Investigations

    NASA Astrophysics Data System (ADS)

    Rauf, Shahid

    2016-09-01

    Sub-nanometer precision is increasingly being required of many critical plasma processes in the semiconductor industry. Some of these critical processes include atomic layer etch and plasma enhanced atomic layer deposition. Accurate control over ion energy and ion/radical composition is needed during plasma processing to meet the demanding atomic-precision requirements. While improvements in mainstream inductively and capacitively coupled plasmas can help achieve some of these goals, newer plasma technologies can expand the breadth of problems addressable by plasma processing. Computational modeling is used to examine issues relevant to atomic precision plasma processing in this paper. First, a molecular dynamics model is used to investigate atomic layer etch of Si and SiO2 in Cl2 and fluorocarbon plasmas. Both planar surfaces and nanoscale structures are considered. It is shown that accurate control of ion energy in the sub-50 eV range is necessary for atomic scale precision. In particular, if the ion energy is greater than 10 eV during plasma processing, several atomic layers get damaged near the surface. Low electron temperature (Te) plasmas are particularly attractive for atomic precision plasma processing due to their low plasma potential. One of the most attractive options in this regard is energetic-electron-beam generated plasma, where Te < 0.5 eV has been achieved in plasmas of molecular gases. These low Te plasmas are computationally examined in this paper using a hybrid fluid-kinetic model. It is shown that such plasmas not only allow for sub-5 eV ion energies, but also enable a wider range of ion/radical compositions. Coauthors: Jun-Chieh Wang, Jason Kenney, Ankur Agarwal, Leonid Dorf, and Ken Collins.

  19. Precision Oncology beyond Targeted Therapy: Combining Omics Data with Machine Learning Matches the Majority of Cancer Cells to Effective Therapeutics.

    PubMed

    Ding, Michael Q; Chen, Lujia; Cooper, Gregory F; Young, Jonathan D; Lu, Xinghua

    2018-02-01

    Precision oncology involves identifying drugs that will effectively treat a tumor and then prescribing an optimal clinical treatment regimen. However, most first-line chemotherapy drugs do not have biomarkers to guide their application. For molecularly targeted drugs, using the genomic status of a drug target as a therapeutic indicator has limitations. In this study, machine learning methods (e.g., deep learning) were used to identify informative features from genome-scale omics data and to train classifiers for predicting the effectiveness of drugs in cancer cell lines. The methodology introduced here can accurately predict the efficacy of drugs, regardless of whether they are molecularly targeted or nonspecific chemotherapy drugs. This approach, on a per-drug basis, can identify sensitive cancer cells with an average sensitivity of 0.82 and specificity of 0.82; on a per-cell line basis, it can identify effective drugs with an average sensitivity of 0.80 and specificity of 0.82. This report describes a data-driven precision medicine approach that is not only generalizable but also optimizes therapeutic efficacy. The framework detailed herein, when successfully translated to clinical environments, could significantly broaden the scope of precision oncology beyond targeted therapies, benefiting an expanded proportion of cancer patients. Mol Cancer Res; 16(2); 269-78. ©2017 AACR . ©2017 American Association for Cancer Research.

  20. Digital PCR methods improve detection sensitivity and measurement precision of low abundance mtDNA deletions.

    PubMed

    Belmonte, Frances R; Martin, James L; Frescura, Kristin; Damas, Joana; Pereira, Filipe; Tarnopolsky, Mark A; Kaufman, Brett A

    2016-04-28

    Mitochondrial DNA (mtDNA) mutations are a common cause of primary mitochondrial disorders, and have also been implicated in a broad collection of conditions, including aging, neurodegeneration, and cancer. Prevalent among these pathogenic variants are mtDNA deletions, which show a strong bias for the loss of sequence in the major arc between, but not including, the heavy and light strand origins of replication. Because individual mtDNA deletions can accumulate focally, occur with multiple mixed breakpoints, and occur in the presence of normal mtDNA sequences, methods that detect broad-spectrum mutations with enhanced sensitivity and limited costs have both research and clinical applications. In this study, we evaluated semi-quantitative and digital PCR-based methods of mtDNA deletion detection using double-stranded reference templates or biological samples. Our aim was to describe key experimental assay parameters that will enable the analysis of low levels or small differences in mtDNA deletion load during disease progression, with limited false-positive detection. We determined that the digital PCR method significantly improved mtDNA deletion detection sensitivity through absolute quantitation, improved precision and reduced assay standard error.
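
    The absolute quantitation step in digital PCR rests on Poisson statistics: the mean number of target copies per partition is -ln(1 - fraction of positive partitions). A minimal sketch with hypothetical partition counts, not the assays used in the study:

      import math

      def copies_per_partition(positive, total):
          """Poisson-corrected mean target copies per partition from a digital PCR run."""
          return -math.log(1.0 - positive / total)

      # hypothetical counts from separate deletion-specific and wild-type assays
      lam_del = copies_per_partition(positive=120, total=20000)
      lam_wt  = copies_per_partition(positive=15000, total=20000)

      deletion_load = lam_del / (lam_del + lam_wt)
      print(f"deletion heteroplasmy ~ {100 * deletion_load:.2f}%")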

  1. Digital PCR methods improve detection sensitivity and measurement precision of low abundance mtDNA deletions

    PubMed Central

    Belmonte, Frances R.; Martin, James L.; Frescura, Kristin; Damas, Joana; Pereira, Filipe; Tarnopolsky, Mark A.; Kaufman, Brett A.

    2016-01-01

    Mitochondrial DNA (mtDNA) mutations are a common cause of primary mitochondrial disorders, and have also been implicated in a broad collection of conditions, including aging, neurodegeneration, and cancer. Prevalent among these pathogenic variants are mtDNA deletions, which show a strong bias for the loss of sequence in the major arc between, but not including, the heavy and light strand origins of replication. Because individual mtDNA deletions can accumulate focally, occur with multiple mixed breakpoints, and in the presence of normal mtDNA sequences, methods that detect broad-spectrum mutations with enhanced sensitivity and limited costs have both research and clinical applications. In this study, we evaluated semi-quantitative and digital PCR-based methods of mtDNA deletion detection using double-stranded reference templates or biological samples. Our aim was to describe key experimental assay parameters that will enable the analysis of low levels or small differences in mtDNA deletion load during disease progression, with limited false-positive detection. We determined that the digital PCR method significantly improved mtDNA deletion detection sensitivity through absolute quantitation, improved precision and reduced assay standard error. PMID:27122135

  2. Five critical elements to ensure the precision medicine.

    PubMed

    Chen, Chengshui; He, Mingyan; Zhu, Yichun; Shi, Lin; Wang, Xiangdong

    2015-06-01

    Precision medicine, as a newly emerging area and therapeutic strategy, has been practiced in individual patients, has brought unexpected successes, and has gained high attention from both professional and public perspectives as a new path to improve the treatment and prognosis of patients. A number of new components will appear or be discovered; among them, clinical bioinformatics integrates clinical phenotypes and informatics with bioinformatics, computational science, mathematics, and systems biology. In addition to those tools, precision medicine calls for more accurate and repeatable methodologies for the identification and validation of gene discovery. Precision medicine will bring more new therapeutic strategies, drug discovery and development, and gene-oriented treatment. There is an urgent need to identify and validate disease-specific, mechanism-based, or epigenetics-dependent biomarkers to monitor precision medicine, and to develop "precision" regulations to guard the application of precision medicine.

  3. High sensitivity optical molecular imaging system

    NASA Astrophysics Data System (ADS)

    An, Yu; Yuan, Gao; Huang, Chao; Jiang, Shixin; Zhang, Peng; Wang, Kun; Tian, Jie

    2018-02-01

    Optical Molecular Imaging (OMI) has the advantages of high sensitivity, low cost and ease of use. By labeling the regions of interest with fluorescent or bioluminescent probes, OMI can noninvasively obtain the distribution of the probes in vivo, which plays a key role in cancer research, pharmacokinetics and other biological studies. In preclinical and clinical applications, imaging depth, resolution and sensitivity are the key factors for researchers using OMI. In this paper, we report a high sensitivity optical molecular imaging system developed by our group, which can improve the imaging depth in phantoms to nearly 5 cm while maintaining high resolution at 2 cm depth and high image sensitivity. To validate the performance of the system, specially designed phantom experiments and a weak-light detection experiment were carried out. The results show that, in cooperation with a high-performance electron-multiplying charge coupled device (EMCCD) camera, a precisely designed light path and efficient image-processing techniques, our OMI system can simultaneously collect the light signals generated by fluorescence molecular imaging, bioluminescence imaging, Cherenkov luminescence and other optical imaging modalities, and can observe the internal distribution of light-emitting agents quickly and accurately.

  4. Atomic-resolution transmission electron microscopy of electron beam–sensitive crystalline materials

    NASA Astrophysics Data System (ADS)

    Zhang, Daliang; Zhu, Yihan; Liu, Lingmei; Ying, Xiangrong; Hsiung, Chia-En; Sougrat, Rachid; Li, Kun; Han, Yu

    2018-02-01

    High-resolution imaging of electron beam–sensitive materials is one of the most difficult applications of transmission electron microscopy (TEM). The challenges are manifold, including the acquisition of images with extremely low beam doses, the time-constrained search for crystal zone axes, the precise image alignment, and the accurate determination of the defocus value. We develop a suite of methods to fulfill these requirements and acquire atomic-resolution TEM images of several metal organic frameworks that are generally recognized as highly sensitive to electron beams. The high image resolution allows us to identify individual metal atomic columns, various types of surface termination, and benzene rings in the organic linkers. We also apply our methods to other electron beam–sensitive materials, including the organic-inorganic hybrid perovskite CH3NH3PbBr3.

  5. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2007-03-20

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  6. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2005-03-08

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  7. High precision, rapid laser hole drilling

    DOEpatents

    Chang, Jim J.; Friedman, Herbert W.; Comaskey, Brian J.

    2013-04-02

    A laser system produces a first laser beam for rapidly removing the bulk of material in an area to form a ragged hole. The laser system produces a second laser beam for accurately cleaning up the ragged hole so that the final hole has dimensions of high precision.

  8. Routine OGTT: a robust model including incretin effect for precise identification of insulin sensitivity and secretion in a single individual.

    PubMed

    De Gaetano, Andrea; Panunzi, Simona; Matone, Alice; Samson, Adeline; Vrbikova, Jana; Bendlova, Bela; Pacini, Giovanni

    2013-01-01

    In order to provide a method for precise identification of insulin sensitivity from clinical Oral Glucose Tolerance Test (OGTT) observations, a relatively simple mathematical model (Simple Interdependent glucose/insulin MOdel, SIMO) for the OGTT, which coherently incorporates commonly accepted physiological assumptions (incretin effect and saturating glucose-driven insulin secretion), has been developed. OGTT data from 78 patients in five different glucose tolerance groups were analyzed: normal glucose tolerance (NGT), impaired glucose tolerance (IGT), impaired fasting glucose (IFG), IFG+IGT, and Type 2 Diabetes Mellitus (T2DM). A comparison with the 2011 Salinari (COntinuous GI tract MOdel, COMO) and the 2002 Dalla Man (Dalla Man MOdel, DMMO) models was made with particular attention to the insulin sensitivity indices ISCOMO, ISDMMO and kxgi (the insulin sensitivity index for SIMO). ANOVA on kxgi values across groups was significant overall (P<0.001), and post-hoc comparisons highlighted the presence of three different groups: NGT (8.62×10⁻⁵±9.36×10⁻⁵ min⁻¹ pM⁻¹), IFG (5.30×10⁻⁵±5.18×10⁻⁵) and combined IGT, IFG+IGT and T2DM (2.09×10⁻⁵±1.95×10⁻⁵, 2.38×10⁻⁵±2.28×10⁻⁵ and 2.38×10⁻⁵±2.09×10⁻⁵ respectively). No significance was obtained when comparing ISCOMO or ISDMMO across groups. Moreover, kxgi presented the lowest sample average coefficient of variation over the five groups (25.43%), with average CVs for ISCOMO and ISDMMO of 70.32% and 57.75% respectively; kxgi also presented the strongest correlations with all considered empirical measures of insulin sensitivity. While COMO and DMMO appear over-parameterized for fitting single-subject clinical OGTT data, SIMO provides a robust, precise, physiologically plausible estimate of insulin sensitivity, with which habitual empirical insulin sensitivity indices correlate well. The kxgi index, reflecting insulin secretion dependency on glycemia, also significantly differentiates clinically

  9. COPS: A Sensitive and Accurate Tool for Detecting Somatic Copy Number Alterations Using Short-Read Sequence Data from Paired Samples

    PubMed Central

    Krishnan, Neeraja M.; Gaur, Prakhar; Chaudhary, Rakshit; Rao, Arjun A.; Panda, Binay

    2012-01-01

    Copy Number Alterations (CNAs), such as deletions and duplications, compose a larger percentage of genetic variations than single nucleotide polymorphisms or other structural variations in cancer genomes that undergo major chromosomal re-arrangements. It is, therefore, imperative to identify cancer-specific somatic copy number alterations (SCNAs), with respect to matched normal tissue, in order to understand their association with the disease. We have devised an accurate, sensitive, and easy-to-use tool, COPS (COpy number using Paired Samples), for detecting SCNAs. We rigorously tested the performance of COPS using short simulated sequence reads at various SCNA sizes and coverages, read depths, and read lengths, and also with real tumor:normal paired samples. We found COPS to perform better in comparison to other known SCNA detection tools for all evaluated parameters, namely, sensitivity (detection of true positives), specificity (detection of false positives) and size accuracy. COPS performed well for sequencing reads of all lengths when used with most upstream read alignment tools. Additionally, by incorporating a downstream boundary segmentation detection tool, the accuracy of SCNA boundaries was further improved. Here, we report an accurate, sensitive and easy-to-use tool for detecting cancer-specific SCNAs using short-read sequence data. In addition to cancer, COPS can be used for any disease as long as sequence reads from both disease and normal samples from the same individual are available. An added boundary segmentation detection module makes COPS-detected SCNA boundaries more specific for the samples studied. COPS is available at ftp://115.119.160.213 with username “cops” and password “cops”. PMID:23110103

  10. Temporally precise single-cell resolution optogenetics

    PubMed Central

    Shemesh, Or A.; Tanese, Dimitrii; Zampini, Valeria; Linghu, Changyang; Piatkevich, Kiryl; Ronzitti, Emiliano; Papagiakoumou, Eirini; Boyden, Edward S.; Emiliani, Valentina

    2017-01-01

    Optogenetic control of individual neurons with high temporal precision, within intact mammalian brain circuitry, would enable powerful explorations of how neural circuits operate. Two-photon computer-generated holography enables precise sculpting of light, and could in principle enable simultaneous illumination of many neurons in a network, with the requisite temporal precision to simulate accurate neural codes. We designed a high-efficacy soma-targeted opsin, finding that fusing the N-terminal 150 residues of kainate receptor subunit 2 (KA2) to the recently discovered high-photocurrent channelrhodopsin CoChR restricted expression of this opsin primarily to the cell body of mammalian cortical neurons. In combination with two-photon holographic stimulation, we found that this somatic CoChR (soCoChR) enabled photostimulation of individual cells in intact cortical circuits with single-cell resolution and <1 millisecond temporal precision, and we used soCoChR to perform connectivity mapping on intact cortical circuits. PMID:29184208

  11. Development of an accurate, sensitive, and robust isotope dilution laser ablation ICP-MS method for simultaneous multi-element analysis (chlorine, sulfur, and heavy metals) in coal samples.

    PubMed

    Boulyga, Sergei F; Heilmann, Jens; Prohaska, Thomas; Heumann, Klaus G

    2007-10-01

    A method for the direct multi-element determination of Cl, S, Hg, Pb, Cd, U, Br, Cr, Cu, Fe, and Zn in powdered coal samples has been developed by applying inductively coupled plasma isotope dilution mass spectrometry (ICP-IDMS) with laser-assisted introduction into the plasma. A sector-field ICP-MS with a mass resolution of 4,000 and a high-ablation rate laser ablation system provided significantly better sensitivity, detection limits, and accuracy compared to a conventional laser ablation system coupled with a quadrupole ICP-MS. The sensitivity ranges from about 590 cps for ³⁵Cl⁺ to more than 6 × 10⁵ cps for ²³⁸U⁺ for 1 μg of trace element per gram of coal sample. Detection limits vary from 450 ng g⁻¹ for chlorine and 18 ng g⁻¹ for sulfur to 9.5 pg g⁻¹ for mercury and 0.3 pg g⁻¹ for uranium. Analyses of minor and trace elements in four certified reference materials (BCR-180 Gas Coal, BCR-331 Steam Coal, SRM 1632c Trace Elements in Coal, SRM 1635 Trace Elements in Coal) yielded good agreement of usually not more than 5% deviation from the certified values and precisions of less than 10% relative standard deviation for most elements. Higher relative standard deviations were found for particular elements such as Hg and Cd caused by inhomogeneities due to associations of these elements within micro-inclusions in coal which was demonstrated for Hg in SRM 1635, SRM 1632c, and another standard reference material (SRM 2682b, Sulfur and Mercury in Coal). The developed LA-ICP-IDMS method with its simple sample pretreatment opens the possibility for accurate, fast, and highly sensitive determinations of environmentally critical contaminants in coal as well as of trace impurities in similar sample materials like graphite powder and activated charcoal on a routine basis.
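
    The quantitation in any IDMS measurement rests on the isotope dilution equation, which converts the measured isotope ratio of the sample-spike blend into the amount of analyte. The following is a generic sketch with hypothetical abundances and ratios, not the calibration actually used in this work.

      def idms_amount(n_spike, r_blend, a1_sample, a2_sample, a1_spike, a2_spike):
          """Moles of analyte in the sample from the measured blend ratio R = isotope1/isotope2."""
          return n_spike * (r_blend * a2_spike - a1_spike) / (a1_sample - r_blend * a2_sample)

      # hypothetical numbers for a two-isotope element
      n_sample = idms_amount(n_spike=1.0e-9,                     # mol of spike added
                             r_blend=0.85,                       # measured isotope1/isotope2 ratio in the blend
                             a1_sample=0.70, a2_sample=0.30,     # natural abundances in the sample
                             a1_spike=0.05,  a2_spike=0.95)      # enriched spike abundances
      print(n_sample)                                            # ~1.7e-9 mol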

  12. How Advances in Imaging Will Affect Precision Radiation Oncology.

    PubMed

    Jaffray, David A; Das, Shiva; Jacobs, Paula M; Jeraj, Robert; Lambin, Philippe

    2018-06-01

    Radiation oncology is 1 of the most structured disciplines in medicine. It is of a highly technical nature with reliance on robotic systems to deliver intervention, engagement of diverse expertise, and early adoption of digital approaches to optimize and execute the application of this highly effective cancer treatment. As a localized intervention, the dependence on sensitive, specific, and accurate imaging to define the extent of disease, its heterogeneity, and adjacency to normal tissues directly affects the therapeutic ratio. Image-based in vivo temporal monitoring of the response to treatment enables adaptation and further affects the therapeutic ratio. Thus, more precise intervention will enable fractionation schedules that better interoperate with advances such as immunotherapy. In the data set-rich era that promises precision and personalized medicine, the radiation oncology field will integrate these new data into highly protocoled pathways of care that begin with multimodality prediction and enable patient-specific adaptation of therapy based on quantitative measures of the individual's dose-volume temporal trajectory and midtherapy predictions of response. In addition to advancements in computed tomography imaging, emerging technologies, such as ultra-high-field magnetic resonance and molecular imaging will bring new information to the design of treatments. Next-generation image guided radiation therapy systems will inject high specificity and sensitivity data and stimulate adaptive replanning. In addition, a myriad of pre- and peritherapeutic markers derived from advances in molecular pathology (eg, tumor genomics), automated and comprehensive imaging analytics (eg, radiomics, tumor microenvironment), and many other emerging biomarkers (eg, circulating tumor cell assays) will need to be integrated to maximize the benefit of radiation therapy for an individual patient. We present a perspective on the promise and challenges of fully exploiting imaging

  13. Performance Evaluation of Real-Time Precise Point Positioning Method

    NASA Astrophysics Data System (ADS)

    Alcay, Salih; Turgut, Muzeyyen

    2017-12-01

    Post-Processed Precise Point Positioning (PPP) is a well-known zero-difference positioning method which provides accurate and precise results. After the experimental tests, IGS Real Time Service (RTS) officially provided real time orbit and clock products for the GNSS community that allows real-time (RT) PPP applications. Different software packages can be used for RT-PPP. In this study, in order to evaluate the performance of RT-PPP, 3 IGS stations are used. Results, obtained by using BKG Ntrip Client (BNC) Software v2.12, are examined in terms of both accuracy and precision.

  14. Nanomaterials for Cancer Precision Medicine.

    PubMed

    Wang, Yilong; Sun, Shuyang; Zhang, Zhiyuan; Shi, Donglu

    2018-04-01

    Medical science has recently advanced to the point where diagnosis and therapeutics can be carried out with high precision, even at the molecular level. A new field of "precision medicine" has consequently emerged with specific clinical implications and challenges that can be well-addressed by newly developed nanomaterials. Here, a nanoscience approach to precision medicine is provided, with a focus on cancer therapy, based on a new concept of "molecularly-defined cancers." "Next-generation sequencing" is introduced to identify the oncogene that is responsible for a class of cancers. This new approach is fundamentally different from all conventional cancer therapies that rely on diagnosis of the anatomic origins where the tumors are found. To treat cancers at molecular level, a recently developed "microRNA replacement therapy" is applied, utilizing nanocarriers, in order to regulate the driver oncogene, which is the core of cancer precision therapeutics. Furthermore, the outcome of the nanomediated oncogenic regulation has to be accurately assessed by the genetically characterized, patient-derived xenograft models. Cancer therapy in this fashion is a quintessential example of precision medicine, presenting many challenges to the materials communities with new issues in structural design, surface functionalization, gene/drug storage and delivery, cell targeting, and medical imaging. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. PROMISE of Coronary CT Angiography: Precise and Accurate Diagnosis and Prognosis in Coronary Artery Disease.

    PubMed

    Thomas, Dustin M; Branch, Kelley R; Cury, Ricardo C

    2016-04-01

    Coronary computed tomography angiography (CCTA) is a rapidly growing and powerful diagnostic test that offers a great deal of precision with respect to diagnosing coronary artery disease (CAD). Guideline statements for patients with stable ischemic heart disease have recommended CCTA for only a limited portion of intermediate-risk patients who have relative or absolute contraindications for exercise or vasodilator stress testing. The publication of two large, prospective randomized clinical trials, the Prospective Multicenter Imaging Study for Evaluation of Chest Pain and the Scottish Computed Tomography of the Heart Trial are likely to expand these indications. These new data from large trials, in addition to other studies, show that CCTA is highly sensitive for the detection of CAD, identifies high-risk patients for cardiac events based on extent or plaque morphology of CAD that would not be identified by other noninvasive means, and provides significantly greater diagnostic certainty for proper treatment, including referral for invasive coronary angiography with revascularization more appropriately. Superior diagnostic accuracy and prognostic data with CCTA, when compared with other functional stress tests, may result in a reduction in unnecessary downstream testing and cost savings. In addition, newer CCTA applications hold the promise of providing a complete evaluation of a patient's coronary anatomy as well as a per-vessel ischemic evaluation. This review focuses on the interval knowledge obtained from newer data on CCTA in patients with stable ischemic heart disease, primarily focusing on the contributions of the Prospective Multicenter Imaging Study for Evaluation of Chest Pain and the Scottish Computed Tomography of the Heart Trial.

  16. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.

    PubMed

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-05-05

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
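
    For reference, the precision, sensitivity, and F-measure quoted above are the standard confusion-matrix quantities; a minimal sketch with hypothetical confirmation counts, not the study's data:

      def precision_recall_f1(tp, fp, fn):
          """Precision, sensitivity (recall) and F-measure from validated SV calls."""
          precision = tp / (tp + fp)
          recall = tp / (tp + fn)
          f1 = 2 * precision * recall / (precision + recall)
          return precision, recall, f1

      # hypothetical counts of confirmed (tp), spurious (fp) and missed (fn) structural variations
      print(precision_recall_f1(tp=50, fp=10, fn=15))   # (0.833, 0.769, 0.800)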

  17. Research about the high precision temperature measurement

    NASA Astrophysics Data System (ADS)

    Lin, J.; Yu, J.; Zhu, X.; Zeng, Z.; Deng, Y.

    2012-12-01

    A high-precision temperature control system is one of the most important supporting conditions for a tunable birefringent filter. As a first step, we investigated several high-precision temperature measurement methods for it. First, readout circuits based on a 24-bit ADC were carefully designed. Second, an ARM processor is used as the central processing unit, providing sufficient reading and processing capability. Third, three kinds of temperature sensors were tested: a PT100, a Dale 01T1002-5 thermistor, and a Wheatstone bridge constructed from pure copper and manganin. The measurement resolution with all three sensors is better than 0.001 °C, which is sufficient for 0.01 °C temperature control stability. Comparatively, the Dale 01T1002-5 thermistor gives the most accurate temperature at a key point, while the Wheatstone bridge gives the most accurate mean temperature of the whole layer; both will be used in our future temperature control system.
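
    As a rough illustration of the resolution argument (assumed numbers, not the authors' calibration): a PT100 changes by about 0.385 Ω per °C near room temperature, so a readout chain that resolves a fraction of a milliohm resolves well below 0.001 °C.

      # Illustrative PT100 readout-resolution estimate (assumed values, not the authors' design).
      R0, ALPHA = 100.0, 0.00385      # PT100 nominal resistance (ohm) and temperature coefficient (1/degC)

      def pt100_temperature(r_measured):
          """Approximate temperature from PT100 resistance (linearized near 0-50 degC)."""
          return (r_measured / R0 - 1.0) / ALPHA

      # temperature step corresponding to an assumed 0.2 mohm resistance resolution of the ADC chain
      print(pt100_temperature(100.0002) - pt100_temperature(100.0))   # ~0.0005 degC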

  18. SU-E-T-112: Experimental Characterization of a Novel Thermal Reservoir for Consistent and Accurate Annealing of High-Sensitivity TLDs.

    PubMed

    Donahue, W; Bongiorni, P; Hearn, R; Rodgers, J; Nath, R; Chen, Z

    2012-06-01

    To develop and characterize a novel thermal reservoir for consistent and accurate annealing of high-sensitivity thermoluminescence dosimeters (TLD-100H) for dosimetry of brachytherapy sources. The sensitivity of TLD-100H is about 18 times that of TLD-100, which has clear advantages for interstitial brachytherapy sources. However, the TLD-100H requires a short high temperature annealing cycle (15 min.), and opening and closing the oven door causes significant temperature fluctuations leading to unreliable measurements. A new thermal reservoir made of aluminum alloy was developed to provide a stable temperature environment in a standard hot air oven. The thermal reservoir consisted of a 20 cm × 20 cm × 8 cm Al block with a machine-milled chamber in the middle to house the aluminum TLD holding tray. The thermal reservoir was placed inside the oven until it reached thermal equilibrium with the oven chamber. The temperatures of the oven chamber, heat reservoir, and TLD holding tray were monitored by two independent thermocouples which interfaced digitally to a control computer. A LabView interface was written for monitoring and recording the temperatures in the TLD holding tray, the thermal reservoir, and the oven chamber. The temperature profiles were measured as a function of oven-door open duration. The settings for oven chamber temperature and oven door open-close duration were optimized to achieve a stable temperature of 240 °C in the TLD holding tray. Complete temperature profiles of the TLD annealing tray over the entire annealing process were obtained. The use of the thermal reservoir has significantly reduced the temperature fluctuations caused by the opening of the oven door when inserting the TLD holding tray into the oven chamber. It has enabled consistent annealing of high-sensitivity TLDs. A comprehensive characterization of a custom-built novel thermal reservoir for annealing

  19. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  20. Reliable low precision simulations in land surface models

    NASA Astrophysics Data System (ADS)

    Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.

    2017-12-01

    Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
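
    The splitting strategy described above can be illustrated with a toy update: a slow heating tendency vanishes when added directly to a half-precision state, but survives when first accumulated in a small higher-precision variable. The values and fold-back threshold below are illustrative assumptions, not the land surface model's actual scheme.

      import numpy as np

      steps, dT = 12000, np.float32(1e-4)            # tiny per-step temperature increment (K)

      # naive low-precision update: the increment rounds away at every step
      T_naive = np.float16(285.0)
      for _ in range(steps):
          T_naive = np.float16(T_naive + np.float16(dT))

      # split update: accumulate in higher precision, fold back once representable
      T_split, acc = np.float16(285.0), np.float32(0.0)
      for _ in range(steps):
          acc += dT
          if acc >= 0.5:
              T_split = np.float16(T_split + np.float16(acc))
              acc = np.float32(0.0)

      print(T_naive, T_split)    # 285.0 versus 286.0 (about 0.2 K still held in acc)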

  1. Precision Teaching, Frequency-Building, and Ballet Dancing

    ERIC Educational Resources Information Center

    Lokke, Gunn E. H.; Lokke, Jon A.; Arntzen, Erick

    2008-01-01

    This article reports the effectiveness of a brief intervention aimed at achieving fluency in basic ballet moves in a 9-year-old Norwegian girl by use of frequency-building and Precision Teaching procedures. One nonfluent ballet move was pinpointed, and instructional and training procedures designed to increase the frequency of accurate responding…

  2. High-precision Ru isotopic measurements by multi-collector ICP-MS.

    PubMed

    Becker, Harry; Dalpe, Claude; Walker, Richard J

    2002-06-01

    Ruthenium isotopic data for a pure Aldrich ruthenium nitrate solution obtained using a Nu Plasma multi-collector inductively coupled plasma-mass spectrometer (MC-ICP-MS) shows excellent agreement (better than 1 epsilon unit = 1 part in 10⁴) with data obtained by other techniques for the mass range between 96 and 101 amu. External precisions are at the 0.5-1.7 epsilon level (2σ). Higher sensitivity for MC-ICP-MS compared to negative thermal ionization mass spectrometry (N-TIMS) is offset by the uncertainties introduced by relatively large mass discrimination and instabilities in the plasma source-ion extraction region that affect the long-term reproducibility. Large mass bias correction in ICP mass spectrometry demands particular attention to be paid to the choice of normalizing isotopes. Because of its position in the mass spectrum and the large mass bias correction, obtaining precise and accurate abundance data for ¹⁰⁴Ru by MC-ICP-MS remains difficult. Internal and external mass bias correction schemes in this mass range may show similar shortcomings if the isotope of interest does not lie within the mass range covered by the masses used for normalization. Analyses of meteorite samples show that if isobaric interferences from Mo are sufficiently large (Ru/Mo < 10⁴), uncertainties on the Mo interference correction propagate through the mass bias correction and yield inaccurate results for Ru isotopic compositions. Second-order linear corrections may be used to correct for these inaccuracies, but such results are generally less precise than N-TIMS data.
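
    The mass bias correction discussed here is commonly written as an exponential law, R_true = R_measured × (m1/m2)^β, with β fixed from an invariant normalizing isotope pair. A minimal sketch with hypothetical masses and ratios, not the actual Ru normalization scheme:

      import math

      def mass_bias_corrected(r_measured, m_num, m_den, beta):
          """Exponential-law correction: R_true = R_measured * (m_num / m_den) ** beta."""
          return r_measured * (m_num / m_den) ** beta

      # beta from an assumed invariant normalizing ratio (hypothetical numbers)
      r_norm_true, r_norm_meas = 1.000, 1.020
      m_norm_num, m_norm_den = 99.0, 101.0
      beta = math.log(r_norm_true / r_norm_meas) / math.log(m_norm_num / m_norm_den)

      print(mass_bias_corrected(r_measured=0.745, m_num=102.0, m_den=101.0, beta=beta))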

  3. Do We Know Who Will Drop out?: A Review of the Predictors of Dropping out of High School--Precision, Sensitivity, and Specificity

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Sprott, Ryan; Taff, Sherry A.

    2013-01-01

    The purpose of this study is to review the literature on the most accurate indicators of students at risk of dropping out of high school. We used Relative Operating Characteristic (ROC) analysis to compare the sensitivity and specificity of 110 dropout flags across 36 studies. Our results indicate that 1) ROC analysis provides a means to compare…

  4. The design of high precision temperature control system for InGaAs short-wave infrared detector

    NASA Astrophysics Data System (ADS)

    Wang, Zheng-yun; Hu, Yadong; Ni, Chen; Huang, Lin; Zhang, Aiwen; Sun, Xiao-bing; Hong, Jin

    2018-02-01

    The InGaAs short-wave infrared detector is a temperature-sensitive device. Accurate temperature control can effectively reduce the background signal and improve the detection accuracy, detection sensitivity, and SNR of the detection system. First, the relationship between temperature and the detection background and NEP is analyzed, and the principle of the TEC and the formula relating cooling power, cooling current, and the hot-cold interface temperature difference are introduced. Then, a high-precision constant-current drive circuit based on triode voltage-controlled current, together with an incremental algorithm based on deviation-tracking compensation and PID control, is proposed; it effectively suppresses temperature overshoot, overcomes thermal inertia, and is strongly robust. Finally, the detector and temperature control system are tested. Results show that the lower the detector temperature, the smaller the temperature fluctuation and the higher the detection accuracy and sensitivity. The temperature control system achieves high-precision control with a cooling rate of 7-8 °C/min and a temperature fluctuation better than ±0.04 °C.
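
    An incremental (velocity-form) PID update of the kind referred to above computes a change in the actuator drive rather than its absolute value, which limits overshoot when the output is clamped. A minimal sketch with made-up gains, setpoint, and current limit, not the authors' controller:

      KP, KI, KD = 2.0, 0.4, 0.1     # made-up gains, with the sample period folded in
      SETPOINT = -10.0               # target detector temperature (degC), assumed
      I_MAX = 1.5                    # TEC drive-current limit (A), assumed

      e_prev = e_prev2 = 0.0
      current = 0.0

      def pid_step(measured_temp):
          """One control step: returns the updated TEC current from a temperature sample."""
          global e_prev, e_prev2, current
          e = measured_temp - SETPOINT                                      # positive when too warm -> cool harder
          delta = KP * (e - e_prev) + KI * e + KD * (e - 2.0 * e_prev + e_prev2)
          current = min(max(current + delta, 0.0), I_MAX)                   # increment, then clamp to limits
          e_prev2, e_prev = e_prev, e
          return current

      print(pid_step(20.0))   # large initial error -> drive clamps at I_MAX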

  5. Platform Precision Autopilot Overview and Mission Performance

    NASA Technical Reports Server (NTRS)

    Strovers, Brian K.; Lee, James A.

    2009-01-01

    The Platform Precision Autopilot is an instrument landing system-interfaced autopilot system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Unmanned Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.

  6. Sensitivity studies for a space-based methane lidar mission

    NASA Astrophysics Data System (ADS)

    Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.

    2011-10-01

    Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantify the emissions at the Earth's surface in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurement along the sub-satellite track with a precision of about 1% over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. In contrast to actual passive remote sensors, measurements in Polar Regions will be possible and biases due to aerosol layers and thin
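
    The quoted precision rests on the IPDA measurement principle: the differential absorption optical depth (DAOD) is derived from energy-normalized on-line and off-line ground returns, and the random error of a 50-km average shrinks roughly with the square root of the number of independent shot pairs. The numbers below are illustrative assumptions, not MERLIN design values.

      import math

      def daod(p_on, p_off, e_on, e_off):
          """Differential absorption optical depth of one on/off pulse pair."""
          return 0.5 * math.log((p_off * e_on) / (p_on * e_off))

      single_shot = daod(p_on=0.72, p_off=1.00, e_on=9.0e-3, e_off=9.0e-3)

      shots, sigma_single = 2500, 0.5     # assumed shot pairs per 50 km and single-shot relative error
      print(single_shot, sigma_single / math.sqrt(shots))   # DAOD ~0.16; averaged random error ~1%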

  7. Sensitivity studies for a space-based methane lidar mission

    NASA Astrophysics Data System (ADS)

    Kiemle, C.; Quatrevalet, M.; Ehret, G.; Amediek, A.; Fix, A.; Wirth, M.

    2011-06-01

    Methane is the third most important greenhouse gas in the atmosphere after water vapour and carbon dioxide. A major handicap to quantify the emissions at the Earth's surface in order to better understand biosphere-atmosphere exchange processes and potential climate feedbacks is the lack of accurate and global observations of methane. Space-based integrated path differential absorption (IPDA) lidar has potential to fill this gap, and a Methane Remote Lidar Mission (MERLIN) on a small satellite in Polar orbit was proposed by DLR and CNES in the frame of a German-French climate monitoring initiative. System simulations are used to identify key performance parameters and to find an advantageous instrument configuration, given the environmental, technological, and budget constraints. The sensitivity studies use representative averages of the atmospheric and surface state to estimate the measurement precision, i.e. the random uncertainty due to instrument noise. Key performance parameters for MERLIN are average laser power, telescope size, orbit height, surface reflectance, and detector noise. A modest-size lidar instrument with 0.45 W average laser power and 0.55 m telescope diameter on a 506 km orbit could provide 50-km averaged methane column measurement along the sub-satellite track with a precision of about 1 % over vegetation. The use of a methane absorption trough at 1.65 μm improves the near-surface measurement sensitivity and vastly relaxes the wavelength stability requirement that was identified as one of the major technological risks in the pre-phase A studies for A-SCOPE, a space-based IPDA lidar for carbon dioxide at the European Space Agency. Minimal humidity and temperature sensitivity at this wavelength position will enable accurate measurements in tropical wetlands, key regions with largely uncertain methane emissions. In contrast to actual passive remote sensors, measurements in Polar Regions will be possible and biases due to aerosol layers and thin

  8. Linear signal noise summer accurately determines and controls S/N ratio

    NASA Technical Reports Server (NTRS)

    Sundry, J. L.

    1966-01-01

    Linear signal noise summer precisely controls the relative power levels of signal and noise, and mixes them linearly in accurately known ratios. The S/N ratio accuracy and stability are greatly improved by this technique and are attained simultaneously.
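
    The underlying arithmetic is simple: for a requested S/N expressed in decibels, the noise channel is attenuated so that its rms voltage equals the signal rms divided by 10^(SNR/20) before the linear sum. A small sketch with illustrative values:

      def noise_gain_for_snr(signal_vrms, noise_vrms, snr_db):
          """Attenuation for the noise channel so the linear sum has the requested S/N."""
          target_noise_vrms = signal_vrms / 10 ** (snr_db / 20.0)   # dB defined on the voltage ratio
          return target_noise_vrms / noise_vrms

      print(noise_gain_for_snr(signal_vrms=1.0, noise_vrms=1.0, snr_db=20.0))   # 0.1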

  9. High precision triangular waveform generator

    DOEpatents

    Mueller, Theodore R.

    1983-01-01

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  10. High-precision triangular-waveform generator

    DOEpatents

    Mueller, T.R.

    1981-11-14

    An ultra-linear ramp generator having separately programmable ascending and descending ramp rates and voltages is provided. Two constant current sources provide the ramp through an integrator. Switching of the current at current source inputs rather than at the integrator input eliminates switching transients and contributes to the waveform precision. The triangular waveforms produced by the waveform generator are characterized by accurate reproduction and low drift over periods of several hours. The ascending and descending slopes are independently selectable.

  11. Effect of ambient light and age-related macular degeneration on precision walking.

    PubMed

    Alexander, M Scott; Lajoie, Kim; Neima, David R; Strath, Robert A; Robinovitch, Stephen N; Marigold, Daniel S

    2014-08-01

    To determine how age-related macular degeneration (AMD) and changes in ambient light affect the control of foot placement while walking. Ten older adults with AMD and 11 normal-sighted controls performed a precision walking task under normal (∼600 lx), dim (∼0.7 lx), and after a sudden reduction (∼600 to 0.7 lx) of light. The precision walking task involved subjects walking and stepping to the center of a series of irregularly spaced, low-contrast targets. Habitual visual acuity and contrast sensitivity and visual field function were also assessed. There were no differences between groups when performing the walking task in normal light (p > 0.05). In reduced lighting, older adults with AMD were less accurate and more variable when stepping across the targets compared to controls (p < 0.05). A sudden reduction of light proved the most challenging for this population. In the AMD group, contrast sensitivity and visual acuity were not significantly correlated with walking performance. Visual field thresholds in the AMD group were only associated with greater foot placement error and variability in the dim light walking condition (r = -0.69 to -0.87, p < 0.05). While walking performance is similar between groups in normal light, poor ambient lighting results in decreased foot placement accuracy in older adults with AMD. Improper foot placement while walking can lead to a fall and possible injury. Thus, to improve the mobility of those with AMD, strategies to enhance the environment in reduced lighting situations are necessary.

  12. Analysis of feline and canine allergen components in patients sensitized to pets.

    PubMed

    Ukleja-Sokołowska, Natalia; Gawrońska-Ukleja, Ewa; Żbikowska-Gotz, Magdalena; Socha, Ewa; Lis, Kinga; Sokołowski, Łukasz; Kuźmiński, Andrzej; Bartuzi, Zbigniew

    2016-01-01

    Component-resolved allergen diagnosis allows for a precise evaluation of the sensitization profiles of patients sensitized to felines and canines. An accurate interpretation of these results allows better insight into the evolution of a given patient's sensitizations, and allows for a more precise evaluation of their prognoses. 70 patients (42 women and 28 men, aged 18-65, with an average age of 35.5) with a positive feline or canine allergy diagnosis were included in the research group. 30 patients with a negative allergy diagnosis were included in the control group. The total IgE levels of all patients with allergies as well as their allergen-specific IgE to feline and canine allergens were measured. Specific IgE levels to canine (Can f 1, Can f 2, Can f 3, Can f 5) and feline (Fel d 1, Fel d 2, Fel d 4) allergen components were also measured with the use of the ImmunoCap method. Monosensitization to only one canine or feline component was found in 30% of patients. As predicted, the main feline allergen was Fel d 1, which sensitized as many as 93.9% of patients sensitized to felines. Among 65 patients sensitized to at least one feline component, for 30 patients (46.2%) the only sensitizing feline component was Fel d 1. Only 19 patients in that group (63.3%) were not simultaneously sensitized to dogs, while 11 (36.7%), the isolated sensitization to feline Fel d 1 notwithstanding, displayed concurrent sensitizations to one of the canine allergen components. Fel d 4 sensitized 49.2% of the research group. 64.3% of patients sensitized to canine components had heightened levels of specific IgE to Can f 1. Monosensitization in that group occurred for 32.1% of the patients. Sensitization to Can f 5 was observed among 52.4% of the patients. Concurrent sensitizations to a few allergen components, not only cross-reactive but also originating in different protein families, are a significant problem for patients sensitized to animals.

  13. A highly sensitive and accurate gene expression analysis by sequencing ("bead-seq") for a single cell.

    PubMed

    Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki

    2015-02-15

    Analyses of gene expressions in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate method by sequencing (called "bead-seq") to obtain a whole gene expression profile for a single cell is proposed. A key feature of the method is to use a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents in a sample preparation process. By adding the washing steps, the next steps can be carried out under the optimal conditions without losing cDNAs. Error sources were carefully evaluated to conclude that the first several steps were the key steps. It is demonstrated that bead-seq is superior to the conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Bit Grooming: Statistically accurate precision-preserving quantization with compression, evaluated in the netCDF operators (NCO, v4.4.8+)

    DOE PAGES

    Zender, Charles S.

    2016-09-19

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25–80 and 5–65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1–5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1–2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic
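
    The alternating shave/set idea can be sketched directly on the IEEE float32 bit pattern. The mapping from decimal digits to retained mantissa bits and the guard-bit allowance below are assumptions for illustration; this is not the NCO implementation.

      import numpy as np

      def bit_groom(values, nsd=3):
          """Illustrative Bit Grooming sketch: alternately shave (zero) and set (one) the
          low-order mantissa bits of float32 values, keeping roughly enough bits for
          nsd significant decimal digits."""
          keep = int(np.ceil(nsd * np.log2(10))) + 3        # assumed guard-bit allowance
          drop = 23 - keep                                   # float32 carries 23 mantissa bits
          if drop <= 0:
              return values.astype(np.float32)
          bits = values.astype(np.float32).view(np.uint32).copy()
          shave_mask = np.uint32((0xFFFFFFFF >> drop) << drop)
          set_mask = np.uint32((1 << drop) - 1)
          bits[0::2] &= shave_mask                           # even-indexed values: shave
          bits[1::2] |= set_mask                             # odd-indexed values: set
          return bits.view(np.float32)

      x = np.linspace(0.0, 1.0, 8, dtype=np.float32)
      print(bit_groom(x, nsd=3))   # quantized copy that lossless DEFLATE then compresses well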

  15. Bit Grooming: statistically accurate precision-preserving quantization with compression, evaluated in the netCDF Operators (NCO, v4.4.8+)

    NASA Astrophysics Data System (ADS)

    Zender, Charles S.

    2016-09-01

    Geoscientific models and measurements generate false precision (scientifically meaningless data bits) that wastes storage space. False precision can mislead (by implying noise is signal) and be scientifically pointless, especially for measurements. By contrast, lossy compression can be both economical (save space) and heuristic (clarify data limitations) without compromising the scientific integrity of data. Data quantization can thus be appropriate regardless of whether space limitations are a concern. We introduce, implement, and characterize a new lossy compression scheme suitable for IEEE floating-point data. Our new Bit Grooming algorithm alternately shaves (to zero) and sets (to one) the least significant bits of consecutive values to preserve a desired precision. This is a symmetric, two-sided variant of an algorithm sometimes called Bit Shaving that quantizes values solely by zeroing bits. Our variation eliminates the artificial low bias produced by always zeroing bits, and makes Bit Grooming more suitable for arrays and multi-dimensional fields whose mean statistics are important. Bit Grooming relies on standard lossless compression to achieve the actual reduction in storage space, so we tested Bit Grooming by applying the DEFLATE compression algorithm to bit-groomed and full-precision climate data stored in netCDF3, netCDF4, HDF4, and HDF5 formats. Bit Grooming reduces the storage space required by initially uncompressed and compressed climate data by 25-80 and 5-65 %, respectively, for single-precision values (the most common case for climate data) quantized to retain 1-5 decimal digits of precision. The potential reduction is greater for double-precision datasets. When used aggressively (i.e., preserving only 1-2 digits), Bit Grooming produces storage reductions comparable to other quantization techniques such as Linear Packing. Unlike Linear Packing, whose guaranteed precision rapidly degrades within the relatively narrow dynamic range of values that

  16. Drilling Precise Orifices and Slots

    NASA Technical Reports Server (NTRS)

    Richards, C. W.; Seidler, J. E.

    1983-01-01

    Reaction control thruster injectors require precisely machined orifices and slots. The tooling setup consists of a rotary table, a numerical control system, and a torque-sensitive drill press; these components are used to drill the oxidizer orifices, while an electric discharge machine drills the fuel-feed orifices. The device automates the production of identical parts, so several are completed in less time than previously.

  17. Advancing Precision Nuclear Medicine and Molecular Imaging for Lymphoma.

    PubMed

    Wright, Chadwick L; Maly, Joseph J; Zhang, Jun; Knopp, Michael V

    2017-01-01

    PET with fluorodeoxyglucose F 18 (18F FDG-PET) is a meaningful biomarker for the detection, targeted biopsy, and treatment of lymphoma. This article reviews the evolution of 18F FDG-PET as a putative biomarker for lymphoma and addresses the current capabilities, challenges, and opportunities to enable precision-medicine practices for lymphoma. Precision nuclear medicine is driven by new imaging technologies and methodologies that detect malignant disease more accurately. Although quantitative response assessment is currently limited, these technologies will enable more precise metabolic mapping with much higher-definition image detail and may thus make 18F FDG-PET a robust and valid quantitative response-assessment methodology. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    Precise meteoroid orbits from video observations are scarce, as most meteor stations use off-the-shelf CCD cameras; only a few orbits with precise semi-major axes are available from film photographic methods. Precise orbits are necessary to compute the dust flux in the Earth's vicinity and to estimate the ejection time of meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits for these meteoroids. The spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and the temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.

  19. Value of Sample Return and High Precision Analyses: Need for A Resource of Compelling Stories, Metaphors and Examples for Public Speakers

    NASA Technical Reports Server (NTRS)

    Allton, J. H.

    2017-01-01

    There is widespread agreement among planetary scientists that much of what we know about the workings of the solar system comes from accurate, high-precision measurements on returned samples. Precision is a function of the number of atoms the instrumentation is able to count. Accuracy depends on the calibration or standardization technique. For Genesis, the solar wind sample return mission, acquiring enough atoms to ensure precise SW measurements and then accurately quantifying those measurements were steps known to be non-trivial pre-flight. The difficulty of precise and accurate measurements on returned samples, and why they cannot be made remotely, is not communicated well to the public. In part, this is because "high precision" is abstract and error bars are not very exciting topics. This paper explores ideas for collecting and compiling compelling metaphors and colorful examples as a resource for planetary science public speakers.
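
    One way to make the atom-counting point concrete is the usual counting-statistics rule of thumb: if the uncertainty on N counted atoms scales as the square root of N (a Poisson assumption, not stated in the abstract), the relative precision improves as 1/sqrt(N). A small illustrative calculation:

```python
import math

# Relative precision from counting statistics, assuming Poisson (sqrt-N) noise.
for n_atoms in (1e4, 1e6, 1e8, 1e10):
    rel_precision = 1.0 / math.sqrt(n_atoms)
    print(f"N = {n_atoms:.0e} counted atoms -> relative precision ~ {rel_precision:.3%}")
```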

  20. MASS MEASUREMENTS BY AN ACCURATE AND SENSITIVE SELECTED ION RECORDING TECHNIQUE

    EPA Science Inventory

    Trace-level components of mixtures were successfully identified or confirmed by mass spectrometric accurate mass measurements, made at high resolution with selected ion recording, using GC and LC sample introduction. Measurements were made at 20 000 or 10 000 resolution, respecti...

  1. The Precise Repositioning Instrument for Genioplasty and a Three-Dimensional Printing Technique for Treatment of Complex Facial Asymmetry.

    PubMed

    Wang, Lin; Tian, Dan; Sun, Xiumei; Xiao, Yanju; Chen, Li; Wu, Guomin

    2017-08-01

    Facial asymmetry is very common in maxillofacial deformities, and accurate reconstruction is difficult to achieve. With the help of 3D-printed models and surgical templates, the osteotomy line and the amount of bone grinding can be determined accurately, and with the precise repositioning instrument the repositioning in genioplasty can be accurate and quick. In this study, we present a three-dimensional printing technique and a precise repositioning instrument to guide the osteotomy and repositioning, and illustrate their feasibility and validity. Eight patients with complex facial asymmetries were studied. A precise 3D-printed model was obtained for each patient, and the preoperative design and surgical templates were made from it. The surgical templates and the precise repositioning instrument were used to obtain an accurate osteotomy and repositioning during the operation. Postoperative measurements were made from computed tomographic data, including chin-point deviation and the symmetry of the mandible evaluated by 3D curve functions. All patients obtained satisfactory esthetic results, and no recurrences occurred during follow-up. The results showed clinically acceptable precision for the mandible and chin: the mean ± SD of the ICC between R-Post and L-Post was 0.973 ± 0.007, and the mean ± SD of chin-point deviation 6 months after the operation was 0.63 ± 0.19 mm. These results suggest that the three-dimensional printing technique and the precise repositioning instrument can aid in better operation design and more accurate manipulation in orthognathic surgery for complex facial asymmetry. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .

  2. Daily FOUR score assessment provides accurate prognosis of long-term outcome in out-of-hospital cardiac arrest.

    PubMed

    Weiss, N; Venot, M; Verdonk, F; Chardon, A; Le Guennec, L; Llerena, M C; Raimbourg, Q; Taldir, G; Luque, Y; Fagon, J-Y; Guerot, E; Diehl, J-L

    2015-05-01

    The accurate prediction of outcome after out-of-hospital cardiac arrest (OHCA) is of major importance. The recently described Full Outline of UnResponsiveness (FOUR) score is well adapted to mechanically ventilated patients and does not depend on verbal response. Our objective was to evaluate the ability of the FOUR score, assessed by intensivists, to accurately predict outcome in OHCA. We prospectively identified patients admitted for OHCA with a Glasgow Coma Scale below 8. Neurological assessment was performed daily. Outcome was evaluated at 6 months using the Glasgow-Pittsburgh Cerebral Performance Categories (GP-CPC). Eighty-five patients were included. At 6 months, 19 patients (22%) had a favorable outcome, GP-CPC 1-2, and 66 (78%) had an unfavorable outcome, GP-CPC 3-5. Compared to both brainstem responses at day 3 and the evolution of the Glasgow Coma Scale, the evolution of the FOUR score over the first three days predicted unfavorable outcome more precisely. Thus, absence of improvement or worsening of the FOUR score from day 1 to day 3 had 0.88 (0.79-0.97) specificity, 0.71 (0.66-0.76) sensitivity, 0.94 (0.84-1.00) PPV and 0.54 (0.49-0.59) NPV for predicting unfavorable outcome. Similarly, a FOUR brainstem-response score of 0 at day 3 had 0.94 (0.89-0.99) specificity, 0.60 (0.50-0.70) sensitivity, 0.96 (0.92-1.00) PPV and 0.47 (0.37-0.57) NPV for predicting unfavorable outcome. The absence of improvement or worsening of the FOUR score from day 1 to day 3, evaluated by intensivists, provides an accurate prognosis of poor neurological outcome in OHCA. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
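
    The specificity, sensitivity, PPV, and NPV quoted above are the standard 2x2 diagnostic-test metrics. The sketch below computes them from a confusion table; the counts are hypothetical and are not taken from the study.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard metrics for a test predicting unfavorable outcome (positive = predicted poor)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical counts for illustration only.
print(diagnostic_metrics(tp=47, fp=2, fn=19, tn=17))
```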

  3. Sensitive, accurate and rapid detection of trace aliphatic amines in environmental samples with ultrasonic-assisted derivatization microextraction using a new fluorescent reagent for high performance liquid chromatography.

    PubMed

    Chen, Guang; Liu, Jianjun; Liu, Mengge; Li, Guoliang; Sun, Zhiwei; Zhang, Shijuan; Song, Cuihua; Wang, Hua; Suo, Yourui; You, Jinmao

    2014-07-25

    A new fluorescent reagent, 1-(1H-imidazol-1-yl)-2-(2-phenyl-1H-phenanthro[9,10-d]imidazol-1-yl)ethanone (IPPIE), is synthesized, and a simple pretreatment based on ultrasonic-assisted derivatization microextraction (UDME) with IPPIE is proposed for the selective derivatization of 12 aliphatic amines (C1: methylamine to C12: dodecylamine) in complex matrix samples (irrigation water, river water, waste water, cultivated soil, riverbank soil and riverbed soil). Under the optimal experimental conditions (solvent: ACN-HCl, catalyst: none, molar ratio: 4.3, time: 8 min and temperature: 80°C), a micro amount of sample (40 μL; 5 mg) can be pretreated in only 10 min, with no preconcentration, evaporation or other additional manual operations required. The interfering substances (aromatic amines, aliphatic alcohols and phenols) give derivatization yields of <5%, causing insignificant matrix effects (<4%). IPPIE-analyte derivatives are separated by high performance liquid chromatography (HPLC) and quantified by fluorescence detection (FD). Very low instrumental detection limits (IDL: 0.66-4.02 ng/L) and method detection limits (MDL: 0.04-0.33 ng/g; 5.96-45.61 ng/L) are achieved. Analytes are further identified from adjacent peaks by on-line ion trap mass spectrometry (MS), thereby avoiding additional operations for impurities. With this UDME-HPLC-FD-MS method, the accuracy (-0.73 to 2.12%), precision (intra-day: 0.87-3.39%; inter-day: 0.16-4.12%), recovery (97.01-104.10%) and sensitivity were significantly improved. Successful applications to environmental samples demonstrate the superiority of this method for the sensitive, accurate and rapid determination of trace aliphatic amines in micro amounts of complex samples. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Sex differences in accuracy and precision when judging time to arrival: data from two Internet studies.

    PubMed

    Sanders, Geoff; Sinclair, Kamila

    2011-12-01

    We report two Internet studies that investigated sex differences in the accuracy and precision of judging time to arrival. We used accuracy to mean the ability to match the actual time to arrival and precision to mean the consistency with which each participant made their judgments. Our task was presented as a computer game in which a toy UFO moved obliquely towards the participant through a virtual three-dimensional space en route to a docking station. The UFO disappeared before docking and participants pressed their space bar at the precise moment they thought the UFO would have docked. Study 1 showed it was possible to conduct quantitative studies of spatiotemporal judgments in virtual reality via the Internet and confirmed reports that men are more accurate because women underestimate, but found no difference in precision measured as intra-participant variation. Study 2 repeated Study 1 with five additional presentations of one condition to provide a better measure of precision. Again, men were more accurate than women but there were no sex differences in precision. However, within the coincidence-anticipation timing (CAT) literature, of those studies that report sex differences, a majority found that males are both more accurate and more precise than females. Noting that many CAT studies report no sex differences, we discuss appropriate interpretations of such null findings. While acknowledging that CAT performance may be influenced by experience, we suggest that the sex difference may have originated among our ancestors with the evolutionary selection of men for hunting and women for gathering.
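
    The accuracy/precision distinction used in these studies maps onto two simple statistics per participant: the mean signed error (bias, i.e., accuracy) and the standard deviation of the errors (intra-participant variation, i.e., precision). The snippet below illustrates this with made-up judgments; it is not the authors' analysis code.

```python
import numpy as np

def accuracy_and_precision(judged_times, actual_time):
    """Accuracy = mean signed error (negative = underestimation);
    precision = SD of a participant's errors (consistency)."""
    errors = np.asarray(judged_times) - actual_time
    return errors.mean(), errors.std(ddof=1)

judged = [2.9, 3.1, 2.8, 3.0, 2.7]   # one participant's judged times to docking (s)
bias, spread = accuracy_and_precision(judged, actual_time=3.2)
print(f"bias = {bias:+.2f} s, precision (SD) = {spread:.2f} s")
```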

  5. Precise optical observation of 0.5-GPa shock waves in condensed materials

    NASA Astrophysics Data System (ADS)

    Nagayama, Kunihito; Mori, Yasuhito

    1999-06-01

    A precision optical observation method was developed to study impact-generated high-pressure shock waves in condensed materials. The method makes it possible to sensitively detect shock waves at relatively low shock stresses of around 0.5 GPa. Its principle is the use of total internal reflection by triangular prisms placed on the free surface of a target assembly: when a plane shock wave arrives at the free surface, the light reflected from the prisms is extinguished instantaneously, because the total internal reflection changes to reflection governed by the micron-scale roughness of the free surface after the shock arrival. The shock arrival at the bottom face of the prisms can be detected by two kinds of methods, a photographic method and a gauge method. The photographic method is an inclined-prism method using a high-speed streak camera; the shock velocity and the shock tilt angle can be estimated accurately from the resulting streak photograph. In the gauge method, an in-material PVDF stress gauge is combined with an optical prism pin: the PVDF gauge electrically records the stress profile behind the shock-wave front, and Hugoniot data can be precisely measured by combining the prism pin with the PVDF gauge.

  6. Precision and repeatability of the Optotrak 3020 motion measurement system.

    PubMed

    States, R A; Pappas, E

    2006-01-01

    Several motion analysis systems are used by researchers to quantify human motion and to perform accurate surgical procedures. The Optotrak 3020 is one of these systems, yet despite its widespread use there is no published information on its precision and repeatability. We used a repeated-measures design to evaluate the precision and repeatability of the Optotrak 3020 by measuring distance and angle in three sessions, at four distances and under three conditions (motion, static vertical, and static tilted). Precision and repeatability were found to be excellent for both angle and distance, although they decreased with increasing distance from the sensors and with tilt from the plane of the sensors. Motion did not have a significant effect on the precision of the measurements. In conclusion, the measurement error of the Optotrak is minimal. Further studies are needed to evaluate its precision and repeatability under human motion conditions.

  7. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  8. Platform Precision Autopilot Overview and Flight Test Results

    NASA Technical Reports Server (NTRS)

    Lin, V.; Strovers, B.; Lee, J.; Beck, R.

    2008-01-01

    The Platform Precision Autopilot is an autopilot system interfaced to an instrument landing system, developed to enable an aircraft to repeatedly fly nearly the same trajectory hours, days, or weeks later. The Platform Precision Autopilot uses a novel design to interface with a NASA Gulfstream III jet by imitating the output of an instrument landing system approach. This technique minimizes, as much as possible, modifications to the baseline Gulfstream III jet and retains the safety features of the aircraft autopilot. The Platform Precision Autopilot requirement is to fly within a 5-m (16.4-ft) radius tube for distances up to 200 km (108 nmi) in the presence of light turbulence for at least 90 percent of the time. This capability allows precise repeat-pass interferometry for the Uninhabited Aerial Vehicle Synthetic Aperture Radar program, whose primary objective is to develop a miniaturized, polarimetric, L-band synthetic aperture radar. Precise navigation is achieved using an accurate differential global positioning system developed by the Jet Propulsion Laboratory. Flight-testing has demonstrated the ability of the Platform Precision Autopilot to control the aircraft within the specified tolerance greater than 90 percent of the time in the presence of aircraft system noise and nonlinearities, constant pilot throttle adjustments, and light turbulence.

  9. The precise and accurate production of millimetric water droplets using a superhydrophobic generating apparatus

    NASA Astrophysics Data System (ADS)

    Wood, Michael J.; Aristizabal, Felipe; Coady, Matthew; Nielson, Kent; Ragogna, Paul J.; Kietzig, Anne-Marie

    2018-02-01

    The production of millimetric liquid droplets has importance in a wide range of applications both in the laboratory and industrially. As such, much effort has been put forth to devise methods to generate these droplets on command in a manner which results in high diameter accuracy and precision, well-defined trajectories followed by successive droplets and low oscillations in droplet shape throughout their descents. None of the currently employed methods of millimetric droplet generation described in the literature adequately addresses all of these desired droplet characteristics. The reported methods invariably involve the cohesive separation of the desired volume of liquid from the bulk supply in the same step that separates the single droplet from the solid generator. We have devised a droplet generation device which separates the desired volume of liquid within a tee-apparatus in a step prior to the generation of the droplet which has yielded both high accuracy and precision of the diameters of the final droplets produced. Further, we have engineered a generating tip with extreme antiwetting properties which has resulted in reduced adhesion forces between the liquid droplet and the solid tip. This has yielded the ability to produce droplets of low mass without necessitating different diameter generating tips or the addition of surfactants to the liquid, well-defined droplet trajectories, and low oscillations in droplet volume. The trajectories and oscillations of the droplets produced have been assessed and presented quantitatively in a manner that has been lacking in the current literature.

  10. Precision forging technology for aluminum alloy

    NASA Astrophysics Data System (ADS)

    Deng, Lei; Wang, Xinyun; Jin, Junsong; Xia, Juchen

    2018-03-01

    Aluminum alloy is a preferred metal material for lightweight part manufacturing in the aerospace, automobile, and weapon industries due to its good physical properties, such as low density, high specific strength, and good corrosion resistance. However, during forging processes, underfilling, folding, broken streamlines, cracks, coarse grains, and other macro- or microdefects are easily generated because of the deformation characteristics of aluminum alloys, including a narrow forgeable temperature region, fast heat dissipation to dies, strong adhesion, high strain-rate sensitivity, and large flow resistance. These issues severely restrict the ability of forged parts to achieve precise shapes and enhanced properties. In this paper, progress in precision forging technologies for aluminum alloy parts is reviewed. Several advanced precision forging technologies have been developed, including closed die forging, isothermal die forging, local loading forging, metal flow forging with relief cavity, auxiliary force or vibration loading, casting-forging hybrid forming, and stamping-forging hybrid forming. High-precision aluminum alloy parts can be realized by controlling the forging processes and parameters or by combining precision forging technologies with other forming technologies. The development of these technologies is beneficial to promoting the application of aluminum alloys in the manufacturing of lightweight parts.

  11. Precision measurements of the RSA method using a phantom model of hip prosthesis.

    PubMed

    Mäkinen, Tatu J; Koort, Jyri K; Mattila, Kimmo T; Aro, Hannu T

    2004-04-01

    Radiostereometric analysis (RSA) has become one of the recommended techniques for pre-market evaluation of new joint implant designs. In this study we evaluated the effect of repositioning of the X-ray tubes and the phantom model on the precision of the RSA method. In the precision measurements, we utilized mean error of rigid body fitting (ME) values as an internal control for the examinations. The ME value characterizes relative motion among the markers within each rigid body and is conventionally used to detect loosening of a bone marker. Three experiments, each consisting of 10 double examinations, were performed. In the first experiment, the X-ray tubes and the phantom model were not repositioned between double examinations. In experiments two and three, the X-ray tubes were repositioned between double examinations. In addition, the position of the phantom model was changed in experiment three. Results showed significant differences in 2 of 12 comparisons when evaluating the translation and rotation of the prosthetic components. Repositioning procedures increased ME values, mimicking deformation of the rigid-body segments. Thus, the ME value seemed to be a more sensitive parameter than migration values in this study design. These results confirm the importance of a standardized radiographic technique and accurate patient positioning for RSA measurements. Standardization and calibration procedures should be performed with phantom models in order to avoid unnecessary radiation dose to the patients. The present model provides the means to establish and follow the intra-laboratory precision of the RSA method. The model is easily applicable in any research unit and allows comparison of precision values among the laboratories of multi-center trials.

  12. High-precision dosimetry for radiotherapy using the optically stimulated luminescence technique and thin Al2O3:C dosimeters.

    PubMed

    Yukihara, E G; Yoshimura, E M; Lindstrom, T D; Ahmad, S; Taylor, K K; Mardirossian, G

    2005-12-07

    The potential of using the optically stimulated luminescence (OSL) technique with aluminium oxide (Al2O3:C) dosimeters for a precise and accurate estimation of absorbed doses delivered by high-energy photon beams was investigated. This study demonstrates the high reproducibility of the OSL measurements and presents a preliminary determination of the depth-dose curve in water for a 6 MV photon beam from a linear accelerator. The uncertainty of a single OSL measurement, estimated from the variance of a large sample of dosimeters irradiated with the same dose, was 0.7%. In the depth-dose curve obtained using the OSL technique, the difference between the measured and expected doses was ≤0.7% for depths between 1.5 and 10 cm, and 1.1% for a depth of 15 cm. The readout procedure includes a normalization of the response of the dosimeter with respect to a reference dose in order to eliminate variations in the dosimeter mass, dosimeter sensitivity, and the reader's sensitivity. This may be relevant for quality assurance programmes, since it simplifies the requirements in terms of personnel training to achieve the precision and accuracy necessary for radiotherapy applications. We concluded that the OSL technique has the potential to be reliably incorporated in quality assurance programmes and dose verification.

  13. Precision Pointing Control to and Accurate Target Estimation of a Non-Cooperative Vehicle

    NASA Technical Reports Server (NTRS)

    VanEepoel, John; Thienel, Julie; Sanner, Robert M.

    2006-01-01

    In 2004, NASA began investigating a robotic servicing mission for the Hubble Space Telescope (HST). Such a mission would not only require estimates of the HST attitude and rates in order to achieve capture by the proposed Hubble Robotic Vehicle (HRV), but also precision control to achieve the desired rate and maintain the orientation to successfully dock with HST. To generalize the situation, HST is the target vehicle and HRV is the chaser. This work presents a nonlinear approach for estimating the body rates of a non-cooperative target vehicle, and coupling this estimation to a control scheme. Non-cooperative in this context relates to the target vehicle no longer having the ability to maintain attitude control or transmit attitude knowledge.

  14. Accuracy, precision, usability, and cost of portable silver test methods for ceramic filter factories.

    PubMed

    Meade, Rhiana D; Murray, Anna L; Mittelman, Anjuliee M; Rayner, Justine; Lantagne, Daniele S

    2017-02-01

    Locally manufactured ceramic water filters are one effective household drinking water treatment technology. During manufacturing, silver nanoparticles or silver nitrate are applied to prevent microbiological growth within the filter and increase bacterial removal efficacy. Currently, there is no recommendation for manufacturers to test silver concentrations of application solutions or filtered water. We identified six commercially available silver test strips, kits, and meters, and evaluated them by: (1) measuring in quintuplicate six samples from 100 to 1,000 mg/L (application range) and six samples from 0.0 to 1.0 mg/L (effluent range) of silver nanoparticles and silver nitrate to determine accuracy and precision; (2) conducting volunteer testing to assess ease-of-use; and (3) comparing costs. We found no method accurately detected silver nanoparticles, and accuracy ranged from 4 to 91% measurement error for silver nitrate samples. Most methods were precise, but only one method could test both application and effluent concentration ranges of silver nitrate. Volunteers considered test strip methods easiest. The cost for 100 tests ranged from 36 to 1,600 USD. We found no currently available method accurately and precisely measured both silver types at reasonable cost and ease-of-use, thus these methods are not recommended to manufacturers. We recommend development of field-appropriate methods that accurately and precisely measure silver nanoparticle and silver nitrate concentrations.

  15. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints.

    PubMed

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-12-30

    Point cloud registration is a key process in multi-view 3D measurements; its precision affects the measurement precision directly. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method.
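
    The core numerical step, solving for a rigid transformation from weighted corresponding points (here, points derived from the sphere features), can be sketched with a weighted Kabsch/Umeyama solution. This is a generic illustration of that step under assumed inputs, not the authors' exact formulation or weight function.

```python
import numpy as np

def weighted_rigid_transform(src, dst, weights):
    """Weighted least-squares rotation R and translation t with dst ~ R @ src + t."""
    w = np.asarray(weights, float)[:, None]
    src_c = (w * src).sum(0) / w.sum()          # weighted centroids
    dst_c = (w * dst).sum(0) / w.sum()
    H = ((src - src_c) * w).T @ (dst - dst_c)   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical sphere-centre correspondences with higher weight on low-noise features.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src @ np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]]).T + [0.5, 0.2, -0.1]
R, t = weighted_rigid_transform(src, dst, weights=[1.0, 1.0, 0.5, 0.5])
print(np.round(R, 3), np.round(t, 3))
```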

  16. High-Precision Registration of Point Clouds Based on Sphere Feature Constraints

    PubMed Central

    Huang, Junhui; Wang, Zhao; Gao, Jianmin; Huang, Youping; Towers, David Peter

    2016-01-01

    Point cloud registration is a key process in multi-view 3D measurements; its precision affects the measurement precision directly. However, for point clouds with non-overlapping areas or curvature-invariant surfaces, it is difficult to achieve high precision. A high-precision registration method based on sphere feature constraints is presented in this paper to overcome this difficulty. Known sphere features with constraints are used to construct virtual overlapping areas, which provide more accurate corresponding point pairs and reduce the influence of noise. The transformation parameters between the registered point clouds are then solved by an optimization method with a weight function. In this way, the impact of large noise in the point clouds is reduced and high-precision registration is achieved. Simulation and experiments validate the proposed method. PMID:28042846

  17. Ion chromatography for the precise analysis of chloride and sodium in sweat for the diagnosis of cystic fibrosis.

    PubMed

    Doorn, J; Storteboom, T T R; Mulder, A M; de Jong, W H A; Rottier, B L; Kema, I P

    2015-07-01

    Measurement of chloride in sweat is an essential part of the diagnostic algorithm for cystic fibrosis. The lack of sensitivity and reproducibility of current methods led us to develop an ion chromatography/high-performance liquid chromatography (IC/HPLC) method suitable for the analysis of both chloride and sodium in small volumes of sweat. The precision, linearity and limit of detection of the in-house developed IC/HPLC method were established. A method comparison between the newly developed IC/HPLC method and the traditional Chlorocounter was performed, and trueness was determined using Passing-Bablok method comparison with external quality assurance material (Royal College of Pathologists of Australasia). Precision and linearity fulfill the criteria established by UK guidelines and are comparable with inductively coupled plasma-mass spectrometry methods. Passing-Bablok analysis demonstrated excellent correlation between IC/HPLC measurements and external quality assessment target values, for both chloride and sodium. With a limit of quantitation of 0.95 mmol/L, our method is suitable for the analysis of small amounts of sweat and can thus be used in combination with the Macroduct collection system. Although a chromatographic application results in a somewhat more expensive test than a Chlorocounter test, more accurate measurements are achieved. In addition, simultaneous measurement of sodium concentrations will result in better detection of false positives, less test repetition, and thus faster, more accurate and effective diagnosis. The described IC/HPLC method, therefore, provides a precise, relatively cheap and easy-to-handle application for the analysis of both chloride and sodium in sweat. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  18. Bit-Grooming: Shave Your Bits with Razor-sharp Precision

    NASA Astrophysics Data System (ADS)

    Zender, C. S.; Silver, J.

    2017-12-01

    Lossless compression can reduce climate data storage by 30-40%. Further reduction requires lossy compression that also reduces precision. Fortunately, geoscientific models and measurements generate false precision (scientifically meaningless data bits) that can be eliminated without sacrificing scientifically meaningful data. We introduce Bit Grooming, a lossy compression algorithm that removes the bloat due to false precision, those bits and bytes beyond the meaningful precision of the data. Bit Grooming is statistically unbiased, applies to all floating point numbers, and is easy to use. Bit Grooming reduces geoscience data storage requirements by 40-80%. We compared Bit Grooming to competitors Linear Packing, Layer Packing, and GRIB2/JPEG2000. The other compression methods have the edge in terms of compression, but Bit Grooming is the most accurate and certainly the most usable and portable. Bit Grooming provides flexible and well-balanced solutions to the trade-offs among compression, accuracy, and usability required by lossy compression. Geoscientists could reduce their long-term storage costs, and show leadership in the elimination of false precision, by adopting Bit Grooming.
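
    Since Bit Grooming only pays off through a subsequent lossless coder, a quick way to see the effect is to DEFLATE-compress the same array before and after grooming. The sketch below uses zlib on a synthetic field; the variable values, the 12-bit cutoff, and the resulting ratios are illustrative, not results from the study.

```python
import numpy as np, zlib

def groom(a, keep_bits=12):
    """Alternate shave/set of trailing float32 mantissa bits (cf. the sketch above)."""
    drop = 23 - keep_bits
    b = a.astype(np.float32).view(np.uint32).copy()
    b[0::2] &= np.uint32((0xFFFFFFFF << drop) & 0xFFFFFFFF)
    b[1::2] |= np.uint32((1 << drop) - 1)
    return b.view(np.float32)

rng = np.random.default_rng(0)
field = (280.0 + 40.0 * rng.random(1_000_000)).astype(np.float32)  # synthetic "temperature" field
raw_ratio = len(zlib.compress(field.tobytes(), 9)) / field.nbytes
groomed_ratio = len(zlib.compress(groom(field).tobytes(), 9)) / field.nbytes
print(f"DEFLATE alone: {raw_ratio:.0%} of original; groomed + DEFLATE: {groomed_ratio:.0%}")
```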

  19. Precision engineering for astronomy: historical origins and the future revolution in ground-based astronomy.

    PubMed

    Cunningham, Colin; Russell, Adrian

    2012-08-28

    Since the dawn of civilization, the human race has pushed technology to the limit to study the heavens in ever-increasing detail. As astronomical instruments have evolved from those built by Tycho Brahe in the sixteenth century, through Galileo and Newton in the seventeenth, to the present day, astronomers have made ever more precise measurements. To do this, they have pushed the art and science of precision engineering to extremes. Some of the critical steps in the evolution of precision engineering are described, from the first telescopes to the modern generation of telescopes and ultra-sensitive instruments that need a combination of precision manufacturing, metrology and accurate positioning systems. In the future, precision-engineered technologies such as those emerging from the photonics industries may enable further progress in enhancing the capabilities of instruments, while potentially reducing their size and cost. In the modern era, there has been a revolution in astronomy leading to ever-increasing light-gathering capability. Today, the European Southern Observatory (ESO) is at the forefront of this revolution, building observatories on the ground that are set to transform our view of the universe. At an elevation of 5000 m in the Atacama Desert of northern Chile, the Atacama Large Millimetre/submillimetre Array (ALMA) is nearing completion. ALMA is the most powerful radio observatory ever built and is being constructed by a global partnership from Europe, North America and East Asia. In the optical/infrared part of the spectrum, the latest project for ESO is even more ambitious: the European Extremely Large Telescope, a giant 40 m class telescope that will also be located in Chile and which will give the most detailed view of the universe so far.

  20. Climate sensitivity, sea level and atmospheric carbon dioxide

    PubMed Central

    Hansen, James; Sato, Makiko; Russell, Gary; Kharecha, Pushker

    2013-01-01

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3±1°C for a 4 W m−2 CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3–4°C for a 4 W m−2 CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change. PMID:24043864
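
    A quick back-of-envelope conversion of the quoted fast-feedback sensitivity: 3 °C per 4 W m−2 is 0.75 °C per W m−2, which corresponds to roughly 2.8 °C for the canonical ~3.7 W m−2 forcing of doubled CO2 (the 3.7 W m−2 figure is a standard literature value, not taken from this abstract).

```python
# Illustrative arithmetic only; the forcing per CO2 doubling (3.7 W/m^2) is a standard
# literature value and does not come from the abstract above.
sensitivity_per_wm2 = 3.0 / 4.0                      # degC per (W/m^2)
forcing_2x_co2 = 3.7                                 # W/m^2
print(f"{sensitivity_per_wm2:.2f} degC per W/m^2 -> "
      f"~{sensitivity_per_wm2 * forcing_2x_co2:.1f} degC per CO2 doubling")
```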

  1. Climate Sensitivity, Sea Level, and Atmospheric Carbon Dioxide

    NASA Technical Reports Server (NTRS)

    Hansen, James; Sato, Makiko; Russell, Gary; Kharecha, Pushker

    2013-01-01

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3+/-1deg C for a 4 W/sq m CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3-4deg C for a 4 W/sq m CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change.

  2. Climate sensitivity, sea level and atmospheric carbon dioxide.

    PubMed

    Hansen, James; Sato, Makiko; Russell, Gary; Kharecha, Pushker

    2013-10-28

    Cenozoic temperature, sea level and CO2 covariations provide insights into climate sensitivity to external forcings and sea-level sensitivity to climate change. Climate sensitivity depends on the initial climate state, but potentially can be accurately inferred from precise palaeoclimate data. Pleistocene climate oscillations yield a fast-feedback climate sensitivity of 3±1°C for a 4 W m−2 CO2 forcing if Holocene warming relative to the Last Glacial Maximum (LGM) is used as calibration, but the error (uncertainty) is substantial and partly subjective because of poorly defined LGM global temperature and possible human influences in the Holocene. Glacial-to-interglacial climate change leading to the prior (Eemian) interglacial is less ambiguous and implies a sensitivity in the upper part of the above range, i.e. 3-4°C for a 4 W m−2 CO2 forcing. Slow feedbacks, especially change of ice sheet size and atmospheric CO2, amplify the total Earth system sensitivity by an amount that depends on the time scale considered. Ice sheet response time is poorly defined, but we show that the slow response and hysteresis in prevailing ice sheet models are exaggerated. We use a global model, simplified to essential processes, to investigate state dependence of climate sensitivity, finding an increased sensitivity towards warmer climates, as low cloud cover is diminished and increased water vapour elevates the tropopause. Burning all fossil fuels, we conclude, would make most of the planet uninhabitable by humans, thus calling into question strategies that emphasize adaptation to climate change.

  3. Improving the sensitivity and specificity of a bioanalytical assay for the measurement of certolizumab pegol.

    PubMed

    Smeraglia, John; Silva, John-Paul; Jones, Kieran

    2017-08-01

    In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.

  4. Panel-based Genetic Diagnostic Testing for Inherited Eye Diseases is Highly Accurate and Reproducible and More Sensitive for Variant Detection Than Exome Sequencing

    PubMed Central

    Bujakowska, Kinga M.; Sousa, Maria E.; Fonseca-Kelly, Zoë D.; Taub, Daniel G.; Janessian, Maria; Wang, Dan Yi; Au, Elizabeth D.; Sims, Katherine B.; Sweetser, David A.; Fulton, Anne B.; Liu, Qin; Wiggs, Janey L.; Gai, Xiaowu; Pierce, Eric A.

    2015-01-01

    Purpose Next-generation sequencing (NGS) based methods are being adopted broadly for genetic diagnostic testing, but the performance characteristics of these techniques have not been fully defined with regard to test accuracy and reproducibility. Methods We developed a targeted enrichment and NGS approach for genetic diagnostic testing of patients with inherited eye disorders, including inherited retinal degenerations, optic atrophy and glaucoma. In preparation for providing this Genetic Eye Disease (GEDi) test on a CLIA-certified basis, we performed experiments to measure the sensitivity, specificity, reproducibility as well as the clinical sensitivity of the test. Results The GEDi test is highly reproducible and accurate, with sensitivity and specificity for single nucleotide variant detection of 97.9% and 100%, respectively. The sensitivity for variant detection was notably better than the 88.3% achieved by whole exome sequencing (WES) using the same metrics, due to better coverage of targeted genes in the GEDi test compared to commercially available exome capture sets. Prospective testing of 192 patients with IRDs indicated that the clinical sensitivity of the GEDi test is high, with a diagnostic rate of 51%. Conclusion The data suggest that based on quantified performance metrics, selective targeted enrichment is preferable to WES for genetic diagnostic testing. PMID:25412400

  5. Precision Agriculture. Reaping the Benefits of Technological Growth. Resources in Technology.

    ERIC Educational Resources Information Center

    Hadley, Joel F.

    1998-01-01

    Technological innovations have revolutionized farming. Using precision farming techniques, farmers get an accurate picture of a field's attributes, such as soil properties, yield rates, and crop characteristics, through the use of differential Global Positioning System (GPS) hardware. (JOW)

  6. Sensitivity, stability, and precision of quantitative Ns-LIBS-based fuel-air-ratio measurements for methane-air flames at 1-11 bar.

    PubMed

    Hsu, Paul S; Gragston, Mark; Wu, Yue; Zhang, Zhili; Patnaik, Anil K; Kiefer, Johannes; Roy, Sukesh; Gord, James R

    2016-10-01

    Nanosecond laser-induced breakdown spectroscopy (ns-LIBS) is employed for quantitative local fuel-air (F/A) ratio measurements (i.e., the ratio of actual fuel-to-oxidizer mass to the fuel-to-oxidizer mass at stoichiometry) in well-characterized methane-air flames at pressures of 1-11 bar. We selected nitrogen and hydrogen atomic-emission lines at 568 nm and 656 nm, respectively, to establish a correlation between the line intensities and the F/A ratio. We have investigated the effects of laser-pulse energy, camera gate delay, and pressure on the sensitivity, stability, and precision of the quantitative ns-LIBS F/A ratio measurements. We determined the optimal laser energy and camera gate delay for each pressure condition and found that measurement stability and precision degrade with increasing pressure. We identified the primary limitations of F/A ratio measurement employing ns-LIBS at elevated pressures as instabilities caused by the higher-density laser-induced plasma and the presence of higher soot levels. Potential improvements are suggested.
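
    The correlation between line intensities and F/A ratio is typically exploited through a calibration curve measured in flames of known stoichiometry. The sketch below fits a simple linear calibration of the H(656 nm)/N(568 nm) intensity ratio against the equivalence ratio and inverts it for an unknown flame; all numbers and the assumed linear form are illustrative, not data from the study.

```python
import numpy as np

known_phi   = np.array([0.6, 0.8, 1.0, 1.2, 1.4])       # equivalence ratios of calibration flames
known_ratio = np.array([0.31, 0.42, 0.52, 0.63, 0.72])  # hypothetical measured I_H(656) / I_N(568)

slope, intercept = np.polyfit(known_ratio, known_phi, 1) # phi ~ slope * ratio + intercept
measured_ratio = 0.48                                    # ratio observed in an unknown flame
print(f"estimated equivalence ratio: {slope * measured_ratio + intercept:.2f}")
```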

  7. Precision Pointing Control System (PPCS) star tracker test

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Tests performed on the TRW precision star tracker are described. The unit tested was a two-axis gimballed star tracker designed to provide star LOS data to an accuracy of 1 to 2 sec. The tracker features a unique bearing system and utilizes thermal and mechanical symmetry techniques to achieve high precision which can be demonstrated in a one-g environment. The test program included a laboratory evaluation of tracker functional operation, sensitivity, repeatability, and thermal stability.

  8. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg.
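
    The idea of dissolving an HgO mass that corresponds to a desired Hg mass, and then reducing the mercuric ions at a cathode, is simple stoichiometry plus Faraday's law. The numbers below (a 50 mg Hg target) are hypothetical and only illustrate the bookkeeping; they are not from the patent.

```python
M_HG, M_O, FARADAY = 200.59, 16.00, 96485.0     # g/mol, g/mol, C/mol (standard values)

target_hg_g = 0.050                              # hypothetical Hg mass wanted on the cathode
mol_hg = target_hg_g / M_HG
hgo_to_dissolve_g = mol_hg * (M_HG + M_O)        # HgO mass containing that much Hg
charge_C = mol_hg * 2 * FARADAY                  # Hg(2+) + 2 e- -> Hg
print(f"dissolve {hgo_to_dissolve_g * 1e3:.1f} mg HgO and pass {charge_C:.0f} C "
      f"(~{charge_C / 3.6:.1f} mA*h) to plate {target_hg_g * 1e3:.0f} mg Hg")
```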

  9. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H2O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg2Cl2. The method for doing this involves dissolving a precise amount of Hg2Cl2 in an electrolyte solution comprised of concentrated HCl and H2O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg. 1 fig.

  10. Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements

    PubMed Central

    Thurman, Steven M.; Davey, Pinakin Gunvant; McCray, Kaydee Lynn; Paronian, Violeta; Seitz, Aaron R.

    2016-01-01

    Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli–Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of −0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test–retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity. PMID:28006065
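
    The zero-free-parameter idea above can be sketched as a fixed log-parabola template whose peak is shifted vertically by the measured letter CS and horizontally by the measured acuity. Everything below (template constants, the mapping of acuity and letter CS to shifts, the function name) is an illustrative assumption, not the normative templates fitted in the study.

```python
import numpy as np

def predicted_csf(freqs_cpd, letter_log_cs, acuity_logmar,
                  peak_log_f=0.6, bandwidth=1.4):
    """Shift a fixed log-parabola CSF template vertically (letter CS) and
    horizontally (acuity) to predict an individual's contrast sensitivity."""
    log_f = np.log10(freqs_cpd)
    shifted_peak_cs = letter_log_cs            # vertical shift set by measured letter CS
    shifted_peak_f = peak_log_f - acuity_logmar  # horizontal shift set by measured acuity
    log_cs = shifted_peak_cs - ((log_f - shifted_peak_f) / bandwidth) ** 2
    return 10 ** log_cs

freqs = np.array([0.5, 1, 2, 4, 8, 16])        # cycles per degree
print(predicted_csf(freqs, letter_log_cs=1.8, acuity_logmar=0.1).round(1))
```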

  11. Precise montaging and metric quantification of retinal surface area from ultra-widefield fundus photography and fluorescein angiography.

    PubMed

    Croft, Daniel E; van Hemert, Jano; Wykoff, Charles C; Clifton, David; Verhoek, Michael; Fleming, Alan; Brown, David M

    2014-01-01

    Accurate quantification of retinal surface area from ultra-widefield (UWF) images is challenging due to warping produced when the retina is projected onto a two-dimensional plane for analysis. By accounting for this, the authors sought to precisely montage and accurately quantify retinal surface area in square millimeters. Montages were created using Optos 200Tx (Optos, Dunfermline, U.K.) images taken at different gaze angles. A transformation projected the images to their correct location on a three-dimensional model. Area was quantified with spherical trigonometry. Warping, precision, and accuracy were assessed. Uncorrected, posterior pixels represented up to 79% greater surface area than peripheral pixels. Assessing precision, a standard region was quantified across 10 montages of the same eye (RSD: 0.7%; mean: 408.97 mm²; range: 405.34-413.87 mm²). Assessing accuracy, 50 patients' disc areas were quantified (mean: 2.21 mm²; SE: 0.06 mm²), and the results fell within the normative range. By accounting for warping inherent in UWF images, precise montaging and accurate quantification of retinal surface area in square millimeters were achieved. Copyright 2014, SLACK Incorporated.
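
    Quantifying area "with spherical trigonometry" amounts to working on a sphere model of the globe rather than on the flattened image. A minimal version is the solid-angle formula for a spherical triangle; the 12 mm globe radius and vertex directions below are illustrative assumptions, not the paper's eye model or software.

```python
import numpy as np

def spherical_triangle_area_mm2(a, b, c, radius_mm=12.0):
    """Area of a spherical triangle from unit vertex directions, via the
    Van Oosterom-Strackee solid-angle formula (area = solid angle * R^2)."""
    a, b, c = (np.asarray(v, float) / np.linalg.norm(v) for v in (a, b, c))
    num = abs(np.dot(a, np.cross(b, c)))
    den = 1.0 + np.dot(a, b) + np.dot(b, c) + np.dot(c, a)
    solid_angle_sr = 2.0 * np.arctan2(num, den)
    return solid_angle_sr * radius_mm ** 2

# Hypothetical triangular patch near the posterior pole of a 12 mm radius globe.
print(f"{spherical_triangle_area_mm2([1, 0, 0], [0.92, 0.4, 0], [0.92, 0, 0.4]):.2f} mm^2")
```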

  12. Precisely Tailored DNA Nanostructures and their Theranostic Applications.

    PubMed

    Zhu, Bing; Wang, Lihua; Li, Jiang; Fan, Chunhai

    2017-12-01

    A critical challenge in nanotechnology is the limited precision and controllability of structural parameters, which raises concerns about uniformity, reproducibility and performance. Self-assembled DNA nanostructures, a newly emerged class of nano-biomaterials, possess low-nanometer precision, excellent programmability and addressability. They can precisely arrange various molecules and materials into spatially ordered complexes with unambiguous physical or chemical properties. Because of this, DNA nanostructures have shown great promise in numerous biomedical theranostic applications. In this account, we briefly review the history of and advances in the construction of DNA nanoarchitectures and superstructures with accurate structural parameters. We focus on recent progress in exploiting these DNA nanostructures as platforms for quantitative biosensing, intracellular diagnosis, imaging, and smart drug delivery. We also discuss key challenges in practical applications. © 2017 The Chemical Society of Japan & Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. 27 CFR 30.23 - Use of precision hydrometers and thermometers.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... hydrometers and thermometers. 30.23 Section 30.23 Alcohol, Tobacco Products and Firearms ALCOHOL AND TOBACCO... Use of precision hydrometers and thermometers. Care should be exercised to obtain accurate hydrometer... entire quantity. The hydrometers should be kept clean and free of any oily substance. Immediately before...

  14. Modeling and Positioning of a PZT Precision Drive System.

    PubMed

    Liu, Che; Guo, Yanling

    2017-11-08

    The fact that piezoelectric ceramic transducer (PZT) precision drive systems in 3D printing face nonlinear positioning problems, such as hysteresis and creep, has had an extremely negative impact on the precision of laser focusing systems. To eliminate the impact of PZT nonlinearity during precision drive movement, mathematical modeling and theoretical analyses of each module comprising the system were carried out in this study; a micro-displacement measurement circuit based on a position-sensitive detector (PSD) was constructed, followed by the establishment of closed-loop control and creep-control models for the system. An XL-80 laser interferometer (Renishaw, Wotton-under-Edge, UK) was used to measure the performance of the precision drive system, showing that the system modeling and control algorithms were correct and that the requirements for precision positioning of the drive system were satisfied.

  15. Modeling and Positioning of a PZT Precision Drive System

    PubMed Central

    Liu, Che; Guo, Yanling

    2017-01-01

    The fact that piezoelectric ceramic transducer (PZT) precision drive systems in 3D printing face nonlinear positioning problems, such as hysteresis and creep, has had an extremely negative impact on the precision of laser focusing systems. To eliminate the impact of PZT nonlinearity during precision drive movement, mathematical modeling and theoretical analyses of each module comprising the system were carried out in this study; a micro-displacement measurement circuit based on a position-sensitive detector (PSD) was constructed, followed by the establishment of closed-loop control and creep-control models for the system. An XL-80 laser interferometer (Renishaw, Wotton-under-Edge, UK) was used to measure the performance of the precision drive system, showing that the system modeling and control algorithms were correct and that the requirements for precision positioning of the drive system were satisfied. PMID:29117140

  16. Field precision machining technology of target chamber in ICF lasers

    NASA Astrophysics Data System (ADS)

    Xu, Yuanli; Wu, Wenkai; Shi, Sucun; Duan, Lin; Chen, Gang; Wang, Baoxu; Song, Yugang; Liu, Huilin; Zhu, Mingzhi

    2016-10-01

    In ICF lasers, many independent laser beams are required to be positioned on target with a very high degree of accuracy during a shot. The target chamber provides a precision platform and datum reference for the final optics assemblies and the target collimation and location system. The target chamber consists of a shell with welded flanges, a reinforced concrete pedestal, and a lateral support structure. The field precision machining technology for the target chamber in ICF lasers has been developed based on ShenGuangIII (SGIII). The same chamber center is adopted throughout design, fabrication, and alignment. Technologies of beam collimation and datum-reference transformation were developed for the fabrication, positioning and adjustment of the target chamber. A supporting and rotating mechanism and a special drilling machine were developed to bore the port holes, and an adjustment mechanism was designed to accurately position the target chamber. To ensure the collimation requirements of beam leading and focusing and of target positioning, custom-machined spacers are used to accurately correct the alignment error of the ports. Finally, this paper describes the chamber center, orientation, and centering alignment-error measurements of SGIII. The measurements show that the field precision machining of the SGIII target chamber meets its design requirements. This information can be used on similar systems.

  17. Robotic-Arm Assisted Total Knee Arthroplasty Demonstrated Greater Accuracy and Precision to Plan Compared with Manual Techniques.

    PubMed

    Hampp, Emily L; Chughtai, Morad; Scholl, Laura Y; Sodhi, Nipun; Bhowmik-Stoker, Manoshi; Jacofsky, David J; Mont, Michael A

    2018-05-01

    This study determined if robotic-arm assisted total knee arthroplasty (RATKA) allows for more accurate and precise bone cuts and component position to plan compared with manual total knee arthroplasty (MTKA). Specifically, we assessed the following: (1) final bone cuts, (2) final component position, and (3) a potential learning curve for RATKA. On six cadaver specimens (12 knees), a MTKA and RATKA were performed on the left and right knees, respectively. Bone-cut and final-component positioning errors relative to preoperative plans were compared. Median errors and standard deviations (SDs) in the sagittal, coronal, and axial planes were compared. Median values of the absolute deviation from plan defined the accuracy to plan. SDs described the precision to plan. RATKA bone cuts were as or more accurate to plan based on nominal median values in 11 out of 12 measurements. RATKA bone cuts were more precise to plan in 8 out of 12 measurements (p ≤ 0.05). RATKA final component positions were as or more accurate to plan based on median values in five out of five measurements. RATKA final component positions were more precise to plan in four out of five measurements (p ≤ 0.05). Stacked error results from all cuts and implant positions for each specimen in procedural order showed that RATKA error was less than MTKA error. Although this study analyzed a small number of cadaver specimens, there were clear differences that separated these two groups. When compared with MTKA, RATKA demonstrated more accurate and precise bone cuts and implant positioning to plan. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
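
    The accuracy and precision definitions used above (median absolute deviation from plan, and SD of the deviations) are easy to reproduce on any set of planned-versus-achieved measurements. The cut-error values below are hypothetical, not data from the cadaver study.

```python
import numpy as np

def accuracy_and_precision_to_plan(deviations_deg):
    """Accuracy = median of |deviation from plan|; precision = SD of deviations."""
    d = np.asarray(deviations_deg, float)
    return np.median(np.abs(d)), d.std(ddof=1)

hypothetical_cut_errors = [0.4, -0.2, 0.6, 0.1, -0.3, 0.5]   # degrees relative to plan
acc, prec = accuracy_and_precision_to_plan(hypothetical_cut_errors)
print(f"accuracy (median |error|) = {acc:.2f} deg, precision (SD) = {prec:.2f} deg")
```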

  18. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  19. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
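    A minimal sketch of the time-lag idea described above, assuming two equally sampled SO2 records and a known sensor spacing: the lag that maximizes their cross-correlation gives the plume transit time, and hence the speed. Function and variable names (plume_speed, spacing_m, dt_s) and the toy data are illustrative, not the authors' implementation.

```python
# Hypothetical sketch: estimate plume speed from two time-synchronized SO2
# records taken a known distance apart, via the lag that maximizes their
# cross-correlation. Names and sample data are illustrative only.
import numpy as np

def plume_speed(so2_a, so2_b, spacing_m, dt_s):
    """Return plume speed (m/s) from two equally sampled SO2 time series."""
    a = so2_a - np.mean(so2_a)
    b = so2_b - np.mean(so2_b)
    xcorr = np.correlate(b, a, mode="full")       # correlation at all lags
    lags = np.arange(-len(a) + 1, len(a))         # lag indices in samples
    lag_s = lags[np.argmax(xcorr)] * dt_s         # best-matching time lag (s)
    if lag_s == 0:
        raise ValueError("no measurable lag between the two signals")
    return spacing_m / lag_s

# toy example: a concentration pulse arriving 30 s later at the downwind sensor
t = np.arange(0, 600, 1.0)                        # 1 Hz sampling
pulse = np.exp(-((t - 200.0) / 40.0) ** 2)
upwind = pulse + 0.02 * np.random.default_rng(0).standard_normal(t.size)
downwind = np.roll(pulse, 30) + 0.02 * np.random.default_rng(1).standard_normal(t.size)
print(plume_speed(upwind, downwind, spacing_m=150.0, dt_s=1.0))  # ~5 m/s
```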

  20. Gaussian signal relaxation around spin echoes: Implications for precise reversible transverse relaxation quantification of pulmonary tissue at 1.5 and 3 Tesla.

    PubMed

    Zapp, Jascha; Domsch, Sebastian; Weingärtner, Sebastian; Schad, Lothar R

    2017-05-01

    To characterize the reversible transverse relaxation in pulmonary tissue and to study the benefit of a quadratic exponential (Gaussian) model over the commonly used linear exponential model for increased quantification precision. A point-resolved spectroscopy sequence was used for comprehensive sampling of the relaxation around spin echoes. Measurements were performed in an ex vivo tissue sample and in healthy volunteers at 1.5 Tesla (T) and 3 T. The goodness of fit using χ²_red and the precision of the fitted relaxation time by means of its confidence interval were compared between the two relaxation models. The Gaussian model provides enhanced descriptions of pulmonary relaxation with lower χ²_red by average factors of 4 ex vivo and 3 in volunteers. The Gaussian model indicates higher sensitivity to tissue structure alteration with increased precision of reversible transverse relaxation time measurements also by average factors of 4 ex vivo and 3 in volunteers. The mean relaxation times of the Gaussian model in volunteers are T2,G' = (1.97 ± 0.27) msec at 1.5 T and T2,G' = (0.83 ± 0.21) msec at 3 T. Pulmonary signal relaxation was found to be accurately modeled as Gaussian, providing a potential biomarker T2,G' with high sensitivity. Magn Reson Med 77:1938-1945, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
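    A minimal sketch, under assumed data, of fitting the two relaxation models discussed above to the signal decay around a spin echo and comparing their reduced chi-squared values; the noise level, timing, and parameter values are invented and the actual analysis pipeline may differ.

```python
# Hypothetical sketch: fit the decay around a spin echo with a linear-exponential
# and with a quadratic-exponential (Gaussian) model and compare reduced chi-squared.
# Data and parameter values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def exponential(tau, s0, t2p):
    return s0 * np.exp(-np.abs(tau) / t2p)       # S(tau) = S0 exp(-|tau|/T2')

def gaussian(tau, s0, t2g):
    return s0 * np.exp(-(tau / t2g) ** 2)        # S(tau) = S0 exp(-(tau/T2,G')^2)

tau = np.linspace(-3e-3, 3e-3, 61)                # time offset from the echo (s)
rng = np.random.default_rng(42)
noise_sigma = 0.01
signal = gaussian(tau, 1.0, 0.83e-3) + noise_sigma * rng.standard_normal(tau.size)

for model, p0 in [(exponential, (1.0, 1e-3)), (gaussian, (1.0, 1e-3))]:
    popt, _ = curve_fit(model, tau, signal, p0=p0)
    resid = signal - model(tau, *popt)
    chi2_red = np.sum((resid / noise_sigma) ** 2) / (tau.size - len(popt))
    print(model.__name__, popt, chi2_red)         # the Gaussian fit should win here
```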

  1. qCSF in Clinical Application: Efficient Characterization and Classification of Contrast Sensitivity Functions in Amblyopia

    PubMed Central

    Hou, Fang; Huang, Chang-Bing; Lesmes, Luis; Feng, Li-Xia; Tao, Liming; Zhou, Yi-Feng; Lu, Zhong-Lin

    2010-01-01

    Purpose. The qCSF method is a novel procedure for rapid measurement of spatial contrast sensitivity functions (CSFs). It combines Bayesian adaptive inference with a trial-to-trial information gain strategy, to directly estimate four parameters defining the observer's CSF. In the present study, the suitability of the qCSF method for clinical application was examined. Methods. The qCSF method was applied to rapidly assess spatial CSFs in 10 normal and 8 amblyopic participants. The qCSF was evaluated for accuracy, precision, test–retest reliability, suitability of CSF model assumptions, and accuracy of amblyopia screening. Results. qCSF estimates obtained with as few as 50 trials matched those obtained with 300 Ψ trials. The precision of qCSF estimates obtained with 120 and 130 trials, in normal subjects and amblyopes, matched the precision of 300 Ψ trials. For both groups and both methods, test–retest sensitivity estimates were well matched (all R > 0.94). The qCSF model assumptions were valid for 8 of 10 normal participants and all amblyopic participants. Measures of the area under log CSF (AULCSF) and the cutoff spatial frequency (cutSF) were lower in the amblyopia group; these differences were captured within 50 qCSF trials. Amblyopia was detected at an approximately 80% correct rate in 50 trials, when a logistic regression model was used with AULCSF and cutSF as predictors. Conclusions. The qCSF method is sufficiently rapid, accurate, and precise in measuring CSFs in normal and amblyopic persons. It has great potential for clinical practice. PMID:20484592
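    A minimal sketch of the screening step described above, assuming invented AULCSF and cutSF values for ten observers; it only illustrates a two-predictor logistic regression, not the study's data or exact cross-validation scheme.

```python
# Hypothetical sketch: screen for amblyopia with a logistic regression using
# AULCSF and cutoff spatial frequency (cutSF) as predictors, mirroring the
# classification approach described above. Feature values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# columns: [AULCSF, cutSF (cycles/deg)]; rows: observers
X = np.array([[1.9, 28.0], [2.0, 30.0], [1.8, 26.0], [2.1, 31.0], [1.7, 25.0],
              [1.2, 14.0], [1.0, 12.0], [1.3, 16.0], [0.9, 10.0], [1.1, 13.0]])
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])      # 0 = normal, 1 = amblyopic

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)          # cross-validated screening accuracy
print(scores.mean())
```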

  2. Delay times of a LiDAR-guided precision sprayer control system

    USDA-ARS?s Scientific Manuscript database

    Accurate flow control systems in triggering sprays against detected targets are needed for precision variable-rate sprayer development. System delay times due to the laser-sensor data buffer, software operation, and hydraulic-mechanical component response were determined for a control system used fo...

  3. Near real time, accurate, and sensitive microbiological safety monitoring using an all-fibre spectroscopic fluorescence system

    NASA Astrophysics Data System (ADS)

    Vanholsbeeck, F.; Swift, S.; Cheng, M.; Bogomolny, E.

    2013-11-01

    Enumeration of microorganisms is an essential microbiological task for many industrial sectors and research fields. Various tests for detection and counting of microorganisms are used today. However, most of the current methods to enumerate bacteria require either long incubation times for limited accuracy, or use complicated protocols along with bulky equipment. We have developed an accurate, all-fibre spectroscopic system to measure fluorescence signal in-situ. In this paper, we examine the potential of this setup for near real time bacteria enumeration in aquatic environments. The concept is based on a well-known phenomenon that the fluorescence quantum yields of some nucleic acid stains significantly increase upon binding with nucleic acids of microorganisms. In addition we have used GFP labeled organisms. The fluorescence signal increase can be correlated to the amount of nucleic acid present in the sample. Our results show that we are able to detect a wide range of bacteria concentrations without dilution or filtration (1–10^8 CFU/ml) using different optical probes we designed. This high sensitivity is due to efficient light delivery with an appropriate collection volume and in situ fluorescence detection as well as the use of a sensitive CCD spectrometer. By monitoring the laser power, we can account for laser fluctuations while measuring the fluorescence signal, which also improves the system accuracy. A synchronized laser shutter allows us to achieve a high SNR with minimal integration time, thereby reducing the photobleaching effect. In summary, we conclude that our optical setup may offer a robust method for near real time bacterial detection in aquatic environments.

  4. Applications of an automated stem measurer for precision forestry

    Treesearch

    N. Clark

    2001-01-01

    Accurate stem measurements are required for the determination of many silvicultural prescriptions, i.e., what are we going to do with a stand of trees. This would only be amplified in a precision forestry context. Many methods have been proposed for optimal ways to evaluate stems for a variety of characteristics. These methods usually involve the acquisition of total...

  5. Fundamentals of precision medicine

    PubMed Central

    Divaris, Kimon

    2018-01-01

    Imagine a world where clinicians make accurate diagnoses and provide targeted therapies to their patients according to well-defined, biologically-informed disease subtypes, accounting for individual differences in genetic make-up, behaviors, cultures, lifestyles and the environment. This is not as utopic as it may seem. Relatively recent advances in science and technology have led to an explosion of new information on what underlies health and what constitutes disease. These novel insights emanate from studies of the human genome and microbiome, their associated transcriptomes, proteomes and metabolomes, as well as epigenomics and exposomics—such ‘omics data can now be generated at unprecedented depth and scale, and at rapidly decreasing cost. Making sense and integrating these fundamental information domains to transform health care and improve health remains a challenge—an ambitious, laudable and high-yield goal. Precision dentistry is no longer a distant vision; it is becoming part of the rapidly evolving present. Insights from studies of the human genome and microbiome, their associated transcriptomes, proteomes and metabolomes, and epigenomics and exposomics have reached an unprecedented depth and scale. Much more needs to be done, however, for the realization of precision medicine in the oral health domain. PMID:29227115

  6. High-Precision U-Pb Geochronology of Ice River Perovskite: A Possible Interlaboratory and Intertechnique EARTHTIME Standard

    NASA Astrophysics Data System (ADS)

    Burgess, S. D.; Bowring, S. A.; Heaman, L. M.

    2012-12-01

    to determine the amount of ingrown Pb. First, by measuring the U/Pb ratio in clinopyroxene and assuming a crystallization age the amount of ingrown Pb can be calculated. Second, by assuming that perovskite and clinopyroxene (± other phases) are isochronous, the initial Pb isotopic composition can be calculated using the y-intercept on 206Pb/238U, 207Pb/235U, and 3-D isochron diagrams. To further develop a perovskite mineral standard for use in high-precision dating applications, we have focused on single grains/fragments of perovskite and multi-grain clinopyroxene fractions from a melteigite sample (IR90.3) within the Ice River complex, a zoned alkaline-ultramafic intrusion in southeastern British Columbia. Perovskite from this sample has variable measured 206Pb/204Pb (22-263), making this an ideal sample on which to test the sensitivity of the date on grains with variable amounts of radiogenic Pb to changes in common Pb composition. Using co-existing clinopyroxene for the initial common Pb composition by both direct measurement and by the isochron method allows us to calculate an accurate weighted-mean 206Pb/238U date on perovskite at the < 0.1% level, which overlaps within uncertainty for the two different methods. We recommend the Ice River 90.3 perovskite as a suitable EARTHTIME standard for interlaboratory and intertechnique comparison.
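    A minimal sketch of the isochron idea referred to above: co-genetic phases with different U/Pb ratios fall on a line in 238U/204Pb-206Pb/204Pb space whose y-intercept estimates the initial (common) Pb composition and whose slope encodes the age. All numerical values below are invented for illustration.

```python
# Hypothetical sketch of an isochron regression: the y-intercept gives the
# initial (common) Pb composition, the slope gives the age. Values are invented.
import numpy as np

lam_238 = 1.55125e-10          # 238U decay constant (1/yr)
age_yr = 360e6                 # assumed crystallization age for the toy data
slope_true = np.exp(lam_238 * age_yr) - 1.0
initial_206_204 = 18.5         # assumed common-Pb composition

u238_pb204 = np.array([5.0, 40.0, 120.0, 300.0, 650.0])       # x: 238U/204Pb
pb206_pb204 = initial_206_204 + slope_true * u238_pb204        # y: 206Pb/204Pb
pb206_pb204 += np.random.default_rng(3).normal(0, 0.05, 5)     # measurement scatter

slope, intercept = np.polyfit(u238_pb204, pb206_pb204, 1)
print("initial 206Pb/204Pb ~", intercept)
print("age (Ma) ~", np.log1p(slope) / lam_238 / 1e6)
```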

  7. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 plus or minus 6.1%, mean plus or minus SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P less than 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 plus or minus 11.5 vs. 41.5 plus or minus 13.6 mV, respectively, P less than 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of greater than or equal to 40 points and greater than or equal to 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  8. Determination of doping peptides via solid-phase microelution and accurate-mass quadrupole time-of-flight LC-MS.

    PubMed

    Cuervo, Darío; Loli, Cynthia; Fernández-Álvarez, María; Muñoz, Gloria; Carreras, Daniel

    2017-10-15

    A complete analytical protocol for the determination of 25 doping-related peptidic drugs and 3 metabolites in urine was developed by means of accurate-mass quadrupole time-of-flight (Q-TOF) LC-MS analysis following solid-phase extraction (SPE) on microplates and conventional SPE pre-treatment for initial testing and confirmation, respectively. These substances included growth hormone releasing factors, gonadotropin releasing factors and anti-diuretic hormones, with molecular weights ranging from 540 to 1320 Da. Optimal experimental conditions were established after investigation of different parameters concerning sample preparation and instrumental analysis. Weak cation exchange SPE followed by C18 HPLC chromatography and accurate mass detection provided the required sensitivity and selectivity for all the target peptides under study. 2 mg SPE on 96-well microplates can be used in combination with full scan MS detection for the initial testing, thus providing a fast, cost-effective and high-throughput protocol for the processing of a large batch of samples simultaneously. On the other hand, extraction on 30 mg SPE cartridges and subsequent target MS/MS determination was the protocol of choice for confirmatory purposes. The methodology was validated in terms of selectivity, recovery, matrix effect, precision, sensitivity (limit of detection, LOD), cross contamination, carryover, robustness and stability. Recoveries ranged from 6 to 70% (microplates) and 17 to 95% (cartridges), with LODs from 0.1 to 1 ng/mL. The suitability of the method was assessed by analyzing different spiked or excreted urines containing some of the target substances. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. A novel high sensitivity HPLC assay for topiramate, using 4-chloro-7-nitrobenzofurazan as pre-column fluorescence derivatizing agent.

    PubMed

    Bahrami, Gholamreza; Mohammadi, Bahareh

    2007-05-01

    A new, sensitive and simple high-performance liquid chromatographic method for analysis of topiramate, an antiepileptic agent, using 4-chloro-7-nitrobenzofurazan as pre-column derivatization agent is described. Following liquid-liquid extraction of topiramate and an internal standard (amlodipine) from human serum, derivatization of the drugs was performed by the labeling agent in the presence of dichloromethane, methanol, acetonitrile and borate buffer (0.05 M; pH 10.6). A mixture of sodium phosphate buffer (0.05 M; pH 2.4): methanol (35:65 v/v) was used as the mobile phase and chromatographic separation was achieved using a Shimpack CLC-C18 (150 x 4.6 mm) column. In this method a limit of quantification of 0.01 microg/mL was obtained and the procedure was validated over the concentration range of 0.01 to 12.8 microg/mL. No interferences were found from commonly co-administered antiepileptic drugs including phenytoin, phenobarbital, carbamazepine, lamotrigine, zonisamide, primidone, gabapentin, vigabatrin, and ethosuximide. The analytical performance was evaluated in terms of specificity, sensitivity, linearity, precision, accuracy and stability, and the method was shown to be accurate, with intra-day and inter-day accuracy from -3.4 to 10%, and precise, with intra-day and inter-day precision from 1.1 to 18%.

  10. Precise monitoring of global temperature trends from satellites

    NASA Technical Reports Server (NTRS)

    Spencer, Roy W.; Christy, John R.

    1990-01-01

    Passive microwave radiometry from satellites provides more precise atmospheric temperature information than that obtained from the relatively sparse distribution of thermometers over the earth's surface. Accurate global atmospheric temperature estimates are needed for detection of possible greenhouse warming, evaluation of computer models of climate change, and for understanding important factors in the climate system. Analysis of the first 10 years (1979 to 1988) of satellite measurements of lower atmospheric temperature changes reveals a monthly precision of 0.01 C, large temperature variability on time scales from weeks to several years, but no obvious trend for the 10-year period. The warmest years, in descending order, were 1987, 1988, 1983, and 1980. The years 1984, 1985, and 1986 were the coolest.

  11. GPS-based precision orbit determination - A Topex flight experiment

    NASA Technical Reports Server (NTRS)

    Melbourne, William G.; Davis, Edgar S.

    1988-01-01

    Plans for a Topex/Poseidon flight experiment to test the accuracy of using GPS data for precision orbit determination of earth satellites are presented. It is expected that the GPS-based precision orbit determination will provide subdecimeter accuracies in the radial component of the Topex orbit when the extant gravity model is tuned for wavelengths longer than about 1000 km. The concept, design, flight receiver, antenna system, ground processing, and data processing of GPS are examined. Also, an accurate quasi-geometric orbit determination approach called nondynamic or reduced-dynamic tracking, which relies on the use of pseudorange and carrier phase measurements to reduce orbit errors arising from mismodeled dynamics, is discussed.

  12. High precision spectroscopy and imaging in THz frequency range

    NASA Astrophysics Data System (ADS)

    Vaks, Vladimir L.

    2014-03-01

    The application of microwave methods to the development of the THz frequency range has resulted in high-precision THz spectrometers based on nonstationary effects. The spectrometers' characteristics (spectral resolution and sensitivity) meet the requirements for high-precision analysis. Gas analyzers based on these high-precision spectrometers have been successfully applied to analytical investigations of gas impurities in high-purity substances. These investigations can be carried out both in an absorption cell and in a reactor. The devices can be used for ecological monitoring and for detecting components of chemical weapons and explosives in the atmosphere. Medical applications represent another major field for THz investigations. Using the THz spectrometers developed, one can detect markers of some diseases in exhaled air.

  13. Anatomical brain images alone can accurately diagnose chronic neuropsychiatric illnesses.

    PubMed

    Bansal, Ravi; Staib, Lawrence H; Laine, Andrew F; Hao, Xuejun; Xu, Dongrong; Liu, Jun; Weissman, Myrna; Peterson, Bradley S

    2012-01-01

    Diagnoses using imaging-based measures alone offer the hope of improving the accuracy of clinical diagnosis, thereby reducing the costs associated with incorrect treatments. Previous attempts to use brain imaging for diagnosis, however, have had only limited success in diagnosing patients who are independent of the samples used to derive the diagnostic algorithms. We aimed to develop a classification algorithm that can accurately diagnose chronic, well-characterized neuropsychiatric illness in single individuals, given the availability of sufficiently precise delineations of brain regions across several neural systems in anatomical MR images of the brain. We have developed an automated method to diagnose individuals as having one of various neuropsychiatric illnesses using only anatomical MRI scans. The method employs a semi-supervised learning algorithm that discovers natural groupings of brains based on the spatial patterns of variation in the morphology of the cerebral cortex and other brain regions. We used split-half and leave-one-out cross-validation analyses in large MRI datasets to assess the reproducibility and diagnostic accuracy of those groupings. In MRI datasets from persons with Attention-Deficit/Hyperactivity Disorder, Schizophrenia, Tourette Syndrome, Bipolar Disorder, or persons at high or low familial risk for Major Depressive Disorder, our method discriminated with high specificity and nearly perfect sensitivity the brains of persons who had one specific neuropsychiatric disorder from the brains of healthy participants and the brains of persons who had a different neuropsychiatric disorder. Although the classification algorithm presupposes the availability of precisely delineated brain regions, our findings suggest that patterns of morphological variation across brain surfaces, extracted from MRI scans alone, can successfully diagnose the presence of chronic neuropsychiatric disorders. Extensions of these methods are likely to provide biomarkers

  14. Molecular Imaging and Precision Medicine in Lung Cancer.

    PubMed

    Zukotynski, Katherine A; Gerbaudo, Victor H

    2017-01-01

    Precision medicine allows tailoring of preventive or therapeutic interventions to avoid the expense and toxicity of futile treatment given to those who will not respond. Lung cancer is a heterogeneous disease functionally and morphologically. PET is a sensitive molecular imaging technique with a major role in the precision medicine algorithm of patients with lung cancer. It contributes to the precision medicine of lung neoplasia by interrogating tumor heterogeneity throughout the body. It provides anatomofunctional insight during diagnosis, staging, and restaging of the disease. It is a biomarker of tumoral heterogeneity that helps direct selection of the most appropriate treatment, the prediction of early response to cytotoxic and cytostatic therapies, and is a prognostic biomarker in patients with lung cancer. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  15. Ultrathin conformal devices for precise and continuous thermal characterization of human skin

    NASA Astrophysics Data System (ADS)

    Webb, R. Chad; Bonifas, Andrew P.; Behnaz, Alex; Zhang, Yihui; Yu, Ki Jun; Cheng, Huanyu; Shi, Mingxing; Bian, Zuguang; Liu, Zhuangjian; Kim, Yun-Soung; Yeo, Woon-Hong; Park, Jae Suk; Song, Jizhou; Li, Yuhang; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.

    2013-10-01

    Precision thermometry of the skin can, together with other measurements, provide clinically relevant information about cardiovascular health, cognitive state, malignancy and many other important aspects of human physiology. Here, we introduce an ultrathin, compliant skin-like sensor/actuator technology that can pliably laminate onto the epidermis to provide continuous, accurate thermal characterizations that are unavailable with other methods. Examples include non-invasive spatial mapping of skin temperature with millikelvin precision, and simultaneous quantitative assessment of tissue thermal conductivity. Such devices can also be implemented in ways that reveal the time-dynamic influence of blood flow and perfusion on these properties. Experimental and theoretical studies establish the underlying principles of operation, and define engineering guidelines for device design. Evaluation of subtle variations in skin temperature associated with mental activity, physical stimulation and vasoconstriction/dilation along with accurate determination of skin hydration through measurements of thermal conductivity represent some important operational examples.

  16. Accuracy and precision of protein-ligand interaction kinetics determined from chemical shift titrations.

    PubMed

    Markin, Craig J; Spyracopoulos, Leo

    2012-12-01

    NMR-monitored chemical shift titrations for the study of weak protein-ligand interactions represent a rich source of information regarding thermodynamic parameters such as dissociation constants (K_D) in the micro- to millimolar range, populations for the free and ligand-bound states, and the kinetics of interconversion between states, which are typically within the fast exchange regime on the NMR timescale. We recently developed two chemical shift titration methods wherein co-variation of the total protein and ligand concentrations gives increased precision for the K_D value of a 1:1 protein-ligand interaction (Markin and Spyracopoulos in J Biomol NMR 53: 125-138, 2012). In this study, we demonstrate that classical line shape analysis applied to a single set of 1H-15N 2D HSQC NMR spectra acquired using the precise protein-ligand chemical shift titration methods we developed produces accurate and precise kinetic parameters such as the off-rate (k_off). For experimentally determined kinetics in the fast exchange regime on the NMR timescale, k_off ~ 3,000 s^-1 in this work, the accuracy of classical line shape analysis was determined to be better than 5% by conducting quantum mechanical NMR simulations of the chemical shift titration methods with the magnetic resonance toolkit GAMMA. Using Monte Carlo simulations, the experimental precision for k_off from line shape analysis of NMR spectra was determined to be 13%, in agreement with the theoretical precision of 12% from line shape analysis of the GAMMA simulations in the presence of noise and protein concentration errors. In addition, GAMMA simulations were employed to demonstrate that line shape analysis has the potential to provide reasonably accurate and precise k_off values over a wide range, from 100 to 15,000 s^-1. The validity of line shape analysis for k_off values approaching intermediate exchange (~100 s^-1), may be facilitated by more accurate K_D measurements
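    For orientation, the textbook two-site fast-exchange expression relating exchange broadening to the populations, chemical-shift difference, and k_off is sketched below under assumed values; this is only an illustration of the exchange regime discussed above, not the GAMMA-based quantum mechanical line-shape analysis used in the study.

```python
# Hypothetical sketch: textbook fast-exchange broadening, R_ex = pF*pB*dw^2/k_ex,
# with k_off = pF*k_ex. Parameter values are assumed for illustration only.
import numpy as np

def exchange_broadening(p_bound, dw_ppm, larmor_mhz, k_off):
    """Return (R_ex in 1/s, extra linewidth in Hz) for two-site fast exchange."""
    p_free = 1.0 - p_bound
    dw = 2.0 * np.pi * dw_ppm * larmor_mhz        # shift difference in rad/s
    k_ex = k_off / p_free                          # total exchange rate (1/s)
    r_ex = p_free * p_bound * dw ** 2 / k_ex
    return r_ex, r_ex / np.pi                      # FWHM contribution = R_ex / pi

print(exchange_broadening(p_bound=0.5, dw_ppm=0.5, larmor_mhz=600.0, k_off=3000.0))
```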

  17. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  18. Rapid and precise determination of ATP using a modified photometer

    USGS Publications Warehouse

    Shultz, David J.; Stephens, Doyle W.

    1980-01-01

    An inexpensive delay timer was designed to modify a commercially available ATP photometer which allows a disposable tip pipette to be used for injecting either enzyme or sample into the reaction cuvette. The disposable tip pipette is as precise and accurate as a fixed-needle syringe but eliminates the problem of sample contamination and decreases analytical time. (USGS)

  19. ACCURATE ORBITAL INTEGRATION OF THE GENERAL THREE-BODY PROBLEM BASED ON THE D'ALEMBERT-TYPE SCHEME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Minesaki, Yukitaka

    2013-03-15

    We propose an accurate orbital integration scheme for the general three-body problem that retains all conserved quantities except angular momentum. The scheme is provided by an extension of the d'Alembert-type scheme for constrained autonomous Hamiltonian systems. Although the proposed scheme is merely second-order accurate, it can precisely reproduce some periodic, quasiperiodic, and escape orbits. The Levi-Civita transformation plays a role in designing the scheme.

  20. Precision Landing and Hazard Avoidance Domain

    NASA Technical Reports Server (NTRS)

    Robertson, Edward A.; Carson, John M., III

    2016-01-01

    The Precision Landing and Hazard Avoidance (PL&HA) domain addresses the development, integration, testing, and spaceflight infusion of sensing, processing, and GN&C functions critical to the success and safety of future human and robotic exploration missions. PL&HA sensors also have applications to other mission events, such as rendezvous and docking. Autonomous PL&HA builds upon the core GN&C capabilities developed to enable soft, controlled landings on the Moon, Mars, and other solar system bodies. Through the addition of a Terrain Relative Navigation (TRN) function, precision landing within tens of meters of a map-based target is possible. The addition of a 3-D terrain mapping lidar sensor improves the probability of a safe landing via autonomous, real-time Hazard Detection and Avoidance (HDA). PL&HA significantly improves the probability of mission success and enhances access to sites of scientific interest located in challenging terrain. PL&HA can also utilize external navigation aids, such as navigation satellites and surface beacons. Key elements include: advanced lidar sensors, which provide high-precision ranging, velocimetry, and 3-D terrain mapping; Terrain Relative Navigation (TRN), which compares onboard reconnaissance data with real-time terrain imaging data to update the spacecraft position estimate; Hazard Detection and Avoidance (HDA), which generates a high-resolution, 3-D terrain map in real time during the approach trajectory to identify safe landing targets; and inertial navigation during terminal descent, in which high-precision surface-relative sensors enable accurate inertial navigation and a tightly controlled touchdown within meters of the selected safe landing target.

  1. Two-dimensional flow nanometry of biological nanoparticles for accurate determination of their size and emission intensity

    NASA Astrophysics Data System (ADS)

    Block, Stephan; Fast, Björn Johansson; Lundgren, Anders; Zhdanov, Vladimir P.; Höök, Fredrik

    2016-09-01

    Biological nanoparticles (BNPs) are of high interest due to their key role in various biological processes and use as biomarkers. BNP size and composition are decisive for their functions, but simultaneous determination of both properties with high accuracy remains challenging. Optical microscopy allows precise determination of fluorescence/scattering intensity, but not the size of individual BNPs. The latter is better determined by tracking their random motion in bulk, but the limited illumination volume for tracking this motion impedes reliable intensity determination. Here, we show that by attaching BNPs to a supported lipid bilayer, subjecting them to hydrodynamic flows and tracking their motion via surface-sensitive optical imaging enable determination of their diffusion coefficients and flow-induced drifts, from which accurate quantification of both BNP size and emission intensity can be made. For vesicles, the accuracy of this approach is demonstrated by resolving the expected radius-squared dependence of their fluorescence intensity for radii down to 15 nm.
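    As a point of reference, the generic Stokes-Einstein relation converts a measured diffusion coefficient into a hydrodynamic radius; the sketch below assumes free diffusion in water and does not reproduce the bilayer-tethered, flow-plus-diffusion analysis developed in the work above.

```python
# Hypothetical sketch: hydrodynamic radius from a diffusion coefficient via the
# Stokes-Einstein relation, r = kT / (6*pi*eta*D). Generic bulk-diffusion case only.
import numpy as np

def hydrodynamic_radius(diff_m2_s, temp_k=298.15, viscosity_pa_s=8.9e-4):
    k_b = 1.380649e-23                             # Boltzmann constant (J/K)
    return k_b * temp_k / (6.0 * np.pi * viscosity_pa_s * diff_m2_s)

# a particle diffusing at 5 um^2/s in water corresponds to a radius of roughly 49 nm
print(hydrodynamic_radius(5e-12) * 1e9, "nm")
```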

  2. Precision electroweak physics at LEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mannelli, M.

    1994-12-01

    Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z0 resonance have allowed the LEP experiments to provide both dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements probes sensitively the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark is measured to be within a restricted range of about 20 GeV.

  3. Generalization of the normal-exponential model: exploration of a more accurate parametrisation for the signal distribution on Illumina BeadArrays.

    PubMed

    Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv

    2012-12-11

    Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of the exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) present a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggest that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to erroneous estimates. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate for modeling Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a better fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity. These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement

  4. Mechanism and experimental research on ultra-precision grinding of ferrite

    NASA Astrophysics Data System (ADS)

    Ban, Xinxing; Zhao, Huiying; Dong, Longchao; Zhu, Xueliang; Zhang, Chupeng; Gu, Yawen

    2017-02-01

    Ultra-precision grinding of ferrite is conducted to investigate the removal mechanism. The effect of the accuracy of key machine tool components on grinding surface quality is analyzed. A surface generation model for ferrite ultra-precision grinding is established. Furthermore, a calculation model for the grinding surface roughness is proposed and verified, in order to reveal the surface formation mechanism of ferrite in the process of ultra-precision grinding. An orthogonal experiment is designed using a high-precision aerostatic turntable and aerostatic spindle for ferrite, a typical hard and brittle material. Based on the experimental results, the influencing factors and laws governing the ultra-precision ground surface of ferrite are discussed through analysis of the surface roughness. The results show that the ground surface quality of ferrite is optimal at a wheel speed of 20,000 r/min, a feed rate of 10 mm/min, a grinding depth of 0.005 mm, and a turntable rotary speed of 5 r/min, for which the surface roughness Ra reaches 75 nm.

  5. High precision NC lathe feeding system rigid-flexible coupling model reduction technology

    NASA Astrophysics Data System (ADS)

    Xuan, He; Hua, Qingsong; Cheng, Lianjun; Zhang, Hongxin; Zhao, Qinghai; Mao, Xinkai

    2017-08-01

    This paper proposes the use of the dynamic substructure method of model-order reduction to achieve an effective reduction of the rigid-flexible coupling model of a high-precision NC lathe feeding system. ADAMS is used to establish the rigid-flexible coupling simulation model of the high-precision NC lathe, and the multi-degree-of-freedom model of the bolted connections in the feed system is then reduced. Vibration simulation using the FD 3D damper shows that this reduction is very effective for the feed system, and the vibration simulation calculation becomes more accurate and faster.

  6. The Use of Scale-Dependent Precision to Increase Forecast Accuracy in Earth System Modelling

    NASA Astrophysics Data System (ADS)

    Thornes, Tobias; Duben, Peter; Palmer, Tim

    2016-04-01

    At the current pace of development, it may be decades before the 'exa-scale' computers needed to resolve individual convective clouds in weather and climate models become available to forecasters, and such machines will incur very high power demands. But the resolution could be improved today by switching to more efficient, 'inexact' hardware with which variables can be represented in 'reduced precision'. Currently, all numbers in our models are represented as double-precision floating points - each requiring 64 bits of memory - to minimise rounding errors, regardless of spatial scale. Yet observational and modelling constraints mean that values of atmospheric variables are inevitably known less precisely on smaller scales, suggesting that this may be a waste of computer resources. More accurate forecasts might therefore be obtained by taking a scale-selective approach whereby the precision of variables is gradually decreased at smaller spatial scales to optimise the overall efficiency of the model. To study the effect of reducing precision to different levels on multiple spatial scales, we here introduce a new model atmosphere developed by extending the Lorenz '96 idealised system to encompass three tiers of variables - which represent large-, medium- and small-scale features - for the first time. In this chaotic but computationally tractable system, the 'true' state can be defined by explicitly resolving all three tiers. The abilities of low resolution (single-tier) double-precision models and similar-cost high resolution (two-tier) models in mixed-precision to produce accurate forecasts of this 'truth' are compared. The high resolution models outperform the low resolution ones even when small-scale variables are resolved in half-precision (16 bits). This suggests that using scale-dependent levels of precision in more complicated real-world Earth System models could allow forecasts to be made at higher resolution and with improved accuracy. If adopted, this new
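    A minimal sketch of the reduced-precision idea, assuming a standard single-tier Lorenz '96 system integrated once in double and once in half precision; the three-tier, scale-selective system introduced above is not reproduced here.

```python
# Hypothetical sketch: integrate a single-tier Lorenz '96 system in double and
# in half precision and compare the trajectories. Illustration only; the
# three-tier, scale-selective setup described above is more elaborate.
import numpy as np

def lorenz96_tendency(x, forcing=8.0):
    # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate(x0, dt, steps, dtype):
    x = x0.astype(dtype)
    for _ in range(steps):
        # forward Euler step, re-cast to the working precision after each update
        # (Euler keeps the sketch short; RK4 would be more usual in practice)
        x = (x + dt * lorenz96_tendency(x)).astype(dtype)
    return x

rng = np.random.default_rng(0)
x0 = 8.0 + 0.01 * rng.standard_normal(40)
double = integrate(x0, dt=0.005, steps=400, dtype=np.float64)
half = integrate(x0, dt=0.005, steps=400, dtype=np.float16)
print("RMS difference after 2 model time units:",
      np.sqrt(np.mean((double - half.astype(np.float64)) ** 2)))
```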

  7. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.

  8. [Precision nutrition in the era of precision medicine].

    PubMed

    Chen, P Z; Wang, H

    2016-12-06

    Precision medicine has been increasingly incorporated into clinical practice and is enabling a new era for disease prevention and treatment. As an important constituent of precision medicine, precision nutrition has also been drawing more attention during physical examinations. The main aim of precision nutrition is to provide safe and efficient intervention methods for disease treatment and management, through fully considering the genetics, lifestyle (dietary, exercise and lifestyle choices), metabolic status, gut microbiota and physiological status (nutrient level and disease status) of individuals. Three major components should be considered in precision nutrition, including individual criteria for sufficient nutritional status, biomarker monitoring or techniques for nutrient detection and the applicable therapeutic or intervention methods. It was suggested that, in clinical practice, many inherited and chronic metabolic diseases might be prevented or managed through precision nutritional intervention. For generally healthy populations, because lifestyles, dietary factors, genetic factors and environmental exposures vary among individuals, precision nutrition is warranted to improve their physical activity and reduce disease risks. In summary, research and practice is leading toward precision nutrition becoming an integral constituent of clinical nutrition and disease prevention in the era of precision medicine.

  9. Higgs-precision constraints on colored naturalness

    DOE PAGES

    Essig, Rouven; Meade, Patrick; Ramani, Harikrishnan; ...

    2017-09-19

    The presence of weak-scale colored top partners is among the simplest solutions to the Higgs hierarchy problem and allows for a natural electroweak scale. We examine the constraints on generic colored top partners coming solely from their effect on the production and decay rates of the observed Higgs with a mass of 125 GeV. We use the latest Higgs precision data from the Tevatron and the LHC as of EPS 2017 to derive the current limits on spin-0, spin-1/2, and spin-1 colored top partners. We also investigate the expected sensitivity from Run 3 and Run 4 of the LHC, as well as from possible future electron-positron and proton-proton colliders, including the ILC, CEPC, FCC-ee, and FCC-hh. We discuss constraints on top partners in the Minimal Supersymmetric Standard Model and Little Higgs theories. We also consider various model-building aspects — multiple top partners, modified couplings between the Higgs and Standard-Model particles, and non-Standard-Model Higgs sectors — and evaluate how these weaken the current limits and expected sensitivities. By modifying other Standard-Model Higgs couplings, we find that the best way to hide low-mass top partners from current data is through modifications of the top-Yukawa coupling, although future measurements of top-quark-pair production in association with a Higgs will extensively probe this possibility. We also demonstrate that models with multiple top partners can generically avoid current and future Higgs precision measurements. Nevertheless, some of the model parameter space can be probed with precision measurements at future electron-positron colliders of, for example, the e+e- → Zh cross section.

  10. Higgs-precision constraints on colored naturalness

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essig, Rouven; Meade, Patrick; Ramani, Harikrishnan

    The presence of weak-scale colored top partners is among the simplest solutions to the Higgs hierarchy problem and allows for a natural electroweak scale. We examine the constraints on generic colored top partners coming solely from their effect on the production and decay rates of the observed Higgs with a mass of 125 GeV. We use the latest Higgs precision data from the Tevatron and the LHC as of EPS 2017 to derive the current limits on spin-0, spin-1/2, and spin-1 colored top partners. We also investigate the expected sensitivity from Run 3 and Run 4 of the LHC, as well as from possible future electron-positron and proton-proton colliders, including the ILC, CEPC, FCC-ee, and FCC-hh. We discuss constraints on top partners in the Minimal Supersymmetric Standard Model and Little Higgs theories. We also consider various model-building aspects — multiple top partners, modified couplings between the Higgs and Standard-Model particles, and non-Standard-Model Higgs sectors — and evaluate how these weaken the current limits and expected sensitivities. By modifying other Standard-Model Higgs couplings, we find that the best way to hide low-mass top partners from current data is through modifications of the top-Yukawa coupling, although future measurements of top-quark-pair production in association with a Higgs will extensively probe this possibility. We also demonstrate that models with multiple top partners can generically avoid current and future Higgs precision measurements. Nevertheless, some of the model parameter space can be probed with precision measurements at future electron-positron colliders of, for example, the e+e- → Zh cross section.

  11. Ultrathin conformal devices for precise and continuous thermal characterization of human skin

    PubMed Central

    Webb, R. Chad; Bonifas, Andrew P.; Behnaz, Alex; Zhang, Yihui; Yu, Ki Jun; Cheng, Huanyu; Shi, Mingxing; Bian, Zuguang; Liu, Zhuangjian; Kim, Yun-Soung; Yeo, Woon-Hong; Park, Jae Suk; Song, Jizhou; Li, Yuhang; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.

    2013-01-01

    Precision thermometry of the skin can, together with other measurements, provide clinically relevant information about cardiovascular health, cognitive state, malignancy and many other important aspects of human physiology. Here, we introduce an ultrathin, compliant skin-like sensor/actuator technology that can pliably laminate onto the epidermis to provide continuous, accurate thermal characterizations that are unavailable with other methods. Examples include non-invasive spatial mapping of skin temperature with millikelvin precision, and simultaneous quantitative assessment of tissue thermal conductivity. Such devices can also be implemented in ways that reveal the time-dynamic influence of blood flow and perfusion on these properties. Experimental and theoretical studies establish the underlying principles of operation, and define engineering guidelines for device design. Evaluation of subtle variations in skin temperature associated with mental activity, physical stimulation and vasoconstriction/dilation along with accurate determination of skin hydration through measurements of thermal conductivity represent some important operational examples. PMID:24037122

  12. Accurate quantification of magnetic particle properties by intra-pair magnetophoresis for nanobiotechnology

    NASA Astrophysics Data System (ADS)

    van Reenen, Alexander; Gao, Yang; Bos, Arjen H.; de Jong, Arthur M.; Hulsen, Martien A.; den Toonder, Jaap M. J.; Prins, Menno W. J.

    2013-07-01

    The application of magnetic particles in biomedical research and in-vitro diagnostics requires accurate characterization of their magnetic properties, with single-particle resolution and good statistics. Here, we report intra-pair magnetophoresis as a method to accurately quantify the field-dependent magnetic moments of magnetic particles and to rapidly generate histograms of the magnetic moments with good statistics. We demonstrate our method with particles of different sizes and from different sources, with a measurement precision of a few percent. We expect that intra-pair magnetophoresis will be a powerful tool for the characterization and improvement of particles for the upcoming field of particle-based nanobiotechnology.

  13. Estimating sensitivity and specificity for technology assessment based on observer studies.

    PubMed

    Nishikawa, Robert M; Pesce, Lorenzo L

    2013-07-01

    The goal of this study was to determine the accuracy and precision of using scores from a receiver operating characteristic rating scale to estimate sensitivity and specificity. We used data collected in a previous study that measured the improvements in radiologists' ability to classify mammographic microcalcification clusters as benign or malignant with and without the use of a computer-aided diagnosis scheme. Sensitivity and specificity were estimated from the rating data and from a question that directly asked the radiologists their biopsy recommendations; the latter was used as the "truth" because it is the actual recall decision and thus the readers' subjective truth. By thresholding the rating data, sensitivity and specificity were estimated for different threshold values. Because of interreader and intrareader variability, estimated sensitivity and specificity values for individual readers could be as much as 100% in error when using rating data compared to using the biopsy recommendation data. When pooled together, the estimates from thresholding the rating data were in good agreement with sensitivity and specificity estimated from the recommendation data. However, the statistical power of the rating-data estimates was lower. By simply asking the observer his or her explicit recommendation (e.g., biopsy or no biopsy), sensitivity and specificity can be measured directly, giving a more accurate description of empirical variability and maximizing the power of the study. Copyright © 2013 AUR. Published by Elsevier Inc. All rights reserved.
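    A minimal sketch, with invented ratings and truth labels, of the two estimation routes contrasted above: sensitivity/specificity from explicit binary recommendations versus from thresholding confidence ratings at several operating points.

```python
# Hypothetical sketch: sensitivity/specificity from explicit binary calls versus
# from thresholded confidence ratings. Ratings, calls, and labels are invented.
import numpy as np

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])               # 1 = malignant
ratings = np.array([90, 75, 60, 40, 55, 30, 20, 10, 85, 45])    # 0-100 confidence
recommend = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])            # explicit biopsy calls

def sens_spec(pred, truth):
    tp = np.sum((pred == 1) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    return tp / np.sum(truth == 1), tn / np.sum(truth == 0)

print("from recommendations:", sens_spec(recommend, truth))
for threshold in (35, 50, 65):                                   # rating-based operating points
    print(f"rating >= {threshold}:", sens_spec((ratings >= threshold).astype(int), truth))
```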

  14. Precision Therapy of Head and Neck Squamous Cell Carcinoma.

    PubMed

    Polverini, P J; D'Silva, N J; Lei, Y L

    2018-06-01

    Precision medicine is an approach to disease prevention and treatment that takes into account genetic variability and environmental and lifestyle influences that are unique to each patient. It facilitates stratification of patient populations that vary in their susceptibility to disease and response to therapy. Shared databases and the implementation of new technology systems designed to advance the integration of this information will enable health care providers to more accurately predict and customize prevention and treatment strategies for patients. Although precision medicine has had a limited impact in most areas of medicine, it has been shown to be an increasingly successful approach to cancer therapy. Despite early promising results targeting aberrant signaling pathways or inhibitors designed to block tumor-driven processes such as angiogenesis, limited success emphasizes the need to discover new biomarkers and treatment targets that are more reliable in predicting response to therapy and result in better health outcomes. Recent successes in the use of immunity-inducing antibodies have stimulated increased interest in the use of precision immunotherapy of head and neck squamous cell carcinoma. Using next-generation sequencing, the precise profiling of tumor-infiltrating lymphocytes has great promise to identify hypoimmunogenic cancer that would benefit from a rationally designed combinatorial approach. Continued interrogation of tumors will reveal new actionable targets with increasing therapeutic efficacy and fulfill the promise of precision therapy of head and neck cancer.

  15. Laser-Induced Focused Ultrasound for Cavitation Treatment: Toward High-Precision Invisible Sonic Scalpel.

    PubMed

    Lee, Taehwa; Luo, Wei; Li, Qiaochu; Demirci, Hakan; Guo, L Jay

    2017-10-01

    Beyond the application of the photoacoustic effect in photoacoustic imaging and laser ultrasonics, this study demonstrates a novel application of the photoacoustic effect for high-precision cavitation treatment of tissue using laser-induced focused ultrasound. The focused ultrasound is generated by pulsed optical excitation of an efficient photoacoustic film coated on a concave surface, and its amplitude is high enough to produce controllable microcavitation within the focal region (lateral focus <100 µm). Such microcavitation is used to cut or ablate soft tissue in a highly precise manner. This work demonstrates precise cutting of tissue-mimicking gels as well as accurate ablation of gels and animal eye tissues. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Improving precision of forage yield trials: A case study

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to several facto...

  17. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
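    As one illustration of terrain-dependent radiometric normalization, the sketch below rescales ellipsoid-referenced backscatter by the local incidence angle derived from a DEM; this is only a generic first-order area correction, not the full AIRSAR antenna-pattern and motion correction described above.

```python
# Hypothetical sketch: a simple first-order area normalization in which backscatter
# referenced to a flat-ellipsoid geometry is rescaled by the DEM-derived local
# incidence angle: sigma0_loc = sigma0_ell * sin(theta_loc) / sin(theta_ell).
# This does not reproduce the antenna-gain and aircraft-motion corrections above.
import numpy as np

def area_normalize(sigma0_ell_db, theta_ell_deg, theta_loc_deg):
    sigma0_lin = 10.0 ** (sigma0_ell_db / 10.0)
    corrected = sigma0_lin * np.sin(np.radians(theta_loc_deg)) / np.sin(np.radians(theta_ell_deg))
    return 10.0 * np.log10(corrected)

print(area_normalize(-8.5, theta_ell_deg=35.0, theta_loc_deg=25.0), "dB")
```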

  18. Development of position-sensitive time-of-flight spectrometer for fission fragment research

    DOE PAGES

    Arnold, C. W.; Tovesson, F.; Meierbachtol, K.; ...

    2014-07-09

    A position-sensitive, high-resolution time-of-flight detector for fission fragments has been developed. The SPectrometer for Ion DEtermination in fission Research (SPIDER) is a 2E–2v spectrometer designed to measure the mass of light fission fragments to a single mass unit. The time pick-off detector pairs to be used in SPIDER have been tested with α-particles from 229Th and its decay chain and α-particles and spontaneous fission fragments from 252Cf. Each detector module is comprised of a thin electron conversion foil, an electrostatic mirror, microchannel plates, and delay-line anodes. Particle trajectories on the order of 700 mm are determined accurately to within 0.7 mm. Flight times were measured with 250 ps resolution FWHM. Computed particle velocities are accurate to within 0.06 mm/ns, corresponding to a precision of 0.5%. As a result, an ionization chamber capable of 400 keV energy resolution coupled with the velocity measurements described here will pave the way for modestly efficient measurements of light fission fragments with unit mass resolution.
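
    The quoted numbers can be checked with a rough first-order error propagation for v = L/t. The sketch below assumes a typical light-fragment velocity of about 12 mm/ns (implied by 0.06 mm/ns being roughly 0.5% of v) and simply treats the quoted 0.7 mm and 250 ps FWHM figures as the relevant uncertainties; it is a back-of-envelope check, not the SPIDER analysis.

        # Rough consistency check for v = L / t (a sketch, not the SPIDER analysis).
        # The ~12 mm/ns typical velocity is an assumption inferred from 0.06 mm/ns
        # being ~0.5% of v; 700 mm, 0.7 mm and 250 ps are quoted in the abstract.
        import math

        L, sigma_L = 700.0, 0.7     # flight path and its uncertainty, mm
        v = 12.0                    # assumed typical light-fragment velocity, mm/ns
        t = L / v                   # implied flight time, ns (~58 ns)
        sigma_t = 0.250             # timing resolution (FWHM), ns, used as-is here

        rel_v = math.sqrt((sigma_L / L) ** 2 + (sigma_t / t) ** 2)
        print(f"relative velocity uncertainty ~{100 * rel_v:.2f}%")   # ~0.4-0.5%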

  19. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

    This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize profiles of edges from an LR image, which hence leads to a poor PSF estimate for the lens taking the LR image. For precisely estimating the PSF, this paper proposes first estimating a 1-D PSF kernel with straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then utilized to extract straight lines of all orientations. Estimating the 1-D PSF kernel with straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is investigated on both natural and synthetic images for estimating the PSF. Experimental results show that the proposed method outperforms the state-of-the-art and does not rely on accurate edge detection.
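
    The core idea of recovering a 1-D blur kernel from a straight edge can be sketched in a few lines of numpy. This is only an illustration of that single step under simplifying assumptions (a synthetic vertical step edge and a Gaussian blur); the paper's full Canny/Hough/RANSAC pipeline for real LR images is not reproduced here.

        # Sketch: estimate a 1-D blur kernel from a straight (vertical) edge.
        # The synthetic step edge and Gaussian blur are illustrative assumptions.
        import numpy as np

        def gaussian_kernel(sigma, radius=10):
            x = np.arange(-radius, radius + 1, dtype=float)
            k = np.exp(-0.5 * (x / sigma) ** 2)
            return k / k.sum()

        true_kernel = gaussian_kernel(sigma=2.0)
        step = np.zeros(200)
        step[100:] = 1.0                                   # ideal vertical step edge
        img = np.array([np.convolve(step, true_kernel, mode="same")
                        for _ in range(64)])               # 64 image rows along the edge
        img += 0.005 * np.random.randn(*img.shape)         # mild sensor noise

        esf = img.mean(axis=0)                             # edge spread function
        lsf = np.clip(np.gradient(esf), 0, None)           # line spread ~ 1-D PSF kernel

        c, w = int(np.argmax(lsf)), 15                     # analyse a window near the edge
        seg = lsf[c - w:c + w + 1]
        seg = seg / seg.sum()
        xs = np.arange(-w, w + 1)
        est_sigma = np.sqrt(np.sum(seg * xs ** 2) - np.sum(seg * xs) ** 2)
        print(f"estimated kernel sigma ~{est_sigma:.2f} px (true: 2.0 px)")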

  20. Precise chronologies of Holocene glacial culminations in the Cordillera Vilcabamba of southern Peru

    NASA Astrophysics Data System (ADS)

    Licciardi, J. M.; Schaefer, J. M.; Schweinsberg, A. D.

    2012-12-01

    Records of past fluctuations in climatically sensitive tropical mountain glaciers are among the best indicators of regional paleoclimatic trends and controls. The majority of the world's present-day tropical glaciers are found in the Peruvian Andes, but accurate and precise chronologies of past glacial activity in this region remain relatively scarce, particularly during the Holocene. Here we present ~50 new 10Be exposure ages derived from boulders on well-preserved moraine successions in several glaciated drainages in the Cordillera Vilcabamba of southern Peru (13°20'S latitude). The new results suggest that prominent moraines in these valleys are correlative with previously published moraine ages near Nevado Salcantay in this range (Licciardi et al., 2009), but also expand on the initial surface exposure chronologies to reveal additional periods of glacier stabilization not found in previous work. A provisional composite chronology that merges the new and previously obtained moraine ages indicates at least five discrete glacial culminations from the Lateglacial to the late Holocene. Forthcoming 10Be ages from an additional ~50 samples collected from moraine boulders will increase the precision and completeness of the Vilcabamba moraine chronologies. Basal radiocarbon ages are being developed from bog and lake sediments in stratigraphic contact with the 10Be-dated moraines. These new 14C age data will help constrain the local cosmogenic 10Be production rate, thereby increasing the accuracy of the 10Be chronologies.

  1. Precision half-life measurement of 11C: The most precise mirror transition Ft value

    NASA Astrophysics Data System (ADS)

    Valverde, A. A.; Brodeur, M.; Ahn, T.; Allen, J.; Bardayan, D. W.; Becchetti, F. D.; Blankstein, D.; Brown, G.; Burdette, D. P.; Frentz, B.; Gilardy, G.; Hall, M. R.; King, S.; Kolata, J. J.; Long, J.; Macon, K. T.; Nelson, A.; O'Malley, P. D.; Skulski, M.; Strauss, S. Y.; Vande Kolk, B.

    2018-03-01

    Background: The precise determination of the Ft value in T = 1/2 mixed mirror decays is an important avenue for testing the standard model of the electroweak interaction through the determination of V_ud in nuclear β decays. 11C is an interesting case, as its low mass and small Q_EC value make it particularly sensitive to violations of the conserved vector current hypothesis. The present dominant source of uncertainty in the 11C Ft value is the half-life. Purpose: A high-precision measurement of the 11C half-life was performed, and a new world average half-life was calculated. Method: 11C was created by transfer reactions and separated using the TwinSol facility at the Nuclear Science Laboratory at the University of Notre Dame. It was then implanted into a tantalum foil, and β counting was used to determine the half-life. Results: The new half-life, t1/2 = 1220.27(26) s, is consistent with the previous values but significantly more precise. A new world average was calculated, t1/2(world) = 1220.41(32) s, and a new estimate for the Gamow-Teller to Fermi mixing ratio ρ is presented along with standard model correlation parameters. Conclusions: The new 11C world average half-life allows the calculation of an Ft(mirror) value that is now the most precise value for all superallowed mixed mirror transitions. This gives a strong impetus for an experimental determination of ρ, to allow for the determination of V_ud from this decay.
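
    The "new world average" quoted above is the kind of number obtained from an inverse-variance weighted mean of independent half-life measurements. A minimal sketch of that combination follows; the input values are placeholders for illustration and are not the actual data set evaluated in the paper.

        # Inverse-variance weighted mean, the standard way a "world average"
        # half-life is combined from independent measurements. The values below
        # are illustrative placeholders, not the data set used in the paper.
        import numpy as np

        t_half = np.array([1220.27, 1220.9, 1221.8])   # half-lives, s (illustrative)
        sigma  = np.array([0.26,    0.8,    1.0])      # 1-sigma uncertainties, s

        w = 1.0 / sigma**2
        mean = np.sum(w * t_half) / np.sum(w)
        err  = 1.0 / np.sqrt(np.sum(w))
        # PDG-style scale factor to inflate the error if the measurements scatter
        # more than their quoted uncertainties suggest.
        chi2 = np.sum(w * (t_half - mean) ** 2)
        scale = max(1.0, np.sqrt(chi2 / (len(t_half) - 1)))
        print(f"world average ~{mean:.2f} +/- {err * scale:.2f} s (scale {scale:.2f})")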

  2. [Assessment of precision and accuracy of digital surface photogrammetry with the DSP 400 system].

    PubMed

    Krimmel, M; Kluba, S; Dietz, K; Reinert, S

    2005-03-01

    The objective of the present study was to evaluate the precision and accuracy of facial anthropometric measurements obtained through digital 3-D surface photogrammetry with the DSP 400 system in comparison to traditional 2-D photogrammetry. Fifty plaster casts of cleft infants were imaged and 21 standard anthropometric measurements were obtained. For precision assessment the measurements were performed twice in a subsample. Accuracy was determined by comparison of direct measurements and indirect 2-D and 3-D image measurements. Precision of digital surface photogrammetry was almost as good as direct anthropometry and clearly better than 2-D photogrammetry. Measurements derived from 3-D images showed better congruence to direct measurements than from 2-D photos. Digital surface photogrammetry with the DSP 400 system is sufficiently precise and accurate for craniofacial anthropometric examinations.

  3. A comparison of five methods for monitoring the precision of automated x-ray film processors.

    PubMed

    Nickoloff, E L; Leo, F; Reese, M

    1978-11-01

    Five different methods for preparing sensitometric strips used to monitor the precision of automated film processors are compared. A method for determining the sensitivity of each system to processor variations is presented; the observed statistical variability is multiplied by the system response to temperature or chemical changes. Pre-exposed sensitometric strips required the use of accurate densitometers and stringent control limits to be effective. X-ray exposed sensitometric strips demonstrated large variations in the x-ray output (2 omega approximately equal to 8.0%) over a period of one month. Some light sensitometers were capable of detecting +/- 1.0 degrees F (+/- 0.6 degrees C) variations in developer temperature in the processor and/or about 10.0 ml of chemical contamination in the processor. Nevertheless, even the light sensitometers were susceptible to problems, e.g. film emulsion selection, line voltage variations, and latent image fading. Advantages and disadvantages of the various sensitometric methods are discussed.

  4. Experimental evaluation of active-member control of precision structures

    NASA Technical Reports Server (NTRS)

    Fanson, James; Blackwood, Gary; Chu, Cheng-Chih

    1989-01-01

    The results of closed loop experiments that use piezoelectric active-members to control the flexible motion of a precision truss structure are described. These experiments are directed toward the development of high-performance structural systems as part of the Control/Structure Interaction (CSI) program at JPL. The focus of CSI activity at JPL is to develop the technology necessary to accurately control both the shape and vibration levels in the precision structures from which proposed large space-based observatories will be built. Structural error budgets for these types of structures will likely be in the sub-micron regime; optical tolerances will be even tighter. In order to achieve system level stability and local positioning at this level, it is generally expected that some form of active control will be required.

  5. High-throughput dual-colour precision imaging for brain-wide connectome with cytoarchitectonic landmarks at the cellular level

    PubMed Central

    Gong, Hui; Xu, Dongli; Yuan, Jing; Li, Xiangning; Guo, Congdi; Peng, Jie; Li, Yuxin; Schwarz, Lindsay A.; Li, Anan; Hu, Bihe; Xiong, Benyi; Sun, Qingtao; Zhang, Yalun; Liu, Jiepeng; Zhong, Qiuyuan; Xu, Tonghui; Zeng, Shaoqun; Luo, Qingming

    2016-01-01

    The precise annotation and accurate identification of neural structures are prerequisites for studying mammalian brain function. The orientation of neurons and neural circuits is usually determined by mapping brain images to coarse axial-sampling planar reference atlases. However, individual differences at the cellular level likely lead to position errors and an inability to orient neural projections at single-cell resolution. Here, we present a high-throughput precision imaging method that can acquire a co-localized brain-wide data set of both fluorescent-labelled neurons and counterstained cell bodies at a voxel size of 0.32 × 0.32 × 2.0 μm in 3 days for a single mouse brain. We acquire mouse whole-brain imaging data sets of multiple types of neurons and projections with anatomical annotation at single-neuron resolution. The results show that the simultaneous acquisition of labelled neural structures and cytoarchitecture reference in the same brain greatly facilitates precise tracing of long-range projections and accurate locating of nuclei. PMID:27374071

  6. KLY5 Kappabridge: High sensitivity susceptibility and anisotropy meter precisely decomposing in-phase and out-of-phase components

    NASA Astrophysics Data System (ADS)

    Pokorny, Petr; Pokorny, Jiri; Chadima, Martin; Hrouda, Frantisek; Studynka, Jan; Vejlupek, Josef

    2016-04-01

    In addition to the standard measurement of in-phase magnetic susceptibility and its anisotropy, the KLY5 Kappabridge is equipped for precise and calibrated measurement of out-of-phase susceptibility and its anisotropy. The phase angle is measured in "absolute" terms, i.e. without any residual phase error. The measured value of the out-of-phase susceptibility is independent of both the magnitude of the complex susceptibility and the intensity of the driving magnetic field. The precise decomposition of the complex susceptibility into the in-phase and out-of-phase components is verified through the presumably zero out-of-phase susceptibility of pure gadolinium oxide. The outstanding sensitivity in the measurement of weak samples is achieved by a newly developed drift compensation routine in addition to the latest models of electronic devices. In rocks, soils, and environmental materials, in which the out-of-phase susceptibility usually arises from viscous relaxation, it can substitute for the more laborious frequency-dependent susceptibility routinely used in magnetic granulometry. Another new feature is measurement of the anisotropy of out-of-phase magnetic susceptibility (opAMS), which is performed simultaneously and automatically with the standard (in-phase) AMS measurement. The opAMS enables the direct determination of the magnetic sub-fabrics of the minerals that show non-zero out-of-phase susceptibility, whether due to viscous relaxation (ultrafine grains of magnetite or maghemite), weak-field hysteresis (titanomagnetite, hematite, pyrrhotite), or eddy currents (in conductive minerals). Using the 3D rotator, the instrument performs the measurement of both the AMS and opAMS with only one insertion of the specimen into the specimen holder. In addition, fully automated measurement of the field variation of the AMS and opAMS is possible. The instrument is able to measure, in conjunction with the CS-4 Furnace and CS-L Cryostat, the temperature variation of

  7. Highly accurate pulse-per-second timing distribution over optical fibre network using VCSEL side-mode injection

    NASA Astrophysics Data System (ADS)

    Wassin, Shukree; Isoe, George M.; Gamatham, Romeo R. G.; Leitch, Andrew W. R.; Gibbon, Tim B.

    2017-01-01

    Precise and accurate timing signals distributed between a centralized location and several end-users are widely used in both metro-access and speciality networks for Coordinated Universal Time (UTC), GPS satellite systems, banking, very long baseline interferometry and science projects such as the SKA radio telescope. Such systems utilize time and frequency technology to ensure phase coherence among data signals distributed across an optical fibre network. For accurate timing requirements, precise time intervals should be measured between successive pulses. In this paper we describe a novel, all-optical method for quantifying one-way propagation times and phase perturbations in the fibre length, using pulse-per-second (PPS) signals. The approach utilizes side-mode injection of a 1550 nm 10 Gbps vertical cavity surface emitting laser (VCSEL) at the remote end. A 125 μs one-way time of flight was accurately measured for 25 km of G655 fibre. Since the approach is all-optical, it avoids measurement inaccuracies introduced by electro-optical conversion phase delays. Furthermore, the implementation uses cost-effective VCSEL technology and is suited to a flexible range of network architectures, supporting a number of end-users conducting measurements at the remote end.
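
    The quoted flight time is easy to sanity-check from first principles, assuming a fibre group index of roughly 1.5 (an assumed round figure; the exact G.655 group index differs slightly):

        # One-way propagation time t = n * L / c for 25 km of fibre.
        # n ~ 1.5 is an assumed round number for the group index of standard fibre.
        c = 299_792_458.0      # speed of light in vacuum, m/s
        n = 1.5                # assumed fibre group index
        L = 25_000.0           # fibre length, m
        t = n * L / c
        print(f"one-way time of flight ~{t * 1e6:.0f} us")   # ~125 us, as quoted above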

  8. Optimization and Validation of a Sensitive Method for HPLC-PDA Simultaneous Determination of Torasemide and Spironolactone in Human Plasma using Central Composite Design.

    PubMed

    Subramanian, Venkatesan; Nagappan, Kannappan; Sandeep Mannemala, Sai

    2015-01-01

    A sensitive, accurate, precise and rapid HPLC-PDA method was developed and validated for the simultaneous determination of torasemide and spironolactone in human plasma using Design of Experiments. A central composite design was used to optimize the method, with the content of acetonitrile, the concentration of buffer and the pH of the mobile phase as independent variables, while the retention factor of spironolactone, the resolution between torasemide and phenobarbitone, and the retention time of phenobarbitone were chosen as dependent variables. The chromatographic separation was achieved on a Phenomenex C(18) column with a mobile phase comprising 20 mM potassium dihydrogen orthophosphate buffer (pH 3.2) and acetonitrile (82.5:17.5 v/v) pumped at a flow rate of 1.0 mL min(-1). The method was validated according to USFDA guidelines in terms of selectivity, linearity, accuracy, precision, recovery and stability. The limit of quantitation values were 80 and 50 ng mL(-1) for torasemide and spironolactone, respectively. Furthermore, the sensitivity and simplicity of the method suggest the validity of the method for routine clinical studies.

  9. Advanced structural design for precision radial velocity instruments

    NASA Astrophysics Data System (ADS)

    Baldwin, Dan; Szentgyorgyi, Andrew; Barnes, Stuart; Bean, Jacob; Ben-Ami, Sagi; Brennan, Patricia; Budynkiewicz, Jamie; Chun, Moo-Young; Conroy, Charlie; Crane, Jeffrey D.; Epps, Harland; Evans, Ian; Evans, Janet; Foster, Jeff; Frebel, Anna; Gauron, Thomas; Guzman, Dani; Hare, Tyson; Jang, Bi-Ho; Jang, Jeong-Gyun; Jordan, Andres; Kim, Jihun; Kim, Kang-Min; Mendes de Oliveira, Claudia; Lopez-Morales, Mercedes; McCracken, Kenneth; McMuldroch, Stuart; Miller, Joseph; Mueller, Mark; Oh, Jae Sok; Ordway, Mark; Park, Byeong-Gon; Park, Chan; Park, Sung-Joon; Paxson, Charles; Phillips, David; Plummer, David; Podgorski, William; Seifahrt, Andreas; Stark, Daniel; Steiner, Joao; Uomoto, Alan; Walsworth, Ronald; Yu, Young-Sam

    2016-07-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is an echelle spectrograph with precision radial velocity (PRV) capability that will be a first light instrument for the Giant Magellan Telescope (GMT). G-CLEF has a PRV precision goal of 40 cm/sec (10 cm/s for multiple measurements) to enable detection of Earth-like exoplanets in the habitable zones of sun-like stars. This precision is a primary driver of G-CLEF's structural design. Extreme stability is necessary to minimize image motions at the CCD detectors. Minute changes in temperature, pressure, and acceleration environments cause structural deformations, inducing image motions which degrade PRV precision. The instrument's structural design will ensure that the PRV goal is achieved under the environments G-CLEF will be subjected to as installed on the GMT azimuth platform, including millikelvin (0.001 K) thermal soaks and gradients, 10 millibar changes in ambient pressure, and changes in acceleration due to instrument tip/tilt and telescope slewing. Carbon fiber/cyanate composite was selected for the optical bench structure in order to meet performance goals. Low coefficient of thermal expansion (CTE) and high stiffness-to-weight are key features of the composite optical bench design. Manufacturability and serviceability of the instrument are also drivers of the design. In this paper, we discuss analyses leading to technical choices made to minimize G-CLEF's sensitivity to changing environments. Finite element analysis (FEA) and image motion sensitivity studies were conducted to determine PRV performance under operational environments. We discuss the design of the optical bench structure to optimize stiffness-to-weight and minimize deformations due to inertial and pressure effects. We also discuss quasi-kinematic mounting of optical elements and assemblies, and optimization of these to ensure minimal image motion under thermal, pressure, and inertial loads expected during PRV observations.

  10. Vacuum decay container/closure integrity testing technology. Part 1. ASTM F2338-09 precision and bias studies.

    PubMed

    Wolf, Heinz; Stauffer, Tony; Chen, Shu-Chen Y; Lee, Yoojin; Forster, Ronald; Ludzinski, Miron; Kamat, Madhav; Godorov, Phillip; Guazzo, Dana Morton

    2009-01-01

    ASTM F2338-09 Standard Test Method for Nondestructive Detection of Leaks in Packages by Vacuum Decay Method is applicable for leak-testing rigid and semi-rigid non-lidded trays; trays or cups sealed with porous barrier lidding materials; rigid, nonporous packages; and flexible, nonporous packages. Part 1 of this series describes the precision and bias studies performed in 2008 to expand this method's scope to include rigid, nonporous packages completely or partially filled with liquid. Round robin tests using three VeriPac 325/LV vacuum decay leak testers (Packaging Technologies & Inspection, LLC, Tuckahoe, NY) were performed at three test sites. Test packages were 1-mL glass syringes. Positive controls had laser-drilled holes in the barrel ranging from about 5 to 15 microm in nominal diameter. Two different leak tests methods were performed at each site: a "gas leak test" performed at 250 mbar (absolute) and a "liquid leak test" performed at about 1 mbar (absolute). The gas leak test was used to test empty, air-filled syringes. All defects with holes > or = 5.0 microm and all no-defect controls were correctly identified. The only false negative result was attributed to a single syringe with a < 5.0-microm hole. Tests performed using a calibrated air leak supported a 0.10-cm3 x min(-1) (ccm) sensitivity limit (99/99 lower tolerance limit). The liquid leak test was used to test both empty, air-filled syringes and water-filled syringes. Test results were 100% accurate for all empty and water-filled syringes, both without holes and with holes (5, 10, and 15 microm). Tests performed using calibrated air flow leaks of 0, 0.05, and 0.10 ccm were also 100% accurate; data supported a 0.10-ccm sensitivity limit (99/99 lower tolerance limit). Quantitative differential pressure results strongly correlated to hole size using either liquid or gas vacuum decay leak tests. The higher vacuum liquid leak test gave noticeably higher pressure readings when water was present in the

  11. The GLAS Algorithm Theoretical Basis Document for Precision Orbit Determination (POD)

    NASA Technical Reports Server (NTRS)

    Rim, Hyung Jin; Yoon, S. P.; Schultz, Bob E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) was the sole instrument for NASA's Ice, Cloud and land Elevation Satellite (ICESat) laser altimetry mission. The primary purpose of the ICESat mission was to make ice sheet elevation measurements of the polar regions. Additional goals were to measure the global distribution of clouds and aerosols and to map sea ice, land topography and vegetation. ICESat was the benchmark Earth Observing System (EOS) mission to be used to determine the mass balance of the ice sheets, as well as for providing cloud property information, especially for stratospheric clouds common over polar areas. The GLAS instrument operated from 2003 to 2009 and provided multi-year elevation data needed to determine changes in sea ice freeboard, land topography and vegetation around the globe, in addition to elevation changes of the Greenland and Antarctic ice sheets. This document describes the Precision Orbit Determination (POD) algorithm for the ICESat mission. The problem of determining an accurate ephemeris for an orbiting satellite involves estimating the position and velocity of the satellite from a sequence of observations. The ICESat/GLAS elevation measurements must be very accurately geolocated, combining precise orbit information with precision pointing information. The ICESat mission POD requirement states that the position of the instrument should be determined with an accuracy of 5 and 20 cm (1-sigma) in radial and horizontal components, respectively, to meet the science requirements for determining elevation change.

  12. Improving Weather Forecasts Through Reduced Precision Data Assimilation

    NASA Astrophysics Data System (ADS)

    Hatfield, Samuel; Düben, Peter; Palmer, Tim

    2017-04-01

    We present a new approach for improving the efficiency of data assimilation, by trading numerical precision for computational speed. Future supercomputers will allow a greater choice of precision, so that models can use a level of precision that is commensurate with the model uncertainty. Previous studies have already indicated that the quality of climate and weather forecasts is not significantly degraded when using a precision less than double precision [1,2], but so far these studies have not considered data assimilation. Data assimilation is inherently uncertain due to the use of relatively long assimilation windows, noisy observations and imperfect models. Thus, the larger rounding errors incurred from reducing precision may be within the tolerance of the system. Lower precision arithmetic is cheaper, and so by reducing precision in ensemble data assimilation, we can redistribute computational resources towards, for example, a larger ensemble size. Because larger ensembles provide a better estimate of the underlying distribution and are less reliant on covariance inflation and localisation, lowering precision could actually allow us to improve the accuracy of weather forecasts. We will present results on how lowering numerical precision affects the performance of an ensemble data assimilation system, consisting of the Lorenz '96 toy atmospheric model and the ensemble square root filter. We run the system at half precision (using an emulation tool), and compare the results with simulations at single and double precision. We estimate that half precision assimilation with a larger ensemble can reduce assimilation error by 30%, with respect to double precision assimilation with a smaller ensemble, for no extra computational cost. This results in around half a day extra of skillful weather forecasts, if the error-doubling characteristics of the Lorenz '96 model are mapped to those of the real atmosphere. Additionally, we investigate the sensitivity of these results
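
    The precision-emulation idea can be sketched on the Lorenz '96 model alone (the ensemble square root filter and the assimilation cycle of the study are not reproduced here): the same trajectory is integrated once in double precision and once with the state rounded to IEEE half precision after every step, and the divergence between the two runs is monitored. The forcing F = 8, the RK4 integrator and the step size are conventional choices for this toy model, not necessarily the paper's exact configuration.

        # Lorenz '96 integrated in double precision vs. with states rounded to
        # float16 after every step, emulating reduced-precision arithmetic. This
        # sketches only the precision-emulation idea, not the full assimilation.
        import numpy as np

        def l96_tendency(x, F=8.0):
            # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

        def step_rk4(x, dt=0.01):
            k1 = l96_tendency(x)
            k2 = l96_tendency(x + 0.5 * dt * k1)
            k3 = l96_tendency(x + 0.5 * dt * k2)
            k4 = l96_tendency(x + dt * k3)
            return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

        rng = np.random.default_rng(0)
        x64 = 8.0 + rng.standard_normal(40)     # 40 variables, perturbed around F
        x16 = x64.copy()

        for _ in range(500):                    # ~5 model time units
            x64 = step_rk4(x64)
            x16 = step_rk4(x16).astype(np.float16).astype(np.float64)  # emulate half precision

        rmse = np.sqrt(np.mean((x64 - x16) ** 2))
        print(f"RMS divergence after 500 steps: {rmse:.3f}")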

  13. ESP Toolbox: A Computational Framework for Precise, Scale-Independent Analysis of Bulk Elastic and Seismic Properties

    NASA Astrophysics Data System (ADS)

    Johnson, S. E.; Vel, S. S.; Cook, A. C.; Song, W. J.; Gerbi, C. C.; Okaya, D. A.

    2014-12-01

    Owing to the abundance of highly anisotropic minerals in the crust, the Voigt and Reuss bounds on the seismic velocities can be separated by more than 1 km/s. These bounds are determined by modal mineralogy and crystallographic preferred orientations (CPO) of the constituent minerals, but where the true velocities lie between these bounds is determined by other fabric parameters such as the shapes, shape-preferred orientations, and spatial arrangements of grains. Thus, the calculation of accurate bulk stiffness relies on explicitly treating the grain-scale heterogeneity, and the same principle applies at larger scales, for example calculating accurate bulk stiffness for a crustal volume with varying proportions and distributions of folds or shear zones. We have developed stand-alone GUI software - ESP Toolbox - for the calculation of 3D bulk elastic and seismic properties of heterogeneous and polycrystalline materials using image or EBSD data. The GUI includes a number of different homogenization techniques, including Voigt, Reuss, Hill, geometric mean, self-consistent and asymptotic expansion homogenization (AEH) methods. The AEH method, which uses a finite element mesh, is most accurate since it explicitly accounts for elastic interactions of constituent minerals/phases. The user need only specify the microstructure and material properties of the minerals/phases. We use the Toolbox to explore changes in bulk elasticity and related seismic anisotropy caused by specific variables, including: (a) the quartz alpha-beta phase change in rocks with varying proportions of quartz, (b) changes in modal mineralogy and CPO fabric that occur during progressive deformation and metamorphism, and (c) shear zones of varying thickness, abundance and geometry in continental crust. The Toolbox allows rapid sensitivity analysis around these and other variables, and the resulting bulk stiffness matrices can be used to populate volumes for synthetic wave propagation experiments that
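
    The simplest of the homogenization schemes listed above (Voigt, Reuss, and their Hill average) reduce to weighted means over the phase moduli. A minimal two-phase sketch follows; the moduli, densities and volume fractions are illustrative round numbers for a generic "soft" and "stiff" phase, and the Toolbox's full tensor/CPO treatment (self-consistent, AEH) is not reproduced here.

        # Voigt, Reuss and Hill averages for a two-phase aggregate, the simplest
        # homogenization bounds mentioned above. All input numbers are illustrative.
        import numpy as np

        f   = np.array([0.6, 0.4])          # volume fractions (must sum to 1)
        K   = np.array([37.0, 130.0])       # bulk moduli, GPa (illustrative)
        G   = np.array([44.0, 80.0])        # shear moduli, GPa (illustrative)
        rho = np.array([2650.0, 3300.0])    # densities, kg/m^3 (illustrative)

        K_voigt, G_voigt = np.sum(f * K), np.sum(f * G)
        K_reuss, G_reuss = 1.0 / np.sum(f / K), 1.0 / np.sum(f / G)
        K_hill,  G_hill  = 0.5 * (K_voigt + K_reuss), 0.5 * (G_voigt + G_reuss)
        rho_bulk = np.sum(f * rho)

        def vp(K_gpa, G_gpa, rho_si):
            # P-wave velocity in km/s from moduli in GPa and density in kg/m^3
            return np.sqrt((K_gpa + 4.0 / 3.0 * G_gpa) * 1e9 / rho_si) / 1e3

        print(f"Vp bounds: Reuss {vp(K_reuss, G_reuss, rho_bulk):.2f}  "
              f"Hill {vp(K_hill, G_hill, rho_bulk):.2f}  "
              f"Voigt {vp(K_voigt, G_voigt, rho_bulk):.2f} km/s")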

  14. Accurate LC Peak Boundary Detection for 16 O/ 18 O Labeled LC-MS Data

    PubMed Central

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang (SJ); Zhang, Jianqiu (Michelle)

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignment through model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements. PMID:24115998

  15. Search Filter Precision Can Be Improved By NOTing Out Irrelevant Content

    PubMed Central

    Wilczynski, Nancy L.; McKibbon, K. Ann; Haynes, R. Brian

    2011-01-01

    Background: Most methodologic search filters developed for use in large electronic databases such as MEDLINE have low precision. One method that has been proposed but not tested for improving precision is NOTing out irrelevant content. Objective: To determine if search filter precision can be improved by NOTing out the text words and index terms assigned to those articles that are retrieved but are off-target. Design: Analytic survey. Methods: NOTing out unique terms in off-target articles and testing search filter performance in the Clinical Hedges Database. Main Outcome Measures: Sensitivity, specificity, precision and number needed to read (NNR). Results: For all purpose categories (diagnosis, prognosis and etiology) except treatment and for all databases (MEDLINE, EMBASE, CINAHL and PsycINFO), constructing search filters that NOTed out irrelevant content resulted in substantive improvements in NNR (over four-fold for some purpose categories and databases). Conclusion: Search filter precision can be improved by NOTing out irrelevant content. PMID:22195215
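
    The outcome measures above are simple ratios over the retrieval counts, with NNR = 1/precision. The sketch below uses made-up counts purely to illustrate the effect described (NOTing out terms removes many false positives at a small cost in true positives); the numbers are not from the Clinical Hedges Database.

        # Sensitivity, specificity, precision and number needed to read (NNR)
        # from retrieval counts. The counts are illustrative placeholders only.
        def retrieval_metrics(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)        # relevant articles retrieved
            specificity = tn / (tn + fp)        # irrelevant articles excluded
            precision   = tp / (tp + fp)        # retrieved articles that are relevant
            return sensitivity, specificity, precision, 1.0 / precision   # NNR

        before = retrieval_metrics(tp=900, fp=8100, fn=100, tn=41000)
        after  = retrieval_metrics(tp=855, fp=1845, fn=145, tn=47255)
        for label, m in (("original filter", before), ("with NOTed-out terms", after)):
            print(f"{label}: sens={m[0]:.2f} spec={m[1]:.2f} prec={m[2]:.2f} NNR={m[3]:.1f}")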

  16. Accurate positioning based on acoustic and optical sensors

    NASA Astrophysics Data System (ADS)

    Cai, Kerong; Deng, Jiahao; Guo, Hualing

    2009-11-01

    The unattended laser target designator (ULTD) was designed to partly take the place of conventional LTDs for accurate positioning and laser marking. After analyzing the precision, accuracy and error sources of the acoustic sensor array, the requirements on the laser generator, and the image analysis and tracking technology, the major system modules were determined. The target's classification, velocity and position can be measured by the sensors, and a coded laser beam is then emitted intelligently to mark the optimal position at the optimal time. The conclusions show that the ULTD can not only avoid security threats, be deployed massively, and accomplish battle damage assessment (BDA), but is also fit for information-based warfare.

  17. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard

    2007-01-19

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system is presented.

  18. Fast and accurate edge orientation processing during object manipulation

    PubMed Central

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  19. Accurate modeling of the hose instability in plasma wakefield accelerators

    DOE PAGES

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; ...

    2018-05-20

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. Lastly, it paves the road for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  20. Accurate modeling of the hose instability in plasma wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; Martinez de la Ossa, A.; Osterhoff, J.; Esarey, E.; Leemans, W. P.

    2018-05-01

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the road for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  1. Accurate modeling of the hose instability in plasma wakefield accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. Lastly, it paves the road for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  2. An eclipsing-binary distance to the Large Magellanic Cloud accurate to two per cent.

    PubMed

    Pietrzyński, G; Graczyk, D; Gieren, W; Thompson, I B; Pilecki, B; Udalski, A; Soszyński, I; Kozłowski, S; Konorski, P; Suchomska, K; Bono, G; Moroni, P G Prada; Villanova, S; Nardetto, N; Bresolin, F; Kudritzki, R P; Storm, J; Gallenne, A; Smolec, R; Minniti, D; Kubiak, M; Szymański, M K; Poleski, R; Wyrzykowski, L; Ulaczyk, K; Pietrukowicz, P; Górski, M; Karczmarek, P

    2013-03-07

    In the era of precision cosmology, it is essential to determine the Hubble constant to an accuracy of three per cent or better. At present, its uncertainty is dominated by the uncertainty in the distance to the Large Magellanic Cloud (LMC), which, being our second-closest galaxy, serves as the best anchor point for the cosmic distance scale. Observations of eclipsing binaries offer a unique opportunity to measure stellar parameters and distances precisely and accurately. The eclipsing-binary method was previously applied to the LMC, but the accuracy of the distance results was lessened by the need to model the bright, early-type systems used in those studies. Here we report determinations of the distances to eight long-period, late-type eclipsing systems in the LMC, composed of cool, giant stars. For these systems, we can accurately measure both the linear and the angular sizes of their components and avoid the most important problems related to the hot, early-type systems. The LMC distance that we derive from these systems (49.97 ± 0.19 (statistical) ± 1.11 (systematic) kiloparsecs) is accurate to 2.2 per cent and provides a firm base for a 3-per-cent determination of the Hubble constant, with prospects for improvement to 2 per cent in the future.

  3. Electroweak precision data and gravitino dark matter

    NASA Astrophysics Data System (ADS)

    Heinemeyer, S.

    2007-11-01

    Electroweak precision measurements can provide indirect information about the possible scale of supersymmetry already at the present level of accuracy. We review present-day sensitivities of precision data in mSUGRA-type models with the gravitino as the lightest supersymmetric particle (LSP). The χ2 fit is based on MW, sin2θeff, (g-2)μ, BR(b → sγ) and the lightest MSSM Higgs boson mass, Mh. We find indications for relatively light soft supersymmetry-breaking masses, offering good prospects for the LHC and the ILC, and in some cases also for the Tevatron.

  4. Design and control of a macro-micro robot for precise force applications

    NASA Technical Reports Server (NTRS)

    Wang, Yulun; Mangaser, Amante; Laby, Keith; Jordan, Steve; Wilson, Jeff

    1993-01-01

    Creating a robot which can delicately interact with its environment has been the goal of much research. Primarily two difficulties have made this goal hard to attain. The execution of control strategies which enable precise force manipulations is difficult to implement in real time because such algorithms have been too computationally complex for available controllers. Also, a robot mechanism which can quickly and precisely execute a force command is difficult to design. Actuation joints must be sufficiently stiff, frictionless, and lightweight so that desired torques can be accurately applied. This paper describes a robotic system which is capable of delicate manipulations. A modular high-performance multiprocessor control system was designed to provide sufficient compute power for executing advanced control methods. An 8-degree-of-freedom macro-micro mechanism was constructed to enable accurate tip forces. Control algorithms based on the impedance control method were derived, coded, and load balanced for maximum execution speed on the multiprocessor system. Delicate force tasks such as polishing, finishing, cleaning, and deburring are the target applications of the robot.

  5. Precision cosmological parameter estimation

    NASA Astrophysics Data System (ADS)

    Fendt, William Ashton, Jr.

    2009-09-01

    Experimental efforts of the last few decades have brought a golden age to mankind's endeavor to understand the physical properties of the Universe throughout its history. Recent measurements of the cosmic microwave background (CMB) provide strong confirmation of the standard big bang paradigm, as well as introducing new mysteries yet unexplained by current physical models. In the following decades, even more ambitious scientific endeavours will begin to shed light on the new physics by looking at the detailed structure of the Universe both at very early and recent times. Modern data have allowed us to begin to test inflationary models of the early Universe, and the near future will bring higher precision data and much stronger tests. Cracking the codes hidden in these cosmological observables is a difficult and computationally intensive problem. The challenges will continue to increase as future experiments bring larger and more precise data sets. Because of the complexity of the problem, we are forced to use approximate techniques and make simplifying assumptions to ease the computational workload. While this has been reasonably sufficient until now, hints of the limitations of our techniques have begun to come to light. For example, the likelihood approximation used for the analysis of CMB data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite was shown to have shortfalls, leading to pre-emptive conclusions being drawn about current cosmological theories. Also, it can be shown that an approximate method used by all current analysis codes to describe the recombination history of the Universe will not be sufficiently accurate for future experiments. With a new CMB satellite scheduled for launch in the coming months, it is vital that we develop techniques to improve the analysis of cosmological data. This work develops a novel technique that both avoids the use of approximate computational codes and allows the application of new, more precise analysis

  6. Dynamic and accurate assessment of acetaminophen-induced hepatotoxicity by integrated photoacoustic imaging and mechanistic biomarkers in vivo.

    PubMed

    Brillant, Nathalie; Elmasry, Mohamed; Burton, Neal C; Rodriguez, Josep Monne; Sharkey, Jack W; Fenwick, Stephen; Poptani, Harish; Kitteringham, Neil R; Goldring, Christopher E; Kipar, Anja; Park, B Kevin; Antoine, Daniel J

    2017-10-01

    The prediction and understanding of acetaminophen (APAP)-induced liver injury (APAP-ILI) and the response to therapeutic interventions is complex. This is due in part to sensitivity and specificity limitations of currently used assessment techniques. Here we sought to determine the utility of integrating translational non-invasive photoacoustic imaging of liver function with mechanistic circulating biomarkers of hepatotoxicity and with histological assessment to facilitate the more accurate and precise characterization of APAP-ILI and the efficacy of therapeutic intervention. Perturbation of liver function and cellular viability was assessed in C57BL/6J male mice by Indocyanine green (ICG) clearance (Multispectral Optoacoustic Tomography (MSOT)) and by measurement of mechanistic (miR-122, HMGB1) and established (ALT, bilirubin) circulating biomarkers in response to acetaminophen and its treatment with acetylcysteine (NAC) in vivo. We utilised a 60% partial hepatectomy model, a situation of defined hepatic functional mass loss, against which acetaminophen-induced changes could be compared. Integration of these mechanistic markers correlated with histological features of APAP hepatotoxicity in a time-dependent manner. They accurately reflected the onset of and recovery from hepatotoxicity compared to traditional biomarkers and also reported the efficacy of NAC with high sensitivity. ICG clearance kinetics correlated with histological scores for acute liver damage for APAP (i.e. 3h timepoint; r=0.90, P<0.0001) and with elevations in both of the mechanistic biomarkers, miR-122 (e.g. 6h timepoint; r=0.70, P=0.005) and HMGB1 (e.g. 6h timepoint; r=0.56, P=0.04). For the first time we report the utility of this non-invasive longitudinal imaging approach to provide direct visualisation of liver function coupled with mechanistic biomarkers, in the same animal, allowing the investigation of the toxicological and pharmacological aspects of APAP-ILI and hepatic regeneration. Copyright © 2017

  7. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
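
    The quantity being assessed can be sketched in one dimension with simple stand-ins: an "observed" signal is formed with a true transfer function, deconvolved with a slightly mis-specified model transfer function (mimicking an imperfect estimate of the atmospheric input parameter), and the photometric precision is taken as the RMS relative intensity error. The Gaussian transfer functions and all numbers below are placeholders, not the actual speckle transfer function model of the study.

        # 1-D sketch of deconvolution with a mis-specified model transfer function
        # and the resulting photometric error. Gaussian MTFs stand in for the real
        # speckle transfer functions; widths and the source signal are placeholders.
        import numpy as np

        n = 512
        x = np.arange(n)
        source = 1.0 + 0.05 * np.sin(2 * np.pi * x / 32) + 0.02 * np.sin(2 * np.pi * x / 9)

        def gaussian_mtf(n, width):
            f = np.fft.fftfreq(n)
            return np.exp(-0.5 * (f / width) ** 2)

        true_mtf  = gaussian_mtf(n, 0.05)    # "real" atmospheric/AO transfer function
        model_mtf = gaussian_mtf(n, 0.055)   # model used for reconstruction (slightly off)

        observed = np.fft.ifft(np.fft.fft(source) * true_mtf).real
        reconstructed = np.fft.ifft(np.fft.fft(observed) / model_mtf).real

        photometric_error = np.sqrt(np.mean(((reconstructed - source) / source) ** 2))
        print(f"RMS photometric error ~{100 * photometric_error:.2f}%")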

  8. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
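
    The MVFOSM step itself is compact: the variance of the transport estimate is approximated from first-order partial derivatives evaluated at the input means, and each input's share of the total variance ranks the inputs. The sketch below applies this to a generic Meyer-Peter and Mueller-type excess-shear-stress relation as a stand-in (not necessarily one of the two relations used in the study); the means and coefficients of variation are illustrative, not the Costa Rica field data.

        # Mean Value First Order Second Moment (MVFOSM) sketch for a generic
        # excess-shear-stress bed-load relation. All input statistics are illustrative.
        import numpy as np

        R, g = 1.65, 9.81                        # submerged specific gravity, gravity

        def qb(D, S, h):
            """Volumetric bed-load transport per unit width (m^2/s), MPM-type."""
            tau_star = h * S / (R * D)           # Shields stress from depth-slope product
            excess = max(tau_star - 0.047, 0.0)
            return 8.0 * excess ** 1.5 * np.sqrt(R * g * D ** 3)

        means = {"D": 0.05, "S": 0.01, "h": 1.0}     # grain size (m), slope (-), depth (m)
        cvs   = {"D": 0.30, "S": 0.10, "h": 0.10}    # assumed coefficients of variation

        q0 = qb(**means)
        contrib = {}
        for name, mu in means.items():
            eps = 1e-4 * mu                          # central-difference step
            hi, lo = dict(means), dict(means)
            hi[name], lo[name] = mu + eps, mu - eps
            dq_dx = (qb(**hi) - qb(**lo)) / (2 * eps)
            contrib[name] = (dq_dx * cvs[name] * mu) ** 2   # variance contribution

        total_var = sum(contrib.values())
        print(f"qb at the means: {q0:.2e} m^2/s, CV of estimate ~{np.sqrt(total_var)/q0:.2f}")
        for name in sorted(contrib, key=contrib.get, reverse=True):
            print(f"  {name}: {100 * contrib[name] / total_var:.0f}% of variance")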

  9. Improving precise positioning of surgical robotic instruments by a three-side-view presentation system on telesurgery.

    PubMed

    Hori, Kenta; Kuroda, Tomohiro; Oyama, Hiroshi; Ozaki, Yasuhiko; Nakamura, Takehiko; Takahashi, Takashi

    2005-12-01

    For faultless collaboration among the surgeon, surgical staffs, and surgical robots in telesurgery, communication must include environmental information of the remote operating room, such as behavior of robots and staffs, vital information of a patient, named supporting information, in addition to view of surgical field. "Surgical Cockpit System, " which is a telesurgery support system that has been developed by the authors, is mainly focused on supporting information exchange between remote sites. Live video presentation is important technology for Surgical Cockpit System. Visualization method to give precise location/posture of surgical instruments is indispensable for accurate control and faultless operation. In this paper, the authors propose three-side-view presentation method for precise location/posture control of surgical instruments in telesurgery. The experimental results show that the proposed method improved accurate positioning of a telemanipulator.

  10. Precision measurement of transition matrix elements via light shift cancellation.

    PubMed

    Herold, C D; Vaidya, V D; Li, X; Rolston, S L; Porto, J V; Safronova, M S

    2012-12-14

    We present a method for accurate determination of atomic transition matrix elements at the 10(-3) level. Measurements of the ac Stark (light) shift around "magic-zero" wavelengths, where the light shift vanishes, provide precise constraints on the matrix elements. We make the first measurement of the 5s - 6p matrix elements in rubidium by measuring the light shift around the 421 and 423 nm zeros through diffraction of a condensate off a sequence of standing wave pulses. In conjunction with existing theoretical and experimental data, we find 0.3235(9)ea(0) and 0.5230(8)ea(0) for the 5s - 6p(1/2) and 5s - 6p(3/2) elements, respectively, an order of magnitude more accurate than the best theoretical values. This technique can provide needed, accurate matrix elements for many atoms, including those used in atomic clocks, tests of fundamental symmetries, and quantum information.

  11. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments.

    PubMed

    Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J

    2015-04-01

    To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments

    PubMed Central

    Linder, Suzanne K.; Kamath, Geetanjali R.; Pratt, Gregory F.; Saraykar, Smita S.; Volk, Robert J.

    2015-01-01

    Objective To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a healthcare decision-making instrument commonly used in clinical settings. Study Design & Setting We searched the literature using two methods: 1) keyword searching using variations of “control preferences scale” and 2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Results Keyword searches in bibliographic databases yielded high average precision (90%), but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45–54%), but precision ranged from 35–75% with Scopus being the most precise. Conclusion Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time and resources should dictate the combination of which methods and databases are used. PMID:25554521

  13. Atomic precision etch using a low-electron temperature plasma

    NASA Astrophysics Data System (ADS)

    Dorf, L.; Wang, J.-C.; Rauf, S.; Zhang, Y.; Agarwal, A.; Kenney, J.; Ramaswamy, K.; Collins, K.

    2016-03-01

    Sub-nm precision is increasingly being required of many critical plasma etching processes in the semiconductor industry. Accurate control over ion energy and ion/radical composition is needed during plasma processing to meet these stringent requirements. Described in this work is a new plasma etch system which has been designed with the requirements of atomic precision plasma processing in mind. In this system, an electron sheet beam parallel to the substrate surface produces a plasma with an order of magnitude lower electron temperature Te (~ 0.3 eV) and ion energy Ei (< 3 eV without applied bias) compared to conventional radio-frequency (RF) plasma technologies. Electron beam plasmas are characterized by higher ion-to-radical fraction compared to RF plasmas, so a separate radical source is used to provide accurate control over relative ion and radical concentrations. Another important element in this plasma system is low frequency RF bias capability which allows control of ion energy in the 2-50 eV range. Presented in this work are the results of etching of a variety of materials and structures performed in this system. In addition to high selectivity and low controllable etch rate, an important requirement of atomic precision etch processes is no (or minimal) damage to the remaining material surface. It has traditionally not been possible to avoid damage in RF plasma processing systems, even during atomic layer etch. The experiments for Si etch in Cl2 based plasmas in the aforementioned etch system show that damage can be minimized if the ion energy is kept below 10 eV. Layer-by-layer etch of Si is also demonstrated in this etch system using electrical and gas pulsing.

  14. StatSTEM: An efficient approach for accurate and precise model-based quantification of atomic resolution electron microscopy images.

    PubMed

    De Backer, A; van den Bos, K H W; Van den Broek, W; Sijbers, J; Van Aert, S

    2016-12-01

    An efficient model-based estimation algorithm is introduced to quantify the atomic column positions and intensities from atomic resolution (scanning) transmission electron microscopy ((S)TEM) images. This algorithm uses the least squares estimator on image segments containing individual columns fully accounting for overlap between neighbouring columns, enabling the analysis of a large field of view. For this algorithm, the accuracy and precision with which measurements for the atomic column positions and scattering cross-sections from annular dark field (ADF) STEM images can be estimated, has been investigated. The highest attainable precision is reached even for low dose images. Furthermore, the advantages of the model-based approach taking into account overlap between neighbouring columns are highlighted. This is done for the estimation of the distance between two neighbouring columns as a function of their distance and for the estimation of the scattering cross-section which is compared to the integrated intensity from a Voronoi cell. To provide end-users this well-established quantification method, a user friendly program, StatSTEM, is developed which is freely available under a GNU public license. Copyright © 2016 Elsevier B.V. All rights reserved.
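
    The model-based idea, stripped of the overlap handling that is the method's main contribution, amounts to a least-squares fit of a peaked model to an image segment containing one column. The sketch below fits a 2-D Gaussian to a simulated column with scipy and takes the fitted volume as the scattering cross-section estimate; the shapes, noise levels and parameters are placeholders, and this is not the StatSTEM code.

        # Least-squares fit of a 2-D Gaussian to one simulated atomic column, as a
        # sketch of model-based position / scattering-cross-section estimation.
        import numpy as np
        from scipy.optimize import least_squares

        ny = nx = 21
        yy, xx = np.mgrid[0:ny, 0:nx]

        def column_model(p, xx, yy):
            x0, y0, amp, width, bg = p
            return bg + amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * width ** 2))

        # Simulated noisy column (true position 10.3, 9.7; noise level is arbitrary)
        rng = np.random.default_rng(1)
        truth = np.array([10.3, 9.7, 120.0, 2.5, 10.0])
        data = column_model(truth, xx, yy) + rng.normal(0, 3.0, size=(ny, nx))

        def residuals(p):
            return (column_model(p, xx, yy) - data).ravel()

        fit = least_squares(residuals, x0=[10.0, 10.0, 100.0, 3.0, 0.0])
        x0, y0, amp, width, bg = fit.x
        cross_section = 2 * np.pi * amp * width ** 2     # fitted volume above background
        print(f"position ({x0:.2f}, {y0:.2f}) px, scattering cross-section ~{cross_section:.0f}")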

  15. High-precision relative position and attitude measurement for on-orbit maintenance of spacecraft

    NASA Astrophysics Data System (ADS)

    Zhu, Bing; Chen, Feng; Li, Dongdong; Wang, Ying

    2018-02-01

    To realize long-term on-orbit operation of spacecraft such as satellites and space stations, the life of the spacecraft can be extended through on-orbit servicing and maintenance, in addition to long-life design of its devices. Precise and detailed maintenance of key components is therefore necessary. In this paper, a high-precision relative position and attitude measurement method used in the maintenance of key components is presented. The method mainly considers the design of the passive cooperative marker, the light-emitting device and the high-resolution camera in the presence of spatial stray light and noise. Using a series of algorithms, such as background elimination, feature extraction, and position and attitude calculation, high-precision relative pose parameters are obtained as the input to the control system between the key operational parts and the maintenance equipment. The simulation results show that the algorithm is accurate and effective, satisfying the requirements of precision on-orbit operations.

  16. Precise Truss Assembly Using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, W. R.; Correll, Nikolaus

    2014-01-01

    Hardware and software design and system integration for an intelligent precision jigging robot (IPJR), which allows high precision assembly using commodity parts and low-precision bonding, is described. Preliminary 2D experiments that are motivated by the problem of assembling space telescope optical benches and very large manipulators on orbit using inexpensive, stock hardware and low-precision welding are also described. An IPJR is a robot that acts as the precise "jigging", holding parts of a local structure assembly site in place, while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (for this prototype, a human using only low precision tools) to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. The analysis of the assembly error and the results of building a square structure and a ring structure are discussed. Options for future work to extend the IPJR paradigm to building 3D structures at micron precision are also summarized.

  17. Precise Truss Assembly using Commodity Parts and Low Precision Welding

    NASA Technical Reports Server (NTRS)

    Komendera, Erik; Reishus, Dustin; Dorsey, John T.; Doggett, William R.; Correll, Nikolaus

    2013-01-01

    We describe an Intelligent Precision Jigging Robot (IPJR), which allows high precision assembly of commodity parts with low-precision bonding. We present preliminary experiments in 2D that are motivated by the problem of assembling a space telescope optical bench on orbit using inexpensive, stock hardware and low-precision welding. An IPJR is a robot that acts as the precise "jigging", holding parts of a local assembly site in place while an external low precision assembly agent cuts and welds members. The prototype presented in this paper allows an assembly agent (in this case, a human using only low precision tools), to assemble a 2D truss made of wooden dowels to a precision on the order of millimeters over a span on the order of meters. We report the challenges of designing the IPJR hardware and software, analyze the error in assembly, document the test results over several experiments including a large-scale ring structure, and describe future work to implement the IPJR in 3D and with micron precision.

  18. SU-F-I-56: High-Precision Gamma-Ray Analysis of Medical Isotopes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chopra, N; Chillery, T; Chowdhury, P

    2016-06-15

    Purpose: Advanced, time-resolved, Compton-suppressed gamma-ray spectroscopy with germanium detectors is implemented for assaying medical isotopes to study the radioactive decay process, leading to a more accurate appraisal of the received dose and to better treatment planning. Lowell's Array for Radiological Assay (LARA), a detector array comprised of six Compton-suppressed high-purity germanium detectors, is currently under development at UMass-Lowell; it combines Compton suppression with time-and-angle correlations to allow highly efficient and highly sensitive measurements. Methods: Two isotopes produced at the Brookhaven Linac Isotope Producer (BLIP) were investigated. {sup 82}Sr is the parent isotope for producing {sup 82}Rb, which is often used in cardiac PET. The {sup 82}Sr gamma-ray spectrum is dominated by the 511 keV photons from positron annihilation, which prevent precise measurement of co-produced contaminant isotopes. A second project investigated the production of platinum isotopes. Natural platinum was bombarded with protons from 53 MeV to 200 MeV. The resulting spectrum was complicated by the large number of stable platinum isotopes in the target and the variety of open reaction channels (p,xn), (p,pxn), (p,axn). Results: By using face-to-face NaI(Tl) counters at 90 degrees to the Compton-suppressed germaniums to detect the 511 keV photons, a much cleaner and more sensitive measurement of {sup 85}Sr and other contaminants was obtained. For the platinum target, we identified the production of {sup 188–189–191–195}Pt, {sup 191–192–193–194–195–196}Au and {sup 186–188–189–190–192–194}Ir. For example, at the lower energies (53 and 65 MeV), we measured {sup 191}Pt production cross-sections of 144 mb and 157 mb. Considerable care was needed in following the process of dissolving and diluting the samples to get consistent results. The new LARA array will help us better ascertain the absolute efficiency of the

  19. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well-established concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work done by
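    The record above describes purpose-built datatypes inside COSY INFINITY; as a loose, hedged illustration of what a verified high-precision interval evaluation of an intrinsic function looks like, the sketch below uses mpmath's arbitrary-precision interval arithmetic (the precision setting and the function are arbitrary choices, not tied to the work described).

    ```python
    from mpmath import iv

    iv.prec = 200                      # ~60 significant decimal digits of working precision
    x = iv.mpf('0.1')                  # a rigorous interval enclosure of the decimal 0.1
    enclosure = iv.exp(iv.sin(x))      # interval guaranteed to contain exp(sin(0.1))
    print(enclosure)
    print("width of the enclosure:", enclosure.delta)
    ```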

  20. Precision Spectroscopy of Atomic Hydrogen

    NASA Astrophysics Data System (ADS)

    Hänsch, Theodor W.

    1994-08-01

    The simple hydrogen atom permits unique confrontations between spectroscopic experiment and fundamental theory. The experimental resolution and measurement accuracy continue to improve exponentially. Recent advances include a new measurement of the Lamb shift of the 1S ground state which provides now the most stringent test of QED for an atom and reveals unexpectedly large two-loop binding corrections. The H-D isotope shift of the extremely narrow 1S-2S two-photon resonance is yielding a new value for the structure radius of the deuteron, in agreement with nuclear theory. The Rydberg constant as determined within 3 parts in 10^11 by two independent groups has become the most accurately known of any fundamental constant. Advances in the art of absolute optical frequency measurements will permit still more precise experiments in the near future.

  1. Research on the high-precision non-contact optical detection technology for banknotes

    NASA Astrophysics Data System (ADS)

    Jin, Xiaofeng; Liang, Tiancai; Luo, Pengfeng; Sun, Jianfeng

    2015-09-01

    In this paper, the technology of high-precision laser interferometry is introduced for the optical measurement of banknotes. Taking advantage of the laser's short wavelength and high sensitivity, information about adhesive tape and cavities on banknotes can be checked efficiently. Compared with current measurement devices, including mechanical wheel, infrared, and ultrasonic measurement devices, laser interferometry measurement has higher precision and reliability. This will improve the ability of financial electronic equipment to detect banknote feature information.

  2. Large-scale extraction of accurate drug-disease treatment pairs from biomedical literature for drug repurposing

    PubMed Central

    2013-01-01

    Background A large-scale, highly accurate, machine-understandable drug-disease treatment relationship knowledge base is important for computational approaches to drug repurposing. The large body of published biomedical research articles and clinical case reports available on MEDLINE is a rich source of FDA-approved drug-disease indication as well as drug-repurposing knowledge that is crucial for applying FDA-approved drugs to new diseases. However, much of this information is buried in free text and not captured in any existing databases. The goal of this study is to extract a large number of accurate drug-disease treatment pairs from published literature. Results In this study, we developed a simple but highly accurate pattern-learning approach to extract treatment-specific drug-disease pairs from 20 million biomedical abstracts available on MEDLINE. We extracted a total of 34,305 unique drug-disease treatment pairs, the majority of which are not included in existing structured databases. Our algorithm achieved a precision of 0.904 and a recall of 0.131 in extracting all pairs, and a precision of 0.904 and a recall of 0.842 in extracting frequent pairs. In addition, we have shown that the extracted pairs strongly correlate with both drug target genes and therapeutic classes, and therefore may have high potential in drug discovery. Conclusions We demonstrated that our simple pattern-learning relationship extraction algorithm is able to accurately extract many drug-disease pairs from the free text of biomedical literature that are not captured in structured databases. The large-scale, accurate, machine-understandable drug-disease treatment knowledge base resulting from our study, in combination with pairs from structured databases, will have high potential in computational drug repurposing tasks. PMID:23742147
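    As a toy illustration of pattern-based extraction and the precision/recall bookkeeping quoted above, the sketch below applies a single hand-written pattern to a few invented sentences and scores it against an invented gold set; the pattern, corpus, and gold pairs are all hypothetical and far simpler than the learned patterns used in the study.

    ```python
    import re

    abstracts = [
        "A randomized trial of metformin in the treatment of type 2 diabetes.",
        "Long-term results with imatinib in the treatment of chronic myeloid leukemia.",
        "Aspirin in the treatment of headache: a retrospective review.",
    ]
    # One hand-written pattern standing in for the learned treatment patterns.
    pattern = re.compile(r"(\w+)\s+in the treatment of\s+([\w ]+?)[.:,]")

    extracted = {(m.group(1).lower(), m.group(2).lower())
                 for text in abstracts for m in pattern.finditer(text)}

    gold = {("metformin", "type 2 diabetes"),
            ("imatinib", "chronic myeloid leukemia"),
            ("aspirin", "headache"),
            ("warfarin", "atrial fibrillation")}    # a pair absent from the toy corpus

    true_positives = len(extracted & gold)
    precision = true_positives / len(extracted)
    recall = true_positives / len(gold)
    print(f"precision = {precision:.3f}, recall = {recall:.3f}")
    ```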

  3. Accurate frequency and time dissemination in the optical domain

    NASA Astrophysics Data System (ADS)

    Khabarova, K. Yu; Kalganova, E. S.; Kolachevsky, N. N.

    2018-02-01

    The development of the optical frequency comb technique has enabled a wide use of atomic optical clocks by allowing frequency conversion from the optical to the radio frequency range. Today, the fractional instability of such clocks has reached the record eighteen-digit level, two orders of magnitude better than for cesium fountains representing the primary frequency standard. This is paralleled by the development of techniques for transferring accurate time and optical frequency signals, including fiber links. With this technology, the fractional instability of transferred frequency can be lowered to below 10^-18 with an averaging time of 1000 s for a 1000 km optical link. At a distance of 500 km, a time signal uncertainty of 250 ps has been achieved. Optical links allow comparing optical clocks and creating a synchronized time and frequency standard network at a new level of precision. Prospects for solving new problems arise, including the determination of the gravitational potential, the measurement of the continental Sagnac effect, and precise tests of fundamental theories.

  4. Precision laser automatic tracking system.

    PubMed

    Lucy, R F; Peters, C J; McGann, E J; Lang, K T

    1966-04-01

    A precision laser tracker has been constructed and tested that is capable of tracking a low-acceleration target to an accuracy of about 25 microrad root mean square. In tracking high-acceleration targets, the error is directly proportional to the angular acceleration. For an angular acceleration of 0.6 rad/sec^2, the measured tracking error was about 0.1 mrad. The basic components in this tracker, similar in configuration to a heliostat, are a laser and an image dissector, which are mounted on a stationary frame, and a servocontrolled tracking mirror. The daytime sensitivity of this system is approximately 3 x 10^-10 W/m^2; the ultimate nighttime sensitivity is approximately 3 x 10^-14 W/m^2. Experimental tests were performed to evaluate both the dynamic characteristics of this system and the system sensitivity. Dynamic performance of the system was obtained using a small rocket covered with retroreflective material launched at an acceleration of about 13 g at a point 204 m from the tracker. The daytime sensitivity of the system was checked using an efficient retroreflector mounted on a light aircraft. This aircraft was tracked out to a maximum range of 15 km, which checked the daytime sensitivity of the system measured by other means. The system has also been used to passively track stars and the Echo I satellite. In particular, the system passively tracked a +7.5 magnitude star, and the signal-to-noise ratio in this experiment indicates that it should be possible to track a +12.5 magnitude star.

  5. The double-edged sword of high-precision U-Pb geochronology or be careful what you wish for. (Invited)

    NASA Astrophysics Data System (ADS)

    Bowring, S. A.

    2010-12-01

    Over the past two decades, U-Pb geochronology by ID-TIMS has been refined to achieve internal (analytical) uncertainties on a single grain analysis of ± ~ 0.1-0.2%, and 0.05% or better on weighted mean dates. This level of precision enables unprecedented evaluation of the rates and durations of geological processes, from magma chamber evolution to mass extinctions and recoveries. The increased precision, however, exposes complexity in magmatic/volcanic systems, highlights the importance of corrections related to disequilibrium partitioning of intermediate daughter products, and raises questions as to how best to interpret the complex spectrum of dates characteristic of many volcanic rocks. In addition, the increased precision requires renewed emphasis on the accuracy of U decay constants, the isotopic composition of U, the calibration of isotopic tracers, and the accurate propagation of uncertainties. It is now commonplace in the high precision dating of volcanic ash-beds to analyze 5-20 single grains of zircon in an attempt to resolve the eruption/depositional age. Data sets with dispersion far in excess of analytical uncertainties are interpreted to reflect Pb-loss, inheritance, and protracted crystallization, often supported with zircon chemistry. In most cases, a weighted mean of the youngest reproducible dates is interpreted as the time of eruption/deposition. Crystallization histories of silicic magmatic systems recovered from plutonic rocks may also be protracted, though may not be directly applicable to silicic eruptions; each sample must be evaluated independently. A key to robust interpretations is the integration of high-spatial-resolution zircon trace element geochemistry with high-precision ID-TIMS analyses. The EARTHTIME initiative has focused on many of these issues, and the larger subject of constructing a timeline for earth history using both U-Pb and Ar-Ar chronometers. Despite continuing improvements in both, comparing dates for the same rock
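    The weighted mean dates and excess-scatter checks referred to above follow standard formulas; the hedged sketch below computes a weighted mean, its uncertainty, and the MSWD for a set of invented single-grain dates.

    ```python
    import numpy as np

    dates = np.array([252.94, 252.91, 252.96, 252.90, 253.10])   # single-grain dates (Ma), invented
    sigmas = np.array([0.05, 0.06, 0.05, 0.07, 0.05])            # 1-sigma analytical uncertainties (Ma)

    weights = 1.0 / sigmas**2
    wmean = np.sum(weights * dates) / np.sum(weights)
    wmean_sigma = np.sqrt(1.0 / np.sum(weights))
    mswd = np.sum(weights * (dates - wmean)**2) / (len(dates) - 1)
    print(f"weighted mean = {wmean:.3f} +/- {wmean_sigma:.3f} Ma (1-sigma), MSWD = {mswd:.2f}")
    # An MSWD well above ~1 signals scatter in excess of analytical error (Pb loss,
    # inheritance, protracted crystallization); a common response is to recompute
    # using only the youngest reproducible grains before quoting an eruption age.
    ```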

  6. The impact of targeting repetitive BamHI-W sequences on the sensitivity and precision of EBV DNA quantification.

    PubMed

    Sanosyan, Armen; Fayd'herbe de Maudave, Alexis; Bollore, Karine; Zimmermann, Valérie; Foulongne, Vincent; Van de Perre, Philippe; Tuaillon, Edouard

    2017-01-01

    Viral load monitoring and early Epstein-Barr virus (EBV) DNA detection are essential in routine laboratory testing, especially in preemptive management of Post-transplant Lymphoproliferative Disorder. Targeting the repetitive BamHI-W sequence was shown to increase the sensitivity of EBV DNA quantification, but the variability of BamHI-W reiterations was suggested to be a source of quantification bias. We aimed to assess the extent of variability associated with BamHI-W PCR and its impact on the sensitivity of EBV DNA quantification using the 1st WHO international standard, EBV strains and clinical samples. The repetitive BamHI-W and single-copy LMP2 sequences were amplified by in-house qPCRs, and the BXLF-1 sequence by a commercial assay (EBV R-gene™, BioMerieux). Linearity and limits of detection of the in-house methods were assessed. The impact of repeated versus single target sequences on EBV DNA quantification precision was tested on B95.8 and Raji cell lines, possessing 11 and 7 copies of the BamHI-W sequence, respectively, and on clinical samples. BamHI-W qPCR demonstrated a lower limit of detection compared to LMP2 qPCR (2.33 log10 versus 3.08 log10 IU/mL; P = 0.0002). BamHI-W qPCR underestimated the EBV DNA load on the Raji strain, which contains fewer BamHI-W copies than the WHO standard derived from the B95.8 EBV strain (mean bias: -0.21 log10; 95% CI, -0.54 to 0.12). Comparison of BamHI-W qPCR versus LMP2 and BXLF-1 qPCR showed an acceptable variability between EBV DNA levels in clinical samples, with the mean bias being within 0.5 log10 IU/mL EBV DNA, whereas a better quantitative concordance was observed between the LMP2 and BXLF-1 assays. Targeting BamHI-W resulted in higher sensitivity compared with LMP2, but the variable reiteration of the BamHI-W segment is associated with higher quantification variability. BamHI-W can be considered for clinical and therapeutic monitoring to detect EBV DNA early and to follow dynamic changes in viral load.

  7. The impact of targeting repetitive BamHI-W sequences on the sensitivity and precision of EBV DNA quantification

    PubMed Central

    Fayd’herbe de Maudave, Alexis; Bollore, Karine; Zimmermann, Valérie; Foulongne, Vincent; Van de Perre, Philippe; Tuaillon, Edouard

    2017-01-01

    Background Viral load monitoring and early Epstein-Barr virus (EBV) DNA detection are essential in routine laboratory testing, especially in preemptive management of Post-transplant Lymphoproliferative Disorder. Targeting the repetitive BamHI-W sequence was shown to increase the sensitivity of EBV DNA quantification, but the variability of BamHI-W reiterations was suggested to be a source of quantification bias. We aimed to assess the extent of variability associated with BamHI-W PCR and its impact on the sensitivity of EBV DNA quantification using the 1st WHO international standard, EBV strains and clinical samples. Methods The repetitive BamHI-W and single-copy LMP2 sequences were amplified by in-house qPCRs, and the BXLF-1 sequence by a commercial assay (EBV R-gene™, BioMerieux). Linearity and limits of detection of the in-house methods were assessed. The impact of repeated versus single target sequences on EBV DNA quantification precision was tested on B95.8 and Raji cell lines, possessing 11 and 7 copies of the BamHI-W sequence, respectively, and on clinical samples. Results BamHI-W qPCR demonstrated a lower limit of detection compared to LMP2 qPCR (2.33 log10 versus 3.08 log10 IU/mL; P = 0.0002). BamHI-W qPCR underestimated the EBV DNA load on the Raji strain, which contains fewer BamHI-W copies than the WHO standard derived from the B95.8 EBV strain (mean bias: -0.21 log10; 95% CI, -0.54 to 0.12). Comparison of BamHI-W qPCR versus LMP2 and BXLF-1 qPCR showed an acceptable variability between EBV DNA levels in clinical samples, with the mean bias being within 0.5 log10 IU/mL EBV DNA, whereas a better quantitative concordance was observed between the LMP2 and BXLF-1 assays. Conclusions Targeting BamHI-W resulted in higher sensitivity compared with LMP2, but the variable reiteration of the BamHI-W segment is associated with higher quantification variability. BamHI-W can be considered for clinical and therapeutic monitoring to detect EBV DNA early and to follow dynamic changes in viral load.
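    As a generic, hedged illustration of the quantification arithmetic behind such comparisons (not the authors' assay or calibration), the sketch below converts invented Cq values to IU/mL via a hypothetical standard curve and expresses the difference between two targets as a log10 bias.

    ```python
    import numpy as np

    # Hypothetical standard curve: Cq = intercept + slope * log10(concentration in IU/mL);
    # a slope near -3.32 corresponds to ~100% PCR efficiency.
    slope, intercept = -3.32, 38.0

    def iu_per_ml(cq):
        return 10.0 ** ((cq - intercept) / slope)

    cq_bamhiw, cq_lmp2 = 25.5, 24.8              # invented Cq values for one sample
    load_w, load_l = iu_per_ml(cq_bamhiw), iu_per_ml(cq_lmp2)
    bias = np.log10(load_w) - np.log10(load_l)
    print(f"BamHI-W: {load_w:.0f} IU/mL, LMP2: {load_l:.0f} IU/mL, bias = {bias:+.2f} log10")
    # A strain carrying fewer BamHI-W repeats than the B95.8-derived WHO standard is
    # expected to show a negative bias for the BamHI-W target, as reported for Raji.
    ```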

  8. Highly Accurate and Precise Infrared Transition Frequencies of the H_3^+ Cation

    NASA Astrophysics Data System (ADS)

    Perry, Adam J.; Markus, Charles R.; Hodges, James N.; Kocheril, G. Stephen; McCall, Benjamin J.

    2016-06-01

    Calculation of ab initio potential energy surfaces for molecules to high accuracy is only manageable for a handful of molecular systems. Among them is the simplest polyatomic molecule, the H_3^+ cation. In order to achieve a high degree of accuracy (<1 wn), corrections must be made to the traditional Born-Oppenheimer approximation that take into account not only adiabatic and non-adiabatic couplings but quantum electrodynamic corrections as well. For the lowest rovibrational levels the agreement between theory and experiment is approaching 0.001 wn, whereas for higher levels the agreement is on the order of 0.01-0.1 wn, values which closely rival the uncertainties of the experimental data. As method development for calculating these various corrections progresses, it becomes necessary for the uncertainties on the experimental data to be improved in order to properly benchmark the calculations. Previously we have measured 20 rovibrational transitions of H_3^+ with MHz-level precision, all of which have arisen from low lying rotational levels. Here we present new measurements of rovibrational transitions arising from higher rotational and vibrational levels. These transitions not only allow for probing higher energies on the potential energy surface, but through the use of combination differences, will ultimately lead to prediction of the "forbidden" rotational transitions with MHz-level accuracy. L.G. Diniz, J.R. Mohallem, A. Alijah, M. Pavanello, L. Adamowicz, O.L. Polyansky, J. Tennyson Phys. Rev. A (2013), 88, 032506 O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R.I. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky, A.G. Császár Phil. Trans. R. Soc. A (2012), 370, 5014 J.N. Hodges, A.J. Perry, P.A. Jenkins II, B.M. Siller, B.J. McCall J. Chem. Phys. (2013), 139, 164201 A.J. Perry, J.N. Hodges, C.R. Markus, G.S. Kocheril, B.J. McCall J. Molec. Spectrosc. (2015), 317, 71-73.

  9. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm^3 voxel volume with isotropic resolution; 13.5 mm^3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
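    A minimal sketch of the polynomial-fitting idea is given below: a noisy synthetic 3D tract is fitted coordinate-wise with 2nd-order polynomials and the curvature is evaluated from the fitted derivatives; the synthetic tract, noise level, and parameterization are assumptions for illustration, not the authors' simulation settings.

    ```python
    import numpy as np

    # Synthetic tract: an arc of a circle of radius 1/kappa_true, plus noise.
    kappa_true = 7.9                        # 1/m
    radius = 1.0 / kappa_true
    t = np.linspace(0.0, 0.04, 40)          # ~40 mm of tract, parameterized in metres
    x = radius * np.sin(t / radius)
    y = radius * (1.0 - np.cos(t / radius))
    z = np.zeros_like(t)
    rng = np.random.default_rng(1)
    points = np.stack([x, y, z], axis=1) + rng.normal(0.0, 2e-4, (t.size, 3))

    # Fit each coordinate with a 2nd-order polynomial in the parameter t, then
    # evaluate kappa = |r' x r''| / |r'|^3 at the tract midpoint.
    coeffs = [np.polyfit(t, points[:, k], 2) for k in range(3)]
    tm = t.mean()
    r1 = np.array([np.polyval(np.polyder(c, 1), tm) for c in coeffs])   # r'(t)
    r2 = np.array([np.polyval(np.polyder(c, 2), tm) for c in coeffs])   # r''(t)
    kappa = np.linalg.norm(np.cross(r1, r2)) / np.linalg.norm(r1) ** 3
    print(f"true kappa = {kappa_true} 1/m, fitted kappa = {kappa:.1f} 1/m")
    ```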

  10. Enhanced sensitivity for Os isotope ratios by magnetic sector ICP-MS with a capacitive decoupling Pt guard electrode.

    PubMed

    Townsend, A T

    2000-08-01

    A magnetic sector ICP-MS with enhanced sensitivity was used to measure Os isotope ratios in solutions of low Os concentration (approximately 1 ng g(-1) or less). Ratios with 192Os as the basis were determined, while the geologically useful 187Os/188Os ratio was also measured. Sample introduction was via the traditional nebuliser-spray chamber method. A capacitive decoupling Pt shield torch was developed "in-house" and was found to increase Os signals by approximately 5x under "moderate" plasma conditions (1050 W) over that found during normal operation (1250 W). Sensitivity using the guard electrode for 192Os was approximately 250,000-350,000 counts s(-1) per ng g(-1) Os. For a 1 ng g(-1) Os solution with no guard electrode, precisions of the order of 0.2-0.3% (189Os/192Os and 190Os/192Os) to approximately 1% or greater (186Os/192Os, 187Os/192Os and 187Os/188Os) were found (values as 1 sigma for n = 10). With the guard electrode in use, ratio precisions were found to improve to 0.2-0.8%. The total amount of Os used in the acquisition of this data was approximately 2.5 ng per measurement per replicate. At the higher concentration of 10 ng g(-1), precisions of the order of 0.15-0.3% were measured (for all ratios), irrespective of whether the shield torch was used. Ratio accuracy was confirmed by comparison with independently obtained NTIMS data. For both Os concentrations considered, the improvement in precision offered by the guard electrode (if any) was small in comparison to calculated theoretical values based on Poisson counting statistics, suggesting noise contributions from other sources (such as the sample introduction system, plasma flicker, etc.). At lower Os concentrations (to 100 pg g(-1)) no appreciable loss of ratio accuracy was observed, although, as expected based on counting statistics, poorer precisions of the order of 0.45-3% (1 sigma, n = 5) were noted. Re was found to have a detrimental effect on the precision of Os ratios involving 187Os, indicating
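    The Poisson counting-statistics limit mentioned above is the usual benchmark for ratio precision; the sketch below evaluates the theoretical relative standard deviation of an isotope ratio for invented count rates and acquisition parameters (the dwell time and number of sweeps are hypothetical).

    ```python
    import math

    def ratio_rsd_percent(rate1, rate2, dwell_s, sweeps):
        """Poisson-limited relative standard deviation (in %) of the ratio of two
        isotopes measured by ion counting: sqrt(1/N1 + 1/N2)."""
        n1 = rate1 * dwell_s * sweeps
        n2 = rate2 * dwell_s * sweeps
        return 100.0 * math.sqrt(1.0 / n1 + 1.0 / n2)

    # Invented acquisition parameters for a ~1 ng/g Os solution with the guard
    # electrode: ~3e5 counts/s on 192Os and ~1.4e4 counts/s on 187Os.
    print(f"187Os/192Os: {ratio_rsd_percent(1.4e4, 3.0e5, dwell_s=0.01, sweeps=300):.2f} %")
    ```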

  11. A Road Map for Precision Medicine in the Epilepsies

    PubMed Central

    2015-01-01

    Summary Technological advances have paved the way for accelerated genomic discovery and are bringing precision medicine clearly into view. Epilepsy research in particular is well-suited to serve as a model for the development and deployment of targeted therapeutics in precision medicine because of the rapidly expanding genetic knowledge base in epilepsy, the availability of good in vitro and in vivo model systems to efficiently study the biological consequences of genetic mutations, the ability to turn these models into effective drug screening platforms, and the establishment of collaborative research groups. Moving forward, it is critical that we strengthen these collaborations, particularly through integrated research platforms to provide robust analyses both for accurate personal genome analysis and gene and drug discovery. Similarly, the implementation of clinical trial networks will allow the expansion of patient sample populations with genetically defined epilepsy so that drug discovery can be translated into clinical practice. PMID:26416172

  12. Retinotopic memory is more precise than spatiotopic memory.

    PubMed

    Golomb, Julie D; Kanwisher, Nancy

    2012-01-31

    Successful visually guided behavior requires information about spatiotopic (i.e., world-centered) locations, but how accurately is this information actually derived from initial retinotopic (i.e., eye-centered) visual input? We conducted a spatial working memory task in which subjects remembered a cued location in spatiotopic or retinotopic coordinates while making guided eye movements during the memory delay. Surprisingly, after a saccade, subjects were significantly more accurate and precise at reporting retinotopic locations than spatiotopic locations. This difference grew with each eye movement, such that spatiotopic memory continued to deteriorate, whereas retinotopic memory did not accumulate error. The loss in spatiotopic fidelity is therefore not a generic consequence of eye movements, but a direct result of converting visual information from native retinotopic coordinates. Thus, despite our conscious experience of an effortlessly stable spatiotopic world and our lifetime of practice with spatiotopic tasks, memory is actually more reliable in raw retinotopic coordinates than in ecologically relevant spatiotopic coordinates.

  13. Accuracy and Precision in Measurements of Biomass Oxidative Ratio and Carbon Oxidation State

    NASA Astrophysics Data System (ADS)

    Gallagher, M. E.; Masiello, C. A.; Randerson, J. T.; Chadwick, O. A.; Robertson, G. P.

    2007-12-01

    Ecosystem oxidative ratio (OR) is a critical parameter in the apportionment of anthropogenic CO2 between the terrestrial biosphere and ocean carbon reservoirs. OR is the ratio of O2 to CO2 in gas exchange fluxes between the terrestrial biosphere and atmosphere. Ecosystem OR is linearly related to biomass carbon oxidation state (Cox), a fundamental property of the earth system describing the bonding environment of carbon in molecules. Cox can range from -4 to +4 (CH4 to CO2). Variations in both Cox and OR are driven by photosynthesis, respiration, and decomposition. We are developing several techniques to accurately measure variations in ecosystem Cox and OR; these include elemental analysis, bomb calorimetry, and 13C nuclear magnetic resonance spectroscopy. A previous study, comparing the accuracy and precision of elemental analysis versus bomb calorimetry for pure chemicals, showed that elemental analysis-based measurements are more accurate, while calorimetry-based measurements yield more precise data. However, the limited biochemical range of natural samples makes it possible that calorimetry may ultimately prove most accurate, as well as most cost-effective. Here we examine more closely the accuracy of Cox and OR values generated by calorimetry on a large set of natural biomass samples collected from the Kellogg Biological Station-Long Term Ecological Research (KBS-LTER) site in Michigan.

  14. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  15. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  16. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  17. Subcortical Control of Precision Grip after Human Spinal Cord Injury

    PubMed Central

    Bunday, Karen L.; Tazoe, Toshiki; Rothwell, John C.

    2014-01-01

    The motor cortex and the corticospinal system contribute to the control of a precision grip between the thumb and index finger. The involvement of subcortical pathways during human precision grip remains unclear. Using noninvasive cortical and cervicomedullary stimulation, we examined motor evoked potentials (MEPs) and the activity in intracortical and subcortical pathways targeting an intrinsic hand muscle when grasping a small (6 mm) cylinder between the thumb and index finger and during index finger abduction in uninjured humans and in patients with subcortical damage due to incomplete cervical spinal cord injury (SCI). We demonstrate that cortical and cervicomedullary MEP size was reduced during precision grip compared with index finger abduction in uninjured humans, but was unchanged in SCI patients. Regardless of whether cortical and cervicomedullary stimulation was used, suppression of the MEP was only evident 1–3 ms after its onset. Long-term (∼5 years) use of the GABAb receptor agonist baclofen by SCI patients reduced MEP size during precision grip to similar levels as uninjured humans. Index finger sensory function correlated with MEP size during precision grip in SCI patients. Intracortical inhibition decreased during precision grip and spinal motoneuron excitability remained unchanged in all groups. Our results demonstrate that the control of precision grip in humans involves premotoneuronal subcortical mechanisms, likely disynaptic or polysynaptic spinal pathways that are lacking after SCI and restored by long-term use of baclofen. We propose that spinal GABAb-ergic interneuronal circuits, which are sensitive to baclofen, are part of the subcortical premotoneuronal network shaping corticospinal output during human precision grip. PMID:24849366

  18. No galaxy left behind: accurate measurements with the faintest objects in the Dark Energy Survey

    NASA Astrophysics Data System (ADS)

    Suchyta, E.; Huff, E. M.; Aleksić, J.; Melchior, P.; Jouvel, S.; MacCrann, N.; Ross, A. J.; Crocce, M.; Gaztanaga, E.; Honscheid, K.; Leistedt, B.; Peiris, H. V.; Rykoff, E. S.; Sheldon, E.; Abbott, T.; Abdalla, F. B.; Allam, S.; Banerji, M.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; DePoy, D. L.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Eifler, T. F.; Estrada, J.; Evrard, A. E.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gerdes, D. W.; Gruen, D.; Gruendl, R. A.; James, D. J.; Jarvis, M.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; Maia, M. A. G.; March, M.; Marshall, J. L.; Miller, C. J.; Miquel, R.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Percival, W. J.; Reil, K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Swanson, M. E. C.; Tarle, G.; Thaler, J.; Thomas, D.; Vikram, V.; Walker, A. R.; Wechsler, R. H.; Zhang, Y.; DES Collaboration

    2016-03-01

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of detectable stars or galaxies. We have implemented our proposal in BALROG, software which embeds fake objects in real imaging to accurately characterize measurement biases. We demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for their sensitivity to depth and seeing variations. Using the synthetic galaxies as randoms in the Landy-Szalay estimator suppresses the effects of variable survey selection by at least two orders of magnitude. With this correction, our measured angular clustering is found to be in excellent agreement with that of a matched sample from much deeper, higher resolution space-based Cosmological Evolution Survey (COSMOS) imaging; over angular scales of 0.004° < θ < 0.2°, we find a best-fitting scaling amplitude between the DES and COSMOS measurements of 1.00 ± 0.09. We expect this methodology to be broadly useful for extending measurements' statistical reach in a variety of upcoming imaging surveys.
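    As a compact illustration of the Landy-Szalay estimator mentioned above, with synthetic objects standing in for the random catalogue, the hedged sketch below computes w(theta) for a toy, unclustered flat-sky field; the pair-counting radius, catalogue sizes, and flat-sky Euclidean separations are simplifying assumptions, not the DES analysis.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def pair_count(a, b, theta):
        """Ordered pairs between catalogues a and b with separation <= theta
        (flat-sky Euclidean distance in degrees)."""
        return cKDTree(a).count_neighbors(cKDTree(b), theta)

    def landy_szalay(data, randoms, theta):
        """w(theta) = (DD - 2*DR + RR) / RR with normalised pair counts."""
        nd, nr = len(data), len(randoms)
        dd = (pair_count(data, data, theta) - nd) / (nd * (nd - 1))       # drop self-pairs
        rr = (pair_count(randoms, randoms, theta) - nr) / (nr * (nr - 1))
        dr = pair_count(data, randoms, theta) / (nd * nr)
        return (dd - 2.0 * dr + rr) / rr

    rng = np.random.default_rng(2)
    data = rng.uniform(0.0, 1.0, (2000, 2))      # toy "galaxies" on a 1x1 deg patch
    randoms = rng.uniform(0.0, 1.0, (2000, 2))   # toy synthetic/injected randoms
    print(landy_szalay(data, randoms, 0.05))     # close to 0 for an unclustered field
    ```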

  19. Precision diagnostics: moving towards protein biomarker signatures of clinical utility in cancer.

    PubMed

    Borrebaeck, Carl A K

    2017-03-01

    Interest in precision diagnostics has been fuelled by the concept that early detection of cancer would benefit patients; that is, if detected early, more tumours should be resectable and treatment more efficacious. Serum contains massive amounts of potentially diagnostic information, and affinity proteomics has risen as an accurate approach to decipher this, to generate actionable information that should result in more precise and evidence-based options to manage cancer. To achieve this, we need to move from single to multiplex biomarkers, a so-called signature, that can provide significantly increased diagnostic accuracy. This Opinion article focuses on the progress being made in identifying protein biomarker signatures of clinical utility, using blood-based proteomics.

  20. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    NASA Astrophysics Data System (ADS)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for on-line application in vehicle system dynamics simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages (VSD) in the world. The main contribution of this paper is a new version "FASTSIM2" of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD, because with the new algorithm 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights in the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  1. Accuracy of complete-arch dental impressions: a new method of measuring trueness and precision.

    PubMed

    Ender, Andreas; Mehl, Albert

    2013-02-01

    A new approach to both 3-dimensional (3D) trueness and precision is necessary to assess the accuracy of intraoral digital impressions and compare them to conventionally acquired impressions. The purpose of this in vitro study was to evaluate whether a new reference scanner is capable of measuring conventional and digital intraoral complete-arch impressions for 3D accuracy. A steel reference dentate model was fabricated and measured with a reference scanner (digital reference model). Conventional impressions were made from the reference model, poured with Type IV dental stone, scanned with the reference scanner, and exported as digital models. Additionally, digital impressions of the reference model were made and the digital models were exported. Precision was measured by superimposing the digital models within each group. Superimposing the digital models on the digital reference model assessed the trueness of each impression method. Statistical significance was assessed with an independent sample t test (α=.05). The reference scanner delivered high accuracy over the entire dental arch with a precision of 1.6 ±0.6 µm and a trueness of 5.3 ±1.1 µm. Conventional impressions showed significantly higher precision (12.5 ±2.5 µm) and trueness values (20.4 ±2.2 µm) with small deviations in the second molar region (P<.001). Digital impressions were significantly less accurate with a precision of 32.4 ±9.6 µm and a trueness of 58.6 ±15.8 µm (P<.001). More systematic deviations of the digital models were visible across the entire dental arch. The new reference scanner is capable of measuring the precision and trueness of both digital and conventional complete-arch impressions. The digital impression is less accurate and shows a different pattern of deviation than the conventional impression. Copyright © 2013 The Editorial Council of the Journal of Prosthetic Dentistry. Published by Mosby, Inc. All rights reserved.
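    As a hedged sketch of the trueness/precision bookkeeping described above (not the authors' software or alignment step), the example below summarises trueness as the mean point-to-reference deviation and precision as the mean pairwise deviation within a group of already-superimposed toy scans.

    ```python
    import itertools
    import numpy as np
    from scipy.spatial import cKDTree

    def mean_deviation(test_pts, ref_pts):
        """Mean nearest-neighbour distance from one aligned point cloud to another."""
        distances, _ = cKDTree(ref_pts).query(test_pts)
        return distances.mean()

    rng = np.random.default_rng(3)
    reference = rng.uniform(0.0, 50.0, (5000, 3))                 # toy reference model (mm)
    scans = [reference + rng.normal(0.0, 0.02, reference.shape)   # toy repeated impressions
             for _ in range(3)]

    trueness = np.mean([mean_deviation(s, reference) for s in scans])
    precision = np.mean([mean_deviation(a, b)
                         for a, b in itertools.combinations(scans, 2)])
    print(f"trueness ~ {trueness * 1000:.0f} um, precision ~ {precision * 1000:.0f} um")
    ```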

  2. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  3. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
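    A one-degree-of-freedom illustration of the DEB idea is sketched below: the sensitivity of a natural frequency to a mass change, viewed as a differential equation, integrates to a closed-form approximation that remains exact for this toy system, whereas the linear Taylor approximation degrades for large perturbations; the single-mass system and its numbers are assumptions for illustration, not the beam example from the papers above.

    ```python
    import numpy as np

    k, m0 = 1.0e4, 2.0                        # stiffness and baseline mass (arbitrary units)
    w0 = np.sqrt(k / m0)                      # baseline natural frequency w = sqrt(k/m)
    dw_dm = -w0 / (2.0 * m0)                  # sensitivity dw/dm evaluated at the baseline

    for dm_frac in (0.05, 0.2, 0.5):
        m = m0 * (1.0 + dm_frac)
        exact = np.sqrt(k / m)
        taylor = w0 + dw_dm * (m - m0)        # linear Taylor series approximation
        deb = w0 * np.sqrt(m0 / m)            # closed form from integrating dw/dm = -w/(2m)
        print(f"dm = {dm_frac:+.0%}: exact = {exact:.3f}, taylor = {taylor:.3f}, deb = {deb:.3f}")
    ```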

  4. Accurate computation of gravitational field of a tesseroid

    NASA Astrophysics Data System (ADS)

    Fukushima, Toshio

    2018-02-01

    We developed an accurate method to compute the gravitational field of a tesseroid. The method numerically integrates a surface integral representation of the gravitational potential of the tesseroid by conditionally splitting its line integration intervals and by using the double exponential quadrature rule. Then, it evaluates the gravitational acceleration vector and the gravity gradient tensor by numerically differentiating the numerically integrated potential. The numerical differentiation is conducted by appropriately switching the central and the single-sided second-order difference formulas with a suitable choice of the test argument displacement. If necessary, the new method is extended to the case of a general tesseroid with a variable density profile, variable surface height functions, and/or variable intervals in longitude or in latitude. The new method is capable of computing the gravitational field of the tesseroid independently of the location of the evaluation point, namely whether it is outside, near the surface of, on the surface of, or inside the tesseroid. The achievable precision is 14-15 digits for the potential, 9-11 digits for the acceleration vector, and 6-8 digits for the gradient tensor in the double precision environment. The correct digits are roughly doubled if employing the quadruple precision computation. The new method provides a reliable procedure to compute the topographic gravitational field, especially that near, on, and below the surface. Also, it could potentially serve as a reference to complement and refine the existing approaches using the Gauss-Legendre quadrature or other standard methods of numerical integration.
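    As a loose, hedged analogue of the numerical machinery described above (not the tesseroid formulae themselves), the sketch below integrates a toy potential-like kernel with the double exponential (tanh-sinh) quadrature rule and then differentiates the numerically integrated potential with a second-order central difference; the kernel, step size, and precision setting are arbitrary choices.

    ```python
    import mpmath as mp

    mp.mp.dps = 30                                   # extended working precision

    def potential(z):
        """Toy 'potential': integral of 1/sqrt(x^2 + z^2) over a finite segment,
        integrated with the double exponential (tanh-sinh) quadrature rule."""
        return mp.quad(lambda x: 1 / mp.sqrt(x**2 + z**2), [0, 1], method='tanh-sinh')

    def acceleration(z, h=mp.mpf('1e-8')):
        """Second-order central difference of the numerically integrated potential."""
        return (potential(z + h) - potential(z - h)) / (2 * h)

    z0 = mp.mpf('0.5')
    print("central difference :", acceleration(z0))
    print("analytic derivative:", -1 / (z0 * mp.sqrt(z0**2 + 1)))
    # With a well-chosen step the two agree to roughly 15 digits here.
    ```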

  5. Field design factors affecting the precision of ryegrass forage yield estimation

    USDA-ARS?s Scientific Manuscript database

    Field-based agronomic and genetic research relies heavily on the data generated from field evaluations. Therefore, it is imperative to optimize the precision and accuracy of yield estimates in cultivar evaluation trials to make reliable selections. Experimental error in yield trials is sensitive to ...

  6. Improved management of radiotherapy departments through accurate cost data.

    PubMed

    Kesteloot, K; Lievens, Y; van der Schueren, E

    2000-06-01

    Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated, based on accreditation norms for radiotherapy departments set forth in the Belgian legislation. The major cost components of radiotherapy are the cost of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They respectively represent around 3, 30, 50, 4 and 13% of the total costs, irrespective of the department size. The average cost per patient decreases with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations of patient load do not affect the cost picture significantly due to the small impact of variable costs. With larger increases in patient load, however, additional equipment and/or staff become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming that personnel are available 20% less than full time; that, due to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; and that changing the clinical lifetime of buildings and equipment at an unchanged interest rate yields a 5% reduction of total costs and cost per patient. More sophisticated equipment will not have a very large impact on the cost (+/-4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on the Belgian legislation, are not outrageous is shown by replacing them with the USA Blue Book recommendations. Depending on the department size, costs in

  7. Strategy for Realizing High-Precision VUV Spectro-Polarimeter

    NASA Astrophysics Data System (ADS)

    Ishikawa, R.; Narukage, N.; Kubo, M.; Ishikawa, S.; Kano, R.; Tsuneta, S.

    2014-12-01

    Spectro-polarimetric observations in the vacuum ultraviolet (VUV) range are currently the only means to measure magnetic fields in the upper chromosphere and transition region of the solar atmosphere. The Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP) aims to measure linear polarization at the hydrogen Lyman-α line (121.6 nm). This measurement requires a polarization sensitivity better than 0.1 %, which is unprecedented in the VUV range. We here present a strategy with which to realize such high-precision spectro-polarimetry. This involves the optimization of instrument design, testing of optical components, extensive analyses of polarization errors, polarization calibration of the instrument, and calibration with onboard data. We expect that this strategy will aid the development of other advanced high-precision polarimeters in the UV as well as in other wavelength ranges.

  8. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
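    As a generic, hedged illustration of a slope-based temperature correction (not the radius-based models proposed in the record above), the sketch below interpolates a psychrometer calibration slope between two calibration temperatures and converts a measured EMF to water potential; all calibration numbers are invented.

    ```python
    def slope_at(temp_c, cal1, cal2):
        """Linearly interpolated calibration slope; cal1 and cal2 are
        (temperature in C, slope in uV per MPa) pairs from two calibrations."""
        (t1, s1), (t2, s2) = cal1, cal2
        return s1 + (s2 - s1) * (temp_c - t1) / (t2 - t1)

    def water_potential(microvolts, temp_c, cal1=(15.0, 4.2), cal2=(35.0, 5.8)):
        """Water potential (MPa, negative) from the measured EMF; the calibration
        numbers used here are invented for illustration."""
        return -microvolts / slope_at(temp_c, cal1, cal2)

    print(water_potential(microvolts=6.0, temp_c=25.0))   # about -1.2 MPa
    ```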

  9. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of the SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and by in vivo measurements of anterior and
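    As a toy, hedged illustration of the MAP idea described above (not the JM-OCT forward model or its noise statistics), the sketch below precomputes, by Monte Carlo, the distribution of a noise-biased measurement for each candidate true value and reports the candidate that maximises the likelihood of a single observation; the magnitude-of-a-noisy-vector model is only an illustrative stand-in.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    true_grid = np.linspace(0.0, 1.0, 101)          # candidate true values (a.u.)

    def simulate_measurement(true_val, snr, n=20000):
        """Noise-biased 'measured' values: magnitude of a 2-vector of mean length
        true_val with additive Gaussian noise of fixed scale 1/snr (toy model)."""
        noise = rng.normal(0.0, 1.0 / snr, (n, 2))
        return np.hypot(true_val + noise[:, 0], noise[:, 1])

    def map_estimate(measured, snr, bins=np.linspace(0.0, 2.0, 201)):
        """Pick the candidate true value maximising the Monte-Carlo likelihood."""
        likelihoods = []
        for tv in true_grid:
            hist, _ = np.histogram(simulate_measurement(tv, snr), bins=bins, density=True)
            idx = np.clip(np.searchsorted(bins, measured) - 1, 0, len(hist) - 1)
            likelihoods.append(hist[idx])
        return true_grid[int(np.argmax(likelihoods))]

    snr, measured = 5.0, 0.28
    print("plain estimate:", measured, " MAP estimate:", map_estimate(measured, snr))
    # The plain estimate keeps the noise-floor bias; the MAP estimate largely removes it.
    ```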

  10. eRAM: encyclopedia of rare disease annotations for precision medicine.

    PubMed

    Jia, Jinmeng; An, Zhongxin; Ming, Yue; Guo, Yongli; Li, Wei; Liang, Yunxiang; Guo, Dongming; Li, Xin; Tai, Jun; Chen, Geng; Jin, Yaqiong; Liu, Zhimei; Ni, Xin; Shi, Tieliu

    2018-01-04

    Rare diseases affect over a hundred million people worldwide; most of these patients are not accurately diagnosed or effectively treated. The limited knowledge of rare diseases is the biggest obstacle to improving their treatment. Detailed clinical phenotyping is considered a keystone of deciphering genes and realizing precision medicine for rare diseases. Here, we present a standardized system for various types of rare diseases, called the encyclopedia of Rare disease Annotations for Precision Medicine (eRAM). eRAM was built by text-mining nearly 10 million scientific publications and electronic medical records, and by integrating various data from existing recognized databases (such as the Unified Medical Language System (UMLS), Human Phenotype Ontology, Orphanet, OMIM, GWAS). eRAM systematically incorporates currently available data on clinical manifestations and molecular mechanisms of rare diseases and uncovers many novel associations among diseases. eRAM provides enriched annotations for 15 942 rare diseases, yielding 6147 human disease-related phenotype terms, 31 661 mammalian phenotype terms, 10,202 symptoms from UMLS, 18 815 genes and 92 580 genotypes. eRAM not only provides information about rare disease mechanisms but also helps clinicians make accurate diagnostic and therapeutic decisions for rare diseases. eRAM can be freely accessed at http://www.unimd.org/eram/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. High precision analytical description of the allowed β spectrum shape

    NASA Astrophysics Data System (ADS)

    Hayen, Leendert; Severijns, Nathal; Bodek, Kazimierz; Rozpedzik, Dagmara; Mougeot, Xavier

    2018-01-01

    A fully analytical description of the allowed β spectrum shape is given in view of ongoing and planned measurements. Its study forms an invaluable tool in the search for physics beyond the standard electroweak model and for the weak magnetism recoil term. Contributions stemming from finite-size corrections, mass effects, and radiative corrections are reviewed. Particular focus is placed on atomic and chemical effects, where the existing description is extended and provided analytically. The effects of QCD-induced recoil terms are discussed, and cross-checks were performed for different theoretical formalisms. Special attention was given to a comparison of the treatment of nuclear structure effects in different formalisms. Corrections were derived for both Fermi and Gamow-Teller transitions, and methods of analytical evaluation are thoroughly discussed. In its integrated form, the calculated f values agree with the most precise numerical results within the aimed-for precision. The need for an accurate evaluation of weak magnetism contributions is stressed, and the possible significance of the oft-neglected induced pseudoscalar interaction is noted. Together with improved atomic corrections, an analytical description of the allowed β spectrum shape is presented that is accurate to a few parts in 10⁻⁴ down to 1 keV for low- to medium-Z nuclei, thereby extending the work of previous authors by nearly an order of magnitude.

  12. An Accurate Transmitting Power Control Method in Wireless Communication Transceivers

    NASA Astrophysics Data System (ADS)

    Zhang, Naikang; Wen, Zhiping; Hou, Xunping; Bi, Bo

    2018-01-01

    Power control circuits are widely used in transceivers to stabilize the transmitted signal power at a specified value, thereby reducing power consumption and interference with other frequency bands. In order to overcome the shortcomings of traditional power control schemes, this paper proposes an accurate signal power detection method that multiplexes the receiver, and realizes transmit power control in the digital domain. The simulation results show that this novel digital power control approach has the advantages of small delay, high precision and a simplified design procedure. The proposed method is applicable to transceivers working over a large frequency dynamic range, and has good engineering practicability.
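
    As a rough illustration of digital-domain power detection and feedback (not the paper's specific design), the following sketch estimates signal power from baseband I/Q samples and derives a gain correction toward a target level; the full-scale reference and loop gain are assumptions.

```python
import numpy as np

# Illustrative sketch only: estimate signal power from baseband I/Q samples in the
# digital domain and derive a gain correction that steers transmit power toward a target.
def measure_power_dbm(iq, full_scale_dbm=0.0):
    """Mean power of complex baseband samples, referred to an assumed full-scale level."""
    p_lin = np.mean(np.abs(iq) ** 2)            # normalized linear power
    return full_scale_dbm + 10 * np.log10(p_lin)

def gain_correction_db(measured_dbm, target_dbm, loop_gain=0.5):
    """Simple proportional update of the transmit gain (dB)."""
    return loop_gain * (target_dbm - measured_dbm)

# Example: a tone at half of full scale measures about -6 dBFS, so the loop raises gain.
t = np.arange(1024)
iq = 0.5 * np.exp(2j * np.pi * 0.01 * t)
print(measure_power_dbm(iq), gain_correction_db(measure_power_dbm(iq), target_dbm=-3.0))
```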

  13. Accurate age determinations of several nearby open clusters containing magnetic Ap stars

    NASA Astrophysics Data System (ADS)

    Silaj, J.; Landstreet, J. D.

    2014-06-01

    Context. To study the time evolution of magnetic fields, chemical abundance peculiarities, and other characteristics of magnetic Ap and Bp stars during their main-sequence lives, a sample of these stars in open clusters has been obtained, as such stars can be assumed to have the same ages as the clusters to which they belong. However, in exploring age determinations in the literature, we find a large dispersion among different age determinations, even for bright, nearby clusters. Aims: Our aim is to obtain ages that are as accurate as possible for the seven nearby open clusters α Per, Coma Ber, IC 2602, NGC 2232, NGC 2451A, NGC 2516, and NGC 6475, each of which contains at least one magnetic Ap or Bp star. Simultaneously, we test the current calibrations of Te and luminosity for the Ap/Bp star members, and clearly identify blue stragglers in the clusters studied. Methods: We explore the possibility that isochrone fitting in the theoretical Hertzsprung-Russell diagram (i.e. log (L/L⊙) vs. log Te), rather than in the conventional colour-magnitude diagram, can provide more precise and accurate cluster ages, with well-defined uncertainties. Results: Well-defined ages are found for all the clusters studied. For the nearby clusters studied, the derived ages are not very sensitive to the small uncertainties in distance, reddening, membership, metallicity, or choice of isochrones. Our age determinations are all within the range of previously determined values, but the associated uncertainties are considerably smaller than the spread in recent age determinations from the literature. Furthermore, examination of proper motions and HR diagrams confirms that the Ap stars identified in these clusters are members, and that the presently accepted temperature scale and bolometric corrections for Ap stars are approximately correct. We show that in these theoretical HR diagrams blue stragglers are particularly easy to identify. Conclusions: Constructing the theoretical HR diagram

  14. Gene expression during blow fly development: improving the precision of age estimates in forensic entomology.

    PubMed

    Tarone, Aaron M; Foran, David R

    2011-01-01

    Forensic entomologists use size and developmental stage to estimate blow fly age, and from those, a postmortem interval. Since such estimates are generally accurate but often lack precision, particularly in the older developmental stages, alternative aging methods would be advantageous. Presented here is a means of incorporating developmentally regulated gene expression levels into traditional stage and size data, with a goal of more precisely estimating developmental age of immature Lucilia sericata. Generalized additive models of development showed improved statistical support compared to models that did not include gene expression data, resulting in an increase in estimate precision, especially for postfeeding third instars and pupae. The models were then used to make blind estimates of development for 86 immature L. sericata raised on rat carcasses. Overall, inclusion of gene expression data resulted in increased precision in aging blow flies. © 2010 American Academy of Forensic Sciences.

  15. The European Society for Medical Oncology (ESMO) Precision Medicine Glossary.

    PubMed

    Yates, L R; Seoane, J; Le Tourneau, C; Siu, L L; Marais, R; Michiels, S; Soria, J C; Campbell, P; Normanno, N; Scarpa, A; Reis-Filho, J S; Rodon, J; Swanton, C; Andre, F

    2018-01-01

    Precision medicine is rapidly evolving within the field of oncology and has brought many new concepts and terminologies that are often poorly defined when first introduced, which may subsequently lead to miscommunication within the oncology community. The European Society for Medical Oncology (ESMO) recognises these challenges and is committed to support the adoption of precision medicine in oncology. To add clarity to the language used by oncologists and basic scientists within the context of precision medicine, the ESMO Translational Research and Personalised Medicine Working Group has developed a standardised glossary of relevant terms. Relevant terms for inclusion in the glossary were identified via an ESMO member survey conducted in Autumn 2016, and by the ESMO Translational Research and Personalised Medicine Working Group members. Each term was defined by experts in the field, discussed and, if necessary, modified by the Working Group before reaching consensus approval. A literature search was carried out to determine which of the terms, 'precision medicine' and 'personalised medicine', is most appropriate to describe this field. A total of 43 terms are included in the glossary, grouped into five main themes-(i) mechanisms of decision, (ii) characteristics of molecular alterations, (iii) tumour characteristics, (iv) clinical trials and statistics and (v) new research tools. The glossary classes 'precision medicine' or 'personalised medicine' as technically interchangeable but the term 'precision medicine' is favoured as it more accurately reflects the highly precise nature of new technologies that permit base pair resolution dissection of cancer genomes and is less likely to be misinterpreted. The ESMO Precision Medicine Glossary provides a resource to facilitate consistent communication in this field by clarifying and raising awareness of the language employed in cancer research and oncology practice. The glossary will be a dynamic entity, undergoing

  16. Precision imaging of 4.4 MeV gamma rays using a 3-D position sensitive Compton camera.

    PubMed

    Koide, Ayako; Kataoka, Jun; Masuda, Takamitsu; Mochizuki, Saku; Taya, Takanori; Sueoka, Koki; Tagawa, Leo; Fujieda, Kazuya; Maruhashi, Takuya; Kurihara, Takuya; Inaniwa, Taku

    2018-05-25

    Imaging of nuclear gamma-ray lines in the 1-10 MeV range is far from being established in both medical and physical applications. In proton therapy, 4.4 MeV gamma rays are emitted from the excited nucleus of either 12C* or 11B* and are considered good indicators of dose delivery and/or range verification. Further, in gamma-ray astronomy, 4.4 MeV gamma rays are produced by cosmic-ray interactions in the interstellar medium, and can thus be used to probe nucleosynthesis in the universe. In this paper, we present a high-precision image of 4.4 MeV gamma rays taken by a newly developed 3-D position sensitive Compton camera (3D-PSCC). To mimic the situation in proton therapy, we first irradiated water, PMMA and Ca(OH)2 with a 70 MeV proton beam, then we identified various nuclear lines with the HPGe detector. The 4.4 MeV gamma rays constitute a broad peak, including single and double escape peaks. Thus, by setting an energy window of the 3D-PSCC from 3 to 5 MeV, we show that the gamma-ray image sharply concentrates near the Bragg peak, as expected from the minimum energy threshold and the sharp peak profile in the cross section of 12C(p,p')12C*.

  17. Sensitive Precise pH Measurement with Large-Area Graphene Field-Effect Transistors at the Quantum-Capacitance Limit

    NASA Astrophysics Data System (ADS)

    Fakih, Ibrahim; Mahvash, Farzaneh; Siaj, Mohamed; Szkopek, Thomas

    2017-10-01

    A challenge for pH sensing is decreasing the minimum measurable pH per unit bandwidth in an economical fashion. Minimizing noise to reach the inherent limit imposed by charge fluctuation remains an obstacle. We demonstrate here graphene-based ion-sensing field-effect transistors that saturate the physical limit of sensitivity, defined here as the change in electrical response with respect to pH, and achieve a precision limited by charge-fluctuation noise at the sensing layer. We present a model outlining the necessity of maximizing the device carrier mobility, active sensing area, and capacitive coupling in order to minimize noise. We encapsulate large-area graphene with an ultrathin layer of parylene, a hydrophobic polymer, and deposit an ultrathin, stoichiometric pH-sensing layer of either aluminum oxide or tantalum pentoxide. With these structures, we achieve gate capacitances of ~0.6 μF/cm², approaching the quantum-capacitance limit inherent to graphene, along with a near-Nernstian pH response of ~55 ± 2 mV/pH. We observe field-effect mobilities as high as 7000 cm² V⁻¹ s⁻¹ with minimal hysteresis as a result of the parylene encapsulation. A detection limit of 0.1 mpH in a 60-Hz electrical bandwidth is observed in optimized graphene transistors.
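
    For context on the "near-Nernstian" figure quoted above, the ideal Nernst slope at room temperature follows directly from ln(10)·k_B·T/e, which this short check computes (about 59.2 mV per pH unit at 25 °C, so ~55 mV/pH is close to ideal).

```python
from scipy.constants import k as k_B, e
import numpy as np

# Worked check: ideal Nernst slope = ln(10) * k_B * T / e at T = 298.15 K.
T = 298.15  # kelvin
nernst_slope_mV = np.log(10) * k_B * T / e * 1e3
print(f"ideal Nernst slope at 25 degC: {nernst_slope_mV:.1f} mV/pH")  # ~59.2 mV/pH
```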

  18. Accurate LC peak boundary detection for ¹⁶O/¹⁸O labeled LC-MS data.

    PubMed

    Cui, Jian; Petritis, Konstantinos; Tegeler, Tony; Petritis, Brianne; Ma, Xuepo; Jin, Yufang; Gao, Shou-Jiang S J; Zhang, Jianqiu Michelle

    2013-01-01

    In liquid chromatography-mass spectrometry (LC-MS), parts of LC peaks are often corrupted by their co-eluting peptides, which results in increased quantification variance. In this paper, we propose to apply accurate LC peak boundary detection to remove the corrupted part of LC peaks. Accurate LC peak boundary detection is achieved by checking the consistency of intensity patterns within peptide elution time ranges. In addition, we remove peptides with erroneous mass assignments through a model fitness check, which compares observed intensity patterns to theoretically constructed ones. The proposed algorithm can significantly improve the accuracy and precision of peptide ratio measurements.
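
    A much-simplified sketch of the consistency idea, assuming an extracted-ion chromatogram stored as a scans-by-isotopes intensity matrix: scans whose isotope pattern no longer correlates with the pattern at the peak apex are trimmed from the boundaries. This is illustrative only, not the paper's exact algorithm.

```python
import numpy as np

# Simplified sketch: trim an LC elution window by keeping only scans whose isotope-intensity
# pattern stays consistent with the pattern at the peak apex.
# `xic` is a (n_scans, n_isotopes) array of intensities.
def trim_peak_boundaries(xic, min_corr=0.9):
    apex = np.argmax(xic.sum(axis=1))
    ref = xic[apex]
    def ok(i):
        c = np.corrcoef(xic[i], ref)[0, 1]
        return np.isfinite(c) and c >= min_corr
    lo = apex
    while lo > 0 and ok(lo - 1):
        lo -= 1
    hi = apex
    while hi < len(xic) - 1 and ok(hi + 1):
        hi += 1
    return lo, hi  # scan indices of the retained (uncorrupted) part of the LC peak
```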

  19. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    PubMed Central

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2015-01-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices. PMID:25466541

  20. Advancing the speed, sensitivity and accuracy of biomolecular detection using multi-length-scale engineering

    NASA Astrophysics Data System (ADS)

    Kelley, Shana O.; Mirkin, Chad A.; Walt, David R.; Ismagilov, Rustem F.; Toner, Mehmet; Sargent, Edward H.

    2014-12-01

    Rapid progress in identifying disease biomarkers has increased the importance of creating high-performance detection technologies. Over the last decade, the design of many detection platforms has focused on either the nano or micro length scale. Here, we review recent strategies that combine nano- and microscale materials and devices to produce large improvements in detection sensitivity, speed and accuracy, allowing previously undetectable biomarkers to be identified in clinical samples. Microsensors that incorporate nanoscale features can now rapidly detect disease-related nucleic acids expressed in patient samples. New microdevices that separate large clinical samples into nanocompartments allow precise quantitation of analytes, and microfluidic systems that utilize nanoscale binding events can detect rare cancer cells in the bloodstream more accurately than before. These advances will lead to faster and more reliable clinical diagnostic devices.

  1. Motion and gravity effects in the precision of quantum clocks

    PubMed Central

    Lindkvist, Joel; Sabín, Carlos; Johansson, Göran; Fuentes, Ivette

    2015-01-01

    We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions. PMID:25988238

  2. Motion and gravity effects in the precision of quantum clocks.

    PubMed

    Lindkvist, Joel; Sabín, Carlos; Johansson, Göran; Fuentes, Ivette

    2015-05-19

    We show that motion and gravity affect the precision of quantum clocks. We consider a localised quantum field as a fundamental model of a quantum clock moving in spacetime and show that its state is modified due to changes in acceleration. By computing the quantum Fisher information we determine how relativistic motion modifies the ultimate bound in the precision of the measurement of time. While in the absence of motion the squeezed vacuum is the ideal state for time estimation, we find that it is highly sensitive to the motion-induced degradation of the quantum Fisher information. We show that coherent states are generally more resilient to this degradation and that in the case of very low initial number of photons, the optimal precision can be even increased by motion. These results can be tested with current technology by using superconducting resonators with tunable boundary conditions.

  3. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our
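
    The baseline slope statistics quoted above can be illustrated with a short sketch: slopes are taken between DTM samples separated by the chosen baseline, then summarized by the median, mean, and standard deviation. Grid spacing, baseline, and the synthetic terrain below are assumptions for the example.

```python
import numpy as np

# Illustrative bidirectional slope statistics from a gridded DTM at a chosen baseline.
# `dtm` is a 2-D elevation array (metres); `gsd` is the grid spacing in metres.
def slope_stats(dtm, gsd=5.0, baseline=15.0):
    step = max(1, int(round(baseline / gsd)))   # samples per baseline
    dz_x = dtm[:, step:] - dtm[:, :-step]       # elevation differences along rows
    dz_y = dtm[step:, :] - dtm[:-step, :]       # and along columns
    dz = np.abs(np.concatenate([dz_x.ravel(), dz_y.ravel()]))
    slopes = np.degrees(np.arctan(dz / (step * gsd)))
    return np.median(slopes), slopes.mean(), slopes.std()

# Example with synthetic terrain:
dtm = np.cumsum(np.random.default_rng(0).normal(0, 1, (200, 200)), axis=0)
print(slope_stats(dtm))
```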

  4. Precision measurements on trapped antihydrogen in the ALPHA experiment.

    PubMed

    Eriksson, S

    2018-03-28

    Both the 1S-2S transition and the ground state hyperfine spectrum have been observed in trapped antihydrogen. The former constitutes the first observation of resonant interaction of light with an anti-atom, and the latter is the first detailed measurement of a spectral feature in antihydrogen. Owing to the narrow intrinsic linewidth of the 1S-2S transition and use of two-photon laser excitation, the transition energy can be precisely determined in both hydrogen and antihydrogen, allowing a direct comparison as a test of fundamental symmetry. The result is consistent with CPT invariance at a relative precision of around 2×10⁻¹⁰. This constitutes the most precise measurement of a property of antihydrogen. The hyperfine spectrum of antihydrogen is determined to a relative uncertainty of 4×10⁻⁴. The excited state and the hyperfine spectroscopy techniques currently both show sensitivity at the few 100 kHz level on the absolute scale. Here, the most recent work of the ALPHA collaboration on precision spectroscopy of antihydrogen is presented together with an outlook on improving the precision of measurements involving lasers and microwave radiation. Prospects of measuring the Lamb shift and determining the antiproton charge radius in trapped antihydrogen in the ALPHA apparatus are presented. Future perspectives of precision measurements of trapped antihydrogen in the ALPHA apparatus when the ELENA facility becomes available to experiments at CERN are discussed. This article is part of the Theo Murphy meeting issue 'Antiproton physics in the ELENA era'. © 2018 The Author(s).

  5. Precision measurements on trapped antihydrogen in the ALPHA experiment

    NASA Astrophysics Data System (ADS)

    Eriksson, S.

    2018-03-01

    Both the 1S-2S transition and the ground state hyperfine spectrum have been observed in trapped antihydrogen. The former constitutes the first observation of resonant interaction of light with an anti-atom, and the latter is the first detailed measurement of a spectral feature in antihydrogen. Owing to the narrow intrinsic linewidth of the 1S-2S transition and use of two-photon laser excitation, the transition energy can be precisely determined in both hydrogen and antihydrogen, allowing a direct comparison as a test of fundamental symmetry. The result is consistent with CPT invariance at a relative precision of around 2×10⁻¹⁰. This constitutes the most precise measurement of a property of antihydrogen. The hyperfine spectrum of antihydrogen is determined to a relative uncertainty of 4×10⁻⁴. The excited state and the hyperfine spectroscopy techniques currently both show sensitivity at the few 100 kHz level on the absolute scale. Here, the most recent work of the ALPHA collaboration on precision spectroscopy of antihydrogen is presented together with an outlook on improving the precision of measurements involving lasers and microwave radiation. Prospects of measuring the Lamb shift and determining the antiproton charge radius in trapped antihydrogen in the ALPHA apparatus are presented. Future perspectives of precision measurements of trapped antihydrogen in the ALPHA apparatus when the ELENA facility becomes available to experiments at CERN are discussed. This article is part of the Theo Murphy meeting issue `Antiproton physics in the ELENA era'.

  6. Fluorescence confocal mosaicing microscopy of basal cell carcinomas ex vivo: demonstration of rapid surgical pathology with high sensitivity and specificity

    NASA Astrophysics Data System (ADS)

    Gareau, Daniel S.; Karen, Julie K.; Dusza, Stephen W.; Tudisco, Marie; Nehal, Kishwer S.; Rajadhyaksha, Milind

    2009-02-01

    Mohs surgery, for the precise removal of basal cell carcinomas (BCCs), consists of a series of excisions guided by the surgeon's examination of the frozen histology of the previous excision. The histology reveals atypical nuclear morphology, identifying cancer. The preparation of frozen histology is accurate but labor-intensive and slow. Nuclear pathology can instead be achieved by staining BCCs in Mohs surgical skin excisions with acridine orange (1 mM, 20 s), within 5-9 minutes compared to 20-45 minutes for frozen histology. For clinical utility, images must have high contrast and high resolution. We report tumor contrast of 10-100 fold over the background dermis and submicron (diffraction-limited) resolution over a centimetre-sized field of view. BCCs were detected with an overall sensitivity of 96.6%, specificity of 89.2%, positive predictive value of 93.0% and negative predictive value of 94.7%. The technique was therefore accurate for normal tissue as well as tumor. We conclude that fluorescence confocal mosaicing serves as a sensitive and rapid pathological tool. Beyond Mohs surgery, this technology may be extended to suit other pathological needs with the development of new contrast agents. The technique reported here accurately detects all subtypes of BCC in skin excisions, including the large nodular, small micronodular, and tiny sclerodermaform tumors. With further engineering of the tissue mounting and staging mechanisms, this technique may also be applicable to imaging tissue that is larger, more irregular, and of varying mechanical compliance.

  7. A Monte Carlo Simulation Comparing the Statistical Precision of Two High-Stakes Teacher Evaluation Methods: A Value-Added Model and a Composite Measure

    ERIC Educational Resources Information Center

    Spencer, Bryden

    2016-01-01

    Value-added models are a class of growth models used in education to assign responsibility for student growth to teachers or schools. For value-added models to be used fairly, sufficient statistical precision is necessary for accurate teacher classification. Previous research indicated precision below practical limits. An alternative approach has…

  8. Precision enhancement of pavement roughness localization with connected vehicles

    NASA Astrophysics Data System (ADS)

    Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.

    2016-02-01

    Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
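
    The path-distance tagging idea described above can be sketched as follows: once the reference inertial marker (for example, a speed bump at a known position) is detected, subsequent accelerometer samples are position-tagged by integrating the vehicle speed. The sampling rate, speed profile, and marker index below are assumed example values.

```python
import numpy as np

# Minimal sketch of position-tagging inertial samples relative to a reference marker
# by integrating vehicle speed over time.
def tag_positions(times_s, speeds_mps, i_ref, ref_position_m=0.0):
    dt = np.diff(times_s, prepend=times_s[0])
    path = np.cumsum(speeds_mps * dt)              # cumulative travelled distance
    return ref_position_m + (path - path[i_ref])   # distance of each sample from the marker

# Example: 10 Hz speed samples, marker detected at sample 50.
t = np.arange(0, 20, 0.1)
v = np.full_like(t, 15.0)                          # 15 m/s constant speed
print(tag_positions(t, v, i_ref=50)[55])           # ~7.5 m past the marker
```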

  9. High precision measurements on fission-fragment de-excitation

    NASA Astrophysics Data System (ADS)

    Oberstedt, Stephan; Gatera, Angélique; Geerts, Wouter; Göök, Alf; Hambsch, Franz-Josef; Vidali, Marzio; Oberstedt, Andreas

    2017-11-01

    In recent years nuclear fission has gained renewed interest both from the nuclear energy community and in basic science. The former, represented by the OECD Nuclear Energy Agency, has expressed the need for more accurate fission cross-section and fragment yield data for safety assessments of Generation IV reactor systems. In basic science, modelling has made much progress in describing the de-excitation mechanism of neutron-rich isotopes, e.g. those produced in nuclear fission. Benchmarking the different models requires precise experimental data on prompt fission neutron and γ-ray emission, e.g. multiplicity, average energy per particle and total dissipated energy per fission, preferably as a function of fission-fragment mass and total kinetic energy. A collaboration of scientists from JRC Geel (formerly known as JRC IRMM) and other institutes took the lead in establishing a dedicated measurement programme on prompt fission neutron and γ-ray characteristics, which has triggered even more measurement activities around the world. This contribution presents the new advanced instrumentation and methodology we use to generate high-precision spectral data and gives a flavour of future data needs and opportunities.

  10. Enhanced Precision Time Synchronization for Wireless Sensor Networks

    PubMed Central

    Cho, Hyuntae; Kim, Jongdeok; Baek, Yunju

    2011-01-01

    Time synchronization in wireless sensor networks (WSNs) is a fundamental issue for the coordination of distributed entities and events. Nondeterministic latency, which may decrease the accuracy and precision of time synchronization, can occur at any point in the network layers. In particular, random back-off caused by channel contention leads to a large uncertainty. In order to reduce this large nondeterministic uncertainty from channel contention, we propose an enhanced precision time synchronization protocol in this paper. The proposed method reduces the traffic needed for the synchronization procedure by selectively forwarding packets. Furthermore, the time difference between sensor nodes increases as time advances because of the use of a clock source with a cheap crystal oscillator. We therefore also provide a means to maintain accurate time by adopting hardware-assisted time stamping and drift correction. Experiments are conducted to evaluate the performance of the proposed method, for which sensor nodes are designed and implemented. According to the evaluation results, the performance of the proposed method is better than that of a traditional time synchronization protocol. PMID:22164035
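
    As a simple sketch of the drift-correction idea (the general principle behind hardware-assisted time stamping plus drift compensation, not this protocol's specific algorithm), pairs of local and reference timestamps can be fitted with a line whose slope and intercept give the clock skew and offset; the example numbers are assumptions.

```python
import numpy as np

# Fit local-vs-reference timestamp pairs with a line, then use the fitted skew/offset
# to translate local clock readings into reference time.
def fit_drift(local_ts, ref_ts):
    skew, offset = np.polyfit(local_ts, ref_ts, 1)   # ref ~= skew * local + offset
    return skew, offset

def to_ref_time(local_t, skew, offset):
    return skew * local_t + offset

# Example: local oscillator runs 50 ppm fast with a 2 ms initial offset.
local = np.linspace(0, 100, 11)
ref = local / 1.00005 - 0.002
print(fit_drift(local, ref))
```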

  11. Enhanced precision time synchronization for wireless sensor networks.

    PubMed

    Cho, Hyuntae; Kim, Jongdeok; Baek, Yunju

    2011-01-01

    Time synchronization in wireless sensor networks (WSNs) is a fundamental issue for the coordination of distributed entities and events. Nondeterministic latency, which may decrease the accuracy and precision of time synchronization, can occur at any point in the network layers. In particular, random back-off caused by channel contention leads to a large uncertainty. In order to reduce this large nondeterministic uncertainty from channel contention, we propose an enhanced precision time synchronization protocol in this paper. The proposed method reduces the traffic needed for the synchronization procedure by selectively forwarding packets. Furthermore, the time difference between sensor nodes increases as time advances because of the use of a clock source with a cheap crystal oscillator. We therefore also provide a means to maintain accurate time by adopting hardware-assisted time stamping and drift correction. Experiments are conducted to evaluate the performance of the proposed method, for which sensor nodes are designed and implemented. According to the evaluation results, the performance of the proposed method is better than that of a traditional time synchronization protocol.

  12. Evaluation of the precision agricultural landscape modeling system (PALMS) in the semiarid Texas southern high plains

    USDA-ARS?s Scientific Manuscript database

    Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...

  13. Evaluation of the Precision Agricultural Landscape Modeling System (PALMS) in the Semiarid Texas Southern High Plains

    USDA-ARS?s Scientific Manuscript database

    Accurate models to simulate the soil water balance in semiarid cropping systems are needed to evaluate management practices for soil and water conservation in both irrigated and dryland production systems. The objective of this study was to evaluate the application of the Precision Agricultural Land...

  14. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1983-01-01

    The kinematic errors in spiral-bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral-bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is therefore most crucial for the gear, and less so for the pinion. Previously announced in STAR as N82-30552

  15. Precision of spiral-bevel gears

    NASA Technical Reports Server (NTRS)

    Litvin, F. L.; Goldrich, R. N.; Coy, J. J.; Zaretsky, E. V.

    1982-01-01

    The kinematic errors in spiral-bevel gear trains caused by the generation of nonconjugate surfaces, by axial displacements of the gears during assembly, and by eccentricity of the assembled gears were determined. One mathematical model corresponds to the motion of the contact ellipse across the tooth surface (geometry I) and the other along the tooth surface (geometry II). The following results were obtained: (1) kinematic errors induced by errors of manufacture may be minimized by applying special machine settings; the original error may be reduced by an order of magnitude, and the procedure is most effective for geometry II gears; (2) when adjusting the bearing contact pattern between the gear teeth, for geometry I gears it is more desirable to shim the gear axially, while for geometry II gears the pinion should be shimmed axially; (3) the kinematic accuracy of spiral-bevel drives is most sensitive to eccentricities of the gear and less sensitive to eccentricities of the pinion. The precision of mounting and manufacture is therefore most crucial for the gear, and less so for the pinion.

  16. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory models for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-averaged concentration model was found to be most sensitive to wind speed, followed by horizontal dispersion (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
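
    For reference, the textbook sector-averaged ground-level form of the straight-line Gaussian plume model (commonly used with joint-frequency-distribution inputs) can be evaluated as below; this is the standard formula rather than the paper's exact implementation, and the dispersion parameter, sector count, and example inputs are assumptions.

```python
import numpy as np

# Sector-averaged ground-level relative concentration chi/Q for an elevated release:
# chi/Q = sqrt(2/pi) * exp(-H^2 / (2 sigma_z^2)) / (u * sigma_z * x * dtheta)
def chi_over_q_sector(x_m, u_mps, sigma_z_m, stack_h_m, n_sectors=16):
    dtheta = 2 * np.pi / n_sectors                         # sector width (rad)
    return (np.sqrt(2 / np.pi) / (u_mps * sigma_z_m * x_m * dtheta)
            * np.exp(-stack_h_m**2 / (2 * sigma_z_m**2)))  # units: s/m^3

print(chi_over_q_sector(x_m=1000.0, u_mps=3.0, sigma_z_m=30.0, stack_h_m=50.0))
```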

  17. A sensitive and accurate quantification method for the detection of hepatitis B virus covalently closed circular DNA by the application of a droplet digital polymerase chain reaction amplification system.

    PubMed

    Mu, Di; Yan, Liang; Tang, Hui; Liao, Yong

    2015-10-01

    To develop a sensitive and accurate assay system for the quantification of covalently closed circular HBV DNA (cccDNA) for future clinical monitoring of cccDNA fluctuation during antiviral therapy in the liver of infected patients. A droplet digital PCR (ddPCR)-based assay system detected template DNA input at the single-copy level (or ~10⁻⁵ pg of plasmid HBV DNA) using serially diluted plasmid HBV DNA samples. Compared with the conventional quantitative PCR assay for the detection of cccDNA, which required at least 50 ng of template DNA input, a parallel experiment applying the ddPCR system demonstrates that the lowest detection limit of cccDNA from HepG2.215 cellular DNA samples is around 1 ng, which is equivalent to 0.54 ± 0.94 copies of cccDNA. In addition, we demonstrated that the addition of cccDNA-safe exonuclease and the utilization of cccDNA-specific primers in the ddPCR assay system significantly improved the detection accuracy of HBV cccDNA from HepG2.215 cellular DNA samples. The ddPCR-based cccDNA detection system is a sensitive and accurate assay for the quantification of cccDNA in HBV-transfected HepG2.215 cellular DNA samples and may represent an important method for future application in monitoring cccDNA fluctuation during antiviral therapy.
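
    As background (not stated in the abstract itself), ddPCR concentrations are conventionally derived from the fraction of positive droplets via Poisson statistics; a minimal sketch follows, with a droplet volume typical of commercial systems used as an assumed value.

```python
import numpy as np

# Standard ddPCR Poisson readout: mean copies per droplet lambda = -ln(1 - p),
# where p is the fraction of positive droplets; concentration follows from droplet volume.
def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
    lam = -np.log(1.0 - n_positive / n_total)      # mean copies per droplet (Poisson)
    return lam / (droplet_volume_nl * 1e-3)        # copies per microlitre

print(ddpcr_copies_per_ul(n_positive=120, n_total=15000))
```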

  18. High-Precision Simulation of the Gravity Field of Rapidly-Rotating Barotropes in Hydrostatic Equilibrium

    NASA Astrophysics Data System (ADS)

    Hubbard, W. B.

    2013-12-01

    The so-called theory of figures (TOF) uses potential theory to solve for the structure of highly distorted rotating liquid planets in hydrostatic equilibrium. TOF is noteworthy both for its antiquity (Maclaurin 1742) and its mathematical complexity. Planned high-precision gravity measurements near the surfaces of Jupiter and Saturn (possibly detecting signals at the ~microgal level) will place unprecedented requirements on TOF, not because one expects hydrostatic equilibrium to that level, but because nonhydrostatic components in the surface gravity, at expected levels of ~1 milligal, must be referenced to precise hydrostatic-equilibrium models. The Maclaurin spheroid is both a useful test of numerical TOF codes (Hubbard 2012, ApJ Lett 756:L15), and an approach to an efficient TOF code for arbitrary barotropes of variable density (Hubbard 2013, ApJ 768:43). For the latter, one trades off vertical resolution by replacing a continuous barotropic pressure-density relation with a stairstep relation, corresponding to N concentric Maclaurin spheroids (CMS), each of constant density. The benefit of this trade-off is that two-dimensional integrals over the mass distributions at each interface are reduced to one-dimensional integrals, quickly and accurately evaluated by Gaussian quadrature. The shapes of the spheroids comprise N level surfaces within the planet and at its surface, are gravitationally coupled to each other, and are found by self-consistent iteration, relaxing to a final configuration to within the computer's precision limits. The angular and radial variation of external gravity (using the usual geophysical expansion in multipole moments) can be found to the limit of typical floating-point precision (~10⁻¹⁴), much better than the expected noise/signal for either the Juno or Cassini gravity experiments. The stairstep barotrope can be adjusted to fit a prescribed continuous or discontinuous interior barotrope, and can be made to approximate it to any required precision by
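
    To illustrate the "usual geophysical expansion in multipole moments" mentioned above, the external potential of an axisymmetric body can be evaluated from its even zonal harmonics J_2n; the J values below are round numbers of order Jupiter's, used only as an example, and the function is an illustrative sketch rather than part of any TOF code.

```python
import numpy as np
from scipy.special import eval_legendre

# External potential: U(r, theta) = -(GM/r) * [1 - sum_n J_n (R_eq/r)^n P_n(cos theta)]
def external_potential(r_m, colat_rad, GM, R_eq, J=None):
    J = J or {2: 0.014696, 4: -5.87e-4, 6: 3.4e-5}  # example zonal harmonics (Jupiter-like)
    mu = np.cos(colat_rad)
    series = sum(Jn * (R_eq / r_m) ** n * eval_legendre(n, mu) for n, Jn in J.items())
    return -(GM / r_m) * (1.0 - series)

# Example evaluation just above a Jupiter-sized planet, at 60 degrees colatitude.
print(external_potential(r_m=8.0e7, colat_rad=np.pi / 3, GM=1.26687e17, R_eq=7.1492e7))
```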

  19. Pressure measurements with a precision of 0.001 ppm in magnetic fields at low temperatures

    NASA Astrophysics Data System (ADS)

    Miura, Y.; Matsushima, N.; Ando, T.; Kuno, S.; Inoue, S.; Ito, K.; Mamiya, T.

    1993-11-01

    Pressure measurements made with an ac bridge technique, with a precision of 0.001 ppm in magnetic fields at low temperatures using a Straty-Adams type gauge, are described. In order to improve the sensitivity and the long-term stability of the bridge system, coaxial cables without a dielectric insulator were developed, giving a small temperature coefficient of the cable capacitance and hence of the impedance. This pressure measurement system has a sensitivity of dP/P ≈ 5×10⁻¹⁰ and a long-term stability of dP/P ≈ 2.4×10⁻⁹ over 18 h. It is especially useful for measurements, such as electric and magnetic susceptibility measurements in magnetic fields at low temperatures, that require high precision.

  20. Approaches for the accurate definition of geological time boundaries

    NASA Astrophysics Data System (ADS)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach, which we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age
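
    For readers unfamiliar with the U-Pb method referred to above, the basic age equation is t = ln(1 + ²⁰⁶Pb*/²³⁸U) / λ₂₃₈, with the ²³⁸U decay constant of Jaffey et al. (1971). The isotope ratio in the example below is a made-up illustrative value.

```python
import numpy as np

# U-Pb age equation: t = ln(1 + 206Pb*/238U) / lambda_238
LAMBDA_238 = 1.55125e-10  # 238U decay constant, 1/yr (Jaffey et al. 1971)

def u_pb_age_ma(pb206_u238_ratio):
    return np.log1p(pb206_u238_ratio) / LAMBDA_238 / 1e6  # age in Ma

print(u_pb_age_ma(0.0399))  # ~252 Ma, roughly the Permo-Triassic boundary age
```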

  1. Effect of plot and sample size on timing and precision of urban forest assessments

    Treesearch

    David J. Nowak; Jeffrey T. Walton; Jack C. Stevens; Daniel E. Crane; Robert E. Hoehn

    2008-01-01

    Accurate field data can be used to assess ecosystem services from trees and to improve urban forest management, yet little is known about the optimization of field data collection in the urban environment. Various field and Geographic Information System (GIS) tests were performed to help understand how time costs and precision of tree population estimates change with...

  2. The John Charnley Award: an accurate and sensitive method to separate, display, and characterize wear debris: part 1: polyethylene particles.

    PubMed

    Billi, Fabrizio; Benya, Paul; Kavanaugh, Aaron; Adams, John; Ebramzadeh, Edward; McKellop, Harry

    2012-02-01

    Numerous studies indicate highly crosslinked polyethylenes reduce the wear debris volume generated by hip arthroplasty acetabular liners. This, in turn, requires new methods to isolate and characterize the debris. We describe a method for extracting polyethylene wear particles from the bovine serum typically used in wear tests and for characterizing their size, distribution, and morphology. Serum proteins were completely digested using an optimized enzymatic digestion method that prevented the loss of the smallest particles and minimized their clumping. Density-gradient ultracentrifugation was designed to remove contaminants and recover the particles without filtration, depositing them directly onto a silicon wafer. This provided uniform distribution of the particles and high contrast against the background, facilitating accurate, automated, morphometric image analysis. The accuracy and precision of the new protocol were assessed by recovering and characterizing particles from wear tests of three types of polyethylene acetabular cups (no crosslinking and 5 Mrads and 7.5 Mrads of gamma irradiation crosslinking). The new method demonstrated important differences in the particle size distributions and morphologic parameters among the three types of polyethylene that could not be detected using prior isolation methods. The new protocol overcomes a number of limitations, such as loss of nanometer-sized particles and artifactual clumping, among others. The analysis of polyethylene wear particles produced in joint simulator wear tests of prosthetic joints is a key tool to identify the wear mechanisms that produce the particles and to predict and evaluate their effects on periprosthetic tissues.

  3. Ceric and ferrous dosimeters show precision for 50-5000 rad range

    NASA Technical Reports Server (NTRS)

    Frigerio, N. A.; Henry, V. D.

    1968-01-01

    Ammonium thiocyanate, added to the usual ferrous sulfate dosimeter solution, yielded a very stable, precise and temperature-independent system eight times as sensitive as the classical Fricke system in the 50 to 5000 rad range. The ceric dosimeters, promising for use in mixed radiation fields, respond nearly independently of LET.

  4. System for precise position registration

    DOEpatents

    Sundelin, Ronald M.; Wang, Tong

    2005-11-22

    An apparatus for accurately retaining a precise position, such as for reacquisition of a microscopic spot or feature having a size of 0.1 mm or less, on broad-area surfaces after non-in-situ processing. The apparatus includes a sample and sample holder. The sample holder includes a base and three support posts. Two of the support posts interact with a cylindrical hole and a U-groove in the sample to establish the location of one point on the sample and a line through the sample. Simultaneous contact of the third support post with the surface of the sample defines a plane through the sample. All points of the sample are therefore uniquely defined by the sample and sample holder. The position registration system of the current invention provides accuracy, as measured by x, y repeatability, of at least 140 μm.

  5. Clinical proteomics-driven precision medicine for targeted cancer therapy: current overview and future perspectives.

    PubMed

    Zhou, Li; Wang, Kui; Li, Qifu; Nice, Edouard C; Zhang, Haiyuan; Huang, Canhua

    2016-01-01

    Cancer is a common disease that is a leading cause of death worldwide. Currently, early detection and novel therapeutic strategies are urgently needed for more effective management of cancer. Importantly, protein profiling using clinical proteomic strategies, with spectacular sensitivity and precision, offer excellent promise for the identification of potential biomarkers that would direct the development of targeted therapeutic anticancer drugs for precision medicine. In particular, clinical sample sources, including tumor tissues and body fluids (blood, feces, urine and saliva), have been widely investigated using modern high-throughput mass spectrometry-based proteomic approaches combined with bioinformatic analysis, to pursue the possibilities of precision medicine for targeted cancer therapy. Discussed in this review are the current advantages and limitations of clinical proteomics, the available strategies of clinical proteomics for the management of precision medicine, as well as the challenges and future perspectives of clinical proteomics-driven precision medicine for targeted cancer therapy.

  6. Precision powder feeder

    DOEpatents

    Schlienger, M. Eric; Schmale, David T.; Oliver, Michael S.

    2001-07-10

    A new class of precision powder feeders is disclosed. These feeders provide a precision flow of a wide range of powdered materials, while remaining robust against jamming or damage. These feeders can be precisely controlled by feedback mechanisms.

  7. The Nab Spectrometer, Precision Field Mapping, and Associated Systematic Effects

    NASA Astrophysics Data System (ADS)

    Fry, Jason; Nab Collaboration

    2017-09-01

    The Nab experiment will make precision measurements of a, the e-ν correlation parameter, and b, the Fierz interference term, in neutron beta decay, aiming to deliver an independent determination of the ratio λ = GA/GV to sensitively test CKM unitarity. Nab utilizes a novel, long asymmetric spectrometer to measure the proton TOF and electron energy. We extract a from the slope of the measured TOF distribution for different electron energies. A reliable relation of the measured proton TOF to a requires detailed knowledge of the effective proton path length, which in turn imposes further requirements on the precision of the magnetic fields in the Nab spectrometer. The Nab spectrometer, magnetometry, and associated systematics will be discussed.

  8. Precision Medicine and PET/Computed Tomography in Melanoma.

    PubMed

    Mena, Esther; Sanli, Yasemin; Marcus, Charles; Subramaniam, Rathan M

    2017-10-01

    Recent advances in genomic profiling and sequencing of melanoma have provided new insights into the development of the basis for molecular biology to more accurately subgroup patients with melanoma. The development of novel mutation-targeted and immunomodulation therapy as a major component of precision oncology has revolutionized the management and outcome of patients with metastatic melanoma. PET imaging plays an important role in noninvasively assessing the tumor biological behavior, to guide individualized treatment and assess response to therapy. This review summarizes the recent genomic discoveries in melanoma in the era of targeted therapy and their implications for functional PET imaging. Published by Elsevier Inc.

  9. Development of MMC Gamma Detectors for Precise Characterization of Uranium Isotopes

    NASA Astrophysics Data System (ADS)

    Kim, G. B.; Flynn, C. C.; Kempf, S.; Gastaldo, L.; Fleischmann, A.; Enss, C.; Friedrich, S.

    2018-06-01

    Precise nuclear data from radioactive decays are important for the accurate non-destructive assay of fissile materials in nuclear safeguards. We are developing high energy resolution gamma detectors based on metallic magnetic calorimeters (MMCs) to accurately measure gamma-ray energies and branching ratios of uranium isotopes. Our MMC gamma detectors exhibit good linearity, reproducibility and a consistent response function for low energy gamma-rays. We illustrate the capabilities of MMCs to improve literature values of nuclear data with an analysis of gamma spectra of U-233. In this context, we also improve the value of the energy for the single gamma-ray of the U-233 daughter Ra-225 by over an order of magnitude from 40.09 ± 0.05 to 40.0932 ± 0.0007 keV.

  10. Synthetic Gene Expression Circuits for Designing Precision Tools in Oncology

    PubMed Central

    Re, Angela

    2017-01-01

    Precision medicine in oncology needs to enhance its capabilities to match diagnostic and therapeutic technologies to individual patients. Synthetic biology streamlines the design and construction of functionalized devices through standardization and rational engineering of basic biological elements decoupled from their natural context. Remarkable improvements have opened the prospects for the availability of synthetic devices of enhanced mechanism clarity, robustness, sensitivity, as well as scalability and portability, which might bring new capabilities in precision cancer medicine implementations. In this review, we begin by presenting a brief overview of some of the major advances in the engineering of synthetic genetic circuits aimed to the control of gene expression and operating at the transcriptional, post-transcriptional/translational, and post-translational levels. We then focus on engineering synthetic circuits as an enabling methodology for the successful establishment of precision technologies in oncology. We describe significant advancements in our capabilities to tailor synthetic genetic circuits to specific applications in tumor diagnosis, tumor cell- and gene-based therapy, and drug delivery. PMID:28894736

  11. Exploratory Movement Generates Higher-Order Information That Is Sufficient for Accurate Perception of Scaled Egocentric Distance

    PubMed Central

    Mantel, Bruno; Stoffregen, Thomas A.; Campbell, Alain; Bardy, Benoît G.

    2015-01-01

    Body movement influences the structure of multiple forms of ambient energy, including optics and gravito-inertial force. Some researchers have argued that egocentric distance is derived from inferential integration of visual and non-visual stimulation. We suggest that accurate information about egocentric distance exists in perceptual stimulation as higher-order patterns that extend across optics and inertia. We formalize a pattern that specifies the egocentric distance of a stationary object across higher-order relations between optics and inertia. This higher-order parameter is created by self-generated movement of the perceiver in inertial space relative to the illuminated environment. For this reason, we placed minimal restrictions on the exploratory movements of our participants. We asked whether humans can detect and use the information available in this higher-order pattern. Participants judged whether a virtual object was within reach. We manipulated relations between body movement and the ambient structure of optics and inertia. Judgments were precise and accurate when the higher-order optical-inertial parameter was available. When only optic flow was available, judgments were poor. Our results reveal that participants perceived egocentric distance from the higher-order, optical-inertial consequences of their own exploratory activity. Analysis of participants’ movement trajectories revealed that self-selected movements were complex, and tended to optimize availability of the optical-inertial pattern that specifies egocentric distance. We argue that accurate information about egocentric distance exists in higher-order patterns of ambient energy, that self-generated movement can generate these higher-order patterns, and that these patterns can be detected and used to support perception of egocentric distance that is precise and accurate. PMID:25856410

  12. High-precision x-ray spectroscopy of highly charged ions with microcalorimeters

    NASA Astrophysics Data System (ADS)

    Kraft-Bermuth, S.; Andrianov, V.; Bleile, A.; Echler, A.; Egelhof, P.; Grabitz, P.; Ilieva, S.; Kilbourne, C.; Kiselev, O.; McCammon, D.; Meier, J.

    2013-09-01

    The precise determination of the energy of the Lyman α1 and α2 lines in hydrogen-like heavy ions provides a sensitive test of quantum electrodynamics in very strong Coulomb fields. To improve the experimental precision, the new detector concept of microcalorimeters is now being exploited for such measurements. Such detectors consist of compensated-doped silicon thermistors and Pb or Sn absorbers to obtain high quantum efficiency in the energy range of 40-70 keV, where the Doppler-shifted Lyman lines are located. For the first time, a microcalorimeter was applied in an experiment to precisely determine the transition energy of the Lyman lines of lead ions at the experimental storage ring at GSI. The energy of the Ly-α1 line, E(Ly-α1, 207Pb81+) = (77937 ± 12stat ± 25syst) eV, agrees within error bars with theoretical predictions. To further improve the experimental precision, a new detector array with more pixels and better energy resolution was commissioned and successfully applied in an experiment to determine the Lyman-α lines of gold ions, 197Au78+.

  13. Epidermal devices for noninvasive, precise, and continuous mapping of macrovascular and microvascular blood flow

    PubMed Central

    Webb, R. Chad; Ma, Yinji; Krishnan, Siddharth; Li, Yuhang; Yoon, Stephen; Guo, Xiaogang; Feng, Xue; Shi, Yan; Seidel, Miles; Cho, Nam Heon; Kurniawan, Jonas; Ahad, James; Sheth, Niral; Kim, Joseph; Taylor VI, James G.; Darlington, Tom; Chang, Ken; Huang, Weizhong; Ayers, Joshua; Gruebele, Alexander; Pielak, Rafal M.; Slepian, Marvin J.; Huang, Yonggang; Gorbach, Alexander M.; Rogers, John A.

    2015-01-01

    Continuous monitoring of variations in blood flow is vital in assessing the status of microvascular and macrovascular beds for a wide range of clinical and research scenarios. Although a variety of techniques exist, most require complete immobilization of the subject, thereby limiting their utility to hospital or clinical settings. Those that can be rendered in wearable formats suffer from limited accuracy, motion artifacts, and other shortcomings that follow from an inability to achieve intimate, noninvasive mechanical linkage of sensors with the surface of the skin. We introduce an ultrathin, soft, skin-conforming sensor technology that offers advanced capabilities in continuous and precise blood flow mapping. Systematic work establishes a set of experimental procedures and theoretical models for quantitative measurements and guidelines in design and operation. Experimental studies on human subjects, including validation with measurements performed using state-of-the-art clinical techniques, demonstrate sensitive and accurate assessment of both macrovascular and microvascular flow under a range of physiological conditions. Refined operational modes eliminate long-term drifts and reduce power consumption, thereby providing steps toward the use of this technology for continuous monitoring during daily activities. PMID:26601309

  14. Raman fingerprints of atomically precise graphene nanoribbons

    DOE PAGES

    Verzhbitskiy, Ivan A.; Corato, Marzio De; Ruini, Alice; ...

    2016-02-23

    Bottom-up approaches allow the production of ultranarrow and atomically precise graphene nanoribbons (GNRs) with electronic and optical properties controlled by the specific atomic structure. Combining Raman spectroscopy and ab initio simulations, we show that GNR width, edge geometry, and functional groups all influence their Raman spectra. In particular, the low-energy spectral region below 1000 cm⁻¹ is especially sensitive to edge morphology and functionalization, while the D peak dispersion can be used to uniquely fingerprint the presence of GNRs and differentiate them from other sp² carbon nanostructures.

  15. Accurate determination of interfacial protein secondary structure by combining interfacial-sensitive amide I and amide III spectral signals.

    PubMed

    Ye, Shuji; Li, Hongchun; Yang, Weilai; Luo, Yi

    2014-01-29

    Accurate determination of protein structures at the interface is essential to understand the nature of interfacial protein interactions, but it can only be done with a few, very limited experimental methods. Here, we demonstrate for the first time that sum frequency generation vibrational spectroscopy can unambiguously differentiate the interfacial protein secondary structures by combining surface-sensitive amide I and amide III spectral signals. This combination offers a powerful tool to directly distinguish random-coil (disordered) and α-helical structures in proteins. From a systematic study on the interactions between several antimicrobial peptides (including LKα14, mastoparan X, cecropin P1, melittin, and pardaxin) and lipid bilayers, it is found that the spectral profiles of the random-coil and α-helical structures are well separated in the amide III spectra, appearing below and above 1260 cm(-1), respectively. For the peptides with a straight backbone chain, the strength ratio for the peaks of the random-coil and α-helical structures shows a distinct linear relationship with the fraction of the disordered structure deduced from independent NMR experiments reported in the literature. It is revealed that increasing the fraction of negatively charged lipids can induce a conformational change of pardaxin from random-coil to α-helical structures. This experimental protocol can be employed for determining the interfacial protein secondary structures and dynamics in situ and in real time without extraneous labels.

  16. No Galaxy Left Behind: Accurate Measurements with the Faintest Objects in the Dark Energy Survey

    DOE PAGES

    Suchyta, E.

    2016-01-27

    Accurate statistical measurement with large imaging surveys has traditionally required throwing away a sizable fraction of the data. This is because most measurements have relied on selecting nearly complete samples, where variations in the composition of the galaxy population with seeing, depth, or other survey characteristics are small. We introduce a new measurement method that aims to minimize this wastage, allowing precision measurement for any class of stars or galaxies detectable in an imaging survey. We have implemented our proposal in Balrog, a software package which embeds fake objects in real imaging in order to accurately characterize measurement biases. We also demonstrate this technique with an angular clustering measurement using Dark Energy Survey (DES) data. We first show that recovery of our injected galaxies depends on a wide variety of survey characteristics in the same way as the real data. We then construct a flux-limited sample of the faintest galaxies in DES, chosen specifically for their sensitivity to depth and seeing variations. Using the synthetic galaxies as randoms in the standard Landy-Szalay correlation function estimator suppresses the effects of variable survey selection by at least two orders of magnitude. Our measured angular clustering is then found to be in excellent agreement with that of a matched sample drawn from much deeper, higher-resolution space-based COSMOS imaging; over angular scales of 0.004° < θ < 0.2°, we find a best-fit scaling amplitude between the DES and COSMOS measurements of 1.00 ± 0.09. We expect this methodology to be broadly useful for extending the statistical reach of measurements in a wide variety of coming imaging surveys.
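
    The Landy-Szalay estimator referenced above combines normalised data-data, data-random, and random-random pair counts as w(θ) = (DD - 2DR + RR)/RR. The following is a minimal illustrative sketch in Python, assuming small (ra, dec) catalogues in degrees; the helper names pair_counts and landy_szalay are hypothetical and are not part of the Balrog package, which would simply supply the injected synthetic objects as the random catalogue.

        import numpy as np

        def pair_counts(a, b, bins, cross=False):
            """Normalised angular pair counts between (ra, dec) catalogues in degrees."""
            ra1, dec1 = np.radians(a).T
            ra2, dec2 = np.radians(b).T
            # unit vectors on the celestial sphere
            v1 = np.stack([np.cos(dec1) * np.cos(ra1), np.cos(dec1) * np.sin(ra1), np.sin(dec1)], axis=1)
            v2 = np.stack([np.cos(dec2) * np.cos(ra2), np.cos(dec2) * np.sin(ra2), np.sin(dec2)], axis=1)
            theta = np.degrees(np.arccos(np.clip(v1 @ v2.T, -1.0, 1.0)))
            if cross:
                theta, norm = theta.ravel(), len(a) * len(b)
            else:
                # same catalogue twice: keep each pair once, drop self-pairs
                theta, norm = theta[np.triu_indices(len(a), k=1)], len(a) * (len(a) - 1) / 2
            counts, _ = np.histogram(theta, bins=bins)
            return counts / norm

        def landy_szalay(data, randoms, bins):
            dd = pair_counts(data, data, bins)
            rr = pair_counts(randoms, randoms, bins)
            dr = pair_counts(data, randoms, bins, cross=True)
            return (dd - 2.0 * dr + rr) / rr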

  17. Lidar Systems for Precision Navigation and Safe Landing on Planetary Bodies

    NASA Technical Reports Server (NTRS)

    Amzajerdian, Farzin; Pierrottet, Diego F.; Petway, Larry B.; Hines, Glenn D.; Roback, Vincent E.

    2011-01-01

    The ability of lidar technology to provide three-dimensional elevation maps of the terrain, high precision distance to the ground, and approach velocity can enable safe landing of robotic and manned vehicles with a high degree of precision. Currently, NASA is developing novel lidar sensors aimed at needs of future planetary landing missions. These lidar sensors are a 3-Dimensional Imaging Flash Lidar, a Doppler Lidar, and a Laser Altimeter. The Flash Lidar is capable of generating elevation maps of the terrain that indicate hazardous features such as rocks, craters, and steep slopes. The elevation maps collected during the approach phase of a landing vehicle, at about 1 km above the ground, can be used to determine the most suitable safe landing site. The Doppler Lidar provides highly accurate ground relative velocity and distance data allowing for precision navigation to the landing site. Our Doppler lidar utilizes three laser beams pointed in different directions to measure line of sight velocities and ranges to the ground from altitudes of over 2 km. Throughout the landing trajectory starting at altitudes of about 20 km, the Laser Altimeter can provide very accurate ground relative altitude measurements that are used to improve the vehicle position knowledge obtained from the vehicle navigation system. At altitudes from approximately 15 km to 10 km, either the Laser Altimeter or the Flash Lidar can be used to generate contour maps of the terrain, identifying known surface features such as craters, to perform Terrain Relative Navigation, thus further reducing the vehicle's relative position error. This paper describes the operational capabilities of each lidar sensor and provides a status of their development. Keywords: Laser Remote Sensing, Laser Radar, Doppler Lidar, Flash Lidar, 3-D Imaging, Laser Altimeter, Precision Landing, Hazard Detection

  18. Precision Gamma-Ray Branching Ratios for Long-Lived Radioactive Nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonchev, Anton

    Many properties of the high-energy-density environments in nuclear weapons tests, advanced laser-fusion experiments, the interior of stars, and other astrophysical bodies must be inferred from the resulting long-lived radioactive nuclei that are produced. These radioactive nuclei are most easily and sensitively identified by studying the characteristic gamma rays emitted during decay. Measuring the number of decays via detection of the characteristic gamma rays emitted during the gamma-decay (the gamma-ray branching ratio) of the long-lived fission products is one of the most straightforward and reliable ways to determine the number of fissions that occurred in a nuclear weapon test. The fission products 147Nd, 144Ce, 156Eu, and certain other long-lived isotopes play a crucial role in science-based stockpile stewardship; however, the large uncertainties (about 8%) on the branching ratios measured for these isotopes are currently limiting the usefulness of the existing data [1,2]. We performed highly accurate gamma-ray branching-ratio measurements for a group of high-atomic-number rare earth isotopes to greatly improve the precision and reliability with which the fission yield and reaction products in high-energy-density environments can be determined. We have developed techniques that take advantage of new radioactive-beam facilities, such as DOE's CARIBU located at Argonne National Laboratory, to produce radioactive samples and perform decay spectroscopy measurements. The absolute gamma-ray branching ratios for 147Nd and 144Ce have been determined to better than 2% precision. In addition, high-energy monoenergetic neutron beams from the FN Tandem accelerator at TUNL at Duke University were used to produce 167Tm using the 169Tm(n,3n) reaction. The four-fold-improved branching ratio of 167Tm is now used to measure reaction-in-flight (RIF) neutrons from a burning DT capsule at NIF [10]. This represents the first measurement of RIF neutrons in any laboratory fusion system, and the magnitude

  19. Magnetic resonance imaging for precise radiotherapy of small laboratory animals.

    PubMed

    Frenzel, Thorsten; Kaul, Michael Gerhard; Ernst, Thomas Michael; Salamon, Johannes; Jäckel, Maria; Schumacher, Udo; Krüll, Andreas

    2017-03-01

    Radiotherapy of small laboratory animals (SLA) is often not as precisely applied as in humans. Here we describe the use of a dedicated SLA magnetic resonance imaging (MRI) scanner for precise tumor volumetry, radiotherapy treatment planning, and diagnostic imaging in order to make the experiments more accurate. Different human cancer cells were injected at the lower trunk of pfp/rag2 and SCID mice to allow for local tumor growth. Data from cross-sectional MRI scans were transferred to a clinical treatment planning system (TPS) for humans. Manual palpation of the tumor size was compared with the tumor size calculated by the TPS and with tumor weight at necropsy. As a feasibility study, MRI-based treatment plans were calculated for a clinical 6 MV linear accelerator using a micro multileaf collimator (μMLC). In addition, diagnostic MRI scans were used to investigate animals that fared poorly clinically during the study. MRI is superior in precise tumor volume definition, whereas manual palpation underestimates tumor size. Cross-sectional MRI allows for treatment planning, so that conformal irradiation of mice with a clinical linear accelerator using a μMLC is in principle feasible. Several internal pathologies were detected during the experiment using the dedicated scanner. MRI is a key technology for precise radiotherapy of SLA. The scanning protocols provided are suited for tumor volumetry, treatment planning, and diagnostic imaging. Copyright © 2016. Published by Elsevier GmbH.

  20. Methods for applying accurate digital PCR analysis on low copy DNA samples.

    PubMed

    Whale, Alexandra S; Cowen, Simon; Foy, Carole A; Huggett, Jim F

    2013-01-01

    Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format, dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different template types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA.
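
    The precision arguments above rest on the Poisson statistics that underlie all digital PCR quantification. As a point of reference only (this is not the authors' analysis code, and dpcr_copies is a hypothetical helper), the standard estimate and an approximate confidence interval can be written as:

        import math

        def dpcr_copies(k, n, z=1.96):
            """Copies per reaction from k positive partitions out of n, assuming
            random (Poisson) partitioning of the template: lambda = -ln(1 - k/n)."""
            p = k / n
            lam = -math.log(1.0 - p)                            # mean copies per partition
            se_lam = math.sqrt(p * (1.0 - p) / n) / (1.0 - p)   # delta-method standard error
            return lam * n, (lam - z * se_lam) * n, (lam + z * se_lam) * n

        # e.g. 500 positive partitions out of 20,000:
        # estimate, lower, upper = dpcr_copies(500, 20000)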

  1. Methods for Applying Accurate Digital PCR Analysis on Low Copy DNA Samples

    PubMed Central

    Whale, Alexandra S.; Cowen, Simon; Foy, Carole A.; Huggett, Jim F.

    2013-01-01

    Digital PCR (dPCR) is a highly accurate molecular approach, capable of precise measurements, offering a number of unique opportunities. However, in its current format, dPCR can be limited by the amount of sample that can be analysed and consequently additional considerations such as performing multiplex reactions or pre-amplification can be considered. This study investigated the impact of duplexing and pre-amplification on dPCR analysis by using three different assays targeting a model template (a portion of the Arabidopsis thaliana alcohol dehydrogenase gene). We also investigated the impact of different template types (linearised plasmid clone and more complex genomic DNA) on measurement precision using dPCR. We were able to demonstrate that duplex dPCR can provide a more precise measurement than uniplex dPCR, while applying pre-amplification or varying template type can significantly decrease the precision of dPCR. Furthermore, we also demonstrate that the pre-amplification step can introduce measurement bias that is not consistent between experiments for a sample or assay and so could not be compensated for during the analysis of this data set. We also describe a model for estimating the prevalence of molecular dropout and identify this as a source of dPCR imprecision. Our data have demonstrated that the precision afforded by dPCR at low sample concentration can exceed that of the same template post pre-amplification thereby negating the need for this additional step. Our findings also highlight the technical differences between different template types containing the same sequence that must be considered if plasmid DNA is to be used to assess or control for more complex templates like genomic DNA. PMID:23472156

  2. Improving the photometric precision of IRAC Channel 1

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth J.; Glaccum, William; Hoffmann, William

    2008-07-01

    Planning is underway for a possible post-cryogenic mission with the Spitzer Space Telescope. Only Channels 1 and 2 (3.6 and 4.5 μm) of the Infrared Array Camera (IRAC) will be operational; they will have unmatched sensitivity from 3 to 5 microns until the James Webb Space Telescope is launched. At SPIE Orlando, Mighell described his NASA-funded MATPHOT algorithm for precision stellar photometry and astrometry and presented MATPHOT-based simulations that suggested Channel 1 stellar photometry may be significantly improved by modeling the nonuniform RQE within each pixel, which, when not taken into account in aperture photometry, causes the derived flux to vary according to where the centroid falls within a single pixel (the pixel-phase effect). We analyze archival observations of calibration stars and compare the precision of stellar aperture photometry, with the recommended 1-dimensional and a new 2-dimensional pixel-phase aperture-flux correction, and MATPHOT-based PSF-fitting photometry which accounts for the observed loss of stellar flux due to the nonuniform intrapixel quantum efficiency. We show how the precision of aperture photometry of bright isolated stars corrected with the new 2-dimensional aperture-flux correction function can yield photometry that is almost as precise as that produced by PSF-fitting procedures. This timely research effort is intended to enhance the science return not only of observations already in Spitzer data archive but also those that would be made during the Spitzer Warm Mission.

  3. Modeling the Relationship between Prosodic Sensitivity and Early Literacy

    ERIC Educational Resources Information Center

    Holliman, Andrew; Critten, Sarah; Lawrence, Tony; Harrison, Emily; Wood, Clare; Hughes, David

    2014-01-01

    A growing literature has demonstrated that prosodic sensitivity is related to early literacy development; however, the precise nature of this relationship remains unclear. It has been speculated in recent theoretical models that the observed relationship between prosodic sensitivity and early literacy might be partially mediated by children's…

  4. SU-E-QI-06: Design and Initial Validation of a Precise Capillary Phantom to Test Perfusion Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, R; Iacobucci, G; Khobragade, P

    2014-06-15

    Purpose: To design a precise perfusion phantom mimicking the capillaries of the brain vasculature which could be used to test various perfusion protocols and the algorithms which generate perfusion maps. Methods: A perfusion phantom was designed in Solidworks and built using additive manufacturing. The phantom was cylindrical, with a diameter and height of 20 mm, and contained parallel, touching capillaries of 200 μm or 300 μm that made up the inside volume where flow was allowed. We created a flow loop using a peristaltic pump, and contrast agent was injected manually. Digital subtraction angiographic images and low-contrast images with cone-beam CT were acquired after the contrast was injected. These images were analyzed by our own code in the LabVIEW software, and the time-density curve (TDC), mean transit time (MTT), and time to peak (TTP) were calculated. Results: The perfused area was visible in the cone-beam CT images; however, individual capillaries were not distinguishable. The time-density curve acquired was accurate, sensitive, and repeatable. The parameters MTT and TTP offered by the phantom were very sensitive to slight changes in the TDC shape. Conclusion: We have created a robust calibration model for evaluation of existing perfusion data analysis systems. This approach is extremely sensitive to changes in the flow due to the high temporal resolution and could be used as a gold standard to assist developers in calibrating and testing imaging perfusion systems and software algorithms. Supported by NIH Grant: 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
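
    The time-density curve parameters named above can be illustrated with a simple post-processing sketch (illustrative assumptions only, not the authors' LabVIEW code): TTP taken as the time of the curve maximum, and MTT approximated as the first moment of the baseline-subtracted curve.

        import numpy as np

        def perfusion_parameters(t, density, baseline_frames=5):
            """t: frame times in seconds; density: mean ROI intensity per frame."""
            c = density - density[:baseline_frames].mean()   # remove pre-contrast baseline
            c = np.clip(c, 0.0, None)
            ttp = t[np.argmax(c)]                            # time to peak
            mtt = np.trapz(t * c, t) / np.trapz(c, t)        # first-moment approximation
            return ttp, mtt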

  5. Automatical and accurate segmentation of cerebral tissues in fMRI dataset with combination of image processing and deep learning

    NASA Astrophysics Data System (ADS)

    Kong, Zhenglun; Luo, Junyi; Xu, Shengpu; Li, Ting

    2018-02-01

    Image segmentation plays an important role in medical science. One application is multimodality imaging, especially the fusion of structural imaging with functional imaging, which includes CT, MRI, and new types of imaging technology such as optical imaging to obtain functional images. The fusion process requires precisely extracted structural information in order to register the image to it. Here we used image enhancement and morphometry methods to extract accurate contours of different tissues such as skull, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM) on 5 fMRI head image datasets. We then utilized a convolutional neural network to realize automatic segmentation of the images in a deep learning way. This approach greatly reduced the processing time compared to manual and semi-automatic segmentation and is of great importance in improving speed and accuracy as more and more samples are learned. The contours of the borders of different tissues on all images were accurately extracted and visualized in 3D. This can be used in low-level light therapy and in optical simulation software such as MCVM. We obtained a precise three-dimensional distribution of the brain, which offers doctors and researchers quantitative volume data and detailed morphological characterization for personalized precision medicine of cerebral atrophy/expansion. We hope this technique can bring convenience to medical visualization and personalized medicine.
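
    As an illustration of the deep-learning component (the paper does not publish its architecture, so the layer sizes below are assumptions), a minimal fully convolutional network for slice-wise tissue labelling might look like the following PyTorch sketch:

        import torch.nn as nn

        class TinySegNet(nn.Module):
            """Maps a single-channel MRI slice to per-pixel class logits
            (e.g. background, skull, CSF, grey matter, white matter)."""
            def __init__(self, n_classes=5):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, n_classes, 1),    # 1x1 conv to per-pixel logits
                )

            def forward(self, x):
                return self.net(x)

        # training would minimise nn.CrossEntropyLoss() between logits and label maps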

  6. Some More Sensitive Measures of Sensitivity and Response Bias

    NASA Technical Reports Server (NTRS)

    Balakrishnan, J. D.

    1998-01-01

    In this article, the author proposes a new pair of sensitivity and response bias indices and compares them to other measures currently available, including d' and Beta of signal detection theory. Unlike d' and Beta, these new performance measures do not depend on specific distributional assumptions or assumptions about the transformation from stimulus information to a discrimination judgment. With simulated and empirical data, the new sensitivity index is shown to be more accurate than d' and 16 other indices when these measures are used to compare the sensitivity levels of 2 experimental conditions. Results from a perceptual discrimination experiment demonstrate the feasibility of the new distribution-free bias index and suggest that biases of the type defined within the signal detection theory framework (i.e., the placement of a decision criterion) do not exist, even under an asymmetric payoff manipulation.
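
    For comparison, the classical signal detection indices that the new measures are tested against can be computed directly from hit and false-alarm rates; a minimal sketch follows (the author's distribution-free indices are not reproduced here):

        from scipy.stats import norm

        def dprime_beta(hit_rate, fa_rate):
            """d' = z(H) - z(F); beta = likelihood ratio at the decision criterion."""
            zh, zf = norm.ppf(hit_rate), norm.ppf(fa_rate)
            return zh - zf, norm.pdf(zh) / norm.pdf(zf)

        # e.g. dprime_beta(0.85, 0.20) -> (about 1.88, about 0.83)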

  7. Precision medicine in myasthenia gravis: begin from the data precision

    PubMed Central

    Hong, Yu; Xie, Yanchen; Hao, Hong-Jun; Sun, Ren-Cheng

    2016-01-01

    Myasthenia gravis (MG) is a prototypic autoimmune disease with overt clinical and immunological heterogeneity. The data of MG is far from individually precise now, partially due to the rarity and heterogeneity of this disease. In this review, we provide the basic insights of MG data precision, including onset age, presenting symptoms, generalization, thymus status, pathogenic autoantibodies, muscle involvement, severity and response to treatment based on references and our previous studies. Subgroups and quantitative traits of MG are discussed in the sense of data precision. The role of disease registries and scientific bases of precise analysis are also discussed to ensure better collection and analysis of MG data. PMID:27127759

  8. Precise attitude rate estimation using star images obtained by mission telescope for satellite missions

    NASA Astrophysics Data System (ADS)

    Inamori, Takaya; Hosonuma, Takayuki; Ikari, Satoshi; Saisutjarit, Phongsatorn; Sako, Nobutada; Nakasuka, Shinichi

    2015-02-01

    Recently, small satellites have been employed in various satellite missions such as astronomical observation and remote sensing. During these missions, the attitudes of small satellites should be stabilized to a higher accuracy to obtain accurate science data and images. To achieve precise attitude stabilization, these small satellites should estimate their attitude rate under strict constraints of mass, space, and cost. This research presents a new method for small satellites to precisely estimate their angular rate from blurred star images obtained by a mission telescope, in order to achieve precise attitude stabilization. In this method, the angular velocity is estimated by assessing the quality of a star image, based on how blurred it appears to be. Because the proposed method utilizes existing mission devices, a satellite does not require additional precise rate sensors, which makes it easier to achieve precise stabilization given the strict constraints possessed by small satellites. The research studied the relationship between estimation accuracy and the parameters used to achieve an attitude rate estimation with a precision better than 1 × 10⁻⁶ rad/s. The method can be applied to all attitude sensors that use optical systems, such as sun sensors and star trackers (STTs). Finally, the method is applied to the nano astrometry satellite Nano-JASMINE, and we investigate the problems that are expected to arise with real small satellites by performing numerical simulations.
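
    The geometry behind blur-based rate estimation can be summarized with a simplified relation (an illustration only; the paper's estimator assesses image quality rather than measuring a streak directly): a star smeared over L pixels during an exposure of T seconds at a plate scale of s rad/pixel implies a cross-boresight angular rate of roughly omega = L * s / T.

        def rate_from_blur(streak_length_px, plate_scale_rad_per_px, exposure_s):
            """Rough angular rate implied by a star streak of the given length."""
            return streak_length_px * plate_scale_rad_per_px / exposure_s

        # e.g. a 2-pixel streak at 5e-6 rad/pixel over a 1 s exposure -> 1e-5 rad/s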

  9. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities

    PubMed Central

    Helb, Danica A.; Tetteh, Kevin K. A.; Felgner, Philip L.; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R.; Beeson, James G.; Tappero, Jordan; Smith, David L.; Crompton, Peter D.; Rosenthal, Philip J.; Dorsey, Grant; Drakeley, Christopher J.; Greenhouse, Bryan

    2015-01-01

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual’s recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86–0.93), whereas responses to six antigens accurately estimated an individual’s malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs. PMID:26216993

  10. Novel serologic biomarkers provide accurate estimates of recent Plasmodium falciparum exposure for individuals and communities.

    PubMed

    Helb, Danica A; Tetteh, Kevin K A; Felgner, Philip L; Skinner, Jeff; Hubbard, Alan; Arinaitwe, Emmanuel; Mayanja-Kizza, Harriet; Ssewanyana, Isaac; Kamya, Moses R; Beeson, James G; Tappero, Jordan; Smith, David L; Crompton, Peter D; Rosenthal, Philip J; Dorsey, Grant; Drakeley, Christopher J; Greenhouse, Bryan

    2015-08-11

    Tools to reliably measure Plasmodium falciparum (Pf) exposure in individuals and communities are needed to guide and evaluate malaria control interventions. Serologic assays can potentially produce precise exposure estimates at low cost; however, current approaches based on responses to a few characterized antigens are not designed to estimate exposure in individuals. Pf-specific antibody responses differ by antigen, suggesting that selection of antigens with defined kinetic profiles will improve estimates of Pf exposure. To identify novel serologic biomarkers of malaria exposure, we evaluated responses to 856 Pf antigens by protein microarray in 186 Ugandan children, for whom detailed Pf exposure data were available. Using data-adaptive statistical methods, we identified combinations of antibody responses that maximized information on an individual's recent exposure. Responses to three novel Pf antigens accurately classified whether an individual had been infected within the last 30, 90, or 365 d (cross-validated area under the curve = 0.86-0.93), whereas responses to six antigens accurately estimated an individual's malaria incidence in the prior year. Cross-validated incidence predictions for individuals in different communities provided accurate stratification of exposure between populations and suggest that precise estimates of community exposure can be obtained from sampling a small subset of that community. In addition, serologic incidence predictions from cross-sectional samples characterized heterogeneity within a community similarly to 1 y of continuous passive surveillance. Development of simple ELISA-based assays derived from the successful selection strategy outlined here offers the potential to generate rich epidemiologic surveillance data that will be widely accessible to malaria control programs.

  11. Smart and precise alignment of optical systems

    NASA Astrophysics Data System (ADS)

    Langehanenberg, Patrik; Heinisch, Josef; Stickler, Daniel

    2013-09-01

    For the assembly of any kind of optical system the precise centration of every single element is of particular importance. Classically, the precise alignment of optical components is based on the precise centering of all components to an external axis (usually a high-precision rotary spindle axis). The main drawback of this time-consuming process is that it is significantly sensitive to misalignments of the reference (e.g. the housing) axis. In order to streamline this process, in this contribution we present a novel alignment strategy for the TRIOPTICS OptiCentric® instrument family that directly aligns two elements with respect to each other by measuring the first element's axis and using this axis as the alignment reference, without the detour of considering an external reference. According to the optical design, any axis in the system can be chosen as the target axis. In the case of alignment to a barrel, this axis is measured by using a distance sensor (e.g., the classically used dial indicator). Instead of fine alignment, the obtained data are used for the calculation of its orientation within the setup. Alternatively, the axis of an optical element (single lens or group of lenses) whose orientation is measured with the standard OptiCentric MultiLens concept can be used as a reference. In the instrument's software the decentering of the adjusting element with respect to the calculated axis is displayed in real time and indicated by a target mark that can be used for manual alignment. In addition, the obtained information can also be applied for active and fully automated alignment of lens assemblies with the help of motorized actuators.

  12. Precision Distances with the Tip of the Red Giant Branch Method

    NASA Astrophysics Data System (ADS)

    Beaton, Rachael Lynn; Carnegie-Chicago Hubble Program Team

    2018-01-01

    The Carnegie-Chicago Hubble Program aims to construct a distance ladder that utilizes old stellar populations in the outskirts of galaxies to produce a high precision measurement of the Hubble Constant that is independent of Cepheids. The CCHP uses the tip of the red giant branch (TRGB) method, which is a statistical measurement technique that utilizes the termination of the red giant branch. Two innovations combine to make the TRGB a competitive route to the Hubble Constant: (i) the large-scale measurement of trigonometric parallax by the Gaia mission and (ii) the development of both precise and accurate means of determining the TRGB in both nearby (~1 Mpc) and distant (~20 Mpc) galaxies. Here I will summarize our progress in developing these standardized techniques, focusing on both our edge-detection algorithm and our field selection strategy. Using these methods, the CCHP has determined equally precise (~2%) distances to galaxies in the Local Group (< 1 Mpc) and across the Local Volume (< 20 Mpc). The TRGB is thus an incredibly powerful and straightforward means to determine distances to galaxies of any Hubble Type and has enormous potential for putting any number of astrophysical phenomena on absolute units.
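
    Classical TRGB edge detection locates the sharp rise in the red-giant-branch luminosity function with a first-derivative (Sobel-type) filter. The sketch below is illustrative of that idea only and is not the CCHP pipeline, which adds smoothing, completeness corrections, and careful field selection.

        import numpy as np

        def trgb_magnitude(mags, bin_width=0.05):
            """Return the magnitude at which the binned luminosity function rises fastest."""
            bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
            lf, edges = np.histogram(mags, bins=bins)        # luminosity function
            centers = 0.5 * (edges[:-1] + edges[1:])
            response = np.zeros_like(lf, dtype=float)
            response[1:-1] = lf[2:] - lf[:-2]                # [-1, 0, +1] edge filter
            return centers[np.argmax(response)]              # sharp rise marks the tip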

  13. Influence of Time-Pickoff Circuit Parameters on LiDAR Range Precision

    PubMed Central

    Wang, Hongming; Yang, Bingwei; Huyan, Jiayue; Xu, Lijun

    2017-01-01

    A pulsed time-of-flight (TOF) measurement-based Light Detection and Ranging (LiDAR) system is more effective for medium-to-long range distances. As a key ranging unit, a time-pickoff circuit based on automatic gain control (AGC) and a constant fraction discriminator (CFD) is designed to reduce the walk error and the timing jitter in order to obtain an accurate time interval. Against the Cramér-Rao lower bound (CRLB) and an estimate of the timing jitter, Monte Carlo simulations over four parameters are established to show how the range precision is influenced by pulse amplitude, pulse width, and the attenuation fraction and delay time of the CFD. Experiments were carried out to verify the relationship between the range precision and three of the parameters, excluding pulse width. It can be concluded that two parameters of the ranging circuit (attenuation fraction and delay time) should be selected according to the ranging performance at the minimum pulse amplitude. The attenuation fraction should be selected in the range from 0.2 to 0.6 to achieve high range precision. This selection criterion for the time-pickoff circuit parameters is helpful for the ranging circuit design of a TOF LiDAR system. PMID:29039772
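
    The CFD principle referred to above forms a bipolar signal from a delayed copy of the pulse minus an attenuated copy; its zero crossing is, ideally, independent of pulse amplitude, which is what suppresses the walk error. A minimal sketch follows (the delay and fraction values are placeholders, not the paper's choices):

        import numpy as np

        def cfd_timing(t, v, fraction=0.4, delay_samples=10):
            """Zero-crossing time of v(t - delay) - fraction * v(t) for a positive pulse."""
            delayed = np.roll(v, delay_samples)
            delayed[:delay_samples] = 0.0
            bipolar = delayed - fraction * v
            below = np.where(bipolar < 0)[0]             # circuit is "armed" once negative
            if len(below) == 0:
                return None
            i = below[0]
            j = i + np.argmax(bipolar[i:] >= 0)          # first sample back at or above zero
            frac = -bipolar[j - 1] / (bipolar[j] - bipolar[j - 1])
            return t[j - 1] + frac * (t[j] - t[j - 1])   # sub-sample interpolation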

  14. Nanoscale tailor-made membranes for precise and rapid molecular sieve separation.

    PubMed

    Wang, Jing; Zhu, Junyong; Zhang, Yatao; Liu, Jindun; Van der Bruggen, Bart

    2017-03-02

    The precise and rapid separation of different molecules from aqueous, organic solutions and gas mixtures is critical to many technologies in the context of resource-saving and sustainable development. The strength of membrane-based technologies is well recognized and they are extensively applied as cost-effective, highly efficient separation techniques. Currently, empirical-based approaches, lacking an accurate nanoscale control, are used to prepare the most advanced membranes. In contrast, nanoscale control renders the membrane molecular specificity (sub-2 nm) necessary for efficient and rapid molecular separation. Therefore, as a growing trend in membrane technology, the field of nanoscale tailor-made membranes is highlighted in this review. An in-depth analysis of the latest advances in tailor-made membranes for precise and rapid molecule sieving is given, along with an outlook to future perspectives of such membranes. Special attention is paid to the established processing strategies, as well as the application of molecular dynamics (MD) simulation in nanoporous membrane design. This review will provide useful guidelines for future research in the development of nanoscale tailor-made membranes with a precise and rapid molecular sieve separation property.

  15. Sensitivity of some tests for alcohol abuse : findings in nonalcoholics recovering from intoxication.

    DOT National Transportation Integrated Search

    1983-01-01

    A variety of measurements are sensitive to alcoholism; some may be applicable to screening programs, but more precise knowledge of sensitivity and specificity would help to select a minimal test battery. This study assessed the sensitivity of some te...

  16. Precision Control Module For UV Laser 3D Micromachining

    NASA Astrophysics Data System (ADS)

    Wu, Wen-Hong; Hung, Min-Wei; Chang, Chun-Li

    2011-01-01

    UV laser has been widely used in various micromachining tasks such as micro-scribing or patterning. At present, most of the semiconductor, LED, photovoltaic solar panel, and touch panel industries need UV laser processing systems. However, most UV laser processing applications in these industries utilize two-dimensional (2D) plane processing, and there are tremendous business opportunities still to be developed, such as three-dimensional (3D) structures for micro-electromechanical systems (MEMS) sensors or the precision depth control of indium tin oxide (ITO) thin-film edge insulation in touch panels. This research aims to develop a UV laser 3D micromachining module that can create novel applications for industry. With a specially designed beam expander in the optical system, the focal point of the UV laser can be adjusted quickly and accurately through the optical-path control lens of the laser beam expander. Furthermore, integrated software for the galvanometric scanner and the focal-point adjustment mechanism is developed as well, so as to carry out precise 3D microstructure machining.

  17. Precision Attitude Control for the BETTII Balloon-Borne Interferometer

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Fixsen, Dale J.; Rinehart, Stephen

    2012-01-01

    The Balloon Experimental Twin Telescope for Infrared Interferometry (BETTII) is an 8-meter baseline far-infrared interferometer to fly on a high altitude balloon. Operating at wavelengths of 30-90 microns, BETTII will obtain spatial and spectral information on science targets at angular resolutions down to less than half an arcsecond, a capability unmatched by other far-infrared facilities. This requires attitude control at a level of less than a tenth of an arcsecond, a great challenge for a lightweight balloon-borne system. We have designed a precision attitude determination system to provide gondola attitude knowledge at a level of 2 milliarcseconds at rates up to 100 Hz, with accurate absolute attitude determination at the half-arcsecond level at rates of up to 10 Hz. A multi-stage control system involving rigid body motion and tip-tilt-piston correction provides precision pointing stability to the level required for the far-infrared instrument to perform its spatial/spectral interferometry under open-loop control. We present key aspects of the design of the attitude determination and control system and its development status.

  18. Fully Nonlinear Modeling and Analysis of Precision Membranes

    NASA Technical Reports Server (NTRS)

    Pai, P. Frank; Young, Leyland G.

    2003-01-01

    High precision membranes are used in many current space applications. This paper presents a fully nonlinear membrane theory with forward and inverse analyses of high precision membrane structures. The fully nonlinear membrane theory is derived from Jaumann strains and stresses, exact coordinate transformations, the concept of local relative displacements, and orthogonal virtual rotations. In this theory, energy and Newtonian formulations are fully correlated, and every structural term can be interpreted in terms of vectors. Fully nonlinear ordinary differential equations (ODEs) governing the large static deformations of known axisymmetric membranes under known axisymmetric loading (i.e., forward problems) are presented as first-order ODEs, and a method for obtaining numerically exact solutions using the multiple shooting procedure is shown. A method for obtaining the undeformed geometry of any axisymmetric membrane with a known inflated geometry and a known internal pressure (i.e., inverse problems) is also derived. Numerical results from the forward analysis are verified using results in the literature, and results from the inverse analysis are verified using known exact solutions and solutions from the forward analysis. Results show that the membrane theory and the proposed numerical methods for solving nonlinear forward and inverse membrane problems are accurate.
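
    The shooting idea mentioned above can be illustrated on a generic two-point boundary value problem written as a first-order system. This sketch uses single shooting on a stand-in equation, not the membrane ODEs themselves, and is only meant to show the structure of the approach (the paper itself uses a multiple-shooting procedure).

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        def rhs(t, y):
            # stand-in first-order system: y0' = y1, y1' = -sin(y0)
            return [y[1], -np.sin(y[0])]

        def boundary_residual(slope):
            # integrate from t = 0 with y(0) = 0 and a guessed slope; report the miss at t = 1
            sol = solve_ivp(rhs, (0.0, 1.0), [0.0, slope])
            return sol.y[0, -1] - 1.0                 # target boundary condition y(1) = 1

        slope = brentq(boundary_residual, 0.0, 5.0)   # adjust the guess until the far boundary is hit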

  19. A Method for Assessing the Accuracy of a Photogrammetry System for Precision Deployable Structures

    NASA Technical Reports Server (NTRS)

    Moore, Ashley

    2005-01-01

    The measurement techniques used to validate analytical models of large deployable structures are an integral part of the technology development process and must be precise and accurate. Photogrammetry and videogrammetry are viable, accurate, and unobtrusive methods for measuring such large structures. Photogrammetry uses software to determine the three-dimensional position of a target using camera images. Videogrammetry is based on the same principle, except that a series of timed images is analyzed. This work addresses the accuracy of a digital photogrammetry system used for measurement of large, deployable space structures at JPL. First, photogrammetry tests are performed on a precision space truss test article, and the images are processed using Photomodeler software. The accuracy of the Photomodeler results is determined through comparison with measurements of the test article taken by an external testing group using the VSTARS photogrammetry system. These two measurements are then compared with Australis photogrammetry software that simulates a measurement test to predict its accuracy. The software is then used to study how particular factors, such as camera resolution and placement, affect the system accuracy to help design the setup for the videogrammetry system that will offer the highest level of accuracy for measurement of deploying structures.

  20. ACCURATE CHARACTERIZATION OF HIGH-DEGREE MODES USING MDI OBSERVATIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korzennik, S. G.; Rabello-Soares, M. C.; Schou, J.

    2013-08-01

    discuss their uncertainties and the precision of the ridge-to-mode correction schemes, through a detailed assessment of the sensitivity of the model to its input set. The precision of the ridge-to-mode correction is indicative of any possible residual systematic biases in the inferred mode characteristics. In our conclusions, we address how to further improve these estimates, and the implications for other data sets, like GONG+ and HMI.

  1. Improvements in absolute seismometer sensitivity calibration using local earth gravity measurements

    USGS Publications Warehouse

    Anthony, Robert E.; Ringler, Adam; Wilson, David

    2018-01-01

    The ability to determine both absolute and relative seismic amplitudes is fundamentally limited by the accuracy and precision with which scientists are able to calibrate seismometer sensitivities and characterize their response. Currently, across the Global Seismic Network (GSN), errors in midband sensitivity exceed 3% at the 95% confidence interval and are the least‐constrained response parameter in seismic recording systems. We explore a new methodology utilizing precise absolute Earth gravity measurements to determine the midband sensitivity of seismic instruments. We first determine the absolute sensitivity of Kinemetrics EpiSensor accelerometers to 0.06% at the 99% confidence interval by inverting them in a known gravity field at the Albuquerque Seismological Laboratory (ASL). After the accelerometer is calibrated, we install it in its normal configuration next to broadband seismometers and subject the sensors to identical ground motions to perform relative calibrations of the broadband sensors. Using this technique, we are able to determine the absolute midband sensitivity of the vertical components of Nanometrics Trillium Compact seismometers to within 0.11% and Streckeisen STS‐2 seismometers to within 0.14% at the 99% confidence interval. The technique enables absolute calibrations from first principles that are traceable to National Institute of Standards and Technology (NIST) measurements while providing nearly an order of magnitude more precision than step‐table calibrations.
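
    The relative calibration step, in which the freshly calibrated accelerometer and a broadband seismometer record the same ground motion side by side, can be reduced to a least-squares amplitude ratio. The sketch below is illustrative only (it is not the authors' processing); it assumes both records have already been corrected to ground acceleration using nominal responses and band-passed to the midband.

        import numpy as np

        def sensitivity_correction(ref_accel, test_accel):
            """Factor that multiplies the test sensor's nominal midband sensitivity so
            that it matches the absolutely calibrated reference, from colocated records."""
            return np.dot(test_accel, ref_accel) / np.dot(ref_accel, ref_accel)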

  2. Molecular diagnosis and precision medicine in allergy management.

    PubMed

    Riccio, Anna Maria; De Ferrari, Laura; Chiappori, Alessandra; Ledda, Sabina; Passalacqua, Giovanni; Melioli, Giovanni; Canonica, Giorgio Walter

    2016-11-01

    Precision medicine (PM) can be defined as a structural model aimed at customizing healthcare, with medical decisions/products tailored to an individual patient at a highly detailed level. In this sense, allergy diagnostics based on molecular allergen components allows the patient's IgE repertoire to be accurately defined. The availability of highly specialized singleplexed and multiplexed platforms supports allergists with an advanced diagnostic armamentarium. The therapeutic intervention, driven by the standard diagnostic approach but further supported by these innovative tools, may result, for instance, in a more appropriate prescription of allergen immunotherapy (AIT). In addition, the phenotyping of patients, which may have relevant effects on the treatment strategy, could take advantage of molecular allergy diagnosis.

  3. Precise and absolute measurements of complex third-order optical susceptibility

    NASA Astrophysics Data System (ADS)

    Santran, Stephane; Canioni, Lionel; Cardinal, Thierry; Fargin, Evelyne; Le Flem, Gilles; Rouyer, Claude; Sarger, Laurent

    2000-11-01

    We present precise and absolute measurements of the full complex third-order optical susceptibility of different fused silica and original glasses composed of tellurium, titanium, niobium, and erbium. These materials are designed to be key to applications ranging from high-power laser systems to optoelectronics; their nonlinear index of refraction is a major property and thus must be accurately known. Due to the accuracy and sensitivity of our technique, we have been able to find a large dispersion (more than 30%) of the nonlinear index of fused silica glasses as a function of their processing mode. On the other hand, measurements on tellurium glasses have shown very strong nonlinearities (40 times higher than fused silica), linked to the configurations of their cations and anions. Although the titanium and niobium glasses are less nonlinear, they can be promising matrices for the addition of luminescent entities like erbium, leading to very interesting laser amplification materials. The experimental set-up is a collinear pump-probe (orthogonally polarized) experiment using a transient absorption technique. It is built around a 100-femtosecond laser oscillator. A fast oscillating delay between the pump and the probe allows us to measure the electronic nonlinearity in quasi real time. This experiment has the following specifications: an absolute measurement accuracy below 10%, mainly due to the characterization of the laser parameters; a relative measurement accuracy of 1%; and a resolution of less than 5 × 10⁻²⁴ m²/V² (50 times less than fused silica).

  4. A compact, large-range interferometer for precision measurement and inertial sensing

    NASA Astrophysics Data System (ADS)

    Cooper, S. J.; Collins, C. J.; Green, A. C.; Hoyland, D.; Speake, C. C.; Freise, A.; Mow-Lowry, C. M.

    2018-05-01

    We present a compact, fibre-coupled interferometer with high sensitivity and a large working range. We propose to use this interferometer as a readout mechanism for future inertial sensors, removing a major limiting noise source, and in precision positioning systems. The interferometer's peak sensitivity is 2 × 10⁻¹⁴ m/√Hz at 70 Hz and 7 × 10⁻¹¹ m/√Hz at 10 mHz. If deployed on a GS-13 geophone, the resulting inertial sensing output will be limited by the suspension thermal noise of the reference mass from 10 mHz to 2 Hz.

  5. Cardiac vagal flexibility and accurate personality impressions: Examining a physiological correlate of the good judge.

    PubMed

    Human, Lauren J; Mendes, Wendy Berry

    2018-02-23

    Research has long sought to identify which individuals are best at accurately perceiving others' personalities or are good judges, yet consistent predictors of this ability have been difficult to find. In the current studies, we revisit this question by examining a novel physiological correlate of social sensitivity, cardiac vagal flexibility, which reflects dynamic modulation of cardiac vagal control. We examined whether greater cardiac vagal flexibility was associated with forming more accurate personality impressions, defined as viewing targets more in line with their distinctive self-reported profile of traits, in two studies, including a thin-slice video perceptions study (N = 109) and a dyadic interaction study (N = 175). Across studies, we found that individuals higher in vagal flexibility formed significantly more accurate first impressions of others' more observable personality traits (e.g., extraversion, creativity, warmth). These associations held while including a range of relevant covariates, including cardiac vagal tone, sympathetic activation, and gender. In sum, social sensitivity as indexed by cardiac vagal flexibility is linked to forming more accurate impressions of others' observable traits, shedding light on a characteristic that may help to identify the elusive good judge and providing insight into its neurobiological underpinnings. © 2018 Wiley Periodicals, Inc.

  6. Estimating climate sensitivity from paleo-data.

    NASA Astrophysics Data System (ADS)

    Crowley, T. J.; Hegerl, G. C.

    2003-12-01

    For twenty years, estimates of climate sensitivity from the instrumental record have been between about 1.5-4.5 °C for a doubling of CO2. Various efforts, most notably by J. Hansen and by M. Hoffert and C. Covey, have been made to test this range against paleo-data for the ice age and the Cretaceous, yielding approximately the same range with a "best guess" sensitivity of about 2.0-3.0 °C. Here we re-examine this issue with new paleo-data and also include information for the time period 1000-present. For this latter interval, formal PDFs can for the first time be calculated from paleo-data. Regardless of the time interval examined, we generally find that paleo-sensitivities still fall within the range of about 1.5-4.5 °C. The primary impediments to more precise determinations involve not only uncertainties in forcings but also the paleo-reconstructions. Barring a dramatic breakthrough in reconciling some long-standing differences in the magnitude of paleotemperature estimates for different proxies, the range of paleo-sensitivities will continue to carry this uncertainty. This range can be considered either unsatisfactory or satisfactory. It is unsatisfactory because some may consider it insufficiently precise. It is satisfactory in the sense that the range is both robust and entirely consistent with the range independently estimated from the instrumental record.

  7. Interoceptive sensitivity predicts sensitivity to the emotions of others.

    PubMed

    Terasawa, Yuri; Moriguchi, Yoshiya; Tochizawa, Saiko; Umeda, Satoshi

    2014-01-01

    Some theories of emotion emphasise a close relationship between interoception and subjective experiences of emotion. In this study, we used facial expressions to examine whether interoceptive sensibility modulated emotional experience in a social context. Interoceptive sensibility was measured using the heartbeat detection task. To estimate individual emotional sensitivity, we made morphed photos that ranged between a neutral and an emotional facial expression (i.e., anger, sadness, disgust, and happiness). Recognition rates of particular emotions from these photos were calculated and treated as emotional sensitivity thresholds. Our results indicate that participants with accurate interoceptive awareness are sensitive to the emotions of others, especially for expressions of sadness and happiness. We also found that false responses to sad faces were closely related to an individual's degree of social anxiety. These results suggest that interoceptive awareness modulates the intensity of the subjective experience of emotion and affects individual traits related to emotion processing.

  8. Highly sensitive quantification for human plasma-targeted metabolomics using an amine derivatization reagent.

    PubMed

    Arashida, Naoko; Nishimoto, Rumi; Harada, Masashi; Shimbo, Kazutaka; Yamada, Naoyuki

    2017-02-15

    Amino acids and their related metabolites play important roles in various physiological processes and have consequently become biomarkers for diseases. However, accurate quantification methods have only been established for major compounds, such as amino acids and a limited number of target metabolites. We previously reported a highly sensitive high-throughput method for the simultaneous quantification of amines using 3-aminopyridyl-N-succinimidyl carbamate as a derivatization reagent combined with liquid chromatography-tandem mass spectrometry (LC-MS/MS). Herein, we report the successful development of a practical and accurate LC-MS/MS method to analyze low concentrations of 40 physiological amines in 19 min. Thirty-five of these amines showed good linearity, limits of quantification, accuracy, precision, and recovery characteristics in plasma, with scheduled selected reaction monitoring acquisitions. Plasma samples from 10 healthy volunteers were evaluated using our newly developed method. The results revealed that 27 amines were detected in one of the samples, and that 24 of these compounds could be quantified. Notably, this new method successfully quantified metabolites with high accuracy across three orders of magnitude, with lowest and highest averaged concentrations of 31.7 nM (for spermine) and 18.3 μM (for α-aminobutyric acid), respectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Can free energy calculations be fast and accurate at the same time? Binding of low-affinity, non-peptide inhibitors to the SH2 domain of the src protein

    NASA Astrophysics Data System (ADS)

    Chipot, Christophe; Rozanska, Xavier; Dixit, Surjit B.

    2005-11-01

    The usefulness of free-energy calculations in non-academic environments, in general, and in the pharmaceutical industry, in particular, is a long-debated issue, often considered from the angle of cost/performance criteria. In the context of the rational drug design of low-affinity, non-peptide inhibitors of the SH2 domain of the pp60src tyrosine kinase, the continuing difficulties encountered in the attempt to obtain accurate free-energy estimates are addressed. Free-energy calculations can provide a convincing answer, assuming that two key requirements are fulfilled: (i) thorough sampling of the configurational space is necessary to minimize the statistical error, raising the question of to what extent the computational effort can be reduced without jeopardizing the precision of the free-energy calculation; (ii) the sensitivity of binding free energies to the parameters utilized imposes an appropriate parametrization of the potential energy function, especially for non-peptide molecules that are usually poorly described by multipurpose macromolecular force fields. Employing the free-energy perturbation method, accurate ranking, within ±0.7 kcal/mol, is obtained for four non-peptide mimics of a sequence recognized by the pp60src SH2 domain.
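
    The free-energy perturbation method named above evaluates, in its simplest (Zwanzig) form, an exponential average of the energy difference between the target and reference states over configurations sampled from the reference state. A minimal sketch follows (illustrative only; the sampling and force-field parametrization discussed in the abstract are the hard parts, not this estimator):

        import numpy as np

        def fep_free_energy(delta_u, temperature=300.0):
            """delta_u: U_target - U_reference (kcal/mol) evaluated on reference-state samples."""
            kT = 0.0019872041 * temperature        # Boltzmann constant, kcal/(mol K)
            return -kT * np.log(np.mean(np.exp(-np.asarray(delta_u) / kT)))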

  10. A new, high-precision measurement of the X-ray Cu K α spectrum

    NASA Astrophysics Data System (ADS)

    Mendenhall, Marcus H.; Cline, James P.; Henins, Albert; Hudson, Lawrence T.; Szabo, Csilla I.; Windover, Donald

    2016-03-01

    One of the primary measurement issues addressed with NIST Standard Reference Materials (SRMs) for powder diffraction is that of line position. SRMs for this purpose are certified with respect to lattice parameter, traceable to the SI through precise measurement of the emission spectrum of the X-ray source. Therefore, accurate characterization of the emission spectrum is critical to a minimization of the error bounds on the certified parameters. The presently accepted sources for the SI traceable characterization of the Cu K α emission spectrum are those of Härtwig, Hölzer et al., published in the 1990s. The structure of the X-ray emission lines of the Cu K α complex has been remeasured on a newly commissioned double-crystal instrument, with six-bounce Si (440) optics, in a manner directly traceable to the SI definition of the meter. In this measurement, the entire region from 8020 eV to 8100 eV has been covered with a highly precise angular scale and well-defined system efficiency, providing accurate wavelengths and relative intensities. This measurement is in modest disagreement with reference values for the wavelength of the Kα1 line, and strong disagreement for the wavelength of the Kα2 line.

  11. Precision mechatronics based on high-precision measuring and positioning systems and machines

    NASA Astrophysics Data System (ADS)

    Jäger, Gerd; Manske, Eberhard; Hausotte, Tino; Mastylo, Rostyslav; Dorozhovets, Natalja; Hofmann, Norbert

    2007-06-01

    Precision mechatronics is defined in the paper as the science and engineering of a new generation of high-precision systems and machines. Nanomeasuring and nanopositioning engineering represent important fields of precision mechatronics. Nanometrology is described as today's limit of precision engineering. The problem of how to design nanopositioning machines with uncertainties as small as possible is discussed. The integration of several optical and tactile nanoprobes makes the 3D nanopositioning machine suitable for various tasks, such as long-range scanning probe microscopy, mask and wafer inspection, nanotribology, nanoindentation, free-form surface measurement, as well as measurement of microoptics, precision molds, microgears, ring gauges and small holes.

  12. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity.

    PubMed

    Baird, Emily; Fernandez, Diana C; Wcislo, William T; Warrant, Eric J

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion, a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision are affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike in Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus.

  13. Computational Calorimetry: High-Precision Calculation of Host–Guest Binding Thermodynamics

    PubMed Central

    2015-01-01

    We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van’t Hoff equation. Excellent agreement between the direct and van’t Hoff methods is demonstrated for both host–guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design. PMID:26523125
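
    As a reminder of the indirect route described above, the binding enthalpy follows from the temperature dependence of the binding free energy through the standard van't Hoff relation (generic thermodynamic notation, not equations or data from the paper):

        \ln K(T) = -\frac{\Delta G^\circ(T)}{R T}, \qquad \frac{\mathrm{d}\,\ln K}{\mathrm{d}(1/T)} = -\frac{\Delta H^\circ}{R}

    A linear fit of ln K against 1/T over the simulated temperatures therefore yields an estimate of ΔH° that can be checked against the direct end-point potential energy calculation.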

  14. A portable, stable and precise laser differential refractometer

    NASA Astrophysics Data System (ADS)

    Gong, Xiangjun; Ngai, To; Wu, Chi

    2013-11-01

    In this work, we present a portable laser differential refractometer with real-time detection and high precision based on Snell's law and a 2f-2f optical design. The 2f-2f configuration solves a traditional position-drifting problem of the laser beam and enhances the signal stability: a small pinhole is illuminated by the laser light and imaged onto the detector by a lens placed midway between the pinhole and the detector. However, it also leads to a larger instrument dimension, which limits its applications, while the sensitivity is proportional to the optical path. Therefore, for a portable device based on the 2f-2f design, a combination of a mirror and a lens was developed to minimize the optical path without affecting the 2f-2f design. Our simple and compact design reaches a resolution of 10-6 refractive index units (RIU). Moreover, the dimension of such a modified differential refractometer is significantly reduced so that it is portable. Owing to its real-time detection speed and high precision, this newly developed refractometer is particularly attractive as an independent and ultra-sensitive detector in many research and industrial applications in which there is a time-dependent concentration change, e.g., concentration determination, quality control, and the study of kinetic processes in solution, including adsorption, sedimentation, and dissolution, to name a few.

  15. A portable, stable and precise laser differential refractometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Xiangjun (E-mail: xjgong@cuhk.edu.hk); Ngai, To; Wu, Chi (E-mail: chiwu@cuhk.edu.hk)

    In this work, we present a portable laser differential refractometer with real-time detection and high precision based on Snell's law and a 2f-2f optical design. The 2f-2f configuration solves a traditional position-drifting problem of the laser beam and enhances the signal stability: a small pinhole is illuminated by the laser light and imaged onto the detector by a lens placed midway between the pinhole and the detector. However, it also leads to a larger instrument dimension, which limits its applications, while the sensitivity is proportional to the optical path. Therefore, for a portable device based on the 2f-2f design, a combination of a mirror and a lens was developed to minimize the optical path without affecting the 2f-2f design. Our simple and compact design reaches a resolution of 10-6 refractive index units (RIU). Moreover, the dimension of such a modified differential refractometer is significantly reduced so that it is portable. Owing to its real-time detection speed and high precision, this newly developed refractometer is particularly attractive as an independent and ultra-sensitive detector in many research and industrial applications in which there is a time-dependent concentration change, e.g., concentration determination, quality control, and the study of kinetic processes in solution, including adsorption, sedimentation, and dissolution, to name a few.

  16. Precision Teaching.

    ERIC Educational Resources Information Center

    Couch, Richard W.

    Precision teaching (PT) is an approach to the science of human behavior that focuses on precise monitoring of carefully defined behaviors in an attempt to construct an environmental analysis of that behavior and its controlling variables. A variety of subjects have been used with PT, ranging in academic objectives from beginning reading to college…

  17. Newborn Screening in the Era of Precision Medicine.

    PubMed

    Yang, Lan; Chen, Jiajia; Shen, Bairong

    2017-01-01

    As newborn screening success stories gained general recognition during the past 50 years, scientists quickly discovered diagnostic tests for a host of genetic disorders that could be treated at birth. Outstanding progress in sequencing technologies over the last two decades has made it possible to comprehensively profile newborn screening (NBS) and identify clinically relevant genomic alterations. With the recent rapid developments in whole-genome sequencing (WGS) and whole-exome sequencing (WES), we can screen newborns at the genomic level and direct the appropriate diagnosis to different individuals at the appropriate time, which is also encompassed in the concept of precision medicine. In addition, we can develop novel interventions directed at the molecular characteristics of genetic diseases in newborns. The implementation of genomics in NBS programs would provide an effective premise for identifying the majority of genetic aberrations and would primarily help guide treatment accurately and improve prediction. However, there is some debate associated with the widespread application of genome sequencing in NBS because of major concerns such as clinical analysis, result interpretation, storage of sequencing data, and communication of clinically relevant mutations to pediatricians and parents, along with the ethical, legal, and social implications (so-called ELSI). This review is focused on these critical issues and concerns about the expanding role of genomics in NBS for precision medicine. If WGS or WES is to be incorporated into NBS practice, these challenges should be carefully considered and properly addressed to meet the requirements of genome sequencing in the era of precision medicine.

  18. Accurate Cell Division in Bacteria: How Does a Bacterium Know Where its Middle Is?

    NASA Astrophysics Data System (ADS)

    Howard, Martin; Rutenberg, Andrew

    2004-03-01

    I will discuss the physical principles underlying the acquisition of accurate positional information in bacteria. A good application of these ideas is to the rod-shaped bacterium E. coli, which divides precisely at its cellular midplane. This positioning is controlled by the Min system of proteins. These proteins coherently oscillate from end to end of the bacterium. I will present a reaction-diffusion model that describes the diffusion of the Min proteins and their binding/unbinding from the cell membrane. The system possesses an instability that spontaneously generates the Min oscillations, which control accurate placement of the midcell division site. I will then discuss the role of fluctuations in protein dynamics, and investigate whether fluctuations set optimal protein concentration levels. Finally, I will examine cell division in a different bacterium, B. subtilis, where different physical principles are used to regulate accurate cell division. See: Howard, Rutenberg, de Vet: Dynamic compartmentalization of bacteria: accurate division in E. coli. Phys. Rev. Lett. 87 278102 (2001). Howard, Rutenberg: Pattern formation inside bacteria: fluctuations due to the low copy number of proteins. Phys. Rev. Lett. 90 128102 (2003). Howard: A mechanism for polar protein localization in bacteria. J. Mol. Biol. 335 655-663 (2004).
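
    The talk abstract refers to a reaction-diffusion model of the Min proteins. A minimal one-dimensional explicit finite-difference skeleton of such a model is sketched below in Python; the grid, diffusion constants and linear exchange terms are placeholder assumptions for illustration only, not the published Min kinetics (see the cited Phys. Rev. Lett. papers for the actual membrane binding/unbinding terms).

        import numpy as np

        # Minimal 1D reaction-diffusion integrator (explicit Euler, no-flux ends).
        # The linear exchange terms are placeholders, NOT the published Min model.
        L, N, dt, steps = 4.0, 100, 1e-4, 20000   # cell length (um), grid points, time step (s), steps
        dx = L / N
        D_d, D_e = 2.5, 2.5                       # assumed cytoplasmic diffusion constants (um^2/s)
        k1, k2 = 0.5, 0.3                         # placeholder rate constants (1/s)

        d = 1.0 + 0.01 * np.random.rand(N)        # MinD-like concentration (arbitrary units)
        e = 1.0 + 0.01 * np.random.rand(N)        # MinE-like concentration (arbitrary units)

        def laplacian(u):
            """Second difference with reflecting (no-flux) boundaries."""
            up = np.pad(u, 1, mode='edge')
            return (up[:-2] + up[2:] - 2.0 * u) / dx**2

        for _ in range(steps):
            f_d = k1 * e - k2 * d                 # placeholder reaction terms
            f_e = k2 * d - k1 * e
            d = d + dt * (D_d * laplacian(d) + f_d)
            e = e + dt * (D_e * laplacian(e) + f_e)

        print("MinD-like profile, first five grid cells:", np.round(d[:5], 3))

    With the published Min kinetics (and the membrane/cytosol species they involve) substituted for the placeholder terms, a loop of this form is the kind of integrator that reproduces the pole-to-pole oscillations discussed in the abstract.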

  19. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.

    PubMed

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.

  20. Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification

    PubMed Central

    Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried

    2017-01-01

    A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows. PMID:28125723

  1. One novel type of miniaturization FBG rotation angle sensor with high measurement precision and temperature self-compensation

    NASA Astrophysics Data System (ADS)

    Jiang, Shanchao; Wang, Jing; Sui, Qingmei

    2018-03-01

    In order to achieve rotation angle measurement, a novel type of miniaturized fiber Bragg grating (FBG) rotation angle sensor with high measurement precision and temperature self-compensation is proposed and studied in this paper. The FBG rotation angle sensor mainly contains two core sensing elements (FBG1 and FBG2), a triangular cantilever beam, and a rotation angle transfer element. In theory, the proposed sensor achieves temperature self-compensation through the complementary responses of the two core sensing elements (FBG1 and FBG2), and it has an unbounded angle measurement range with a 2π rad period due to the function of the rotation angle transfer element. After introducing the joint working process, the theoretical calculation model of the FBG rotation angle sensor is established, and a calibration experiment on a prototype is carried out to obtain its measurement performance. The experimental data show that the measurement precision of the FBG rotation angle sensor prototype is 0.2° with excellent linearity, and that the temperature sensitivities of FBG1 and FBG2 are 10 pm/°C and 10.1 pm/°C, respectively. These experimental results confirm that the FBG rotation angle sensor can achieve large-range angle measurement with high precision and temperature self-compensation.
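
    One common differential demodulation scheme consistent with this description (generic symbols, an assumption rather than the paper's exact equations) arranges the two gratings so that the angle-induced strain shifts their Bragg wavelengths in opposite directions while temperature shifts them almost equally:

        \Delta\lambda_1 = K_\varepsilon\,\Delta\varepsilon + K_{T,1}\,\Delta T, \qquad \Delta\lambda_2 = -K_\varepsilon\,\Delta\varepsilon + K_{T,2}\,\Delta T, \qquad \Delta\lambda_1 - \Delta\lambda_2 \approx 2 K_\varepsilon\,\Delta\varepsilon

    Because the reported temperature sensitivities (10 pm/°C and 10.1 pm/°C) are nearly identical, the residual temperature term in the difference signal is small, which is the essence of the claimed self-compensation.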

  2. Precision positioning device

    DOEpatents

    McInroy, John E.

    2005-01-18

    A precision positioning device is provided. The precision positioning device comprises a precision measuring/vibration isolation mechanism. A first plate is provided, with the precision measuring means secured to the first plate. A second plate is secured to the first plate. A third plate is secured to the second plate, with the first plate positioned between the second plate and the third plate. A fourth plate is secured to the third plate, with the second plate positioned between the third plate and the fourth plate. An adjusting mechanism adjusts the position of the first plate, the second plate, the third plate, and the fourth plate relative to each other.

  3. Lamb mode selection for accurate wall loss estimation via guided wave tomography

    NASA Astrophysics Data System (ADS)

    Huthwaite, P.; Ribichini, R.; Lowe, M. J. S.; Cawley, P.

    2014-02-01

    Guided wave tomography offers a method to accurately quantify wall thickness losses in pipes and vessels caused by corrosion. This is achieved using ultrasonic waves transmitted over distances of approximately 1-2 m, which are measured by an array of transducers and then used to reconstruct a map of wall thickness throughout the inspected region. To achieve accurate estimates of remnant wall thickness, it is vital that a suitable Lamb mode is chosen. This paper presents a detailed evaluation of the fundamental modes, S0 and A0, which are of primary interest in guided wave tomography thickness estimates since the higher-order modes do not exist at all thicknesses, and compares their performance using both numerical and experimental data while considering a range of challenging phenomena. The sensitivity of A0 to thickness variations was shown to be superior to that of S0; however, the attenuation of A0 in the presence of liquid loading was much higher than that of S0. A0 was also less sensitive to the presence of surface coatings than S0.

  4. Precision and sensitivity of the measurement of 15N enrichment in D-alanine from bacterial cell walls using positive/negative ion mass spectrometry

    NASA Technical Reports Server (NTRS)

    Tunlid, A.; Odham, G.; Findlay, R. H.; White, D. C.

    1985-01-01

    Sensitive detection of cellular components from specific groups of microbes can be utilized as 'signatures' in the examination of microbial consortia from soils, sediments or biofilms. Utilizing capillary gas chromatography/mass spectrometry and stereospecific derivatizing agents, D-alanine, a component localized in the prokaryotic (bacterial) cell wall, can be detected reproducibly. Enrichments of D-[15N]alanine in E. coli grown with [15N]ammonia can be determined with a precision of 1.0 atom%. Chemical ionization with methane gas and the detection of negative ions (M - HF)- and (M - F or M + H - HF)- formed from the heptafluorobutyryl D-2-butanol ester of D-alanine allowed as little as 8 pg (90 fmol) to be detected reproducibly. This method can be utilized to define the metabolic activity in terms of 15N incorporation at the level of 10(3)-10(4) cells, as a function of the 15N-14N ratio.

  5. Rapid and precise determination of total sulphur in soda-lime-silica glasses.

    PubMed

    Beesley, W J; Chamberlain, B R

    1974-04-01

    A method is described for the determination of total sulphur in small amounts of soda-lime-silica glasses (100 mg or less). The crushed glass is mixed with vanadium pentoxide and decomposed at 1450 degrees under oxygen. The sulphur is quantitatively removed from the glass and determined by a conductometric technique. The method is standardized by accurately injecting sulphur dioxide into the furnace tube. The analysis time is about 10 min and the overall precision (2s) is of the order of 5%.

  6. Developing and implementing a high precision setup system

    NASA Astrophysics Data System (ADS)

    Peng, Lee-Cheng

    The demand for high-precision radiotherapy (HPRT) was first realized in stereotactic radiosurgery using a rigid, invasive stereotactic head frame. Fractionated stereotactic radiotherapy (SRT) with a frameless device was developed along with a growing interest in sophisticated treatment with tight margins and high dose gradients. This dissertation establishes the complete management for HPRT in the process of frameless SRT, including image-guided localization, immobilization, and dose evaluation. The ideal precise positioning system allows for ease of relocation, real-time assessment of patient movement, high accuracy, and no additional dose in daily use. A new image-guided stereotactic positioning system (IGSPS), the Align RT3C 3D surface camera system (ART, VisionRT), which combines 3D surface images and uses a real-time tracking technique, was developed to ensure accurate positioning in the first place. The uncertainties of the current optical tracking system, which causes patient discomfort because it requires bite plates made with the dental impression technique and external markers, are identified. The accuracy and feasibility of ART are validated by comparisons with the optical tracking and cone-beam computed tomography (CBCT) systems. Additionally, an effective daily quality assurance (QA) program for the linear accelerator and multiple IGSPSs is the most important factor in ensuring system performance in daily use. Systematic errors arising from the variety of phantoms, and long measurement times caused by switching phantoms, were identified. We investigated the use of a commercially available daily QA device to improve efficiency and thoroughness. A reasonable action level was established by considering dosimetric relevance and clinic flow. For intricate treatments, the effect of dose deviations caused by setup errors on tumor coverage and toxicity to OARs remains uncertain. The lack of adequate dosimetric simulations based on the true treatment coordinates from

  7. LAI-2000 Accuracy, Precision, and Application to Visual Estimation of Leaf Area Index of Loblolly Pine

    Treesearch

    Jason A. Gatch; Timothy B. Harrington; James P. Castleberry

    2002-01-01

    Leaf area index (LAI) is an important parameter of forest stand productivity that has been used to diagnose stand vigor and potential fertilizer response of southern pines. The LAI-2000 was tested for its ability to provide accurate and precise estimates of LAI of loblolly pine (Pinus taeda L.). To test instrument accuracy, regression was used to...

  8. Sensitive bridge circuit measures conductance of low-conductivity electrolyte solutions

    NASA Technical Reports Server (NTRS)

    Schmidt, K.

    1967-01-01

    A compact bridge circuit provides sensitive and accurate measurements of the conductance of low-conductivity electrolyte solutions. The bridge utilizes a phase-sensitive detector to obtain a deflection of the null indicator that is linear in the measured conductance.

  9. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR solves the problems of long revisit times and low-resolution images. Because it can obtain abundant information flexibly, accurately, and quickly, airborne repeat-pass InSAR is valuable for deformation monitoring of shallow ground. In order to obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from master and slave images, accurate co-registration must be ensured. Because of side looking, repeated observation paths and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences mean that pixels located at identical coordinates in the master and slave images correspond to ground resolution cells of different sizes. The mismatch is most obvious in the long-slant-range parts of the master and slave images. In order to resolve the different pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental results show that the proposed method leads to superior co-registration accuracy.
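
    To see why differing initial slant ranges and flight heights give ground resolution cells of different sizes, a flat-terrain side-looking geometry is enough; the heights, ranges and spacings in the Python sketch below are hypothetical values used only to illustrate the geometry, not the proposed RD-model co-registration method itself.

        import numpy as np

        # For a sensor at height H, slant range R maps to ground range g = sqrt(R^2 - H^2),
        # so a fixed slant-range pixel spacing dR corresponds to a ground spacing of about
        # dR / sin(incidence angle), which differs between master and slave tracks.
        def ground_spacing(R0, H, dR, n_pixels):
            R = R0 + dR * np.arange(n_pixels)      # slant ranges of the pixel centres (m)
            g = np.sqrt(R**2 - H**2)               # ground ranges on flat terrain (m)
            return np.diff(g)                      # per-pixel ground spacing (m)

        master = ground_spacing(R0=3000.0, H=2000.0, dR=1.0, n_pixels=6)   # assumed values
        slave  = ground_spacing(R0=3100.0, H=2100.0, dR=1.0, n_pixels=6)
        print("master ground spacing (m):", np.round(master, 3))
        print("slave  ground spacing (m):", np.round(slave, 3))

    The two print-outs differ even though the slant-range spacing is identical, which is exactly the mismatch that the co-registration method has to absorb.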

  10. Research on precise modeling of buildings based on multi-source data fusion of air to ground

    NASA Astrophysics Data System (ADS)

    Li, Yongqiang; Niu, Lubiao; Yang, Shasha; Li, Lixue; Zhang, Xitong

    2016-03-01

    Aiming at the accuracy problem of precise building modeling, a test study was conducted based on multi-source data for buildings in the same test area, including building-top data from airborne LiDAR, aerial orthophotos, and façade data from vehicle-borne LiDAR. After the top and bottom outlines of the building clusters were accurately extracted, a series of qualitative and quantitative analyses of the 2D interval between the outlines was carried out. The research results provide reliable accuracy support for precise building modeling from fused air and ground multi-source data; at the same time, solutions for key technical problems are discussed.

  11. Precision Medicine: Functional Advancements.

    PubMed

    Caskey, Thomas

    2018-01-29

    Precision medicine was conceptualized on the strength of genomic sequence analysis. High-throughput functional metrics have enhanced sequence interpretation and clinical precision. These technologies include metabolomics, magnetic resonance imaging, and iRhythm (cardiac monitoring), among others. These technologies are discussed and placed in clinical context for the medical specialties of internal medicine, pediatrics, obstetrics, and gynecology. Publications in these fields support the concept of a higher level of precision in identifying disease risk. Precise identification of disease risk has the potential to enable intervention with greater specificity, resulting in disease prevention, an important goal of precision medicine.

  12. Development and validation of a whole-exome sequencing test for simultaneous detection of point mutations, indels and copy-number alterations for precision cancer care

    PubMed Central

    Rennert, Hanna; Eng, Kenneth; Zhang, Tuo; Tan, Adrian; Xiang, Jenny; Romanel, Alessandro; Kim, Robert; Tam, Wayne; Liu, Yen-Chun; Bhinder, Bhavneet; Cyrta, Joanna; Beltran, Himisha; Robinson, Brian; Mosquera, Juan Miguel; Fernandes, Helen; Demichelis, Francesca; Sboner, Andrea; Kluk, Michael; Rubin, Mark A; Elemento, Olivier

    2016-01-01

    We describe Exome Cancer Test v1.0 (EXaCT-1), the first New York State-Department of Health-approved whole-exome sequencing (WES)-based test for precision cancer care. EXaCT-1 uses HaloPlex (Agilent) target enrichment followed by next-generation sequencing (Illumina) of tumour and matched constitutional control DNA. We present a detailed clinical development and validation pipeline suitable for simultaneous detection of somatic point/indel mutations and copy-number alterations (CNAs). A computational framework for data analysis, reporting and sign-out is also presented. For the validation, we tested EXaCT-1 on 57 tumours covering five distinct clinically relevant mutations. Results demonstrated elevated and uniform coverage compatible with clinical testing as well as complete concordance in variant quality metrics between formalin-fixed paraffin embedded and fresh-frozen tumours. Extensive sensitivity studies identified limits of detection threshold for point/indel mutations and CNAs. Prospective analysis of 337 cancer cases revealed mutations in clinically relevant genes in 82% of tumours, demonstrating that EXaCT-1 is an accurate and sensitive method for identifying actionable mutations, with reasonable costs and time, greatly expanding its utility for advanced cancer care. PMID:28781886

  13. An in-line micro-pyrolysis system to remove contaminating organic species for precise and accurate water isotope analysis by spectroscopic techniques

    NASA Astrophysics Data System (ADS)

    Panetta, R. J.; Hsiao, G.

    2011-12-01

    Trace levels of organic contaminants such as short alcohols and terpenoids have been shown to cause spectral interference in water isotope analysis by spectroscopic techniques. The result is degraded precision and accuracy in both δD and δ18O for samples such as beverages, plant extracts or slightly contaminated waters. An initial approach offered by manufacturers is post-processing software that analyzes spectral features to identify and flag contaminated samples. However, it is impossible for this software to accurately reconstruct the water isotope signature, thus it is primarily a metric for data quality. Here, we describe a novel in-line pyrolysis system (Micro-Pyrolysis Technology, MPT) placed just prior to the inlet of a cavity ring-down spectroscopy (CRDS) analyzer that effectively removes interfering organic molecules without altering the isotope values of the water. Following injection of the water sample, N2 carrier gas passes the sample through a micro-pyrolysis tube heated with multiple high temperature elements in an oxygen-free environment. The temperature is maintained above the thermal decomposition threshold of most organic compounds (≤ 900 °C), but well below that of water (~2000 °C). The main products of the pyrolysis reaction are non-interfering species such as elemental carbon and H2 gas. To test the efficacy and applicability of the system, waters of known isotopic composition were spiked with varying amounts of common interfering alcohols (methanol, ethanol, propanol, hexanol, trans-2-hexenol, cis-3-hexanol up to 5 % v/v) and common soluble plant terpenoids (carveol, linalool, geraniol, prenol). Spiked samples with no treatment to remove the organics show strong interfering absorption peaks that adversely affect the δD and δ18O values. However, with the MPT in place, all interfering absorption peaks are removed and the water absorption spectrum is fully restored. As a consequence, the δD and δ18O values also return to their original

  14. Reliability, precision, and gender differences in knee internal/external rotation proprioception measurements.

    PubMed

    Nagai, Takashi; Sell, Timothy C; Abt, John P; Lephart, Scott M

    2012-11-01

    To develop and assess the reliability and precision of knee internal/external rotation (IR/ER) threshold to detect passive motion (TTDPM) and determine if gender differences exist. Test-retest for the reliability/precision and cross-sectional for gender comparisons. University neuromuscular and human performance research laboratory. Ten subjects for the reliability and precision aim. Twenty subjects (10 males and 10 females) for gender comparisons. All TTDPM tests were performed using a multi-mode dynamometer. Subjects performed TTDPM at two knee positions (near IR or ER end-range). Intraclass correlation coefficient (ICC (3,k)) and standard error of measurement (SEM) were used to evaluate the reliability and precision. Independent t-tests were used to compare genders. TTDPM toward IR and ER at two knee positions. Intrasession and intersession reliability and precision were good (ICC=0.68-0.86; SEM=0.22°-0.37°). Females had significantly diminished TTDPM toward IR at IR-test position (males: 0.77°±0.14°, females: 1.18°±0.46°, p=0.021) and TTDPM toward IR at the ER-test position (males: 0.87°±0.13°, females: 1.36°±0.58°, p=0.026). No other significant gender differences were found (p>0.05). The current IR/ER TTDPM methods are reliable and accurate for the test-retest or cross-section research design. Gender differences were found toward IR where the ACL acts as the secondary restraint. Copyright © 2011 Elsevier Ltd. All rights reserved.
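
    For context, the standard error of measurement quoted above is conventionally related to the between-subject variability and the reliability coefficient by the usual psychometric relation (whether the authors used exactly this form is not stated in the abstract):

        \mathrm{SEM} = \mathrm{SD}\,\sqrt{1 - \mathrm{ICC}}

    For example, a hypothetical between-subject SD of 0.65° combined with an ICC of 0.68 gives an SEM of about 0.37°, the upper end of the quoted range.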

  15. High channel count microphone array accurately and precisely localizes ultrasonic signals from freely-moving mice.

    PubMed

    Warren, Megan R; Sangiamo, Daniel T; Neunuebel, Joshua P

    2018-03-01

    An integral component in the assessment of vocal behavior in groups of freely interacting animals is the ability to determine which animal is producing each vocal signal. This process is facilitated by using microphone arrays with multiple channels. Here, we made important refinements to a state-of-the-art microphone array based system used to localize vocal signals produced by freely interacting laboratory mice. Key changes to the system included increasing the number of microphones as well as refining the methodology for localizing and assigning vocal signals to individual mice. We systematically demonstrate that the improvements in the methodology for localizing mouse vocal signals led to an increase in the number of signals detected as well as the number of signals accurately assigned to an animal. These changes facilitated the acquisition of larger and more comprehensive data sets that better represent the vocal activity within an experiment. Furthermore, this system will allow more thorough analyses of the role that vocal signals play in social communication. We expect that such advances will broaden our understanding of social communication deficits in mouse models of neurological disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Identification of precision treatment strategies for relapsed/refractory multiple myeloma by functional drug sensitivity testing.

    PubMed

    Majumder, Muntasir Mamun; Silvennoinen, Raija; Anttila, Pekka; Tamborero, David; Eldfors, Samuli; Yadav, Bhagwan; Karjalainen, Riikka; Kuusanmäki, Heikki; Lievonen, Juha; Parsons, Alun; Suvela, Minna; Jantunen, Esa; Porkka, Kimmo; Heckman, Caroline A

    2017-08-22

    Novel agents have increased survival of multiple myeloma (MM) patients, however high-risk and relapsed/refractory patients remain challenging to treat and their outcome is poor. To identify novel therapies and aid treatment selection for MM, we assessed the ex vivo sensitivity of 50 MM patient samples to 308 approved and investigational drugs. With the results we i) classified patients based on their ex vivo drug response profile; ii) identified and matched potential drug candidates to recurrent cytogenetic alterations; and iii) correlated ex vivo drug sensitivity to patient outcome. Based on their drug sensitivity profiles, MM patients were stratified into four distinct subgroups with varied survival outcomes. Patients with progressive disease and poor survival clustered in a drug response group exhibiting high sensitivity to signal transduction inhibitors. Del(17p) positive samples were resistant to most drugs tested with the exception of histone deacetylase and BCL2 inhibitors. Samples positive for t(4;14) were highly sensitive to immunomodulatory drugs, proteasome inhibitors and several targeted drugs. Three patients treated based on the ex vivo results showed good response to the selected treatments. Our results demonstrate that ex vivo drug testing may potentially be applied to optimize treatment selection and achieve therapeutic benefit for relapsed/refractory MM.

  17. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model requires less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied to high-precision free-INS. PMID:27669261
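
    A minimal sketch of fitting such a two-dimensional second-order polynomial to gridded DOV samples by least squares is given below in Python; the coefficient layout, grid and synthetic values are illustrative assumptions, not the paper's implementation:

        import numpy as np

        # Fit xi(dphi, dlam) ~ c0 + c1*dphi + c2*dlam + c3*dphi^2 + c4*dphi*dlam + c5*dlam^2
        # to deflection-of-the-vertical samples on a dense regional grid (all values synthetic).
        def fit_poly2(dphi, dlam, xi):
            A = np.column_stack([np.ones_like(dphi), dphi, dlam,
                                 dphi**2, dphi*dlam, dlam**2])
            coeffs, *_ = np.linalg.lstsq(A, xi, rcond=None)
            return coeffs

        def eval_poly2(c, dphi, dlam):
            return (c[0] + c[1]*dphi + c[2]*dlam +
                    c[3]*dphi**2 + c[4]*dphi*dlam + c[5]*dlam**2)

        # toy grid of offsets from the region centre (degrees) and a synthetic DOV surface
        gp, gl = np.meshgrid(np.linspace(-0.5, 0.5, 11), np.linspace(-0.5, 0.5, 11))
        dphi, dlam = gp.ravel(), gl.ravel()
        xi = 2.0 + 3.0*dphi - 1.5*dlam + 0.8*dphi**2 + 0.2*dphi*dlam - 0.4*dlam**2
        c = fit_poly2(dphi, dlam, xi)
        print(np.round(c, 3))   # recovers the six coefficients of the synthetic surface

    In navigation, only the handful of coefficients and the region bounds would need to be stored and evaluated on board, which is the storage and run-time advantage over evaluating a high-degree spherical harmonic model directly.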

  18. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For the real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model requires less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied to high-precision free-INS.

  19. Thermoelectric technique to precisely control hyperthermic exposures of human whole blood.

    PubMed

    DuBose, D A; Langevin, R C; Morehouse, D H

    1996-12-01

    The need in military research to avoid exposing humans to harsh environments and reduce animal use requires the development of in vitro models for the study of hyperthermic injury. A thermoelectric module (TEM) system was employed to heat human whole blood (HWB) in a manner similar to that experienced by heat-stroked rats. This system precisely and accurately replicated mild, moderate, and extreme heat-stress exposures. Temperature changes could be monitored without the introduction of a test sample thermistor, which reduced contamination problems. HWB with hematocrits of 45 or 50% had similar heating curves, indicating that the system compensated for differences in sample character. The unit's size permitted its containment within a standard carbon dioxide incubator to further control sample environment. These results indicate that the TEM system can precisely control temperature change in this heat stress in vitro model employing HWB. Information obtained from such a model could contribute to military preparedness.

  20. Improving 14C dating precision in dynamic, brackish waters by one order of magnitude: 87Sr/86Sr isotopes as a quantitative proxy for 14C reservoir age.

    NASA Astrophysics Data System (ADS)

    Lougheed, B.; Davies, G.; Filipsson, H. L.; van der Lubbe, J.; Snowball, I.

    2016-12-01

    Accurate geochronologies are crucial for reconstructing the sensitivity of brackish and estuarine environments to external impacts. A common geochronological method used for such studies is radiocarbon (14C) dating, but its application in brackish environments is severely limited by an inability to quantify spatiotemporal variations in 14C reservoir age, or R(t), due to dynamic interplay between river runoff and marine water in these environments. Additionally, old carbon effects and species-specific behavioural processes also influence 14C ages. Using the world's largest brackish water body (the estuarine Baltic Sea) as a test-bed, combined with a comprehensive approach that objectively excludes both old carbon and species-specific effects, we demonstrate that it is possible to use 87Sr/86Sr ratios to quantify R(t) in ubiquitous mollusc shell material, leading to an almost one order of magnitude increase in Baltic Sea 14C geochronological precision over the current state-of-the-art. We propose that this novel proxy method can be developed for other brackish water bodies worldwide, thereby improving geochronological control in these climate sensitive, near-coastal environments.

  1. Prompt and Precise Prototyping

    NASA Technical Reports Server (NTRS)

    2003-01-01

    For Sanders Design International, Inc., of Wilton, New Hampshire, every passing second between the concept and realization of a product is essential to succeed in the rapid prototyping industry where amongst heavy competition, faster time-to-market means more business. To separate itself from its rivals, Sanders Design aligned with NASA's Marshall Space Flight Center to develop what it considers to be the most accurate rapid prototyping machine for fabrication of extremely precise tooling prototypes. The company's Rapid ToolMaker System has revolutionized production of high quality, small-to-medium sized prototype patterns and tooling molds with an exactness that surpasses that of computer numerically-controlled (CNC) machining devices. Created with funding and support from Marshall under a Small Business Innovation Research (SBIR) contract, the Rapid ToolMaker is a dual-use technology with applications in both commercial and military aerospace fields. The advanced technology provides cost savings in the design and manufacturing of automotive, electronic, and medical parts, as well as in other areas of consumer interest, such as jewelry and toys. For aerospace applications, the Rapid ToolMaker enables fabrication of high-quality turbine and compressor blades for jet engines on unmanned air vehicles, aircraft, and missiles.

  2. Arrival Metering Precision Study

    NASA Technical Reports Server (NTRS)

    Prevot, Thomas; Mercer, Joey; Homola, Jeffrey; Hunt, Sarah; Gomez, Ashley; Bienert, Nancy; Omar, Faisal; Kraut, Joshua; Brasil, Connie; Wu, Minghong, G.

    2015-01-01

    This paper describes the background, method and results of the Arrival Metering Precision Study (AMPS) conducted in the Airspace Operations Laboratory at NASA Ames Research Center in May 2014. The simulation study measured delivery accuracy, flight efficiency, controller workload, and acceptability of time-based metering operations to a meter fix at the terminal area boundary for different resolution levels of the metering delay times displayed to the air traffic controllers and different levels of airspeed information made available to the Time-Based Flow Management (TBFM) system computing the delay. The results show that the resolution of the delay countdown timer (DCT) on the controllers' display has a significant impact on delivery accuracy at the meter fix. The 10-second-rounded and 1-minute-rounded DCT resolutions resulted in more accurate delivery than the 1-minute-truncated resolution and were preferred by the controllers. Using the speeds that controllers entered into the fourth line of the data tag to update the delay computation in TBFM in high- and low-altitude sectors increased air traffic control efficiency and reduced fuel burn for arriving aircraft during time-based metering.
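
    The three display resolutions compared in the study can be illustrated with a short Python sketch; the 95-second delay below is a made-up example, not a value from the simulation:

        # How a raw TBFM delay (in seconds) might appear under the three DCT resolutions studied.
        def dct_displays(delay_s):
            rounded_10s  = round(delay_s / 10.0) * 10    # 10-seconds-rounded display
            rounded_1min = round(delay_s / 60.0) * 60    # 1-minute-rounded display
            trunc_1min   = int(delay_s // 60) * 60       # 1-minute-truncated display
            return rounded_10s, rounded_1min, trunc_1min

        print(dct_displays(95.0))   # -> (100, 120, 60); truncation can hide nearly a minute of delay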

  3. High precision radial velocities with GIANO spectra

    NASA Astrophysics Data System (ADS)

    Carleo, I.; Sanna, N.; Gratton, R.; Benatti, S.; Bonavita, M.; Oliva, E.; Origlia, L.; Desidera, S.; Claudi, R.; Sissa, E.

    2016-06-01

    Radial velocities (RV) measured from near-infrared (NIR) spectra are a potentially excellent tool to search for extrasolar planets around cool or active stars. High resolution infrared (IR) spectrographs now available are reaching the high precision of visible instruments, with a constant improvement over time. GIANO is an infrared echelle spectrograph at the Telescopio Nazionale Galileo (TNG) and it is a powerful tool to provide high resolution spectra for accurate RV measurements of exoplanets and for chemical and dynamical studies of stellar or extragalactic objects. No other high spectral resolution IR instrument has GIANO's capability to cover the entire NIR wavelength range (0.95-2.45 μm) in a single exposure. In this paper we describe the ensemble of procedures that we have developed to measure high precision RVs on GIANO spectra acquired during the Science Verification (SV) run, using the telluric lines as wavelength reference. We used the Cross Correlation Function (CCF) method to determine the velocity for both the star and the telluric lines. For this purpose, we constructed two suitable digital masks that include about 2000 stellar lines, and a similar number of telluric lines. The method is applied to various targets with different spectral type, from K2V to M8 stars. We reached different precisions mainly depending on the H-magnitudes: for H ~ 5 we obtain an rms scatter of ~10 m s-1, while for H ~ 9 the standard deviation increases to ~50-80 m s-1. The corresponding theoretical error expectations are ~4 m s-1 and 30 m s-1, respectively. Finally we provide the RVs measured with our procedure for the targets observed during GIANO Science Verification.
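
    A generic sketch of the binary-mask cross-correlation idea is given below in Python; the toy spectrum, mask lines and window width are assumptions for illustration and do not represent the GIANO pipeline or its digital masks.

        import numpy as np

        C_KMS = 299792.458   # speed of light (km/s)

        def ccf_rv(wave, flux, mask_centres, v_grid_kms, mask_half_width=0.2):
            # Shift the mask line centres over trial velocities and sum the flux falling
            # inside each mask window; absorption lines produce a CCF minimum at the true RV.
            ccf = np.zeros_like(v_grid_kms)
            for i, v in enumerate(v_grid_kms):
                shifted = mask_centres * (1.0 + v / C_KMS)
                for c in shifted:
                    ccf[i] += flux[np.abs(wave - c) < mask_half_width].sum()
            return v_grid_kms[np.argmin(ccf)], ccf

        # toy spectrum: two Gaussian absorption lines red-shifted by 10 km/s
        wave = np.linspace(10000.0, 10100.0, 5000)       # wavelength grid (Angstrom)
        lines = np.array([10020.0, 10070.0])             # rest wavelengths of the mask lines
        flux = np.ones_like(wave)
        for c in lines * (1.0 + 10.0 / C_KMS):
            flux -= 0.5 * np.exp(-0.5 * ((wave - c) / 0.1) ** 2)

        v_best, _ = ccf_rv(wave, flux, lines, np.arange(-50.0, 50.0, 0.5))
        print("recovered RV (km/s):", v_best)            # close to the injected 10 km/s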

  4. Precise positioning method for multi-process connecting based on binocular vision

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Ding, Lichao; Zhao, Kai; Li, Xiao; Wang, Ling; Jia, Zhenyuan

    2016-01-01

    With the rapid development of aviation and aerospace, the demand for metal-coated parts such as antenna reflectors, eddy-current sensors and signal transmitters is increasingly urgent. Such parts, with varied feature dimensions, complex three-dimensional structures, and high geometric accuracy requirements, are generally fabricated by a combination of different manufacturing technologies. However, it is difficult to ensure machining precision because of the connection error between the different processing methods. Therefore, a precise positioning method based on binocular micro stereo vision is proposed in this paper. First, a novel and efficient camera calibration method for the stereoscopic microscope is presented to address the narrow field of view, small depth of focus and numerous nonlinear distortions. Second, extraction algorithms for regular and free-form curves are given, and the spatial position relationship between the micro vision system and the machining system is determined accurately. Third, a precise positioning system based on micro stereo vision is set up and embedded in a CNC machining experimental platform. Finally, a verification experiment of the positioning accuracy is conducted, and the results indicate that the average errors of the proposed method in the X and Y directions are 2.250 μm and 1.777 μm, respectively.

  5. Separation of Platinum from Palladium and Iridium in Iron Meteorites and Accurate High-Precision Determination of Platinum Isotopes by Multi-Collector ICP-MS.

    PubMed

    Hunt, Alison C; Ek, Mattias; Schönbächler, Maria

    2017-12-01

    This study presents a new measurement procedure for the isolation of Pt from iron meteorite samples. The method also allows for the separation of Pd from the same sample aliquot. The separation entails a two-stage anion-exchange procedure. In the first stage, Pt and Pd are separated from each other and from major matrix constituents including Fe and Ni. In the second stage, Ir is reduced with ascorbic acid and eluted from the column before Pt collection. Platinum yields for the total procedure were typically 50-70%. After purification, high-precision Pt isotope determinations were performed by multi-collector ICP-MS. The precision of the new method was assessed using the IIAB iron meteorite North Chile. Replicate analyses of multiple digestions of this material yielded an intermediate precision for the measurement results of 0.73 for ε 192 Pt, 0.15 for ε 194 Pt and 0.09 for ε 196 Pt (2 standard deviations). The NIST SRM 3140 Pt solution reference material was passed through the measurement procedure and yielded an isotopic composition that is identical to the unprocessed Pt reference material. This indicates that the new technique is unbiased within the limit of the estimated uncertainties. Data for three iron meteorites support that Pt isotope variations in these samples are due to exposure to galactic cosmic rays in space.

  6. The influence of number line estimation precision and numeracy on risky financial decision making.

    PubMed

    Park, Inkyung; Cho, Soohyun

    2018-01-10

    This study examined whether different aspects of mathematical proficiency influence one's ability to make adaptive financial decisions. "Numeracy" refers to the ability to process numerical and probabilistic information and is commonly reported as an important factor which contributes to financial decision-making ability. The precision of mental number representation (MNR), measured with the number line estimation (NLE) task has been reported to be another critical factor. This study aimed to examine the contribution of these mathematical proficiencies while controlling for the influence of fluid intelligence, math anxiety and personality factors. In our decision-making task, participants chose between two options offering probabilistic monetary gain or loss. Sensitivity to expected value was measured as an index for the ability to discriminate between optimal versus suboptimal options. Partial correlation and hierarchical regression analyses revealed that NLE precision better explained EV sensitivity compared to numeracy, after controlling for all covariates. These results suggest that individuals with more precise MNR are capable of making more rational financial decisions. We also propose that the measurement of "numeracy," which is commonly used interchangeably with general mathematical proficiency, should include more diverse aspects of mathematical cognition including basic understanding of number magnitude. © 2018 International Union of Psychological Science.
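
    Expected-value sensitivity in such tasks amounts to asking whether participants choose the option with the larger probability-weighted payoff; a toy Python example with hypothetical amounts (not the study's stimuli):

        # Expected value of a simple probabilistic gain: EV = p * amount.
        def expected_value(prob, amount):
            return prob * amount

        opt_a = expected_value(0.40, 50.0)   # 40% chance to win 50
        opt_b = expected_value(0.80, 20.0)   # 80% chance to win 20
        print(opt_a, opt_b, "choose A" if opt_a > opt_b else "choose B")   # 20.0 16.0 choose A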

  7. PubMed had a higher sensitivity than Ovid-MEDLINE in the search for systematic reviews.

    PubMed

    Katchamart, Wanruchada; Faulkner, Amy; Feldman, Brian; Tomlinson, George; Bombardier, Claire

    2011-07-01

    To compare the performance of Ovid-MEDLINE vs. PubMed for identifying randomized controlled trials of methotrexate (MTX) in patients with rheumatoid arthritis (RA). We created search strategies for Ovid-MEDLINE and PubMed for a systematic review of MTX in RA. Their performance was evaluated using sensitivity, precision, and number needed to read (NNR). Comparing searches in Ovid-MEDLINE vs. PubMed, PubMed retrieved more citations overall than Ovid-MEDLINE; however, of the 20 citations that met eligibility criteria for the review, Ovid-MEDLINE retrieved 17 and PubMed 18. The sensitivity was 85% for Ovid-MEDLINE vs. 90% for PubMed, whereas the precision and NNR were comparable (precision: 0.881% for Ovid-MEDLINE vs. 0.884% for PubMed and NNR: 114 for Ovid-MEDLINE vs. 113 for PubMed). In systematic reviews of RA, PubMed has higher sensitivity than Ovid-MEDLINE with comparable precision and NNR. This study highlights the importance of well-designed database-specific search strategies. Copyright © 2010 Elsevier Inc. All rights reserved.
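
    The reported figures can be reproduced from the counts in the abstract with a few lines of Python; the total-retrieved values below are back-calculated from the reported NNRs and are therefore approximate assumptions rather than numbers stated in the abstract.

        # 20 eligible trials; Ovid-MEDLINE retrieved 17 of them, PubMed 18.
        def search_metrics(relevant_retrieved, total_relevant, total_retrieved):
            sensitivity = relevant_retrieved / total_relevant   # recall
            precision = relevant_retrieved / total_retrieved    # positive predictive value
            nnr = 1.0 / precision                               # number needed to read
            return sensitivity, precision, nnr

        ovid = search_metrics(17, 20, 17 * 114)    # total retrieved assumed from NNR ~ 114
        pubmed = search_metrics(18, 20, 18 * 113)  # total retrieved assumed from NNR ~ 113
        print("Ovid-MEDLINE: sensitivity %.0f%%, NNR %.0f" % (ovid[0] * 100, ovid[2]))
        print("PubMed:       sensitivity %.0f%%, NNR %.0f" % (pubmed[0] * 100, pubmed[2]))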

  8. Ion traps for precision experiments at rare-isotope-beam facilities

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Anna

    2016-09-01

    Ion traps first entered experimental nuclear physics when the ISOLTRAP team demonstrated Penning trap mass spectrometry of radionuclides. From then on, the demand for ion traps has grown at radioactive-ion-beam (RIB) facilities since beams can be tailored for the desired experiment. Ion traps have been deployed for beam preparation, from bunching (thereby allowing time coincidences) to beam purification. Isomerically pure beams needed for nuclear-structure investigations can be prepared for trap-assisted or in-trap decay spectroscopy. The latter permits studies of highly charged ions for stellar evolution, which would be impossible with traditional experimental nuclear-physics methods. Moreover, the textbook-like conditions and advanced ion manipulation - even of a single ion - permit high-precision experiments. Consequently, the most accurate and precise mass measurements are now performed in Penning traps. After a brief introduction to ion trapping, I will focus on examples which showcase the versatility and utility of the technique at RIB facilities. I will demonstrate how this atomic-physics technique has been integrated into nuclear science, accelerator physics, and chemistry. DOE.

  9. Rapid and sensitive gas chromatography ion-trap mass spectrometry method for the determination of tobacco specific N-nitrosamines in secondhand smoke

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SLEIMAN, Mohamad; MADDALENA, Randy L.; GUNDEL, Lara A.

    Tobacco-specific nitrosamines (TSNAs) are some of the most potent carcinogens in tobacco and cigarette smoke. Accurate quantification of these chemicals is needed to help assess public health risks. We developed and validated a specific and sensitive method to measure four TSNAs in both the gas and particle phases of secondhand smoke (SHS) using gas chromatography and ion-trap tandem mass spectrometry. A smoking machine in an 18-m3 room-sized chamber generated relevant concentrations of SHS that were actively sampled on Teflon-coated fiber glass (TCFG) filters and passively sampled on cellulose substrates. A simple solid-liquid extraction protocol using methanol as solvent was successfully applied to both filters, with high recoveries ranging from 85 to 115 percent. Tandem MS parameters were optimized to obtain the best sensitivity in terms of signal-to-noise ratio (S/N) for the target compounds. For each TSNA, the major fragmentation pathways as well as ion structures were elucidated and compared with previously published data. The method showed excellent performance, with a linear dynamic range between 2 and 1000 ng mL-1, low detection limits (S/N > 3) of 30-300 pg mL-1, and precision with experimental errors below 10 percent for all compounds. Moreover, no interfering peaks were observed, indicating a high selectivity of MS/MS without the need for a sample clean-up step. The sampling and analysis method provides a sensitive and accurate tool to detect and quantify traces of TSNAs in SHS-polluted indoor environments.

  10. Precise positioning with sparse radio tracking: How LRO-LOLA and GRAIL enable future lunar exploration

    NASA Astrophysics Data System (ADS)

    Mazarico, E.; Goossens, S. J.; Barker, M. K.; Neumann, G. A.; Zuber, M. T.; Smith, D. E.

    2017-12-01

    Two recent NASA missions to the Moon, the Lunar Reconnaissance Orbiter (LRO) and the Gravity Recovery and Interior Laboratory (GRAIL), have obtained highly accurate information about the lunar shape and gravity field. These global geodetic datasets resolve long-standing issues with mission planning; the tidal lock of the Moon long prevented collection of accurate gravity measurements over the farside, and deteriorated precise positioning of topographic data. We describe key datasets and results from the LRO and GRAIL mission that are directly relevant to future lunar missions. SmallSat and CubeSat missions especially would benefit from these recent improvements, as they are typically more resource-constrained. Even with limited radio tracking data, accurate knowledge of topography and gravity enables precise orbit determination (OD) (e.g., limiting the scope of geolocation and co-registration tasks) and long-term predictions of altitude (e.g., dramatically reducing uncertainties in impact time). With one S-band tracking pass per day, LRO OD now routinely achieves total position knowledge better than 10 meters and radial position knowledge around 0.5 meter. Other tracking data, such as Laser Ranging from Earth-based SLR stations, can further support OD. We also show how altimetry can be used to substantially improve orbit reconstruction with the accurate topographic maps now available from Lunar Orbiter Laser Altimeter (LOLA) data. We present new results with SELENE extended mission and LRO orbits processed with direct altimetry measurements. With even a simple laser altimeter onboard, high-quality OD can be achieved for future missions because of the datasets acquired by LRO and GRAIL, without the need for regular radio contact. Onboard processing of altimetric ranges would bring high-quality real-time position knowledge to support autonomous operation. We also describe why optical ranging transponders are ideal payloads for future lunar missions, as they can

  11. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity.

    PubMed

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    Finding fungicides effective against Fusarium and using appropriate chemical agents are key steps in chemical plant protection. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming, and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate, and effective methods for detecting Fusarium resistance to fungicides is urgently required. The MT2 microplate (Biolog(TM)) method is traditionally used for bacteria identification and the evaluation of their ability to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the effectiveness of the approach by comparison between the traditional hole-plate and MT2 microplate assays. In the presented study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim), and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. In this study, the particular concentrations of each fungicide were loaded into MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as a control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  12. Fast and Accurate Microplate Method (Biolog MT2) for Detection of Fusarium Fungicides Resistance/Sensitivity

    PubMed Central

    Frąc, Magdalena; Gryta, Agata; Oszust, Karolina; Kotowicz, Natalia

    2016-01-01

    Finding fungicides effective against Fusarium and using appropriate chemical agents are key steps in chemical plant protection. Existing, conventional methods for evaluating the resistance of Fusarium isolates to fungicides are costly, time-consuming, and potentially environmentally harmful due to the use of large amounts of potentially toxic chemicals. Therefore, the development of fast, accurate, and effective methods for detecting Fusarium resistance to fungicides is urgently required. The MT2 microplate (BiologTM) method is traditionally used for bacteria identification and the evaluation of their ability to utilize different carbon substrates. However, to the best of our knowledge, there are no reports concerning the use of this technical tool to determine the fungicide resistance of Fusarium isolates. For this reason, the objectives of this study were to develop a fast method for detecting Fusarium resistance to fungicides and to validate the effectiveness of the approach by comparison between the traditional hole-plate and MT2 microplate assays. In the presented study, the MT2 microplate-based assay was evaluated for potential use as an alternative resistance detection method. This was carried out using three commercially available fungicides containing the following active substances: triazoles (tebuconazole), benzimidazoles (carbendazim), and strobilurins (azoxystrobin), in six concentrations (0, 0.0005, 0.005, 0.05, 0.1, 0.2%), for nine selected Fusarium isolates. In this study, the particular concentrations of each fungicide were loaded into MT2 microplate wells. The wells were inoculated with Fusarium mycelium suspended in PM4-IF inoculating fluid. Before inoculation, the suspension was standardized for each isolate to 75% transmittance. The traditional hole-plate method was used as a control assay. The fungicide concentrations in the control method were the following: 0, 0.0005, 0.005, 0.05, 0.5, 1, 2, 5, 10, 25, and 50%. Strong relationships between MT2 microplate and traditional hole

  13. Precise attitude determination of defunct satellite laser ranging targets

    NASA Astrophysics Data System (ADS)

    Pittet, Jean-Noel; Schildknecht, Thomas; Silha, Jiri

    2016-07-01

    The Satellite Laser Ranging (SLR) technique is used to determine the dynamics of objects equipped with so-called retro-reflectors or retro-reflector arrays (RRA). This type of measurement allows ranging to the spacecraft with very high precision, which leads to the determination of very accurate orbits. Non-active spacecraft, which are no longer attitude-controlled, tend to start to spin or tumble under the influence of external and internal torques. Such spinning can occur around one constant axis of rotation, or it can be more complex when precession and nutation motions are also present. The rotation of the RRA around the spacecraft's centre of mass can create both an oscillation pattern in the laser range signal and a periodic signal interruption when the RRA is hidden behind the satellite. In our work we will demonstrate how SLR ranging to cooperative targets can be used to determine their attitude state precisely. The processing of the obtained data will be discussed, as well as the attitude determination based on parameter estimation. Continuous SLR measurements to one target allow accurate monitoring of attitude changes over time, which can further be used for future attitude modelling. We will show our solutions for the attitude states of the non-active ESA satellite ENVISAT, determined from measurements acquired during 2013-2015 by the Zimmerwald SLR station, Switzerland. The angular momentum shows a stable behaviour with respect to the orbital plane but is not aligned with the orbital momentum. The determination of the inertial rotation period over time shows it evolving from 130 to 190 seconds within two years. Parameter estimation also brings a strong indication of a retrograde rotation. Results for other former satellites in low and medium Earth orbit, such as TOPEX/Poseidon or GLONASS-type satellites, will also be presented.

  14. Notes From the Field: Secondary Task Precision for Cognitive Load Estimation During Virtual Reality Surgical Simulation Training.

    PubMed

    Rasmussen, Sebastian R; Konge, Lars; Mikkelsen, Peter T; Sørensen, Mads S; Andersen, Steven A W

    2016-03-01

    Cognitive load (CL) theory suggests that working memory can be overloaded in complex learning tasks such as surgical technical skills training, which can impair learning. Valid and feasible methods for estimating the CL in specific learning contexts are necessary before the efficacy of CL-lowering instructional interventions can be established. This study aims to explore secondary task precision for the estimation of CL in virtual reality (VR) surgical simulation and also investigate the effects of CL-modifying factors such as simulator-integrated tutoring and repeated practice. Twenty-four participants were randomized for visual assistance by a simulator-integrated tutor function during the first 5 of 12 repeated mastoidectomy procedures on a VR temporal bone simulator. Secondary task precision was found to be significantly lower during simulation compared with nonsimulation baseline, p < .001. Contrary to expectations, simulator-integrated tutoring and repeated practice did not have an impact on secondary task precision. This finding suggests that even though considerable changes in CL are reflected in secondary task precision, it lacks sensitivity. In contrast, secondary task reaction time could be more sensitive, but requires substantial postprocessing of data. Therefore, future studies on the effect of CL modifying interventions should weigh the pros and cons of the various secondary task measurements. © The Author(s) 2015.

  15. Accurate mass and velocity functions of dark matter haloes

    NASA Astrophysics Data System (ADS)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10^11 M⊙, with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc^3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias, and the halo mass function. We obtain a very accurate (<2 per cent level) model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z < 2.3 to push for the development of halo occupation distributions using Vmax. The data and the analysis code are made publicly available in the Skies and Universes database.
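
    For context, models of the distinct halo mass function such as the one fitted here are conventionally written in the excursion-set form (the abstract does not give the specific parameterization, so this is only the generic template):

        \frac{dn}{d\ln M} \;=\; f(\sigma)\,\frac{\bar{\rho}_m}{M}\,\left|\frac{d\ln \sigma^{-1}}{d\ln M}\right| ,

    where \sigma(M, z) is the rms fluctuation of the linear density field smoothed on mass scale M, \bar{\rho}_m is the mean matter density, and f(\sigma) is the multiplicity function whose parameters are calibrated against the simulations.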

  16. Precise Temperature Measurement for Increasing the Survival of Newborn Babies in Incubator Environments

    PubMed Central

    Frischer, Robert; Penhaker, Marek; Krejcar, Ondrej; Kacerovsky, Marian; Selamat, Ali

    2014-01-01

    Precise temperature measurement is essential in a wide range of applications in the medical environment; however, regarding the problem of temperature measurement inside a simple incubator, neither a simple nor a low-cost solution has been proposed yet. Given that standard temperature sensors do not satisfy the necessary expectations, the problem is not measuring temperature, but rather achieving the desired sensitivity. In response, this paper introduces a novel hardware design, as well as its implementation, that increases measurement sensitivity in defined temperature intervals at low cost. PMID:25494352

  17. Note: Precise radial distribution of charged particles in a magnetic guiding field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Backe, H., E-mail: backe@kph.uni-mainz.de

    2015-07-15

    Current high-precision beta decay experiments with polarized neutrons, employing magnetic guiding fields in combination with position-sensitive and energy-dispersive detectors, have resulted in a detailed study of the mono-energetic point spread function (PSF) for a homogeneous magnetic field. A PSF describes the radial probability distribution of mono-energetic electrons at the detector plane emitted from a point-like source. With regard to accuracy considerations, unwanted singularities occur as a function of the radial detector coordinate, which have recently been investigated by subdividing the radial coordinate into small bins or employing analytical approximations. In this note, a series expansion of the PSF is presented which can be evaluated numerically with arbitrary precision.

  18. Sensitive and Flexible Polymeric Strain Sensor for Accurate Human Motion Monitoring

    PubMed Central

    Khan, Hassan; Kottapalli, Ajay; Asadnia, Mohsen

    2018-01-01

    Flexible electronic devices offer the capability to integrate with and adapt to the human body. These devices are mountable on surfaces with various shapes, which allows us to attach them to clothes or directly onto the body. This paper suggests a facile fabrication strategy via electrospinning to develop a stretchable and sensitive poly(vinylidene fluoride) (PVDF) nanofibrous strain sensor for human motion monitoring. A complete characterization of the single PVDF nanofiber has been performed. The change in charge generated by the electrospun PVDF strain sensor was employed as a parameter to control the finger motion of a robotic arm. As a proof of concept, we developed a smart glove with five sensors integrated into it to detect finger motion and transfer it to a robotic hand. Our results show that the proposed strain sensors are able to detect tiny finger motions and successfully drive the robotic hand. PMID:29389851

  19. Precision Voltage Referencing Techniques in MOS Technology.

    NASA Astrophysics Data System (ADS)

    Song, Bang-Sup

    With the increasing complexity of functions on a single MOS chip, precision analog circuits implemented in the same technology are in great demand so that they can be integrated together with digital circuits. The future development of MOS data acquisition systems will require precision on-chip MOS voltage references. This dissertation probes the two most promising configurations of on-chip voltage references in both NMOS and CMOS technologies. In NMOS, the effect of ion implantation on the temperature behavior of MOS devices is investigated to identify the fundamental factors limiting the use of a threshold-voltage difference as an NMOS voltage source. For this kind of voltage reference, a temperature stability on the order of 20 ppm/°C is achievable with a shallow single-threshold implant and low-current, high-body-bias operation. In CMOS, a monolithic prototype bandgap reference is designed, fabricated, and tested which embodies curvature compensation and exhibits a minimized sensitivity to process parameter variation. Experimental results imply that an average temperature stability on the order of 10 ppm/°C, with a production spread of less than 10 ppm/°C, is feasible over the commercial temperature range.
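
    As a point of reference, a first-order bandgap reference sums a base-emitter voltage (negative temperature coefficient, roughly -2 mV/°C) with a scaled PTAT term so that the linear drifts cancel; this is the textbook relation, not the specific circuit of the dissertation:

        V_{\mathrm{REF}} \;\approx\; V_{BE} \;+\; M\,\frac{kT}{q}\ln N ,
        \qquad
        \frac{\partial V_{\mathrm{REF}}}{\partial T} \;\approx\; \frac{\partial V_{BE}}{\partial T} \;+\; M\,\frac{k}{q}\ln N \;\approx\; 0 ,

    where N is the emitter-area (or current-density) ratio and M the gain applied to the PTAT voltage. Curvature compensation, as used in the prototype, then targets the residual higher-order temperature dependence of V_{BE}.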

  20. Molecular profiling of sarcomas: new vistas for precision medicine.

    PubMed

    Al-Zaid, Tariq; Wang, Wei-Lien; Somaiah, Neeta; Lazar, Alexander J

    2017-08-01

    Sarcoma is a large and heterogeneous group of malignant mesenchymal neoplasms with significant histological overlap. Accurate diagnosis can be challenging yet important for selecting the appropriate treatment approach and prognosis. The currently torrid pace of new genomic discoveries aids our classification and diagnosis of sarcomas, understanding of pathogenesis, development of new medications, and identification of alterations that predict prognosis and response to therapy. Unfortunately, demonstrating effective targets for precision oncology has been elusive in most sarcoma types. The list of potential targets greatly outnumbers the list of available inhibitors at the present time. This review will discuss the role of molecular profiling in sarcomas in general with emphasis on selected entities with particular clinical relevance.

  1. Flight control and landing precision in the nocturnal bee Megalopta is robust to large changes in light intensity

    PubMed Central

    Baird, Emily; Fernandez, Diana C.; Wcislo, William T.; Warrant, Eric J.

    2015-01-01

    Like their diurnal relatives, Megalopta genalis use visual information to control flight. Unlike their diurnal relatives, however, they do this at extremely low light intensities. Although Megalopta has developed optical specializations to increase visual sensitivity, theoretical studies suggest that this enhanced sensitivity does not enable them to capture enough light to use visual information to reliably control flight in the rainforest at night. It has been proposed that Megalopta gain extra sensitivity by summing visual information over time. While enhancing the reliability of vision, this strategy would decrease the accuracy with which they can detect image motion—a crucial cue for flight control. Here, we test this temporal summation hypothesis by investigating how Megalopta's flight control and landing precision is affected by light intensity and compare our findings with the results of similar experiments performed on the diurnal bumblebee Bombus terrestris, to explore the extent to which Megalopta's adaptations to dim light affect their precision. We find that, unlike Bombus, light intensity does not affect flight and landing precision in Megalopta. Overall, we find little evidence that Megalopta uses a temporal summation strategy in dim light, while we find strong support for the use of this strategy in Bombus. PMID:26578977

  2. A novel design of the high-precision magnetic locator with three-dimension measurement capability applying dynamically sensing mechanism

    NASA Astrophysics Data System (ADS)

    Huang, Wen-Nan; Chen, Po-Shen; Chen, Mu-Ping; Teng, Ching-Cheng

    2006-09-01

    A novel design of a magnetic locator for obtaining high-precision measurement information on a variety of buried metal pipes is presented in this paper. The concept of a dynamic sensing mechanism, including vibrating and moving devices, proposed herein is a simple and effective way to improve the precision of three-dimensional location sensing for underground utilities. Based on the basic magnetism of Lenz's law and Faraday's law, amplification of the sensed magnetic signals, as well as their discrimination by simple filtering algorithms embedded in the processing programs, is achieved even when relatively strong noise is present. The verification results of these integrated designs demonstrate the effectiveness through both precise locating of the buried utility and accurate measurement of its depth.
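
    The sensing principle cited above is the textbook induction law: the voltage induced in an N-turn pick-up coil is

        \varepsilon \;=\; -\,N\,\frac{d\Phi_B}{dt} ,

    so vibrating or moving the coil modulates d\Phi_B/dt, which both amplifies the detected signal and shifts it away from the quasi-static background that the filtering algorithms then reject. (This is the general Faraday/Lenz relation, not a formula taken from the paper.)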

  3. Engineering the Mechanical Properties of Polymer Networks with Precise Doping of Primary Defects.

    PubMed

    Chan, Doreen; Ding, Yichuan; Dauskardt, Reinhold H; Appel, Eric A

    2017-12-06

    Polymer networks are extensively utilized across numerous applications ranging from commodity superabsorbent polymers and coatings to high-performance microelectronics and biomaterials. For many applications, desirable properties are known; however, achieving them has been challenging. Additionally, the accurate prediction of elastic modulus has been a long-standing difficulty owing to the presence of loops. By tuning the prepolymer formulation through precise doping of monomers, specific primary network defects can be programmed into an elastomeric scaffold, without alteration of their resulting chemistry. The addition of these monomers that respond mechanically as primary defects is used both to understand their impact on the resulting mechanical properties of the materials and as a method to engineer the mechanical properties. Indeed, these materials exhibit identical bulk and surface chemistry, yet vastly different mechanical properties. Further, we have adapted the real elastic network theory (RENT) to the case of primary defects in the absence of loops, thus providing new insights into the mechanism for material strength and failure in polymer networks arising from primary network defects, and allowing accurate prediction of the elastic modulus of the polymer system. The versatility of the approach we describe and the fundamental knowledge gained from this study can lead to new advancements in the development of novel materials with precisely defined and predictable chemical, physical, and mechanical properties.
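
    As a baseline for how defect doping changes stiffness, classical rubber elasticity estimates the shear modulus of a network as roughly proportional to the density of elastically effective strands,

        G \;\approx\; \nu_{\mathrm{eff}}\,k_B T ,

    so replacing cross-linkable monomers with ones that act as primary defects lowers \nu_{\mathrm{eff}} and hence the modulus without changing the chemistry. This is only the textbook affine-network estimate, not the adapted RENT expression derived in the paper.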

  4. Application of Multimodality Imaging Fusion Technology in Diagnosis and Treatment of Malignant Tumors under the Precision Medicine Plan.

    PubMed

    Wang, Shun-Yi; Chen, Xian-Xia; Li, Yi; Zhang, Yu-Ying

    2016-12-20

    The arrival of the precision medicine plan brings new opportunities and challenges for patients undergoing precision diagnosis and treatment of malignant tumors. With the development of medical imaging, information on different modality imaging can be integrated and comprehensively analyzed by imaging fusion systems. This review aimed to update the application of multimodality imaging fusion technology in the precise diagnosis and treatment of malignant tumors under the precision medicine plan. We introduced several multimodality imaging fusion technologies and their application to the diagnosis and treatment of malignant tumors in clinical practice. The data cited in this review were obtained mainly from the PubMed database from 1996 to 2016, using the keywords of "precision medicine", "fusion imaging", "multimodality", and "tumor diagnosis and treatment". Original articles, clinical practice, reviews, and other relevant literatures published in English were reviewed. Papers focusing on precision medicine, fusion imaging, multimodality, and tumor diagnosis and treatment were selected. Duplicated papers were excluded. Multimodality imaging fusion technology plays an important role in tumor diagnosis and treatment under the precision medicine plan, such as accurate location, qualitative diagnosis, tumor staging, treatment plan design, and real-time intraoperative monitoring. Multimodality imaging fusion systems could provide more imaging information of tumors from different dimensions and angles, thereby offering strong technical support for the implementation of precision oncology. Under the precision medicine plan, personalized treatment of tumors is a distinct possibility. We believe that multimodality imaging fusion technology will find an increasingly wide application in clinical practice.

  5. Precise and traceable carbon isotope ratio measurements by multicollector ICP-MS: what next?

    PubMed

    Santamaria-Fernandez, Rebeca

    2010-06-01

    This article reviews recent developments in the use of multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS) to provide high-precision carbon isotope ratio measurements. MC-ICP-MS could become an alternative method to isotope ratio mass spectrometry (IRMS) for rapid carbon isotope ratio determinations in organic compounds and characterisation and certification of isotopic reference materials. In this overview, the advantages, drawbacks and potential of the method for future applications are critically discussed. Furthermore, suggestions for future improvements in terms of precision and sensitivity are made. No doubt, this is an exciting analytical challenge and, as such, hurdles will need to be cleared.
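
    Carbon isotope ratios of the kind discussed here are conventionally reported in delta notation relative to an international standard (typically VPDB for carbon):

        \delta^{13}\mathrm{C} \;=\; \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right)\times 10^{3}\ \text{per mil},
        \qquad R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C} ,

    which is why small gains in measurement precision and in the traceability of the reference materials translate directly into the usefulness of MC-ICP-MS as an alternative to IRMS.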

  6. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the above-mentioned three algorithms are complementary. In fact, the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, this algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of the static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail, while maintaining the advantages of the three algorithms and overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm. The ease of erroneous judgment in static contact angle measurements is avoided. The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop
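
    A minimal Python sketch of the circle-fitting branch illustrates the idea: an algebraic least-squares circle is fitted to the drop edge points, and the contact angle follows from the height of the fitted centre relative to the baseline. The synthetic edge points below are hypothetical; this is not the authors' implementation.

    ```python
    import numpy as np

    def fit_circle(x, y):
        """Algebraic (Kasa) least-squares circle fit: x^2 + y^2 = 2ax + 2by + c."""
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        rhs = x**2 + y**2
        (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
        r = np.sqrt(c + a**2 + b**2)
        return a, b, r

    def contact_angle_deg(b, r):
        """Contact angle of a circular cap resting on the baseline y = 0.

        With the circle centre at height b above the baseline, the angle
        measured through the liquid is theta = arccos(-b / r)."""
        return np.degrees(np.arccos(np.clip(-b / r, -1.0, 1.0)))

    # Hypothetical edge points of a sessile drop with a 75-degree contact angle
    theta_true = np.radians(75.0)
    r_true, b_true = 1.0, -np.cos(theta_true)       # centre height consistent with 75 deg
    phi = np.linspace(0.0, np.pi, 200)
    x = r_true * np.cos(phi)
    y = b_true + r_true * np.sin(phi)
    mask = y >= 0                                    # keep only the part above the baseline

    a, b, r = fit_circle(x[mask], y[mask])
    print(round(contact_angle_deg(b, r), 2))         # ~75.0
    ```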

  7. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

    Getting a land vehicle's accurate position, azimuth, and attitude rapidly is significant for the combat effectiveness of vehicle-based weapons. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth, and attitude from the rough position and orientation provided by vehicle-based navigation systems and no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus, it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not impose high performance requirements on the Inertial Navigation System (INS), odometer, and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min. PMID:26492249

  8. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

    Getting a land vehicle's accurate position, azimuth, and attitude rapidly is significant for the combat effectiveness of vehicle-based weapons. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth, and attitude from the rough position and orientation provided by vehicle-based navigation systems and no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS); thus, it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value; consequently, it does not impose high performance requirements on the Inertial Navigation System (INS), odometer, and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″, respectively, in less than 3 min.
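
    The iterative position/orientation solution can be illustrated with a simplified planar analogue: a Gauss-Newton refinement of (x, y, heading) from bearings to surveyed targets, seeded with a rough initial guess. The target coordinates and poses below are hypothetical, and the full method of the paper additionally uses pitch angles and three-axis attitude.

    ```python
    import numpy as np

    def wrap(angle):
        """Wrap an angle to (-pi, pi]."""
        return (angle + np.pi) % (2 * np.pi) - np.pi

    def solve_pose(targets, bearings, pose0, iters=10):
        """Gauss-Newton refinement of a planar pose (x, y, heading) from bearings
        to known targets, starting from a rough initial guess."""
        pose = np.asarray(pose0, dtype=float)
        for _ in range(iters):
            x, y, psi = pose
            dx = targets[:, 0] - x
            dy = targets[:, 1] - y
            rho2 = dx**2 + dy**2
            pred = np.arctan2(dy, dx) - psi
            r = wrap(bearings - pred)                            # residuals
            J = np.column_stack([dy / rho2, -dx / rho2, -np.ones_like(dx)])
            step, *_ = np.linalg.lstsq(J, r, rcond=None)
            pose += step
        return pose

    # Three surveyed targets (hypothetical coordinates, metres) and noise-free bearings
    targets = np.array([[100.0, 0.0], [0.0, 120.0], [-80.0, -60.0]])
    true_pose = np.array([3.0, -2.0, np.radians(10.0)])
    bearings = wrap(np.arctan2(targets[:, 1] - true_pose[1],
                               targets[:, 0] - true_pose[0]) - true_pose[2])
    print(solve_pose(targets, bearings, pose0=[0.0, 0.0, 0.0]).round(4))
    ```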

  9. Pico-coulomb charge measured at BELLA to percent-level precision using a Turbo-ICT

    NASA Astrophysics Data System (ADS)

    Nakamura, K.; Mittelberger, D. E.; Gonsalves, A. J.; Daniels, J.; Mao, H.-S.; Stulle, F.; Bergoz, J.; Leemans, W. P.

    2016-03-01

    Precise diagnostics of picocoulomb-level particle bunches produced by laser plasma accelerators (LPAs) can be a significant challenge. Without proper care, the small signals associated with such bunches can be dominated by a background generated by the laser, target, laser-plasma interaction, and particle-induced radiation. In this paper, we report on first charge measurements using the newly developed Turbo-ICT for LPAs. We outline the Turbo-ICT working principle, which allows precise sub-picocoulomb measurements even in the presence of significant background signals. A comparison of the Turbo-ICT, a conventional integrating current transformer (ICT), and a scintillating screen (Lanex) was carried out at the Berkeley Lab Laser Accelerator. Results show that the Turbo-ICT can measure sub-picocoulomb charge accurately and has significantly improved noise immunity compared to the ICT.

  10. High Precision 2-D Grating Groove Density Measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Ningxiao; McEntaffer, Randall; Tedesco, Ross

    2017-08-01

    Our research group at Penn State University is working on producing X-ray reflection gratings with high spectral resolving power and high diffraction efficiency. To estimate our fabrication accuracy, we apply a precise 2-D grating groove density measurement to map the groove density distributions of gratings on 6-inch wafers. In addition to mapping a fixed groove density distribution, this method can also measure variation of the groove density simultaneously. This system can reach a measuring accuracy (ΔN/N) of 10^-3. Here we present this groove density measurement and some applications.

  11. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  12. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
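
    The same forward-plus-reverse strategy for exact second derivatives can be illustrated with modern automatic differentiation; the sketch below uses JAX on a toy scalar objective (a stand-in, not the ADIFOR-processed flow code) to form the complete Hessian by forward-over-reverse differentiation.

    ```python
    import jax
    import jax.numpy as jnp

    # Toy stand-in for an aerodynamic objective: a smooth "lift-like" function of
    # three design variables (shape parameter, angle of attack, Mach number).
    def objective(v):
        shape, alpha, mach = v
        return jnp.sin(alpha) * (1.0 + 0.5 * shape**2) / jnp.sqrt(1.0 - mach**2)

    v0 = jnp.array([0.3, 0.05, 0.7])

    grad_f = jax.grad(objective)               # reverse (adjoint) mode: all first derivatives
    hess_f = jax.jacfwd(jax.grad(objective))   # forward-over-reverse: complete Hessian

    print(grad_f(v0))
    print(hess_f(v0))
    ```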

  13. Revisit to three-dimensional percolation theory: Accurate analysis for highly stretchable conductive composite materials

    PubMed Central

    Kim, Sangwoo; Choi, Seongdae; Oh, Eunho; Byun, Junghwan; Kim, Hyunjong; Lee, Byeongmoon; Lee, Seunghwan; Hong, Yongtaek

    2016-01-01

    A percolation theory based on variation of the conductive filler fraction has been widely used to explain the behavior of conductive composite materials under both small and large deformation conditions. However, it typically fails to properly analyze the materials under large deformation, since its assumptions may not be valid in such a case. Therefore, we proposed a new three-dimensional percolation theory by considering three key factors: nonlinear elasticity, a precisely measured strain-dependent Poisson's ratio, and a strain-dependent percolation threshold. The digital image correlation (DIC) method was used to determine actual Poisson's ratios at various strain levels, which were used to accurately estimate the variation of the conductive filler volume fraction under deformation. We also adopted a strain-dependent percolation threshold caused by filler re-location with deformation. When the three key factors were considered, the electrical performance change was accurately analyzed for composite materials with both isotropic and anisotropic mechanical properties. PMID:27694856
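
    The bookkeeping behind the strain-dependent filler fraction can be written explicitly: if the composite is stretched by a uniaxial strain \varepsilon with measured Poisson's ratio \nu(\varepsilon), its volume scales by (1+\varepsilon)[1-\nu(\varepsilon)\varepsilon]^2 while the filler volume stays fixed, so the filler volume fraction becomes approximately

        \phi(\varepsilon) \;\approx\; \frac{\phi_0}{(1+\varepsilon)\,\left[1-\nu(\varepsilon)\,\varepsilon\right]^{2}} .

    This illustrative relation shows why an accurate, strain-dependent Poisson's ratio is needed; the exact expressions used in the paper may differ.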

  14. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
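
    The reliability-guided transfer of initial guesses can be sketched as a priority-queue traversal over the calculation points; `register_point` below is a hypothetical stand-in for the 3D IC-GN sub-voxel registration and simply returns a correlation value and a displacement.

    ```python
    import heapq
    import numpy as np

    def reliability_guided_track(n_points, neighbors, register_point, seed_idx, seed_guess):
        """Propagate displacement tracking outward from a seed point, always expanding
        from the most reliable (highest-correlation) computed point next.
        `register_point(idx, initial_guess) -> (correlation, displacement)` is a
        hypothetical stand-in for the 3D IC-GN sub-voxel registration."""
        done = np.zeros(n_points, dtype=bool)
        displacements = [None] * n_points

        corr, disp = register_point(seed_idx, seed_guess)
        displacements[seed_idx] = disp
        done[seed_idx] = True
        heap = [(-corr, seed_idx)]                    # max-heap via negated correlation

        while heap:
            _, idx = heapq.heappop(heap)
            for nb in neighbors[idx]:
                if done[nb]:
                    continue
                # The completed neighbour supplies the initial guess, so no costly
                # integer-voxel search is needed for the new point.
                corr, disp = register_point(nb, displacements[idx])
                displacements[nb] = disp
                done[nb] = True
                heapq.heappush(heap, (-corr, nb))
        return displacements

    if __name__ == "__main__":
        # Tiny demo on a 1-D chain of 5 points with a fake registration routine.
        neighbors = [[1], [0, 2], [1, 3], [2, 4], [3]]
        fake = lambda idx, guess: (0.99, np.asarray(guess) + 0.1)
        print(reliability_guided_track(5, neighbors, fake, seed_idx=2,
                                       seed_guess=np.zeros(3)))
    ```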

  15. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE PAGES

    Pink, Alex; Ragatz, Adam; Wang, Lijuan; ...

    2017-03-28

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data are compared against fuel measurements from precise instrumentation. The data analyzed consisted of eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a vehicular trend: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported fuel consumption. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles, and even more accurately for some vehicles. Furthermore, in situations where only CAN-reported data are available, CAN fueling estimates can provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  16. Comparison of Vehicle-Broadcasted Fuel Consumption Rates against Precise Fuel Measurements for Medium- and Heavy-Duty Vehicles and Engines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pink, Alex; Ragatz, Adam; Wang, Lijuan

    Vehicles continuously report real-time fuel consumption estimates over their data bus, known as the controller area network (CAN). However, the accuracy of these fueling estimates is uncertain to researchers who collect these data from any given vehicle. To assess the accuracy of these estimates, CAN-reported fuel consumption data are compared against fuel measurements from precise instrumentation. The data analyzed consisted of eight medium/heavy-duty vehicles and two medium-duty engines. Varying discrepancies between CAN fueling rates and the more accurate measurements emerged, but without a vehicular trend: for some vehicles the CAN under-reported fuel consumption and for others the CAN over-reported fuel consumption. Furthermore, a qualitative real-time analysis revealed that the operating conditions under which these fueling discrepancies arose varied among vehicles. A drive cycle analysis revealed that while CAN fueling estimate accuracy differs for individual vehicles, CAN estimates capture the relative fuel consumption differences between drive cycles within 4% for all vehicles, and even more accurately for some vehicles. Furthermore, in situations where only CAN-reported data are available, CAN fueling estimates can provide relative fuel consumption trends but not accurate or precise fuel consumption rates.

  17. Accurate PSF-matched photometry for the J-PAS survey

    NASA Astrophysics Data System (ADS)

    Jimenez-Teja, Yolanda; Benitez, Txitxo; Dupke, Renato A.

    2015-08-01

    The Javalambre-PAU Astrophysical Survey (J-PAS) is expected to map 8,600 square degrees of the sky using 54 narrow and 5 broad band filters. Carried out by a Spanish-Brazilian consortium, the main goal of this survey is to measure Baryon Acoustic Oscillations (BAOs) with photometric redshifts. The expectation is to measure these photometric redshifts with a precision of dz/(1 + z) ~ 0.003 for 100 million galaxies and a few million quasars, and to reach an accuracy of dz/(1 + z) ~ 0.01 for another 300 million galaxies. With these numbers, it will be possible to determine w in the Dark Energy (DE) equation of state with an accuracy of < 4%. To achieve such precision and measure the radial BAOs, not only an advanced technical setup but also special data processing tools are required. These tools must be accurate as well as suitable for implementation in fully automated and computationally efficient algorithms. The factor that most influences the photometric redshift precision is the quality of the photometry. For that reason we have developed a new technique based on the Chebyshev-Fourier bases (CHEFs, Jiménez-Teja & Benítez 2012, ApJ, 745, 150) to obtain highly precise multicolor photometry without PSF consideration, thus saving a considerable amount of time and circumventing severe problems such as the PSF variability across the images. The CHEFs are a set of mathematical orthonormal bases with different scale and resolution levels, originally designed to fit the surface light distribution of galaxies. They have proved to be able to model any kind of morphology, including spiral, highly elliptical, or irregular galaxies, with isophotal twists and fine substructure. They also fit high signal-to-noise images, lensing arcs, and stars with great accuracy. We can calculate optimal, unbiased total magnitudes directly through these CHEF models and, thus, colors without needing the PSF. We compare our photometry with widely-used codes such as SExtractor (Bertin

  18. Accurate Determination of the Values of Fundamental Physical Constants: The Basis of the New "Quantum" SI Units

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2018-03-01

    The metric system appeared as the system of units designed for macroscopic (laboratory scale) measurements. The progress in accurate determination of the values of quantum constants (such as the Planck constant) in SI units shows that the capabilities in high-precision measurement of microscopic and macroscopic quantities in terms of the same units have increased substantially recently. At the same time, relative microscopic measurements (for example, the comparison of atomic transition frequencies or atomic masses) are often much more accurate than relative measurements of macroscopic quantities. This is the basis for the strategy to define units in microscopic phenomena and then use them on the laboratory scale, which plays a crucial role in practical methodological applications determined by everyday life and technologies. The international CODATA task group on fundamental constants regularly performs an overall analysis of the precision world data (the so-called Adjustment of the Fundamental Constants) and publishes their recommended values. The most recent evaluation was based on the data published by the end of 2014; here, we review the corresponding data and results. The accuracy in determination of the Boltzmann constant has increased, the consistency of the data on determination of the Planck constant has improved; it is these two dimensional constants that will be used in near future as the basis for the new definition of the kelvin and kilogram, respectively. The contradictions in determination of the Rydberg constant and the proton charge radius remain. The accuracy of determination of the fine structure constant and relative atomic weight of the electron has improved. Overall, we give a detailed review of the state of the art in precision determination of the values of fundamental constants. The mathematical procedure of the Adjustment, the new data and results are considered in detail. The limitations due to macroscopic properties of material

  19. Variational calculation of second-order reduced density matrices by strong N-representability conditions and an accurate semidefinite programming solver.

    PubMed

    Nakata, Maho; Braams, Bastiaan J; Fujisawa, Katsuki; Fukuda, Mituhiro; Percus, Jerome K; Yamashita, Makoto; Zhao, Zhengji

    2008-04-28

    The reduced density matrix (RDM) method, which is a variational calculation based on the second-order reduced density matrix, is applied to the ground state energies and dipole moments of 57 different states of atoms and molecules, and to the ground state energies and the elements of the 2-RDM for the Hubbard model. We explore the well-known N-representability conditions (P, Q, and G) together with the more recent and much stronger T1 and T2' conditions. The T2' condition was recently rederived, and it implies the T2 condition. Using these N-representability conditions, we can usually calculate 100% to 101% of the correlation energy, with accuracy similar to CCSD(T) and even better for high-spin states or anion systems where CCSD(T) fails. Highly accurate calculations are carried out by handling equality constraints and/or developing multiple-precision arithmetic in the semidefinite programming (SDP) solver. Results show that handling equality constraints correctly improves the accuracy by 0.1 to 0.6 mhartree. Additionally, improvements from replacing the T2 condition with the T2' condition are typically 0.1-0.5 mhartree. The newly developed multiple-precision arithmetic version of the SDP solver calculates extraordinarily accurate energies for the one-dimensional Hubbard model and the Be atom. It gives at least 16 significant digits for energies, where double-precision calculations give only two to eight digits. It also provides physically meaningful results for the Hubbard model in the high-correlation limit.

  20. Accurate van der Waals force field for gas adsorption in porous materials.

    PubMed

    Sun, Lei; Yang, Li; Zhang, Ya-Dong; Shi, Qi; Lu, Rui-Feng; Deng, Wei-Qiao

    2017-09-05

    An accurate van der Waals force field (VDW FF) was derived from highly precise quantum mechanical (QM) calculations. Small molecular clusters were used to explore van der Waals interactions between gas molecules and porous materials. The parameters of the accurate van der Waals force field were determined by QM calculations. To validate the force field, the prediction results from the VDW FF were compared with standard FFs, such as UFF, Dreiding, Pcff, and Compass. The results from the VDW FF were in excellent agreement with the experimental measurements. This force field can be applied to the prediction of the gas density (H2, CO2, C2H4, CH4, N2, O2) and adsorption performance inside porous materials, such as covalent organic frameworks (COFs), zeolites, and metal organic frameworks (MOFs), consisting of H, B, N, C, O, S, Si, Al, Zn, Mg, Ni, and Co. This work provides a solid basis for studying gas adsorption in porous materials. © 2017 Wiley Periodicals, Inc.
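
    Force fields of this kind typically express the pairwise van der Waals term in a simple analytic form whose parameters are fitted to the QM cluster energies; a common choice (given here only as a representative example, since the abstract does not state the functional form) is the Lennard-Jones 12-6 potential

        E_{\mathrm{vdW}}(r) \;=\; 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right] ,

    with well depth \epsilon and contact distance \sigma determined per atom pair; Morse or Buckingham forms are also used in practice.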

  1. Precise measurement of ultra-narrow laser linewidths using the strong coherent envelope

    NASA Astrophysics Data System (ADS)

    Huang, Shihong; Zhu, Tao; Liu, Min; Huang, Wei

    2017-02-01

    Laser linewidth narrowing down to kHz or even Hz is an important topic in areas like clock synchronization technology, laser radars, quantum optics, and high-precision detection. Conventional decoherence measurement methods like delayed self-heterodyne/homodyne interferometry cannot measure such narrow linewidths accurately. This is because a broadening of the Gaussian spectrum, which hides the laser's intrinsic Lorentzian linewidth, cannot be avoided. Here, we introduce a new method using the strong coherent envelope to characterize the laser's intrinsic linewidth through self-coherent detection. This method can eliminate the effect of the broadened Gaussian spectrum induced by the 1/f frequency noise. We analyze, in detail, the relationship between the intrinsic laser linewidth, the contrast difference between the second peak and the second trough (CDSPST) of the strong coherent envelope, and the length of the delaying fiber. The correct length of the delaying fiber can be chosen by combining the estimated laser linewidth (Δfest) with a specific CDSPST (ΔS) to obtain the accurate laser linewidth (Δf). Our results indicate that this method can be used as an accurate detection tool for measurements of narrow or super-narrow linewidths.

  2. Improved Precision and Accuracy of Quantification of Rare Earth Element Abundances via Medium-Resolution LA-ICP-MS.

    PubMed

    Funderburg, Rebecca; Arevalo, Ricardo; Locmelis, Marek; Adachi, Tomoko

    2017-11-01

    Laser ablation ICP-MS enables streamlined, high-sensitivity measurements of rare earth element (REE) abundances in geological materials. However, many REE isotope mass stations are plagued by isobaric interferences, particularly from diatomic oxides and argides. In this study, we compare REE abundances quantitated from mass spectra collected with low-resolution (m/Δm = 300 at 5% peak height) and medium-resolution (m/Δm = 2500) mass discrimination. A wide array of geological samples was analyzed, including USGS and NIST glasses ranging from mafic to felsic in composition, with NIST 610 employed as the bracketing calibrating reference material. The medium-resolution REE analyses are shown to be significantly more accurate and precise (at the 95% confidence level) than low-resolution analyses, particularly in samples characterized by low (<μg/g levels) REE abundances. A list of preferred mass stations that are least susceptible to isobaric interferences is reported. These findings impact the reliability of REE abundances derived from LA-ICP-MS methods, particularly those relying on mass analyzers that do not offer tuneable mass-resolution and/or collision cell technologies that can reduce oxide and/or argide formation.

  3. Improved Precision and Accuracy of Quantification of Rare Earth Element Abundances via Medium-Resolution LA-ICP-MS

    NASA Astrophysics Data System (ADS)

    Funderburg, Rebecca; Arevalo, Ricardo; Locmelis, Marek; Adachi, Tomoko

    2017-07-01

    Laser ablation ICP-MS enables streamlined, high-sensitivity measurements of rare earth element (REE) abundances in geological materials. However, many REE isotope mass stations are plagued by isobaric interferences, particularly from diatomic oxides and argides. In this study, we compare REE abundances quantitated from mass spectra collected with low-resolution (m/Δm = 300 at 5% peak height) and medium-resolution (m/Δm = 2500) mass discrimination. A wide array of geological samples was analyzed, including USGS and NIST glasses ranging from mafic to felsic in composition, with NIST 610 employed as the bracketing calibrating reference material. The medium-resolution REE analyses are shown to be significantly more accurate and precise (at the 95% confidence level) than low-resolution analyses, particularly in samples characterized by low (<μg/g levels) REE abundances. A list of preferred mass stations that are least susceptible to isobaric interferences is reported. These findings impact the reliability of REE abundances derived from LA-ICP-MS methods, particularly those relying on mass analyzers that do not offer tuneable mass-resolution and/or collision cell technologies that can reduce oxide and/or argide formation.

  4. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and pathological subject. We compare our results with experimental simulations and discuss the sensitivity to parameters of our model.
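
    One-dimensional models of this type are usually built on the cross-sectionally averaged mass and momentum equations closed by a tube law; a commonly used form (quoted here as the standard template, not necessarily the exact closure of the paper) is

        \frac{\partial A}{\partial t} + \frac{\partial (A u)}{\partial x} = 0,
        \qquad
        \frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} + \frac{1}{\rho}\,\frac{\partial p}{\partial x} = -\,\frac{f\,u}{A},
        \qquad
        p - p_{\mathrm{ext}} = K\left[\left(\frac{A}{A_0}\right)^{m} - \left(\frac{A}{A_0}\right)^{-n}\right] ,

    where A is the cross-sectional area, u the averaged velocity, and f a viscous friction coefficient. The tube law provides a stiff distension branch and a compliant collapse branch; it is this closure that allows the flow to become supercritical and form elastic jumps.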

  5. High-precision timeline for Earth's most severe extinction.

    PubMed

    Burgess, Seth D; Bowring, Samuel; Shen, Shu-zhong

    2014-03-04

    The end-Permian mass extinction was the most severe loss of marine and terrestrial biota in the last 542 My. Understanding its cause and the controls on extinction/recovery dynamics depends on an accurate and precise age model. U-Pb zircon dates for five volcanic ash beds from the Global Stratotype Section and Point for the Permian-Triassic boundary at Meishan, China, define an age model for the extinction and allow exploration of the links between global environmental perturbation, carbon cycle disruption, mass extinction, and recovery at millennial timescales. The extinction occurred between 251.941 ± 0.037 and 251.880 ± 0.031 Mya, an interval of 60 ± 48 ka. Onset of a major reorganization of the carbon cycle immediately precedes the initiation of extinction and is punctuated by a sharp (3‰), short-lived negative spike in the isotopic composition of carbonate carbon. Carbon cycle volatility persists for ∼500 ka before a return to near preextinction values. Decamillennial to millennial level resolution of the mass extinction and its aftermath will permit a refined evaluation of the relative roles of rate-dependent processes contributing to the extinction, allowing insight into postextinction ecosystem expansion, and establish an accurate time point for evaluating the plausibility of trigger and kill mechanisms.
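
    The quoted duration follows directly from the two boundary dates, with their uncertainties combined in quadrature:

        \Delta t = (251.941 - 251.880)\ \mathrm{Myr} = 0.061\ \mathrm{Myr} \approx 60\ \mathrm{ka},
        \qquad
        \sigma_{\Delta t} = \sqrt{0.037^2 + 0.031^2}\ \mathrm{Myr} \approx 0.048\ \mathrm{Myr} = 48\ \mathrm{ka} .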

  6. An ultra-precision tool nanoindentation instrument for replication of single point diamond tool cutting edges

    NASA Astrophysics Data System (ADS)

    Cai, Yindi; Chen, Yuan-Liu; Xu, Malu; Shimizu, Yuki; Ito, So; Matsukuma, Hiraku; Gao, Wei

    2018-05-01

    Precision replication of the diamond tool cutting edge is required for non-destructive tool metrology. This paper presents an ultra-precision tool nanoindentation instrument designed and constructed for replication of the cutting edge of a single point diamond tool onto a selected soft metal workpiece by precisely indenting the tool cutting edge into the workpiece surface. The instrument has the ability to control the indentation depth with a nanometric resolution, enabling the replication of tool cutting edges with high precision. The motion of the diamond tool along the indentation direction is controlled by the piezoelectric actuator of a fast tool servo (FTS). An integrated capacitive sensor of the FTS is employed to detect the displacement of the diamond tool. The soft metal workpiece is attached to an aluminum cantilever whose deflection is monitored by another capacitive sensor, referred to as an outside capacitive sensor. The indentation force and depth can be accurately evaluated from the diamond tool displacement, the cantilever deflection and the cantilever spring constant. Experiments were carried out by replicating the cutting edge of a single point diamond tool with a nose radius of 2.0 mm on a copper workpiece surface. The profile of the replicated tool cutting edge was measured using an atomic force microscope (AFM). The effectiveness of the instrument in precision replication of diamond tool cutting edges is well-verified by the experimental results.
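
    The force and depth evaluation described above amounts to two simple relations (symbols introduced here for illustration): with cantilever spring constant k_c, cantilever deflection \delta_c from the outside capacitive sensor, and tool displacement z_{\mathrm{FTS}} from the FTS-integrated sensor,

        F = k_c\,\delta_c,
        \qquad
        d = z_{\mathrm{FTS}} - \delta_c ,

    where F is the indentation force and d the indentation depth actually imposed on the workpiece.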

  7. Combined fabrication technique for high-precision aspheric optical windows

    NASA Astrophysics Data System (ADS)

    Hu, Hao; Song, Ci; Xie, Xuhui

    2016-07-01

    Specifications for optical components are becoming more and more stringent with the performance improvement of modern optical systems. These strict requirements involve not only low-spatial-frequency surface accuracy and mid-to-high-spatial-frequency surface errors, but also surface smoothness and so on. This presentation mainly focuses on the fabrication process for a square aspheric window, which combines accurate grinding, magnetorheological finishing (MRF), and smoothing polishing (SP). In order to remove the low-spatial-frequency surface errors and subsurface defects after accurate grinding, the deterministic polishing method MRF, with its high convergence and stable material removal rate, is applied. Then the SP technology with a pseudo-random path is adopted to eliminate the mid-to-high-spatial-frequency surface ripples and high slope errors that are a shortcoming of MRF. Additionally, the coordinate measurement method and interferometry are combined in different phases. An acid-etching method and ion beam figuring (IBF) are also investigated for observing and reducing the subsurface defects. The actual fabrication result indicates that the combined fabrication technique can lead to high machining efficiency in manufacturing high-precision, high-quality aspheric optical windows.

  8. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to suppress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm, specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
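
    A toy sketch of the core idea of direction-dependent user similarity: normalizing the overlap of two users' selections by the degree of the source user makes the similarity from a small-degree user to a large-degree user larger than in the opposite direction, as described above. This is a simplified illustration only; the exact similarity and second-order weighting of the HDCF algorithm are defined in the cited paper and may differ:

      def directed_similarity(source_items, target_items):
          # Overlap normalized by the degree (number of selections) of the source user,
          # so s(small-degree -> large-degree) exceeds s(large-degree -> small-degree).
          if not source_items:
              return 0.0
          return len(source_items & target_items) / len(source_items)

      alice = {"m1", "m2", "m3"}                               # small-degree user
      bob = {"m1", "m2", "m3", "m4", "m5", "m6", "m7", "m8"}   # large-degree user

      print(directed_similarity(alice, bob))   # 1.0   (small -> large)
      print(directed_similarity(bob, alice))   # 0.375 (large -> small)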

  9. Development of a precision reverse offset printing system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Hyunchang; Lee, Eonseok; Choi, Young-Man

    2016-01-15

    In printed electronics technology, the overlay accuracy of printed patterns is a very important issue when applying printing technology to the production of electric devices. In order to achieve accurate positioning of the printed patterns, this study proposes a novel precision reverse offset printing system. Furthermore, the study evaluates the effects of synchronization and printing force on position errors of the printed patterns, and presents methods of controlling synchronization and printing force so as to eliminate positional errors from these sources. Finally, a printing position repeatability of 0.40 μm and 0.32 μm (x and y directions, respectively) at a sigma level is obtained over a dimension of 100 mm under repeated printing tests with identical printing conditions.

  10. High-sensitivity MALDI-TOF MS quantification of anthrax lethal toxin for diagnostics and evaluation of medical countermeasures.

    PubMed

    Boyer, Anne E; Gallegos-Candela, Maribel; Quinn, Conrad P; Woolfitt, Adrian R; Brumlow, Judith O; Isbell, Katherine; Hoffmaster, Alex R; Lins, Renato C; Barr, John R

    2015-04-01

    Inhalation anthrax has a rapid progression and high fatality rate. Pathology and death from inhalation of Bacillus anthracis spores are attributed to the actions of secreted protein toxins. Protective antigen (PA) binds and imports the catalytic component lethal factor (LF), a zinc endoprotease, and edema factor (EF), an adenylyl cyclase, into susceptible cells. PA-LF is termed lethal toxin (LTx) and PA-EF, edema toxin. As the universal transporter for both toxins, PA is an important target for vaccination and immunotherapeutic intervention. However, its quantification has been limited to methods of relatively low analytic sensitivity. Quantification of LTx may be more clinically relevant than LF or PA alone because LTx is the toxic form that acts on cells. A method was developed for LTx-specific quantification in plasma using anti-PA IgG magnetic immunoprecipitation of PA and quantification of LF activity that co-purified with PA. The method was fast (<4 h total time to detection), sensitive at 0.033 ng/mL LTx in plasma for the fast analysis (0.0075 ng/mL LTx in plasma for an 18 h reaction), precise (6.3-9.9% coefficient of variation), and accurate (0.1-12.7% error; n ≥ 25). Diagnostic sensitivity was 100% (n = 27 animal/clinical cases). Diagnostic specificity was 100% (n = 141). LTx was detected post-antibiotic treatment in 6/6 treated rhesus macaques and 3/3 clinical cases of inhalation anthrax and as long as 8 days post-treatment. Over the course of infection in two rhesus macaques, LTx was first detected at 0.101 and 0.237 ng/mL at 36 h post-exposure and increased to 1147 and 12,107 ng/mL in late-stage anthrax. This demonstrated the importance of LTx as a diagnostic and therapeutic target. This method provides a sensitive, accurate tool for anthrax toxin detection and evaluation of PA-directed therapeutics.

  11. More noise does not mean more precision: A review of Aldenberg and Rorije (2013).

    PubMed

    Fox, David R

    2015-09-01

    This paper provides a critical review of recently published work that suggests that the precision of hazardous concentration estimates from Species Sensitivity Distributions (SSDs) is improved when the uncertainty in the input data is taken into account. Our review confirms that this counter-intuitive result is indeed incorrect. © 2015 FRAME.

  12. Negative emotion enhances mnemonic precision and subjective feelings of remembering in visual long-term memory.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2017-09-01

    Negative emotion sometimes enhances memory (higher accuracy and/or vividness, e.g., flashbulb memories). The present study investigates whether it is the qualitative (precision) or quantitative (the probability of successful retrieval) aspect of memory that drives these effects. In a visual long-term memory task, observers memorized colors (Experiment 1a) or orientations (Experiment 1b) of sequentially presented everyday objects under negative, neutral, or positive emotions induced with International Affective Picture System images. In a subsequent test phase, observers reconstructed objects' colors or orientations using the method of adjustment. We found that mnemonic precision was enhanced under the negative condition relative to the neutral and positive conditions. In contrast, the probability of successful retrieval was comparable across the emotion conditions. Furthermore, the boost in memory precision was associated with elevated subjective feelings of remembering (vividness and confidence) and metacognitive sensitivity in Experiment 2. Altogether, these findings suggest a novel precision-based account for emotional memories. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
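
    For reference, the standard two-state definition of ϕ and the first-order (delta-method) error propagation that underlies this discussion, assuming linear chevron behavior; the key point is the covariance term, which the commonly employed independence assumption drops:

      \phi = \frac{a}{b}, \qquad a = \Delta\Delta G^{\ddagger\text{-}U} = -RT \ln\!\frac{k_f^{\mathrm{mut}}}{k_f^{\mathrm{wt}}}, \qquad b = \Delta\Delta G^{N\text{-}U}

      \operatorname{Var}(\phi) \approx \frac{1}{b^{2}}\operatorname{Var}(a) + \frac{a^{2}}{b^{4}}\operatorname{Var}(b) - \frac{2a}{b^{3}}\operatorname{Cov}(a,b)

    Because a and b are estimated from overlapping data, Cov(a, b) is generally nonzero; for positive a, b, and covariance, the true variance is smaller than the independence assumption implies, consistent with the underestimation of ϕ precision noted above.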

  14. Simple, sensitive, selective and validated spectrophotometric methods for the estimation of a biomarker trigonelline from polyherbal gels

    NASA Astrophysics Data System (ADS)

    Chopra, Shruti; Motwani, Sanjay K.; Ahmad, Farhan J.; Khar, Roop K.

    2007-11-01

    Simple, accurate, reproducible, selective, sensitive and cost effective UV-spectrophotometric methods were developed and validated for the estimation of trigonelline in bulk and pharmaceutical formulations. Trigonelline was estimated at 265 nm in deionised water and at 264 nm in phosphate buffer (pH 4.5). Beer's law was obeyed in the concentration ranges of 1-20 μg mL⁻¹ (r² = 0.9999) in deionised water and 1-24 μg mL⁻¹ (r² = 0.9999) in the phosphate buffer medium. The apparent molar absorptivity and Sandell's sensitivity coefficient were found to be 4.04 × 10³ L mol⁻¹ cm⁻¹ and 0.0422 μg cm⁻²/0.001 A in deionised water; and 3.05 × 10³ L mol⁻¹ cm⁻¹ and 0.0567 μg cm⁻²/0.001 A in phosphate buffer media, respectively. These methods were tested and validated for various parameters according to ICH guidelines. The detection and quantitation limits were found to be 0.12 and 0.37 μg mL⁻¹ in deionised water and 0.13 and 0.40 μg mL⁻¹ in phosphate buffer medium, respectively. The proposed methods were successfully applied for the determination of trigonelline in pharmaceutical formulations (vaginal tablets and bioadhesive vaginal gels). The results demonstrated that the procedure is accurate, precise, specific and reproducible (percent relative standard deviation <2%), while being simple and less time consuming, and hence can be suitably applied for the estimation of trigonelline in different dosage forms and dissolution studies.
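
    A small numerical illustration of the Beer-Lambert relation these calibrations rest on, using the apparent molar absorptivity reported above for deionised water; the absorbance value and the 1 cm path length are assumed for illustration only:

      MOLAR_MASS_TRIGONELLINE = 137.14   # g/mol
      EPSILON = 4.04e3                   # apparent molar absorptivity in water, L mol^-1 cm^-1
      PATH_LENGTH_CM = 1.0               # assumed cuvette path length

      def concentration_ug_per_mL(absorbance):
          # Beer-Lambert law: A = epsilon * c * l  =>  c = A / (epsilon * l), in mol/L.
          c_molar = absorbance / (EPSILON * PATH_LENGTH_CM)
          # mol/L -> g/L (multiply by molar mass) -> ug/mL (multiply by 1000).
          return c_molar * MOLAR_MASS_TRIGONELLINE * 1e3

      print(round(concentration_ug_per_mL(0.30), 1))  # ~10.2 ug/mL for A = 0.30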

  15. [Precision and accuracy of "a pocket" pulse oximeter in Mexico City].

    PubMed

    Torre-Bouscoulet, Luis; Chávez-Plascencia, Elizabeth; Vázquez-García, Juan Carlos; Pérez-Padilla, Rogelio

    2006-01-01

    Pulse oximeters are frequently used in clinical practice, and we must know their precision and accuracy. The objective was to evaluate the precision and accuracy of a "pocket" pulse oximeter at an altitude of 2,240 m above sea level. We tested miniature pulse oximeters (Onyx 9500, Nonin Finger Pulse Oximeter) in 96 patients sent to the pulmonary laboratory for an arterial blood sample. Patients were tested with 5 pulse oximeters placed on each of the fingers of the hand opposite to that used for the arterial puncture. The gold standard was the oxygen saturation of the arterial blood sample. Blood samples had SaO2 of 87.2 +/- 11.0 (between 42.2 and 97.9%). Pulse oximeters had a mean error of 0.28 +/- 3.1%. SaO2 = (1.204 x SpO2) - 17.45966 (r = 0.92, p < 0.0001). The intraclass correlation coefficient between each of the five pulse oximeters and the arterial blood standard ranged between 0.87 and 0.99. HbCO (2.4 +/- 0.6) did not affect the accuracy. The miniature Nonin oximeter is precise and accurate at 2,240 m of altitude. The observed levels of HbCO did not affect the performance of the equipment. The oximeter's good performance, small size and low cost enhance its clinical usefulness.
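
    A minimal sketch applying the calibration line reported above to convert a pocket-oximeter reading into an estimated arterial saturation; the example reading is hypothetical:

      def estimated_sao2(spo2_percent):
          # Reported calibration: SaO2 = (1.204 x SpO2) - 17.45966, r = 0.92.
          return 1.204 * spo2_percent - 17.45966

      print(round(estimated_sao2(90.0), 1))  # a reading of 90% maps to ~90.9% SaO2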

  16. Rapid and accurate peripheral nerve detection using multipoint Raman imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro

    2017-02-01

    Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies to detect nerves for nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement from different locations (n=32) of a sample. Five seconds are sufficient for measuring n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with sensitivity and specificity of 88.3% and 94.8%, respectively. To compensate for the sparse spatial information of the multipoint-Raman-derived tissue discrimination image, which on its own is too sparse to visualize nerve arrangement, we used morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of the Raman measurement points in an arbitrary structure is classified as nerve. Setting the nerve detection criterion to 40% or more "nerve" points in a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.
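
    A toy sketch of the decision rule described above: a structure segmented from the bright-field image is called a nerve when at least 40% of the multipoint Raman measurement points inside it are classified as nerve. The per-point labels stand in for the principal component regression discriminant output and are purely illustrative:

      def classify_structure(point_labels, nerve_fraction_threshold=0.40):
          # point_labels: per-point classifier output ("nerve" or "connective")
          # for the Raman measurement points falling inside one structure.
          if not point_labels:
              return "undetermined"
          nerve_fraction = point_labels.count("nerve") / len(point_labels)
          return "nerve" if nerve_fraction >= nerve_fraction_threshold else "connective"

      print(classify_structure(["nerve", "nerve", "connective", "nerve", "connective"]))  # nerve (60%)
      print(classify_structure(["connective"] * 8 + ["nerve"] * 2))                       # connective (20%)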

  17. Number-Density Measurements of CO2 in Real Time with an Optical Frequency Comb for High Accuracy and Precision

    NASA Astrophysics Data System (ADS)

    Scholten, Sarah K.; Perrella, Christopher; Anstie, James D.; White, Richard T.; Al-Ashwal, Waddah; Hébert, Nicolas Bourbeau; Genest, Jérôme; Luiten, Andre N.

    2018-05-01

    Real-time and accurate measurements of gas properties are highly desirable for numerous real-world applications. Here, we use an optical-frequency comb to demonstrate absolute number-density and temperature measurements of a sample gas with state-of-the-art precision and accuracy. The technique is demonstrated by measuring the number density of ¹²C¹⁶O₂ with an accuracy of better than 1% and a precision of 0.04% in a measurement and analysis cycle of less than 1 s. This technique is transferable to numerous molecular species, thus offering an avenue for near-universal gas concentration measurements.

  18. Statistical precision of the intensities retrieved from constrained fitting of overlapping peaks in high-resolution mass spectra

    DOE PAGES

    Cubison, M. J.; Jimenez, J. L.

    2015-06-05

    Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~ 1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced ("calibration-limited regime", regime II). Alternatively for χ < 1.6 but lower ion counts (e.g. 10–100) the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration ("overlapping-limited regime", regime III). The transition between the counting and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result
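
    A rough sketch of the three-regime picture, using the approximate boundaries quoted above (peak separation χ in HWHM and ion counts); the real transitions depend weakly on resolving power and data spacing, so these cut-offs are illustrative rather than exact:

      def precision_regime(separation_hwhm, ion_counts):
          if separation_hwhm >= 2.5:
              # Regime I: well-separated peaks; counting statistics dominate.
              return "I: counting-limited"
          if ion_counts >= 1000:
              # Regime II: small separation, high counts; m/Q calibration imprecision dominates.
              return "II: calibration-limited"
          if separation_hwhm < 1.6:
              # Regime III: small separation, low counts; noise from the overlapping ion dominates.
              return "III: overlap-limited"
          return "transition region"

      for chi, counts in [(3.0, 50), (1.0, 5000), (1.0, 50), (2.0, 200)]:
          print(chi, counts, "->", precision_regime(chi, counts))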

  19. Sensitivity in error detection of patient specific QA tools for IMRT plans

    NASA Astrophysics Data System (ADS)

    Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.

    2016-03-01

    The high complexity of dose calculation in treatment planning and the need for accurate delivery of IMRT plans require a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two H&N and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with errors introduced. The intentional errors comprised prescribed dose errors (±2 to ±6%) and position shifts along the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma pass rates between the original and modified plans were compared. The average gamma pass rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2 and 95.9% and 99.8% for portal dosimetry, respectively. In the H&N plan, MapCHECK2 can detect position shift errors starting from 3 mm while portal dosimetry can detect errors starting from 2 mm. Both devices showed similar sensitivity in detecting position shift errors in the prostate plan. For the H&N plan, MapCHECK2 can detect dose errors starting at ±4%, whereas portal dosimetry can detect them from ±2%. For the prostate plan, both devices can identify dose errors starting from ±4%. The sensitivity of error detection depends on the type of error and plan complexity.

  20. A Self Contained Method for Safe and Precise Lunar Landing

    NASA Technical Reports Server (NTRS)

    Paschall, Stephen C., II; Brady, Tye; Cohanim, Babak; Sostaric, Ronald

    2008-01-01

    The return of humans to the Moon will require increased capability beyond that of the previous Apollo missions. Longer stay times and a greater flexibility with regards to landing locations are among the many improvements planned. A descent and landing system that can land the vehicle more accurately than Apollo with a greater ability to detect and avoid hazards is essential to the development of a Lunar Outpost, and also for increasing the number of potentially reachable Lunar Sortie locations. This descent and landing system should allow landings in more challenging terrain and provide more flexibility with regards to mission timing and lighting considerations, while maintaining safety as the top priority. The lunar landing system under development by the ALHAT (Autonomous precision Landing and Hazard detection Avoidance Technology) project is addressing this by providing terrain-relative navigation measurements to enhance global-scale precision, an onboard hazard-detection system to select safe landing locations, and an Autonomous GNC (Guidance, Navigation, and Control) capability to process these measurements and safely direct the vehicle to this landing location. This ALHAT landing system will enable safe and precise lunar landings without requiring lunar infrastructure in the form of navigation aids or a priori identified hazard-free landing locations. The safe landing capability provided by ALHAT uses onboard active sensing to detect hazards that are large enough to be a danger to the vehicle but too small to be detected from orbit, given currently planned orbital terrain resolution limits. Algorithms to interpret raw active sensor terrain data and generate hazard maps as well as identify safe sites and recalculate new trajectories to those sites are included as part of the ALHAT System. These improvements to descent and landing will help contribute to repeated safe and precise landings for a wide variety of terrain on the Moon.

  1. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  2. [Precision and personalized medicine].

    PubMed

    Sipka, Sándor

    2016-10-01

    The author describes the concept of "personalized medicine" and the newly introduced "precision medicine". "Precision medicine" applies the terms of "phenotype", "endotype" and "biomarker" in order to characterize more precisely the various diseases. Using "biomarkers" the homogeneous type of a disease (a "phenotype") can be divided into subgroups called "endotypes" requiring different forms of treatment and financing. The good results of "precision medicine" have become especially apparent in relation with allergic and autoimmune diseases. The application of this new way of thinking is going to be necessary in Hungary, too, in the near future for participants, controllers and financing boards of healthcare. Orv. Hetil., 2016, 157(44), 1739-1741.

  3. Measurement precision in a series of visual fields acquired by the standard and fast versions of the Swedish interactive thresholding algorithm: analysis of large-scale data from clinics.

    PubMed

    Saunders, Luke J; Russell, Richard A; Crabb, David P

    2015-01-01

    Swedish Interactive Thresholding Algorithm (SITA) testing strategies for the Humphrey Field Analyzer have become a clinical standard. Measurements from SITA Fast are thought to be more variable than SITA Standard, yet some clinics routinely use SITA Fast because it is quicker. To examine the measurement precision of the 2 SITA strategies across a range of sensitivities using a large number of visual field (VF) series from 4 glaucoma clinics in England. Retrospective cohort study at Moorfields Eye Hospital in London, England; Gloucestershire Eye Unit at Cheltenham General Hospital; Queen Alexandra Hospital in Portsmouth, England; and the Calderdale and Huddersfield National Health Service Foundation Trust that included 66,974 Humphrey 24-2 SITA Standard VFs (10,124 eyes) and 19,819 Humphrey 24-2 SITA Fast VFs (3654 eyes) recorded between May 20, 1997, and September 20, 2012. Pointwise ordinary least squares linear regression of measured sensitivity over time was conducted using VF series of 1 random eye from each patient. Residuals from the regression were pooled according to fitted sensitivities. For each sensitivity (decibel) level, the standard deviation of the residuals was used to estimate measurement precision and were compared for SITA Standard and SITA Fast. Simulations of progression from different VF baselines were used to evaluate how different levels of precision would affect time to detect VF progression. Median years required to detect progression. Median (interquartile range) patient age, follow-up, and series lengths for SITA Standard were 64 (53-72) years, 6.0 (4.0-8.5) years, and 6 (4-8) VFs, respectively; for SITA Fast, medians (interquartile range) were 70 (61-78) years, 5.1 (3.2-7.3) years, and 5 (4-6) VFs. Measurement precision worsened as sensitivity decreased for both test strategies. In the 20 to 5 dB range, SITA Fast was less precise than SITA Standard; this difference was largest between 15 to 10 dB, where variability in both methods
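
    A compact sketch of the analysis pipeline described above: fit an ordinary least-squares trend of sensitivity over time at each visual field location, pool the residuals by fitted sensitivity, and use the standard deviation of each pool as the precision estimate at that decibel level. The data here are synthetic and the array shapes are hypothetical:

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic series: 500 locations, 8 visits over 6 years, noisy linear decline.
      years = np.linspace(0.0, 6.0, 8)
      sensitivities = 30.0 - 0.5 * years + rng.normal(0.0, 2.0, size=(500, 8))

      residual_pools = {}   # fitted sensitivity (dB, rounded) -> list of residuals
      for series in sensitivities:
          slope, intercept = np.polyfit(years, series, 1)   # pointwise OLS over time
          fitted = intercept + slope * years
          for f, r in zip(fitted, series - fitted):
              residual_pools.setdefault(int(round(f)), []).append(r)

      # Standard deviation of residuals at each fitted sensitivity = precision estimate.
      for level in sorted(residual_pools, reverse=True):
          pool = residual_pools[level]
          if len(pool) > 30:
              print(f"{level} dB: SD = {np.std(pool):.2f} (n = {len(pool)})")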

  4. Pragmatic precision oncology: the secondary uses of clinical tumor molecular profiling

    PubMed Central

    Thota, Ramya; Staggs, David B; Johnson, Douglas B; Warner, Jeremy L

    2016-01-01

    Background Precision oncology increasingly utilizes molecular profiling of tumors to determine treatment decisions with targeted therapeutics. The molecular profiling data is valuable in the treatment of individual patients as well as for multiple secondary uses. Objective To automatically parse, categorize, and aggregate clinical molecular profile data generated during cancer care, as well as use this data to address multiple secondary use cases. Methods A system to parse, categorize and aggregate molecular profile data was created. A naïve Bayesian classifier categorized results according to clinical groups. The accuracy of these systems was validated against a published, expertly curated subset of molecular profiling data. Results Following one year of operation, 819 samples have been accurately parsed and categorized to generate a data repository of 10,620 genetic variants. The database has been used for operational, clinical trial, and discovery science research. Conclusions A real-time database of molecular profiling data is a pragmatic solution to several knowledge management problems in the practice and science of precision oncology. PMID:27026612
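
    A minimal sketch of categorizing free-text molecular profiling results with a naïve Bayes classifier, in the spirit of the system described above; the training phrases, category labels, and the scikit-learn pipeline are invented for illustration and are not taken from the paper:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Hypothetical parsed variant strings and their clinical categories.
      training_text = [
          "BRAF V600E mutation detected",
          "EGFR exon 19 deletion detected",
          "KRAS G12C mutation detected",
          "no actionable alteration identified",
          "variant of uncertain significance in TP53",
      ]
      training_labels = ["actionable", "actionable", "actionable", "negative", "uncertain"]

      classifier = make_pipeline(CountVectorizer(), MultinomialNB())
      classifier.fit(training_text, training_labels)

      # With this toy training set, a new "mutation detected" string is most
      # likely assigned to the "actionable" group.
      print(classifier.predict(["EGFR L858R mutation detected"]))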

  5. Development and validation of a rapid and sensitive UPLC-MS/MS method for determination of uracil and dihydrouracil in human plasma.

    PubMed

    Jacobs, Bart A W; Rosing, Hilde; de Vries, Niels; Meulendijks, Didier; Henricks, Linda M; Schellens, Jan H M; Beijnen, Jos H

    2016-07-15

    Quantification of the endogenous dihydropyrimidine dehydrogenase (DPD) substrate uracil (U) and the reaction product dihydrouracil (UH2) in plasma might be suitable for identification of patients at risk of fluoropyrimidine-induced toxicity as a result of DPD deficiency. In this paper, we describe the development and validation of a rapid and sensitive ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) assay for quantification of U and UH2 in human plasma. Analytes were extracted by protein precipitation, chromatographically separated on an Acquity UPLC® HSS T3 column with gradient elution and analyzed with a tandem mass spectrometer equipped with an electrospray ionization source. U was quantified in the negative ion mode and UH2 in the positive ion mode. Stable isotopes for U and UH2 were used as internal standards. Total chromatographic run time was 5 min. Validated concentration ranges for U and UH2 were from 1 to 100 ng/mL and 10 to 1000 ng/mL, respectively. Inter-assay bias and inter-assay precision for U were within ±2.8% and ≤12.4%. For UH2, inter-assay bias and inter-assay precision were within ±2.9% and ≤7.2%. Adequate stability of U and UH2 in dry extract, final extract, stock solution and plasma was demonstrated. Stability of U and UH2 in whole blood was only satisfactory when stored up to 4 hours at 2-8°C, but not at ambient temperatures. An accurate, precise and sensitive UPLC-MS/MS assay for quantification of U and UH2 in plasma was developed. This assay is now applied to support clinical studies with fluoropyrimidine drugs. Copyright © 2016 Elsevier B.V. All rights reserved.

  6. Precision Diagnosis Of Melanoma And Other Skin Lesions From Digital Images.

    PubMed

    Bhattacharya, Abhishek; Young, Albert; Wong, Andrew; Stalling, Simone; Wei, Maria; Hadley, Dexter

    2017-01-01

    Melanoma will account for an estimated 73,000 new cases this year and result in 9,000 deaths, yet precise diagnosis remains a serious problem. Without early detection and preventative care, melanoma can quickly spread from a once localized skin lesion (Stage IA 5-year survival rate is 97%) to become fatal (Stage IV 5-year survival rate is 10-20%). There is no biomarker for melanoma in clinical use, and the current diagnostic criteria for skin lesions remain subjective and imprecise. Accurate diagnosis of melanoma relies on a histopathologic gold standard; thus, aggressive excision of melanocytic skin lesions has been the mainstay of treatment. It is estimated that 36 biopsies are performed for every melanoma confirmed by pathology among excised lesions. There is significant morbidity in misdiagnosing melanoma, such as progression of the disease for a false negative prediction versus the risks of unnecessary surgery for a false positive prediction. Every year, poor diagnostic precision adds an estimated $673 million in overall cost to manage the disease. Currently, manual dermatoscopic imaging is the standard of care in selecting atypical skin lesions for biopsy, and at best it achieves 90% sensitivity but only 59% specificity when performed by an expert dermatologist. Many computer vision (CV) algorithms perform better than dermatologists in classifying skin lesions, although not significantly so in clinical practice. Meanwhile, open source deep learning (DL) techniques in CV have been gaining dominance since 2012 for image classification, and today DL can outperform humans in classifying millions of digital images with less than 5% error rates. Moreover, DL algorithms are readily run on commoditized hardware and have a strong online community of developers supporting their rapid adoption. In this work, we performed a successful pilot study to show proof of concept for applying DL to skin pathology from images. However, DL algorithms must be trained on very large labelled datasets of

  7. The Too-Much-Precision Effect.

    PubMed

    Loschelder, David D; Friese, Malte; Schaerer, Michael; Galinsky, Adam D

    2016-12-01

    Past research has suggested a fundamental principle of price precision: The more precise an opening price, the more it anchors counteroffers. The present research challenges this principle by demonstrating a too-much-precision effect. Five experiments (involving 1,320 experts and amateurs in real-estate, jewelry, car, and human-resources negotiations) showed that increasing the precision of an opening offer had positive linear effects for amateurs but inverted-U-shaped effects for experts. Anchor precision backfired because experts saw too much precision as reflecting a lack of competence. This negative effect held unless first movers gave rationales that boosted experts' perception of their competence. Statistical mediation and experimental moderation established the critical role of competence attributions. This research disentangles competing theoretical accounts (attribution of competence vs. scale granularity) and qualifies two putative truisms: that anchors affect experts and amateurs equally, and that more precise prices are linearly more potent anchors. The results refine current theoretical understanding of anchoring and have significant implications for everyday life.

  8. Evaluating measurements of carbon dioxide emissions using a precision source--A natural gas burner.

    PubMed

    Bryant, Rodney; Bundy, Matthew; Zong, Ruowen

    2015-07-01

    A natural gas burner has been used as a precise and accurate source for generating large quantities of carbon dioxide (CO2) to evaluate emissions measurements at near-industrial scale. Two methods for determining carbon dioxide emissions from stationary sources are considered here: predicting emissions from fuel consumption measurements (predicted emissions measurements) and directly measuring emissions quantities in the flue gas (direct emissions measurements). Uncertainty for the predicted emissions measurement was estimated at less than 1%. Uncertainty estimates for the direct emissions measurement of carbon dioxide were on the order of ±4%. The relative difference between the direct emissions measurements and the predicted emissions measurements was within the range of the measurement uncertainty, therefore demonstrating good agreement. The study demonstrates how independent methods are used to validate source emissions measurements, while also demonstrating how a fire research facility can be used as a precision test bed to evaluate and improve carbon dioxide emissions measurements from stationary sources. Fossil-fuel-consuming stationary sources such as electric power plants and industrial facilities account for more than half of the CO2 emissions in the United States. Therefore, accurate emissions measurements from these sources are critical for evaluating efforts to reduce greenhouse gas emissions. This study demonstrates how a surrogate for a stationary source, a fire research facility, can be used to evaluate the accuracy of measurements of CO2 emissions.

  9. Precise determination of protein extinction coefficients under native and denaturing conditions using SV-AUC.

    PubMed

    Hoffmann, Andreas; Grassl, Kerstin; Gommert, Janine; Schlesak, Christian; Bepperling, Alexander

    2018-04-17

    The accurate determination of protein concentration is an important though non-trivial task during the development of a biopharmaceutical. The fundamental prerequisite for this is the availability of an accurate extinction coefficient. Common approaches for the determination of an extinction coefficient for a given protein are either based on the theoretical prediction utilizing the amino acid sequence or the photometric determination combined with a measurement of absolute protein concentration. Here, we report on an improved SV-AUC based method utilizing an analytical ultracentrifuge equipped with absorbance and Rayleigh interference optics. Global fitting of datasets helped to overcome some of the obstacles encountered with the traditional method employing synthetic boundary cells. Careful calculation of dn/dc values taking glycosylation and solvent composition into account allowed the determination of the extinction coefficients of monoclonal antibodies and an Fc-fusion protein under native as well as under denaturing conditions. An intra-assay precision of 0.9% and an accuracy of 1.8% compared to the theoretical value was achieved for monoclonal antibodies. Due to the large number of data points of a single dataset, no meaningful difference between the ProteomeLab XL-I and the new Optima AUC platform could be observed. Thus, the AUC-based approach offers a precise, convenient and versatile alternative to conventional methods like total amino acid analysis (AAA).

  10. Precise orbit determination based on raw GPS measurements

    NASA Astrophysics Data System (ADS)

    Zehentner, Norbert; Mayer-Gürr, Torsten

    2016-03-01

    Precise orbit determination is an essential part of most scientific satellite missions. Highly accurate knowledge of the satellite position is used to geolocate measurements of the onboard sensors. For applications in the field of gravity field research, the position itself can be used as an observation. In this context, kinematic orbits of low Earth orbiters (LEO) are widely used, because they do not include a priori information about the gravity field. The limiting factor for the achievable accuracy of the gravity field through LEO positions is the orbit accuracy. We make use of raw global positioning system (GPS) observations to estimate the kinematic satellite positions. The method is based on the principles of precise point positioning. Systematic influences are reduced by modeling and correcting for all known error sources. Remaining effects, such as the ionospheric influence on the signal propagation, are either unknown or not known to a sufficient level of accuracy. These effects are modeled as unknown parameters in the estimation process. Although this reduces the redundancy in the adjustment, the resulting improvement in orbit accuracy leads to a better gravity field estimation. This paper describes our orbit determination approach and its mathematical background. Some examples of real-data applications highlight the feasibility of the orbit determination method based on raw GPS measurements. Its suitability for gravity field estimation is presented in a second step.

  11. Precision Optics Curriculum.

    ERIC Educational Resources Information Center

    Reid, Robert L.; And Others

    This guide outlines the competency-based, two-year precision optics curriculum that the American Precision Optics Manufacturers Association has proposed to fill the void that it suggests will soon exist as many of the master opticians currently employed retire. The model, which closely resembles the old European apprenticeship model, calls for 300…

  12. Precise Quantitation of MicroRNA in a Single Cell with Droplet Digital PCR Based on Ligation Reaction.

    PubMed

    Tian, Hui; Sun, Yuanyuan; Liu, Chenghui; Duan, Xinrui; Tang, Wei; Li, Zhengping

    2016-12-06

    MicroRNA (miRNA) analysis in a single cell is extremely important because it allows deep understanding of the exact correlation between the miRNAs and cell functions. Herein, we wish to report a highly sensitive and precisely quantitative assay for miRNA detection based on ligation-based droplet digital polymerase chain reaction (ddPCR), which permits the quantitation of miRNA in a single cell. In this ligation-based ddPCR assay, two target-specific oligonucleotide probes can be simply designed to be complementary to the half-sequence of the target miRNA, respectively, which avoids the sophisticated design of reverse transcription and provides high specificity to discriminate a single-base difference among miRNAs with simple operations. After the miRNA-templated ligation, the ddPCR partitions individual ligated products into a water-in-oil droplet and digitally counts the fluorescence-positive and negative droplets after PCR amplification for quantification of the target molecules, which possesses the power of precise quantitation and robustness to variation in PCR efficiency. By integrating the advantages of the precise quantification of ddPCR and the simplicity of the ligation-based PCR, the proposed method can sensitively measure let-7a miRNA with a detection limit of 20 aM (12 copies per microliter), and even a single-base difference can be discriminated in let-7 family members. More importantly, due to its high selectivity and sensitivity, the proposed method can achieve precise quantitation of miRNAs in single-cell lysate. Therefore, the ligation-based ddPCR assay may serve as a useful tool to exactly reveal the miRNAs' actions in a single cell, which is of great importance for the study of miRNAs' biofunction as well as for the related biomedical studies.
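
    A short sketch of the Poisson statistics that make droplet digital PCR absolutely quantitative: the mean number of target copies per droplet follows from the fraction of negative droplets, and division by the droplet volume gives a concentration. The droplet volume used here (about 0.85 nL) and the counts are illustrative assumptions, not values from the paper:

      import math

      def ddpcr_concentration(positive_droplets, total_droplets, droplet_volume_uL=0.00085):
          # Poisson partitioning: lambda = -ln(fraction of negative droplets)
          # is the mean number of target copies per droplet.
          negative_fraction = (total_droplets - positive_droplets) / total_droplets
          mean_copies_per_droplet = -math.log(negative_fraction)
          return mean_copies_per_droplet / droplet_volume_uL   # copies per microliter

      # Example: 2,500 positive droplets out of 18,000 accepted droplets.
      print(round(ddpcr_concentration(2500, 18000), 1))  # ~176 copies/uL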

  13. On the precision of experimentally determined protein folding rates and φ-values

    PubMed Central

    De Los Rios, Miguel A.; Muralidhara, B.K.; Wildes, David; Sosnick, Tobin R.; Marqusee, Susan; Wittung-Stafshede, Pernilla; Plaxco, Kevin W.; Ruczinski, Ingo

    2006-01-01

    φ-Values, a relatively direct probe of transition-state structure, are an important benchmark in both experimental and theoretical studies of protein folding. Recently, however, significant controversy has emerged regarding the reliability with which φ-values can be determined experimentally: Because φ is a ratio of differences between experimental observables it is extremely sensitive to errors in those observations when the differences are small. Here we address this issue directly by performing blind, replicate measurements in three laboratories. By monitoring within- and between-laboratory variability, we have determined the precision with which folding rates and φ-values are measured using generally accepted laboratory practices and under conditions typical of our laboratories. We find that, unless the change in free energy associated with the probing mutation is quite large, the precision of φ-values is relatively poor when determined using rates extrapolated to the absence of denaturant. In contrast, when we employ rates estimated at nonzero denaturant concentrations or assume that the slopes of the chevron arms (mf and mu) are invariant upon mutation, the precision of our estimates of φ is significantly improved. Nevertheless, the reproducibility we thus obtain still compares poorly with the confidence intervals typically reported in the literature. This discrepancy appears to arise due to differences in how precision is calculated, the dependence of precision on the number of data points employed in defining a chevron, and interlaboratory sources of variability that may have been largely ignored in the prior literature. PMID:16501226

  14. Preformed Frequencies of Cytomegalovirus (CMV)–Specific Memory T and B Cells Identify Protected CMV-Sensitized Individuals Among Seronegative Kidney Transplant Recipients

    PubMed Central

    Lúcia, Marc; Crespo, Elena; Melilli, Edoardo; Cruzado, Josep M.; Luque, Sergi; Llaudó, Inés; Niubó, Jordi; Torras, Joan; Fernandez, Núria; Grinyó, Josep M.; Bestard, Oriol

    2014-01-01

    Background. Cytomegalovirus (CMV) infection remains a major complication after kidney transplantation. Baseline CMV risk is typically determined by the serological presence of preformed CMV-specific immunoglobulin (Ig) G antibodies, even though T-cell responses to major viral antigens are crucial when controlling viral replication. Some IgG-seronegative patients who receive an IgG-seropositive allograft do not develop CMV infection despite not receiving prophylaxis. We hypothesized that a more precise evaluation of pretransplant CMV-specific immune-sensitization using the B and T-cell enzyme-linked immunospot assays may identify CMV-sensitized individuals more accurately, regardless of serological evidence of CMV-specific IgG titers. Methods. We compared the presence of preformed CMV-specific memory B and T cells in kidney transplant recipients between 43 CMV IgG–seronegative (sR−) and 86 CMV IgG–seropositive (sR+) patients. Clinical outcome was evaluated in both groups. Results. All sR+ patients showed a wide range of CMV-specific memory T- and B-cell responses. High memory T- and B-cell frequencies were also clearly detected in 30% of sR− patients, and those with high CMV-specific T-cell frequencies had a significantly lower incidence of late CMV infection after prophylactic therapy. Receiver operating characteristic curve analysis for predicting CMV viremia and disease showed a high area under the receiver operating characteristic curve (>0.8), which translated into a high sensitivity and negative predictive value of the test. Conclusions. Assessment of CMV-specific memory T- and B-cell responses before kidney transplantation among sR− recipients may help identify immunized individuals more precisely, being ultimately at lower risk for CMV infection. PMID:25048845

  15. Improving precision of glomerular filtration rate estimating model by ensemble learning.

    PubMed

    Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi

    2017-11-09

    Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m² in the developmental and 53.4 ml/min/1.73 m² in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m²) than in the new regression model (IQR = 14.0 ml/min/1.73 m², P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m², 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.
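
    A simplified sketch of the ensemble idea described above, averaging the predictions of an artificial neural network, a support vector machine, and a linear regression trained on the same three predictors (age, sex, serum creatinine); the synthetic data, model settings, and error summary are placeholders rather than the study's own:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVR

      rng = np.random.default_rng(1)
      n = 1000
      age = rng.uniform(20, 80, n)
      sex = rng.integers(0, 2, n).astype(float)      # 0 = female, 1 = male
      creatinine = rng.uniform(0.5, 3.0, n)          # serum creatinine, mg/dL
      gfr = 140.0 - 0.6 * age + 8.0 * sex - 25.0 * creatinine + rng.normal(0, 8, n)

      X = np.column_stack([age, sex, creatinine])
      models = [
          make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=1)),
          make_pipeline(StandardScaler(), SVR(C=10.0)),
          LinearRegression(),
      ]
      for model in models:
          model.fit(X, gfr)

      # Ensemble estimate = average of the ANN, SVM, and regression predictions.
      ensemble = np.mean([model.predict(X) for model in models], axis=0)
      errors = ensemble - gfr
      print("IQR of estimation error:", np.percentile(errors, 75) - np.percentile(errors, 25))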

  16. THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Habib, Salman; Biswas, Rahul

    2016-04-01

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  17. The mira-titan universe. Precision predictions for dark energy surveys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heitmann, Katrin; Bingham, Derek; Lawrence, Earl

    2016-03-28

    Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.

  18. Precise comparisons of bottom-pressure and altimetric ocean tides

    NASA Astrophysics Data System (ADS)

    Ray, R. D.

    2013-09-01

    A new set of pelagic tide determinations is constructed from seafloor pressure measurements obtained at 151 sites in the deep ocean. To maximize precision of estimated tides, only stations with long time series are used; median time series length is 567 days. Geographical coverage is considerably improved by use of the international tsunami network, but coverage in the Indian Ocean and South Pacific is still weak. As a tool for assessing global ocean tide models, the data set is considerably more reliable than older data sets: the root-mean-square difference with a recent altimetric tide model is approximately 5 mm for the M2 constituent. Precision is sufficiently high to allow secondary effects in altimetric and bottom-pressure tide differences to be studied. The atmospheric tide in bottom pressure is clearly detected at the S1, S2, and T2 frequencies. The altimetric tide model is improved if satellite altimetry is corrected for crustal loading by the atmospheric tide. Models of the solid body tide can also be constrained. The free core-nutation effect in the K1 Love number is easily detected, but the overall estimates are not as accurate as a recent determination with very long baseline interferometry.

  19. Precision Stellar and Planetary Astrophysics with TESS and Gaia

    NASA Astrophysics Data System (ADS)

    Stevens, Daniel J.; KELT Collaboration

    2018-01-01

    There is an ever-present need for precise and accurate stellar parameters, particularly for low-mass stars. For example, some fraction of measured M dwarf radii are inflated and have effective temperatures that are suppressed relative to predictions from models, but the physical cause of these effects is still uncertain. This is exacerbated by the fact that only a handful of M dwarfs -- all from double-lined eclipsing binaries (EBs) -- have both masses and radii measured to 3% or better. In the Gaia era, we can now measure model-independent masses and radii for single-lined EBs, thus expanding the sample of stars with precisely measured parameters by at least an order of magnitude, in principle. I will illustrate how one can combine Gaia parallaxes and broad-band stellar fluxes with the eclipse and radial velocity data to provide model-independent masses and radii. I will present our expected achievable constraints on the masses and radii of single-lined EBs. I will discuss both our current effort to turn several dozens of single-lined EBs discovered by the KELT and HATNet surveys into a catalog of exquisitely characterized stars and exoplanets as well as the prospects for achieving similar science for a much larger number of systems with TESS.

  20. Precise Comparisons of Bottom-Pressure and Altimetric Ocean Tides

    NASA Technical Reports Server (NTRS)

    Ray, Richard D.

    2013-01-01

    A new set of pelagic tide determinations is constructed from seafloor pressure measurements obtained at 151 sites in the deep ocean. To maximize precision of estimated tides, only stations with long time series are used; median time series length is 567 days. Geographical coverage is considerably improved by use of the international tsunami network, but coverage in the Indian Ocean and South Pacific is still weak. As a tool for assessing global ocean tide models, the data set is considerably more reliable than older data sets: the root-mean-square difference with a recent altimetric tide model is approximately 5 mm for the M2 constituent. Precision is sufficiently high to allow secondary effects in altimetric and bottom-pressure tide differences to be studied. The atmospheric tide in bottom pressure is clearly detected at the S1, S2, and T2 frequencies. The altimetric tide model is improved if satellite altimetry is corrected for crustal loading by the atmospheric tide. Models of the solid body tide can also be constrained. The free core-nutation effect in the K1 Love number is easily detected, but the overall estimates are not as accurate as a recent determination with very long baseline interferometry.

  1. Application of Smart Infrastructure Systems approach to precision medicine.

    PubMed

    Govindaraju, Diddahally R; Annaswamy, Anuradha M

    2015-12-01

    All biological variation is a hierarchically organized, dynamic network system of genomic components, organelles, cells, tissues, organs, individuals, families, populations and metapopulations. Individuals are axial in this hierarchy, as they represent antecedent, attendant and anticipated aspects of health, disease, evolution and medical care. Humans show individual-specific genetic and clinical features such as complexity, cooperation, resilience, robustness, vulnerability, self-organization, and latent and emergent behavior during their development, growth and senescence. Accurate collection, measurement, organization and analysis of individual-specific data, embedded at all stratified levels of biological, demographic and cultural diversity (the big data), are necessary to make informed decisions on health, disease and longevity, which is a central theme of the precision medicine initiative (PMI). This initiative also calls for the development of novel analytical approaches to handle complex multidimensional data. Here we suggest the application of the Smart Infrastructure Systems (SIS) approach to accomplish some of the goals set forth by the PMI, on the premise that biological systems and SIS share many common features. The latter approach has been successfully employed in managing the complex networks of non-linear adaptive controls commonly encountered in smart engineering systems. We highlight their concordance and discuss the utility of the SIS approach in precision medicine programs.

  2. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets.

    PubMed

    Saito, Takaya; Rehmsmeier, Marc

    2015-01-01

    Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets.
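
    A short sketch contrasting the two summaries on a strongly imbalanced synthetic dataset: with roughly 1% positives, the ROC area can look reassuring while the average precision (the area under the precision-recall curve) reveals how unreliable the positive predictions actually are. The dataset and classifier are illustrative only:

      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import average_precision_score, roc_auc_score
      from sklearn.model_selection import train_test_split

      # Roughly 1% positives: negatives vastly outnumber positives.
      X, y = make_classification(n_samples=20000, n_features=20, weights=[0.99, 0.01],
                                 random_state=0)
      X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

      scores = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]

      print("ROC AUC:          ", round(roc_auc_score(y_test, scores), 3))
      print("Average precision:", round(average_precision_score(y_test, scores), 3))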

  3. LOFAR Lightning Imaging: Mapping Lightning With Nanosecond Precision

    NASA Astrophysics Data System (ADS)

    Hare, B. M.; Scholten, O.; Bonardi, A.; Buitink, S.; Corstanje, A.; Ebert, U.; Falcke, H.; Hörandel, J. R.; Leijnse, H.; Mitra, P.; Mulrey, K.; Nelles, A.; Rachen, J. P.; Rossetto, L.; Rutjes, C.; Schellart, P.; Thoudam, S.; Trinh, T. N. G.; ter Veen, S.; Winchen, T.

    2018-03-01

    Lightning mapping technology has proven instrumental in understanding lightning. In this work we present a pipeline that can use lightning observed by the LOw-Frequency ARray (LOFAR) radio telescope to construct a 3-D map of the flash. We show that LOFAR has unparalleled precision, on the order of meters, even for lightning flashes that are over 20 km outside the area enclosed by LOFAR antennas (~3,200 km²), and can potentially locate over 10,000 sources per lightning flash. We also show that LOFAR is the first lightning mapping system that is sensitive to the spatial structure of the electrical current during individual lightning leader steps.

  4. Optical coherence tomography allows for the reliable identification of laryngeal epithelial dysplasia and for precise biopsy: a clinicopathological study of 61 patients undergoing microlaryngoscopy.

    PubMed

    Just, Tino; Lankenau, Eva; Prall, Friedrich; Hüttmann, Gereon; Pau, Hans Wilhelm; Sommer, Konrad

    2010-10-01

    A newly developed microscope-based spectral-domain optical coherence tomography (SD-OCT) device and an endoscope-based time-domain OCT (TD-OCT) were used to assess the inter-rater reliability, sensitivity, specificity, and accuracy of benign and dysplastic laryngeal epithelial lesions. Prospective study. OCT during microlaryngoscopy was done on 35 patients with an endoscope-based TD-OCT, and on 26 patients by an SD-OCT system integrated into an operating microscope. Biopsies were taken from microscopically suspicious lesions, allowing a comparative study of OCT images and histology. Thickness of the epithelium was seen to be the main criterion for degree of dysplasia. The inter-rater reliability for two observers was found to be kappa = 0.74 (P <.001) for OCT. OCT provided test outcomes for differentiation between benign laryngeal lesions and dysplasia/CIS with sensitivity of 88%, specificity of 89%, PPV of 85%, NPV of 91%, and predictive accuracy of 88%. However, because of the limited penetration depth of the laser light, primarily in hyperkeratotic lesions (thickness above 1.5 mm), the basal cell layer was no longer visible, precluding reliable assessment of such lesions. OCT allows for a fairly accurate assessment of benign and dysplastic laryngeal epithelial lesions and greatly facilitates the taking of precise biopsies. Laryngoscope, 2010.

  5. Exploring the sensitivity of current and future experiments to θ⊙

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Abhijit; Choubey, Sandhya; Goswami, Srubabati

    2003-06-01

    The first results from the KamLAND experiment in conjunction with the global solar neutrino data have demonstrated the striking ability to constrain Δm²⊙ (Δm²21) very precisely. However, the allowed range of θ⊙ (θ12) did not change much with the inclusion of the KamLAND results. In this paper we probe whether future data from KamLAND can increase the accuracy of the allowed range in θ⊙ and conclude that even after 3 kton yr of statistics and with the most optimistic error estimates, KamLAND may find it hard to significantly improve the bounds on the mixing angle obtained from the solar neutrino data. We discuss the θ12 sensitivity of the survival probabilities in matter (vacuum) as relevant for the solar (KamLAND) experiments. We find that the presence of matter effects in the survival probabilities for ⁸B neutrinos gives the solar neutrino experiments SK and SNO an edge over KamLAND as far as θ12 sensitivity is concerned, particularly near maximal mixing. Among solar neutrino experiments we identify SNO as the most promising candidate for constraining θ12 and make a projected sensitivity test for the mixing angle by reducing the error in the neutral current measurement at SNO. Finally, we argue that the most accurate bounds on θ12 can be achieved in a reactor experiment, if the corresponding baseline and energy can be tuned to a minimum in the survival probability. We propose a new reactor experiment that can determine the value of tan²θ12 to within 14%. We also discuss the future Borexino and LowNu experiments.
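
    For reference, the textbook two-flavour vacuum survival probability behind the KamLAND part of this argument (quoted as standard background, not a result of the paper) is:

    ```latex
    P_{\bar{\nu}_e \to \bar{\nu}_e} \;\simeq\; 1 - \sin^2 2\theta_{12}\,
      \sin^2\!\left(\frac{\Delta m^2_{21} L}{4E}\right)
    ```

    The dependence on θ12 is strongest when the oscillating factor is near 1, i.e. when the baseline L and energy E sit at a survival-probability minimum, which is why the proposed reactor experiment tunes L/E to such a minimum.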

  6. Optical Manipulation along Optical Axis with Polarization Sensitive Meta-lens.

    PubMed

    Markovich, Hen; Shishkin, Ivan; Hendler, Netta; Ginzburg, Pavel

    2018-06-27

    The ability to manipulate small objects with focused laser beams opens a broad spectrum of opportunities in fundamental and applied studies where precise control over mechanical path and stability is required. While conventional optical tweezers are based on bulky diffractive optical elements, compact trapping devices that can be integrated within a fluid cell are in high demand. Here, a plasmonic polarization-sensitive metasurface-based lens, embedded within a fluid, is demonstrated to provide several stable trapping centers along the optical axis. The position of a particle is controlled with the polarization of the incident light, which interacts with plasmonic nanoscale patch antennas organized within overlapping Fresnel zones of the lens. While standard diffractive optical elements struggle to trap objects in the lateral direction outside the depth of focus, the bi-focal Fresnel meta-lens demonstrates the capability to manipulate a bead along a 4 micrometer line. An additional fluorescence module, incorporated within the optical trapping setup, enabled accurate mapping of the optical potential via a particle tracking algorithm. Auxiliary micro- and nanostructures, integrated within fluidic devices, provide numerous opportunities for flexible optomechanical manipulation, including transport, trapping and sorting, which are in high demand in lab-on-a-chip applications and many others.

  7. Measuring the bias, precision, accuracy, and validity of self-reported height and weight in assessing overweight and obesity status among adolescents using a surveillance system.

    PubMed

    Pérez, Adriana; Gabriel, Kelley; Nehme, Eileen K; Mandell, Dorothy J; Hoelscher, Deanna M

    2015-07-27

    Evidence regarding bias, precision, and accuracy in adolescent self-reported height and weight across demographic subpopulations is lacking. The bias, precision, and accuracy of adolescent self-reported height and weight across subpopulations were examined using a large, diverse and representative sample of adolescents. A second objective was to develop correction equations for self-reported height and weight to provide more accurate estimates of body mass index (BMI) and weight status. A total of 24,221 students from 8th and 11th grade in Texas participated in the School Physical Activity and Nutrition (SPAN) surveillance system in years 2000-2002 and 2004-2005. To assess bias, the differences between the self-reported and objective measures for height and weight were estimated. To assess precision and accuracy, Lin's concordance correlation coefficient was used. BMI was estimated for self-reported and objective measures. The prevalence of students' weight status was estimated using self-reported and objective measures; absolute (bias) and relative error (relative bias) were assessed subsequently. Correction equations for sex and race/ethnicity subpopulations were developed to estimate objective measures of height, weight and BMI from self-reported measures using weighted linear regression. Sensitivity, specificity and positive predictive values of weight status classification using self-reported measures and correction equations are assessed by sex and grade. Students in 8th and 11th grade overestimated their height from 0.68 cm (White girls) to 2.02 cm (African-American boys), and underestimated their weight from 0.4 kg (Hispanic girls) to 0.98 kg (African-American girls). The differences in self-reported versus objectively-measured height and weight resulted in underestimation of BMI ranging from -0.23 kg/m² (White boys) to -0.7 kg/m² (African-American girls). The sensitivity of self-reported measures to classify weight status as obese was 70.8% and 81
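
    A minimal sketch of the two agreement statistics used above, assuming paired arrays of self-reported and objectively measured values (the data and variable names are hypothetical):

    ```python
    import numpy as np

    def bias_and_lins_ccc(self_report, measured):
        """Mean bias and Lin's concordance correlation coefficient (CCC)
        for paired self-reported vs. objectively measured values."""
        x = np.asarray(self_report, dtype=float)
        y = np.asarray(measured, dtype=float)
        bias = np.mean(x - y)                  # positive => over-reporting
        sxy = np.cov(x, y, bias=True)[0, 1]    # population covariance
        ccc = 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
        return bias, ccc

    # Hypothetical example: self-reported vs. measured heights (cm).
    sr = [170.0, 165.5, 158.0, 172.0, 180.5]
    ms = [168.8, 164.0, 157.5, 170.0, 179.0]
    print(bias_and_lins_ccc(sr, ms))
    ```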

  8. Determination of aerodynamic sensitivity coefficients for wings in transonic flow

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.; El-Banna, Hesham M.

    1992-01-01

    The quasianalytical approach is applied to the 3-D full potential equation to compute wing aerodynamic sensitivity coefficients in the transonic regime. Symbolic manipulation is used to reduce the effort associated with obtaining the sensitivity equations, and the large sensitivity system is solved using 'state of the art' routines. The quasianalytical approach is believed to be reasonably accurate and computationally efficient for 3-D problems.

  9. Utilization of independent component analysis for accurate pathological ripple detection in intracranial EEG recordings recorded extra- and intra-operatively.

    PubMed

    Shimamoto, Shoichi; Waldman, Zachary J; Orosz, Iren; Song, Inkyung; Bragin, Anatol; Fried, Itzhak; Engel, Jerome; Staba, Richard; Sharan, Ashwini; Wu, Chengyuan; Sperling, Michael R; Weiss, Shennan A

    2018-01-01

    To develop and validate a detector that identifies ripple (80-200 Hz) events in intracranial EEG (iEEG) recordings in a referential montage and utilizes independent component analysis (ICA) to eliminate or reduce high-frequency artifact contamination. Also, investigate the correspondence of detected ripples and the seizure onset zone (SOZ). iEEG recordings from 16 patients were first band-pass filtered (80-600 Hz) and Infomax ICA was next applied to derive the first independent component (IC1). IC1 was subsequently pruned, and an artifact index was derived to reduce the identification of high-frequency events introduced by the reference electrode signal. A Hilbert detector identified ripple events in the processed iEEG recordings using amplitude and duration criteria. The identified ripple events were further classified and characterized as true or false ripples on spikes, or ripples on oscillations, by applying a topographical analysis of their time-frequency plots, and confirmed by visual inspection. The signal to noise ratio was improved by pruning IC1. The precision of the detector for ripple events was 91.27 ± 4.3%, and the sensitivity of the detector was 79.4 ± 3.0% (N = 16 patients, 5842 ripple events). The sensitivity and precision of the detector were equivalent in iEEG recordings obtained during sleep or intra-operatively. Across all the patients, true ripple on spike rates and also the rates of false ripples on spikes, which were generated due to filter ringing, classified the seizure onset zone (SOZ) with an area under the receiver operating curve (AUROC) of >76%. The magnitude and spectral content of true ripples on spikes generated in the SOZ were distinct as compared with the ripples generated in the non-seizure onset zone (NSOZ) (p < .001). Utilizing ICA to analyze iEEG recordings in a referential montage provides many benefits to the study of high-frequency oscillations. The ripple rates and properties defined using this approach may accurately delineate the
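
    A minimal sketch of the amplitude/duration stage of such a Hilbert detector is shown below; it omits the ICA pruning, artifact index, and time-frequency classification described above, and the filter order, threshold, and minimum duration are assumptions.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def detect_ripples(ieeg, fs, band=(80, 200), thresh_sd=3.0, min_dur=0.02):
        """Flag candidate ripple events on a single iEEG channel.

        Band-pass filter, take the Hilbert envelope, and keep segments where
        the envelope exceeds mean + thresh_sd*SD for at least min_dur seconds.
        Illustrative only -- the published detector also prunes IC1, applies
        an artifact index, and classifies events in time-frequency space.
        """
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        envelope = np.abs(hilbert(filtfilt(b, a, ieeg)))
        above = envelope > envelope.mean() + thresh_sd * envelope.std()

        events, start = [], None
        for i, flag in enumerate(above):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                if (i - start) / fs >= min_dur:
                    events.append((start / fs, i / fs))  # (onset, offset) in s
                start = None
        return events
    ```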

  10. Utilization of independent component analysis for accurate pathological ripple detection in intracranial EEG recordings recorded extra- and intra-operatively

    PubMed Central

    Shimamoto, Shoichi; Waldman, Zachary J.; Orosz, Iren; Song, Inkyung; Bragin, Anatol; Fried, Itzhak; Engel, Jerome; Staba, Richard; Sharan, Ashwini; Wu, Chengyuan; Sperling, Michael R.; Weiss, Shennan A.

    2018-01-01

    Objective To develop and validate a detector that identifies ripple (80–200 Hz) events in intracranial EEG (iEEG) recordings in a referential montage and utilizes independent component analysis (ICA) to eliminate or reduce high-frequency artifact contamination. Also, investigate the correspondence of detected ripples and the seizure onset zone (SOZ). Methods iEEG recordings from 16 patients were first band-pass filtered (80–600 Hz) and Infomax ICA was next applied to derive the first independent component (IC1). IC1 was subsequently pruned, and an artifact index was derived to reduce the identification of high-frequency events introduced by the reference electrode signal. A Hilbert detector identified ripple events in the processed iEEG recordings using amplitude and duration criteria. The identified ripple events were further classified and characterized as true or false ripple on spikes, or ripples on oscillations by utilizing a topographical analysis to their time-frequency plot, and confirmed by visual inspection. Results The signal to noise ratio was improved by pruning IC1. The precision of the detector for ripple events was 91.27 ± 4.3%, and the sensitivity of the detector was 79.4 ± 3.0% (N = 16 patients, 5842 ripple events). The sensitivity and precision of the detector was equivalent in iEEG recordings obtained during sleep or intra-operatively. Across all the patients, true ripple on spike rates and also the rates of false ripple on spikes, that were generated due to filter ringing, classified the seizure onset zone (SOZ) with an area under the receiver operating curve (AUROC) of >76%. The magnitude and spectral content of true ripple on spikes generated in the SOZ was distinct as compared with the ripples generated in the NSOZ (p < .001). Conclusions Utilizing ICA to analyze iEEG recordings in referential montage provides many benefits to the study of high-frequency oscillations. The ripple rates and properties defined using this approach may

  11. Pulse oximeter accuracy and precision affected by sensor location in cyanotic children.

    PubMed

    Sedaghat-Yazdi, Farshad; Torres, Adalberto; Fortuna, Randall; Geiss, Dale M

    2008-07-01

    Children's digits are often too small for proper attachment of oximeter sensors, necessitating sensor placement on the sole of the foot or palm of the hand. No study has determined what effect these sensor locations have on the accuracy and precision of this technology. The objective of this study was to assess the effect of sensor location on pulse oximeter accuracy (i.e., bias) and precision in critically ill children. Prospective, observational study with consecutive sampling. Tertiary care, pediatric intensive care unit. Fifty critically ill children, newborn to 2 yrs of age, with an indwelling arterial catheter. Forty-seven of 50 (94%) patients were postcardiac surgery. None. Co-oximeter-measured arterial oxygen saturation (SaO2) was compared with simultaneously obtained pulse oximetry saturations (SpO2). A total of 98 measurements were obtained, 48 measurements in the upper extremities (finger and palm) and 50 measurements in the lower extremities (toe and sole). The median SaO2 was 92% (66% to 100%). There was a significant difference in bias (i.e., average SpO2 - SaO2) and precision (±1 SD) when the sole and toe were compared (sole, 2.9 ± 3.9 vs. toe, 1.6 ± 2.2, p = .02) but no significant difference in bias and precision between the palm and the finger (palm, 1.4 ± 3.2 vs. finger, 1.2 ± 2.3, p = .99). There was a significant difference in bias ± precision when the SaO2 was <90% compared with when SaO2 was ≥90% in the sole (6.0 ± 5.7 vs. 1.8 ± 2.1, p = .002) and palm (4.5 ± 4.5 vs. 0.7 ± 2.4, p = .006) but no significant difference in the finger (1.8 ± 3.8 vs. 1.1 ± 1.8, p = .95) or toe (1.9 ± 2.9 vs. 1.6 ± 1.9, p = .65). The Philips M1020A pulse oximeter and Nellcor MAX-N sensors were less accurate and precise when used on the sole of the foot or palm of the hand of a child with an SaO2 <90%.
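
    The bias and precision statistics reported here are straightforward to compute from paired readings; a minimal sketch with invented values:

    ```python
    import numpy as np

    def bias_precision(spo2, sao2):
        """Bias = mean(SpO2 - SaO2); precision = 1 SD of the differences."""
        diff = np.asarray(spo2, dtype=float) - np.asarray(sao2, dtype=float)
        return diff.mean(), diff.std(ddof=1)

    # Hypothetical sole-of-foot readings at SaO2 < 90% (values invented).
    spo2 = [88, 91, 86, 93, 90]
    sao2 = [84, 86, 81, 88, 85]
    print("bias = %.1f, precision (SD) = %.1f" % bias_precision(spo2, sao2))
    ```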

  12. The MOLLER Experiment: "An Ultra-precise Measurement of the Weak Charge of the Electron using Møller Scattering"

    NASA Astrophysics Data System (ADS)

    Beminiwattha, Rakitha; Moller Collaboration

    2017-09-01

    Parity Violating Electron Scattering (PVES) is an extremely successful precision frontier tool that has been used for testing the Standard Model (SM) and understanding nucleon structure. Several generations of highly successful PVES programs at SLAC, MIT-Bates, MAMI-Mainz, and Jefferson Lab have contributed to the understanding of nucleon structure and to testing the SM. But missing phenomena like the matter-antimatter asymmetry, neutrino flavor oscillations, and dark matter and energy suggest that the SM is only a 'low energy' effective theory. The MOLLER experiment at Jefferson Lab will measure the weak charge of the electron, QWe = 1 - 4sin²θW, with a precision of 2.4% by measuring the parity-violating asymmetry in electron-electron (Møller) scattering, and will be sensitive to subtle but measurable deviations from precisely calculable predictions of the SM. By providing the best contact-interaction search for leptons at low or high energy, MOLLER will probe physics beyond the Standard Model with sensitivity to mass scales of new parity-violating physics up to 7.5 TeV. An overview of the experiment and recent pre-R&D progress will be reported.

  13. Production of zinc oxide nanowire powder with precisely defined morphology

    NASA Astrophysics Data System (ADS)

    Mičová, Júlia; Remeš, Zdeněk; Chan, Yu-Ying

    2017-12-01

    Interest in zinc oxide is increasing thanks to its unique chemical and physical properties. Our attention has focused on the preparation of a powder of 1D ZnO nanowire nanostructures with precisely defined morphology, including characterization of size (length and diameter) and shape by scanning electron microscopy (SEM). We compared the SEM results with the dynamic light scattering (DLS) technique and found that the SEM method gives more accurate results. We propose a process for transforming ZnO nanowires grown on substrates into ZnO nanowire powder by ultrasonic peeling into a colloid followed by lyophilization. This method of mass-producing ZnO nanowire powder has several advantages: it is simple, cost-effective, large-scale and environmentally friendly.

  14. Comprehensive and accurate tracking of carbon origin of LC-tandem mass spectrometry collisional fragments for 13C-MFA.

    PubMed

    Kappelmann, Jannick; Klein, Bianca; Geilenkirchen, Petra; Noack, Stephan

    2017-03-01

    In recent years the benefit of measuring positionally resolved 13C-labeling enrichment from tandem mass spectrometry (MS/MS) collisional fragments for improved precision of 13C-Metabolic Flux Analysis (13C-MFA) has become evident. However, the usage of positional labeling information for 13C-MFA faces two challenges: (1) the mass spectrometric acquisition of a large number of potentially interfering mass transitions may hamper accuracy and sensitivity; (2) the positional identity of carbon atoms of product ions needs to be known. The present contribution addresses the latter challenge by deducing the maximal positional labeling information contained in LC-ESI-MS/MS spectra of product anions of central metabolism as well as product cations of amino acids. For this purpose, we draw on accurate mass spectrometry, selectively labeled standards, and published fragmentation pathways to structurally annotate all dominant mass peaks of a large collection of metabolites, some of them with a complete fragmentation pathway. Compiling all available information, we arrive at the most detailed map of carbon atom fate of LC-ESI-MS/MS collisional fragments yet, comprising 170 intense and structurally annotated product ions with unique carbon origin from 76 precursor ions of 72 metabolites. Our 13C data prove that heuristic fragmentation rules often fail to yield correct fragment structures, and we expose common pitfalls in the structural annotation of product ions. We show that the positionally resolved 13C-label information contained in the product ions that we structurally annotated allows inference of the entire isotopomer distribution of several central metabolism intermediates, which is experimentally demonstrated for malate using quadrupole-time-of-flight MS technology. Finally, the inclusion of the label information from a subset of these fragments improves flux precision in a Corynebacterium glutamicum model of the central carbon metabolism.

  15. Scaling up the precision in a ytterbium Bose-Einstein condensate interferometer

    NASA Astrophysics Data System (ADS)

    McAlpine, Katherine; Plotkin-Swing, Benjamin; Gochnauer, Daniel; Saxberg, Brendan; Gupta, Subhadeep

    2016-05-01

    We report on progress toward a high-precision ytterbium (Yb) Bose-Einstein condensate (BEC) interferometer, with the goal of measuring h/m and thus the fine structure constant α. Here h is Planck's constant and m is the mass of a Yb atom. The use of the non-magnetic Yb atom makes our experiment insensitive to magnetic field noise. Our chosen symmetric 3-path interferometer geometry suppresses errors from vibration, rotation, and acceleration. The precision scales with the phase accrued due to the kinetic energy difference between the interferometer arms, resulting in a quadratic sensitivity to the momentum difference. We are installing and testing the laser pulses for large momentum transfer via Bloch oscillations. We will report on Yb BEC production in a new apparatus and progress toward realizing the atom optical elements for high precision measurements. We will also discuss approaches to mitigate two important systematics: (i) atom interaction effects can be suppressed by creating the BEC in a dynamically shaped optical trap to reduce the density; (ii) diffraction phase effects from the various atom-optical elements can be accounted for through an analysis of the light-atom interaction for each pulse.
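
    As background (standard relations for this class of recoil interferometers, not results quoted from the abstract): for arms separated by n photon-recoil momenta ħk, the phase accrued over an interrogation time T grows quadratically with the momentum splitting, and h/m determines α through the Rydberg constant:

    ```latex
    \phi \;=\; \frac{E_{\mathrm{kin}}\,T}{\hbar}
         \;=\; \frac{(n\hbar k)^2}{2m}\,\frac{T}{\hbar} \;\propto\; n^2,
    \qquad
    \alpha^2 \;=\; \frac{2 R_\infty}{c}\,\frac{m}{m_e}\,\frac{h}{m}
    ```

    This is why large momentum transfer (large n via Bloch oscillations) scales up the precision of the h/m, and hence α, determination.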

  16. PRECISE ANGLE MONITOR BASED ON THE CONCEPT OF PENCIL-BEAM INTERFEROMETRY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    QIAN,S.; TAKACS,P.

    2000-07-30

    Precise angle monitoring is a very important metrology task for research, development and industrial applications. The autocollimator, based on the principles of geometric optics, is one of the most powerful and widely applied instruments for small-angle monitoring. In this paper the authors introduce a new precise angle monitoring system, the Pencil-beam Angle Monitor (PAM), based on pencil-beam interferometry. Its principle of operation is a combination of physical and geometrical optics. The angle calculation method is similar to that of the autocollimator; however, where the autocollimator creates a cross image, the pencil-beam angle monitoring system produces an interference fringe on the focal plane. The advantages of the PAM are: high angular sensitivity; long-term stability, making angle monitoring over long time periods possible; high measurement accuracy, on the order of sub-microradians; the ability to measure simultaneously in two perpendicular directions or on two different objects; the possibility of dynamic measurement; insensitivity to vibration and air turbulence; automatic display, storage and analysis by computer; a small beam diameter, making alignment extremely easy; and a longer test distance. Some test examples are presented.
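
    The angle-from-displacement relation that the PAM shares with the autocollimator is the standard geometric-optics result (stated here as background): a tilt θ of the reflecting surface deflects the return beam by 2θ, so the displacement d of the fringe (or image) on the focal plane of a lens of focal length f gives

    ```latex
    d = f \tan(2\theta) \;\approx\; 2 f \theta
    \quad\Longrightarrow\quad
    \theta \approx \frac{d}{2f} \qquad (\theta \ll 1)
    ```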

  17. Precision medicine in cardiology.

    PubMed

    Antman, Elliott M; Loscalzo, Joseph

    2016-10-01

    The cardiovascular research and clinical communities are ideally positioned to address the epidemic of noncommunicable causes of death, as well as advance our understanding of human health and disease, through the development and implementation of precision medicine. New tools will be needed for describing the cardiovascular health status of individuals and populations, including 'omic' data, exposome and social determinants of health, the microbiome, behaviours and motivations, patient-generated data, and the array of data in electronic medical records. Cardiovascular specialists can build on their experience and use precision medicine to facilitate discovery science and improve the efficiency of clinical research, with the goal of providing more precise information to improve the health of individuals and populations. Overcoming the barriers to implementing precision medicine will require addressing a range of technical and sociopolitical issues. Health care under precision medicine will become a more integrated, dynamic system, in which patients are no longer a passive entity on whom measurements are made, but instead are central stakeholders who contribute data and participate actively in shared decision-making. Many traditionally defined diseases have common mechanisms; therefore, elimination of a siloed approach to medicine will ultimately pave the path to the creation of a universal precision medicine environment.

  18. An accurate and precise representation of drug ingredients.

    PubMed

    Hanna, Josh; Bian, Jiang; Hogan, William R

    2016-01-01

    In previous work, we built the Drug Ontology (DrOn) to support comparative effectiveness research use cases. Here, we have updated our representation of ingredients to include both active ingredients (and their strengths) and excipients. Our update had three primary lines of work: 1) analysing and extracting excipients, 2) analysing and extracting strength information for active ingredients, and 3) representing the binding of active ingredients to cytochrome P450 isoenzymes as substrates and inhibitors of those enzymes. To properly differentiate between excipients and active ingredients, we conducted an ontological analysis of the roles that various ingredients, including excipients, have in drug products. We used the value specification model of the Ontology for Biomedical Investigations to represent strengths of active ingredients and then analyzed RxNorm to extract excipient and strength information and modeled them according to the results of our analysis. We also analyzed and defined dispositions of molecules used in aggregate as active ingredients to bind cytochrome P450 isoenzymes. Our analysis of excipients led to 17 new classes representing the various roles that excipients can bear. We then extracted excipients from RxNorm and added them to DrOn for branded drugs. We found excipients for 5,743 branded drugs, covering ~27% of the 21,191 branded drugs in DrOn. Our analysis of active ingredients resulted in another new class, active ingredient role. We also extracted strengths for all types of tablets, capsules, and caplets, resulting in strengths for 5,782 drug forms, covering ~41% of the 14,035 total drug forms and accounting for ~97 % of the 5,970 tablets, capsules, and caplets in DrOn. We represented binding-as-substrate and binding-as-inhibitor dispositions to two cytochrome P450 (CYP) isoenzymes (CYP2C19 and CYP2D6) and linked these dispositions to 65 compounds. It is now possible to query DrOn automatically for all drug products that contain active ingredients whose molecular grains inhibit or are metabolized by a particular CYP isoenzyme. DrOn is open source and is available at http://purl.obolibrary.org/obo/dron.owl.

  19. Laser-generated ultrasound for high-precision cutting of tissue-mimicking gels (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lee, Taehwa; Luo, Wei; Li, Qiaochu; Guo, L. Jay

    2017-03-01

    Laser-generated focused ultrasound has shown great promise for precisely treating cells and tissues by producing controlled micro-cavitation within the acoustic focal volume (<100 µm). However, previous demonstrations used cells and tissues cultured on glass substrates. The glass substrates were found to be critical to cavitation, because the ultrasound amplitude doubles due to reflection from the substrate, allowing the pressure amplitude to reach the cavitation threshold. In other words, without a sound-reflecting substrate the pressure amplitude may not be strong enough to create cavitation, limiting the application to biomaterials cultured on rigid substrates. Using laser-generated focused ultrasound without relying on sound-reflecting substrates, we demonstrate free-field cavitation in water and its application to high-precision cutting of tissue-mimicking gels. In the absence of a rigid boundary, sufficient pressure for cavitation was enabled by a recently optimized photoacoustic lens with increased focal gain (>30 MPa negative pressure amplitude). By moving the cavitation spot along pre-defined paths with a motorized stage, tissue-mimicking gels of different elastic moduli were cut into different shapes (rectangle, triangle, and circle), leaving behind holes of the same shapes, with sizes of less than 1 mm. The cut line width is estimated to be less than 50 µm (corresponding to the localized cavitation region), allowing for accurate cutting. This novel approach could open new possibilities for in-vivo treatment of diseased tissues in a high-precision manner (i.e., a high-precision invisible sonic scalpel).

  20. Future microfluidic and nanofluidic modular platforms for nucleic acid liquid biopsy in precision medicine

    PubMed Central

    Egatz-Gomez, Ana; Wang, Ceming; Klacsmann, Flora; Pan, Zehao; Marczak, Steve; Wang, Yunshan; Sun, Gongchen; Senapati, Satyajyoti; Chang, Hsueh-Chia

    2016-01-01

    Nucleic acid biomarkers have enormous potential in non-invasive diagnostics and disease management. In medical research, and in the near future in the clinic, there is a great demand for accurate miRNA, mRNA, and ctDNA identification and profiling. They may lead to screening for early-stage cancer that is not detectable by tissue biopsy or imaging. Moreover, because their cost is low and they are non-invasive, they can become a regular screening test during annual checkups or allow a dynamic treatment program that adjusts its drug and dosage frequently. We briefly review a few existing viral and endogenous RNA assays that have been approved by the US Food and Drug Administration. These tests are based on the main nucleic acid detection technologies, namely, quantitative reverse transcription polymerase chain reaction (PCR), microarrays, and next-generation sequencing. Several of the challenges that these three technologies still face regarding the quantitative measurement of a panel of nucleic acids are outlined. Finally, we review a cluster of microfluidic technologies from our group with potential for point-of-care nucleic acid quantification without nucleic acid amplification, designed to overcome specific limitations of current technologies. We suggest that integration of these technologies in a modular design can offer a low-cost, robust, and yet sensitive/selective platform for a variety of precision medicine applications. PMID:27190565

  1. Development and validation of sensitive LC/MS/MS method for quantitative bioanalysis of levonorgestrel in rat plasma and application to pharmacokinetics study.

    PubMed

    Ananthula, Suryatheja; Janagam, Dileep R; Jamalapuram, Seshulatha; Johnson, James R; Mandrell, Timothy D; Lowe, Tao L

    2015-10-15

    A rapid, sensitive, selective and accurate LC/MS/MS method was developed for quantitative determination of levonorgestrel (LNG) in rat plasma and further validated for specificity, linearity, accuracy, precision, sensitivity, matrix effect, recovery efficiency and stability. A liquid-liquid extraction procedure using a hexane:ethyl acetate mixture at an 80:20 v:v ratio was employed to efficiently extract LNG from rat plasma. A reversed-phase Luna C18(2) column (50×2.0 mm i.d., 3 μm) installed on an AB SCIEX Triple Quad™ 4500 LC/MS/MS system was used to perform the chromatographic separation. LNG was identified within 2 min with high specificity. A linear calibration curve was obtained over the 0.5-50 ng·mL⁻¹ concentration range. The developed method was validated for intra-day and inter-day accuracy and precision, whose values fell within the acceptable limits. The matrix effect was found to be minimal. Recovery efficiency at three quality control (QC) concentrations, 0.5 (low), 5 (medium) and 50 (high) ng·mL⁻¹, was found to be >90%. Stability of LNG at various stages of the experiment, including storage, extraction and analysis, was evaluated using QC samples, and the results showed that LNG was stable under all conditions. This validated method was successfully used to study the pharmacokinetics of LNG in rats after SubQ injection, demonstrating its applicability in relevant preclinical studies. Copyright © 2015 Elsevier B.V. All rights reserved.
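
    A minimal sketch of the calibration and QC workflow described above, with hypothetical peak-area ratios and concentrations (a simple unweighted linear fit is assumed; a real assay might use weighted regression):

    ```python
    import numpy as np

    # Hypothetical calibration data: nominal LNG concentration (ng/mL) vs.
    # analyte/internal-standard peak-area ratio.
    conc  = np.array([0.5, 1, 2, 5, 10, 25, 50])
    ratio = np.array([0.021, 0.043, 0.085, 0.212, 0.431, 1.06, 2.13])

    slope, intercept = np.polyfit(conc, ratio, 1)      # linear calibration curve
    r2 = np.corrcoef(conc, ratio)[0, 1] ** 2
    print(f"y = {slope:.4f}x + {intercept:.4f}, r^2 = {r2:.4f}")

    # Back-calculate QC samples, then report accuracy (% of nominal) and
    # precision (CV%) across replicate injections.
    qc_nominal = 5.0
    qc_ratios  = np.array([0.209, 0.215, 0.211, 0.218, 0.206])
    qc_found   = (qc_ratios - intercept) / slope
    accuracy   = 100 * qc_found.mean() / qc_nominal
    cv_percent = 100 * qc_found.std(ddof=1) / qc_found.mean()
    print(f"accuracy = {accuracy:.1f}%, precision (CV) = {cv_percent:.1f}%")
    ```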

  2. Seeking: Accurate Measurement Techniques for Deep-Bone Density and Structure

    NASA Technical Reports Server (NTRS)

    Sibonga, Jean

    2009-01-01

    We are seeking a clinically-useful technology with enough sensitivity to assess the microstructure of "spongy" bone that is found in the marrow cavities of whole bones. However, this technology must work for skeletal sites surrounded by layers of soft tissue, such as the spine and the hip. Soft tissue interferes with conventional imaging, and using a more accessible area -- for example, the wrist or the ankle -- as a proxy for the less accessible skeletal regions will not be accurate. A non-radioactive technology is strongly preferred.

  3. Precise measurement of the performance of thermoelectric modules

    NASA Astrophysics Data System (ADS)

    Díaz-Chao, Pablo; Muñiz-Piniella, Andrés; Selezneva, Ekaterina; Cuenat, Alexandre

    2016-08-01

    The potential exploitation of thermoelectric modules in mass-market applications such as exhaust gas heat recovery in combustion engines requires an accurate knowledge of their performance. Further expansion of the market will also require confidence in the results provided by suppliers to end-users. However, large variation in performance and maximum operating point is observed for identical modules when tested by different laboratories. Here, we present the first metrological study of the impact of mounting and testing procedures on the precision of thermoelectric module measurements. Variability in the electrical output due to mechanical pressure or type of thermal interface materials is quantified for the first time. The respective contribution of the temperature difference and the mean temperature to the variation in the output performance is quantified. The contribution of these factors to the total uncertainties in module characterisation is detailed.

  4. Precision bounds for gradient magnetometry with atomic ensembles

    NASA Astrophysics Data System (ADS)

    Apellaniz, Iagoba; Urizar-Lanz, Iñigo; Zimborás, Zoltán; Hyllus, Philipp; Tóth, Géza

    2018-05-01

    We study gradient magnetometry with an ensemble of atoms with arbitrary spin. We calculate precision bounds for estimating the gradient of the magnetic field based on the quantum Fisher information. For quantum states that are invariant under homogeneous magnetic fields, we need to measure a single observable to estimate the gradient. On the other hand, for states that are sensitive to homogeneous fields, a simultaneous measurement is needed, as the homogeneous field must also be estimated. We prove that for the cases studied in this paper, such a measurement is feasible. We present a method to calculate precision bounds for gradient estimation with a chain of atoms or with two spatially separated atomic ensembles. We also consider a single atomic ensemble with an arbitrary density profile, where the atoms cannot be addressed individually, and which is a very relevant case for experiments. Our model can take into account even correlations between particle positions. While in most of the discussion we consider an ensemble of localized particles that are classical with respect to their spatial degree of freedom, we also discuss the case of gradient metrology with a single Bose-Einstein condensate.
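
    The bounds referred to here are of the quantum Cramér-Rao form; quoting the standard relation (not the paper's specific expressions), for ν independent repetitions the variance of any unbiased estimator of the gradient parameter b₁ satisfies

    ```latex
    (\Delta b_1)^2 \;\ge\; \frac{1}{\nu\, F_Q\!\left[\varrho,\, H_{b_1}\right]}
    ```

    where F_Q is the quantum Fisher information of the probe state ϱ with respect to the generator H_{b_1} of the gradient-dependent evolution; states insensitive to homogeneous fields saturate this with a single measured observable, while field-sensitive states require the simultaneous estimation discussed above.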

  5. Precision measurement of the three 2³P_J helium fine structure intervals.

    PubMed

    Zelevinsky, T; Farkas, D; Gabrielse, G

    2005-11-11

    The three 2³P fine structure intervals of ⁴He are measured at an improved accuracy that is sufficient to test two-electron QED theory and to determine the fine structure constant α to 14 parts in 10⁹. A more accurate determination of α, to a precision higher than attained with the quantum Hall and Josephson effects, awaits the reconciliation of two inconsistent theoretical calculations now being compared term by term. A low-pressure helium discharge presents experimental uncertainties quite different from those of earlier measurements and allows direct measurement of light pressure shifts.

  6. Integration of radar altimeter, precision navigation, and digital terrain data for low-altitude flight

    NASA Technical Reports Server (NTRS)

    Zelenka, Richard E.

    1992-01-01

    Avionic systems that depend on digitized terrain elevation data for guidance generation or navigational reference require accurate absolute and relative distance measurements to the terrain, especially as they approach lower altitudes. This is particularly exacting in low-altitude helicopter missions, where aggressive terrain hugging maneuvers create minimal horizontal and vertical clearances and demand precise terrain positioning. Sole reliance on airborne precision navigation and stored terrain elevation data for above-ground-level (AGL) positioning severely limits the operational altitude of such systems. A Kalman filter is presented which blends radar altimeter returns, precision navigation, and stored terrain elevation data for AGL positioning. The filter is evaluated using low-altitude helicopter flight test data acquired over moderately rugged terrain. The proposed Kalman filter is found to remove large disparities in predicted AGL altitude (i.e., from airborne navigation and terrain elevation data) in the presence of measurement anomalies and dropouts. Previous work suggested a minimum clearance altitude of 220 ft AGL for a near-terrain guidance system; integration of a radar altimeter allows for operation of that system below 50 ft, subject to obstacle-avoidance limitations.
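
    A heavily simplified one-dimensional sketch of the blending idea (not the filter from the paper): the navigation altitude minus the stored terrain elevation serves as the AGL prediction, radar-altimeter returns serve as measurements, and anomalous or missing returns are gated out. All noise variances and the gate threshold are invented.

    ```python
    import numpy as np

    def blend_agl(nav_alt, terrain_elev, radar_agl, q=4.0, r=9.0, gate=3.0):
        """Scalar Kalman-style filter fusing (nav - terrain) AGL predictions
        with radar-altimeter AGL measurements. NaN entries model radar
        dropouts; innovations beyond `gate` standard deviations are rejected
        as anomalies. Illustrative only -- q, r, gate are arbitrary."""
        x = nav_alt[0] - terrain_elev[0]     # initial AGL estimate
        p = r                                # initial estimate variance
        out = []
        for nav, terr, z in zip(nav_alt, terrain_elev, radar_agl):
            # Predict: nav/terrain-derived AGL with inflated uncertainty.
            x, p = nav - terr, p + q
            # Update with the radar return if present and consistent.
            if not np.isnan(z):
                innov, s = z - x, p + r
                if innov ** 2 <= gate ** 2 * s:
                    k = p / s
                    x, p = x + k * innov, (1 - k) * p
            out.append(x)
        return np.array(out)
    ```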

  7. Motor imagery training improves precision of an upper limb movement in patients with hemiparesis.

    PubMed

    Grabherr, Luzia; Jola, Corinne; Berra, Gilberto; Theiler, Robert; Mast, Fred W

    2015-01-01

    In healthy participants, beneficial effects of motor imagery training on movement execution have been shown for precision, strength, and speed. In the clinical context, it is still debated whether motor imagery provides an effective rehabilitation technique in patients with motor deficits. To compare the effectiveness of two different types of movement training: motor imagery vs. motor execution. Twenty-five patients with hemiparesis were assigned to one of two training groups: the imagery or the execution-training group. Both groups completed a baseline test before they received six training sessions, each of which was followed by a test session. Using a novel and precisely quantifiable test, we assessed how accurately patients performed an upper limb movement. Both training groups improved performance over the six test sessions but the improvement was significantly larger in the imagery group. That is, the imagery group was able to perform more precise movements than the execution group after the sixth training session while there was no difference at the beginning of the training. The results provide evidence for the benefit of motor imagery training in patients with hemiparesis and thus suggest the integration of cognitive training in conventional physiotherapy practice.

  8. Muver, a computational framework for accurately calling accumulated mutations.

    PubMed

    Burkholder, Adam B; Lujan, Scott A; Lavender, Christopher A; Grimm, Sara A; Kunkel, Thomas A; Fargo, David C

    2018-05-09

    Identification of mutations from next-generation sequencing data typically requires a balance between sensitivity and accuracy. This is particularly true of DNA insertions and deletions (indels), that can impart significant phenotypic consequences on cells but are harder to call than substitution mutations from whole genome mutation accumulation experiments. To overcome these difficulties, we present muver, a computational framework that integrates established bioinformatics tools with novel analytical methods to generate mutation calls with the extremely low false positive rates and high sensitivity required for accurate mutation rate determination and comparison. Muver uses statistical comparison of ancestral and descendant allelic frequencies to identify variant loci and assigns genotypes with models that include per-sample assessments of sequencing errors by mutation type and repeat context. Muver identifies maximally parsimonious mutation pathways that connect these genotypes, differentiating potential allelic conversion events and delineating ambiguities in mutation location, type, and size. Benchmarking with a human gold standard father-son pair demonstrates muver's sensitivity and low false positive rates. In DNA mismatch repair (MMR) deficient Saccharomyces cerevisiae, muver detects multi-base deletions in homopolymers longer than the replicative polymerase footprint at rates greater than predicted for sequential single-base deletions, implying a novel multi-repeat-unit slippage mechanism. Benchmarking results demonstrate the high accuracy and sensitivity achieved with muver, particularly for indels, relative to available tools. Applied to an MMR-deficient Saccharomyces cerevisiae system, muver mutation calls facilitate mechanistic insights into DNA replication fidelity.
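
    The core comparison of ancestral and descendant allelic frequencies can be illustrated with a simple contingency test; this is only a sketch of the shape of the comparison (muver's actual model includes per-sample error rates by mutation type and repeat context), and the counts are made up.

    ```python
    from scipy.stats import fisher_exact

    def flag_variant(anc_ref, anc_alt, des_ref, des_alt, alpha=1e-6):
        """Compare ancestral vs. descendant allelic depths at one locus.

        Returns (is_candidate, p_value). The very small alpha mimics the
        genome-wide stringency needed to keep false positive rates low.
        Illustrative only -- not muver's actual statistical model."""
        _, p = fisher_exact([[anc_ref, anc_alt], [des_ref, des_alt]])
        return p < alpha, p

    # Made-up counts: the descendant shows an allele absent from the ancestor.
    print(flag_variant(anc_ref=60, anc_alt=0, des_ref=31, des_alt=29))
    ```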

  9. Position-sensitive radiation monitoring (surface contamination monitor). Innovative technology summary report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1999-06-01

    The Shonka Research Associates, Inc. Position-Sensitive Radiation Monitor both detects surface radiation and automatically prepares an electronic survey map/survey report of the surveyed area. The electronically recorded map can be downloaded to a personal computer for review, and a map/report can be generated for inclusion in work packages. Switching from beta-gamma detection to alpha detection is relatively simple and entails moving a switch to the alpha position and adjusting the voltage to an alpha detection level. No field calibration is required when switching from beta-gamma to alpha detection. The system can be used for free-release surveys because it meets the federal detection-level sensitivity limits required for surface survey instrumentation. This technology is superior to the traditionally used floor contamination monitor (FCM) and hand-held survey instrumentation because it can precisely register locations of radioactivity and accurately correlate contamination levels to specific locations. Additionally, it can collect and store continuous radiological data in database format, which can be used to produce real-time imagery as well as automated graphics of survey data. Its flexible design can accommodate a variety of detectors. The cost of the innovative technology is 13% to 57% lower than traditional methods. This technology is suited for radiological surveys of flat surfaces at US Department of Energy (DOE) nuclear facility decontamination and decommissioning (D and D) sites or similar public or commercial sites.

  10. Evaluation of accuracy and precision of a smartphone based automated parasite egg counting system in comparison to the McMaster and Mini-FLOTAC methods.

    PubMed

    Scare, J A; Slusarewicz, P; Noel, M L; Wielgus, K M; Nielsen, M K

    2017-11-30

    Fecal egg counts are emphasized for guiding equine helminth parasite control regimens due to the rise of anthelmintic resistance. This, however, poses further challenges, since egg counting results are prone to issues such as operator dependency, method variability, equipment requirements, and time commitment. The use of image analysis software for performing fecal egg counts is promoted in recent studies to reduce the operator dependency associated with manual counts. In an attempt to remove the operator dependency associated with current methods, we developed a diagnostic system that utilizes a smartphone and employs image analysis to generate automated egg counts. The aims of this study were (1) to determine the precision of the first smartphone prototype, the modified McMaster and ImageJ; (2) to determine the precision, accuracy, sensitivity, and specificity of the second smartphone prototype, the modified McMaster, and Mini-FLOTAC techniques. Repeated counts on fecal samples naturally infected with equine strongyle eggs were performed using each technique to evaluate precision. Triplicate counts on 36 egg count negative samples and 36 samples spiked with strongyle eggs at 5, 50, 500, and 1000 eggs per gram were performed using a second smartphone system prototype, Mini-FLOTAC, and McMaster to determine technique accuracy. Precision across the techniques was evaluated using the coefficient of variation. With regard to the first aim of the study, the McMaster technique performed with significantly less variance than the first smartphone prototype and ImageJ (p<0.0001). The smartphone and ImageJ performed with equal variance. With regard to the second aim of the study, the second smartphone system prototype had significantly better precision than the McMaster (p<0.0001) and Mini-FLOTAC (p<0.0001) methods, and the Mini-FLOTAC was significantly more precise than the McMaster (p=0.0228). Mean accuracies for the Mini-FLOTAC, McMaster, and smartphone system were 64.51%, 21.67%, and

  11. Analytical Validation of a Highly Quantitative, Sensitive, Accurate, and Reproducible Assay (HERmark) for the Measurement of HER2 Total Protein and HER2 Homodimers in FFPE Breast Cancer Tumor Specimens.

    PubMed

    Larson, Jeffrey S; Goodman, Laurie J; Tan, Yuping; Defazio-Eli, Lisa; Paquet, Agnes C; Cook, Jennifer W; Rivera, Amber; Frankson, Kristi; Bose, Jolly; Chen, Lili; Cheung, Judy; Shi, Yining; Irwin, Sarah; Kiss, Linda D B; Huang, Weidong; Utter, Shannon; Sherwood, Thomas; Bates, Michael; Weidler, Jodi; Parry, Gordon; Winslow, John; Petropoulos, Christos J; Whitcomb, Jeannette M

    2010-06-28

    We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in Formalin Fixed Paraffin Embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity that is approximately 7-10 times greater than conventional immunohistochemistry (IHC) (HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH).

  12. Improvement of VLBI EOP Accuracy and Precision

    NASA Technical Reports Server (NTRS)

    MacMillan, Daniel; Ma, Chopo

    2000-01-01

    In the CORE program, EOP measurements will be made with several different networks, each operating on a different day. It is essential that systematic differences between EOP derived by the different networks be minimized. Observed biases between the simultaneous CORE-A and NEOS-A sessions are about 60-130 µas for PM, UT1 and nutation parameters. After removing biases, the observed rms differences are consistent with an increase in the formal precision of the measurements by factors ranging from 1.05 to 1.4. We discuss the possible sources of unmodeled error that account for these factors and the biases, and the sensitivities of the network differences to modeling errors. We also discuss differences between VLBI and GPS PM measurements.

  13. Precision and Disclosure in Text and Voice Interviews on Smartphones.

    PubMed

    Schober, Michael F; Conrad, Frederick G; Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L; Johnston, Michael; Vickers, Lucas; Yan, H Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data-fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information-than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey.
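
    Two of the data-quality measures can be operationalized very simply; the rounding heuristic (multiples of five) and the differentiation index below are illustrative assumptions rather than the authors' exact definitions.

    ```python
    import numpy as np

    def pct_rounded(numeric_answers, base=5):
        """Share of numerical answers that look rounded (multiples of `base`)."""
        a = np.asarray(numeric_answers)
        return 100 * np.mean(a % base == 0)

    def differentiation(scale_answers, n_points):
        """Fraction of distinct scale points used across a battery of items
        sharing the same response scale (1.0 = fully differentiated)."""
        return len(set(scale_answers)) / n_points

    print(pct_rounded([30, 42, 45, 60, 37]))       # 60.0 -> heavy rounding
    print(differentiation([1, 1, 2, 4, 4, 5], 5))  # 0.8
    ```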

  14. Precision and Disclosure in Text and Voice Interviews on Smartphones

    PubMed Central

    Antoun, Christopher; Ehlen, Patrick; Fail, Stefanie; Hupp, Andrew L.; Johnston, Michael; Vickers, Lucas; Yan, H. Yanna; Zhang, Chan

    2015-01-01

    As people increasingly communicate via asynchronous non-spoken modes on mobile devices, particularly text messaging (e.g., SMS), longstanding assumptions and practices of social measurement via telephone survey interviewing are being challenged. In the study reported here, 634 people who had agreed to participate in an interview on their iPhone were randomly assigned to answer 32 questions from US social surveys via text messaging or speech, administered either by a human interviewer or by an automated interviewing system. 10 interviewers from the University of Michigan Survey Research Center administered voice and text interviews; automated systems launched parallel text and voice interviews at the same time as the human interviews were launched. The key question was how the interview mode affected the quality of the response data, in particular the precision of numerical answers (how many were not rounded), variation in answers to multiple questions with the same response scale (differentiation), and disclosure of socially undesirable information. Texting led to higher quality data—fewer rounded numerical answers, more differentiated answers to a battery of questions, and more disclosure of sensitive information—than voice interviews, both with human and automated interviewers. Text respondents also reported a strong preference for future interviews by text. The findings suggest that people interviewed on mobile devices at a time and place that is convenient for them, even when they are multitasking, can give more trustworthy and accurate answers than those in more traditional spoken interviews. The findings also suggest that answers from text interviews, when aggregated across a sample, can tell a different story about a population than answers from voice interviews, potentially altering the policy implications from a survey. PMID:26060991

  15. Micro-mass standards to calibrate the sensitivity of mass comparators

    NASA Astrophysics Data System (ADS)

    Madec, Tanguy; Mann, Gaëlle; Meury, Paul-André; Rabault, Thierry

    2007-10-01

    In mass metrology, the standards currently used are calibrated by a chain of comparisons, performed using mass comparators, that extends ultimately from the international prototype (which is the definition of the unit of mass) to the standards in routine use. The differences measured in the course of these comparisons become smaller and smaller as the standards approach the definitions of their units, precisely because of how accurately they have been adjusted. One source of uncertainty in the determination of the difference of mass between the mass compared and the reference mass is the sensitivity error of the comparator used. Unfortunately, in the market there are no mass standards small enough (of the order of a few hundreds of micrograms) for a valid evaluation of this source of uncertainty. The users of these comparators therefore have no choice but to rely on the characteristics claimed by the makers of the comparators, or else to determine this sensitivity error at higher values (at least 1 mg) and interpolate from this result to smaller differences of mass. For this reason, the LNE decided to produce and calibrate micro-mass standards having nominal values between 100 µg and 900 µg. These standards were developed, then tested in multiple comparisons on an A5 type automatic comparator. They have since been qualified and calibrated in a weighing design, repeatedly and over an extended period of time, to establish their stability with respect to oxidation and the harmlessness of the handling and storage procedure associated with their use. Finally, the micro-standards so qualified were used to characterize the sensitivity errors of two of the LNE's mass comparators, including the one used to tie France's Platinum reference standard (Pt 35) to stainless steel and superalloy standards.

  16. Precision nutrition - review of methods for point-of-care assessment of nutritional status.

    PubMed

    Srinivasan, Balaji; Lee, Seoho; Erickson, David; Mehta, Saurabh

    2017-04-01

    Precision nutrition encompasses prevention and treatment strategies for optimizing health that consider individual variability in diet, lifestyle, environment and genes by accurately determining an individual's nutritional status. This is particularly important as malnutrition now affects a third of the global population, with most of those affected or their care providers having limited means of determining their nutritional status. Similarly, program implementers often have no way of determining the impact or success of their interventions, thus hindering their scale-up. Exciting new developments in the area of point-of-care diagnostics promise to provide improved access to nutritional status assessment, as a first step towards enabling precision nutrition and tailored interventions at both the individual and community levels. In this review, we focus on the current advances in developing portable diagnostics for assessment of nutritional status at point-of-care, along with the numerous design challenges in this process and potential solutions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Neural timing signal for precise tactile timing judgments

    PubMed Central

    Watanabe, Junji; Nishida, Shin'ya

    2016-01-01

    The brain can precisely encode the temporal relationship between tactile inputs. While behavioural studies have demonstrated precise interfinger temporal judgments, the underlying neural mechanism remains unknown. Computationally, two kinds of neural responses can act as the information source. One is the phase-locked response to the phase of relatively slow inputs, and the other is the response to the amplitude change of relatively fast inputs. To isolate the contributions of these components, we measured performance of a synchrony judgment task for sine wave and amplitude-modulation (AM) wave stimuli. The sine wave stimulus was a low-frequency sinusoid, with the phase shifted in the asynchronous stimulus. The AM wave stimulus was a low-frequency sinusoidal AM of a 250-Hz carrier, with only the envelope shifted in the asynchronous stimulus. In the experiment, three stimulus pairs, two synchronous ones and one asynchronous one, were sequentially presented to neighboring fingers, and participants were asked to report which one was the asynchronous pair. We found that the asynchrony of AM waves could be detected as precisely as that of a single impulse pair, with the threshold asynchrony being ∼20 ms. On the other hand, the asynchrony of sine waves could not be detected at all in the range from 5 to 30 Hz. Our results suggest that the timing signal for tactile judgments is provided not by the stimulus phase information but by the envelope of the response of the high-frequency-sensitive Pacini channel (PC), although they do not exclude a possible contribution of the envelope of non-PCs. PMID:26843600

  18. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    PubMed

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is also a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. However, in many cases the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. Until now, no method has been known for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be a sensitive and accurate method capable of detecting and quantifying trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/µL (ng/mL), and the calibration curve shows good linearity (r² = 0.9974). © 2012 American Academy of Forensic Sciences.
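
    As a rough illustration of how such a calibration curve is used for quantitation, the sketch below fits a straight line to invented concentration/peak-area pairs and back-calculates an unknown; the numbers are not the paper's data.

      import numpy as np

      # Hypothetical calibration standards (concentration vs. peak area), for illustration only.
      conc = np.array([2.0, 5.0, 10.0, 25.0, 50.0, 100.0])           # ng/mL
      area = np.array([410., 1030., 2015., 5110., 10060., 20190.])   # arbitrary units

      # Ordinary least-squares fit: area = slope * conc + intercept
      slope, intercept = np.polyfit(conc, area, 1)
      pred = slope * conc + intercept
      r_squared = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

      # Quantify an unknown sample from its measured peak area.
      unknown_area = 3250.0
      unknown_conc = (unknown_area - intercept) / slope

      print(f"r^2 = {r_squared:.4f}")
      print(f"estimated concentration: {unknown_conc:.2f} ng/mL")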

  19. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes an adaptive estimation and correction approach for disturbance acceleration in an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The proposed approach relies on a Kalman filter in which the attitude error, the gyroscope zero-offset error, and the disturbance acceleration error are estimated. By switching the decay coefficient of the disturbance acceleration model according to the acceleration mode, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested by digital simulation in three disturbance acceleration modes (non-acceleration, vibration-acceleration, and sustained-acceleration). The proposed approach was also tested in a kinematic vehicle experiment. These simulations and kinematic vehicle experiments show that the disturbance acceleration in each mode can be accurately estimated and corrected. Compared with a complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions. PMID:27754469

  20. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction.

    PubMed

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-10-16

    This paper describes an adaptive estimation and correction approach for disturbance acceleration in an attitude reference system (ARS), intended to improve attitude estimation precision under vehicle movement conditions. The proposed approach relies on a Kalman filter in which the attitude error, the gyroscope zero-offset error, and the disturbance acceleration error are estimated. By switching the decay coefficient of the disturbance acceleration model according to the acceleration mode, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimation precision is thereby improved. The filter was tested by digital simulation in three disturbance acceleration modes (non-acceleration, vibration-acceleration, and sustained-acceleration). The proposed approach was also tested in a kinematic vehicle experiment. These simulations and kinematic vehicle experiments show that the disturbance acceleration in each mode can be accurately estimated and corrected. Compared with a complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves attitude estimation precision under vehicle movement conditions.
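
    As a very rough sketch of the idea of switching the decay coefficient of a disturbance-acceleration model between acceleration modes, the one-dimensional example below augments a scalar Kalman filter with a first-order Markov disturbance state; the mode thresholds, time constants, and noise levels are invented for illustration and this is not the paper's filter.

      import numpy as np

      DT = 0.01
      MODES = {                      # (decay time constant tau [s], process noise std)
          "static":    (0.5, 0.01),
          "vibration": (0.1, 0.20),
          "sustained": (2.0, 0.05),
      }

      def classify_mode(accel_norm, gravity=9.81):
          """Crude mode detector based on deviation of the measured specific force from gravity."""
          dev = abs(accel_norm - gravity)
          if dev < 0.05:
              return "static"
          return "vibration" if dev < 0.5 else "sustained"

      def kalman_step(x, P, z, accel_norm, meas_std=0.05):
          """One predict/update step for a scalar disturbance-acceleration state."""
          tau, q_std = MODES[classify_mode(accel_norm)]
          phi = np.exp(-DT / tau)          # first-order Markov decay over one step
          # Predict
          x = phi * x
          P = phi * P * phi + q_std ** 2
          # Update with a (pseudo-)measurement z of the disturbance acceleration
          K = P / (P + meas_std ** 2)
          x = x + K * (z - x)
          P = (1 - K) * P
          return x, P

    In a full ARS filter the state would also contain attitude and gyro zero-offset errors, but a mode-dependent decay coefficient would enter the disturbance-acceleration model in the same way as in this scalar sketch.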