Science.gov

Sample records for accurate light-time correction

  1. Correcting incompatible DN values and geometric errors in nighttime lights time series images

    SciTech Connect

    Zhao, Naizhuo; Zhou, Yuyu; Samson, Eric L.

    2014-09-19

The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) nighttime lights imagery has proven to be a powerful remote sensing tool for monitoring urbanization and assessing socioeconomic activity at large scales. However, incompatible digital number (DN) values and geometric errors severely limit the application of nighttime lights image data to multi-year quantitative research. In this study we extend and improve previous work on inter-calibrating nighttime lights image data to obtain more compatible and reliable nighttime lights time series (NLT) image data for China and the United States (US) through four steps: inter-calibration, geometric correction, steady increase adjustment, and population data correction. We then use gross domestic product (GDP) data to test the processed NLT image data indirectly, and find that sum light (the summed DN value of all pixels in a nighttime lights image) shows clear increasing trends when GDP growth rates are relatively large but neither increases nor decreases when GDP growth rates are relatively small. As nighttime light is a sensitive indicator of economic activity, the temporally consistent trends between sum light and GDP growth rate imply that the brightness of nighttime lights on the ground is correctly represented by the processed NLT image data. Finally, by analyzing the corrected NLT image data from 1992 to 2008, we find that China experienced pronounced nighttime lights development during 1992-1997 and 2001-2008, whereas the US suffered nighttime lights decay over large areas after 2001.
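    The central quantity in such analyses, sum light, is just the sum of DN values over a composite; a second-order polynomial is a common form for the inter-calibration step. A minimal sketch in Python (the coefficients shown are hypothetical placeholders, not the study's fitted values):

```python
import numpy as np

def intercalibrate(dn, a, b, c):
    # Second-order polynomial inter-calibration of raw DN values;
    # a, b, c would be fitted against a chosen reference satellite-year.
    return a * dn**2 + b * dn + c

def sum_light(image, saturation=63):
    # Summed DN value over all pixels of a DMSP-OLS annual composite
    # (stable-lights DN values range from 0 to 63).
    img = np.clip(image, 0, saturation)
    return float(img.sum())

image = np.array([[0, 10, 63], [5, 0, 20]])
s = sum_light(image)
print(s)  # 98.0
```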

  2. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    ERIC Educational Resources Information Center

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  4. Highly accurate tau-leaping methods with random corrections.

    PubMed

    Hu, Yucheng; Li, Tiejun

    2009-03-28

We aim to construct higher order tau-leaping methods for numerically simulating stochastic chemical kinetic systems in this paper. By adding a random correction to the primitive tau-leaping scheme in each time step, we greatly improve the accuracy of the tau-leaping approximations. This gain in accuracy actually comes from the reduction in the local truncation error of the scheme in the order of τ, the marching time step size. While the local truncation error of the primitive tau-leaping method is O(τ²) for all moments, our Poisson random correction tau-leaping method, in which the correction term is a Poisson random variable, can reduce the local truncation error for the mean to O(τ³), and both Gaussian random correction tau-leaping methods, in which the correction term is a Gaussian random variable, can reduce the local truncation error for both the mean and covariance to O(τ³). Numerical results demonstrate that these novel methods more accurately capture crucial properties such as the mean and variance than existing methods for simulating chemical reaction systems. This work constitutes a first step to construct high order numerical methods for simulating jump processes. With further refinement and appropriately modified step-size selection procedures, the random correction methods should provide a viable way of simulating chemical reaction systems accurately and efficiently.
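    The primitive tau-leaping update that these corrections build on fires each reaction channel a Poisson-distributed number of times per step. A sketch for a toy birth-death system (the paper's Poisson/Gaussian correction terms are not implemented here; rates are illustrative):

```python
import numpy as np

def tau_leap(x0, rates, stoich, propensity, tau, n_steps, rng):
    # Primitive (Euler) tau-leaping: each step, fire channel j a
    # Poisson(a_j(x) * tau)-distributed number of times.
    x = float(x0)
    for _ in range(n_steps):
        a = propensity(x, rates)             # propensities a_j(x)
        k = rng.poisson(a * tau)             # reaction counts this step
        x = max(x + np.dot(stoich, k), 0.0)  # forbid negative copy numbers
    return x

# Birth-death process: 0 -> S at rate k1, S -> 0 at rate k2*x
rates = (10.0, 0.1)                          # steady-state mean k1/k2 = 100
stoich = np.array([+1.0, -1.0])
prop = lambda x, r: np.array([r[0], r[1] * x])
rng = np.random.default_rng(0)
x_final = tau_leap(0.0, rates, stoich, prop, tau=0.1, n_steps=2000, rng=rng)
print(x_final)  # fluctuates around the steady-state mean of 100
```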

  5. An Accurate Temperature Correction Model for Thermocouple Hygrometers 1

    PubMed Central

    Savage, Michael J.; Cass, Alfred; de Jager, James M.

    1982-01-01

    Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241

  6. An accurate temperature correction model for thermocouple hygrometers.

    PubMed

    Savage, M J; Cass, A; de Jager, J M

    1982-02-01

Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g., 25°C, if the calibration slopes are corrected for temperature.
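    The practical upshot of a two-temperature calibration can be illustrated with a simple linear interpolation of the calibration-curve slope (the paper's actual models are derived from the thermojunction radius and theoretical voltage sensitivity; this linear form, and all numbers below, are illustrative only):

```python
def slope_at_temperature(T, cal1, cal2):
    # Interpolate/extrapolate the calibration slope (uV per MPa) to the
    # measurement temperature from calibrations at two temperatures.
    (T1, s1), (T2, s2) = cal1, cal2
    return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

def water_potential(voltage_uV, T, cal1, cal2):
    # Convert hygrometer output to water potential (MPa) using the
    # temperature-corrected slope.
    return voltage_uV / slope_at_temperature(T, cal1, cal2)

# Hypothetical calibrations at 15 C (0.40 uV/MPa) and 35 C (0.54 uV/MPa)
psi = water_potential(-4.7, 25.0, (15.0, 0.40), (35.0, 0.54))
print(round(psi, 2))  # -10.0
```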

  7. Does the Taylor Spatial Frame Accurately Correct Tibial Deformities?

    PubMed Central

    Segal, Kira; Ilizarov, Svetlana; Fragomen, Austin T.; Ilizarov, Gabriel

    2009-01-01

Background Optimal leg alignment is the goal of tibial osteotomy. The Taylor Spatial Frame (TSF) and the Ilizarov method enable gradual realignment of angulation and translation in the coronal, sagittal, and axial planes, hence the term six-axis correction. Questions/purposes We asked whether this approach would allow precise correction of tibial deformities. Methods We retrospectively reviewed 102 patients (122 tibiae) with tibial deformities treated with percutaneous osteotomy and gradual correction with the TSF. The proximal osteotomy group was subdivided into two subgroups to distinguish those with an intentional overcorrection of the mechanical axis deviation (MAD). The minimum followup after frame removal was 10 months (average, 48 months; range, 10–98 months). Results In the proximal osteotomy group, patients with varus and valgus deformities for whom the goal of alignment was neutral or overcorrection experienced accurate correction of MAD. In the proximal tibia, the medial proximal tibial angle improved from 80° to 89° in patients with a varus deformity and from 96° to 85° in patients with a valgus deformity. In the middle osteotomy group, all patients had less than 5° coronal plane deformity and 15 of 17 patients had less than 5° sagittal plane deformity. In the distal osteotomy group, the lateral distal tibial angle improved from 77° to 86° in patients with a valgus deformity and from 101° to 90° for patients with a varus deformity. Conclusions Gradual correction of all tibial deformities with the TSF was accurate, with few complications. Level of Evidence Level IV, therapeutic study. See the Guidelines for Authors for a complete description of levels of evidence. PMID:19911244

  8. Accurate documentation, correct coding, and compliance: it's your best defense!

    PubMed

    Coles, T S; Babb, E F

    1999-07-01

    This article focuses on the need for physicians to maintain an awareness of regulatory policy and the law impacting the federal government's medical insurance programs, and to internalize and apply this knowledge in their practices. Basic information concerning selected fraud and abuse statutes and the civil monetary penalties and sanctions for noncompliance is discussed. The application of accurate documentation and correct coding principles, as well as the rationale for implementating an effective compliance plan in order to prevent fraud and abuse and/or minimize disciplinary action from government regulatory agencies, are emphasized.

  9. Dynamical correction of control laws for marine ships' accurate steering

    NASA Astrophysics Data System (ADS)

    Veremey, Evgeny I.

    2014-06-01

The objective of this work is the analytical synthesis problem in the design of autopilots for marine vehicles. Although numerous solution methods are known, the problem is complicated by the extensive set of dynamical conditions, requirements, and restrictions that must be satisfied by the chosen steering control law. The aim of this paper is to simplify the synthesis procedure while providing accurate steering with desirable dynamics of the control system. The approach proposed here is based on a special unified multipurpose control law structure that allows the synthesis to be decoupled into simpler particular optimization problems. In particular, this structure includes a dynamical corrector that supports the desired features of the vehicle's motion under the action of sea wave disturbances. As a result, a new specialized method for corrector design is proposed to provide accurate steering or a trade-off between accurate and economical steering of the ship. This method guarantees a certain flexibility of the control law with respect to the actual sailing environment; the corresponding tuning can be performed in real time onboard.

  10. Etch modeling for accurate full-chip process proximity correction

    NASA Astrophysics Data System (ADS)

    Beale, Daniel F.; Shiely, James P.

    2005-05-01

    The challenges of the 65 nm node and beyond require new formulations of the compact convolution models used in OPC. In addition to simulating more optical and resist effects, these models must accommodate pattern distortions due to etch which can no longer be treated as small perturbations on photo-lithographic effects. (Methods for combining optical and process modules while optimizing the speed/accuracy tradeoff were described in "Advanced Model Formulations for Optical and Process Proximity Correction", D. Beale et al, SPIE 2004.) In this paper, we evaluate new physics-based etch model formulations that differ from the convolution-based process models used previously. The new models are expressed within the compact modeling framework described by J. Stirniman et al. in SPIE, vol. 3051, p469, 1997, and thus can be used for high-speed process simulation during full-chip OPC.

  11. Using BRDFs for accurate albedo calculations and adjacency effect corrections

    SciTech Connect

    Borel, C.C.; Gerstl, S.A.W.

    1996-09-01

    In this paper the authors discuss two uses of BRDFs in remote sensing: (1) in determining the clear sky top of the atmosphere (TOA) albedo, (2) in quantifying the effect of the BRDF on the adjacency point-spread function and on atmospheric corrections. The TOA spectral albedo is an important parameter retrieved by the Multi-angle Imaging Spectro-Radiometer (MISR). Its accuracy depends mainly on how well one can model the surface BRDF for many different situations. The authors present results from an algorithm which matches several semi-empirical functions to the nine MISR measured BRFs that are then numerically integrated to yield the clear sky TOA spectral albedo in four spectral channels. They show that absolute accuracies in the albedo of better than 1% are possible for the visible and better than 2% in the near infrared channels. Using a simplified extensive radiosity model, the authors show that the shape of the adjacency point-spread function (PSF) depends on the underlying surface BRDFs. The adjacency point-spread function at a given offset (x,y) from the center pixel is given by the integral of transmission-weighted products of BRDF and scattering phase function along the line of sight.
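    The albedo retrieval the authors describe amounts to a cosine-weighted integral of the fitted BRDF over the outgoing hemisphere. A minimal numerical version (the semi-empirical BRDF models fitted to the nine MISR BRFs are replaced here by an arbitrary user-supplied function):

```python
import numpy as np

def albedo_from_brdf(brdf, n_theta=200, n_phi=200):
    # Directional-hemispherical albedo: integral of BRDF * cos(theta) over
    # the hemisphere, evaluated with a midpoint rule on a theta-phi grid.
    theta = (np.arange(n_theta) + 0.5) * (np.pi / 2) / n_theta
    phi = (np.arange(n_phi) + 0.5) * (2 * np.pi) / n_phi
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    dA = (np.pi / 2 / n_theta) * (2 * np.pi / n_phi)
    integrand = brdf(th, ph) * np.cos(th) * np.sin(th)
    return float(np.sum(integrand * dA))

# Sanity check: a Lambertian surface of reflectance rho has BRDF rho/pi,
# so the hemispherical integral should recover rho.
rho = 0.3
a = albedo_from_brdf(lambda th, ph: rho / np.pi)
print(a)  # ~0.3
```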

  12. A new accurate CTE photometric correction formula for ACS/WFC

    NASA Astrophysics Data System (ADS)

    Chiaberge, M.

    2012-10-01

We present a new CTE photometric correction formula based on observations of 47 Tuc obtained during Cycles 17, 18 and 19. Images were taken with two filters and different exposure times, in order to sample a wide range of background levels. In addition, the Cycle 19 program included imaging of a denser field near the center of 47 Tuc with the F502N filter. Thanks to the increased number of stars available for the analysis, we are able to characterize CTE losses down to the lowest background levels (down to ~0.2e-) without significant loss of accuracy with respect to higher sky levels. The data from these three Cycles allow us to derive a new form of the correction formula that is significantly more accurate than those previously published. The formula may be used to correct stellar photometry for CTE losses on drizzled images taken after SM4. We compare the results of our new CTE correction to previous versions of the correction formula for ACS/WFC, and with the pixel-based CTE correction that is currently available as part of CALACS. The formula presented in this ISR and the pixel-based correction are in substantial agreement at high stellar fluxes and for relatively high background levels. However, the former is significantly more accurate than the latter for faint stars superimposed on a low sky background.

  13. Accurate and efficient correction of adjacency effects for high resolution imagery: comparison to the Lambertian correction for Landsat

    NASA Astrophysics Data System (ADS)

    Sei, Alain

    2016-10-01

The state of the art in atmospheric correction for moderate- and high-resolution sensors is based on assuming that the surface reflectance at the bottom of the atmosphere is uniform. This assumption accounts for multiple scattering but ignores the contribution of neighboring pixels; that is, it ignores adjacency effects. Its great advantage, however, is to substantially reduce the computational cost of performing atmospheric correction and make the problem computationally tractable. In a recent paper (Sei, 2015), a computationally efficient method was introduced for the correction of adjacency effects through the use of fast FFT-based evaluations of singular integrals and the use of analytic continuation. It was shown that divergent Neumann series can be avoided and accurate results obtained for clear and turbid atmospheres. In this paper we analyze the error of the standard Lambertian atmospheric correction method on Landsat imagery and compare it to our newly introduced method. We show that for high-contrast scenes the state-of-the-art atmospheric correction yields much larger errors than our method.

  14. Significance of accurate diffraction corrections for the second harmonic wave in determining the acoustic nonlinearity parameter

    SciTech Connect

    Jeong, Hyunjo; Zhang, Shuzeng; Li, Xiongbing; Barnard, Dan

    2015-09-15

The accurate measurement of acoustic nonlinearity parameter β for fluids or solids generally requires making corrections for diffraction effects due to finite size geometry of transmitter and receiver. These effects are well known in linear acoustics, while those for second harmonic waves have not been well addressed and therefore not properly considered in previous studies. In this work, we explicitly define the attenuation and diffraction corrections using the multi-Gaussian beam (MGB) equations which were developed from the quasilinear solutions of the KZK equation. The effects of making these corrections are examined through the simulation of β determination in water. Diffraction corrections are found to have more significant effects than attenuation corrections, and the β values of water can be estimated experimentally with less than 5% errors when the exact second harmonic diffraction corrections are used together with the negligible attenuation correction effects on the basis of linear frequency dependence between attenuation coefficients, α₂ ≃ 2α₁.

  15. Accurate self-correction of errors in long reads using de Bruijn graphs

    PubMed Central

    Walve, Riku; Rivals, Eric; Ukkonen, Esko

    2017-01-01

Motivation: New long read sequencing technologies, like PacBio SMRT and Oxford NanoPore, can produce sequencing reads up to 50 000 bp long but with an error rate of at least 15%. Reducing the error rate is necessary for subsequent utilization of the reads in, e.g., de novo genome assembly. The error correction problem has been tackled either by aligning the long reads against each other or by a hybrid approach that uses the more accurate short reads produced by second generation sequencing technologies to correct the long reads. Results: We present an error correction method that uses long reads only. The method consists of two phases: first, we use an iterative alignment-free correction method based on de Bruijn graphs with increasing length of k-mers, and second, the corrected reads are further polished using long-distance dependencies that are found using multiple alignments. According to our experiments, the proposed method is the most accurate one relying on long reads only for read sets with high coverage. Furthermore, when the coverage of the read set is at least 75×, the throughput of the new method is at least 20% higher. Availability and Implementation: LoRMA is freely available at http://www.cs.helsinki.fi/u/lmsalmel/LoRMA/. Contact: leena.salmela@cs.helsinki.fi PMID:27273673
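    The basic correction move in a de Bruijn graph corrector, picking the substitution best supported by k-mer coverage, can be sketched in Python (a toy illustration only; LoRMA's iterative multi-k phases and multiple-alignment polishing are far more involved):

```python
from collections import Counter

def kmer_counts(reads, k):
    # Count all k-mers across the read set (the de Bruijn graph nodes).
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_base(read, pos, counts, k):
    # Try each substitution at pos and keep the base whose overlapping
    # k-mers have the highest total coverage in the read set.
    best, best_score = read[pos], -1
    for b in "ACGT":
        cand = read[:pos] + b + read[pos + 1:]
        lo = max(0, pos - k + 1)
        hi = min(pos, len(cand) - k) + 1
        score = sum(counts[cand[i:i + k]] for i in range(lo, hi))
        if score > best_score:
            best, best_score = b, score
    return read[:pos] + best + read[pos + 1:]

reads = ["ACGTACGTAC"] * 5 + ["ACGTTCGTAC"]  # one read with an A->T error at position 4
counts = kmer_counts(reads, k=4)
fixed = correct_base("ACGTTCGTAC", 4, counts, k=4)
print(fixed)  # ACGTACGTAC
```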

  16. Accurate Modeling of Organic Molecular Crystals by Dispersion-Corrected Density Functional Tight Binding (DFTB).

    PubMed

    Brandenburg, Jan Gerit; Grimme, Stefan

    2014-06-05

The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) in principle is applicable, but the computational demands, for example, to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions at a speedup of two orders of magnitude compared to DFT-D. The mean absolute deviations for interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.
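    The structure of such pairwise dispersion corrections is easy to show in code. The sketch below uses the simpler Grimme D2-style form, E = -s6 Σ C6_ij / r_ij⁶ · f_damp(r_ij); the D3 scheme actually used with DFTB here additionally makes C6 coordination-dependent and adds an r⁻⁸ term. All parameters and units below are illustrative, not the published ones:

```python
import math

def d2_dispersion(coords, c6, r0, s6=0.75, d=20.0):
    # Pairwise D2-style dispersion energy with Fermi-type damping that
    # switches the correction off at short range (arbitrary units).
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            c6ij = math.sqrt(c6[i] * c6[j])      # geometric-mean combining rule
            r0ij = r0[i] + r0[j]                 # sum of vdW radii
            fdamp = 1.0 / (1.0 + math.exp(-d * (r / r0ij - 1.0)))
            e -= s6 * c6ij / r**6 * fdamp
    return e

# Two identical atoms 4 length-units apart: weak attractive (negative) energy
e_pair = d2_dispersion([(0.0, 0.0, 0.0), (4.0, 0.0, 0.0)],
                       c6=[10.0, 10.0], r0=[1.5, 1.5])
print(e_pair)  # negative, ~ -1.8e-3
```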

  17. Correction of spatial distortion in MR imaging: a prerequisite for accurate stereotaxy.

    PubMed

    Schad, L; Lott, S; Schmitt, F; Sturm, V; Lorenz, W J

    1987-01-01

    With magnetic resonance (MR) imaging, accurate spatial information--critical for effective stereotaxy--demands a homogeneous static field and linear gradients. Inhomogeneities and nonlinearities induced by eddy currents during the pulse sequences distort the images and produce spurious displacements of the stereotactic coordinates in both the x-y plane and the z axis. These errors in position can be assessed by means of two phantoms placed within the stereotactic guidance system--a "two-dimensional phantom" displaying "pincushion" distortion in the image (i.e., x, y) plane, and the "three-dimensional phantom" displaying displacement, warp, and tilt of the image plane itself. The pincushion distortion can be "corrected" (reducing displacements from 5 to 1-2 mm) by calculations based on modeling the distortion as a fourth order two-dimensional polynomial. Based on these corrected images, errors in the z coordinate and tilt of image planes may be corrected by adjustment of the gradient shimming currents. Such correction not only implements stereotaxy under MR guidance but also provides for the accurate transfer of anatomic/pathologic information between MR and CT images.
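    The in-plane correction step amounts to a least-squares fit of a fourth-order two-dimensional polynomial mapping distorted to true phantom coordinates. A self-contained sketch on a synthetic grid (the radial distortion model and its magnitude below are invented for the demonstration):

```python
import numpy as np

def poly_terms(x, y, order=4):
    # Monomial basis x^i * y^j with i + j <= order (15 terms for order 4).
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_distortion(measured, true, order=4):
    # Least-squares fit of the 2D polynomial mapping distorted (measured)
    # phantom coordinates to their known true positions.
    A = poly_terms(measured[:, 0], measured[:, 1], order)
    coeffs, *_ = np.linalg.lstsq(A, true, rcond=None)
    return coeffs

def undistort(points, coeffs, order=4):
    return poly_terms(points[:, 0], points[:, 1], order) @ coeffs

# Synthetic phantom grid with a weak radial ("pincushion"-like) distortion
g = np.linspace(-1.0, 1.0, 7)
xx, yy = np.meshgrid(g, g)
true = np.column_stack([xx.ravel(), yy.ravel()])
r2 = (true**2).sum(axis=1, keepdims=True)
measured = true * (1.0 + 0.02 * r2)
coeffs = fit_distortion(measured, true)
err = np.abs(undistort(measured, coeffs) - true).max()
print(err)  # residual well below the grid spacing
```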

  18. Accurate scatter correction for transmission computed tomography using an uncollimated line array source.

    PubMed

    Kojima, Akihiro; Matsumoto, Masanori; Tomiguchi, Seiji; Katsuda, Noboru; Yamashita, Yasuyuki; Motomura, Nobutoku

    2004-02-01

We investigated scatter correction in transmission computed tomography (TCT) imaging by the combination of an uncollimated transmission source and a parallel-hole collimator. We employed the triple energy window (TEW) as the scatter correction and found that the conventional TEW method, which is accurate in emission computed tomography (ECT) imaging, needs some modification in TCT imaging based on our phantom studies. In this study a Tc-99m uncollimated line array source (area: 55 cm x 40 cm) was attached to one camera head of a dual-head gamma camera as a transmission source, and TCT data were acquired with a low-energy, general purpose (LEGP), parallel-hole collimator equipped on the other camera head. The energy spectra for 140 keV-photons transmitted through various attenuating material thicknesses were measured and analyzed for scatter fraction. The results of the energy spectra showed that the photons transmitted had an energy distribution that constructs a scatter peak within the 140 keV-photopeak energy window. In TCT imaging with a cylindrical water phantom, the conventional TEW method with triangle estimates (subtraction factor, K = 0.5) was not sufficient for accurate scatter correction (μ = 0.131 cm⁻¹ for water), whereas the modified TEW method with K = 1.0 gave the accurate attenuation coefficient of 0.153 cm⁻¹ for water. For the TCT imaging with the combination of the uncollimated Tc-99m line array source and parallel hole collimator, the modified TEW method with K = 1.0 gives the accurate TCT data for quantitative SPECT imaging in comparison with the conventional TEW method with K = 0.5.
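    The TEW estimate itself is simple to state in code. The sketch below follows one plausible reading of the abstract's subtraction factor, in which K = 0.5 reproduces the conventional triangle estimate and K = 1.0 doubles it; the window widths and counts are hypothetical:

```python
def tew_corrected_counts(c_main, c_lower, c_upper, w_main, w_sub, k=0.5):
    # Triple-energy-window correction: estimate the scatter inside the
    # photopeak window from two narrow flanking sub-windows, scaled by K,
    # and subtract it from the photopeak counts.
    scatter = k * (c_lower / w_sub + c_upper / w_sub) * w_main
    return c_main - scatter

# Hypothetical counts: 28 keV photopeak window, 2 keV sub-windows
c_conv = tew_corrected_counts(10000.0, 300.0, 100.0, 28.0, 2.0, k=0.5)
c_mod = tew_corrected_counts(10000.0, 300.0, 100.0, 28.0, 2.0, k=1.0)
print(c_conv, c_mod)  # 7200.0 4400.0
```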

  19. Accurate representation of interference colours (Michel-Lévy chart): from rendering to image colour correction.

    PubMed

    Linge Johnsen, S A; Bollmann, J; Lee, H W; Zhou, Y

    2017-09-21

Here a work flow towards an accurate representation of interference colours (Michel-Lévy chart) digitally captured on a polarised light microscope using dry and oil immersion objectives is presented. The work flow includes accurate rendering of interference colours considering the colour temperature of the light source of the microscope and chromatic adaptation to white points of RGB colour spaces as well as the colour correction of the camera using readily available colour targets. The quality of different colour correction profiles was tested independently on an IT8.7/1 target. The best performing profile used the XYZ cLUT algorithm and revealed a ΔE00 of 1.9 (6.4 with no profile) at 5× and 1.1 (8.4 with no profile) at 100× magnification, respectively. The overall performance of the workflow was tested by comparing rendered interference colours with colour-corrected images of a quartz wedge captured over a retardation range from 80-2500 nm at 5× magnification. Uncorrected images of the quartz wedge in sRGB colour space revealed a mean ΔE00 of 12.3, which could be reduced to a mean of 4.9 by applying a camera correction profile based on an IT8.7/1 target and the Matrix only algorithm (ΔE00 < 1.0 signifies colour differences imperceptible by the human eye). ΔE00 varied significantly over the retardation range of 80-2500 nm of the quartz wedge, but the reasons for this variation are not well understood, and the quality of colour correction might be further improved in future by using custom made colour targets specifically designed for the analysis of high-order interference colours. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  20. A defect corrected finite element approach for the accurate evaluation of magnetic fields on unstructured grids

    NASA Astrophysics Data System (ADS)

    Römer, Ulrich; Schöps, Sebastian; De Gersem, Herbert

    2017-04-01

    In electromagnetic simulations of magnets and machines, one is often interested in a highly accurate and local evaluation of the magnetic field uniformity. Based on local post-processing of the solution, a defect correction scheme is proposed as an easy to realize alternative to higher order finite element or hybrid approaches. Radial basis functions (RBFs) are key for the generality of the method, which in particular can handle unstructured grids. Also, contrary to conventional finite element basis functions, higher derivatives of the solution can be evaluated, as required, e.g., for deflection magnets. Defect correction is applied to obtain a solution with improved accuracy and adjoint techniques are used to estimate the remaining error for a specific quantity of interest. Significantly improved (local) convergence orders are obtained. The scheme is also applied to the simulation of a Stern-Gerlach magnet currently in operation.
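    The role of the radial basis functions is worth illustrating: unlike C⁰ finite-element shape functions, a smooth RBF interpolant of the field samples can be differentiated analytically, as required for deflection magnets. A minimal 1D Gaussian-RBF sketch (the paper works on unstructured 2D/3D grids; node count and shape parameter below are arbitrary):

```python
import numpy as np

def rbf_fit(x_nodes, f_nodes, eps=6.0):
    # Interpolate field samples with Gaussian RBFs phi(r) = exp(-(eps*r)^2)
    # by solving the symmetric interpolation system A w = f.
    A = np.exp(-(eps * (x_nodes[:, None] - x_nodes[None, :]))**2)
    return np.linalg.solve(A, f_nodes)

def rbf_eval(x, x_nodes, w, eps=6.0):
    return np.exp(-(eps * (x[:, None] - x_nodes[None, :]))**2) @ w

def rbf_deriv(x, x_nodes, w, eps=6.0):
    # Analytic derivative of the smooth interpolant.
    d = x[:, None] - x_nodes[None, :]
    return (-2.0 * eps**2 * d * np.exp(-(eps * d)**2)) @ w

x_nodes = np.linspace(0.0, 1.0, 12)
f_nodes = np.sin(2 * np.pi * x_nodes)            # stand-in "field" samples
w = rbf_fit(x_nodes, f_nodes)
val = float(rbf_eval(np.array([0.3]), x_nodes, w))   # close to sin(0.6*pi)
der = float(rbf_deriv(np.array([0.3]), x_nodes, w))  # close to 2*pi*cos(0.6*pi)
print(val, der)
```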

  1. A Cavity Corrected 3D-RISM Functional for Accurate Solvation Free Energies

    PubMed Central

    2014-01-01

We show that an Ng bridge function modified version of the three-dimensional reference interaction site model (3D-RISM-NgB) solvation free energy method can accurately predict the hydration free energy (HFE) of a set of 504 organic molecules. To achieve this, a single unique constant parameter was adjusted to the computed HFE of single atom Lennard-Jones solutes. It is shown that 3D-RISM is relatively accurate at predicting the electrostatic component of the HFE without correction but requires a modification of the nonpolar contribution that originates in the formation of the cavity created by the solute in water. We use a free energy functional with the Ng scaling of the direct correlation function [Ng, K. C. J. Chem. Phys. 1974, 61, 2680]. This produces a rapid, reliable small molecule HFE calculation for applications in drug design. PMID:24634616

  2. Correction of susceptibility-induced GRE phase shift for accurate PRFS thermometry proximal to cryoablation iceball.

    PubMed

    Kickhefel, Antje; Weiss, Clifford; Roland, Joerg; Gross, Patrick; Schick, Fritz; Salomir, Rares

    2012-02-01

The susceptibility contrast between frozen and unfrozen tissue disturbs the local magnetic field in the proximity of the ice-ball during cryotherapy. This effect should be corrected for in real time to allow PRFS-based monitoring of near-zero temperatures during intervention. Susceptibility artifacts were corrected in post-processing, using a rapid numerical algorithm. The difference in bulk magnetic susceptibility between frozen and non-frozen tissue was approximated to be uniform over the ice-ball volume and was determined from the isothermal principle applied to the phase-transition frontier of compartments. Subsequently, the magnetic perturbation field was calculated rapidly in 3D using a Fourier-convolution. Experimental studies were performed for two scenarios: tissue defrosting in a water bath and induction of an ice-ball by an MR-compatible cryogenic probe. The susceptibility artifacts yielded PRFS temperature errors as high as 10-12°C proximal to the ice-ball, positive or negative depending on the orientation of the position vector relative to the B₀ direction. These effects were fully corrected for to within the noise range. The susceptibility-corrected PRFS temperature values were consistent with the phase-transition isothermal condition, irrespective of the local orientation of the position vector. By implementing the post-processing algorithm on-line, PRFS MRT may be used as a safety tool for non-invasive and accurate monitoring of near-zero temperatures during MR-guided clinical cryotherapy.
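    The Fourier-convolution step is the standard k-space dipole-kernel computation: multiply the transformed susceptibility map by 1/3 - kz²/k² and transform back. A minimal numpy sketch with a crude spherical "ice-ball" (grid size and susceptibility contrast are illustrative):

```python
import numpy as np

def field_shift(chi, voxel=1.0):
    # Field perturbation (in units of B0) from a 3D susceptibility map chi:
    # multiply its FFT by the dipole kernel (1/3 - kz^2/k^2) and invert.
    # B0 is taken along the last (z) axis; the undefined DC term is zeroed.
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n, d=voxel) for n in chi.shape],
                             indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(invalid="ignore", divide="ignore"):
        kernel = np.where(k2 > 0, 1.0 / 3.0 - kz**2 / k2, 0.0)
    return np.real(np.fft.ifftn(np.fft.fftn(chi) * kernel))

# A small sphere of uniform susceptibility contrast (crude ice-ball model)
n = 32
x = np.arange(n) - n // 2
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
chi = np.where(X**2 + Y**2 + Z**2 < 6**2, 1e-6, 0.0)
db = field_shift(chi)
print(db.shape)  # (32, 32, 32)
```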

  3. Accurate and fast multiple-testing correction in eQTL studies.

    PubMed

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm

    2015-06-04

In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset.
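    The multivariate-normal idea can be made concrete with a plain Monte Carlo version: the gene-level p value is the probability that the maximum |Z| over the cis variants exceeds the observed maximum, with Z drawn from MVN(0, LD correlation matrix). (The paper evaluates this far more efficiently; this sketch only illustrates the quantity being computed.)

```python
import numpy as np

def mvn_corrected_p(z_obs, ld_corr, n_samples=200_000, seed=0):
    # P(max_j |Z_j| >= |z_obs|) for Z ~ MVN(0, ld_corr), by sampling.
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(ld_corr)
    z = rng.standard_normal((n_samples, ld_corr.shape[0])) @ L.T
    return float((np.abs(z).max(axis=1) >= abs(z_obs)).mean())

# One independent variant: reduces to the ordinary two-sided p value (~0.05).
p1 = mvn_corrected_p(1.96, np.eye(1))
# Two almost perfectly correlated variants: effectively one test, so the
# corrected p value barely increases.
p2 = mvn_corrected_p(1.96, np.array([[1.0, 0.999], [0.999, 1.0]]))
print(p1, p2)
```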

  4. Accurate and Fast Multiple-Testing Correction in eQTL Studies

    PubMed Central

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I.W.; Raychaudhuri, Soumya; Ophoff, Roel A.; Stranger, Barbara E.; Eskin, Eleazar; Han, Buhm

    2015-01-01

In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. PMID:26027500

  5. Accurate prediction of adsorption energies on graphene, using a dispersion-corrected semiempirical method including solvation.

    PubMed

    Vincent, Mark A; Hillier, Ian H

    2014-08-25

    The accurate prediction of the adsorption energies of unsaturated molecules on graphene in the presence of water is essential for the design of molecules that can modify its properties and that can aid its processability. We here show that a semiempirical MO method corrected for dispersive interactions (PM6-DH2) can predict the adsorption energies of unsaturated hydrocarbons and the effect of substitution on these values to an accuracy comparable to DFT values and in good agreement with experiment. The adsorption energies of TCNE, TCNQ, and a number of sulfonated pyrenes are also predicted, along with the effect of hydration using the COSMO model.

  6. Accurate structures and energetics of neutral-framework zeotypes from dispersion-corrected DFT calculations

    NASA Astrophysics Data System (ADS)

    Fischer, Michael; Angel, Ross J.

    2017-05-01

    Density-functional theory (DFT) calculations incorporating a pairwise dispersion correction were employed to optimize the structures of various neutral-framework compounds with zeolite topologies. The calculations used the PBE functional for solids (PBEsol) in combination with two different dispersion correction schemes, the D2 correction devised by Grimme and the TS correction of Tkatchenko and Scheffler. In the first part of the study, a benchmarking of the DFT-optimized structures against experimental crystal structure data was carried out, considering a total of 14 structures (8 all-silica zeolites, 4 aluminophosphate zeotypes, and 2 dense phases). Both PBEsol-D2 and PBEsol-TS showed an excellent performance, improving significantly over the best-performing approach identified in a previous study (PBE-TS). The temperature dependence of lattice parameters and bond lengths was assessed for those zeotypes where the available experimental data permitted such an analysis. In most instances, the agreement between DFT and experiment improved when the experimental data were corrected for the effects of thermal motion and when low-temperature structure data rather than room-temperature structure data were used as a reference. In the second part, a benchmarking against experimental enthalpies of transition (with respect to α-quartz) was carried out for 16 all-silica zeolites. Excellent agreement was obtained with the PBEsol-D2 functional, with the overall error being in the same range as the experimental uncertainty. Altogether, PBEsol-D2 can be recommended as a computationally efficient DFT approach that simultaneously delivers accurate structures and energetics of neutral-framework zeotypes.

  7. Correction for solute/solvent interaction extends accurate freezing point depression theory to high concentration range.

    PubMed

    Fullerton, G D; Keener, C R; Cameron, I L

    1994-12-01

    The authors describe empirical corrections to ideally dilute expressions for freezing point depression of aqueous solutions to arrive at new expressions accurate up to three molal concentration. The method assumes non-ideality is due primarily to solute/solvent interactions such that the correct free water mass Mwc is the mass of water in solution Mw minus I.M(s) where M(s) is the mass of solute and I an empirical solute/solvent interaction coefficient. The interaction coefficient is easily derived from the constant in the linear regression fit to the experimental plot of Mw/M(s) as a function of 1/delta T (inverse freezing point depression). The I-value, when substituted into the new thermodynamic expressions derived from the assumption of equivalent activity of water in solution and ice, provides accurate predictions of freezing point depression (+/- 0.05 degrees C) up to 2.5 molal concentration for all the test molecules evaluated; glucose, sucrose, glycerol and ethylene glycol. The concentration limit is the approximate monolayer water coverage limit for the solutes which suggests that direct solute/solute interactions are negligible below this limit. This is contrary to the view of many authors due to the common practice of including hydration forces (a soft potential added to the hard core atomic potential) in the interaction potential between solute particles. When this is recognized the two viewpoints are in fundamental agreement.
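
    The working equation above implies Mw/Ms = I + (Kf/MWs)·(1/ΔT), so I falls out of a straight-line fit as described. A small synthetic check with glucose-like numbers (the interaction coefficient 0.3 is invented for the demonstration):

```python
import numpy as np

KF = 1.86       # K·kg/mol, cryoscopic constant of water
MW_GLC = 0.180  # kg/mol, molar mass of glucose (assumed solute)

def delta_T(ms, mw, interaction):
    """Freezing point depression with the free-water correction
    Mwc = Mw - I*Ms (sketch of the paper's working equation)."""
    mwc = mw - interaction * ms        # corrected free-water mass, kg
    molality = (ms / MW_GLC) / mwc     # mol solute per kg free water
    return KF * molality               # degrees C

# Synthetic experiment at several solute loadings with I = 0.3.
ms = np.linspace(0.02, 0.4, 10)        # kg solute
mw = np.ones_like(ms)                  # 1 kg water each
dT = delta_T(ms, mw, interaction=0.3)

# Recover I as the intercept of the Mw/Ms versus 1/deltaT line.
slope, intercept = np.polyfit(1.0 / dT, mw / ms, 1)
```

    The fitted intercept recovers the interaction coefficient, and the slope recovers Kf/MWs, mirroring how the paper extracts I from experimental data.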

  8. Accurate tracking of tumor volume change during radiotherapy by CT-CBCT registration with intensity correction

    NASA Astrophysics Data System (ADS)

    Park, Seyoun; Robinson, Adam; Quon, Harry; Kiess, Ana P.; Shen, Colette; Wong, John; Plishker, William; Shekhar, Raj; Lee, Junghoon

    2016-03-01

    In this paper, we propose a CT-CBCT registration method to accurately predict the tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and every other fraction CBCTs to which the propagated GTV contours by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean±std in cc) between the average of the manual segmentations and automatic segmentations are 3.70+/-2.30 (B-spline), 1.25+/-1.78 (demons), 0.93+/-1.14 (optical flow), and 4.39+/-3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
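
    The intensity-correction step can be illustrated with a global quantile-matching routine; the paper applies this kind of matching locally and iteratively inside the registration loop, which is not reproduced in this sketch.

```python
import numpy as np

def histogram_match(source, reference):
    """Map source (e.g. CBCT) intensities onto the reference (CT)
    distribution by quantile matching -- a global stand-in for the
    local matching described in the paper."""
    s_vals, s_idx, s_cnt = np.unique(source.ravel(),
                                     return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_q = np.cumsum(s_cnt) / source.size      # source quantiles
    r_q = np.cumsum(r_cnt) / reference.size   # reference quantiles
    matched = np.interp(s_q, r_q, r_vals)     # quantile -> reference value
    return matched[s_idx].reshape(source.shape)

# CBCT-like image simulated as a distorted copy of the CT.
rng = np.random.default_rng(1)
ct = rng.normal(40.0, 10.0, size=(64, 64))
cbct = 0.5 * ct - 15.0
corrected = histogram_match(cbct, ct)
```

    Because the simulated distortion is a monotone transform, global matching recovers the CT intensities exactly; real CBCT artefacts are spatially varying, which is why the authors match histograms locally.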

  9. DEM sourcing guidelines for computing 1 Eö accurate terrain corrections for airborne gravity gradiometry

    NASA Astrophysics Data System (ADS)

    Annecchione, Maria; Hatch, David; Hefford, Shane W.

    2017-01-01

    In this paper we investigate digital elevation model (DEM) sourcing requirements to compute gravity gradiometry terrain corrections accurate to 1 Eötvös (Eö) at observation heights of 80 m or more above ground. Such survey heights are typical in fixed-wing airborne surveying for resource exploration where the maximum signal-to-noise ratio is sought. We consider the accuracy of terrain corrections relevant for recent commercial airborne gravity gradiometry systems operating at the 10 Eö noise level and for future systems with a target noise level of 1 Eö. We focus on the requirements for the vertical gradient of the vertical component of gravity (Gdd) because this element of the gradient tensor is most commonly interpreted qualitatively and quantitatively. Terrain correction accuracy depends on the bare-earth DEM accuracy and spatial resolution. The bare-earth DEM accuracy and spatial resolution depend on its source. Two possible sources are considered: airborne LiDAR and Shuttle Radar Topography Mission (SRTM). The accuracy of an SRTM DEM is affected by vegetation height. The SRTM footprint is also larger and the DEM resolution is thus lower. However, resolution requirements relax as relief decreases. Publicly available LiDAR data and 1 arc-second and 3 arc-second SRTM data were selected over four study areas representing end member cases of vegetation cover and relief. The four study areas are presented as reference material for processing airborne gravity gradiometry data at the 1 Eö noise level with 50 m spatial resolution. From this investigation we find that to achieve 1 Eö accuracy in the terrain correction at 80 m height, airborne LiDAR data are required even when terrain relief is a few tens of meters and the vegetation is sparse. However, as satellite ranging technologies progress, bare-earth DEMs of sufficient accuracy and resolution may be sourced at lesser cost. We found that a bare-earth DEM of 10 m resolution and 2 m accuracy is sufficient for

  10. A field size specific backscatter correction algorithm for accurate EPID dosimetry.

    PubMed

    Berry, Sean L; Polvorosa, Cynthia S; Wuu, Cheng-Shie

    2010-06-01

    Correcting for FS specific backscatter is important for accurate EPID dosimetry and can be carried out using the methods presented within this investigation. © 2010 American Association of Physicists in Medicine.

  11. Patient-tailored plate for bone fixation and accurate 3D positioning in corrective osteotomy.

    PubMed

    Dobbe, J G G; Vroemen, J C; Strackee, S D; Streekstra, G J

    2013-02-01

    A bone fracture may lead to malunion of bone segments, which gives discomfort to the patient and may lead to chronic pain, reduced function and finally to early osteoarthritis. Corrective osteotomy is a treatment option to realign the bone segments. In this procedure, the surgeon tries to improve alignment by cutting the bone at, or near, the fracture location and fixates the bone segments in an improved position, using a plate and screws. Three-dimensional positioning is very complex and difficult to plan, perform and evaluate using standard 2D fluoroscopy imaging. This study introduces a new technique that uses preoperative 3D imaging to plan positioning and design a patient-tailored fixation plate that only fits in one way and realigns the bone segments as planned. The method is evaluated using artificial bones and renders realignment highly accurate and very reproducible (d(err) < 1.2 ± 0.8 mm and φ(err) < 1.8° ± 2.1°). Application of a patient-tailored plate is expected to be of great value for future corrective osteotomy surgeries.

  12. Accurate Relative Location Estimates for the North Korean Nuclear Tests Using Empirical Slowness Corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2016-10-01

    Declared North Korean nuclear tests in 2006, 2009, 2013, and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-dimensional global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25% shorter than the distances between events estimated using regional Pn phases. The 2009, 2013, and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of meters. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio, and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-d velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The

  13. Accurate relative location estimates for the North Korean nuclear tests using empirical slowness corrections

    NASA Astrophysics Data System (ADS)

    Gibbons, S. J.; Pabian, F.; Näsholm, S. P.; Kværna, T.; Mykkeltveit, S.

    2017-01-01

    Declared North Korean nuclear tests in 2006, 2009, 2013 and 2016 were observed seismically at regional and teleseismic distances. Waveform similarity allows the events to be located relatively with far greater accuracy than the absolute locations can be determined from seismic data alone. There is now significant redundancy in the data given the large number of regional and teleseismic stations that have recorded multiple events, and relative location estimates can be confirmed independently by performing calculations on many mutually exclusive sets of measurements. Using a 1-D global velocity model, the distances between the events estimated using teleseismic P phases are found to be approximately 25 per cent shorter than the distances between events estimated using regional Pn phases. The 2009, 2013 and 2016 events all take place within 1 km of each other and the discrepancy between the regional and teleseismic relative location estimates is no more than about 150 m. The discrepancy is much more significant when estimating the location of the more distant 2006 event relative to the later explosions with regional and teleseismic estimates varying by many hundreds of metres. The relative location of the 2006 event is challenging given the smaller number of observing stations, the lower signal-to-noise ratio and significant waveform dissimilarity at some regional stations. The 2006 event is however highly significant in constraining the absolute locations in the terrain at the Punggye-ri test-site in relation to observed surface infrastructure. For each seismic arrival used to estimate the relative locations, we define a slowness scaling factor which multiplies the gradient of seismic traveltime versus distance, evaluated at the source, relative to the applied 1-D velocity model. A procedure for estimating correction terms which reduce the double-difference time residual vector norms is presented together with a discussion of the associated uncertainty. The modified
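
    The role of the slowness scaling factors in the relative location step can be sketched with a linearized least-squares epicentre shift: each arrival contributes dt_i = alpha_i · s_i · dx, where s_i is the model slowness vector and alpha_i the empirical factor. The geometry, slownesses, and the uniform factor 0.75 below are invented for illustration, not the paper's values.

```python
import numpy as np

def locate(dt, slow, alpha):
    """Least-squares relative epicentre offset dx (km) from differential
    times dt (s), model slowness vectors slow (s/km), and empirical
    slowness scaling factors alpha (one per arrival)."""
    g = alpha[:, None] * slow                  # scaled design matrix
    dx, *_ = np.linalg.lstsq(g, dt, rcond=None)
    return dx

# Twelve stations on an azimuthal ring with Pn-like slowness.
az = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
slow = 0.125 * np.column_stack([np.sin(az), np.cos(az)])   # s/km
alpha = np.full(12, 0.75)          # hypothetical empirical rescale
true_dx = np.array([1.2, -0.8])    # km, true relative offset
dt = (alpha[:, None] * slow) @ true_dx
est = locate(dt, slow, alpha)
```

    Without the alpha factors, a 25 per cent slowness bias of the kind reported above would map directly into a 25 per cent bias in the estimated inter-event distances.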

  14. Differential Effects of Focused and Unfocused Written Correction on the Accurate Use of Grammatical Forms by Adult ESL Learners

    ERIC Educational Resources Information Center

    Sheen, Younghee; Wright, David; Moldawa, Anna

    2009-01-01

    Building on Sheen's (2007) study of the effects of written corrective feedback (CF) on the acquisition of English articles, this article investigated whether direct focused CF, direct unfocused CF and writing practice alone produced differential effects on the accurate use of grammatical forms by adult ESL learners. Using six intact adult ESL…

  16. Accurate elevation and normal moveout corrections of seismic reflection data on rugged topography

    USGS Publications Warehouse

    Liu, J.; Xia, J.; Chen, C.; Zhang, G.

    2005-01-01

    The application of the seismic reflection method is often limited in areas of complex terrain. The problem is the incorrect correction of time shifts caused by topography. To apply normal moveout (NMO) correction to reflection data correctly, static corrections must be applied in advance to compensate for the time distortions of topography and the time delays from near-surface weathered layers. For environmental and engineering investigations, weathered layers are the targets, so the static correction mainly adjusts time shifts due to an undulating surface. In practice, seismic reflected raypaths are assumed to be almost vertical through the near-surface layers because they have much lower velocities than layers below. This assumption is acceptable in most cases since it results in little residual error for small elevation changes and small offsets in reflection events. Although static algorithms based on choosing a floating datum related to common midpoint gathers or residual surface-consistent functions are available and effective, errors caused by the assumption of vertical raypaths often generate pseudo-indications of structures. This paper presents a comparison of corrections based on vertical raypaths and on biased (non-vertical) raypaths. It also provides an approach for combining elevation and NMO corrections. The advantages of the approach are demonstrated by synthetic and real-world examples of multi-coverage seismic reflection surveys on rough topography. © The Royal Society of New Zealand 2005.
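
    Under the vertical-raypath assumption discussed above, the elevation static is simply the station height above the datum divided by the near-surface velocity, applied before hyperbolic NMO. A minimal numeric check (all velocities and geometry invented):

```python
import numpy as np

def elevation_static(elev, datum, v_near):
    """Vertical-raypath static shift (s): time to move a surface
    station at height `elev` down to the flat datum."""
    return (elev - datum) / v_near

def nmo_correct(t, offset, v_stack):
    """Map reflection time t(x) at source-receiver offset x back to
    zero-offset time t0 via the hyperbolic moveout equation."""
    return np.sqrt(np.maximum(t**2 - (offset / v_stack)**2, 0.0))

# A reflector at t0 = 0.5 s seen at 600 m offset with v = 2000 m/s,
# recorded from a station 30 m above the datum (v_near = 600 m/s).
t0, v, x = 0.5, 2000.0, 600.0
static = elevation_static(30.0, 0.0, 600.0)
t_obs = np.sqrt(t0**2 + (x / v)**2) + static
t_corr = nmo_correct(t_obs - static, x, v)
```

    Applying the static before NMO recovers the zero-offset time exactly in this toy case; the paper's point is that for large elevation changes the vertical-raypath static itself becomes the dominant error source.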

  17. Spin-dependent gradient correction for more accurate atomization energies of molecules.

    PubMed

    Constantin, Lucian A; Fabiano, Eduardo; Della Sala, Fabio

    2012-11-21

    We discuss, simplify, and improve the spin-dependent correction of Constantin et al. [Phys. Rev. B 84, 233103 (2011)] for atomization energies, and develop a density parameter of the form v ∝ |∇n|/n(10/9), found from the statistical ensemble of one-electron densities. The here constructed exchange-correlation generalized gradient approximations (GGAs), named zvPBEsol and zvPBEint, show a broad applicability, and a good accuracy for many applications, because these corrected functionals significantly improve the atomization and binding energies of molecular systems, without worsening the behavior of the original functionals (PBEsol and PBEint) for other properties. This spin-dependent correction is also applied to meta-GGA dynamical correlation functionals combined with exact-exchange; in this case a significant (about 30%) improvement in atomization energies of small molecules is found.
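
    The density parameter has the stated form v ∝ |∇n|/n^(10/9); a one-dimensional numerical sketch (normalisation constant omitted, exponential model density assumed):

```python
import numpy as np

def density_parameter(n, dx):
    """Inhomogeneity parameter v ~ |grad n| / n**(10/9) on a 1-D grid
    (overall normalisation constant omitted)."""
    grad = np.gradient(n, dx)
    return np.abs(grad) / n**(10.0 / 9.0)

# Exponential (hydrogen-like) model density: |grad n| = 2n, so
# v grows slowly, as n**(-1/9), toward the density tail.
x = np.linspace(0.1, 5.0, 500)
n = np.exp(-2.0 * x)
v = density_parameter(n, x[1] - x[0])
```

    The slow growth of v in the tail region is what makes it a useful switch between density regimes in the corrected functionals.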

  18. Ionospheric Correction of InSAR for Accurate Ice Motion Mapping at High Latitudes

    NASA Astrophysics Data System (ADS)

    Liao, H.; Meyer, F. J.

    2016-12-01

    Monitoring the motion of the large ice sheets is of great importance for determining ice mass balance and its contribution to sea level rise. Recently the first comprehensive ice motion maps of Greenland and Antarctica have been generated with InSAR. However, these studies have indicated that the performance of InSAR-based ice motion mapping is limited by the presence of the ionosphere. This is particularly true at high latitudes and for low-frequency SAR data. Filter-based and empirical methods (e.g., removing polynomials), which have often been used to mitigate ionospheric effects, are often ineffective in these areas due to the typically strong spatial variability of ionospheric phase delay at high latitudes and due to the risk of removing true deformation signals from the observations. In this study, we will first present an outline of our split-spectrum InSAR-based ionospheric correction approach and particularly highlight how our method improves upon published techniques, such as the multiple sub-band approach to boost estimation accuracy as well as advanced error correction and filtering algorithms. We applied our workflow to a large number of ionosphere-affected datasets over the large ice sheets to estimate the benefit of ionospheric correction on ice motion mapping accuracy. Appropriate test sites over Greenland and the Antarctic have been chosen through cooperation with authors (UW, Ian Joughin) of previous ice motion studies. To demonstrate the magnitude of ionospheric noise and to showcase the performance of ionospheric correction, we will show examples of ionosphere-affected InSAR data alongside our ionosphere-corrected results for visual comparison. We also quantitatively compared the corrected phase data to known ice velocity fields for the analyzed areas, provided by experts in ice velocity mapping. From our studies we found that ionospheric correction significantly reduces biases in ice velocity estimates and boosts accuracy by a factor that depends on a
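
    The split-spectrum separation rests on the interferometric phase scaling as f/f0 for the non-dispersive part and f0/f for the ionosphere, so two sub-band phases suffice to separate them. A sketch with invented L-band-like frequencies (the multiple sub-band and filtering refinements mentioned above are omitted):

```python
def iono_phase(phi_l, phi_h, f_l, f_h, f0):
    """Dispersive (ionospheric) phase at the full-band centre f0 from
    sub-band phases phi_l (at f_l) and phi_h (at f_h), using the
    standard split-spectrum separation formula."""
    return f_l * f_h * (phi_l * f_h - phi_h * f_l) / (f0 * (f_h**2 - f_l**2))

# Synthetic check: build sub-band phases from known components.
f0, f_l, f_h = 1.27e9, 1.25e9, 1.29e9   # Hz, hypothetical sub-bands
phi_nd, phi_io = 4.0, 1.5               # radians at f0
phi_l = phi_nd * f_l / f0 + phi_io * f0 / f_l
phi_h = phi_nd * f_h / f0 + phi_io * f0 / f_h
est = iono_phase(phi_l, phi_h, f_l, f_h, f0)
```

    The recovered dispersive phase can then be subtracted from the full-band interferogram; in practice the small sub-band separation amplifies noise, which is why the paper emphasises error correction and filtering.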

  19. A Multidimensional B-Spline Correction for Accurate Modeling Sugar Puckering in QM/MM Simulations.

    PubMed

    Huang, Ming; Dissanayake, Thakshila; Kuechler, Erich; Radak, Brian K; Lee, Tai-Sung; Giese, Timothy J; York, Darrin M

    2017-09-12

    The computational efficiency of approximate quantum mechanical methods allows their use for the construction of multidimensional reaction free energy profiles. It has recently been demonstrated that quantum models based on the neglect of diatomic differential overlap (NDDO) approximation have difficulty modeling deoxyribose and ribose sugar ring puckers and thus limit their predictive value in the study of RNA and DNA systems. A method has been introduced in our previous work to improve the description of the sugar puckering conformational landscape that uses a multidimensional B-spline correction map (BMAP correction) for systems involving intrinsically coupled torsion angles. This method greatly improved the adiabatic potential energy surface profiles of DNA and RNA sugar rings relative to high-level ab initio methods even for highly problematic NDDO-based models. In the present work, a BMAP correction is developed, implemented, and tested in molecular dynamics simulations using the AM1/d-PhoT semiempirical Hamiltonian for biological phosphoryl transfer reactions. Results are presented for gas-phase adiabatic potential energy surfaces of RNA transesterification model reactions and condensed-phase QM/MM free energy surfaces for nonenzymatic and RNase A-catalyzed transesterification reactions. The results show that the BMAP correction is stable, efficient, and leads to improvement in both the potential energy and free energy profiles for the reactions studied, as compared with ab initio and experimental reference data. Exploration of the effect of the size of the quantum mechanical region indicates the best agreement with experimental reaction barriers occurs when the full CpA dinucleotide substrate is treated quantum mechanically with the sugar pucker correction.
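
    The BMAP idea — tabulate a high-level-minus-low-level energy difference on a grid of coupled torsion angles once, then interpolate it smoothly at simulation time — can be sketched with a bicubic spline over two angles. The analytic surfaces below are stand-ins, not AM1/d-PhoT or ab initio data.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical 2-D torsion grid (degrees) and energy surfaces.
theta = np.linspace(0.0, 360.0, 37)
t1, t2 = np.meshgrid(theta, theta, indexing="ij")
e_low = np.cos(np.radians(t1)) + np.cos(np.radians(t2))    # cheap model
e_high = e_low + 0.5 * np.sin(np.radians(t1 + t2))         # "reference"

# Spline-fit the high-minus-low difference once; add it to the cheap
# energy at arbitrary torsion pairs during the simulation.
bmap = RectBivariateSpline(theta, theta, e_high - e_low)

def corrected_energy(a, b):
    return (np.cos(np.radians(a)) + np.cos(np.radians(b))
            + bmap(a, b, grid=False))

err = abs(corrected_energy(100.0, 200.0)
          - (np.cos(np.radians(100.0)) + np.cos(np.radians(200.0))
             + 0.5 * np.sin(np.radians(300.0))))
```

    The spline evaluation is cheap and smooth, so the correction adds negligible cost per MD step relative to the semiempirical energy itself.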

  20. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    NASA Astrophysics Data System (ADS)

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-01

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.

  1. Size-extensivity-corrected multireference configuration interaction schemes to accurately predict bond dissociation energies of oxygenated hydrocarbons

    SciTech Connect

    Oyeyemi, Victor B.; Krisiloff, David B.; Keith, John A.; Libisch, Florian; Pavone, Michele; Carter, Emily A.

    2014-01-28

    Oxygenated hydrocarbons play important roles in combustion science as renewable fuels and additives, but many details about their combustion chemistry remain poorly understood. Although many methods exist for computing accurate electronic energies of molecules at equilibrium geometries, a consistent description of entire combustion reaction potential energy surfaces (PESs) requires multireference correlated wavefunction theories. Here we use bond dissociation energies (BDEs) as a foundational metric to benchmark methods based on multireference configuration interaction (MRCI) for several classes of oxygenated compounds (alcohols, aldehydes, carboxylic acids, and methyl esters). We compare results from multireference singles and doubles configuration interaction to those utilizing a posteriori and a priori size-extensivity corrections, benchmarked against experiment and coupled cluster theory. We demonstrate that size-extensivity corrections are necessary for chemically accurate BDE predictions even in relatively small molecules and furnish examples of unphysical BDE predictions resulting from using too-small orbital active spaces. We also outline the specific challenges in using MRCI methods for carbonyl-containing compounds. The resulting complete basis set extrapolated, size-extensivity-corrected MRCI scheme produces BDEs generally accurate to within 1 kcal/mol, laying the foundation for this scheme's use on larger molecules and for more complex regions of combustion PESs.
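
    The most common a posteriori size-extensivity correction of the kind mentioned above is the Davidson-type estimate, ΔE_Q = (1 − c0²)(E_MRCISD − E_ref); the abstract does not state exactly which variant the authors use, and the energies below are invented for illustration.

```python
def davidson_correction(e_ref, e_mrcisd, c0_sq):
    """A posteriori (Davidson-type) size-extensivity correction:
    estimates the energy of excitations missing from MRCISD from the
    weight c0_sq of the reference configurations in the CI vector."""
    return (1.0 - c0_sq) * (e_mrcisd - e_ref)

# Hypothetical energies in hartree.
e_ref, e_ci, c0_sq = -153.021, -153.486, 0.92
e_corrected = e_ci + davidson_correction(e_ref, e_ci, c0_sq)
```

    The correction grows as the reference weight shrinks, which is why size-extensivity errors matter even for the relatively small molecules benchmarked in the paper.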

  2. AN ACCURATE NEW METHOD OF CALCULATING ABSOLUTE MAGNITUDES AND K-CORRECTIONS APPLIED TO THE SLOAN FILTER SET

    SciTech Connect

    Beare, Richard; Brown, Michael J. I.; Pimbblet, Kevin

    2014-12-20

    We describe an accurate new method for determining absolute magnitudes, and hence also K-corrections, that is simpler than most previous methods, being based on a quadratic function of just one suitably chosen observed color. The method relies on the extensive and accurate new set of 129 empirical galaxy template spectral energy distributions from Brown et al. A key advantage of our method is that we can reliably estimate random errors in computed absolute magnitudes due to galaxy diversity, photometric error and redshift error. We derive K-corrections for the five Sloan Digital Sky Survey filters and provide parameter tables for use by the astronomical community. Using the New York Value-Added Galaxy Catalog, we compare our K-corrections with those from kcorrect. Our K-corrections produce absolute magnitudes that are generally in good agreement with kcorrect. Absolute griz magnitudes differ by less than 0.02 mag and those in the u band by ∼0.04 mag. The evolution of rest-frame colors as a function of redshift is better behaved using our method, with relatively few galaxies being assigned anomalously red colors and a tight red sequence being observed across the whole 0.0 < z < 0.5 redshift range.
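
    The functional form is a quadratic in a single observed colour, K = a + bC + cC², with coefficients tabulated per filter and redshift in the paper; the coefficients and magnitudes below are placeholders, not values from those tables.

```python
def k_correction(color, coeffs):
    """K-correction as a quadratic in one observed colour,
    K = a + b*C + c*C**2 (coefficients are hypothetical)."""
    a, b, c = coeffs
    return a + b * color + c * color**2

def abs_mag(m_app, dist_mod, color, coeffs):
    """Absolute magnitude from apparent magnitude, distance modulus,
    and the colour-based K-correction."""
    return m_app - dist_mod - k_correction(color, coeffs)

M = abs_mag(m_app=18.30, dist_mod=38.00, color=0.80,
            coeffs=(0.05, 0.40, -0.10))
```

    Because the method depends on only one colour, error propagation from photometric and redshift uncertainty is straightforward, which is the advantage the authors highlight.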

  3. The importance of accurately correcting for the natural abundance of stable isotopes.

    PubMed

    Midani, Firas S; Wynn, Michelle L; Schnell, Santiago

    2017-03-01

    The use of isotopically labeled tracer substrates is an experimental approach for measuring in vivo and in vitro intracellular metabolic dynamics. Stable isotopes that alter the mass but not the chemical behavior of a molecule are commonly used in isotope tracer studies. Because stable isotopes of some atoms naturally occur at non-negligible abundances, it is important to account for the natural abundance of these isotopes when analyzing data from isotope labeling experiments. Specifically, a distinction must be made between isotopes introduced experimentally via an isotopically labeled tracer and the isotopes naturally present at the start of an experiment. In this tutorial review, we explain the underlying theory of natural abundance correction of stable isotopes, a concept not always understood by metabolic researchers. We also provide a comparison of distinct methods for performing this correction and discuss natural abundance correction in the context of steady-state ¹³C metabolic flux analysis, a method increasingly used to infer intracellular metabolic flux from isotope experiments. Copyright © 2016 Elsevier Inc. All rights reserved.
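
    A standard way to perform the correction is to build a matrix whose columns are the isotopologue patterns generated by natural ¹³C abundance, then solve it against the measured mass distribution. The sketch below handles a simple 3-carbon fragment and ignores other elements and tracer purity, which full correction tools account for.

```python
import numpy as np
from math import comb

def correction_matrix(n_carbons, p13=0.0107):
    """Matrix whose column j gives the isotopologue pattern produced
    by a species with j tracer-labelled carbons, once natural 13C
    abundance in the remaining carbons is included."""
    n = n_carbons + 1
    cm = np.zeros((n, n))
    for j in range(n):
        for k in range(n_carbons - j + 1):
            cm[j + k, j] = (comb(n_carbons - j, k)
                            * p13**k * (1 - p13)**(n_carbons - j - k))
    return cm

# Measured mass distribution of a 3-carbon fragment (M+0 .. M+3).
measured = np.array([0.63, 0.12, 0.20, 0.05])
corrected = np.linalg.solve(correction_matrix(3), measured)
corrected /= corrected.sum()    # renormalise to fractional abundances
```

    Natural abundance shifts intensity toward heavier isotopologues, so the corrected M+0 fraction is larger than the measured one — the distinction between tracer-derived and naturally occurring isotopes that the review emphasises.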

  4. Accurate Template-Based Correction of Brain MRI Intensity Distortion With Application to Dementia and Aging

    PubMed Central

    Studholme, C.; Cardenas, V.; Song, E.; Ezekiel, F.; Maudsley, A.; Weiner, M.

    2007-01-01

    This paper examines an alternative approach to separating magnetic resonance imaging (MRI) intensity inhomogeneity from underlying tissue-intensity structure using a direct template-based paradigm. This permits the explicit spatial modeling of subtle intensity variations present in normal anatomy which may confound common retrospective correction techniques using criteria derived from a global intensity model. A fine-scale entropy driven spatial normalisation procedure is employed to map intensity distorted MR images to a tissue reference template. This allows a direct estimation of the relative bias field between template and subject MR images, from the ratio of their low-pass filtered intensity values. A tissue template for an aging individual is constructed and used to correct distortion in a set of data acquired as part of a study on dementia. A careful validation based on manual segmentation and correction of nine datasets with a range of anatomies and distortion levels is carried out. This reveals a consistent improvement in the removal of global intensity variation in terms of the agreement with a global manual bias estimate, and in the reduction in the coefficient of intensity variation in manually delineated regions of white matter. PMID:14719691
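
    The bias-field estimate described above — the ratio of low-pass filtered subject and template intensities — can be sketched directly. Spatial normalisation is assumed already done; the smoothing scale and the synthetic distortion are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_bias(subject, template, sigma=8.0, eps=1e-6):
    """Relative bias field as the ratio of low-pass filtered subject
    and template intensities (images assumed co-registered)."""
    num = gaussian_filter(subject.astype(float), sigma)
    den = gaussian_filter(template.astype(float), sigma) + eps
    return num / den

# Synthetic subject: template plus a smooth multiplicative distortion.
rng = np.random.default_rng(0)
template = 100.0 + rng.normal(0.0, 1.0, size=(128, 128))
yy, xx = np.mgrid[0:128, 0:128]
true_bias = 1.0 + 0.2 * np.sin(2 * np.pi * xx / 128)
subject = template * true_bias
corrected = subject / estimate_bias(subject, template)
```

    Dividing out the estimated field removes most of the smooth distortion while leaving the fine-scale tissue structure, which is the advantage of the template-based paradigm over global intensity models.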

  5. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    NASA Astrophysics Data System (ADS)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
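
    The hybrid idea — a global model up-sampled everywhere plus measured local residuals layered back on top — can be sketched as below. The linear wafer model and nearest-neighbour residual spreading are simplifications for illustration, not the OCM's actual models.

```python
import numpy as np
from scipy.interpolate import griddata

def hybrid_upsample(meas_xy, meas_val, dense_xy):
    """Global linear wafer model fitted to sparse overlay measurements,
    plus interpolated local residuals (hybrid fingerprint sketch)."""
    g = np.column_stack([np.ones(len(meas_xy)), meas_xy])
    coef, *_ = np.linalg.lstsq(g, meas_val, rcond=None)
    model = np.column_stack([np.ones(len(dense_xy)), dense_xy]) @ coef
    resid = meas_val - g @ coef
    return model + griddata(meas_xy, resid, dense_xy, method="nearest")

# Sparse 5x5 overlay map (nm): a global tilt plus one localized error
# that a purely global model cannot represent.
xs = np.linspace(-100.0, 100.0, 5)
meas_xy = np.array([(x, y) for x in xs for y in xs])
meas_val = 0.5 + 0.01 * meas_xy[:, 0] - 0.005 * meas_xy[:, 1]
meas_val[12] += 3.0                 # local overlay error at wafer centre
dense = hybrid_upsample(meas_xy, meas_val, meas_xy)
```

    At the measured sites the hybrid output reproduces the measurements exactly, including the local error that the global model alone would smooth away.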

  6. A fast experimental beam hardening correction method for accurate bone mineral measurements in 3D μCT imaging system.

    PubMed

    Koubar, Khodor; Bekaert, Virgile; Brasse, David; Laquerriere, Patrice

    2015-06-01

    Bone mineral density plays an important role in the determination of bone strength and fracture risks. Consequently, it is very important to obtain accurate bone mineral density measurements. The microcomputerized tomography system provides 3D information about the architectural properties of bone. Quantitative analysis accuracy is decreased by the presence of artefacts in the reconstructed images, mainly beam hardening artefacts (such as cupping artefacts). In this paper, we introduce a new beam hardening correction method based on a postreconstruction technique that uses off-line water and bone linearization curves calculated experimentally, aiming to take into account the nonhomogeneity of the scanned animal. In order to evaluate the mass correction rate, a calibration line was established to convert the reconstructed linear attenuation coefficients into bone masses. The presented correction method was then applied to a multimaterial cylindrical phantom and to mouse skeleton images. Mass correction rates of up to 18% between uncorrected and corrected images were obtained, along with a marked improvement in the calculated mouse femur mass. Results were also compared to those obtained with the simple water linearization technique, which does not take into account the nonhomogeneity of the object.
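    The core of such a linearization step can be sketched simply: an experimentally calibrated curve maps the measured (beam-hardened) attenuation back to the ideal linear value by interpolation. The calibration pairs below are invented placeholders, not measured data from the paper.

```python
# Hypothetical sketch of linearization-based beam hardening correction: a
# calibration curve of (measured, ideal) attenuation pairs is interpolated to
# recover the linear value. Calibration numbers are invented for illustration.
def linearize(measured, curve):
    """curve: sorted (measured_mu, ideal_mu) calibration pairs."""
    xs = [m for m, _ in curve]
    ys = [i for _, i in curve]
    if measured <= xs[0]:
        return ys[0]
    if measured >= xs[-1]:
        return ys[-1]
    for k in range(1, len(xs)):
        if measured <= xs[k]:
            t = (measured - xs[k - 1]) / (xs[k] - xs[k - 1])
            return ys[k - 1] + t * (ys[k] - ys[k - 1])

# cupping: measured attenuation grows sublinearly with the true attenuation
water_curve = [(0.0, 0.0), (0.8, 1.0), (1.4, 2.0), (1.8, 3.0)]
corrected = linearize(1.1, water_curve)  # roughly 1.5
```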

  7. Geometric optimisation of an accurate cosine correcting optic fibre coupler for solar spectral measurement

    NASA Astrophysics Data System (ADS)

    Cahuantzi, Roberto; Buckley, Alastair

    2017-09-01

    Making accurate and reliable measurements of solar irradiance is important for understanding performance in the photovoltaic energy sector. In this paper, we present design details and performance of a number of fibre optic couplers for use in irradiance measurement systems employing remote light sensors, applicable to either spectrally resolved or broadband measurement. The angular and spectral characteristics of different coupler designs are characterised and compared with existing state-of-the-art commercial technology. The new coupler designs are fabricated from polytetrafluoroethylene (PTFE) rods and operate through forward scattering of incident sunlight on the front surfaces of the structure into an optic fibre located in a cavity to the rear of the structure. The PTFE couplers exhibit up to 4.8% variation in scattered transmission intensity between 425 nm and 700 nm and show minimal specular reflection, making the designs accurate and reliable over the visible region. Through careful geometric optimisation, near-perfect cosine dependence of the coupler's angular response can be achieved. The PTFE designs represent a significant improvement over the state of the art, with less than 0.01% error compared with the ideal cosine response for angles of incidence up to 50°.
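    The cosine-error figure quoted above is straightforward to compute from an angular response measurement: compare each measured response to the ideal cos(θ) and take the worst relative deviation. The "measured" values below are invented for illustration.

```python
import math

# Check of cosine-response error: worst relative deviation of a measured
# angular response from the ideal cos(theta). The response values are invented.
def worst_cosine_error(angles_deg, responses):
    errs = []
    for a, r in zip(angles_deg, responses):
        ideal = math.cos(math.radians(a))
        errs.append(abs(r - ideal) / ideal)
    return max(errs)

angles = [0.0, 20.0, 50.0]
measured = [1.0, 0.9397 * 1.002, 0.6428 * 0.998]  # ~0.2% invented deviations
err = worst_cosine_error(angles, measured)
```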

  8. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels.

    PubMed

    Dral, Pavlo O; Owens, Alec; Yurchenko, Sergei N; Thiel, Walter

    2017-06-28

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.
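    The regression engine described above, kernel ridge regression with a Gaussian kernel, fits in a few lines. The sketch below trains on a toy 1D "potential" (a cosine standing in for ab initio energies) and predicts an unseen geometry; the function, grid, and hyperparameters are invented, not the methyl chloride PES.

```python
import math

# Minimal kernel ridge regression (KRR) with a Gaussian kernel, the regression
# engine named above; toy 1D data, not the paper's PES.
def solve(A, b):
    # naive Gaussian elimination with partial pivoting
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(X, y, sigma=1.0, lam=1e-8):
    def k(a, b):
        return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))
    # kernel matrix with a small ridge on the diagonal
    K = [[k(a, b) + (lam if i == j else 0.0) for j, b in enumerate(X)]
         for i, a in enumerate(X)]
    alpha = solve(K, y)
    return lambda x: sum(al * k(x, xi) for al, xi in zip(alpha, X))

train_X = [0.0, 0.5, 1.0, 1.5, 2.0]       # "training set" geometries
train_y = [math.cos(x) for x in train_X]  # stand-in for ab initio energies
model = krr_fit(train_X, train_y)
err = abs(model(0.75) - math.cos(0.75))   # prediction error at an unseen point
```

    The self-correcting layering described in the abstract would train a second KRR model of the same form on the residuals of this one.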

  9. Structure-based sampling and self-correcting machine learning for accurate calculations of potential energy surfaces and vibrational levels

    NASA Astrophysics Data System (ADS)

    Dral, Pavlo O.; Owens, Alec; Yurchenko, Sergei N.; Thiel, Walter

    2017-06-01

    We present an efficient approach for generating highly accurate molecular potential energy surfaces (PESs) using self-correcting, kernel ridge regression (KRR) based machine learning (ML). We introduce structure-based sampling to automatically assign nuclear configurations from a pre-defined grid to the training and prediction sets, respectively. Accurate high-level ab initio energies are required only for the points in the training set, while the energies for the remaining points are provided by the ML model with negligible computational cost. The proposed sampling procedure is shown to be superior to random sampling and also eliminates the need for training several ML models. Self-correcting machine learning has been implemented such that each additional layer corrects errors from the previous layer. The performance of our approach is demonstrated in a case study on a published high-level ab initio PES of methyl chloride with 44 819 points. The ML model is trained on sets of different sizes and then used to predict the energies for tens of thousands of nuclear configurations within seconds. The resulting datasets are utilized in variational calculations of the vibrational energy levels of CH3Cl. By using both structure-based sampling and self-correction, the size of the training set can be kept small (e.g., 10% of the points) without any significant loss of accuracy. In ab initio rovibrational spectroscopy, it is thus possible to reduce the number of computationally costly electronic structure calculations through structure-based sampling and self-correcting KRR-based machine learning by up to 90%.

  10. Accurate NIRS measurement of muscle oxygenation by correcting the influence of a subcutaneous fat layer

    NASA Astrophysics Data System (ADS)

    Yamamoto, Katsuyuki; Niwayama, Masatsugu; Lin, Ling; Shiga, Toshikazu; Kudo, Nobuki; Takahashi, Makoto

    1998-01-01

    Although the inhomogeneity of tissue structure affects the sensitivity of tissue oxygenation measurement by reflectance near-infrared spectroscopy, few analyses of this effect have been reported. In this study, the influence of a subcutaneous fat layer on muscle oxygenation measurement was investigated by Monte Carlo simulation and experimental studies. In the experiments, measurement sensitivity was examined by measuring the falling rate of oxygenation in occlusion tests on the forearm using a tissue oxygen monitor. The fat layer thickness was measured by ultrasonography. Results of the simulation and occlusion tests clearly showed that the presence of a fat layer greatly decreases the measurement sensitivity and increases the light intensity at the detector. The correction factors of sensitivity were obtained from this relationship and were successfully validated by experiments on 12 subjects whose fat layer thickness ranged from 3.5 to 8 mm.
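    Applying such a correction factor is simple: sensitivity falls as the fat layer thickens, so the raw oxygenation slope is divided by a thickness-dependent factor. The factor curve below is an invented monotone stand-in; only the idea of dividing by a fat-dependent sensitivity follows the paper.

```python
# Hypothetical sketch of the sensitivity correction: a fat-thickness-dependent
# factor rescales the raw desaturation slope measured during occlusion.
# The factor curve is invented for illustration.
def sensitivity_factor(fat_mm):
    # toy monotone-decreasing curve, clamped to a sane floor
    return max(0.2, 1.0 - 0.1 * fat_mm)

def corrected_slope(raw_slope, fat_mm):
    # thicker fat -> lower sensitivity -> divide to recover the true slope
    return raw_slope / sensitivity_factor(fat_mm)

slope = corrected_slope(-0.5, 4.0)  # raw slope of -0.5 %/s under 4 mm of fat
```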

  11. Accurate NIRS measurement of muscle oxygenation by correcting the influence of a subcutaneous fat layer

    NASA Astrophysics Data System (ADS)

    Yamamoto, Katsuyuki; Niwayama, Masatsugu; Lin, Ling; Shiga, Toshikazu; Kudo, Nobuki; Takahashi, Makoto

    1997-12-01

    Although the inhomogeneity of tissue structure affects the sensitivity of tissue oxygenation measurement by reflectance near-infrared spectroscopy, few analyses of this effect have been reported. In this study, the influence of a subcutaneous fat layer on muscle oxygenation measurement was investigated by Monte Carlo simulation and experimental studies. In the experiments, measurement sensitivity was examined by measuring the falling rate of oxygenation in occlusion tests on the forearm using a tissue oxygen monitor. The fat layer thickness was measured by ultrasonography. Results of the simulation and occlusion tests clearly showed that the presence of a fat layer greatly decreases the measurement sensitivity and increases the light intensity at the detector. The correction factors of sensitivity were obtained from this relationship and were successfully validated by experiments on 12 subjects whose fat layer thickness ranged from 3.5 to 8 mm.

  12. Drift correction for accurate PRF-shift MR thermometry during mild hyperthermia treatments with MR-HIFU.

    PubMed

    Bing, Chenchen; Staruch, Robert M; Tillander, Matti; Köhler, Max O; Mougenot, Charles; Ylihautala, Mika; Laetsch, Theodore W; Chopra, Rajiv

    2016-09-01

    There is growing interest in performing hyperthermia treatments with clinical magnetic resonance imaging-guided high-intensity focused ultrasound (MR-HIFU) therapy systems designed for tissue ablation. During hyperthermia treatment, however, due to the narrow therapeutic window (41-45 °C), careful evaluation of the accuracy of proton resonant frequency (PRF) shift MR thermometry for these types of exposures is required. The purpose of this study was to evaluate the accuracy of MR thermometry using a clinical MR-HIFU system equipped with a hyperthermia treatment algorithm. Mild heating was performed in a tissue-mimicking phantom with implanted temperature sensors using the clinical MR-HIFU system. The influence of image-acquisition settings and post-acquisition correction algorithms on the accuracy of temperature measurements was investigated. The ability to achieve uniform heating for up to 40 min was evaluated in rabbit experiments. Automatic centre-frequency adjustments prior to image-acquisition corrected image shifts on the order of 0.1 mm/min. Zero- and first-order phase variations were observed over time, supporting the use of a combined drift correction algorithm. The temperature accuracy achieved using both centre-frequency adjustment and the combined drift correction algorithm was 0.57 ± 0.58 °C in the heated region and 0.54 ± 0.42 °C in the unheated region. Accurate temperature monitoring of hyperthermia exposures using PRF shift MR thermometry is possible through careful implementation of image-acquisition settings and drift correction algorithms. For the evaluated clinical MR-HIFU system, centre-frequency adjustment eliminated image shifts, and a combined drift correction algorithm achieved temperature measurements with an acceptable accuracy for monitoring and controlling hyperthermia exposures.
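    A combined zero- and first-order drift correction of the kind described can be sketched as a line fit over unheated reference pixels that is then subtracted everywhere, leaving only heating-induced phase. A 1D profile with invented numbers stands in for the phase image.

```python
# Hypothetical sketch of combined zero/first-order drift correction: fit
# phase = a + b*x over unheated reference pixels, subtract the fit from the
# whole profile. Numbers are invented for illustration.
def drift_correct(positions, phases, ref_mask):
    xs = [x for x, m in zip(positions, ref_mask) if m]
    ys = [p for p, m in zip(phases, ref_mask) if m]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return [p - (a + b * x) for x, p in zip(positions, phases)]

# drift of 0.2 + 0.05*x plus a true heating-induced phase of 1.0 at x = 2
corrected = drift_correct([0, 1, 2, 3, 4],
                          [0.2, 0.25, 1.3, 0.35, 0.4],
                          [True, True, False, True, True])
```

    After correction the heated pixel retains its true phase while the reference pixels return to zero.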

  13. Distribution of high-stability 10 GHz local oscillator over 100 km optical fiber with accurate phase-correction system.

    PubMed

    Wang, Siwei; Sun, Dongning; Dong, Yi; Xie, Weilin; Shi, Hongxiao; Yi, Lilin; Hu, Weisheng

    2014-02-15

    We have developed a radio-frequency local oscillator remote distribution system, which transfers a phase-stabilized 10.03 GHz signal over 100 km optical fiber. The phase noise of the remote signal caused by temperature and mechanical stress variations on the fiber is compensated by a high-precision phase-correction system, which is achieved using a single sideband modulator to transfer the phase correction from intermediate frequency to radio frequency, thus enabling accurate phase control of the 10 GHz signal. The residual phase noise of the remote 10.03 GHz signal is measured to be -70  dBc/Hz at 1 Hz offset, and long-term stability of less than 1×10⁻¹⁶ at 10,000 s averaging time is achieved. Phase error is less than ±0.03π.

  14. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    NASA Astrophysics Data System (ADS)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator and demonstrated quantitative birefringence imaging [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of the true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and
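    The MAP step itself reduces to an argmax over a precomputed likelihood table. In the toy sketch below, a Gaussian whose spread shrinks with SNR stands in for the Monte-Carlo-derived PDF; all numbers and the candidate grid are invented.

```python
import math

# Toy sketch of the MAP step: pick the candidate true retardation that
# maximizes a precomputed likelihood of the observed value at the given SNR.
# A Gaussian stands in for the Monte-Carlo table; numbers are invented.
CANDIDATES = [i * 0.1 for i in range(11)]  # candidate true retardations

def likelihood(measured, true, snr):
    sigma = 0.5 / snr  # toy noise model: spread shrinks as SNR grows
    return math.exp(-((measured - true) ** 2) / (2 * sigma ** 2))

def map_estimate(measured, snr):
    return max(CANDIDATES, key=lambda t: likelihood(measured, t, snr))

est = map_estimate(0.43, snr=10.0)  # snaps to the most plausible candidate
```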

  15. Validation of a Method to Accurately Correct Anterior Superior Iliac Spine Marker Occlusion

    PubMed Central

    Hoffman, Joshua T.; McNally, Michael P.; Wordeman, Samuel C.; Hewett, Timothy E.

    2015-01-01

    Anterior superior iliac spine (ASIS) marker occlusion commonly occurs during three-dimensional (3-D) motion capture of dynamic tasks with deep hip flexion. The purpose of this study was to validate a universal technique to correct ASIS occlusion. 420 ms of bilateral ASIS marker occlusion was simulated in fourteen drop vertical jump (DVJ) trials (n=14). Kinematic and kinetic hip data calculated for pelvic segments based on iliac crest (IC) marker and virtual ASIS (produced by our algorithm and a commercial virtual join) trajectories were compared to true ASIS marker tracking data. Root mean squared errors (RMSEs; mean ± standard deviation) and intra-class correlations (ICCs) between pelvic tracking based on virtual ASIS trajectories filled by our algorithm and true ASIS position were 2.3±0.9° (ICC=0.982) flexion/extension and 0.8±0.2° (ICC=0.954) abduction/adduction for hip angles, and 0.40±0.17 N-m (ICC=1.000) and 1.05±0.36 N-m (ICC=0.998) for sagittal and frontal plane moments. RMSEs for IC pelvic tracking were 6.9±1.8° (ICC=0.888) flexion/extension and 0.8±0.3° (ICC=0.949) abduction/adduction for hip angles, and 0.31±0.13 N-m (ICC=1.00) and 1.48±0.69 N-m (ICC=0.996) for sagittal and frontal plane moments. Finally, the commercially-available virtual join demonstrated RMSEs of 4.4±1.5° (ICC=0.945) flexion/extension and 0.7±0.2° (ICC=0.972) abduction/adduction for hip angles, and 0.97±0.62 N-m (ICC=1.000) and 1.49±0.67 N-m (ICC=0.996) for sagittal and frontal plane moments. The presented algorithm exceeded the a priori ICC cutoff of 0.95 for excellent validity and is an acceptable tracking alternative. While ICCs for the commercially available virtual join did not exhibit excellent correlation, good validity was observed for all kinematics and kinetics. IC marker pelvic tracking is not a valid alternative. PMID:25704531
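    The RMSE validity metric used above is a one-liner. The sketch below compares a hip-angle trace tracked from true markers against one from a virtual-marker fill; the traces are invented, and the study's ICC computation is omitted.

```python
import math

# Sketch of the RMSE validity metric: root mean squared error between hip
# angles from true markers and from a virtual-marker fill (invented traces).
def rmse(true_vals, est_vals):
    return math.sqrt(sum((t - e) ** 2 for t, e in zip(true_vals, est_vals))
                     / len(true_vals))

true_flex = [10.0, 40.0, 80.0, 40.0, 10.0]  # hip flexion trace (deg)
virt_flex = [11.0, 42.0, 83.0, 41.0, 10.0]  # same trace from virtual markers
err_deg = round(rmse(true_flex, virt_flex), 2)  # 1.73
```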

  16. Subject-specific bone attenuation correction for brain PET/MR: can ZTE-MRI substitute CT scan accurately?

    PubMed

    Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irene; Comtat, Claude

    2017-08-24

    In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach which estimates an AC map from an averaged CT template. As an alternative, we propose to use a Zero Echo Time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield Units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) for air and soft tissue and by using the linear relationship to generate a continuous LAC map for the bone. Additionally, for comparison purposes, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map where the bone is ignored and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image. © 2017 Institute of Physics and Engineering in Medicine.
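    The map-building step described here can be sketched as: classify each voxel, assign fixed HU to air and soft tissue, and map normalized ZTE intensity in bone linearly to HU. The slope, intercept, and tissue values below are invented placeholders, not the fitted values from the paper's CT-MR database.

```python
# Hypothetical sketch of the continuous pseudo-CT map: fixed HU for air and
# soft tissue, a linear map of normalized ZTE intensity for bone. Slope,
# intercept, and tissue values are invented for illustration.
BONE_SLOPE, BONE_INTERCEPT = -2000.0, 2000.0  # illustrative linear fit

def pseudo_ct(ztes, labels):
    hu = []
    for z, lab in zip(ztes, labels):
        if lab == "air":
            hu.append(-1000.0)
        elif lab == "soft":
            hu.append(40.0)
        else:  # bone: continuous HU from normalized ZTE intensity
            hu.append(BONE_SLOPE * z + BONE_INTERCEPT)
    return hu

hu_map = pseudo_ct([0.25, 0.5, 0.9], ["bone", "soft", "air"])
```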

  17. Subject-specific bone attenuation correction for brain PET/MR: can ZTE-MRI substitute CT scan accurately?

    NASA Astrophysics Data System (ADS)

    Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude

    2017-10-01

    In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.

  18. Accurate Attitude Estimation Using ARS under Conditions of Vehicle Movement Based on Disturbance Acceleration Adaptive Estimation and Correction

    PubMed Central

    Xing, Li; Hang, Yijun; Xiong, Zhi; Liu, Jianye; Wan, Zhong

    2016-01-01

    This paper describes a disturbance acceleration adaptive estimation and correction approach for an attitude reference system (ARS) so as to improve the attitude estimate precision under vehicle movement conditions. The proposed approach depends on a Kalman filter, where the attitude error, the gyroscope zero offset error and the disturbance acceleration error are estimated. By switching the filter decay coefficient of the disturbance acceleration model in different acceleration modes, the disturbance acceleration is adaptively estimated and corrected, and the attitude estimate precision is thereby improved. The filter was tested in three different disturbance acceleration modes (non-acceleration, vibration-acceleration and sustained-acceleration mode, respectively) by digital simulation. Moreover, the proposed approach was tested in a kinematic vehicle experiment as well. Using the designed simulations and kinematic vehicle experiments, it has been shown that the disturbance acceleration of each mode can be accurately estimated and corrected. Moreover, compared with the complementary filter, the experimental results explicitly demonstrate that the proposed approach further improves the attitude estimate precision under vehicle movement conditions. PMID:27754469
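    The mode-switching idea can be sketched in isolation: the detected acceleration mode selects the decay coefficient used to propagate the disturbance-acceleration state between filter steps. The thresholds and decay values below are invented, and the Kalman filter itself is omitted.

```python
# Hypothetical sketch of the mode-switched disturbance model: acceleration
# mode selects the decay coefficient for the disturbance-acceleration state.
# Thresholds and decay values are invented; the Kalman filter is omitted.
DECAY = {"static": 0.99, "vibration": 0.5, "sustained": 0.95}

def detect_mode(accel_magnitude_error):
    # deviation of measured specific-force magnitude from gravity, in m/s^2
    if accel_magnitude_error < 0.05:
        return "static"
    return "vibration" if accel_magnitude_error < 1.0 else "sustained"

def propagate_disturbance(d_est, mode):
    # first-order decay of the disturbance estimate between filter steps
    return DECAY[mode] * d_est

mode = detect_mode(0.3)                    # brief deviation -> "vibration"
d_next = propagate_disturbance(1.0, mode)  # decays fastest in vibration mode
```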

  19. On the accurate long-time solution of the wave equation in exterior domains: Asymptotic expansions and corrected boundary conditions

    NASA Technical Reports Server (NTRS)

    Hagstrom, Thomas; Hariharan, S. I.; Maccamy, R. C.

    1993-01-01

    We consider the solution of scattering problems for the wave equation using approximate boundary conditions at artificial boundaries. These conditions are explicitly viewed as approximations to an exact boundary condition satisfied by the solution on the unbounded domain. We study the short and long term behavior of the error. It is proved that, in two space dimensions, no local in time, constant coefficient boundary operator can lead to accurate results uniformly in time for the class of problems we consider. A variable coefficient operator is developed which attains better accuracy (uniformly in time) than is possible with constant coefficient approximations. The theory is illustrated by numerical examples. We also analyze the proposed boundary conditions using energy methods, leading to asymptotically correct error bounds.

  20. Accurate Estimation of Effective Population Size in the Korean Dairy Cattle Based on Linkage Disequilibrium Corrected by Genomic Relationship Matrix

    PubMed Central

    Shin, Dong-Hyun; Cho, Kwang-Hyun; Park, Kyoung-Do; Lee, Hyun-Jeong; Kim, Heebal

    2013-01-01

    Linkage disequilibrium between markers or genetic variants underlying interesting traits affects many genomic methodologies. In many genomic methodologies, the effective population size (Ne) is important to assess the genetic diversity of animal populations. In this study, dairy cattle were genotyped using the Illumina BovineHD Genotyping BeadChips for over 777,000 SNPs located across all autosomes, mitochondria and sex chromosomes, and 70,000 autosomal SNPs were selected randomly for the final analysis. We characterized more accurate linkage disequilibrium in a sample of 96 dairy cattle producing milk in Korea. Estimated linkage disequilibrium was relatively high between closely linked markers (>0.6 at 10 kb) and decreased with increasing distance. Using formulae that related the expected linkage disequilibrium to Ne, and assuming a constant actual population size, Ne was estimated to be approximately 122 in this population. Historical Ne, calculated assuming linear population growth, was suggestive of a rapid increase in Ne over the past 10 generations, and increased slowly thereafter. Additionally, we corrected the genomic relationship structure per chromosome in calculating r2 and estimated Ne. The observed Ne based on r2 corrected by genomic relationship structure can be rationalized using current knowledge of the history of the dairy cattle breeds producing milk in Korea. PMID:25049757
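    The classical LD-based estimator such corrections build on is Sved's approximation, E[r2] ≈ 1/(1 + 4·Ne·c) for recombination rate c, which inverts to Ne ≈ (1/r2 − 1)/(4c). The r2 and recombination rate below are illustrative values chosen to land near the order of magnitude reported above, not the study's data.

```python
# Worked example of the LD-based Ne estimator (Sved's approximation
# E[r2] = 1/(1 + 4*Ne*c), solved for Ne). Input values are illustrative.
def ne_from_ld(r2, c):
    # c: recombination rate between the marker pair, in Morgans
    return (1.0 / r2 - 1.0) / (4.0 * c)

ne = round(ne_from_ld(0.672, 0.001))  # ~100 kb spacing -> Ne of about 122
```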

  1. Accurate estimation of effective population size in the korean dairy cattle based on linkage disequilibrium corrected by genomic relationship matrix.

    PubMed

    Shin, Dong-Hyun; Cho, Kwang-Hyun; Park, Kyoung-Do; Lee, Hyun-Jeong; Kim, Heebal

    2013-12-01

    Linkage disequilibrium between markers or genetic variants underlying interesting traits affects many genomic methodologies. In many genomic methodologies, the effective population size (Ne) is important to assess the genetic diversity of animal populations. In this study, dairy cattle were genotyped using the Illumina BovineHD Genotyping BeadChips for over 777,000 SNPs located across all autosomes, mitochondria and sex chromosomes, and 70,000 autosomal SNPs were selected randomly for the final analysis. We characterized more accurate linkage disequilibrium in a sample of 96 dairy cattle producing milk in Korea. Estimated linkage disequilibrium was relatively high between closely linked markers (>0.6 at 10 kb) and decreased with increasing distance. Using formulae that related the expected linkage disequilibrium to Ne, and assuming a constant actual population size, Ne was estimated to be approximately 122 in this population. Historical Ne, calculated assuming linear population growth, was suggestive of a rapid increase in Ne over the past 10 generations, and increased slowly thereafter. Additionally, we corrected the genomic relationship structure per chromosome in calculating r2 and estimated Ne. The observed Ne based on r2 corrected by genomic relationship structure can be rationalized using current knowledge of the history of the dairy cattle breeds producing milk in Korea.

  2. ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction.

    PubMed

    Hajiloo, Mohsen; Sapkota, Yadav; Mackey, John R; Robson, Paula; Greiner, Russell; Damaraju, Sambasivarao

    2013-02-22

    Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case-control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual's continental and sub-continental ancestry. To predict an individual's continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control's λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of 86.5% ± 2.4%, 95.6% ± 3
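    The ensemble-of-disjoint-trees idea, including its robustness to missing SNPs, can be sketched with hard-coded decision stumps: each tree votes a label from its own SNP, trees whose SNP is missing simply abstain, and the majority wins. The stumps, SNP names, and labels below are invented, not ETHNOPRED's learned trees.

```python
from collections import Counter

# Toy sketch of an ensemble of disjoint decision trees for ancestry
# prediction: majority vote, with trees abstaining on missing SNPs.
# Stumps, SNP names, and labels are invented.
TREES = [("rs1", 1, "EUR", "EAS"),   # (snp, threshold, below-label, at/above-label)
         ("rs2", 1, "EUR", "EAS"),
         ("rs3", 2, "EUR", "EAS")]

def predict(genotype):
    # genotype: {snp: minor-allele count 0/1/2}; a missing SNP just loses a vote,
    # which is why disjoint trees tolerate missing values
    votes = [(below if genotype[snp] < thr else above)
             for snp, thr, below, above in TREES if snp in genotype]
    return Counter(votes).most_common(1)[0][0]

full = predict({"rs1": 0, "rs2": 2, "rs3": 0})  # 2 EUR votes vs 1 -> "EUR"
partial = predict({"rs2": 2})                   # one tree still votes -> "EAS"
```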

  3. ETHNOPRED: a novel machine learning method for accurate continental and sub-continental ancestry identification and population stratification correction

    PubMed Central

    2013-01-01

    Background Population stratification is a systematic difference in allele frequencies between subpopulations. This can lead to spurious association findings in the case–control genome wide association studies (GWASs) used to identify single nucleotide polymorphisms (SNPs) associated with disease-linked phenotypes. Methods such as self-declared ancestry, ancestry informative markers, genomic control, structured association, and principal component analysis are used to assess and correct population stratification but each has limitations. We provide an alternative technique to address population stratification. Results We propose a novel machine learning method, ETHNOPRED, which uses the genotype and ethnicity data from the HapMap project to learn ensembles of disjoint decision trees, capable of accurately predicting an individual’s continental and sub-continental ancestry. To predict an individual’s continental ancestry, ETHNOPRED produced an ensemble of 3 decision trees involving a total of 10 SNPs, with 10-fold cross validation accuracy of 100% using HapMap II dataset. We extended this model to involve 29 disjoint decision trees over 149 SNPs, and showed that this ensemble has an accuracy of ≥ 99.9%, even if some of those 149 SNP values were missing. On an independent dataset, predominantly of Caucasian origin, our continental classifier showed 96.8% accuracy and improved genomic control’s λ from 1.22 to 1.11. We next used the HapMap III dataset to learn classifiers to distinguish European subpopulations (North-Western vs. Southern), East Asian subpopulations (Chinese vs. Japanese), African subpopulations (Eastern vs. Western), North American subpopulations (European vs. Chinese vs. African vs. Mexican vs. Indian), and Kenyan subpopulations (Luhya vs. Maasai). In these cases, ETHNOPRED produced ensembles of 3, 39, 21, 11, and 25 disjoint decision trees, respectively involving 31, 502, 526, 242 and 271 SNPs, with 10-fold cross validation accuracy of

  4. Benchmark studies of the Bending Corrected Rotating Linear Model (BCRLM) reactive scattering code: Implications for accurate quantum calculations

    SciTech Connect

    Hayes, E.F.; Darakjian, Z. . Dept. of Chemistry); Walker, R.B. )

    1990-01-01

    The Bending Corrected Rotating Linear Model (BCRLM), developed by Hayes and Walker, is a simple approximation to the true multidimensional scattering problem for reactions of the type: A + BC {yields} AB + C. While the BCRLM method is simpler than methods designed to obtain accurate three dimensional quantum scattering results, this turns out to be a major advantage in terms of our benchmarking studies. The computer code used to obtain BCRLM scattering results is written for the most part in standard FORTRAN and has been ported to several scalar, vector, and parallel architecture computers including the IBM 3090-600J, the Cray XMP and YMP, the Ardent Titan, IBM RISC System/6000, Convex C-1 and the MIPS 2000. Benchmark results will be reported for each of these machines with an emphasis on comparing the scalar, vector, and parallel performance for the standard code with minimum modifications. Detailed analysis of the mapping of the BCRLM approach onto both shared and distributed memory parallel architecture machines indicates the importance of introducing several key changes in the basic strategy and algorithms used to calculate scattering results. This analysis of the BCRLM approach provides some insights into optimal strategies for mapping three dimensional quantum scattering methods, such as the Parker-Pack method, onto shared or distributed memory parallel computers.

  5. Correction

    NASA Astrophysics Data System (ADS)

    1995-04-01

    Seismic images of the Brooks Range, Arctic Alaska, reveal crustal-scale duplexing: Correction Geology, v. 23, p. 65-68 (January 1995) The correct Figure 4A, for the loose insert, is given here. See Figure 4A below. Corrected inserts will be available to those requesting copies of the article from the senior author, Gary S. Fuis, U.S. Geological Survey, 345 Middlefield Road, Menlo Park, CA 94025. Figure 4A. P-wave velocity model of Brooks Range region (thin gray contours) with migrated wide-angle reflections (heavy red lines) and migrated vertical-incidence reflections (short black lines) superimposed. Velocity contour interval is 0.25 km/s; 4, 5, and 6 km/s contours are labeled. Estimated error in velocities is one contour interval. Symbols on faults shown at top are as in Figure 2 caption.

  6. Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling convex Voronoi polyhedra: near-field corrections revisited

    NASA Astrophysics Data System (ADS)

    Alam, Aftab; Wilson, Brian G.; Johnson, Duane D.

    2012-02-01

    We present an accurate and rapid solution of Poisson's equation for space-filling, arbitrarily-shaped, convex Voronoi polyhedra (VP); the method is O(N), where N is the number of distinct VP representing the system. In effect, we resolve the longstanding problem of fast but accurate numerical solution of the near-field corrections (NFC), contributions to each VP potential from nearby VP -- typically involving multipole-type conditionally-convergent sums, or fast Fourier transforms. Our method avoids all ill-convergent sums, is simple, accurate, efficient, and works generally, i.e., for periodic solids, molecules, or systems with disorder or imperfections. We demonstrate the method's practicality by numerical calculations compared to exactly solvable models.

  7. Additional correction for energy transfer efficiency calculation in filter-based Förster resonance energy transfer microscopy for more accurate results

    NASA Astrophysics Data System (ADS)

    Sun, Yuansheng; Periasamy, Ammasi

    2010-03-01

    Förster resonance energy transfer (FRET) microscopy is commonly used to monitor protein interactions with filter-based imaging systems, which require spectral bleedthrough (or cross talk) correction to accurately measure energy transfer efficiency (E). The double-label (donor+acceptor) specimen is excited with the donor wavelength: the acceptor emission provides the uncorrected FRET signal, and the donor emission (the donor channel) represents the quenched donor (qD), the basis for the E calculation. Our results indicate this is not the most accurate determination of the quenched donor signal, as it fails to consider the donor spectral bleedthrough (DSBT) signals in the qD for the E calculation. Our new model addresses this, leading to a more accurate E result. This refinement improves E comparisons made with lifetime and spectral FRET imaging microscopy as shown here using several genetic (FRET standard) constructs, where cerulean and venus fluorescent proteins are tethered by different amino acid linkers.
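
As a rough sketch of the efficiency calculation discussed in this record (illustrative only; the variable names and the way the DSBT term is applied are assumptions, not the authors' exact model): E is computed from the quenched donor relative to the unquenched donor, and the refinement removes a donor-spectral-bleedthrough estimate from the quenched-donor signal first.

```python
# Hedged sketch of a filter-based FRET efficiency calculation.
# donor_alone:     donor-channel intensity of a donor-only specimen (D)
# quenched_donor:  donor-channel intensity of the double-label specimen (qD)
# dsbt:            an estimate of donor spectral bleedthrough contaminating qD
# All names and the DSBT handling are illustrative assumptions.

def fret_efficiency(donor_alone, quenched_donor, dsbt=0.0):
    """E = 1 - qD / D, optionally removing a DSBT estimate from qD first."""
    qd_corrected = quenched_donor - dsbt
    return 1.0 - qd_corrected / donor_alone

print(fret_efficiency(1000.0, 600.0))             # -> 0.4 (no DSBT correction)
print(fret_efficiency(1000.0, 600.0, dsbt=100.0)) # -> 0.5 (DSBT removed)
```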

  8. Partial volume correction and image segmentation for accurate measurement of standardized uptake value of grey matter in the brain.

    PubMed

    Bural, Gonca; Torigian, Drew; Basu, Sandip; Houseni, Mohamed; Zhuge, Ying; Rubello, Domenico; Udupa, Jayaram; Alavi, Abass

    2015-12-01

    Our aim was to explore a novel quantitative method [based upon an MRI-based image segmentation that allows actual calculation of grey matter, white matter and cerebrospinal fluid (CSF) volumes] for overcoming the difficulties associated with conventional techniques for measuring actual metabolic activity of the grey matter. We included four patients with normal brain MRI and fluorine-18 fluorodeoxyglucose (F-FDG)-PET scans (two women and two men; mean age 46±14 years) in this analysis. The time interval between the two scans was 0-180 days. We calculated the volumes of grey matter, white matter and CSF by using a novel segmentation technique applied to the MRI images. We measured the mean standardized uptake value (SUV) representing the whole metabolic activity of the brain from the F-FDG-PET images. We also calculated the white matter SUV from the upper transaxial slices (centrum semiovale) of the F-FDG-PET images. The whole brain volume was calculated by summing up the volumes of the white matter, grey matter and CSF. The global cerebral metabolic activity was calculated by multiplying the mean SUV with total brain volume. The whole brain white matter metabolic activity was calculated by multiplying the mean SUV for the white matter by the white matter volume. The global cerebral metabolic activity only reflects those of the grey matter and the white matter, whereas that of the CSF is zero. We subtracted the global white matter metabolic activity from that of the whole brain, resulting in the global grey matter metabolism alone. We then divided the grey matter global metabolic activity by grey matter volume to accurately calculate the SUV for the grey matter alone. The brain volumes ranged between 1546 and 1924 ml. The mean SUV for total brain was 4.8-7. Total metabolic burden of the brain ranged from 5565 to 9617. The mean SUV for white matter was 2.8-4.1. On the basis of these measurements we generated the grey matter SUV, which ranged from 8.1 to 11.3. The
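
The subtraction scheme in this record is simple arithmetic and can be sketched directly (the volumes and SUVs below are hypothetical, chosen inside the ranges the abstract reports):

```python
# Grey-matter SUV via global-minus-white-matter subtraction, as described
# in the abstract. CSF contributes zero metabolic activity.

def grey_matter_suv(mean_suv_brain, brain_vol_ml,
                    mean_suv_white, white_vol_ml, grey_vol_ml):
    """Subtract white-matter metabolic activity from the global activity,
    then normalize by grey-matter volume."""
    global_activity = mean_suv_brain * brain_vol_ml   # whole-brain metabolic burden
    white_activity = mean_suv_white * white_vol_ml    # white-matter share
    grey_activity = global_activity - white_activity  # grey matter is the remainder
    return grey_activity / grey_vol_ml

# e.g. a 1600 ml brain with mean SUV 5.0, 700 ml of white matter at SUV 3.0,
# and 750 ml of grey matter (hypothetical values):
print(grey_matter_suv(5.0, 1600, 3.0, 700, 750))    # -> ~7.87
```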

  9. Harmonic Allocation of Authorship Credit: Source-Level Correction of Bibliometric Bias Assures Accurate Publication and Citation Analysis

    PubMed Central

    Hagen, Nils T.

    2008-01-01

    Authorship credit for multi-authored scientific publications is routinely allocated either by issuing full publication credit repeatedly to all coauthors, or by dividing one credit equally among all coauthors. The ensuing inflationary and equalizing biases distort derived bibliometric measures of merit by systematically benefiting secondary authors at the expense of primary authors. Here I show how harmonic counting, which allocates credit according to authorship rank and the number of coauthors, provides simultaneous source-level correction for both biases as well as accommodating further decoding of byline information. I also demonstrate large and erratic effects of counting bias on the original h-index, and show how the harmonic version of the h-index provides unbiased bibliometric ranking of scientific merit while retaining the original's essential simplicity, transparency and intended fairness. Harmonic decoding of byline information resolves the conundrum of authorship credit allocation by providing a simple recipe for source-level correction of inflationary and equalizing bias. Harmonic counting could also offer unrivalled accuracy in automated assessments of scientific productivity, impact and achievement. PMID:19107201
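
Harmonic counting as described here has a closed form: the i-th of N coauthors receives (1/i) divided by the N-th harmonic number, so the credits always sum to one. A minimal sketch:

```python
# Harmonic allocation of authorship credit: credit_i = (1/i) / (1 + 1/2 + ... + 1/N).
# Inflationary counting would give every author 1.0; equalized counting 1/N each.

def harmonic_credits(n_authors):
    denom = sum(1.0 / k for k in range(1, n_authors + 1))  # N-th harmonic number
    return [(1.0 / i) / denom for i in range(1, n_authors + 1)]

print(harmonic_credits(3))  # -> [6/11, 3/11, 2/11] ≈ [0.545, 0.273, 0.182]
```

The first author always receives the largest share, secondary authors progressively less, and the total per paper is exactly one credit, which is the source-level correction the record describes.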

  10. Corrections.

    PubMed

    2015-07-01

    Lai Y-S, Biedermann P, Ekpo UF, et al. Spatial distribution of schistosomiasis and treatment needs in sub-Saharan Africa: a systematic review and geostatistical analysis. Lancet Infect Dis 2015; published online May 22. http://dx.doi.org/10.1016/S1473-3099(15)00066-3—Figure 1 of this Article should have contained a box stating ‘100 references added’ with an arrow pointing inwards, rather than a box stating ‘199 records excluded’, and an asterisk should have been added after ‘1473 records extracted into GNTD’. Additionally, the positioning of the ‘§’ and ‘†’ footnotes has been corrected in table 1. These corrections have been made to the online version as of June 4, 2015.

  11. Correction.

    PubMed

    2016-02-01

    In the article by Guessous et al (Guessous I, Pruijm M, Ponte B, Ackermann D, Ehret G, Ansermot N, Vuistiner P, Staessen J, Gu Y, Paccaud F, Mohaupt M, Vogt B, Pechère-Bertschi A, Martin PY, Burnier M, Eap CB, Bochud M. Associations of ambulatory blood pressure with urinary caffeine and caffeine metabolite excretions. Hypertension. 2015;65:691–696. doi: 10.1161/HYPERTENSIONAHA.114.04512), which published online ahead of print December 8, 2014, and appeared in the March 2015 issue of the journal, a correction was needed. One of the author surnames was misspelled. Antoinette Pechère-Berstchi has been corrected to read Antoinette Pechère-Bertschi. The authors apologize for this error.

  12. Correction

    NASA Astrophysics Data System (ADS)

    1998-12-01

    Alleged mosasaur bite marks on Late Cretaceous ammonites are limpet (patellogastropod) home scars Geology, v. 26, p. 947-950 (October 1998) This article had the following printing errors: p. 947, Abstract, line 11, “sepia” should be “septa” p. 947, 1st paragraph under Introduction, line 2, “creep” should be “deep” p. 948, column 1, 2nd paragraph, line 7, “creep” should be “deep” p. 949, column 1, 1st paragraph, line 1, “creep” should be “deep” p. 949, column 1, 1st paragraph, line 5, “19774” should be “1977)” p. 949, column 1, 4th paragraph, line 7, “in particular” should be “In particular” CORRECTION Mammalian community response to the latest Paleocene thermal maximum: An isotaphonomic study in the northern Bighorn Basin, Wyoming Geology, v. 26, p. 1011-1014 (November 1998) An error appeared in the References Cited. The correct reference appears below: Fricke, H. C., Clyde, W. C., O'Neil, J. R., and Gingerich, P. D., 1998, Evidence for rapid climate change in North America during the latest Paleocene thermal maximum: Oxygen isotope compositions of biogenic phosphate from the Bighorn Basin (Wyoming): Earth and Planetary Science Letters, v. 160, p. 193-208.

  13. Direct-Space Corrections Enable Fast and Accurate Lorentz-Berthelot Combination Rule Lennard-Jones Lattice Summation.

    PubMed

    Wennberg, Christian L; Murtola, Teemu; Páll, Szilárd; Abraham, Mark J; Hess, Berk; Lindahl, Erik

    2015-12-08

    Long-range lattice summation techniques such as the particle-mesh Ewald (PME) algorithm for electrostatics have been revolutionary to the precision and accuracy of molecular simulations in general. Despite the performance penalty associated with lattice summation electrostatics, few biomolecular simulations today are performed without it. There are increasingly strong arguments for moving in the same direction for Lennard-Jones (LJ) interactions, and by using geometric approximations of the combination rules in reciprocal space, we have been able to make a very high-performance implementation available in GROMACS. Here, we present a new way to correct for these approximations to achieve exact treatment of Lorentz-Berthelot combination rules within the cutoff, and only a very small approximation error remains outside the cutoff (a part that would be completely ignored without LJ-PME). This not only improves accuracy by almost an order of magnitude but also achieves absolute biomolecular simulation performance that is an order of magnitude faster than any other available lattice summation technique for LJ interactions. The implementation includes both CPU and GPU acceleration, and its combination with improved scaling LJ-PME simulations now provides performance close to the truncated potential methods in GROMACS but with much higher accuracy.
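
The combination-rule issue in this record can be illustrated with a toy direct-space correction (a sketch of the idea, not the GROMACS implementation): reciprocal-space LJ-PME assumes geometric combination rules, and within the cutoff one adds the difference between the exact Lorentz-Berthelot energy and the geometric-rule energy.

```python
import math

# Lorentz-Berthelot rules: arithmetic mean for sigma, geometric mean for epsilon.
def lorentz_berthelot(sig_i, eps_i, sig_j, eps_j):
    return 0.5 * (sig_i + sig_j), math.sqrt(eps_i * eps_j)

# Geometric rules (what a reciprocal-space LJ-PME approximation assumes).
def geometric(sig_i, eps_i, sig_j, eps_j):
    return math.sqrt(sig_i * sig_j), math.sqrt(eps_i * eps_j)

def lj_energy(r, sigma, epsilon):
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def corrected_direct_space(r, params_i, params_j):
    """Within the cutoff, add the (LB - geometric) difference to the
    geometric-rule energy, recovering the exact Lorentz-Berthelot energy."""
    sig_lb, eps_lb = lorentz_berthelot(*params_i, *params_j)
    sig_ge, eps_ge = geometric(*params_i, *params_j)
    e_geometric = lj_energy(r, sig_ge, eps_ge)
    correction = lj_energy(r, sig_lb, eps_lb) - e_geometric
    return e_geometric + correction
```

Outside the cutoff only the (small) geometric-rule approximation error remains, which is the trade-off the abstract describes.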

  14. Assessment of long-range-corrected exchange-correlation kernels for solids: Accurate exciton binding energies via an empirically scaled bootstrap kernel

    NASA Astrophysics Data System (ADS)

    Byun, Young-Moo; Ullrich, Carsten A.

    2017-05-01

    In time-dependent density-functional theory, a family of exchange-correlation kernels, known as long-range-corrected (LRC) kernels, have shown promise in the calculation of excitonic effects in solids. We perform a systematic assessment of existing static LRC kernels (empirical LRC, Bootstrap, and jellium-with-a-gap model) for a range of semiconductors and insulators, focusing on optical spectra and exciton binding energies. We find that no LRC kernel is capable of simultaneously producing good optical spectra and quantitatively accurate exciton binding energies for both semiconductors and insulators. We propose a simple and universal, empirically scaled Bootstrap kernel that yields accurate exciton binding energies for all materials under consideration, with low computational cost.

  15. Radiochromic film dosimetry with flatbed scanners: a fast and accurate method for dose calibration and uniformity correction with single film exposure.

    PubMed

    Menegotti, L; Delana, A; Martignano, A

    2008-07-01

    Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel value and spatial dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for a fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used to do both calibration and correction. Gafchromic EBT films were read with two flatbed charge coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. The comparisons with a two-dimensional array of ionization chambers using a 18 x 18 cm2 open field and an inverse pyramid dose pattern show an increment in the percentage of points which pass the gamma analysis (tolerance parameters of 3% and 3 mm), passing from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18 x 18 open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, passing from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appears to be appropriate for use in IMRT verification. The method showed to be fast and to correct properly the nonuniformity and has been adopted for routine clinical IMRT dose verification.
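
The gamma analysis (3%/3 mm) used for the comparisons in this record can be sketched in one dimension (an illustrative simplification of the standard 2-D image comparison; dose normalization conventions vary, local-dose normalization is assumed here):

```python
import math

# 1-D gamma index: a measured point passes when gamma <= 1, i.e. it agrees
# with the reference distribution within the combined dose-difference and
# distance-to-agreement tolerances. Assumes nonzero reference doses.

def gamma_index(x_eval, d_eval, ref_x, ref_d, dose_tol=0.03, dist_tol=3.0):
    """gamma = min over reference points of
    sqrt((dx / dist_tol)^2 + (dd / dose_tol)^2), dd relative to local dose."""
    best = float("inf")
    for xr, dr in zip(ref_x, ref_d):
        dx = (x_eval - xr) / dist_tol          # spatial term, in units of 3 mm
        dd = (d_eval - dr) / (dr * dose_tol)   # dose term, in units of 3%
        best = min(best, math.hypot(dx, dd))
    return best

ref_x = [0.0, 1.0, 2.0]        # positions in mm (hypothetical profile)
ref_d = [1.00, 1.00, 1.00]     # normalized reference doses
print(gamma_index(1.0, 1.02, ref_x, ref_d))  # 2% off in dose -> passes (gamma < 1)
```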

  16. Radiochromic film dosimetry with flatbed scanners: A fast and accurate method for dose calibration and uniformity correction with single film exposure

    SciTech Connect

    Menegotti, L.; Delana, A.; Martignano, A.

    2008-07-15

    Film dosimetry is an attractive tool for dose distribution verification in intensity modulated radiotherapy (IMRT). A critical aspect of radiochromic film dosimetry is the scanner used for the readout of the film: the output needs to be calibrated in dose response and corrected for pixel value and spatial dependent nonuniformity caused by light scattering; these procedures can take a long time. A method for a fast and accurate calibration and uniformity correction for radiochromic film dosimetry is presented: a single film exposure is used to do both calibration and correction. Gafchromic EBT films were read with two flatbed charge coupled device scanners (Epson V750 and 1680Pro). The accuracy of the method is investigated with specific dose patterns and an IMRT beam. The comparisons with a two-dimensional array of ionization chambers using a 18x18 cm{sup 2} open field and an inverse pyramid dose pattern show an increment in the percentage of points which pass the gamma analysis (tolerance parameters of 3% and 3 mm), passing from 55% and 64% for the 1680Pro and V750 scanners, respectively, to 94% for both scanners for the 18x18 open field, and from 76% and 75% to 91% for the inverse pyramid pattern. Application to an IMRT beam also shows better gamma index results, passing from 88% and 86% for the two scanners, respectively, to 94% for both. The number of points and dose range considered for correction and calibration appears to be appropriate for use in IMRT verification. The method showed to be fast and to correct properly the nonuniformity and has been adopted for routine clinical IMRT dose verification.

  17. The accurate calculation of the band gap of liquid water by means of GW corrections applied to plane-wave density functional theory molecular dynamics simulations.

    PubMed

    Fang, Changming; Li, Wun-Fan; Koster, Rik S; Klimeš, Jiří; van Blaaderen, Alfons; van Huis, Marijn A

    2015-01-07

    Knowledge about the intrinsic electronic properties of water is imperative for understanding the behaviour of aqueous solutions that are used throughout biology, chemistry, physics, and industry. The calculation of the electronic band gap of liquids is challenging, because the most accurate ab initio approaches can be applied only to small numbers of atoms, while large numbers of atoms are required for having configurations that are representative of a liquid. Here we show that a high-accuracy value for the electronic band gap of water can be obtained by combining beyond-DFT methods and statistical time-averaging. Liquid water is simulated at 300 K using a plane-wave density functional theory molecular dynamics (PW-DFT-MD) simulation and a van der Waals density functional (optB88-vdW). After applying a self-consistent GW correction the band gap of liquid water at 300 K is calculated as 7.3 eV, in good agreement with recent experimental observations in the literature (6.9 eV). For simulations of phase transformations and chemical reactions in water or aqueous solutions whereby an accurate description of the electronic structure is required, we suggest to use these advanced GW corrections in combination with the statistical analysis of quantum mechanical MD simulations.

  18. Fast, accurate, and robust automatic marker detection for motion correction based on oblique kV or MV projection image pairs.

    PubMed

    Slagmolen, Pieter; Hermans, Jeroen; Maes, Frederik; Budiharto, Tom; Haustermans, Karin; van den Heuvel, Frank

    2010-04-01

    A robust and accurate method that allows the automatic detection of fiducial markers in MV and kV projection image pairs is proposed. The method allows automatic correction of inter- or intrafraction motion. Intratreatment MV projection images are acquired during each of five treatment beams of prostate cancer patients with four implanted fiducial markers. The projection images are first preprocessed using a series of marker-enhancing filters. 2D candidate marker locations are generated for each of the filtered projection images and 3D candidate marker locations are reconstructed by pairing candidates in subsequent projection images. The correct marker positions are retrieved in 3D by the minimization of a cost function that combines 2D image intensity and 3D geometric or shape information for the entire marker configuration simultaneously. This optimization problem is solved using dynamic programming such that the globally optimal configuration for all markers is always found. Translational interfraction and intrafraction prostate motion and the required patient repositioning are assessed from the position of the centroid of the detected markers in different MV image pairs. The method was validated on a phantom using CT as ground-truth and on clinical data sets of 16 patients using manual marker annotations as ground-truth. The entire setup was confirmed to be accurate to around 1 mm by the phantom measurements. The reproducibility of the manual marker selection was less than 3.5 pixels in the MV images. In patient images, markers were correctly identified in at least 99% of the cases for anterior projection images and 96% of the cases for oblique projection images. The average marker detection accuracy was 1.4 +/- 1.8 pixels in the projection images. The centroid of all four reconstructed marker positions in 3D was positioned within 2 mm of the ground-truth position in 99.73% of all cases. 
Detecting four markers in a pair of MV images takes a little less than

  19. Toward accurate thermochemistry of the (24)MgH, (25)MgH, and (26)MgH molecules at elevated temperatures: corrections due to unbound states.

    PubMed

    Szidarovszky, Tamás; Császár, Attila G

    2015-01-07

    The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities Cp(T), are computed a priori for three major MgH isotopologues over the temperature range T = 100-3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q″(T) and Cp(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues of (24)MgH, (25)MgH, and (26)MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
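
The bound-state part of the partition function discussed here is a direct Boltzmann sum; the record's point is that resonance (quasi-bound) and scattering states contribute additional terms of the same form at elevated temperatures. A minimal sketch (the energy levels below are hypothetical, in cm^-1):

```python
import math

# Bound-state partition function Q(T) = sum_i g_i * exp(-E_i / (k_B T)),
# with energies relative to the ground state. Resonance and scattering
# states would enter as further terms with the same Boltzmann weighting.

K_B_CM = 0.6950348  # Boltzmann constant in cm^-1 per kelvin (CODATA value)

def partition_function(levels, temperature):
    """levels: iterable of (degeneracy, energy_cm) pairs."""
    return sum(g * math.exp(-e / (K_B_CM * temperature)) for g, e in levels)

levels = [(1, 0.0), (3, 1400.0), (5, 2800.0)]   # hypothetical rovibrational levels
print(partition_function(levels, 300.0))        # -> slightly above 1 at 300 K
```

As the temperature rises, the exponential weights of high-lying (including quasi-bound) states grow, which is why the resonance contribution becomes significant above about 1000 K.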

  20. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu.

    PubMed

    Grimme, Stefan; Antony, Jens; Ehrlich, Stephan; Krieg, Helge

    2010-04-21

    The method of dispersion correction as an add-on to standard Kohn-Sham density functional theory (DFT-D) has been refined regarding higher accuracy, broader range of applicability, and less empiricism. The main new ingredients are atom-pairwise specific dispersion coefficients and cutoff radii that are both computed from first principles. The coefficients for new eighth-order dispersion terms are computed using established recursion relations. System (geometry) dependent information is used for the first time in a DFT-D type approach by employing the new concept of fractional coordination numbers (CN). They are used to interpolate between dispersion coefficients of atoms in different chemical environments. The method only requires adjustment of two global parameters for each density functional, is asymptotically exact for a gas of weakly interacting neutral atoms, and easily allows the computation of atomic forces. Three-body nonadditivity terms are considered. The method has been assessed on standard benchmark sets for inter- and intramolecular noncovalent interactions with a particular emphasis on a consistent description of light and heavy element systems. The mean absolute deviations for the S22 benchmark set of noncovalent interactions for 11 standard density functionals decrease by 15%-40% compared to the previous (already accurate) DFT-D version. Spectacular improvements are found for a tripeptide-folding model and all tested metallic systems. The rectification of the long-range behavior and the use of more accurate C(6) coefficients also lead to a much better description of large (infinite) systems as shown for graphene sheets and the adsorption of benzene on an Ag(111) surface. For graphene it is found that the inclusion of three-body terms substantially (by about 10%) weakens the interlayer binding. We propose the revised DFT-D method as a general tool for the computation of the dispersion energy in molecules and solids of any kind with DFT and related
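
The pairwise dispersion energy described in this record has the schematic form E_disp = -Σ_n s_n C_n/r^n f_damp,n(r) for n = 6, 8, with a zero-damping function that switches the correction off at short range (a sketch of the functional form only; the coefficients and parameters below are placeholders, not Grimme's published values):

```python
import math

# Schematic pairwise DFT-D dispersion energy with zero damping:
#   f_damp,n(r) = 1 / (1 + 6 * (r / (s_r * R0))^(-alpha_n))
# C6, C8, R0 and the scaling factors here are placeholder values.

def f_damp(r, r0, s_r=1.0, alpha=14.0):
    return 1.0 / (1.0 + 6.0 * (r / (s_r * r0)) ** (-alpha))

def pair_dispersion(r, c6, c8, r0, s6=1.0, s8=1.0):
    """Attractive (negative) pairwise dispersion energy for one atom pair."""
    e6 = s6 * c6 / r ** 6 * f_damp(r, r0, alpha=14.0)
    e8 = s8 * c8 / r ** 8 * f_damp(r, r0, alpha=16.0)
    return -(e6 + e8)

# At large separation the damping approaches 1 and the energy approaches
# the bare -C6/r^6 - C8/r^8 tail; at short range it is smoothly damped to 0.
```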

  1. WAIS-IV reliable digit span is no more accurate than age corrected scaled score as an indicator of invalid performance in a veteran sample undergoing evaluation for mTBI.

    PubMed

    Spencer, Robert J; Axelrod, Bradley N; Drag, Lauren L; Waldron-Perrine, Brigid; Pangilinan, Percival H; Bieliauskas, Linas A

    2013-01-01

    Reliable Digit Span (RDS) is a measure of effort derived from the Digit Span subtest of the Wechsler intelligence scales. Some authors have suggested that the age-corrected scaled score provides a more accurate measure of effort than RDS. This study examined the relative diagnostic accuracy of the traditional RDS, an extended RDS including the new Sequencing task from the Wechsler Adult Intelligence Scale-IV, and the age-corrected scaled score, relative to performance validity as determined by the Test of Memory Malingering. Data were collected from 138 Veterans seen in a traumatic brain injury clinic. The traditional RDS (≤ 7), revised RDS (≤ 11), and Digit Span age-corrected scaled score (≤ 6) had respective sensitivities of 39%, 39%, and 33%, and respective specificities of 82%, 89%, and 91%. Of these indices, revised RDS and the Digit Span age-corrected scaled score provide the most accurate measure of performance validity among the three measures.

  2. Accurate real-time ionospheric corrections as the key to extend the centimeter-error-level GNSS navigation at continental scale (WARTK)

    NASA Astrophysics Data System (ADS)

    Hernandez-Pajares, M.; Juan, J.; Sanz, J.; Aragon-Angel, A.

    2007-05-01

    The main focus of this presentation is to show recent improvements in real-time GNSS ionospheric determination that extend the service area of the so-called "Wide Area Real Time Kinematic" technique (WARTK), allowing centimeter-error-level navigation up to hundreds of kilometers from the nearest GNSS reference site. Real-time GNSS navigation with centimeters of error has been feasible since the nineties thanks to the so-called "Real-Time Kinematic" technique (RTK), which exactly solves the integer values of the double-differenced carrier phase ambiguities. This was possible thanks to dual-frequency carrier phase data acquired simultaneously with data from a close (less than 10-20 km) reference GNSS site, under the assumption of common atmospheric effects on the satellite signal. This technique has been improved by different authors through the consideration of a network of reference sites. However, the differential ionospheric refraction has remained the main factor limiting the applicable distance to the reference site. In this context the authors have been developing the Wide Area RTK technique (WARTK) in different works and projects since 1998, overcoming the mentioned limitations. In this way RTK becomes applicable with the existing sparse (Wide Area) networks of reference GPS stations, separated by hundreds of kilometers. Such networks are presently deployed in the context of other projects, such as SBAS support, over Europe and North America (EGNOS and WAAS, respectively), among other regions. In particular, WARTK is based on computing very accurate differential ionospheric corrections from a Wide Area network of permanent GNSS receivers and providing them in real time to the users. The key points addressed by the technique are accurate real-time ionospheric modeling, combined with the corresponding geodetic model, by means of: a) A tomographic voxel model of the ionosphere

  3. Accurate ab initio determination of the adiabatic potential energy function and the Born-Oppenheimer breakdown corrections for the electronic ground state of LiH isotopologues

    NASA Astrophysics Data System (ADS)

    Holka, Filip; Szalay, Péter G.; Fremont, Julien; Rey, Michael; Peterson, Kirk A.; Tyuterev, Vladimir G.

    2011-03-01

    High level ab initio potential energy functions have been constructed for LiH in order to predict vibrational levels up to dissociation. After careful tests of the parameters of the calculation, the final adiabatic potential energy function has been composed from: (a) an ab initio nonrelativistic potential obtained at the multireference configuration interaction with singles and doubles level including a size-extensivity correction and quintuple-sextuple ζ extrapolations of the basis, (b) a mass-velocity-Darwin relativistic correction, and (c) a diagonal Born-Oppenheimer (BO) correction. Finally, nonadiabatic effects have also been considered by including a nonadiabatic correction to the kinetic energy operator of the nuclei. This correction is calculated from nonadiabatic matrix elements between the ground and excited electronic states. The calculated vibrational levels have been compared with those obtained from the experimental data [J. A. Coxon and C. S. Dickinson, J. Chem. Phys. 134, 9378 (2004)]. It was found that the calculated BO potential results in vibrational levels which have root mean square (rms) deviations of about 6-7 cm-1 for LiH and ˜3 cm-1 for LiD. With all the above mentioned corrections accounted for, the rms deviation falls down to ˜1 cm-1. These results represent a drastic improvement over previous theoretical predictions of vibrational levels for all isotopologues of LiH.

  4. Accurate ab initio determination of the adiabatic potential energy function and the Born-Oppenheimer breakdown corrections for the electronic ground state of LiH isotopologues.

    PubMed

    Holka, Filip; Szalay, Péter G; Fremont, Julien; Rey, Michael; Peterson, Kirk A; Tyuterev, Vladimir G

    2011-03-07

    High level ab initio potential energy functions have been constructed for LiH in order to predict vibrational levels up to dissociation. After careful tests of the parameters of the calculation, the final adiabatic potential energy function has been composed from: (a) an ab initio nonrelativistic potential obtained at the multireference configuration interaction with singles and doubles level including a size-extensivity correction and quintuple-sextuple ζ extrapolations of the basis, (b) a mass-velocity-Darwin relativistic correction, and (c) a diagonal Born-Oppenheimer (BO) correction. Finally, nonadiabatic effects have also been considered by including a nonadiabatic correction to the kinetic energy operator of the nuclei. This correction is calculated from nonadiabatic matrix elements between the ground and excited electronic states. The calculated vibrational levels have been compared with those obtained from the experimental data [J. A. Coxon and C. S. Dickinson, J. Chem. Phys. 134, 9378 (2004)]. It was found that the calculated BO potential results in vibrational levels which have root mean square (rms) deviations of about 6-7 cm(-1) for LiH and ∼3 cm(-1) for LiD. With all the above mentioned corrections accounted for, the rms deviation falls down to ∼1 cm(-1). These results represent a drastic improvement over previous theoretical predictions of vibrational levels for all isotopologues of LiH.

  5. Accurate evaluations of the field shift and lowest-order QED correction for the ground 1¹S-states of some light two-electron ions.

    PubMed

    Frolov, Alexei M; Wardlaw, David M

    2014-09-14

    Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1(1)S-states of some light two-electron Li(+), Be(2+), B(3+), and C(4+) ions. To determine the field components of these isotopic shifts we apply the Racah-Rosental-Breit formula. We also determine the lowest order QED corrections to the isotopic shifts for each of these two-electron ions.

  6. Accurate evaluations of the field shift and lowest-order QED correction for the ground 1{sup 1}S−states of some light two-electron ions

    SciTech Connect

    Frolov, Alexei M.; Wardlaw, David M.

    2014-09-14

    Mass-dependent and field shift components of the isotopic shift are determined to high accuracy for the ground 1¹S-states of some light two-electron Li⁺, Be²⁺, B³⁺, and C⁴⁺ ions. To determine the field components of these isotopic shifts we apply the Racah-Rosental-Breit formula. We also determine the lowest order QED corrections to the isotopic shifts for each of these two-electron ions.

  7. Comment on: Accurate and fast numerical solution of Poisson's equation for arbitrary, space-filling Voronoi polyhedra: Near-field corrections revisited

    SciTech Connect

    Gonis, Antonios; Zhang, Xiaoguang

    2012-01-01

    This is a comment on the paper by Aftab Alam, Brian G. Wilson, and D. D. Johnson [1], proposing the solution of the near-field corrections (NFCs) problem for the Poisson equation for extended, e.g., space-filling, charge densities. We point out that the problem considered by the authors can be avoided simply by performing certain integrals in a particular order, while their method does not address the genuine problem of NFCs that arises when the solution of the Poisson equation is attempted within multiple scattering theory. We also point out a flaw in their line of reasoning leading to the expression for the potential inside the bounding sphere of a cell that makes it inapplicable to certain geometries.

  8. Band-structure calculations of noble-gas and alkali halide solids using accurate Kohn-Sham potentials with self-interaction correction

    SciTech Connect

    Li, Y.; Krieger, J.B.; Norman, M.R.; Iafrate, G.J.

    1991-11-15

    The optimized-effective-potential (OEP) method and a method developed recently by Krieger, Li, and Iafrate (KLI) are applied to the band-structure calculations of noble-gas and alkali halide solids employing the self-interaction-corrected (SIC) local-spin-density (LSD) approximation for the exchange-correlation energy functional. The resulting band gaps from both calculations are found to be in fair agreement with the experimental values. The discrepancies are typically within a few percent, with results that are nearly the same as those of previously published orbital-dependent multipotential SIC calculations, whereas the LSD results underestimate the band gaps by as much as 40%. As in the LSD (and, it is believed, even for the exact Kohn-Sham potential), both the OEP and KLI predict valence-band widths which are narrower than those of experiment. In all cases, the KLI method yields essentially the same results as the OEP.

  9. A fast and accurate method for controlling the correct labeling of products containing buffalo meat using High Resolution Melting (HRM) analysis.

    PubMed

    Sakaridis, Ioannis; Ganopoulos, Ioannis; Argiriou, Anagnostis; Tsaftaris, Athanasios

    2013-05-01

    The substitution of high-priced meat with low-cost ones and the fraudulent labeling of meat products make the identification and traceability of meat species and their processed products in the food chain important. A polymerase chain reaction followed by High Resolution Melting (HRM) analysis was developed for species-specific detection of buffalo; it was applied to six commercial meat products. Specific 12S and universal 18S rRNA primer pairs were employed and yielded DNA fragments of 220 bp and 77 bp, respectively. All tested products were found to contain buffalo meat and presented melting curves with at least two visible inflection points derived from the amplicons of the 12S specific and 18S universal primers. The presence of buffalo meat in meat products and the adulteration of buffalo products with unknown species were established down to a level of 0.1%. HRM was proven to be a fast and accurate technique for authentication testing of meat products.

  10. Ole Roemer and the Light-Time Effect

    NASA Astrophysics Data System (ADS)

    Sterken, C.

    2005-07-01

    We discuss the observational background of Roemer's remarkable hypothesis that the velocity of light is finite. The outcome of the joint efforts of a highly skilled instrumentalist and a team of surveyors, driven to produce accurate maps and technically supported by revolutionary advancements in horology, illustrates the synergy between the accuracy of the O and the C terms in the O-C concept which led to one of the most fundamental discoveries of the Renaissance.
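    Roemer's argument can be checked with one line of arithmetic: the eclipse timings of Io drift by roughly the time light takes to cross the diameter of Earth's orbit. A sketch using modern constants (not Roemer's own figures):

```python
# Illustrative back-of-the-envelope for Roemer's light-time argument:
# eclipse timings of Io shift as the Earth-Jupiter distance changes by up
# to the diameter of Earth's orbit (~2 AU). Constants are modern values.

AU = 1.495978707e11     # astronomical unit, m
c = 2.99792458e8        # speed of light, m/s

delay_s = 2 * AU / c    # maximum light-time difference, s
delay_min = delay_s / 60.0

print(f"Maximum Roemer delay: {delay_min:.1f} minutes")  # about 16.6 min
```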

  11. Efficient and Accurate Identification of Platinum-Group Minerals by a Combination of Mineral Liberation and Electron Probe Microanalysis with a New Approach to the Offline Overlap Correction of Platinum-Group Element Concentrations.

    PubMed

    Osbahr, Inga; Krause, Joachim; Bachmann, Kai; Gutzmer, Jens

    2015-10-01

    Identification and accurate characterization of platinum-group minerals (PGMs) are usually very cumbersome due to their small grain size (typically below 10 µm) and inconspicuous appearance under reflected light. A novel strategy for finding PGMs and quantifying their composition was developed. It combines a mineral liberation analyzer (MLA), a point logging system, and electron probe microanalysis (EPMA). As a first step, the PGMs are identified using the MLA. Grains identified as PGMs are then marked, and their coordinates are recorded and transferred to the EPMA. Case studies illustrate that the combination of MLA, point logging, and EPMA results in the identification of a significantly higher number of PGM grains than reflected light microscopy. Analysis of PGMs by EPMA requires considerable effort due to the often significant overlaps between the X-ray spectra of almost all platinum-group and associated elements. X-ray lines suitable for quantitative analysis need to be carefully selected. As peak overlaps cannot be avoided completely, an offline overlap correction based on weight proportions has been developed. Results obtained with the procedure proposed in this study attain acceptable totals and atomic proportions, indicating that the applied corrections are appropriate.
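    The offline overlap correction is described only qualitatively above. As a hedged illustration, a linear correction of this general kind subtracts each interfering element's contribution using overlap factors measured on pure standards, iterating to handle mutual interferences; the element names and factors below are hypothetical, not values from the study:

```python
# Sketch of an offline peak-overlap correction in the spirit described
# above: apparent concentrations are corrected by subtracting each
# interfering element's contribution, using overlap factors measured on
# pure standards. Element names and factors here are placeholders.

# overlap[x][y]: apparent wt% of x produced per wt% of y present
overlap = {"Pt": {"Rh": 0.012}, "Rh": {"Ru": 0.035}, "Ru": {}}

def correct(apparent, overlap, n_iter=10):
    """Iteratively remove mutual overlaps from apparent wt% values."""
    conc = dict(apparent)
    for _ in range(n_iter):
        conc = {
            el: apparent[el] - sum(k * conc[src] for src, k in overlap[el].items())
            for el in apparent
        }
    return conc

apparent = {"Pt": 45.2, "Rh": 3.1, "Ru": 6.0}
print(correct(apparent, overlap))
```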

  12. Physical response of light-time gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Koop, Michael J.; Finn, Lee Samuel

    2014-09-01

    Gravitational wave detectors are typically described as responding to gravitational wave metric perturbations, which are gauge-dependent and—correspondingly—unphysical quantities. This is particularly true for ground-based interferometric detectors, like LIGO, space-based detectors, like LISA and its derivatives, spacecraft Doppler tracking detectors, and pulsar timing array detectors. The description of gravitational waves, and a gravitational wave detector's response, to the unphysical metric perturbation has lead to a proliferation of false analogies and descriptions regarding how these detectors function, and true misunderstandings of the physical character of gravitational waves. Here we provide a fully physical and gauge-invariant description of the response of a wide class of gravitational wave detectors in terms of the Riemann curvature, the physical quantity that describes gravitational phenomena in general relativity. In the limit of high frequency gravitational waves, the Riemann curvature separates into two independent gauge-invariant quantities: a "background" curvature contribution and a "wave" curvature contribution. In this limit the gravitational wave contribution to the detector response reduces to an integral of the gravitational wave contribution of the curvature along the unperturbed photon path between components of the detector. The description presented here provides an unambiguous physical description of what a gravitational wave detector measures and how it operates, a simple means of computing corrections to a detectors response owing to general detector motion, a straightforward way of connecting the results of numerical relativity simulations to gravitational wave detection, and a basis for a general and fully relativistic pulsar timing formula.

  13. Toward accurate thermochemistry of the ²⁴MgH, ²⁵MgH, and ²⁶MgH molecules at elevated temperatures: Corrections due to unbound states

    SciTech Connect

    Szidarovszky, Tamás; Császár, Attila G.

    2015-01-07

    The total partition functions Q(T) and their first two moments Q′(T) and Q″(T), together with the isobaric heat capacities Cp(T), are computed a priori for three major MgH isotopologues on the temperature range of T = 100–3000 K using the recent highly accurate potential energy curve, spin-rotation, and non-adiabatic correction functions of Henderson et al. [J. Phys. Chem. A 117, 13373 (2013)]. Nuclear motion computations are carried out on the ground electronic state to determine the (ro)vibrational energy levels and the scattering phase shifts. The effect of resonance states is found to be significant above about 1000 K and it increases with temperature. Even very short-lived states, due to their relatively large number, have significant contributions to Q(T) at elevated temperatures. The contribution of scattering states is around one fourth of that of resonance states but opposite in sign. Uncertainty estimates are given for the possible error sources, suggesting that all computed thermochemical properties have an accuracy better than 0.005% up to 1200 K. Between 1200 and 2500 K, the uncertainties can rise to around 0.1%, while between 2500 K and 3000 K, a further increase to 0.5% might be observed for Q″(T) and Cp(T), principally due to the neglect of excited electronic states. The accurate thermochemical data determined are presented in the supplementary material for the three isotopologues ²⁴MgH, ²⁵MgH, and ²⁶MgH at 1 K increments. These data, which differ significantly from older standard data, should prove useful for astronomical models incorporating thermodynamic properties of these species.
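    The bound-state part of this machinery is straightforward to sketch. The toy term values below stand in for the actual MgH level lists, and the resonance and scattering contributions discussed above are omitted:

```python
import math

k_B = 0.6950348      # Boltzmann constant in cm^-1 / K

# Toy vibrational term values (cm^-1) standing in for the real MgH level
# list; the paper's levels and degeneracies are not reproduced here.
levels = [0.0, 1430.0, 2800.0, 4100.0, 5330.0]

def Q(T):
    """Internal partition function from a list of term values."""
    return sum(math.exp(-E / (k_B * T)) for E in levels)

def heat_capacity(T, dT=0.1):
    """C/k_B per molecule via numerical differentiation of the mean energy."""
    def mean_E(t):
        q = Q(t)
        return sum(E * math.exp(-E / (k_B * t)) for E in levels) / q
    return (mean_E(T + dT) - mean_E(T - dT)) / (2 * dT) / k_B

for T in (300.0, 1500.0, 3000.0):
    print(f"T={T:6.0f} K  Q={Q(T):.4f}  C_vib/k_B={heat_capacity(T):.4f}")
```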

  14. Use of Ga for mass bias correction for the accurate determination of the copper isotope ratio in the NIST SRM 3114 Cu standard and geological samples by MC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Zhou, L.; Tong, S.

    2015-12-01

    The absolute determination of the Cu isotope ratio in NIST SRM 3114 based on a regression mass bias correction model is performed for the first time with NIST SRM 944 Ga as the calibrant. A value of 0.4471±0.0013 (2SD, n=37) for the 65Cu/63Cu ratio was obtained, with a value of +0.18±0.04‰ (2SD, n=5) for δ65Cu relative to NIST 976. The availability of the NIST SRM 3114 material, now with an absolute value of the 65Cu/63Cu ratio and a δ65Cu value relative to NIST 976, makes it suitable as a new candidate reference material for Cu isotope studies. In addition, a protocol is described for the accurate and precise determination of δ65Cu values of geological reference materials. Purification of Cu from the sample matrix was performed using the AG MP-1M Bio-Rad resin. The column recovery for geological samples was found to be 100±2% (2SD, n=15). A modified method of standard-sample bracketing with internal normalization for mass bias correction was employed by adding natural Ga to both the sample and the solution of NIST SRM 3114, which was used as the bracketing standard. The absolute value of 0.4471±0.0013 (2SD, n=37) for 65Cu/63Cu quantified in this study was used to calibrate the 69Ga/71Ga ratio in the two adjacent bracketing standards of SRM 3114; their average value of 69Ga/71Ga was then used to correct the 65Cu/63Cu ratio in the sample. Measured δ65Cu values of 0.18±0.04‰ (2SD, n=20), 0.13±0.04‰ (2SD, n=9), 0.08±0.03‰ (2SD, n=6), 0.01±0.06‰ (2SD, n=4) and 0.26±0.04‰ (2SD, n=7) were obtained for the five geological reference materials BCR-2, BHVO-2, AGV-2, BIR-1a, and GSP-2, respectively, in agreement with values obtained in previous studies.
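    The exponential-law algebra behind such an internal normalization can be sketched as follows; the measured ratios below are invented for illustration, and only the structure of the correction follows the description above:

```python
import math

# Exponential-law mass bias correction with Ga as internal calibrant,
# a sketch of the scheme described in the abstract. Ratios below are
# invented for illustration; only the algebra follows the exponential law.

m63, m65 = 62.92960, 64.92779   # Cu isotope masses (u)
m69, m71 = 68.92557, 70.92470   # Ga isotope masses (u)

R_Ga_true = 1.50676             # assumed "true" 69Ga/71Ga for the calibrant
r_Ga_meas = 1.52000             # measured 69Ga/71Ga (illustrative)
r_Cu_meas = 0.45100             # measured 65Cu/63Cu (illustrative)

# Fractionation factor f from the Ga pair: r_meas = R_true * (m69/m71)**f
f = math.log(r_Ga_meas / R_Ga_true) / math.log(m69 / m71)

# Apply the same f to the Cu pair: r_meas = R_true * (m65/m63)**f
R_Cu_corr = r_Cu_meas / (m65 / m63) ** f

print(f"f = {f:.3f},  corrected 65Cu/63Cu = {R_Cu_corr:.5f}")
```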

  15. Solving post-Newtonian accurate Kepler equation

    NASA Astrophysics Data System (ADS)

    Boetzel, Yannick; Susobhanan, Abhimanyu; Gopakumar, Achamveedu; Klein, Antoine; Jetzer, Philippe

    2017-08-01

    We provide an elegant way of solving analytically the third post-Newtonian (3PN) accurate Kepler equation, associated with the 3PN-accurate generalized quasi-Keplerian parametrization for compact binaries in eccentric orbits. An additional analytic solution is presented to check the correctness of our compact solution and we perform comparisons between our PN-accurate analytic solution and a very accurate numerical solution of the PN-accurate Kepler equation. We adapt our approach to compute crucial 3PN-accurate inputs that will be required to compute analytically both the time and frequency domain ready-to-use amplitude-corrected PN-accurate search templates for compact binaries in inspiralling eccentric orbits.
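    The 3PN-accurate solution itself is beyond a snippet, but the Newtonian limit of the equation being solved, M = E - e sin E, and the standard Newton iteration used as a numerical baseline look like this:

```python
import math

def solve_kepler(M, e, tol=1e-14, max_iter=50):
    """Solve the classical Kepler equation M = E - e*sin(E) for the
    eccentric anomaly E by Newton iteration (the Newtonian limit of the
    PN-accurate equation discussed above)."""
    E = M if e < 0.8 else math.pi   # standard starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

M, e = 1.0, 0.3
E = solve_kepler(M, e)
print(E, E - e * math.sin(E) - M)   # residual should be ~0
```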

  16. Corrective Primary Impression Technique

    PubMed Central

    Fernandes, Aquaviva; Dua, Neha; Herekar, Manisha

    2010-01-01

    The article describes a simple, quick and corrective technique for making the preliminary impression. It records the extensions better than impressions made using impression compound alone. The technique is accurate and gives a properly extended custom tray. Any deficiencies seen in the compound primary impression are corrected using this technique; hence it is called a “corrective primary impression technique”. PMID:20502648

  17. High-Precision Tungsten Isotopic Analysis by Multicollection Negative Thermal Ionization Mass Spectrometry Based on Simultaneous Measurement of W and (18)O/(16)O Isotope Ratios for Accurate Fractionation Correction.

    PubMed

    Trinquier, Anne; Touboul, Mathieu; Walker, Richard J

    2016-02-02

    Determination of the (182)W/(184)W ratio to a precision of ± 5 ppm (2σ) is desirable for constraining the timing of core formation and other early planetary differentiation processes. However, WO3(-) analysis by negative thermal ionization mass spectrometry normally results in a residual correlation between the instrumental-mass-fractionation-corrected (182)W/(184)W and (183)W/(184)W ratios that is attributed to mass-dependent variability of O isotopes over the course of an analysis and between different analyses. A second-order correction using the (183)W/(184)W ratio relies on the assumption that this ratio is constant in nature. This may prove invalid, as has already been realized for other isotope systems. The present study utilizes simultaneous monitoring of the (18)O/(16)O and W isotope ratios to correct oxide interferences on a per-integration basis and thus avoid the need for a double normalization of W isotopes. After normalization of W isotope ratios to a pair of W isotopes, following the exponential law, no residual W-O isotope correlation is observed. However, there is a nonideal mass bias residual correlation between (182)W/(i)W and (183)W/(i)W with time. Without double normalization of W isotopes and on the basis of three or four duplicate analyses, the external reproducibility per session of (182)W/(184)W and (183)W/(184)W normalized to (186)W/(183)W is 5-6 ppm (2σ, 1-3 μg loads). The combined uncertainty per session is less than 4 ppm for (183)W/(184)W and less than 6 ppm for (182)W/(184)W (2σm) for loads between 3000 and 50 ng.

  18. Political Correctness--Correct?

    ERIC Educational Resources Information Center

    Boase, Paul H.

    1993-01-01

    Examines the phenomenon of political correctness, its roots and objectives, and its successes and failures in coping with the conflicts and clashes of multicultural campuses. Argues that speech codes indicate failure in academia's primary mission to civilize and educate through talk, discussion, thought, and persuasion. (SR)

  19. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    NASA Astrophysics Data System (ADS)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2015-08-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current OMI tropospheric NO2 retrieval chain. Instead, the operational OMI O2-O2 cloud retrieval algorithm is applied both to cloudy scenes and to cloud-free scenes with aerosols present. This paper describes in detail the complex interplay between the spectral effects of aerosols, the OMI O2-O2 cloud retrieval algorithm, and the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) over cloud-free scenes. Collocated OMI NO2 and MODIS Aqua aerosol products are analysed over East China, an industrialized area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approaches demonstrate that the retrieved cloud fraction increases linearly with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols, which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the absorbing effects of aerosols. The study cases show that the actual aerosol correction based on the implemented OMI cloud model results in biases between -20 and -40 % for the DOMINO tropospheric NO2 product in cases of high aerosol pollution (AOT ≥ 0.6) and elevated particles. On the contrary, when aerosols are relatively close to the surface or mixed with NO2, aerosol correction based on the cloud model results in
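    The role of the AMF can be made concrete with a hypothetical profile-weighted sketch; the layer sensitivities and NO2 profile below are made up, and the operational DOMINO computation involves a full radiative transfer model:

```python
# The air mass factor (AMF) converts the retrieved slant column into a
# vertical column. A common formulation weights scattering sensitivities
# ("box-AMFs") by the a priori NO2 profile; numbers are illustrative only.

box_amf = [0.4, 0.7, 1.0, 1.3, 1.6]        # sensitivity per layer (surface up)
partial_col = [3.0, 2.0, 1.0, 0.5, 0.2]    # a priori NO2 partial columns

amf = sum(b * x for b, x in zip(box_amf, partial_col)) / sum(partial_col)
print(f"tropospheric AMF = {amf:.3f}")

# Aerosols that brighten the scene raise the retrieved cloud fraction and
# hence the effective sensitivities, which is the coupling examined above.
slant_column = 8.0e15              # molec/cm^2, illustrative
vertical_column = slant_column / amf
print(f"vertical column = {vertical_column:.2e} molec/cm2")
```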

  20. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB
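    For orientation, the leading-order size dependence for a point net charge in a periodic cubic box in a homogeneous dielectric is the Wigner self-energy term. This is only one ingredient of the correction schemes described above, which additionally require Poisson-Boltzmann calculations; the box edges below are taken from the abstract, while ε_s = 78.4 for water is an assumption:

```python
import math

# Leading-order finite-size term for charging a point charge q in a
# periodic cubic box of edge L with a neutralizing background, in a
# solvent of dielectric constant eps_s (the Wigner self-energy term;
# the full correction scheme of the article also needs PB calculations).

XI_EW = -2.837297                 # cubic-lattice Wigner integration constant
E_CHARGE = 1.602176634e-19        # elementary charge, C
EPS0 = 8.8541878128e-12           # vacuum permittivity, F/m
N_A = 6.02214076e23               # Avogadro constant, 1/mol

def wigner_self_energy_kj_per_mol(q_e, L_nm, eps_s):
    """Periodicity-induced self-energy (kJ/mol) of a net charge q_e (in e)
    in a cubic box of edge L_nm (nm); a point-charge estimate only."""
    q = q_e * E_CHARGE
    L = L_nm * 1e-9
    E_joule = XI_EW * q * q / (8 * math.pi * EPS0 * eps_s * L)
    return E_joule * N_A / 1000.0

for L in (7.42, 9.0, 11.02):      # box edges spanning the study's range
    print(L, wigner_self_energy_kj_per_mol(1.0, L, 78.4))
```

The magnitude shrinks with box size, consistent with the box-size dependence discussed above; the much larger raw effects quoted in the abstract arise when the low-dielectric protein cavity is present.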

  1. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB

  2. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    PubMed

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB

  3. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    SciTech Connect

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non

  4. Light-time effect in two eclipsing binaries: NO Vul and EW Lyr

    NASA Astrophysics Data System (ADS)

    Bulut, A.; Bulut, I.; Çiçek, C.; Erdem, A.

    2017-02-01

    In this study, orbital period variations of two eclipsing binary systems (NO Vul and EW Lyr) were discussed. Possible light time effects due to third bodies in these systems were re-examined. The mass function and orbital period of hypothetical third bodies were calculated to be 0.000627 ± 0.000003 M⊙, 26.17 ± 0.05 years and 0.12682 ± 0.00003 M⊙, 77.23 ± 0.72 years for NO Vul and EW Lyr, respectively.
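    The quoted mass functions translate directly into projected orbital sizes and expected O-C semi-amplitudes via f(m3) = (a12 sin i)³/P² in solar units; a quick check using the numbers above:

```python
# From the third-body mass function and period quoted above one can
# recover the projected semi-major axis of the binary's orbit around the
# common centre of mass, and hence the expected O-C semi-amplitude:
# f(m3) = (a12 sin i)**3 / P**2 in solar units (AU, yr, M_sun).

LIGHT_TIME_AU_S = 499.004784     # light travel time across 1 AU, s

def lte_amplitude(f_msun, P_yr):
    """Projected semi-major axis (AU) and O-C semi-amplitude (s)."""
    a_sini = (f_msun * P_yr**2) ** (1.0 / 3.0)
    return a_sini, a_sini * LIGHT_TIME_AU_S

for name, f, P in (("NO Vul", 0.000627, 26.17), ("EW Lyr", 0.12682, 77.23)):
    a, A = lte_amplitude(f, P)
    print(f"{name}: a12 sin i = {a:.3f} AU, O-C semi-amplitude = {A:.0f} s")
```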

  5. Accurate spectral color measurements

    NASA Astrophysics Data System (ADS)

    Hiltunen, Jouni; Jaeaeskelaeinen, Timo; Parkkinen, Jussi P. S.

    1999-08-01

    Surface color measurement is of importance in a very wide range of industrial applications including paint, paper, printing, photography, textiles, plastics and so on. For demanding color measurements, a spectral approach is often needed. One can measure a color spectrum with a spectrophotometer using calibrated standard samples as a reference. Because it is impossible to define absolute color values of a sample, we always work with approximations. The human eye can perceive a color difference as small as 0.5 CIELAB units and thus distinguish millions of colors. This 0.5 unit difference should be a goal for precise color measurements. This limit is not a problem if we only want to measure the color difference of two samples, but if we also want exact color coordinate values, accuracy problems arise. The values reported by two instruments can be astonishingly different. The accuracy of the instrument used in color measurement may depend on various errors such as photometric non-linearity, wavelength error, integrating sphere dark level error, and integrating sphere error in both specular included and specular excluded modes. Thus correction formulas should be used to get more accurate results. Another question is how many channels, i.e. wavelengths, are used to measure a spectrum. It is obvious that the sampling interval should be short to get more precise results. Furthermore, the result we get is always a compromise of measuring time, conditions, and cost. Sometimes we have to use a portable system, or the shape and size of the samples make it impossible to use sensitive equipment. In this study a small set of calibrated color tiles measured with the Perkin Elmer Lambda 18 and the Minolta CM-2002 spectrophotometers are compared. In the paper we explain the typical error sources of spectral color measurements, and show what accuracy demands a good colorimeter should meet.
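    The 0.5-unit criterion mentioned above is a Euclidean distance in CIELAB space (the CIE76 ΔE*ab); a minimal check between two hypothetical instrument readings:

```python
import math

def delta_e_ab(lab1, lab2):
    """CIE76 colour difference between two CIELAB triples (L*, a*, b*)."""
    return math.dist(lab1, lab2)   # Euclidean distance in Lab space

# Two hypothetical tile measurements from different instruments:
tile_a = (52.10, 12.30, -4.80)
tile_b = (52.40, 12.05, -4.95)

dE = delta_e_ab(tile_a, tile_b)
print(f"dE*ab = {dE:.2f}")   # compare against the ~0.5 perceptibility limit
print("perceptible" if dE > 0.5 else "below the 0.5-unit limit")
```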

  6. Impact of aerosols on the OMI tropospheric NO2 retrievals over industrialized regions: how accurate is the aerosol correction of cloud-free scenes via a simple cloud model?

    NASA Astrophysics Data System (ADS)

    Chimot, J.; Vlemmix, T.; Veefkind, J. P.; de Haan, J. F.; Levelt, P. F.

    2016-02-01

    The Ozone Monitoring Instrument (OMI) has provided daily global measurements of tropospheric NO2 for more than a decade. Numerous studies have drawn attention to the complexities related to measurements of tropospheric NO2 in the presence of aerosols. Fine particles affect the OMI spectral measurements and the length of the average light path followed by the photons. However, they are not explicitly taken into account in the current operational OMI tropospheric NO2 retrieval chain (DOMINO - Derivation of OMI tropospheric NO2) product. Instead, the operational OMI O2 - O2 cloud retrieval algorithm is applied both to cloudy and to cloud-free scenes (i.e. clear sky) dominated by the presence of aerosols. This paper describes in detail the complex interplay between the spectral effects of aerosols in the satellite observation and the associated response of the OMI O2 - O2 cloud retrieval algorithm. Then, it evaluates the impact on the accuracy of the tropospheric NO2 retrievals through the computed Air Mass Factor (AMF) with a focus on cloud-free scenes. For that purpose, collocated OMI NO2 and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua aerosol products are analysed over the strongly industrialized East China area. In addition, aerosol effects on the tropospheric NO2 AMF and the retrieval of OMI cloud parameters are simulated. Both the observation-based and the simulation-based approach demonstrate that the retrieved cloud fraction increases with increasing Aerosol Optical Thickness (AOT), but the magnitude of this increase depends on the aerosol properties and surface albedo. This increase is induced by the additional scattering effects of aerosols which enhance the scene brightness. The decreasing effective cloud pressure with increasing AOT primarily represents the shielding effects of the O2 - O2 column located below the aerosol layers. The study cases show that the aerosol correction based on the implemented OMI cloud model results in biases

  7. New analysis of the light time effect in TU Ursae Majoris

    NASA Astrophysics Data System (ADS)

    Liška, J.; Skarka, M.; Mikulášek, Z.; Zejda, M.; Chrastina, M.

    2016-05-01

    Context. Recent statistical studies prove that the percentage of RR Lyrae pulsators that are located in binaries or multiple stellar systems is considerably lower than might be expected. This can be better understood from an in-depth analysis of individual candidates. We investigate in detail the light time effect of the most probable binary candidate TU UMa. This is complicated because the pulsation period shows secular variation. Aims: We model the possible light time effect of TU UMa using a new code applied to previously available and newly determined maxima timings to confirm binarity and refine parameters of the orbit of the RRab component in the binary system. The binary hypothesis is also tested using radial velocity measurements. Methods: We used a new approach to determine brightness maxima timings based on template fitting. This can also be used on sparse or scattered data. This approach was successfully applied to measurements from different sources. To determine the orbital parameters of the double star TU UMa, we developed a new code to analyse the light time effect that also includes the secular variation in the pulsation period. Its usability was successfully tested on CL Aur, an eclipsing binary with mass-transfer in a triple system that shows similar changes in the O-C diagram. Since orbital motion would cause systematic shifts in mean radial velocities (dominated by pulsations), we computed and compared our model with centre-of-mass velocities. They were determined using high-quality templates of radial velocity curves of RRab stars. Results: Maxima timings adopted from the GEOS database (168) together with those newly determined from sky surveys and new measurements (85) were used to construct an O-C diagram spanning almost five proposed orbital cycles. This data set is three times larger than data sets used by previous authors. 
Modelling of the O-C dependence resulted in a 23.3-yr orbital period, which translates into a minimum mass of the second component of
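
    A minimal numerical sketch of such an O-C model may help. The Python function below combines a quadratic term for secular period change with a light-time delay term; the parameter names are hypothetical, and a circular orbit is assumed, whereas the published fit also handles eccentricity.

```python
import math

C = 299792458.0          # speed of light [m/s]
AU = 1.495978707e11      # astronomical unit [m]

def o_minus_c(t, t0, p_puls, beta, a_sini_au, p_orb, t_peri):
    """O-C value [days] at time t [days]: a quadratic term for secular
    pulsation-period change plus a circular-orbit light-time term.
    All parameter names here are hypothetical illustrations."""
    e = round((t - t0) / p_puls)                  # pulsation cycle count
    quad = 0.5 * beta * e ** 2                    # secular period change
    lite = (a_sini_au * AU / C / 86400.0) * math.sin(
        2.0 * math.pi * (t - t_peri) / p_orb)     # light-time delay
    return quad + lite
```

    For a projected semi-major axis of 1 AU the light-time amplitude is AU/c, roughly 499 s, which sets the scale of the periodic wiggle such a fit looks for in the O-C diagram.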

  8. Nighttime lights time series of tsunami damage, recovery, and economic metrics in Sumatra, Indonesia.

    PubMed

    Gillespie, Thomas W; Frankenberg, Elizabeth; Chum, Kai Fung; Thomas, Duncan

    2014-01-01

    On 26 December 2004, a magnitude 9.2 earthquake off the west coast of northern Sumatra, Indonesia, resulted in the deaths of 160,000 Indonesians. We examine the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light imagery brightness values for 307 communities in the Study of the Tsunami Aftermath and Recovery (STAR), a household survey in Sumatra from 2004 to 2008. We examined relationships in the nighttime light time series between annual brightness and the extent of damage and economic metrics collected from STAR households and aggregated to the community level. There were significant changes in brightness values from 2004 to 2008, with a significant drop in 2005 due to the tsunami and a return to pre-tsunami nighttime light values in 2006 for all damage zones. There were significant relationships between nighttime imagery brightness and per capita expenditures, and spending on energy and on food. Results suggest that Defense Meteorological Satellite Program nighttime light imagery can be used to capture the impacts of and recovery from the tsunami and other natural disasters and to estimate time series economic metrics at the community level in developing countries.

  9. A correction to a highly accurate Voigt function algorithm

    NASA Technical Reports Server (NTRS)

    Shippony, Z.; Read, W. G.

    2002-01-01

    An algorithm for rapidly computing the complex Voigt function was published by Shippony and Read. Its claimed accuracy was 1 part in 10^8. It was brought to our attention by Wells that the Shippony and Read algorithm was not meeting its claimed accuracy for extremely small but non-zero y values. Although this is true, the fix to the code is so trivial that this brief note suffices for those who use the algorithm.
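
    For context, the Voigt function K(x, y) is the convolution of a Gaussian with a Lorentzian of half-width y. The sketch below (plain quadrature, not the Shippony-Read algorithm) shows why very small y is the delicate regime: the Lorentzian kernel becomes needle-sharp and naive integration breaks down, so the y = 0 Gaussian limit is handled exactly.

```python
import math

def voigt_k(x, y, n=4000, t_max=8.0):
    """Voigt function K(x, y) = (y/pi) * integral of exp(-t^2) /
    ((x - t)^2 + y^2) dt, by trapezoidal quadrature.  Illustrative
    only; for extremely small non-zero y this naive quadrature fails,
    which is the regime the correction note addresses."""
    if y == 0.0:
        return math.exp(-x * x)       # exact Gaussian limit of K(x, y)
    h = 2.0 * t_max / n
    s = 0.0
    for i in range(n + 1):
        t = -t_max + i * h
        w = 0.5 if i in (0, n) else 1.0   # trapezoid end-point weights
        s += w * math.exp(-t * t) / ((x - t) ** 2 + y * y)
    return y / math.pi * s * h
```

    A handy check is K(0, y) = exp(y^2) erfc(y), which follows from the Faddeeva-function identity K(x, y) = Re w(x + iy).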

  10. THE NEAR-CONTACT BINARY RZ DRACONIS WITH TWO POSSIBLE LIGHT-TIME ORBITS

    SciTech Connect

    Yang, Y.-G.; Dai, H.-F.; Li, H.-L.; Zhang, L.-Y.

    2010-12-15

    We present new multicolor photometry for RZ Draconis, observed in 2009 at the Xinglong Station of the National Astronomical Observatories of China. By using the updated version of the Wilson-Devinney code, the photometric-spectroscopic elements were deduced from the new photometric observations and published radial velocity data. The mass ratio and orbital inclination are q = 0.375 (±0.002) and i = 84.60° (±0.13°), respectively. The fill-out factor of the primary is f = 98.3%, implying that RZ Dra is an Algol-like near-contact binary. Based on 683 light minimum times from 1907 to 2009, the orbital period change was investigated in detail. From the O - C curve, it is found that two quasi-sinusoidal variations may exist (i.e., P3 = 75.62 (±2.20) yr and P4 = 27.59 (±0.10) yr), which likely result from light-time effects via the presence of two additional bodies. In orbits coplanar with the binary system, the third and fourth bodies may be low-mass dwarfs (i.e., M3 = 0.175 M⊙ and M4 = 0.074 M⊙). If this is true, RZ Dra may be a quadruple star. The additional bodies could extract angular momentum from the binary system, which may cause the orbit to shrink. With the orbit shrinking, the primary may fill its Roche lobe and RZ Dra will evolve into a contact configuration.

  11. Accurate monotone cubic interpolation

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1991-01-01

    Monotone piecewise cubic interpolants are simple and effective. They are generally third-order accurate, except near strict local extrema, where accuracy degenerates to second order due to the monotonicity constraint. Algorithms for piecewise cubic interpolants that preserve monotonicity as well as uniform third- and fourth-order accuracy are presented. The gain in accuracy is obtained by relaxing the monotonicity constraint in a geometric framework in which the median function plays a crucial role.
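
    As a baseline for comparison, a minimal sketch of the classical monotone cubic Hermite interpolant is given below (harmonic-mean slope limiter, hence second-order near extrema); the higher-order median-based relaxation described in the record is not reproduced here.

```python
def monotone_cubic(xs, ys):
    """Monotonicity-preserving piecewise-cubic Hermite interpolant with
    the classical Fritsch-Carlson-style harmonic-mean slope limiter."""
    n = len(xs)
    d = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n - 1)]
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i - 1] * d[i] > 0:                       # secants share a sign
            m[i] = 2.0 * d[i - 1] * d[i] / (d[i - 1] + d[i])
        # opposite signs: slope 0 at the local extremum preserves monotonicity

    def f(x):
        i = n - 2
        for j in range(n - 1):                        # locate the interval
            if x <= xs[j + 1]:
                i = j
                break
        h = xs[i + 1] - xs[i]
        t = (x - xs[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2              # cubic Hermite basis
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return h00 * ys[i] + h * h10 * m[i] + h01 * ys[i + 1] + h * h11 * m[i + 1]

    return f
```

    On the data (0, 0), (1, 1), (2, 1), (3, 2) this interpolant keeps the middle segment exactly flat and never overshoots, which is precisely the behavior the monotonicity constraint buys at the cost of local accuracy.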

  12. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high-order and high-resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  13. Corrective work.

    ERIC Educational Resources Information Center

    Hill, Leslie A.

    1978-01-01

    Discusses some general principles for planning corrective instruction and exercises in English as a second language, and follows with examples from the areas of phonemics, phonology, lexicon, idioms, morphology, and syntax. (IFS/WGA)

  14. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  15. Accurate ab Initio Spin Densities.

    PubMed

    Boguslawski, Katharina; Marti, Konrad H; Legeza, Ors; Reiher, Markus

    2012-06-12

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as a basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys.2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CASCI-type wave function provides insight into chemically interesting features of the molecule under study such as the distribution of α and β electrons in terms of Slater determinants, CI coefficients, and natural orbitals. The methodology is applied to an iron nitrosyl complex which we have identified as a challenging system for standard approaches [J. Chem. Theory Comput.2011, 7, 2740].

  17. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    EPA Pesticide Factsheets

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter 24%, or present as Pb sulfate 18%. Ad

  19. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
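
    The core measurement step can be sketched in miniature. The snippet below (Python, not the MATLAB flight code) recovers a whole-pixel shift between two 1-D frames by brute-force circular cross-correlation; the actual function works in 2-D, uses an FFT, and fits the cross-spectrum phase to a plane to reach sub-pixel precision.

```python
def estimate_shift(ref, frame):
    """Integer shift of `frame` relative to `ref` (1-D, circular),
    estimated as the lag that maximizes the cross-correlation.
    Whole-pixel precision only; illustrative sketch."""
    n = len(ref)
    best_s, best_c = 0, float("-inf")
    for s in range(n):
        c = sum(ref[i] * frame[(i + s) % n] for i in range(n))
        if c > best_c:
            best_s, best_c = s, c
    return best_s
```

    Rolling a test signal by a known amount and feeding both copies to the function recovers that shift, which is the same consistency check one would run on the 2-D version.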

  20. A CORRECTION.

    PubMed

    Johnson, D

    1940-03-22

    In a recently published volume on "The Origin of Submarine Canyons" the writer inadvertently credited to A. C. Veatch an excerpt from a submarine chart actually contoured by P. A. Smith, of the U. S. Coast and Geodetic Survey. The chart in question is Chart IVB of Special Paper No. 7 of the Geological Society of America, entitled "Atlantic Submarine Valleys of the United States and the Congo Submarine Valley," by A. C. Veatch and P. A. Smith, and the excerpt appears as Plate III of the volume first cited above. In view of the heavy labor involved in contouring the charts accompanying the paper by Veatch and Smith and the beauty of the finished product, it would be unfair to Mr. Smith to permit the error to go uncorrected. Excerpts from two other charts are correctly ascribed to Dr. Veatch.

  1. 77 FR 72199 - Technical Corrections; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-05

    ... COMMISSION 10 CFR Part 171 RIN 3150-AJ16 Technical Corrections; Correction AGENCY: Nuclear Regulatory... corrections, including updating the street address for the Region I office, correcting authority citations and... rule. DATES: The correction is effective on December 5, 2012. FOR FURTHER INFORMATION CONTACT:...

  2. Unsupervised exposure correction for video

    NASA Astrophysics Data System (ADS)

    Petrova, X.; Sedunov, S.; Ignatov, A.

    2009-02-01

    The paper describes an "off-the-shelf" algorithmic solution for unsupervised exposure correction for video. An important feature of the algorithm is accurate processing not only of natural video sequences, but also of edited, rendered, or combined content, including content with letter-boxes or pillar-boxes captured from TV broadcasts. The algorithm allows the degree of exposure correction to be changed smoothly within continuous video scenes and promptly on cuts. The solution includes scene change detection, letter-box detection, pillar-box detection, exposure correction adaptation, exposure correction, and color correction. Exposure correction adaptation is based on histogram analysis and soft-logic inference. Decision rules are based on the relative number of entries in the low tones, mid tones, and highlights; the maximum entries in the low tones and mid tones; the number of non-empty histogram entries; and the width of the middle range of the histogram. All decision rules have physical meaning, which allows parameters to be tuned easily for display devices of different classes. Exposure correction consists of computing a local average using edge-preserving filtering, applying local tone mapping, and post-processing. At the final stage, color correction aiming to reduce color distortions is applied.
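
    A toy version of such a histogram decision rule is sketched below; the tone-share thresholds (0.5) and gamma values (0.75 / 1.25) are hypothetical placeholders, not the tuned parameters of the paper.

```python
def exposure_gamma(pixels, bins=256):
    """Pick an exposure-correction gamma from the luma histogram of a
    frame.  Thresholds and gammas here are hypothetical illustrations
    of 'relative number of entries in the low tones / highlights'
    decision rules, not production values."""
    hist = [0] * bins
    for p in pixels:
        hist[min(bins - 1, max(0, int(p)))] += 1
    total = float(len(pixels))
    low = sum(hist[: bins // 4]) / total        # share of low tones
    high = sum(hist[3 * bins // 4 :]) / total   # share of highlights
    if low > 0.5:
        return 0.75   # mostly dark pixels: brighten the frame
    if high > 0.5:
        return 1.25   # mostly bright pixels: darken the frame
    return 1.0        # well-exposed frame: leave untouched
```

    In a real pipeline the chosen gamma would then be smoothed across frames within a scene and reset on detected cuts, matching the paper's smooth/prompt adaptation behavior.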

  3. A density dependent dispersion correction.

    PubMed

    Steinmann, Stephan N; Corminboeuf, Clémence

    2011-01-01

    Density functional approximations fail to provide an accurate treatment of weak interactions. More recent, but not readily available functionals can lead to significant improvements. A simple alternative to correct for the missing weak interactions is to add, a posteriori, an atom pair-wise dispersion correction. We here present a density dependent dispersion correction, dDXDM, which dramatically improves the performance of popular functionals (e.g., PBE-dDXDM or B3LYP-dDXDM) for a set of 145 systems featuring both inter- and intramolecular interactions. Whereas the highly parameterized M06-2X functional, the long-range corrected LC-BLYP and the fully non-local van der Waals density functional rPW86-W09 also lead to improved results as compared to standard DFT methods, the enhanced performance of dDXDM remains the most impressive.

  4. Position Error Covariance Matrix Validation and Correction

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  5. Accurate measurement of unsteady state fluid temperature

    NASA Astrophysics Data System (ADS)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining transient fluid temperature are presented. Measurements were conducted for boiling water, since its temperature is known. At the beginning, the thermometers are at ambient temperature; they are then immediately immersed in saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter of 15 mm. One of them is a K-type industrial thermometer that is widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new thermometer design was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with a sheathed thermocouple located at its center. The temperature of the fluid was determined from measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results than measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model of the thermometer. A comparison of the results demonstrated that the new thermometer yields the fluid temperature much faster and with higher accuracy than the industrial thermometer. Accurate measurement of rapidly changing fluid temperature is possible due to the low-inertia thermometer and the fast space marching method applied to solve the inverse heat conduction problem.

  6. 78 FR 75449 - Miscellaneous Corrections; Corrections

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ..., 50, 52, and 70 RIN 3150-AJ23 Miscellaneous Corrections; Corrections AGENCY: Nuclear Regulatory... final rule in the Federal Register on June 7, 2013, to make miscellaneous corrections to its regulations... miscellaneous corrections to its regulations in chapter I of Title 10 of the Code of Federal Regulations (10...

  7. Accurate Evaluation of Quantum Integrals

    NASA Technical Reports Server (NTRS)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values, rather than the wavefunctions, to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson extrapolation, and show that expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
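
    The extrapolation step itself is compact. The sketch below applies one Richardson step to a second-order central-difference estimate of a second derivative; this is a generic illustration of the technique, not the paper's Schrödinger solver.

```python
import math

def richardson(A, h, p=2):
    """One Richardson extrapolation step for a method A(h) whose leading
    error term is O(h^p): combining A(h) and A(h/2) cancels that term."""
    return (2 ** p * A(h / 2.0) - A(h)) / (2 ** p - 1)

def d2_sin(h, x=0.7):
    """Second-order central difference for the second derivative of sin
    at a fixed (hypothetical) sample point x."""
    return (math.sin(x + h) - 2.0 * math.sin(x) + math.sin(x - h)) / h ** 2
```

    Since the exact value is -sin(x), one can verify directly that the extrapolated estimate is several orders of magnitude closer than the raw finite difference at the same coarse step.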

  8. Device accurately measures and records low gas-flow rates

    NASA Technical Reports Server (NTRS)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  9. Accurate metacognition for visual sensory memory representations.

    PubMed

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  10. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

    In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
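
    A toy calculation with entirely hypothetical numbers illustrates how the two allocation methods diverge:

```python
# Hypothetical figures: compare aggregate (ratio-of-cost-to-treatment)
# costing with a simple activity-based allocation for two treatments.
total_overhead = 9000.0
treatments = {"A": {"nurse_hours": 1.0, "supplies": 40.0},
              "B": {"nurse_hours": 5.0, "supplies": 160.0}}

# Aggregate costing: split overhead evenly across treatments.
aggregate = {t: total_overhead / len(treatments) + v["supplies"]
             for t, v in treatments.items()}

# ABC: allocate overhead by the nursing time each treatment consumes.
total_hours = sum(v["nurse_hours"] for v in treatments.values())
abc = {t: total_overhead * v["nurse_hours"] / total_hours + v["supplies"]
       for t, v in treatments.items()}
```

    Aggregate costing prices the two treatments almost identically, while the activity-based allocation shows that treatment B, which consumes five times the nursing hours, carries nearly five times the cost, exactly the distortion a capitation bid built on simple averages would hide.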

  11. Universality of quantum gravity corrections.

    PubMed

    Das, Saurya; Vagenas, Elias C

    2008-11-28

    We show that the existence of a minimum measurable length and the related generalized uncertainty principle (GUP), predicted by theories of quantum gravity, influence all quantum Hamiltonians. Thus, they predict quantum gravity corrections to various quantum phenomena. We compute such corrections to the Lamb shift, the Landau levels, and the tunneling current in a scanning tunneling microscope. We show that these corrections can be interpreted in two ways: (a) either that they are exceedingly small, beyond the reach of current experiments, or (b) that they predict upper bounds on the quantum gravity parameter in the GUP, compatible with experiments at the electroweak scale. Thus, more accurate measurements in the future should either be able to test these predictions, or further tighten the above bounds and predict an intermediate length scale between the electroweak and the Planck scale.

  12. Accurate Theoretical Thermochemistry for Fluoroethyl Radicals.

    PubMed

    Ganyecz, Ádám; Kállay, Mihály; Csontos, József

    2017-02-09

    An accurate coupled-cluster (CC) based model chemistry was applied to calculate reliable thermochemical quantities for hydrofluorocarbon derivatives including radicals 1-fluoroethyl (CH3-CHF), 1,1-difluoroethyl (CH3-CF2), 2-fluoroethyl (CH2F-CH2), 1,2-difluoroethyl (CH2F-CHF), 2,2-difluoroethyl (CHF2-CH2), 2,2,2-trifluoroethyl (CF3-CH2), 1,2,2,2-tetrafluoroethyl (CF3-CHF), and pentafluoroethyl (CF3-CF2). The model chemistry used contains iterative triple and perturbative quadruple excitations in CC theory, as well as scalar relativistic and diagonal Born-Oppenheimer corrections. To obtain heat of formation values with better than chemical accuracy perturbative quadruple excitations and scalar relativistic corrections were inevitable. Their contributions to the heats of formation steadily increase with the number of fluorine atoms in the radical reaching 10 kJ/mol for CF3-CF2. When discrepancies were found between the experimental and our values it was always possible to resolve the issue by recalculating the experimental result with currently recommended auxiliary data. For each radical studied here this study delivers the best heat of formation as well as entropy data.

  13. Improved VCF normalization for accurate VCF comparison.

    PubMed

    Bayat, Arash; Gaëta, Bruno; Ignjatovic, Aleksandar; Parameswaran, Sri

    2017-04-01

    The Variant Call Format (VCF) is widely used to store data about genetic variation. Variant calling workflows detect potential variants in large numbers of short sequence reads generated by DNA sequencing and report them in VCF format. To evaluate the accuracy of variant callers, it is critical to correctly compare their output against a reference VCF file containing a gold standard set of variants. However, comparing VCF files is a complicated task, as an individual genomic variant can be represented in several different ways and is therefore not necessarily reported in a unique way by different software. We introduce a VCF normalization method called Best Alignment Normalisation (BAN) that results in more accurate VCF file comparison. BAN applies all the variations in a VCF file to the reference genome to create a sample genome, and then recalls the variants by aligning this sample genome back with the reference genome. Since the purpose of BAN is to get an accurate result at the time of VCF comparison, we define a better normalization method as the one resulting in less disagreement between the outputs of different VCF comparators. The BAN Linux bash script, along with the required software, is publicly available on https://sites.google.com/site/banadf16. A.Bayat@unsw.edu.au. Supplementary data are available at Bioinformatics online.
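
    For a flavor of why normalization matters, the sketch below implements only the simplest standard step, trimming shared REF/ALT bases so that one variant has one spelling. BAN itself goes much further, reconstructing a sample genome and re-aligning it, which this sketch does not attempt.

```python
def trim_variant(pos, ref, alt):
    """Minimal REF/ALT trimming (shared suffix, then shared prefix),
    a standard VCF normalization step.  Not the BAN method from the
    record; illustrative only."""
    # Drop the common suffix first.
    while len(ref) > 1 and len(alt) > 1 and ref[-1] == alt[-1]:
        ref, alt = ref[:-1], alt[:-1]
    # Then drop the common prefix, advancing the position.
    while len(ref) > 1 and len(alt) > 1 and ref[0] == alt[0]:
        ref, alt, pos = ref[1:], alt[1:], pos + 1
    return pos, ref, alt
```

    For example, a call reported as REF=GAC, ALT=GAT at position 100 trims to the single-base substitution C>T at position 102, so two callers spelling the same event differently can be matched after trimming.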

  14. Accurate shear measurement with faint sources

    SciTech Connect

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao E-mail: walt@shao.ac.cn

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  15. Accurate upwind methods for the Euler equations

    NASA Technical Reports Server (NTRS)

    Huynh, Hung T.

    1993-01-01

    A new class of piecewise linear methods for the numerical solution of the one-dimensional Euler equations of gas dynamics is presented. These methods are uniformly second-order accurate and can be considered as extensions of Godunov's scheme. With an appropriate definition of monotonicity preservation for the case of linear convection, it can be shown that they preserve monotonicity. Similar to Van Leer's MUSCL scheme, they consist of two key steps: a reconstruction step followed by an upwind step. For the reconstruction step, a monotonicity constraint that preserves uniform second-order accuracy is introduced. Computational efficiency is enhanced by devising a criterion that detects the 'smooth' part of the data, where the constraint is redundant. The concept and coding of the constraint are simplified by the use of the median function. A slope-steepening technique, which has no effect in smooth regions and can resolve a contact discontinuity in four cells, is described. As for the upwind step, existing and new methods are applied in a manner slightly different from those in the literature. These methods are derived by approximating the Euler equations via linearization and diagonalization. At a 'smooth' interface, Harten, Lax, and Van Leer's one-intermediate-state model is employed. A modification of this model that can resolve contact discontinuities is presented. Near a discontinuity, either this modified model or a more accurate one, namely Roe's flux-difference splitting, is used. The current presentation of Roe's method, via the conceptually simple flux-vector splitting, not only establishes a connection between the two splittings, but also leads to an admissibility correction with no conditional statement, and an efficient approximation to Osher's approximate Riemann solver. These reconstruction and upwind steps result in schemes that are uniformly second-order accurate and economical in smooth regions, and yield high resolution at discontinuities.
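
The role of the median function in simplifying a monotonicity constraint can be seen in a generic MUSCL-type reconstruction (a minimal sketch, not Huynh's exact constraint): the familiar minmod limiter is exactly median(a, b, 0), so the limited slope can be written without conditional statements:

```python
def median(a, b, c):
    # Middle value of three numbers, written branch-free.
    return max(min(a, b), min(max(a, b), c))

def minmod(a, b):
    # Zero if the arguments disagree in sign, else the one nearer zero.
    # Identity: minmod(a, b) == median(a, b, 0).
    return median(a, b, 0.0)

def limited_slopes(u):
    """Monotonicity-limited cell slopes for a 1-D list of cell averages.

    Each interior cell gets the minmod of its one-sided differences, so
    the piecewise linear reconstruction introduces no new extrema.
    """
    s = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        s[i] = minmod(u[i] - u[i - 1], u[i + 1] - u[i])
    return s
```

Near a local extremum the one-sided differences change sign, the median collapses to zero, and the scheme locally reduces to first-order Godunov, which is what preserves monotonicity.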

  16. Achieving perceptually-accurate aural telepresence

    NASA Astrophysics Data System (ADS)

    Henderson, Paul D.

    Immersive multimedia requires not only realistic visual imagery but also a perceptually-accurate aural experience. A sound field may be presented simultaneously to a listener via a loudspeaker rendering system using the direct sound from acoustic sources as well as a simulation or "auralization" of room acoustics. Beginning with classical Wave-Field Synthesis (WFS), improvements are made to correct for asymmetries in loudspeaker array geometry. Presented is a new Spatially-Equalized WFS (SE-WFS) technique to maintain the energy-time balance of a simulated room by equalizing the reproduced spectrum at the listener for a distribution of possible source angles. Each reproduced source or reflection is filtered according to its incidence angle to the listener. An SE-WFS loudspeaker array of arbitrary geometry reproduces the sound field of a room with correct spectral and temporal balance, compared with classically-processed WFS systems. Localization accuracy of human listeners in SE-WFS sound fields is quantified by psychoacoustical testing. At a loudspeaker spacing of 0.17 m (equivalent to an aliasing cutoff frequency of 1 kHz), SE-WFS exhibits a localization blur of 3 degrees, nearly equal to real point sources. Increasing the loudspeaker spacing to 0.68 m (for a cutoff frequency of 170 Hz) results in a blur of less than 5 degrees. In contrast, stereophonic reproduction is less accurate with a blur of 7 degrees. The ventriloquist effect is psychometrically investigated to determine the effect of an intentional directional incongruence between audio and video stimuli. Subjects were presented with prerecorded full-spectrum speech and motion video of a talker's head as well as broadband noise bursts with a static image. The video image was displaced from the audio stimulus in azimuth by varying amounts, and the perceived auditory location measured. 
A strong bias was detectable for small angular discrepancies between audio and video stimuli, at separations of less than 8 degrees.

  17. A Possible Detection of a Second Light-Time Orbit for the Massive, Early-Type Eclipsing Binary Star AH Cephei

    NASA Astrophysics Data System (ADS)

    Kim, Chun-Hwey; Nha, Il-Seong; Kreiner, Jerzy M.

    2005-02-01

    All published and newly observed times of minimum light of the massive, early-type eclipsing binary star AH Cep were analyzed. After subtracting the light-time effect due to the well-known third body from the residuals of the observed times of minimum light, it was found that the second-order O-C residuals varied in a cyclical way. It was assumed that the secondary oscillations were produced by a light-time effect due to a fourth body so all the times of minimum light were reanalyzed with a differential least-squares scheme in order to obtain the light-time orbits due to both the third and fourth bodies. The periods, eccentricities, and semiamplitudes of the light-time orbits for the third and fourth bodies were derived as P3=67.6 and P4=9.6 yr, e3=0.52 and e4=0.64, and K3=0.0608 and K4=0.0040 days, respectively. The radial velocities of AH Cep published so far do not conflict with the hypothesis of the multiplicity of the system, but their accuracies are not high enough to support the interpretation. Other properties of the distant bodies are discussed for assorted possible inclinations of their orbits.
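
The light-time delay underlying such O−C analyses has a standard closed form (Irwin's classic formulation) in terms of the outer orbit's elements. A minimal sketch follows; the numbers in the test are purely illustrative and are not the fitted elements of AH Cep:

```python
import math

C_AU_PER_DAY = 173.1446  # speed of light in astronomical units per day

def light_time_delay(a_sini_au, e, omega_deg, nu_deg):
    """Light-travel-time delay (days) of eclipse timings caused by a
    distant companion, as a function of the true anomaly nu.

    a_sini_au: projected semi-major axis of the eclipsing pair's orbit
    about the common barycentre, in AU; e: eccentricity; omega: argument
    of periastron. The semiamplitude K quoted in such papers is the
    peak-to-mean amplitude of this curve.
    """
    w = math.radians(omega_deg)
    nu = math.radians(nu_deg)
    r_term = (1.0 - e * e) / (1.0 + e * math.cos(nu))
    return (a_sini_au / C_AU_PER_DAY) * (r_term * math.sin(nu + w) + e * math.sin(w))
```

For a circular orbit (e = 0) the delay reduces to (a sin i / c) sin ν, a pure sinusoid in the O−C diagram; eccentricity and ω distort it into the cyclical but non-sinusoidal shapes fitted in this analysis.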

  18. 77 FR 2435 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-18

    ...- Free Treatment Under the Generalized System of Preferences and for Other Purposes Correction In... following correction: On page 407, the date following the proclamation number should read ``December...

  19. 78 FR 2193 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-10

    ... United States-Panama Trade Promotion Agreement and for Other Purposes Correction In Presidential document... correction: On page 66507, the proclamation identification heading on line one should read...

  20. Rethinking political correctness.

    PubMed

    Ely, Robin J; Meyerson, Debra E; Davidson, Martin N

    2006-09-01

    Legal and cultural changes over the past 40 years ushered unprecedented numbers of women and people of color into companies' professional ranks. Laws now protect these traditionally underrepresented groups from blatant forms of discrimination in hiring and promotion. Meanwhile, political correctness has reset the standards for civility and respect in people's day-to-day interactions. Despite this obvious progress, the authors' research has shown that political correctness is a double-edged sword. While it has helped many employees feel unlimited by their race, gender, or religion, the PC rule book can hinder people's ability to develop effective relationships across race, gender, and religious lines. Companies need to equip workers with skills--not rules--for building these relationships. The authors offer the following five principles for healthy resolution of the tensions that commonly arise over difference: Pause to short-circuit the emotion and reflect; connect with others, affirming the importance of relationships; question yourself to identify blind spots and discover what makes you defensive; get genuine support that helps you gain a broader perspective; and shift your mind-set from one that says, "You need to change," to one that asks, "What can I change?" When people treat their cultural differences--and related conflicts and tensions--as opportunities to gain a more accurate view of themselves, one another, and the situation, trust builds and relationships become stronger. Leaders should put aside the PC rule book and instead model and encourage risk taking in the service of building the organization's relational capacity. The benefits will reverberate through every dimension of the company's work.

  1. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989
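
The key idea — restraining only the ensemble average so individual copies stay free to fluctuate — can be sketched with a toy harmonic restraint (the actual protocol restrains against crystallographic data, i.e. structure factors, not raw coordinates; this stand-in is only illustrative):

```python
def ensemble_restraint(copies, target, k):
    """Harmonic restraint energy on the ENSEMBLE-AVERAGE coordinates.

    copies: list of conformers, each a list of (x, y, z) atom positions;
    target: reference coordinates; k: force constant. Only the mean
    structure is penalized, so any single copy can drift as long as the
    ensemble average stays near the target.
    """
    n = len(copies)
    energy = 0.0
    for atom in range(len(target)):
        for d in range(3):
            mean = sum(c[atom][d] for c in copies) / n
            energy += k * (mean - target[atom][d]) ** 2
    return energy
```

Two copies displaced symmetrically about the target incur zero penalty, whereas a coherent drift of all copies is penalized — which is exactly why such restraints leave per-molecule dynamics largely untouched while preventing the trajectory-wide drift described above.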

  2. TPX correction coil studies

    SciTech Connect

    Hanson, J.D.

    1994-11-03

    Error correction coils are planned for the TPX (Tokamak Plasma Experiment) in order to avoid error field induced locked modes and disruption. The FT (Fix Tokamak) code is used to evaluate the ability of these correction coils to remove islands caused by symmetry breaking magnetic field errors. The proposed correction coils are capable of correcting a variety of error fields.

  3. Deconvolution with Correct Sampling

    NASA Astrophysics Data System (ADS)

    Magain, P.; Courbin, F.; Sohy, S.

    1998-02-01

    A new method for improving the resolution of astronomical images is presented. It is based on the principle that sampled data cannot be fully deconvolved without violating the sampling theorem. Thus, the sampled image should be deconvolved not by the total point-spread function but by a narrower function chosen so that the resolution of the deconvolved image is compatible with the adopted sampling. Our deconvolution method gives results that are, in at least some cases, superior to those of other commonly used techniques: in particular, it does not produce ringing around point sources superposed on a smooth background. Moreover, it allows researchers to perform accurate astrometry and photometry of crowded fields. These improvements are a consequence of both the correct treatment of sampling and the recognition that the most probable astronomical image is not a flat one. The method is also well adapted to the optimal combination of different images of the same object, as can be obtained, e.g., from infrared observations or via adaptive optics techniques.
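
The principle — deconvolve by a narrower function so the result keeps a PSF compatible with the sampling — is easiest to see for Gaussian PSFs, whose widths add in quadrature. The sketch below is a toy illustration of that principle under a Gaussian assumption, not the paper's actual algorithm:

```python
import math

def partial_deconvolution_sigma(sigma_total_px, target_fwhm_px=2.0):
    """Width of the kernel to deconvolve by, for Gaussian PSFs.

    If the total PSF is Gaussian with width sigma_total and the
    deconvolved image must keep a Gaussian PSF just wide enough for the
    sampling (FWHM >= 2 pixels, per the sampling theorem), then
    sigma_total^2 = sigma_target^2 + sigma_kernel^2, so the partial
    deconvolution kernel has the width returned here.
    """
    # Convert the target FWHM to a Gaussian sigma.
    sigma_target = target_fwhm_px / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    if sigma_total_px <= sigma_target:
        raise ValueError("PSF is already at or below the sampling limit")
    return math.sqrt(sigma_total_px ** 2 - sigma_target ** 2)
```

Deconvolving by this narrower kernel instead of the full PSF is what avoids violating the sampling theorem and hence the ringing around point sources that total deconvolution produces.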

  4. Thermodynamics of Error Correction

    NASA Astrophysics Data System (ADS)

    Sartori, Pablo; Pigolotti, Simone

    2015-10-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  5. MR image intensity inhomogeneity correction

    NASA Astrophysics Data System (ADS)

    Vişan Pungǎ, Mirela; Moldovanu, Simona; Moraru, Luminita

    2015-01-01

    MR technology is one of the best and most reliable ways of studying the brain. Its main drawback is the so-called intensity inhomogeneity, or bias field, which impairs visual inspection and medical diagnosis and strongly affects quantitative image analysis. Noise is yet another artifact in medical images. In order to accurately and effectively restore the original signal, this report addresses filtering, bias correction, and quantitative evaluation of the correction. Two denoising algorithms are used: (i) basis rotation fields of experts (BRFoE) and (ii) anisotropic diffusion (considering Gaussian noise, the Perona-Malik and Tukey's biweight functions, and the standard deviation of the noise of the input image).
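
A minimal one-dimensional sketch of Perona-Malik anisotropic diffusion shows the edge-stopping idea; the parameters are illustrative, and the pipeline above works on 2-D MR slices and also considers Tukey's biweight stopping function:

```python
import math

def perona_malik_1d(signal, k=10.0, lam=0.2, steps=5):
    """One-dimensional Perona-Malik anisotropic diffusion (sketch).

    Conductance g(x) = exp(-(x/k)^2): large gradients (edges) conduct
    little and are preserved, while small noisy fluctuations in flat
    regions are smoothed away. lam <= 0.5 keeps the explicit update
    stable in 1-D.
    """
    u = list(signal)
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, len(u) - 1):
            east, west = u[i + 1] - u[i], u[i - 1] - u[i]
            # Edge-stopping weighted flux from each neighbour.
            flux = (math.exp(-(east / k) ** 2) * east
                    + math.exp(-(west / k) ** 2) * west)
            nxt[i] = u[i] + lam * flux
        u = nxt
    return u
```

With k large relative to the noise amplitude this behaves like linear diffusion and suppresses noise; with k small, sharp intensity steps diffuse almost not at all, which is the property that makes the method attractive before bias-field correction.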

  6. On numerically accurate finite element

    NASA Technical Reports Server (NTRS)

    Nagtegaal, J. C.; Parks, D. M.; Rice, J. R.

    1974-01-01

    A general criterion for testing a mesh with topologically similar repeat units is given, and the analysis shows that only a few conventional element types and arrangements are, or can be made, suitable for computations in the fully plastic range. Further, a new variational principle, which can easily and simply be incorporated into an existing finite element program, is presented. This allows accurate computations to be made even for element designs that would not normally be suitable. Numerical results are given for three plane strain problems, namely pure bending of a beam, a thick-walled tube under pressure, and a deep double-edge-cracked tensile specimen. The effects of various element designs and of the new variational procedure are illustrated. Elastic-plastic computations at finite strain are discussed.

  7. Accurate, reproducible measurement of blood pressure.

    PubMed Central

    Campbell, N R; Chockalingam, A; Fodor, J G; McKay, D W

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine consumption, smoking and physical exertion within half an hour before measurement. The use of standardized techniques to measure blood pressure will help to avoid large systematic errors. Poor technique can account for differences in readings of more than 15 mm Hg and ultimately misdiagnosis. Most of the recommended procedures are simple and, when routinely incorporated into clinical practice, require little additional time. The equipment must be appropriate and in good condition. Physicians should have a suitable selection of cuff sizes readily available; the use of the correct cuff size is essential to minimize systematic errors in blood pressure measurement. Semiannual calibration of aneroid sphygmomanometers and annual inspection of mercury sphygmomanometers and blood pressure cuffs are recommended. We review the methods recommended for measuring blood pressure and discuss the factors known to produce large differences in blood pressure readings. PMID:2192791

  8. Accurate equilibrium structures for piperidine and cyclohexane.

    PubMed

    Demaison, Jean; Craig, Norman C; Groner, Peter; Écija, Patricia; Cocinero, Emilio J; Lesarri, Alberto; Rudolph, Heinz Dieter

    2015-03-05

    Extended and improved microwave (MW) measurements are reported for the isotopologues of piperidine. New ground state (GS) rotational constants are fitted to MW transitions with quartic centrifugal distortion constants taken from ab initio calculations. Predicate values for the geometric parameters of piperidine and cyclohexane are found from a high level of ab initio theory including adjustments for basis set dependence and for correlation of the core electrons. Equilibrium rotational constants are obtained from GS rotational constants corrected for vibration-rotation interactions and electronic contributions. Equilibrium structures for piperidine and cyclohexane are fitted by the mixed estimation method. In this method, structural parameters are fitted concurrently to predicate parameters (with appropriate uncertainties) and moments of inertia (with uncertainties). The new structures are regarded as being accurate to 0.001 Å and 0.2°. Comparisons are made between bond parameters in equatorial piperidine and cyclohexane. Another interesting result of this study is that a structure determination is an effective way to check the accuracy of the ground state experimental rotational constants.

  9. Accurate and precise zinc isotope ratio measurements in urban aerosols.

    PubMed

    Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly

    2008-12-15

    We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is instead corrected for by applying the common analyte internal standardization approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial), ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).
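
For context, the external Cu correction mentioned above is conventionally done with the exponential mass-fractionation law: the instrumental bias factor inferred from the offset of the measured 65Cu/63Cu ratio from its accepted value is applied to the Zn ratio. The sketch below is this textbook variant, not the paper's common-analyte internal standardization; the Cu reference ratio and atomic masses are rounded reference numbers:

```python
import math

def exp_law_corrected_ratio(r_meas_zn, r_meas_cu, r_true_cu=0.4456,
                            m66=65.9260, m64=63.9291,
                            m65=64.9278, m63=62.9296):
    """Exponential-law mass-bias correction of a measured 66Zn/64Zn
    ratio using admixed Cu as an external dopant.

    The fractionation factor f is inferred from the measured vs.
    accepted 65Cu/63Cu ratio, then applied to Zn with the Zn isotope
    masses (exponential law: R_true = R_meas * (m_heavy/m_light)^f).
    """
    f = math.log(r_true_cu / r_meas_cu) / math.log(m65 / m63)
    return r_meas_zn * (m66 / m64) ** f
```

If the measured Cu ratio equals the accepted value, f = 0 and the Zn ratio is returned unchanged; a measured Cu ratio below the accepted value (heavy-isotope suppression) yields f > 0 and an upward-corrected Zn ratio.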

  10. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.

  11. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    NASA Astrophysics Data System (ADS)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper, accurate measurements of different fiber lengths using the time-of-flight technique are performed. A setup is proposed to measure lengths from 1 to 40 km accurately at 1,550 and 1,310 nm using a high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to the meter by definition), by locking the time interval counter to a Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory (NPL) of the United Kingdom. Finally, a method is proposed to apply a relative correction to the fiber refractive index to allow accurate fiber length measurement.
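
The underlying relation for a round-trip time-of-flight measurement is L = c·t / (2·n_g). A minimal sketch, using an illustrative group index typical of standard single-mode fiber near 1550 nm (not the paper's calibrated value):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by SI definition)

def fiber_length(round_trip_s, group_index=1.4682):
    """Fiber length (m) from a round-trip time-of-flight measurement.

    The pulse travels down the fiber and back (factor 2), slowed by the
    group index n_g at the measurement wavelength: L = c * t / (2 * n_g).
    """
    return C * round_trip_s / (2.0 * group_index)
```

Because L scales as 1/n_g, a relative error in the assumed group index maps one-to-one into a relative length error, which is why the paper closes with a method to correct the fiber refractive index.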

  12. Accurate and Inaccurate Conceptions about Osmosis That Accompanied Meaningful Problem Solving.

    ERIC Educational Resources Information Center

    Zuckerman, June Trop

    This study focused on the knowledge of six outstanding science students who solved an osmosis problem meaningfully. That is, they used appropriate and substantially accurate conceptual knowledge to generate an answer. Three generated a correct answer; three, an incorrect answer. This paper identifies both the accurate and inaccurate conceptions…

  13. 75 FR 18747 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-13

    ... Day: A National Day of Celebration of Greek and American Democracy, 2010 Correction In Presidential... correction: On page 15601, the first line of the heading should read ``Proclamation 8485 of March 24,...

  14. 77 FR 45469 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-01

    ... Respect to the Former Liberian Regime of Charles Taylor Correction In Presidential document 2012-17703 beginning on page 42415 in the issue of Wednesday, July 18, 2012, make the following correction: On...

  15. 78 FR 7255 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-01

    ... Unobligated Funds Under the American Recovery and Reinvestment Act of 2009 Correction In Presidential document... correction: On page 70883, the document identification heading on line one should read ``Notice of...

  16. 75 FR 68413 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Correction In Presidential document 2010-27676 beginning on page 67019 in the issue of Monday, November 1, 2010, make the following correction: On page 67019, the Presidential Determination number should...

  17. 75 FR 1013 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-08

    ... Correction In Presidential document E9-31418 beginning on page 707 in the issue of Tuesday, January 5, 2010, make the following correction: On page 731, the date line below the President's signature should...

  18. 75 FR 68409 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Migration Needs Resulting From Flooding In Pakistan Correction In Presidential document 2010-27673 beginning on page 67015 in the issue of Monday, November 1, 2010, make the following correction: On page...

  19. 78 FR 73377 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-06

    ...--Continuation of U.S. Drug Interdiction Assistance to the Government of Colombia Correction In Presidential... correction: On page 51647, the heading of the document was omitted and should read ``Continuation of...

  20. 77 FR 60037 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-02

    ... Commit, Threaten To Commit, or Support Terrorism Correction In Presidential document 2012-22710 beginning on page 56519 in the issue of Wednesday, September 12, 2012, make the following correction: On...

  1. 75 FR 68407 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-08

    ... Migration Needs Resulting from Violence in Kyrgyzstan Correction In Presidential document 2010-27672 beginning on page 67013 in the issue of Monday, November 1, 2010, make the following correction: On...

  2. Research in Correctional Rehabilitation.

    ERIC Educational Resources Information Center

    Rehabilitation Services Administration (DHEW), Washington, DC.

    Forty-three leaders in corrections and rehabilitation participated in the seminar planned to provide an indication of the status of research in correctional rehabilitation. Papers include: (1) "Program Trends in Correctional Rehabilitation" by John P. Conrad, (2) "Federal Offenders Rehabilitation Program" by Percy B. Bell and Merlyn Mathews, (3)…

  3. Teaching Politically Correct Language

    ERIC Educational Resources Information Center

    Tsehelska, Maryna

    2006-01-01

    This article argues that teaching politically correct language to English learners provides them with important information and opportunities to be exposed to cultural issues. The author offers a brief review of how political correctness became an issue and how being politically correct influences the use of language. The article then presents…

  4. The Utility of Maze Accurate Response Rate in Assessing Reading Comprehension in Upper Elementary and Middle School Students

    ERIC Educational Resources Information Center

    McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric

    2014-01-01

    This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…

  5. Accurate, meshless methods for magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.; Raives, Matthias J.

    2016-01-01

    Recently, we explored new meshless finite-volume Lagrangian methods for hydrodynamics: the `meshless finite mass' (MFM) and `meshless finite volume' (MFV) methods; these capture advantages of both smoothed particle hydrodynamics (SPH) and adaptive mesh refinement (AMR) schemes. We extend these to include ideal magnetohydrodynamics (MHD). The MHD equations are second-order consistent and conservative. We augment these with a divergence-cleaning scheme, which maintains ∇ · B ≈ 0. We implement these in the code GIZMO, together with state-of-the-art SPH MHD. We consider a large test suite, and show that on all problems the new methods are competitive with AMR using constrained transport (CT) to ensure ∇ · B = 0. They correctly capture the growth/structure of the magnetorotational instability, MHD turbulence, and launching of magnetic jets, in some cases converging more rapidly than state-of-the-art AMR. Compared to SPH, the MFM/MFV methods exhibit convergence at fixed neighbour number, sharp shock-capturing, and dramatically reduced noise, divergence errors, and diffusion. Still, `modern' SPH can handle most test problems, at the cost of larger kernels and `by hand' adjustment of artificial diffusion. Compared to non-moving meshes, the new methods exhibit enhanced `grid noise' but reduced advection errors and diffusion, easily include self-gravity, and feature velocity-independent errors and superior angular momentum conservation. They converge more slowly on some problems (smooth, slow-moving flows), but more rapidly on others (involving advection/rotation). In all cases, we show that divergence control beyond the Powell 8-wave approach is necessary, or all methods can converge to unphysical answers even at high resolution.

  6. Effective and Accurate Colormap Selection

    NASA Astrophysics Data System (ADS)

    Thyng, K. M.; Greene, C. A.; Hetland, R. D.; Zimmerle, H.; DiMarco, S. F.

    2016-12-01

    Science is often communicated through plots, and design choices can elucidate or obscure the presented data. The colormap used can honestly and clearly display data in a visually-appealing way, or can falsely exaggerate data gradients and confuse viewers. Fortunately, there is a large resource of literature in color science on how color is perceived which we can use to inform our own choices. Following this literature, colormaps can be designed to be perceptually uniform; that is, so an equally-sized jump in the colormap at any location is perceived by the viewer as the same size. This ensures that gradients in the data are accurately perceived. The same colormap is often used to represent many different fields in the same paper or presentation. However, this can cause difficulty in quick interpretation of multiple plots. For example, in one plot the viewer may have trained their eye to recognize that red represents high salinity, and therefore higher density, while in the subsequent temperature plot they need to adjust their interpretation so that red represents high temperature and therefore lower density. In the same way that a single Greek letter is typically chosen to represent a field for a paper, we propose to choose a single colormap to represent a field in a paper, and use multiple colormaps for multiple fields. We have created a set of colormaps that are perceptually uniform, and follow several other design guidelines. There are 18 colormaps to give options to choose from for intuitive representation. For example, a colormap of greens may be used to represent chlorophyll concentration, or browns for turbidity. With careful consideration of human perception and design principles, colormaps may be chosen which faithfully represent the data while also engaging viewers.

  7. How flatbed scanners upset accurate film dosimetry

    NASA Astrophysics Data System (ADS)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels at the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and dose delivered to the film.
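
The correction the authors call for can be sketched as a per-channel, per-lateral-position multiplicative map, calibrated beforehand by scanning a uniformly irradiated film across the scanner bed. This is a hypothetical minimal form — a real correction also depends on dose, as the abstract notes:

```python
def correct_lse(scan_rows, factors):
    """Apply a lateral-scan-effect correction to scanner readout.

    scan_rows: rows of pixel values for ONE colour channel;
    factors: per-column multiplicative corrections measured from a
    uniformly irradiated calibration film. A separate factor table is
    needed for each colour channel (and, in practice, each dose level).
    """
    return [[v * f for v, f in zip(row, factors)] for row in scan_rows]
```

A column whose readout is depressed by the LSE gets a factor above 1, restoring the flat response a uniform film should produce.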

  8. How flatbed scanners upset accurate film dosimetry.

    PubMed

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost transmission flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To investigate this, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range of 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and, therefore, determination of the LSE per color channel and per dose delivered to the film.
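
    The per-channel, per-dose correction the authors call for can be sketched as a simple divide-out of a calibrated lateral-error curve. The curve values below are invented for illustration (only the "up to 14% at 9 Gy" endpoint echoes the abstract); a real calibration would hold one curve per color channel and dose level:

```python
import numpy as np

# Illustrative lateral-scan-effect (LSE) calibration: relative readout
# error vs. fractional lateral position for one color channel at 9 Gy.
# These numbers are made up; only the ~14% edge value mirrors the abstract.
LATERAL = np.array([0.0, 0.5, 1.0])      # 0 = scan axis, 1 = lateral edge
ERROR_9GY = np.array([0.0, 0.05, 0.14])  # relative readout change

def correct_reading(value, lateral_frac, error_curve=ERROR_9GY):
    """Divide out the LSE error interpolated at the pixel's lateral
    position; in practice one curve per channel and dose is calibrated."""
    err = np.interp(lateral_frac, LATERAL, error_curve)
    return value / (1.0 + err)
```

    For example, a reading inflated by 14% at the scanner edge is restored to its on-axis value; on the scan axis the correction is a no-op.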

  9. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect to... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Accurate measurement. 4...

  10. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect to... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Accurate measurement. 4...

  11. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect to... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Accurate measurement. 4...

  12. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect to... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Accurate measurement. 4...

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect to... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Accurate measurement. 4...

  14. Shuttle program: Computing atmospheric scale height for refraction corrections

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Methods for computing the atmospheric scale height to determine radio wave refraction were investigated for different atmospheres, and different angles of elevation. Tables of refractivity versus altitude are included. The equations used to compute the refraction corrections are given. It is concluded that very accurate corrections are determined with the assumption of an exponential atmosphere.
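
    The exponential-atmosphere assumption the report validates can be sketched numerically. The surface refractivity of 313 N-units and 7 km scale height below are typical textbook values, not taken from the report, and the cot(E) bending formula is the standard small-refraction approximation for elevations well above the horizon:

```python
import math

def refractivity(h_km, n_s=313.0, h_scale_km=7.0):
    """Refractivity (N-units) of an exponential atmosphere:
    N(h) = N_s * exp(-h / H).  N_s and H are illustrative values."""
    return n_s * math.exp(-h_km / h_scale_km)

def elevation_correction_deg(elev_deg, n_s=313.0):
    """Approximate radio-wave refraction correction (degrees) for
    elevation angles well above the horizon: dE ~ N_s * 1e-6 * cot(E)."""
    return math.degrees(n_s * 1e-6 / math.tan(math.radians(elev_deg)))
```

    At 45 degrees elevation this gives a bending of roughly 0.018 degrees; the correction grows rapidly as the elevation angle approaches the horizon, where this simple approximation breaks down.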

  15. Request for Correction 10003

    EPA Pesticide Factsheets

Letter from Jeff Rush requesting rescission and correction of online and printed information regarding alleged greenhouse gas emissions reductions resulting from beneficial use of coal combustion waste products.

  16. 78 FR 55169 - Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-10

    ... Commodities and Services From Any Agency of the United States Government to the Syrian Opposition Coalition (SOC) and the Syrian Opposition's Supreme Military Council (SMC) Correction In Presidential...

  17. Clarifying types of uncertainty: when are models accurate, and uncertainties small?

    PubMed

    Cox, Louis Anthony Tony

    2011-10-01

    Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.

  18. Acquisition of accurate data from intramolecular quenched fluorescence protease assays.

    PubMed

    Arachea, Buenafe T; Wiener, Michael C

    2017-04-01

The Intramolecular Quenched Fluorescence (IQF) protease assay utilizes peptide substrates containing donor-quencher pairs that flank the scissile bond. Following protease cleavage, the dequenched donor emission of the product is measured. Inspection of the IQF literature indicates that rigorous treatment of systematic errors in observed fluorescence arising from inner-filter absorbance (IF) and non-specific intermolecular quenching (NSQ) is incompletely performed. As substrate and product concentrations vary during the time-course of enzyme activity, iterative solution of the kinetic rate equations is generally required to obtain the proper time-dependent correction to the initial velocity fluorescence data. Here, we demonstrate that, if the IQF assay is performed under conditions where IF and NSQ are approximately constant during the measurement of initial velocity for a given initial substrate concentration, then a simple correction as a function of initial substrate concentration can be derived and utilized to obtain accurate initial velocity data for analysis.
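
    The paper derives its own substrate-concentration-dependent factor; as a point of reference, the generic inner-filter correction that such factors build on (for a standard 1 cm path, with absorbance split evenly over the excitation and emission half-paths) looks like this — a sketch, not the authors' exact expression:

```python
def inner_filter_correction(f_obs, a_ex, a_em):
    """Generic inner-filter (IF) correction: observed fluorescence is
    attenuated by absorbance at the excitation and emission wavelengths,
    each averaged over half the light path, so the true signal is
    recovered as F_obs * 10**((A_ex + A_em) / 2)."""
    return f_obs * 10 ** ((a_ex + a_em) / 2.0)
```

    When IF is approximately constant over the initial-velocity window, as the paper requires, this factor is evaluated once per initial substrate concentration rather than iteratively over the full time course.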

  19. Image correction in magneto-optical microscopy

    NASA Astrophysics Data System (ADS)

    Paturi, P.; Larsen, B. Hvolbæk; Jacobsen, B. A.; Andersen, N. H.

    2003-06-01

    An image-processing procedure that assures correct determination of the magnetic field distribution of magneto-optical images is presented. The method remedies image faults resulting from sources that are proportional to the incident light intensity, such as different types of defects in the indicator film and unevenness of light, as well as additive signals from detector bias, external light sources, etc. When properly corrected a better measurement of the local magnetic field can be made, even in the case of heavily damaged films. For superconductors the magnetic field distributions may be used for accurate determination of the current distributions without the spurious current loops associated with defects in the films.

  20. Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng

Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model, and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation, and low Reynolds number correction were considered. The influence of these corrections on flow properties is discussed by comparing with results obtained without corrections. The emphasis is on the assessment and evaluation of the turbulence models in predicting heat transfer as applied to a range of hypersonic flows, with comparison to experimental data. This will enable establishing a factor of safety for the design of thermal protection systems of hypersonic vehicles.

  1. Accurate pressure gradient calculations in hydrostatic atmospheric models

    NASA Technical Reports Server (NTRS)

    Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet

    1987-01-01

    A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.

  2. Submicrometer Single Crystal Diffractometry for Highly Accurate Structure Determination

    SciTech Connect

    Yasuda, Nobuhiro; Fukuyama, Yoshimitsu; Kimura, Shigeru; Toriumi, Koshiro; Takata, Masaki

    2010-06-23

Submicrometer single crystal diffractometry for highly accurate structure determination was developed using the extremely stable and highly brilliant synchrotron radiation from SPring-8. This was achieved using a microbeam focusing system and the submicrometer precision low-eccentric goniometer system. We demonstrated the structure analyses with 2x2x2 {mu}m{sup 3} cytidine, 600x600x300 nm{sup 3} BaTiO{sub 3}, and 1x1x1 {mu}m{sup 3} silicon. The observed structure factors of the silicon crystal were in agreement with the structure factors determined by the Pendelloesung method and did not require absorption and extinction corrections.

  3. Simple and accurate temperature correction for moisture pin calibrations in oriented strand board

    Treesearch

    Charles Boardman; Samuel V. Glass; Patricia K. Lebow

    2017-01-01

    Oriented strand board (OSB) is commonly used in the residential construction market in North America and its moisture-related durability is a critical consideration for building envelope design. Measurement of OSB moisture content (MC), a key determinant of durability, is often done using moisture pins and relies on a correlation between MC and the electrical...

  4. Laser correcting mirror

    DOEpatents

    Sawicki, Richard H.

    1994-01-01

An improved laser correction mirror (10) for correcting aberrations in a laser beam wavefront having a rectangular mirror body (12) with a plurality of legs (14, 16, 18, 20, 22, 24, 26, 28) arranged into opposing pairs (34, 36, 38, 40) along the long sides (30, 32) of the mirror body (12). Vector force pairs (49, 50, 52, 54) are applied by adjustment mechanisms (42, 44, 46, 48) between members of the opposing pairs (34, 36, 38, 40) for bending a reflective surface (13) of the mirror body (12) into a shape defining a function which can be used to correct for comatic aberrations.

  5. Static Scene Statistical Non-Uniformity Correction

    DTIC Science & Technology

    2015-03-01

    and correct fixed pattern, systematic errors. The algorithm was tested in simulation and with measured data and the results indicate that the S3NUC...algorithm is an accurate method of applying NUC. The algorithm was also able to track global array response changes over time in simulated and measured ... measurable charge level. Over that period of time, the detector collects and stores the charge from all arriving photons. Photon arrivals are

  6. A full-chip DSA correction framework

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Long; Latypov, Azat; Zou, Yi; Coskun, Tamer

    2014-03-01

The graphoepitaxy DSA process relies on lithographically created confinement wells to perform directed self-assembly in the thin film of the block copolymer. These self-assembled patterns are then etch transferred into the substrate. Conventional DUV immersion or EUV lithography is still required to print these confinement wells, and the lithographic patterning residual errors propagate to the final patterns created by the DSA process. DSA proximity correction (PC), in addition to OPC, is essential to obtain accurate confinement well shapes that resolve the final DSA patterns precisely. In this study, we proposed a novel correction flow that integrates our co-optimization algorithms, rigorous 2-D DSA simulation engine, and OPC tool. This flow enables us to optimize our process and integration as well as provides guidance for design optimization. We also showed that novel RET techniques such as DSA-aware assist feature generation can be used to improve the process window. The feasibility of our DSA correction framework on large layouts with promising correction accuracy has been demonstrated. A robust and efficient correction algorithm is also determined by rigorous verification studies. We also explored how knowledge of DSA natural pitches and lithography printing constraints provides good guidance for establishing DSA-friendly designs. Finally, the application of our DSA full-chip computational correction framework to several real designs of contact-like holes is discussed. We also summarize the challenges associated with computational DSA technology.

  7. Correcting Hubble Vision.

    ERIC Educational Resources Information Center

    Shaw, John M.; Sheahen, Thomas P.

    1994-01-01

    Describes the theory behind the workings of the Hubble Space Telescope, the spherical aberration in the primary mirror that caused a reduction in image quality, and the corrective device that compensated for the error. (JRH)

  8. Corrected Age for Preemies

    MedlinePlus

  10. Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Lidar, Daniel A.; Brun, Todd A.

    2013-09-01

    Prologue; Preface; Part I. Background: 1. Introduction to decoherence and noise in open quantum systems Daniel Lidar and Todd Brun; 2. Introduction to quantum error correction Dave Bacon; 3. Introduction to decoherence-free subspaces and noiseless subsystems Daniel Lidar; 4. Introduction to quantum dynamical decoupling Lorenza Viola; 5. Introduction to quantum fault tolerance Panos Aliferis; Part II. Generalized Approaches to Quantum Error Correction: 6. Operator quantum error correction David Kribs and David Poulin; 7. Entanglement-assisted quantum error-correcting codes Todd Brun and Min-Hsiu Hsieh; 8. Continuous-time quantum error correction Ognyan Oreshkov; Part III. Advanced Quantum Codes: 9. Quantum convolutional codes Mark Wilde; 10. Non-additive quantum codes Markus Grassl and Martin Rötteler; 11. Iterative quantum coding systems David Poulin; 12. Algebraic quantum coding theory Andreas Klappenecker; 13. Optimization-based quantum error correction Andrew Fletcher; Part IV. Advanced Dynamical Decoupling: 14. High order dynamical decoupling Zhen-Yu Wang and Ren-Bao Liu; 15. Combinatorial approaches to dynamical decoupling Martin Rötteler and Pawel Wocjan; Part V. Alternative Quantum Computation Approaches: 16. Holonomic quantum computation Paolo Zanardi; 17. Fault tolerance for holonomic quantum computation Ognyan Oreshkov, Todd Brun and Daniel Lidar; 18. Fault tolerant measurement-based quantum computing Debbie Leung; Part VI. Topological Methods: 19. Topological codes Héctor Bombín; 20. Fault tolerant topological cluster state quantum computing Austin Fowler and Kovid Goyal; Part VII. Applications and Implementations: 21. Experimental quantum error correction Dave Bacon; 22. Experimental dynamical decoupling Lorenza Viola; 23. Architectures Jacob Taylor; 24. Error correction in quantum communication Mark Wilde; Part VIII. Critical Evaluation of Fault Tolerance: 25. Hamiltonian methods in QEC and fault tolerance Eduardo Novais, Eduardo Mucciolo and

  11. Adaptable DC offset correction

    NASA Technical Reports Server (NTRS)

    Golusky, John M. (Inventor); Muldoon, Kelly P. (Inventor)

    2009-01-01

    Methods and systems for adaptable DC offset correction are provided. An exemplary adaptable DC offset correction system evaluates an incoming baseband signal to determine an appropriate DC offset removal scheme; removes a DC offset from the incoming baseband signal based on the appropriate DC offset scheme in response to the evaluated incoming baseband signal; and outputs a reduced DC baseband signal in response to the DC offset removed from the incoming baseband signal.
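
    The evaluate-then-remove flow described in the abstract might look like the following; the two schemes shown (block-mean subtraction versus a one-pole tracking filter) and the filter constant are illustrative stand-ins, not the patented schemes themselves:

```python
import numpy as np

def remove_dc(signal, adaptive=False, alpha=0.995):
    """Remove the DC component of a baseband signal.
    Static mode subtracts the block mean; adaptive mode tracks a slowly
    drifting offset with a leaky integrator (alpha is illustrative)."""
    x = np.asarray(signal, dtype=float)
    if not adaptive:
        return x - x.mean()
    y = np.empty_like(x)
    dc = 0.0
    for i, s in enumerate(x):
        dc = alpha * dc + (1.0 - alpha) * s  # running DC estimate
        y[i] = s - dc
    return y
```

    The static scheme suits burst processing where the whole block is available; the adaptive scheme suits streaming signals whose offset drifts over time.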

  12. moco: Fast Motion Correction for Calcium Imaging.

    PubMed

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, fft-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ.
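
    The Fourier-transform registration step can be illustrated with a plain FFT cross-correlation; this sketch recovers integer translations only and omits moco's downsampling and dynamic-programming accelerations:

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the integer translation (dy, dx) such that
    np.roll(frame, (dy, dx), axis=(0, 1)) best aligns `frame` to `ref`,
    via the peak of the FFT-computed circular cross-correlation."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    dy, dx = np.unravel_index(np.argmax(xc), xc.shape)
    # map wrap-around peak indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

    Computing the correlation in the Fourier domain costs O(N log N) per frame instead of the O(N^2) of a spatial search over all offsets, which is what makes closed-loop use feasible.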

  13. Mobile image based color correction using deblurring

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2015-03-01

Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space.
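
    The polynomial color-correction step can be sketched as a least-squares fit from observed checkerboard patch colors to their known reference values. The second-order RGB feature set below is illustrative; the paper's improved model works in the LMS color space:

```python
import numpy as np

def _features(rgb):
    """Second-order polynomial features of an (N, 3) array of colors."""
    r, g, b = np.asarray(rgb, float).T
    return np.column_stack([r, g, b, r * g, r * b, g * b,
                            r ** 2, g ** 2, b ** 2, np.ones(len(rgb))])

def fit_color_correction(observed, reference):
    """Least-squares polynomial color correction: fit coefficients that
    map observed checkerboard patch colors onto their known reference
    values (one column of coefficients per output channel)."""
    coeffs, *_ = np.linalg.lstsq(_features(observed),
                                 np.asarray(reference, float), rcond=None)
    return coeffs  # shape (10, 3)

def apply_color_correction(rgb, coeffs):
    """Apply fitted coefficients to new colors."""
    return _features(rgb) @ coeffs
```

    Once fitted on the fiducial marker's patches, the same coefficients are applied to every pixel of the food image taken under those lighting conditions.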

  14. moco: Fast Motion Correction for Calcium Imaging

    PubMed Central

    Dubbs, Alexander; Guevara, James; Yuste, Rafael

    2016-01-01

Motion correction is the first step in a pipeline of algorithms to analyze calcium imaging videos and extract biologically relevant information, for example the network structure of the neurons therein. Fast motion correction is especially critical for closed-loop activity triggered stimulation experiments, where accurate detection and targeting of specific cells is necessary. We introduce a novel motion-correction algorithm which uses a Fourier-transform approach, and a combination of judicious downsampling and the accelerated computation of many L2 norms using dynamic programming and two-dimensional, fft-accelerated convolutions, to enhance its efficiency. Its accuracy is comparable to that of established community-used algorithms, and it is more stable to large translational motions. It is programmed in Java and is compatible with ImageJ. PMID:26909035

  15. Mobile Image Based Color Correction Using Deblurring

    PubMed Central

    Wang, Yu; Xu, Chang; Boushey, Carol; Zhu, Fengqing; Delp, Edward J.

    2016-01-01

Dietary intake, the process of determining what someone eats during the course of a day, provides valuable insights for mounting intervention programs for prevention of many chronic diseases such as obesity and cancer. The goal of the Technology Assisted Dietary Assessment (TADA) System, developed at Purdue University, is to automatically identify and quantify foods and beverages consumed by utilizing food images acquired with a mobile device. Color correction serves as a critical step to ensure accurate food identification and volume estimation. We make use of a specifically designed color checkerboard (i.e. a fiducial marker) to calibrate the imaging system so that the variations of food appearance under different lighting conditions can be determined. In this paper, we propose an image quality enhancement technique by combining image de-blurring and color correction. The contribution consists of introducing an automatic camera shake removal method using a saliency map and improving the polynomial color correction model using the LMS color space. PMID:28572697

  16. Landsat TM memory effect characterization and correction

    USGS Publications Warehouse

    Helder, D.; Boncyk, W.; Morfitt, R.

    1997-01-01

    Before radiometric calibration of Landsat Thematic Mapper (TM) data can be done accurately, it is necessary to minimize the effects of artifacts present in the data that originate in the instrument's signal processing path. These artifacts have been observed in downlinked image data since shortly after launch of Landsat 4 and 5. However, no comprehensive work has been done to characterize all the artifacts and develop methods for their correction. In this paper, the most problematic artifact is discussed: memory effect (ME). Characterization of this artifact is presented, including the parameters necessary for its correction. In addition, a correction algorithm is described that removes the artifact from TM imagery. It will be shown that this artifact causes significant radiometry errors, but the effect can be removed in a straightforward manner.

  17. Respiration correction by clustering in ultrasound images

    NASA Astrophysics Data System (ADS)

    Wu, Kaizhi; Chen, Xi; Ding, Mingyue; Sang, Nong

    2016-03-01

Respiratory motion is a challenging factor for image acquisition, image-guided procedures, and perfusion quantification using contrast-enhanced ultrasound in the abdominal and thoracic region. In order to reduce the influence of respiratory motion, respiratory correction methods were investigated. In this paper, we propose a novel cluster-based respiratory correction method. In the proposed method, we first assign image frames to their corresponding respiratory phase using spectral clustering, and then achieve image correction automatically by finding a cluster whose points are close to each other. Unlike traditional gating methods, we do not need to estimate the breathing cycle accurately, because images at the same respiratory phase are similar and therefore close in high-dimensional space. The proposed method was tested on a simulated image sequence and a real ultrasound image sequence. The experimental results show the effectiveness of the proposed method both quantitatively and qualitatively.
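
    The spectral-clustering step can be sketched with a numpy-only spectral bipartition: build a Gaussian affinity between frames, form the normalized graph Laplacian, and split frames on the sign of its Fiedler vector. Two clusters and the kernel width are illustrative choices, not the paper's parameters:

```python
import numpy as np

def cluster_frames(frames, sigma=1.0):
    """Group frames into two respiratory-phase clusters by spectral
    bipartitioning: Gaussian affinity from pairwise frame distances,
    normalized graph Laplacian, split on the Fiedler-vector sign."""
    X = np.asarray(frames, float).reshape(len(frames), -1)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))             # affinity matrix
    D = W.sum(1)
    L = np.eye(len(W)) - W / np.sqrt(np.outer(D, D))  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]   # eigenvector of the 2nd-smallest eigenvalue
    return (fiedler > 0).astype(int)
```

    Frames of the same phase end up in the same cluster because they are mutually similar, which is exactly the property the paper exploits in place of explicit breathing-cycle estimation.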

  18. Ellipsoidal corrections for geoid undulation computations

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1981-01-01

The computation of accurate geoid undulations is usually done by combining potential coefficient information and terrestrial gravity data in a cap surrounding the computation point. In doing this, a spherical approximation is made that can cause the errors investigated here. The equations dealing with ellipsoidal corrections developed by Lelgemann and by Moritz were used to develop a computational procedure considering the ellipsoid as a reference surface. Terms in the resulting expression for the geoid undulation are identified as ellipsoidal correction terms. These equations were developed for the case where the Stokes function is used, and for the case where the modified Stokes function is used. For a cap of 20 deg the correction can reach -33 cm.

  19. Measuring work stress among correctional staff: a Rasch measurement approach.

    PubMed

    Higgins, George E; Tewksbury, Richard; Denney, Andrew

    2012-01-01

The amount of stress that correctional staff endure at work is an important issue today. Research has addressed this issue but has yielded no consensus on a properly calibrated measure of correctional staff perceptions of work stress. Using data from a non-random sample of correctional staff (n = 228), the Rasch model was used to assess whether a specific measure of work stress would fit the model. Results show that three items, rather than six, accurately represented correctional staff perceptions of work stress.

  20. Method of absorbance correction in a spectroscopic heating value sensor

    SciTech Connect

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
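
    One plausible reading of the described scheme, treating the correction as a baseline absorbance measured at the non-absorbing wavelength and subtracted from the apparent absorbance (function and argument names are hypothetical, not from the patent):

```python
import math

def true_absorbance(i_ref, i_sample, i_corr_ref, i_corr_sample):
    """Apparent absorbance from the non-absorbing reference fluid and
    the sample, minus a correction absorbance measured at a wavelength
    where the sample does not absorb (so any residual attenuation there
    is instrumental, e.g. scattering or window fouling)."""
    measured = math.log10(i_ref / i_sample)              # apparent absorbance
    correction = math.log10(i_corr_ref / i_corr_sample)  # baseline offset
    return measured - correction
```

    If the sample transmits the non-absorbing wavelength perfectly, the correction term vanishes and the measured absorbance is already the true one.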

  1. Accurate Mass Assignment of Native Protein Complexes Detected by Electrospray Mass Spectrometry

    PubMed Central

    Liepold, Lars O.; Oltrogge, Luke M.; Suci, Peter; Douglas, Trevor; Young, Mark J.

    2009-01-01

    Correct charge state assignment is crucial to assigning an accurate mass to supramolecular complexes analyzed by electrospray mass spectrometry. Conventional charge state assignment techniques fall short of reliably and unambiguously predicting the correct charge state for many supramolecular complexes. We provide an explanation of the shortcomings of the conventional techniques and have developed a robust charge state assignment method that is applicable to all spectra. PMID:19103497

  2. Geological Corrections in Gravimetry

    NASA Astrophysics Data System (ADS)

    Mikuška, J.; Marušiak, I.

    2015-12-01

Applying corrections for the known geology to gravity data can be traced back to the first quarter of the 20th century. Later on, mostly in areas with sedimentary cover, at local and regional scales, the correction known as gravity stripping has been in use since the mid 1960s, provided that there was enough geological information. Stripping at regional to global scales became possible after the release of the CRUST 2.0 and CRUST 1.0 models in the years 2000 and 2013, respectively. The latter model in particular provides quite a new view of the relevant geometries, of the topographic and crustal densities, and of the crust/mantle density contrast. Thus, the isostatic corrections, which have often been used in the past, can now be replaced by procedures working with independent information interpreted primarily from seismic studies. We have developed software for performing geological corrections in the space domain, based on a priori geometry and density grids, which can be of either rectangular or spherical/ellipsoidal type with cells in the shape of rectangles, tesseroids, or triangles. It enables us to calculate the required gravitational effects not only in the form of surface maps or profiles but, for instance, also along vertical lines, which can shed additional light on the nature of the geological correction. The software can work at a variety of scales and considers the input information to an optional distance from the calculation point, up to the antipodes. Our main objective is to treat the geological correction as an alternative to accounting for topography with varying densities, since the bottoms of the topographic masses, namely the geoid or ellipsoid, generally do not represent geological boundaries. We would also like to call attention to possible distortions of the corrected gravity anomalies. This work was supported by the Slovak Research and Development Agency under contract APVV-0827-12.

  3. A comparison of accurate automatic hippocampal segmentation methods.

    PubMed

    Zandifar, Azar; Fonov, Vladimir; Coupé, Pierrick; Pruessner, Jens; Collins, D Louis

    2017-07-15

    The hippocampus is one of the first brain structures affected by Alzheimer's disease (AD). While many automatic methods for hippocampal segmentation exist, few studies have compared them on the same data. In this study, we compare four fully automated hippocampal segmentation methods in terms of their conformity with manual segmentation and their ability to be used as an AD biomarker in clinical settings. We also apply error correction to the four automatic segmentation methods, and complete a comprehensive validation to investigate differences between the methods. The effect size and classification performance is measured for AD versus normal control (NC) groups and for stable mild cognitive impairment (sMCI) versus progressive mild cognitive impairment (pMCI) groups. Our study shows that the nonlinear patch-based segmentation method with error correction is the most accurate automatic segmentation method and yields the most conformity with manual segmentation (κ=0.894). The largest effect size between AD versus NC and sMCI versus pMCI is produced by FreeSurfer with error correction. We further show that, using only hippocampal volume, age, and sex as features, the area under the receiver operating characteristic curve reaches up to 0.8813 for AD versus NC and 0.6451 for sMCI versus pMCI. However, the automatic segmentation methods are not significantly different in their performance. Copyright © 2017. Published by Elsevier Inc.
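
    The reported numbers (e.g. the area under the receiver operating characteristic curve of up to 0.8813 for AD versus NC) are ROC AUCs; given any scalar classifier score, the AUC can be computed directly via the Mann-Whitney statistic:

```python
import numpy as np

def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count as 1/2)."""
    s = np.asarray(scores, float)
    y = np.asarray(labels, bool)
    pos, neg = s[y], s[~y]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

    An AUC of 0.5 corresponds to chance-level separation, which is why the 0.6451 reported for sMCI versus pMCI indicates a much harder discrimination than AD versus NC.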

  4. Accurate ab initio vibrational energies of methyl chloride

    SciTech Connect

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup  HL} and CBS-37{sup  HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY {sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup  HL} and CBS-37{sup  HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.
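    CBS extrapolation of the kind mentioned above is commonly done with a two-point inverse-cube formula, E(X) = E_CBS + A/X^3, where X is the basis-set cardinal number. A hedged sketch (the energies below are hypothetical, not values from the paper, and the paper may use a different extrapolation form):

```python
# Two-point CBS extrapolation sketch: solve E(X) = E_CBS + A/X**3 from
# correlation energies at two cardinal numbers x > y, giving
# E_CBS = (x^3*E_x - y^3*E_y) / (x^3 - y^3).
def cbs_two_point(e_x, e_y, x, y):
    x3, y3 = x ** 3, y ** 3
    return (x3 * e_x - y3 * e_y) / (x3 - y3)

# Hypothetical triple-zeta / quadruple-zeta correlation energies (hartree):
e_tz, e_qz = -0.350, -0.362
e_cbs = cbs_two_point(e_qz, e_tz, 4, 3)
print(f"E_CBS ~ {e_cbs:.4f} Eh")
```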

  5. Accurate transition rates for intercombination lines of singly ionized nitrogen

    NASA Astrophysics Data System (ADS)

    Tayal, S. S.

    2011-01-01

    The transition energies and rates for the 2s22p2 3P1,2-2s2p3 5S2o and 2s22p3s-2s22p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p3 1,3P1o and 2s22p3s 1,3P1o levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  6. Accurate transition rates for intercombination lines of singly ionized nitrogen

    SciTech Connect

    Tayal, S. S.

    2011-01-15

    The transition energies and rates for the 2s{sup 2}2p{sup 2} {sup 3}P{sub 1,2}-2s2p{sup 3} {sup 5}S{sub 2}{sup o} and 2s{sup 2}2p3s-2s{sup 2}2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p{sup 3} {sup 1,3}P{sub 1}{sup o} and 2s{sup 2}2p3s {sup 1,3}P{sub 1}{sup o} levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fraction, and lifetimes have been compared with previous calculations and experiments.

  7. Accurate thermoelastic tensor and acoustic velocities of NaCl

    SciTech Connect

    Marcondes, Michel L.; Shukla, Gaurav; Silveira, Pedro da; Wentzcovitch, Renata M.

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  8. Accurate thermoelastic tensor and acoustic velocities of NaCl

    NASA Astrophysics Data System (ADS)

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  9. Can cancer researchers accurately judge whether preclinical reports will reproduce?

    PubMed Central

    Mandel, David R.; Kimmelman, Jonathan

    2017-01-01

    There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies. PMID:28662052
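    One common way to score probabilistic forecasts like those described above (though not necessarily the metric the study used) is the Brier score, the mean squared difference between the forecast probability and the 0/1 outcome; lower is better. The numbers below are illustrative, not the study's data:

```python
# Brier score sketch for probabilistic replication forecasts:
# mean of (forecast_probability - outcome)^2 over the forecasts.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A researcher forecasting 75% for five replications that all failed (0)
# scores much worse than one who forecast 25%:
overconfident = brier([0.75] * 5, [0] * 5)
calibrated = brier([0.25] * 5, [0] * 5)
print(overconfident, calibrated)
```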

  10. Mill profiler machines soft materials accurately

    NASA Technical Reports Server (NTRS)

    Rauschl, J. A.

    1966-01-01

    Mill profiler machines bevels, slots, and grooves in soft materials, such as styrofoam phenolic-filled cores, to any desired thickness. A single operator can accurately control cutting depths in contour or straight line work.

  11. Peteye detection and correction

    NASA Astrophysics Data System (ADS)

    Yen, Jonathan; Luo, Huitao; Tretter, Daniel

    2007-01-01

    Redeyes are caused by the camera flash light reflecting off the retina. Peteyes refer to similar artifacts in the eyes of other mammals caused by camera flash. In this paper we present a peteye removal algorithm for detecting and correcting peteye artifacts in digital images. Peteye removal for animals is significantly more difficult than redeye removal for humans, because peteyes can be any of a variety of colors, and human face detection cannot be used to localize the animal eyes. In many animals, including dogs and cats, the retina has a special reflective layer that can cause a variety of peteye colors, depending on the animal's breed, age, or fur color, etc. This makes the peteye correction more challenging. We have developed a semi-automatic algorithm for peteye removal that can detect peteyes based on the cursor position provided by the user and correct them by neutralizing the colors with glare reduction and glint retention.

  12. Phaeochromocytoma [corrected] crisis.

    PubMed

    Whitelaw, B C; Prague, J K; Mustafa, O G; Schulte, K-M; Hopkins, P A; Gilbert, J A; McGregor, A M; Aylwin, S J B

    2014-01-01

    Phaeochromocytoma [corrected] crisis is an endocrine emergency associated with significant mortality. There is little published guidance on the management of phaeochromocytoma [corrected] crisis. This clinical practice update summarizes the relevant published literature, including a detailed review of cases published in the past 5 years, and a proposed classification system. We review the recommended management of phaeochromocytoma [corrected] crisis including the use of alpha-blockade, which is strongly associated with survival of a crisis. Mechanical circulatory supportive therapy (including intra-aortic balloon pump or extra-corporeal membrane oxygenation) is strongly recommended for patients with sustained hypotension. Surgical intervention should be deferred until medical stabilization is achieved. © 2013 John Wiley & Sons Ltd.

  13. Correction coil cable

    DOEpatents

    Wang, S.T.

    1994-11-01

    A wire cable assembly adapted for the winding of electrical coils is taught. A primary intended use is in particle tube assemblies for the Superconducting Super Collider. The correction coil cables have wires collected in a wire array with a center rib sandwiched therebetween to form a core assembly. The core assembly is surrounded by an assembly housing having an inner spiral wrap and a counter-wound outer spiral wrap. An alternate embodiment of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable on a particle tube in a particle tube assembly. 7 figs.

  14. Target mass corrections revisited

    SciTech Connect

    Steffens, F.M.; Melnitchouk, W.

    2006-05-15

    We propose a new implementation of target mass corrections to nucleon structure functions which, unlike existing treatments, has the correct kinematic threshold behavior at finite Q{sup 2} in the x{yields}1 limit. We illustrate the differences between the new approach and existing prescriptions by considering specific examples for the F{sub 2} and F{sub L} structure functions, and discuss the broader implications of our results, which call into question the notion of universal parton distribution at finite Q{sup 2}.

  15. Target Mass Corrections Revisited

    SciTech Connect

    W. Melnitchouk; F. Steffens

    2006-03-07

    We propose a new implementation of target mass corrections to nucleon structure functions which, unlike existing treatments, has the correct kinematic threshold behavior at finite Q{sup 2} in the x {yields} 1 limit. We illustrate the differences between the new approach and existing prescriptions by considering specific examples for the F{sub 2} and F{sub L} structure functions, and discuss the broader implications of our results, which call into question the notion of universal parton distribution at finite Q{sup 2}.

  16. Corrective midfoot osteotomies.

    PubMed

    Stapleton, John J; DiDomenico, Lawrence A; Zgonis, Thomas

    2008-10-01

    Corrective midfoot osteotomies involve complete separation of the forefoot and hindfoot through the level of the midfoot, followed by uni-, bi-, or triplanar realignment and arthrodesis. This technique can be performed through various approaches; however, in the high-risk patient, percutaneous and minimum incision techniques are necessary to limit the potential of developing soft tissue injury. These master level techniques require extensive surgical experience and detailed knowledge of lower extremity biomechanics. The authors discuss preoperative clinical and radiographic evaluation, specific operative techniques used, and postoperative management for the high-risk patient undergoing corrective midfoot osteotomy.

  17. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1979-01-01

    Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.
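    The "spherical form of Snell's Law" mentioned above says that, in a spherically stratified atmosphere, the quantity n(r)·r·sin(z) is constant along the ray (Bouguer's formula). A minimal sketch, using a simple exponential refractivity model with illustrative constants (not the report's equations or tables):

```python
import math

# Bouguer's invariant n(r)*r*sin(z) = const in a spherically stratified
# atmosphere: given the apparent zenith angle at the observer, solve for
# the ray's zenith angle at altitude h. Constants are illustrative.
R_EARTH = 6_371_000.0   # mean Earth radius, m
H_SCALE = 8_000.0       # refractivity scale height, m
N0 = 2.77e-4            # surface refractivity (n - 1), roughly sea level

def n_of_h(h):
    """Refractive index at altitude h, exponential decay with height."""
    return 1.0 + N0 * math.exp(-h / H_SCALE)

def zenith_at_altitude(z0_deg, h):
    """Zenith angle of the ray at altitude h from the spherical Snell invariant."""
    invariant = n_of_h(0.0) * R_EARTH * math.sin(math.radians(z0_deg))
    s = invariant / (n_of_h(h) * (R_EARTH + h))
    return math.degrees(math.asin(s))

# The ray bends toward the vertical as it climbs into thinner air:
print(zenith_at_altitude(80.0, 10_000.0))
```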

  18. Correction of ocular dystopia.

    PubMed

    Janecka, I P

    1996-04-01

    The purpose of this study was to examine results with elective surgical correction of enophthalmos. The study was a retrospective assessment in a university-based referral practice. A consecutive sample of 10 patients who developed ocular dystopia following orbital trauma was examined. The main outcome measures were a subjective evaluation by patients and objective measurements of patients' eye position. The intervention was three-dimensional orbital reconstruction with titanium plates. It is concluded that satisfactory correction of enophthalmos and ocular dystopia can be achieved with elective surgery using titanium plates. In addition, intraoperative measurements of eye position in three planes increases the precision of surgery.

  19. Prostate Diffusion Imaging with Distortion Correction

    PubMed Central

    Rakow-Penner, Rebecca A.; White, Nathan S.; Margolis, Daniel J. A.; Parsons, J. Kellogg; Schenker-Ahmed, Natalie; Kuperman, Joshua M.; Bartsch, Hauke; Choi, Hyung W.; Bradley, William G.; Shabaik, Ahmed; Huang, Jiaoti; Liss, Michael A.; Marks, Leonard; Kane, Christopher J.; Reiter, Robert E.; Raman, Steven S.; Karow, David S.; Dale, Anders M.

    2015-01-01

    Purpose Diffusion imaging in the prostate is susceptible to distortion from B0 inhomogeneity. Distortion correction in prostate imaging is not routinely performed, resulting in diffusion images without accurate localization of tumors. We performed and evaluated distortion correction for diffusion imaging in the prostate. Materials and Methods 28 patients underwent pre-operative MRI (T2, Gadolinium perfusion, diffusion at b = 800 s/mm2). The restriction spectrum protocol parameters included b-values of 0, 800, 1500, and 4000 s/mm2 in 30 directions for each nonzero b-value. To correct for distortion, forward and reverse trajectories were collected at b = 0 s/mm2. Distortion maps were generated to reflect the offset of the collected data versus the corrected data. Whole-mount histology was available for correlation. Results Across the 27 patients evaluated (excluding one patient due to data collection error), the average root mean square distortion distance of the prostate was 3.1 mm (standard deviation, 2.2 mm; maximum distortion, 12 mm). Conclusion Improved localization of prostate cancer by MRI will allow better surgical planning, targeted biopsies and image-guided treatment therapies. Distortion distances of up to 12 mm due to standard diffusion imaging may grossly misdirect treatment decisions. Distortion correction for diffusion imaging in the prostate improves tumor localization. PMID:26220859

  20. Probabilistic error correction for RNA sequencing.

    PubMed

    Le, Hai-Son; Schulz, Marcel H; McCauley, Brenna M; Hinman, Veronica F; Bar-Joseph, Ziv

    2013-05-01

    Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing. Here we present SEquencing Error CorrEction in Rna-seq data (SEECER), a hidden Markov Model (HMM)-based method, which is the first to successfully address these problems. SEECER efficiently learns hundreds of thousands of HMMs and uses these to correct sequencing errors. Using human RNA-Seq data, we show that SEECER greatly improves on previous methods in terms of quality of read alignment to the genome and assembly accuracy. To illustrate the usefulness of SEECER for de novo transcriptome studies, we generated new RNA-Seq data to study the development of the sea cucumber Parastichopus parvimensis. Our corrected assembled transcripts shed new light on two important stages in sea cucumber development. Comparison of the assembled transcripts to known transcripts in other species has also revealed novel transcripts that are unique to sea cucumber, some of which we have experimentally validated. Supporting website: http://sb.cs.cmu.edu/seecer/.

  1. Probabilistic error correction for RNA sequencing

    PubMed Central

    Le, Hai-Son; Schulz, Marcel H.; McCauley, Brenna M.; Hinman, Veronica F.; Bar-Joseph, Ziv

    2013-01-01

    Sequencing of RNAs (RNA-Seq) has revolutionized the field of transcriptomics, but the reads obtained often contain errors. Read error correction can have a large impact on our ability to accurately assemble transcripts. This is especially true for de novo transcriptome analysis, where a reference genome is not available. Current read error correction methods, developed for DNA sequence data, cannot handle the overlapping effects of non-uniform abundance, polymorphisms and alternative splicing. Here we present SEquencing Error CorrEction in Rna-seq data (SEECER), a hidden Markov Model (HMM)–based method, which is the first to successfully address these problems. SEECER efficiently learns hundreds of thousands of HMMs and uses these to correct sequencing errors. Using human RNA-Seq data, we show that SEECER greatly improves on previous methods in terms of quality of read alignment to the genome and assembly accuracy. To illustrate the usefulness of SEECER for de novo transcriptome studies, we generated new RNA-Seq data to study the development of the sea cucumber Parastichopus parvimensis. Our corrected assembled transcripts shed new light on two important stages in sea cucumber development. Comparison of the assembled transcripts to known transcripts in other species has also revealed novel transcripts that are unique to sea cucumber, some of which we have experimentally validated. Supporting website: http://sb.cs.cmu.edu/seecer/. PMID:23558750

  2. Partial Volume Correction in Quantitative Amyloid Imaging

    PubMed Central

    Su, Yi; Blazey, Tyler M.; Snyder, Abraham Z.; Raichle, Marcus E.; Marcus, Daniel S.; Ances, Beau M.; Bateman, Randall J.; Cairns, Nigel J.; Aldea, Patricia; Cash, Lisa; Christensen, Jon J.; Friedrichsen, Karl; Hornbeck, Russ C.; Farrar, Angela M.; Owen, Christopher J.; Mayeux, Richard; Brickman, Adam M.; Klunk, William; Price, Julie C.; Thompson, Paul M.; Ghetti, Bernardino; Saykin, Andrew J.; Sperling, Reisa A.; Johnson, Keith A.; Schofield, Peter R.; Buckles, Virginia; Morris, John C.; Benzinger, Tammie L. S.

    2014-01-01

    Amyloid imaging is a valuable tool for research and diagnosis in dementing disorders. As positron emission tomography (PET) scanners have limited spatial resolution, measured signals are distorted by partial volume effects. Various techniques have been proposed for correcting partial volume effects, but there is no consensus as to whether these techniques are necessary in amyloid imaging, and, if so, how they should be implemented. We evaluated a two-component partial volume correction technique and a regional spread function technique using both simulated and human Pittsburgh compound B (PiB) PET imaging data. Both correction techniques compensated for partial volume effects and yielded improved detection of subtle changes in PiB retention. However, the regional spread function technique was more accurate in application to simulated data. Because PiB retention estimates depend on the correction technique, standardization is necessary to compare results across groups. Partial volume correction has sometimes been avoided because it increases the sensitivity to inaccuracy in image registration and segmentation. However, our results indicate that appropriate PVC may enhance our ability to detect changes in amyloid deposition. PMID:25485714
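    The two-component correction evaluated above rests on a simple idea: the measured PET signal is the true tissue signal blurred by the scanner point spread function (PSF), so dividing the measured image by the similarly blurred binary tissue mask recovers the activity inside tissue. A toy 1-D sketch with a Gaussian PSF (illustrative only, not the paper's implementation):

```python
import math

# Toy 1-D two-component partial volume correction: blur a uniform tissue
# signal with a Gaussian PSF, then divide by the PSF-blurred tissue mask.
def gauss_kernel(sigma, radius):
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    r = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - r
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

tissue = [0.0] * 10 + [1.0] * 10 + [0.0] * 10    # binary tissue mask
true_activity = [t * 4.0 for t in tissue]         # uniform uptake of 4.0
psf = gauss_kernel(sigma=2.0, radius=6)

measured = convolve(true_activity, psf)           # partial-volume-blurred data
mask_blur = convolve(tissue, psf)
# Divide only where the blurred mask has support, else set to zero:
corrected = [m / b if b > 0.05 else 0.0 for m, b in zip(measured, mask_blur)]

print(measured[15], corrected[15])                # center of the tissue strip
```

    For uniform uptake the division recovers the true value exactly; real data add noise, segmentation error, and registration error, which is why the paper stresses standardization.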

  3. Optical Correction of Aphakia in Children

    PubMed Central

    Baradaran-Rafii, Alireza; Shirzadeh, Ebrahim; Eslani, Medi; Akbari, Mitra

    2014-01-01

    There are several reasons why the correction of aphakia differs between children and adults. First, a child's eye is still growing during the first few years of life, and during early childhood the refractive elements of the eye undergo radical changes. Second, the immature visual system in young children puts them at risk of developing amblyopia if visual input is defocused or unequal between the two eyes. Third, the incidence of many complications, in which certain risks are acceptable in adults, is unacceptable in children. The optical correction of aphakia in children has changed dramatically; however, accurate optical rehabilitation and postoperative supervision in pediatric cases are more difficult than in adults. Treatment and optical rehabilitation in pediatric aphakic patients remains a challenge for ophthalmologists. The aim of this review is to cover issues regarding optical correction of aphakia in children: kinds of optical correction, indications, timing of intraocular lens (IOL) implantation, types of IOLs, site of implantation, IOL power calculations and selection, complications of IOL implantation in pediatric patients, and finally to determine the preferred choice of optical correction. However, treatment of pediatric aphakia is one step on the long road to visual rehabilitation, not the end of the journey. PMID:24982736

  4. Extremely Accurate On-Orbit Position Accuracy using TDRSS

    NASA Technical Reports Server (NTRS)

    Stocklin, Frank; Toral, Marco; Bar-Sever, Yoaz; Rush, John

    2006-01-01

    NASA is planning to launch a new service for Earth satellites providing them with precise GPS differential corrections and other ancillary information enabling decimeter level orbit determination accuracy and nanosecond time-transfer accuracy, onboard, in real-time. The TDRSS Augmentation Service for Satellites (TASS) will broadcast its message on the S-band multiple access forward channel of NASA's Tracking and Data Relay Satellite System (TDRSS). The satellite's phased-array antenna has been configured to provide a wide beam, extending coverage up to 1000 km altitude over the poles. Global coverage will be ensured with broadcasts from three or more TDRSS satellites. The GPS differential corrections are provided by the NASA Global Differential GPS (GDGPS) System, developed and operated by JPL. The GDGPS System employs a global ground network of more than 70 GPS receivers to monitor the GPS constellation in real time. The system provides real-time estimates of the GPS satellite states, as well as many other real-time products such as differential corrections, global ionospheric maps, and integrity monitoring. The unique multiply redundant architecture of the GDGPS System ensures very high reliability, with 99.999% demonstrated since the inception of the system in early 2000. The estimated real-time GPS orbit and clock states provided by the GDGPS System are accurate to better than 20 cm 3D RMS, and have been demonstrated to support sub-decimeter real-time positioning and orbit determination for a variety of terrestrial, airborne, and spaceborne applications. In addition to the GPS differential corrections, TASS will provide real-time Earth orientation and solar flux information that enable precise onboard knowledge of the Earth-fixed position of the spacecraft, and precise orbit prediction and planning capabilities. TASS will also provide 5-second alarms for GPS integrity failures based on the unique GPS integrity monitoring service of the GDGPS System.

  5. Cyclic period changes and the light-time effect in eclipsing binaries: A low-mass companion around the system VV Ursae Majoris

    NASA Astrophysics Data System (ADS)

    Tanrıver, Mehmet

    2015-04-01

    In this article, a period analysis of the late-type eclipsing binary VV UMa is presented. This work is based on the periodic variation of eclipse timings of the VV UMa binary. We determined the orbital properties and mass of a third orbiting body in the system by analyzing the light-travel time effect. The O-C diagram constructed for all available minima times of VV UMa exhibits a cyclic character superimposed on a linear variation. This variation includes three maxima and two minima within approximately 28,240 orbital periods of the system, which can be explained as the light-travel time effect (LITE) caused by an unseen third body in a triple system that produces variations of the eclipse arrival times. New parameter values of the light-travel time effect caused by the third body were computed, with a period of 23.22 ± 0.17 years in the system. The cyclic-variation analysis produces a value of 0.0139 day as the semi-amplitude of the light-travel time effect and 0.35 as the orbital eccentricity of the third body. The mass of the third body that orbits the eclipsing binary stars is 0.787 ± 0.02 M⊙, and the semi-major axis of its orbit is 10.75 AU.
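    The LITE arithmetic in the abstract can be sketched in a few lines: the O-C semi-amplitude A converts to a projected orbital radius a12·sin(i) ≈ A·c, which with the third-body period P3 gives the mass function f(m3) = (a12·sin i)³/P3² (in AU, years, and solar masses). The sketch below uses the abstract's A and P3 but ignores the eccentricity/orientation factor and assumes a hypothetical binary mass and i = 90°, so it only approximates the paper's full analysis (which yields m3 = 0.787 M⊙):

```python
import math

# Light-travel time effect (LITE) back-of-envelope: semi-amplitude of the
# O-C curve -> projected orbit size -> mass function -> third-body mass.
C_AU_PER_DAY = 173.1446       # speed of light in AU/day

A_days = 0.0139               # LITE semi-amplitude (from the abstract)
P3_years = 23.22              # third-body period (from the abstract)

a12_sini = A_days * C_AU_PER_DAY          # projected binary orbit radius, AU
f_m3 = a12_sini ** 3 / P3_years ** 2      # mass function, solar masses

# Solve f(m3) = (m3*sin i)^3 / (m1 + m2 + m3)^2 for m3 by fixed-point
# iteration, with an assumed (hypothetical) binary mass and i = 90 deg:
m12 = 2.0                                  # assumed total binary mass, Msun
m3 = 0.3                                   # starting guess
for _ in range(50):
    m3 = (f_m3 * (m12 + m3) ** 2) ** (1.0 / 3.0)

print(f"a12*sin(i) ~ {a12_sini:.2f} AU, f(m3) ~ {f_m3:.4f} Msun, m3 ~ {m3:.2f} Msun")
```

    The difference from the published 0.787 M⊙ comes from the neglected eccentricity/longitude-of-periastron factor in the semi-amplitude and from the actual measured binary mass.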

  6. Correction to ATel 10782

    NASA Astrophysics Data System (ADS)

    Zhang, Jujia

    2017-09-01

    I report a correction to the spectroscopic classification of the optical transients announced in ATEL #10782. In the main text of the telegram, the date of observation should be UT 2017 Sep. 25.6, which was written as UT 2017 Sep. 26.6 in the original report. I apologize for any confusion caused by this typographical error.

  7. Errors and Their Corrections

    ERIC Educational Resources Information Center

    Joosten, Albert Max

    2016-01-01

    "Our primary concern is not that the child learns to do something without mistakes. Our real concern is that the child does what he needs, with interest." The reaction of so many adults to the mistakes of children is to correct, immediately and directly, says Joosten. To truly aid the child in development, we must learn to control our…

  8. New Directions in Corrections.

    ERIC Educational Resources Information Center

    McKee, John M.

    A picture of the American prison situation in the past and in its present changing form is presented. The object of the correctional community is becoming more and more that of successfully reintegrating the ex-offender into the social community from which he has been separated. It is predicted that within the next five years: (1) Every state will…

  9. Correction to ATel 10681

    NASA Astrophysics Data System (ADS)

    Wang, Xiaofeng

    2017-08-01

    We report a correction to the spectroscopic classification of two optical transients announced in ATel #10681. In the main text of the telegram, SN 2017giq and MASTER OT J033744.97+723159.0 should be classified as type Ic and type IIb supernovae, respectively, which were reversed in the original report. We apologize for any confusion caused by this typographical error.

  10. Rethinking Correctional Staff Development.

    ERIC Educational Resources Information Center

    Williams, David C.

    There have been enduring conflicts in correctional institutions between personnel charged with rehabilitative duties and those who oversee authority. It is only within the past few years that realistic communication between these groups has been tolerated. The same period of time has been characterized by the infusion of training and staff…

  11. Refraction corrections for surveying

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Optical measurements of range and elevation angles are distorted by refraction of Earth's atmosphere. Theoretical discussion of effect, along with equations for determining exact range and elevation corrections, is presented in report. Potentially useful in optical site surveying and related applications, analysis is easily programmed on pocket calculator. Input to equation is measured range and measured elevation; output is true range and true elevation.

  12. Spelling Words Correctly.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    Traditional methods of teaching spelling emphasized that pupils might write each new spelling word correctly and repeatedly from a weekly list in the spelling textbook. Some weaknesses in this approach are that rote learning is being stressed without emphasizing application of what has been learned, and that there is nothing which relates the…

  13. Thermodynamically Correct Bioavailability Estimations

    DTIC Science & Technology

    1992-04-30

    Approved for public release; distribution unlimited. ... research is to develop thermodynamically correct bioavailability estimations using chromatographic stationary phases as a model of the "interphase

  15. Holographic Phase Correction.

    DTIC Science & Technology

    1987-06-01

…aberrated wavefront. With this in mind, the following example was considered. 3.2 REPLAY EFFICIENCY - AN EXAMPLE: This example represents the phase… practical points to bear in mind when considering the phase correction, in particular the flatness of the hologram input and output surfaces, and the…

  16. Issues in Correctional Training and Casework. Correctional Monograph.

    ERIC Educational Resources Information Center

    Wolford, Bruce I., Ed.; Lawrenz, Pam, Ed.

    The eight papers contained in this monograph were drawn from two national meetings on correctional training and casework. Titles and authors are: "The Challenge of Professionalism in Correctional Training" (Michael J. Gilbert); "A New Perspective in Correctional Training" (Jack Lewis); "Reasonable Expectations in Correctional Officer Training:…

  17. DNA barcode data accurately assign higher spider taxa.

    PubMed

    Coddington, Jonathan A; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina; Kuntner, Matjaž

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios "barcodes" (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families-taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75-100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of the
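
The decision rule implied by the reported thresholds can be sketched as follows; the function name and the (genus, family) top-hit representation are illustrative, and a production version would apply the 5%-error caveats and library-coverage conditions discussed in the paper:

```python
def assign_taxon(top_hit, pident):
    """Assign an unknown barcode to a higher taxon from the percent
    identity (PIdent) of its best BLAST hit, using the heuristic
    thresholds reported for spiders: genus for PIdent > 95, family
    for PIdent >= 91, otherwise no assignment.

    `top_hit` is a (genus, family) pair for the best-matching
    reference sequence (an illustrative representation).
    """
    genus, family = top_hit
    if pident > 95:
        return ("genus", genus)
    if pident >= 91:
        return ("family", family)
    return ("unassigned", None)
```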

  18. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of

  19. Accurately determining log and bark volumes of saw logs using high-resolution laser scan data

    Treesearch

    R. Edward Thomas; Neal D. Bennett

    2014-01-01

    Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...

  20. Accurate superimposition of perimetry data onto fundus photographs.

    PubMed

    Bek, T; Lund-Andersen, H

    1990-02-01

A technique for accurate superimposition of computerized perimetry data onto the corresponding retinal locations seen on fundus photographs was developed. The technique was designed to take into account: 1) that the photographic field of view of the fundus camera varies with ametropia-dependent camera focusing; 2) possible distortion by the fundus camera; and 3) that corrective lenses employed during perimetry magnify or minify the visual field. The technique allowed an overlay of perimetry data of the central 60 degrees of the visual field onto fundus photographs with an accuracy of 0.5 degree. The correlation of localized retinal morphology to localized retinal function was therefore limited by the spatial resolution of the computerized perimetry, which was 2.5 degrees in the Dicon AP-2500 perimeter employed for this study. The theoretical assumptions of the technique were confirmed by comparing visual field records to fundus photographs from patients with morphologically well-defined non-functioning lesions in the retina.

  1. Simple and accurate sum rules for highly relativistic systems

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2005-03-01

    In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.
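
For orientation, the nonrelativistic results that closure yields (and that the paper generalizes to relativistic systems) are the Thomas-Reiche-Kuhn sum rule and the Bethe sum rule for a Z-electron target; these standard textbook forms are quoted here only for context:

```latex
\sum_n f_{n0} = Z, \qquad
S(q) \;=\; \sum_n (E_n - E_0)\,
\Bigl|\bigl\langle n\bigr|\sum_{j=1}^{Z} e^{\,i\mathbf{q}\cdot\mathbf{r}_j}\bigl|0\bigr\rangle\Bigr|^{2}
\;=\; \frac{\hbar^{2} q^{2}}{2m}\,Z .
```

In the relativistic case these closure identities no longer hold in this simple form, which is the difficulty the paper addresses.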

  2. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    PubMed

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  3. A fast and accurate FPGA based QRS detection system.

    PubMed

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

An accurate Field Programmable Gate Array (FPGA) based ECG Analysis system is described in this paper. The design, based on a popular software based QRS detection algorithm, calculates the threshold value for the next peak detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting the beats correctly when tested with a subset of five 30 minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has higher detection accuracy than our previous design, and takes almost half the analysis time of the software-based approach.
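
The threshold-update scheme described above (median of the eight previously detected peaks) can be sketched in software; the 0.7 scaling factor and class name are assumptions for illustration, not values from the paper:

```python
from collections import deque
import statistics

class QRSThreshold:
    """Adaptive detection-threshold sketch: the threshold for the next
    peak-detection cycle is derived from the median of the last eight
    detected peak amplitudes, as described in the abstract."""

    def __init__(self, scale=0.7):
        self.peaks = deque(maxlen=8)  # keeps only the 8 most recent peaks
        self.scale = scale

    def update(self, peak_amplitude):
        """Record a newly detected peak and return the new threshold."""
        self.peaks.append(peak_amplitude)
        return self.threshold()

    def threshold(self):
        if not self.peaks:
            return 0.0
        return self.scale * statistics.median(self.peaks)
```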

  4. Neutron supermirrors: an accurate theory for layer thickness computation

    NASA Astrophysics Data System (ADS)

    Bray, Michael

    2001-11-01

    We present a new theory for the computation of Super-Mirror stacks, using accurate formulas derived from the classical optics field. Approximations are introduced into the computation, but at a later stage than existing theories, providing a more rigorous treatment of the problem. The final result is a continuous thickness stack, whose properties can be determined at the outset of the design. We find that the well-known fourth power dependence of number of layers versus maximum angle is (of course) asymptotically correct. We find a formula giving directly the relation between desired reflectance, maximum angle, and number of layers (for a given pair of materials). Note: The author of this article, a classical opticist, has limited knowledge of the Neutron world, and begs forgiveness for any shortcomings, erroneous assumptions and/or misinterpretation of previous authors' work on the subject.

  5. CALCULATING ACCURATE SHUFFLER COUNT RATES WITH APPLICATIONS

    SciTech Connect

    P. M. RINARD

    2001-05-01

    Shufflers are used to assay uranium and other fissile elements in bulk and waste quantities. They normally require physical calibration standards to achieve the most-accurate results, but such standards are generally rare and expensive, so inappropriate standards are often used out of necessity. This paper reports on a new technique that has been developed to calculate accurate count rates, in effect simulating physical standards with rapid and inexpensive calculations. The technique has been benchmarked on existing oxide and metallic standards, used to study a variety of conditions for which standards do not exist, and applied to inventory items needing verification measurements even though appropriate physical standards do not exist.

  6. Correction coil cable

    DOEpatents

    Wang, Sou-Tien

    1994-11-01

A wire cable assembly (10, 310) adapted for the winding of electrical coils is taught. A primary intended use is for use in particle tube assemblies (532) for the superconducting super collider. The correction coil cables (10, 310) have wires (14, 314) collected in wire arrays (12, 312) with a center rib (16, 316) sandwiched therebetween to form a core assembly (18, 318). The core assembly (18, 318) is surrounded by an assembly housing (20, 320) having an inner spiral wrap (22, 322) and a counter wound outer spiral wrap (24, 324). An alternate embodiment (410) of the invention is rolled into a keystoned shape to improve radial alignment of the correction coil cable (410) on a particle tube (733) in a particle tube assembly (732).

  7. CTI Correction Code

    NASA Astrophysics Data System (ADS)

    Massey, Richard; Stoughton, Chris; Leauthaud, Alexie; Rhodes, Jason; Koekemoer, Anton; Ellis, Richard; Shaghoulian, Edgar

    2013-07-01

    Charge Transfer Inefficiency (CTI) due to radiation damage above the Earth's atmosphere creates spurious trailing in images from Charge-Coupled Device (CCD) imaging detectors. Radiation damage also creates unrelated warm pixels, which can be used to measure CTI. This code provides pixel-based correction for CTI and has proven effective in Hubble Space Telescope Advanced Camera for Surveys raw images, successfully reducing the CTI trails by a factor of ~30 everywhere in the CCD and at all flux levels. The core is written in java for speed, and a front-end user interface is provided in IDL. The code operates on raw data by returning individual electrons to pixels from which they were unintentionally dragged during readout. Correction takes about 25 minutes per ACS exposure, but is trivially parallelisable to multiple processors.
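
The correction strategy (returning electrons to the pixels from which readout dragged them) can be illustrated with a toy one-dimensional trail model; the trap parameters and the simple fixed-point iteration below are assumptions for illustration, not the calibrated HST/ACS model:

```python
def add_trail(column, trap_fraction=0.02, release=0.8):
    """Forward model of CTI trailing in one CCD column: during readout a
    small fraction of each pixel's charge is captured by traps and
    re-released into the trailing pixels with a geometric (exponential)
    profile. Parameter values are illustrative."""
    out = [float(c) for c in column]
    n = len(out)
    for i in range(n):
        captured = trap_fraction * out[i]
        out[i] -= captured
        for k in range(i + 1, n):
            out[k] += captured * (1.0 - release) * release ** (k - i - 1)
    return out

def remove_trail(observed, iters=10, trap_fraction=0.02, release=0.8):
    """Pixel-based correction sketch: iteratively refine an estimate of
    the true column until re-trailing it reproduces the observation,
    i.e. return electrons to the pixels they were dragged from."""
    est = [float(o) for o in observed]
    for _ in range(iters):
        trailed = add_trail(est, trap_fraction, release)
        est = [e + (o - t) for e, o, t in zip(est, observed, trailed)]
    return est
```

Because the trailing perturbation is small, the fixed-point iteration contracts quickly and a handful of iterations recovers the input to high precision.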

  8. Voltage correction power flow

    SciTech Connect

    Rajicic, D.; Ackovski, R.; Taleski, R. . Dept. of Electrical Engineering)

    1994-04-01

A method for power flow solution of weakly meshed distribution and transmission networks is presented. It is based on oriented ordering of network elements, which allows efficient construction of the loop impedance matrix and rational organization of processes such as power summation (backward sweep), current summation (backward sweep), and node voltage calculation (forward sweep). The first step of the algorithm is calculation of node voltages on the radial part of the network. The second step is calculation of the breakpoint currents. The procedure then returns to the first step, preceded by a voltage correction. It is illustrated that, using the voltage-correction approach, the iterative process of weakly meshed network voltage calculation is faster and more reliable.
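
The backward and forward sweeps named in the abstract can be sketched for the radial part of the network as follows; the oriented ordering is assumed to give every node a larger index than its parent, and all names, the flat-start initialisation, and the fixed iteration count are illustrative choices, not the paper's exact formulation:

```python
def radial_power_flow(slack_v, parent, z, s_load, iters=20):
    """Backward/forward sweep for a radial network (the first step of the
    voltage-correction method). Node 0 is the slack bus; parent[i] is the
    upstream node of i, z[i] the impedance of the branch feeding i, and
    s_load[i] the complex power drawn at i."""
    n = len(parent)
    v = [complex(slack_v)] * n  # flat start
    for _ in range(iters):
        # backward sweep: sum load currents from the leaves to the source
        i_branch = [0j] * n
        for node in range(n - 1, 0, -1):
            i_branch[node] += (s_load[node] / v[node]).conjugate()
            i_branch[parent[node]] += i_branch[node]
        # forward sweep: recompute node voltages from the source outward
        for node in range(1, n):
            v[node] = v[parent[node]] - z[node] * i_branch[node]
    return v
```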

  9. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  10. Correcting Duporcq's theorem☆

    PubMed Central

    Nawratil, Georg

    2014-01-01

    In 1898, Ernest Duporcq stated a famous theorem about rigid-body motions with spherical trajectories, without giving a rigorous proof. Today, this theorem is again of interest, as it is strongly connected with the topic of self-motions of planar Stewart–Gough platforms. We discuss Duporcq's theorem from this point of view and demonstrate that it is not correct. Moreover, we also present a revised version of this theorem. PMID:25540467

  11. Accurate pointing of tungsten welding electrodes

    NASA Technical Reports Server (NTRS)

    Ziegelmeier, P.

    1971-01-01

Thoriated-tungsten electrodes are pointed accurately and quickly using sodium nitrite. The point produced is smooth, and no effort is necessary to hold the tungsten rod concentric. The chemically produced point can be used several times longer than ground points. This method reduces the time and cost of preparing tungsten electrodes.

  12. BTPS correction for ceramic flow sensor.

    PubMed

    Hankinson, J L; Viola, J O; Petsonk, E L; Ebeling, T R

    1994-05-01

    Several commercially available spirometers use unheated ceramic elements as flow sensors to determine flow and calculate volume of air. The usual method of correcting the resulting flow and volume values to body temperature pressure saturated (BTPS) is to apply a constant factor approximately equal to 30 percent of the full BTPS correction factor. To evaluate the usual BTPS correction factor technique, we tested several sensors with a mechanical pump using both room air and air heated to 37 degrees C and saturated with water vapor. The volume signals used to test the sensors were volume ramps (constant flow) and the first four American Thoracic Society (ATS) standard waveforms. The percent difference in FEV1 obtained using room vs heated-humidified air (proportional to the magnitude of the BTPS correction factor needed) ranged from 0.3 percent to 6.2 percent and varied with the number of maneuvers previously performed, the time interval between maneuvers, the volume of the current and previous maneuvers, and the starting temperature of the sensor. The temperature of the air leaving the sensor (exit temperature) showed a steady rise with each successive maneuver using heated air. When six subjects performed repeated tests over several days (each test consisting of at least three maneuvers), a maneuver order effect was observed similar to the results using the mechanical pump. These results suggest that a dynamic, rather than static, BTPS correction factor is needed for accurate estimations of forced expiratory volumes and to reduce erroneous variability between successive maneuvers. Use of exit air temperature provides a means of estimating a dynamic BTPS correction factor, and this technique may be sufficient to provide an FEV1 accuracy of less than +/- 3 percent for exit air temperatures from 5 degrees to 28 degrees C.
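
The dynamic correction argued for above would feed a measured, per-maneuver gas temperature (e.g. the exit-air temperature) into the textbook BTPS conversion, rather than applying one static factor. The sketch below uses the standard BTPS formula with the Magnus vapour-pressure approximation; both are assumptions of this illustration, not the paper's calibration:

```python
import math

def btps_factor(gas_temp_c, p_bar_mmhg=760.0):
    """Textbook BTPS conversion factor for spirometry: scales a volume
    measured at gas temperature `gas_temp_c` (saturated with water
    vapour) to body conditions (37 C, saturated, PH2O = 47 mmHg).
    Water-vapour pressure is approximated with the Magnus formula."""
    p_h2o_hpa = 6.1094 * math.exp(17.625 * gas_temp_c / (gas_temp_c + 243.04))
    p_h2o_mmhg = p_h2o_hpa * 0.750062  # hPa -> mmHg
    return (310.0 / (273.15 + gas_temp_c)) * \
           ((p_bar_mmhg - p_h2o_mmhg) / (p_bar_mmhg - 47.0))
```

At 37 C the factor is ~1 by construction; the colder the gas in the sensor, the larger the correction, which is why a sensor that warms up over successive maneuvers needs a dynamic rather than static factor.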

  13. A highly accurate ab initio potential energy surface for methane

    NASA Astrophysics Data System (ADS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2016-09-01

    A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels with the four fundamentals of 12CH4 reproduced with a root-mean-square error of 0.70 cm-1. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.

  14. Interventions to Correct Misinformation About Tobacco Products

    PubMed Central

    Cappella, Joseph N.; Maloney, Erin; Ophir, Yotam; Brennan, Emily

    2016-01-01

    In 2006, the U.S. District Court held that tobacco companies had “falsely and fraudulently” denied: tobacco causes lung cancer; environmental smoke endangers children’s respiratory systems; nicotine is highly addictive; low tar cigarettes were less harmful when they were not; they marketed to children; they manipulated nicotine delivery to enhance addiction; and they concealed and destroyed evidence to prevent accurate public knowledge. The courts required the tobacco companies to repair this misinformation. Several studies evaluated types of corrective statements (CS). We argue that most CS proposed (“simple CS’s”) will fall prey to “belief echoes” leaving affective remnants of the misinformation untouched while correcting underlying knowledge. Alternative forms for CS (“enhanced CS’s”) are proposed that include narrative forms, causal linkage, and emotional links to the receiver. PMID:27135046

  15. FIELD CORRECTION FACTORS FOR PERSONAL NEUTRON DOSEMETERS.

    PubMed

    Luszik-Bhadra, M

    2016-09-01

A field-dependent correction factor can be obtained by comparing the readings of two albedo neutron dosemeters fixed in opposite directions on a polyethylene sphere to the H*(10) reading as determined with a thermal neutron detector in the centre of the same sphere. The work shows that the field calibration technique as used for albedo neutron dosemeters can be generalised for all kinds of dosemeters, since H*(10) is a conservative estimate of the sum of the personal dose equivalents Hp(10) in two opposite directions. This result is drawn from reference values as determined by spectrometers within the EVIDOS project at workplaces of nuclear installations in Europe. More accurate field-dependent correction factors can be achieved by the analysis of several personal dosemeters on a phantom, but reliable angular responses of these dosemeters need to be taken into account.
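
The comparison described in the first sentence reduces to a ratio of readings; the function below is a hypothetical sketch of that arithmetic (the name and argument layout are illustrative, not from the paper):

```python
def field_correction_factor(h_star_10, dose_front, dose_back):
    """Field-dependent correction factor sketched from the described
    setup: ratio of the H*(10) reference reading (thermal detector at
    the centre of the polyethylene sphere) to the summed readings of two
    dosemeters mounted in opposite directions on the same sphere."""
    return h_star_10 / (dose_front + dose_back)
```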

  16. Accurate phylogenetic classification of DNA fragments based on sequence composition

    SciTech Connect

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
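
The composition signal the abstract relies on can be illustrated with a toy k-mer profile classifier. PhyloPythia itself uses support vector machines over genome-wide signatures, so the tetranucleotide vectors and the nearest-centroid rule below are deliberate simplifications, and all names are illustrative:

```python
from collections import Counter
from itertools import product
import math

# all 256 tetranucleotides, in a fixed order
KMERS = ["".join(p) for p in product("ACGT", repeat=4)]

def composition_vector(seq, k=4):
    """Normalised tetranucleotide frequency vector of a DNA sequence,
    the kind of composition feature a classifier can be trained on."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts[m] for m in KMERS) or 1
    return [counts[m] / total for m in KMERS]

def classify(fragment, clade_centroids):
    """Assign the fragment to the clade whose mean composition vector
    is closest in Euclidean distance (a stand-in for the SVM)."""
    v = composition_vector(fragment)
    return min(clade_centroids,
               key=lambda c: math.dist(v, clade_centroids[c]))
```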

  17. An Accurate and Efficient Method of Computing Differential Seismograms

    NASA Astrophysics Data System (ADS)

    Hu, S.; Zhu, L.

    2013-12-01

Inversion of seismic waveforms for Earth structure usually requires computing partial derivatives of seismograms with respect to velocity model parameters. We developed an accurate and efficient method to calculate differential seismograms for multi-layered elastic media, based on the Thomson-Haskell propagator matrix technique. We first derived the partial derivatives of the Haskell matrix and its compound matrix with respect to the layer parameters (P wave velocity, shear wave velocity, and density). We then derived the partial derivatives of the surface displacement kernels in the frequency-wavenumber domain. The differential seismograms are obtained using the frequency-wavenumber double integration method. The implementation is computationally efficient, and the total computing time is proportional to the time of computing the seismogram itself, i.e., independent of the number of layers in the model. We verified the correctness of the results by comparing with differential seismograms computed using the finite-difference method. Our results are more accurate because of the analytical nature of the derived partial derivatives.

  18. Anisotropic Turbulence Modeling for Accurate Rod Bundle Simulations

    SciTech Connect

    Baglietto, Emilio

    2006-07-01

    An improved anisotropic eddy viscosity model has been developed for accurate predictions of the thermal hydraulic performances of nuclear reactor fuel assemblies. The proposed model adopts a non-linear formulation of the stress-strain relationship in order to include the reproduction of the anisotropic phenomena, and in combination with an optimized low-Reynolds-number formulation based on Direct Numerical Simulation (DNS) to produce correct damping of the turbulent viscosity in the near wall region. This work underlines the importance of accurate anisotropic modeling to faithfully reproduce the scale of the turbulence driven secondary flows inside the bundle subchannels, by comparison with various isothermal and heated experimental cases. The very low scale secondary motion is responsible for the increased turbulence transport which produces a noticeable homogenization of the velocity distribution and consequently of the circumferential cladding temperature distribution, which is of main interest in bundle design. Various fully developed bare bundles test cases are shown for different geometrical and flow conditions, where the proposed model shows clearly improved predictions, in close agreement with experimental findings, for regular as well as distorted geometries. Finally the applicability of the model for practical bundle calculations is evaluated through its application in the high-Reynolds form on coarse grids, with excellent results. (author)

  19. Quality metric for accurate overlay control in <20nm nodes

    NASA Astrophysics Data System (ADS)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going toward the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  20. A new and accurate continuum description of moving fronts

    NASA Astrophysics Data System (ADS)

    Johnston, S. T.; Baker, R. E.; Simpson, M. J.

    2017-03-01

    Processes that involve moving fronts of populations are prevalent in ecology and cell biology. A common approach to describe these processes is a lattice-based random walk model, which can include mechanisms such as crowding, birth, death, movement and agent–agent adhesion. However, these models are generally analytically intractable and it is computationally expensive to perform sufficiently many realisations of the model to obtain an estimate of average behaviour that is not dominated by random fluctuations. To avoid these issues, both mean-field (MF) and corrected mean-field (CMF) continuum descriptions of random walk models have been proposed. However, both continuum descriptions are inaccurate outside of limited parameter regimes, and CMF descriptions cannot be employed to describe moving fronts. Here we present an alternative description in terms of the dynamics of groups of contiguous occupied lattice sites and contiguous vacant lattice sites. Our description provides an accurate prediction of the average random walk behaviour in all parameter regimes. Critically, our description accurately predicts the persistence or extinction of the population in situations where previous continuum descriptions predict the opposite outcome. Furthermore, unlike traditional MF models, our approach provides information about the spatial clustering within the population and, subsequently, the moving front.
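
A minimal realisation of the kind of lattice-based random walk the abstract describes (exclusion movement plus birth; death and adhesion omitted) can be sketched as follows; all parameter values and names are illustrative:

```python
import random

def simulate_front(n_sites=200, occupied=20, p_move=1.0, p_birth=0.1,
                   steps=100, seed=1):
    """1D lattice random walk with crowding (at most one agent per site)
    and proliferation: the averaged front position of many such runs is
    what MF/CMF continuum descriptions try to capture."""
    random.seed(seed)
    lattice = [True] * occupied + [False] * (n_sites - occupied)
    for _ in range(steps):
        # movement: each agent attempts a step to a random neighbour,
        # blocked if the target site is occupied (exclusion)
        agents = [i for i, occ in enumerate(lattice) if occ]
        for i in random.sample(agents, len(agents)):
            if lattice[i] and random.random() < p_move:
                j = i + random.choice((-1, 1))
                if 0 <= j < n_sites and not lattice[j]:
                    lattice[i], lattice[j] = False, True
        # birth: an agent places a daughter on a vacant neighbour site
        for i in [k for k, occ in enumerate(lattice) if occ]:
            if random.random() < p_birth:
                j = i + random.choice((-1, 1))
                if 0 <= j < n_sites and not lattice[j]:
                    lattice[j] = True
    return lattice

def front_position(lattice):
    """Rightmost occupied site: a simple proxy for the front location."""
    return max(i for i, occ in enumerate(lattice) if occ)
```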

  1. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
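
The terrain-dependent part of the correction can be illustrated with a simple local-incidence-angle area normalisation; the sine-based area model and the reference angle below are generic conventions assumed for illustration, not the paper's full method (which also corrects the antenna-gain pattern using aircraft position and attitude):

```python
import math

def radiometric_correction_db(sigma0_db, local_incidence_deg,
                              ref_incidence_deg=45.0):
    """Toy terrain-correction step: rescale backscatter (in dB) by the
    ratio of the true local scattering area (from the DEM-derived local
    incidence angle) to the flat-terrain reference area. Slopes facing
    the radar (small local incidence) present a larger area and are
    darkened; slopes facing away are brightened."""
    area_ratio = math.sin(math.radians(local_incidence_deg)) / \
                 math.sin(math.radians(ref_incidence_deg))
    return sigma0_db + 10.0 * math.log10(area_ratio)
```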

  2. Accurate determination of membrane dynamics with line-scan FCS.

    PubMed

    Ries, Jonas; Chiantia, Salvatore; Schwille, Petra

    2009-03-04

    Here we present an efficient implementation of line-scan fluorescence correlation spectroscopy (i.e., one-dimensional spatio-temporal image correlation spectroscopy) using a commercial laser scanning microscope, which allows the accurate measurement of diffusion coefficients and concentrations in biological lipid membranes within seconds. Line-scan fluorescence correlation spectroscopy is a calibration-free technique. Therefore, it is insensitive to optical artifacts, saturation, or incorrect positioning of the laser focus. In addition, it is virtually unaffected by photobleaching. Correction schemes for residual inhomogeneities and depletion of fluorophores due to photobleaching extend the applicability of line-scan fluorescence correlation spectroscopy to more demanding systems. This technique enabled us to measure accurate diffusion coefficients and partition coefficients of fluorescent lipids in phase-separating supported bilayers of three commonly used raft-mimicking compositions. Furthermore, we probed the temperature dependence of the diffusion coefficient in several model membranes, and in human embryonic kidney cell membranes not affected by temperature-induced optical aberrations.

  3. Automated numerical calculation of Sagnac correction for photonic paths

    NASA Astrophysics Data System (ADS)

    Šlapák, Martin; Vojtěch, Josef; Velc, Radek

    2017-04-01

    Relativistic effects must be taken into account for highly accurate time and frequency transfers. The most important is the Sagnac correction, which is also a source of non-reciprocity between the two directions of any transfer, in relation with the Earth's rotation. In practice, not all important parameters, such as the exact trajectory of the optical fibre path (leased fibres), are known with sufficient precision; it is therefore necessary to estimate lower and upper bounds for the computed corrections. The presented approach deals with uncertainty in the knowledge of detailed fibre paths, as well as with complex paths containing loops. We made the whole process of calculating the Sagnac correction fully automated.
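
    As a rough illustration of the quantity being automated, the one-way Sagnac correction for a surface path can be computed from the path's projected area on the equatorial plane, Δt = 2ΩA/c². The sketch below is not the authors' tool; it assumes a spherical Earth, a path given as a few (lat, lon) waypoints joined by straight chords, and a sign convention in which eastward propagation gives a positive correction.

```python
import numpy as np

OMEGA_E = 7.2921150e-5   # Earth rotation rate, rad/s
C = 299_792_458.0        # speed of light, m/s
R_E = 6_371_000.0        # mean Earth radius, m (spherical-Earth assumption)

def sagnac_correction(latlon_deg):
    """One-way Sagnac time correction (s) for a fibre path given as
    (lat, lon) waypoints, via the path's equatorial-plane projected area."""
    pts = np.radians(np.asarray(latlon_deg, float))
    x = R_E * np.cos(pts[:, 0]) * np.cos(pts[:, 1])
    y = R_E * np.cos(pts[:, 0]) * np.sin(pts[:, 1])
    # signed area swept in the equatorial plane (shoelace sum over the open
    # polyline, with the rotation axis as origin)
    area = 0.5 * np.sum(x[:-1] * y[1:] - x[1:] * y[:-1])
    return 2.0 * OMEGA_E * area / C**2
```

    Reversing the path flips the sign of the correction, which is exactly the non-reciprocity mentioned in the abstract.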

  4. Quantum-electrodynamics corrections in pionic hydrogen

    SciTech Connect

    Schlesser, S.; Le Bigot, E.-O.; Indelicato, P.; Pachucki, K.

    2011-07-15

    We investigate all pure quantum-electrodynamics corrections to the np → 1s, n = 2-4 transition energies of pionic hydrogen larger than 1 meV, which requires an accurate evaluation of all relevant contributions up to order α⁵. These values are needed to extract an accurate strong interaction shift from experiment. Many small effects, such as the second-order and double vacuum polarization contributions, proton and pion self-energies, finite size and recoil effects, are included with exact mass dependence. Our final value differs from previous calculations by up to ≈11 ppm for the 1s state, while a recent experiment aims at a 4 ppm accuracy.

  5. Refining atmospheric correction for aquatic remote spectroscopy

    NASA Astrophysics Data System (ADS)

    Thompson, D. R.; Guild, L. S.; Negrey, K.; Kudela, R. M.; Palacios, S. L.; Gao, B. C.; Green, R. O.

    2015-12-01

    Remote spectroscopic investigations of aquatic ecosystems typically measure radiance at high spectral resolution and then correct these data for atmospheric effects to estimate Remote Sensing Reflectance (Rrs) at the surface. These reflectance spectra reveal phytoplankton absorption and scattering features, enabling accurate retrieval of traditional remote sensing parameters, such as chlorophyll-a, and new retrievals of additional parameters, such as phytoplankton functional type. Future missions will significantly expand coverage of these datasets with airborne campaigns (CORAL, ORCAS, and the HyspIRI Preparatory Campaign) and orbital instruments (EnMAP, HyspIRI). Remote characterization of phytoplankton can be influenced by errors in atmospheric correction due to uncertain atmospheric constituents such as aerosols. The "empirical line method" is an expedient solution that estimates a linear relationship between observed radiances and in-situ reflectance measurements. While this approach is common for terrestrial data, there are few examples involving aquatic scenes. Aquatic scenes are challenging due to the difficulty of acquiring in situ measurements from open water; with only a handful of reference spectra, the resulting corrections may not be stable. Here we present a brief overview of methods for atmospheric correction, and describe ongoing experiments on empirical line adjustment with AVIRIS overflights of Monterey Bay from the 2013-2014 HyspIRI preparatory campaign. We present new methods, based on generalized Tikhonov regularization, to improve stability and performance when few reference spectra are available. Copyright 2015 California Institute of Technology. All Rights Reserved. US Government Support Acknowledged.
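
    A Tikhonov-regularized empirical line fit can be sketched as follows. This is an illustrative per-band ridge regression toward a prior gain/offset, not necessarily the generalized Tikhonov scheme used in the study; `lam` and `prior` are assumed parameters introduced here.

```python
import numpy as np

def empirical_line_tikhonov(L_ref, R_ref, lam=1e-2, prior=(0.0, 0.0)):
    """Per-band fit R ~ a*L + b with Tikhonov (ridge) regularization toward
    a prior solution, stabilizing the correction when only a handful of
    in-situ reference spectra are available.
    L_ref, R_ref: arrays of shape (n_refs, n_bands)."""
    n_refs, n_bands = L_ref.shape
    a0, b0 = prior
    a = np.empty(n_bands)
    b = np.empty(n_bands)
    for j in range(n_bands):
        A = np.column_stack([L_ref[:, j], np.ones(n_refs)])
        # (A^T A + lam*I) x = A^T R + lam*x0   (regularized normal equations)
        lhs = A.T @ A + lam * np.eye(2)
        rhs = A.T @ R_ref[:, j] + lam * np.array([a0, b0])
        a[j], b[j] = np.linalg.solve(lhs, rhs)
    return a, b
```

    With many reference spectra and a small `lam` this reduces to the ordinary empirical line method; with few spectra the prior keeps the per-band gains from becoming unstable.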

  6. Exemplar-based human action pose correction.

    PubMed

    Shen, Wei; Deng, Ke; Bai, Xiang; Leyvand, Tommer; Guo, Baining; Tu, Zhuowen

    2014-07-01

    The launch of the Xbox Kinect has built a very successful computer vision product and made a big impact on the gaming industry. This sheds light on a wide variety of potential applications related to action recognition. The accurate estimation of human poses from the depth image is universally a critical step. However, existing pose estimation systems exhibit failures when facing severe occlusion. In this paper, we propose an exemplar-based method to learn to correct the initially estimated poses. We learn an inhomogeneous systematic bias by leveraging the exemplar information within a specific human action domain. Furthermore, as an extension, we learn a conditional model by incorporating pose tags to further increase the accuracy of pose correction. In the experiments, significant improvements on both joint-based skeleton correction and tag prediction are observed over contemporary approaches, including what is delivered by the current Kinect system. Our experiments on facial landmark correction also illustrate that our algorithm can improve the accuracy of other detection/estimation systems.

  7. A precise technique for manufacturing correction coil

    SciTech Connect

    Schieber, L.

    1992-01-01

    An automated method of manufacturing correction coils has been developed which provides a precise embodiment of the coil design. Numerically controlled machines have been developed to accurately position coil windings on the beam tube. Two types of machines have been built. One machine bonds the wire to a substrate which is wrapped around the beam tube after it is completed while the second machine bonds the wire directly to the beam tube. Both machines use the Multiwire[reg sign] technique of bonding the wire to the substrate utilizing an ultrasonic stylus. These machines are being used to manufacture coils for both the SSC and RHIC.

  9. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    PubMed Central

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is also capable of importing radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  11. Correction for inhomogeneous line broadening in spin labels, II

    NASA Astrophysics Data System (ADS)

    Bales, Barney L.

    Our methods to correct for inhomogeneous line broadening in the EPR of nitroxide spin labels are extended. Previously, knowledge of the hyperfine pattern of the nuclei responsible for the inhomogeneous broadening was necessary in order to carry out the corrections. This normally meant that either a separate NMR experiment or EPR spectral simulation was needed. Here a very simple method is developed, based upon measurement of four points on the experimental EPR spectrum itself, that allows one to carry out the correction procedure with precision rivaling that attained using NMR or spectral simulation. Two associated problems are solved: (1) the EPR signal strength is estimated without the need to carry out double integrations and (2) linewidth ratios, important in calculating rotational correlation times, are corrected. In all cases except one, the corrections are effected from the four measured points using only a hand-held programmable calculator. Experimental examples illustrate the methods and show them to be amazingly accurate.

  12. [An Algorithm for Correcting Fetal Heart Rate Baseline].

    PubMed

    Li, Xiaodong; Lu, Yaosheng

    2015-10-01

    Fetal heart rate (FHR) baseline estimation is of significance for the computerized analysis of fetal heart rate and the assessment of fetal state. In this work, a fetal heart rate baseline correction algorithm is presented to make an existing baseline more accurate and better fitted to the tracings. First, the deviation of the existing FHR baseline is found and corrected. A new baseline is then obtained after applying some smoothing methods. To assess the performance of the FHR baseline correction algorithm, a new FHR baseline estimation algorithm that combined a baseline estimation algorithm with the baseline correction algorithm was compared with two existing FHR baseline estimation algorithms. The results showed that the new FHR baseline estimation algorithm performed well in both accuracy and efficiency, and also demonstrated the effectiveness of the FHR baseline correction algorithm.

  13. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  15. Experimental repetitive quantum error correction.

    PubMed

    Schindler, Philipp; Barreiro, Julio T; Monz, Thomas; Nebendahl, Volckmar; Nigg, Daniel; Chwalla, Michael; Hennrich, Markus; Blatt, Rainer

    2011-05-27

    The computational potential of a quantum processor can only be unleashed if errors during a quantum computation can be controlled and corrected for. Quantum error correction works if imperfections of quantum gate operations and measurements are below a certain threshold and corrections can be applied repeatedly. We implement multiple quantum error correction cycles for phase-flip errors on qubits encoded with trapped ions. Errors are corrected by a quantum-feedback algorithm using high-fidelity gate operations and a reset technique for the auxiliary qubits. Up to three consecutive correction cycles are realized, and the behavior of the algorithm for different noise environments is analyzed.
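
    The phase-flip correction cycle can be illustrated with a toy statevector simulation of the three-qubit repetition code. This is an idealized sketch, not the trapped-ion implementation: gates and stabilizer measurements are noiseless, and only a single injected error per cycle is considered.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
PLUS = np.array([1.0, 1.0]) / np.sqrt(2.0)

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

Z_OPS = [kron(Z, I2, I2), kron(I2, Z, I2), kron(I2, I2, Z)]

def one_round(error_qubit):
    """One correction cycle: encode, inject one phase flip, measure the
    X-type stabilizers, apply the recovery Z, return overlap with ideal."""
    psi0 = np.kron(np.kron(PLUS, PLUS), PLUS)      # logical |+>_L = |+++>
    psi = Z_OPS[error_qubit] @ psi0                # single phase-flip error
    s1 = int(round(psi @ kron(X, X, I2) @ psi))    # stabilizer X1 X2
    s2 = int(round(psi @ kron(I2, X, X) @ psi))    # stabilizer X2 X3
    flip = {(-1, 1): 0, (-1, -1): 1, (1, -1): 2}[(s1, s2)]
    psi = Z_OPS[flip] @ psi                        # recovery operation
    return abs(psi @ psi0)
```

    The syndrome (s1, s2) uniquely identifies which of the three qubits was flipped, so the recovery restores the encoded state exactly in this noiseless setting.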

  16. Biasing errors and corrections

    NASA Technical Reports Server (NTRS)

    Meyers, James F.

    1991-01-01

    The dependence of laser velocimeter measurement rate on flow velocity is discussed. Investigations showing that any dependence is purely statistical, and is nonstationary both spatially and temporally, are described. The main conclusions drawn are that the times between successive particle arrivals should be routinely measured, and the correlation coefficient between velocity and data rate should be calculated to determine whether a dependency exists. If none is found, the data ensemble can be accepted as an independent sample of the flow. If a dependency is found, the data should be modified to obtain an independent sample. Universal correcting procedures should never be applied, because their underlying assumptions are not valid.
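
    The suggested check can be sketched as a plain correlation between inter-arrival times and velocity samples. The pairing of each velocity with the preceding arrival gap is an illustrative convention of this sketch, not the paper's exact procedure.

```python
import numpy as np

def velocity_rate_correlation(arrival_times, velocities):
    """Correlation between particle inter-arrival times and velocity samples.
    A coefficient near zero supports treating the ensemble as an independent
    sample of the flow; a significant value signals velocity biasing."""
    dt = np.diff(arrival_times)
    v = np.asarray(velocities)[1:]      # velocity paired with preceding gap
    return np.corrcoef(dt, v)[0, 1]
```

    A strongly negative coefficient (fast particles arriving at short intervals) is the classic signature of velocity bias that would call for modifying the ensemble before averaging.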

  17. [Correctional health care].

    PubMed

    Fix, Michel

    2013-01-01

    Court decisions that take away someone's freedom by requiring them to serve a jail sentence should not deny them access to the same health care available to free citizens, in full compliance with patient confidentiality. Health institutions, responsible for administering somatic care, offer a comprehensive response to the medical needs of those under justice control, both in jails and in conventional care units. For a physician, working in the correctional setting implies accepting its constraints and its violence, while protecting and enforcing fundamental rights, including the rights to dignity, confidential care, and the freedom to accept or refuse a treatment.

  18. [Correction of paralytic lagophthalmos].

    PubMed

    Iskusnykh, N S; Grusha, Y O

    2015-01-01

    Current options for correction of paralytic lagophthalmos are either temporary (external eyelid weight placement, hyaluronic acid gel or botulinum toxin A injection) or permanent (various procedures for narrowing of the palpebral fissure, upper eyelid weights or spring implantation). Neuroplastic surgery (cross-facial nerve grafting, nerve anastomoses) and muscle transposition surgery is not effective enough. The majority of elderly and medically compromised patients should not be considered for such complicated and long procedures. Upper eyelid weight implantation thus appears the most reliable and simple treatment.

  19. Atmospheric correction with multi-angle polarimeters: information content assessment

    NASA Astrophysics Data System (ADS)

    Knobelspiesse, K. D.; Chowdhary, J.; Franz, B. A.

    2016-12-01

    Accurate ocean color remote sensing requires an appropriate atmospheric correction, to compensate for the atmosphere so that ocean geophysical properties can be determined. At optical wavelengths, atmospheric aerosols are the largest contributor to atmospheric correction uncertainty. In canonical missions such as SeaWiFS (Sea-Viewing Wide Field-of-View Sensor) and MODIS (Moderate Resolution Imaging Spectroradiometer), atmospheric correction uses observations in the Near Infrared (NIR) to determine aerosol optical properties, which are extrapolated to shorter wavelengths in the visible (VIS), where they are used to correct for the aerosol signal. This works because ocean reflectance is very small in the NIR, but the technique is limited by the ability to determine aerosol optical properties in only that spectral range. The Ocean Color Instrument (OCI) on the upcoming NASA PACE (Plankton, Aerosol, Cloud, and ocean Ecosystem) mission will have greater spectral sensitivity and range than previous instruments, requiring atmospheric correction technique improvements. For this reason, PACE is considering an additional instrument, a multi-angle, multi-spectral polarimeter. Such an instrument could provide more information about aerosols and significantly improve atmospheric correction. However, the atmospheric correction process is complex and nonlinear, and understanding the relationship between instrument characteristics and atmospheric correction success can be difficult without quantitative tools. We present a toolset we have developed, which couples radiative transfer simulations with information content assessment tools, to predict and explore the atmospheric correction benefit of different multi-angle polarimeter designs.

  20. Using Online Annotations to Support Error Correction and Corrective Feedback

    ERIC Educational Resources Information Center

    Yeh, Shiou-Wen; Lo, Jia-Jiunn

    2009-01-01

    Giving feedback on second language (L2) writing is a challenging task. This research proposed an interactive environment for error correction and corrective feedback. First, we developed an online corrective feedback and error analysis system called "Online Annotator for EFL Writing". The system consisted of five facilities: Document Maker,…

  1. Mental Health in Corrections: An Overview for Correctional Staff.

    ERIC Educational Resources Information Center

    Sowers, Wesley; Thompson, Kenneth; Mullins, Stephen

    This volume is designed to provide corrections practitioners with basic staff training on the needs of those with mental illness and impairments in our correctional systems. Chapter titles are: (1) "Mental Illness in the Correctional Setting"; (2) "Substance Use Disorders"; (3) "Problems with Mood"; (4) "Problems…

  4. New model accurately predicts reformate composition

    SciTech Connect

    Ancheyta-Juarez, J.; Aguilar-Rodriguez, E.

    1994-01-31

    Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.

  5. Two highly accurate methods for pitch calibration

    NASA Astrophysics Data System (ADS)

    Kniel, K.; Härtig, F.; Osawa, S.; Sato, O.

    2009-11-01

    Along with profile, helix, and tooth thickness, pitch is one of the most important parameters in the evaluation of an involute gear measurement. In principle, coordinate measuring machines (CMMs) and CNC-controlled gear measuring machines, as a variant of a CMM, are suited for these kinds of gear measurements. The Japan National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) and the German national metrology institute, the Physikalisch-Technische Bundesanstalt (PTB), have each independently developed highly accurate pitch calibration methods applicable to CMMs or gear measuring machines. Both calibration methods are based on the so-called closure technique, which allows the separation of the systematic errors of the measurement device from the errors of the gear. For the verification of both calibration methods, NMIJ/AIST and PTB performed measurements on a specially designed pitch artifact. The comparison of the results shows that both methods can be used for highly accurate calibrations of pitch standards.
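
    The error separation behind the closure technique can be illustrated with a toy model: measuring the artifact in every rotated position makes the artifact's contribution average out of the machine estimate, and vice versa. The circular-shift model with zero-mean components below is an idealization, not the institutes' actual measurement procedure.

```python
import numpy as np

def closure_separation(measurements):
    """Closure (multi-position) error separation.  measurements[i, j] is the
    reading at machine position j with the artifact rotated by i positions,
    modelled as machine[j] + artifact[(j - i) % n], both zero-mean."""
    n = measurements.shape[0]
    machine = measurements.mean(axis=0)        # artifact term averages out
    resid = measurements - machine             # rotated copies of artifact
    artifact = np.mean([np.roll(resid[i], -i) for i in range(n)], axis=0)
    return machine, artifact
```

    Under this model both error vectors are recovered exactly; with real data the averaging additionally suppresses random measurement noise.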

  6. Accurate guitar tuning by cochlear implant musicians.

    PubMed

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  8. Accurate colorimetric feedback for RGB LED clusters

    NASA Astrophysics Data System (ADS)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
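
    The first- or second-order equations mentioned above can be sketched as a per-channel polynomial model of relative output versus junction temperature. This is illustrative only: real coefficients would come from laboratory spectroradiometer data, and the actual feedback system operates on chromaticity rather than a single flux value.

```python
import numpy as np

def fit_channel_model(temps_c, rel_flux):
    """Second-order model of an LED channel's relative output vs junction
    temperature, fitted from (assumed) laboratory measurements."""
    return np.polyfit(temps_c, rel_flux, 2)

def drive_scale(coeffs, temp_c):
    """Drive-current scale factor that holds channel output constant as the
    junction temperature drifts (assuming output scales with drive)."""
    return 1.0 / np.polyval(coeffs, temp_c)
```

    In a feedback loop, each RGB channel's scale factor would be re-evaluated as the sensed junction temperature changes, keeping the cluster's mixed chromaticity on target.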

  9. Lipidomics, en route to accurate quantitation.

    PubMed

    Lam, Sin Man; Tian, He; Shui, Guanghou

    2017-08-01

    Accurate quantitation is a prerequisite for the sustainable development of lipidomics, enabling its applications in various biological and biomedical settings. In this review, we address the technical considerations and limitations of existing lipidomics technologies, particularly in terms of accurate quantitation, as well as the potential sources of error along a typical lipidomic workflow that could ultimately give rise to quantitative inaccuracies. Furthermore, the pressing need for stricter definitions of terms and for protocol standardization pertaining to quantitative lipidomics is critically discussed, as quantitative accuracy may substantially impact the long-term development of lipidomics. This article is part of a Special Issue entitled: BBALIP_Lipidomics Opinion Articles edited by Sepp Kohlwein. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Accurate mask model for advanced nodes

    NASA Astrophysics Data System (ADS)

    Zine El Abidine, Nacer; Sundermann, Frank; Yesilada, Emek; Ndiaye, El Hadji Omar; Mishra, Kushlendra; Paninjath, Sankaranarayanan; Bork, Ingo; Buck, Peter; Toublan, Olivier; Schanen, Isabelle

    2014-07-01

    Standard OPC models consist of a physical optical model and an empirical resist model. The resist model compensates for the optical model's imprecision on top of modeling resist development. The optical model imprecision may result from mask topography effects and from real mask information, including mask e-beam writing and mask process contributions. For advanced technology nodes, significant progress has been made in modeling mask topography to improve optical model accuracy. However, mask information is difficult to decorrelate from the standard OPC model. Our goal is to establish an accurate mask model through a dedicated calibration exercise. In this paper, we present a flow to calibrate an accurate mask model and enable its implementation. The study covers the different effects that should be embedded in the mask model, as well as the experiments required to model them.

  11. Accurate Control of Josephson Phase Qubits

    DTIC Science & Technology

    2016-04-14

    Physical Review B 68, 224518 (2003). Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang. Center... qubits, we believe they could also be fruitful in other systems where one wishes to control a particular subspace of Hilbert space. This work... access the two-state system as a controllable qubit. The ratio ΔU/ℏω_p parameterizes the anharmonicity of the cubic potential with regard to the qubit

  12. An accurate registration technique for distorted images

    NASA Technical Reports Server (NTRS)

    Delapena, Michele; Shaw, Richard A.; Linde, Peter; Dravins, Dainis

    1990-01-01

    Accurate registration of International Ultraviolet Explorer (IUE) images is crucial because the variability of the geometrical distortions that are introduced by the SEC-Vidicon cameras ensures that raw science images are never perfectly aligned with the Intensity Transfer Functions (ITFs) (i.e., graded floodlamp exposures that are used to linearize and normalize the camera response). A technique for precisely registering IUE images which uses a cross correlation of the fixed pattern that exists in all raw IUE images is described.

  13. Accurate confidence limits for stratified clinical trials.

    PubMed

    Lloyd, Chris J

    2013-09-10

    For stratified 2 × 2 tables, standard approximate confidence limits can perform poorly from a strict frequentist perspective, even for moderate-sized samples, yet they are routinely used. In this paper, I show how to use importance sampling to compute highly accurate limits in reasonable time. The methodology is very general, simple to implement, and orders of magnitude faster than existing alternatives. Copyright © 2013 John Wiley & Sons, Ltd.

  14. An Accurate, Simplified Model of Intrabeam Scattering

    SciTech Connect

    Bane, Karl LF

    2002-05-23

    Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS), we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}²/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.

  15. Arbitrarily accurate narrowband composite pulse sequences

    SciTech Connect

    Vitanov, Nikolay V.

    2011-12-15

    Narrowband composite pulse sequences containing an arbitrary number N of identical pulses are presented. The composite phases are given by a very simple analytic formula and the transition probability is merely sin^{2N}(A/2), where A is the pulse area. These narrowband sequences can be made accurate to any order with respect to variations in A for sufficiently many constituent pulses, i.e., excitation can be suppressed below any desired value for any pulse area but π.
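    The quoted excitation profile sin^{2N}(A/2) is easy to evaluate numerically; a minimal sketch:

```python
import math

def transition_probability(area, n_pulses):
    """Excitation profile of an N-pulse narrowband composite sequence,
    P(A) = sin^(2N)(A/2), per the analytic result quoted above."""
    return math.sin(area / 2.0) ** (2 * n_pulses)

# Full transfer at the target pulse area A = pi, for any N:
print(transition_probability(math.pi, 5))  # prints 1.0
# Off-target excitation (here A = pi/2) is suppressed faster as N grows:
for n in (1, 3, 9):
    print(n, transition_probability(math.pi / 2, n))
```

    For A = π/2 the single-pulse probability 1/2 drops to about 2 × 10⁻³ by N = 9, illustrating the narrowband suppression.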

  16. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  17. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Suffredini, Anthony F; Sacks, David B; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  18. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple `fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  19. Accurate finite element modeling of acoustic waves

    NASA Astrophysics Data System (ADS)

    Idesman, A.; Pham, D.

    2014-07-01

    In the paper we suggest an accurate finite element approach for the modeling of acoustic waves under a suddenly applied load. We consider the standard linear elements and the linear elements with reduced dispersion for the space discretization, as well as the explicit central-difference method for time integration. The analytical study of the numerical dispersion shows that the most accurate results can be obtained with time increments close to the stability limit. However, even in this case, and even with the linear elements with reduced dispersion, mesh refinement leads to divergent numerical results for acoustic waves under a suddenly applied load. This is explained by large spurious high-frequency oscillations. For the quantification and suppression of spurious oscillations, we have modified and applied a two-stage time-integration technique that includes a stage of basic computations and a filtering stage. This technique yields accurate, convergent results under mesh refinement and significantly reduces the numerical anisotropy of solutions. The approach is very general and can be applied equally to any loading, any space-discretization technique, and any explicit or implicit time-integration method.

  20. Smooth eigenvalue correction

    NASA Astrophysics Data System (ADS)

    Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk

    2013-12-01

    Second-order statistics play an important role in data modeling. Nowadays, there is a tendency toward measuring more signals with higher resolution (e.g., high-resolution video), causing a rapid increase in the dimensionality of the measured samples, while the number of samples remains more or less the same. As a result, the eigenvalue estimates are significantly biased, as described by the Marčenko-Pastur equation in the limit where both the number of samples and their dimensionality go to infinity. By introducing a smoothness factor, we show that the Marčenko-Pastur equation can be used in practical situations where both the number of samples and their dimensionality remain finite. Based on this result we derive methods, one already known and one new to our knowledge, to estimate the sample eigenvalues when the population eigenvalues are known. However, usually the sample eigenvalues are known and the population eigenvalues are required. We therefore apply one of these methods in a feedback loop, resulting in an eigenvalue bias correction method. We compare this eigenvalue correction method with state-of-the-art methods and show that our method outperforms them, particularly in situations often encountered in biometrics: underdetermined configurations, high-dimensional configurations, and configurations where the eigenvalues are exponentially distributed.
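    The eigenvalue bias described above is easy to reproduce numerically. A minimal sketch with an identity population covariance (the dimensions are chosen for illustration):

```python
import numpy as np

# Population covariance is the identity: every population eigenvalue is 1.
rng = np.random.default_rng(1)
p, n = 100, 200                      # dimensionality vs. number of samples
x = rng.standard_normal((n, p))
eigvals = np.linalg.eigvalsh(x.T @ x / n)   # sample covariance spectrum

# With p comparable to n, the sample spectrum is strongly biased: it spreads
# over roughly the Marchenko-Pastur support [(1-sqrt(p/n))^2, (1+sqrt(p/n))^2],
# about [0.09, 2.91] here, instead of concentrating near 1.
print(eigvals.min(), eigvals.max())
```

    Note that the average eigenvalue is still close to 1 (the trace is unbiased); it is the spread of the spectrum that the correction methods have to undo.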

  1. Worldwide radiosonde temperature corrections

    SciTech Connect

    Luers, J.; Eskridge, R.

    1997-11-01

    Detailed heat transfer analyses have been performed on ten of the world's most commonly used radiosondes from 1960 to the present. These radiosondes are the USA VIZ and Space Data, the Vaisala RS-80, RS-185/21, and RS12/15, the Japanese RS2-80, Russian MARS, RKZ, and A22, and the Chinese GZZ. The temperature error of each radiosonde has been calculated as a function of altitude and of the sonde and environmental parameters that influence its magnitude. Computer models have been developed that allow the correction of temperature data from each sonde as a function of these parameters. Recommendations are made concerning the use of data from each of the radiosondes for climate studies. For some radiosondes, nighttime data require no corrections; others require corrections to both day and nighttime data. For some sondes, correcting daytime data is not feasible because parameters of significance, such as balloon rise rate, are not retrievable. The results from this study provide essential information for anyone attempting to perform climate studies using radiosonde data. 6 refs., 1 tab.

  2. Turbulence compressibility corrections

    NASA Technical Reports Server (NTRS)

    Coakley, T. J.; Horstman, C. C.; Marvin, J. G.; Viegas, J. R.; Bardina, J. E.; Huang, P. G.; Kussoy, M. I.

    1994-01-01

    The basic objective of this research was to identify, develop and recommend turbulence models which could be incorporated into CFD codes used in the design of the National AeroSpace Plane vehicles. To accomplish this goal, a combined effort consisting of experimental and theoretical phases was undertaken. The experimental phase consisted of a literature survey to collect and assess a database of well documented experimental flows, with emphasis on high speed or hypersonic flows, which could be used to validate turbulence models. Since it was anticipated that this database would be incomplete and would need supplementing, additional experiments in the NASA Ames 3.5-Foot Hypersonic Wind Tunnel (HWT) were also undertaken. The theoretical phase consisted of identifying promising turbulence models through applications to simple flows, and then investigating more promising models in applications to complex flows. The complex flows were selected from the database developed in the first phase of the study. For these flows it was anticipated that model performance would not be entirely satisfactory, so that model improvements or corrections would be required. The primary goals of the investigation were essentially achieved. A large database of flows was collected and assessed, a number of additional hypersonic experiments were conducted in the Ames HWT, and two turbulence models (kappa-epsilon and kappa-omega models with corrections) were determined which gave superior performance for most of the flows studied and are now recommended for NASP applications.

  3. Complications of auricular correction

    PubMed Central

    Staindl, Otto; Siedek, Vanessa

    2008-01-01

    The risk of complications of auricular correction is underestimated. There is around a 5% risk of early complications (haematoma, infection, fistulae caused by stitches and granulomae, allergic reactions, pressure ulcers, feelings of pain, and asymmetry in side-to-side comparison) and a 20% risk of late complications (recurrences, telephone ear, excessive edge formation, auricle fitting too closely, narrowing of the auditory canal, keloids, and complete collapse of the ear). Deformities are evaluated less critically by patients than by surgeons, provided they do not concern how the ear is positioned. The causes of complications and deformities are, in the vast majority of cases, incorrect diagnosis and the wrong choice of operating procedure. The choice of operating procedure must be adapted to suit the individual ear morphology. In addition to operation technique, bandaging technique, inspections and, if necessary, early revision are of great importance for the occurrence and progress of early complications. In cases of late complications such as keloids and auricles that fit too closely, unfixed full-thickness skin flaps have proved to be the most successful. Large deformities can often only be corrected to a limited degree of satisfaction. PMID:22073079

  4. Contact Lenses for Vision Correction

    MedlinePlus


  5. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  7. Using Scaling for accurate stochastic macroweather forecasts (including the "pause")

    NASA Astrophysics Data System (ADS)

    Lovejoy, Shaun; del Rio Amador, Lenin

    2015-04-01

    At scales corresponding to the lifetimes of structures of planetary extent (about 5-10 days), atmospheric processes undergo a drastic "dimensional transition" from high-frequency weather to lower-frequency macroweather processes. While conventional GCMs generally reproduce both the transition and the corresponding (scaling) statistics well, due to their sensitive dependence on initial conditions the role of the weather-scale processes is to provide random perturbations to the macroweather processes. The main problem with GCMs is thus that their long-term (control run, unforced) statistics converge to the GCM climate, which is somewhat different from the real climate. This is the motivation for exploiting the empirical scaling properties of past data to build a stochastic model. It turns out that macroweather intermittency is typically low (the multifractal corrections are small), so the processes can be approximated by fractional Gaussian noise (fGn), whose memory can be enormous. For example, for annual forecasts using the observed global temperature exponent, even 50 years of global temperature data would only allow us to exploit 90% of the available memory (for ocean regions the figure increases to 600 years). The only complication is that anthropogenic effects dominate the global statistics at time scales beyond about 20 years. However, these are easy to remove using the CO2 forcing as a linear surrogate for all anthropogenic effects. Using this theoretical framework, we show how to make accurate stochastic macroweather forecasts. We illustrate this on monthly and annual scale series of global and northern hemisphere surface temperatures (including nearly perfect hindcasts of the "pause" in the warming since 1998). We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow. These scaling hindcasts use a single effective climate sensitivity and a single scaling exponent.

  8. Yearbook of Correctional Education 1989.

    ERIC Educational Resources Information Center

    Duguid, Stephen, Ed.

    This yearbook contains conference papers, commissioned papers, reprints of earlier works, and research-in-progress. They offer a retrospective view as well as address the mission and perspective of correctional education, its international dimension, correctional education in action, and current research. Papers include "Correctional Education and…

  9. 75 FR 16516 - Dates Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-01

    ... From the Federal Register Online via the Government Publishing Office. NATIONAL ARCHIVES AND RECORDS ADMINISTRATION, Office of the Federal Register. Dates Correction. In the Notices section beginning on page 15401 in the issue of March 29th, 2010, make the following correction: On pages...

  10. Political Correctness and Cultural Studies.

    ERIC Educational Resources Information Center

    Carey, James W.

    1992-01-01

    Discusses political correctness and cultural studies, dealing with cultural studies and the left, the conservative assault on cultural studies, and political correctness in the university. Describes some of the underlying changes in the university, largely unaddressed in the political correctness debate, that provide the deep structure to the…

  11. Radiation camera motion correction system

    DOEpatents

    Hoffer, P.B.

    1973-12-18

    The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)

  12. Spectroscopic imaging with prospective motion correction and retrospective phase correction.

    PubMed

    Lange, Thomas; Maclaren, Julian; Buechert, Martin; Zaitsev, Maxim

    2012-06-01

    Motion-induced artifacts are much harder to recognize in magnetic resonance spectroscopic imaging than in imaging experiments and can therefore lead to erroneous interpretation. A method for prospective motion correction based on an optical tracking system has recently been proposed and has already been successfully applied to single voxel spectroscopy. In this work, the utility of prospective motion correction in combination with retrospective phase correction is evaluated for spectroscopic imaging in the human brain. Retrospective phase correction, based on the interleaved reference scan method, is used to correct for motion-induced frequency shifts and ensure correct phasing of the spectra across the whole spectroscopic imaging slice. It is demonstrated that the presented correction methodology can reduce motion-induced degradation of spectroscopic imaging data. Copyright © 2011 Wiley-Liss, Inc.

  13. Direct anharmonic correction method by molecular dynamics

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-Li; Li, Rui; Zhang, Xiu-Lu; Qu, Nuo; Cai, Ling-Cang

    2017-04-01

    The quick calculation of accurate anharmonic effects of lattice vibrations is crucial to the calculation of thermodynamic properties, the construction of multi-phase diagrams and equations of state of materials, and the theoretical design of new materials. In this paper, we propose a direct free energy interpolation (DFEI) method based on the temperature-dependent phonon density of states (TD-PDOS) reduced from molecular dynamics simulations. Using the DFEI method, after anharmonic free energy corrections we reproduced the thermal expansion coefficients, the specific heat, the thermal pressure, the isothermal bulk modulus, and the Hugoniot P-V-T relationships of Cu easily and accurately. Extensive tests on other materials, including a metal, an alloy, a semiconductor, and an insulator, also show that the DFEI method can easily uncover the residual anharmonicity that the quasi-harmonic approximation (QHA) omits. The DFEI method is thus an efficient way to conduct anharmonic corrections beyond the QHA; more importantly, it is much more straightforward than previous anharmonic methods.

  14. EDITORIAL: Politically correct physics?

    NASA Astrophysics Data System (ADS)

    Pople Deputy Editor, Stephen

    1997-03-01

    If you were a caring, thinking, liberally minded person in the 1960s, you marched against the bomb, against the Vietnam war, and for civil rights. By the 1980s, your voice was raised about the destruction of the rainforests and the threat to our whole planetary environment. At the same time, you opposed discrimination against any group because of race, sex or sexual orientation. You reasoned that people who spoke or acted in a discriminatory manner should be discriminated against. In other words, you became politically correct. Despite its oft-quoted excesses, the political correctness movement sprang from well-founded concerns about injustices in our society. So, on balance, I am all for it. Or, at least, I was until it started to invade science. Biologists were the first to feel the impact. No longer could they refer to 'higher' and 'lower' orders, or 'primitive' forms of life. To the list of undesirable 'isms' - sexism, racism, ageism - had been added a new one: speciesism. Chemists remained immune to the PC invasion, but what else could you expect from a group of people so steeped in tradition that their principal unit, the mole, requires the use of the thoroughly unreconstructed gram? Now it is the turn of the physicists. This time, the offenders are not those who talk disparagingly about other people or animals, but those who refer to 'forms of energy' and 'heat'. Political correctness has evolved into physical correctness. I was always rather fond of the various forms of energy: potential, kinetic, chemical, electrical, sound and so on. My students might merge heat and internal energy into a single, fuzzy concept loosely associated with moving molecules. They might be a little confused at a whole new crop of energies - hydroelectric, solar, wind, geothermal and tidal - but they could tell me what devices turned chemical energy into electrical energy, even if they couldn't quite appreciate that turning tidal energy into geothermal energy wasn't part of the

  15. Updating and correction.

    PubMed

    1994-09-09

    The current editions of two books edited by William T. Golden, Science Advice to the President and Science and Technology Advice to the President, Congress, and Judiciary, published this year by AAAS Press, are now being distributed by Transaction Publishers, New Brunswick, NJ 08903, at the prices $22.95 and $27.95 (paper), respectively, and are no longer available from AAAS. A related work, Golden's 1991 compilation Worldwide Science and Technology Advice to the Highest Levels of Government, originally published by Pergamon Press, is also being distributed by Transaction Publishers, at $25.95. For more information about the books see Science 1 July, p. 127. In the review of K. S. Thorne's Black Holes and Time Warps (13 May, p. 999-1000), the captions and illustrations on p. 1000 were mismatched. The correct order of the captions is (i) "A heavy rock..."; (ii) "Cosmic radio waves..."; and (iii) "The trajectories in space...."

  16. Endoscopic orientation correction.

    PubMed

    Höller, Kurt; Penne, Jochen; Schneider, Armin; Jahn, Jasper; Guttiérrez Boronat, Javier; Wittenberg, Thomas; Feussner, Hubertus; Hornegger, Joachim

    2009-01-01

    An open problem in endoscopic surgery (especially with flexible endoscopes) is the absence of a stable horizon in endoscopic images. With our "Endorientation" approach image rotation correction, even in non-rigid endoscopic surgery (particularly NOTES), can be realized with a tiny MEMS tri-axial inertial sensor placed on the tip of an endoscope. It measures the impact of gravity on each of the three orthogonal accelerometer axes. After an initial calibration and filtering of these three values the rotation angle is estimated directly. Achievable repetition rate is above the usual endoscopic video frame rate of 30 Hz; accuracy is about one degree. The image rotation is performed in real-time by digitally rotating the analog endoscopic video signal. Improvements and benefits have been evaluated in animal studies: Coordination of different instruments and estimation of tissue behavior regarding gravity related deformation and movement was rated to be much more intuitive with a stable horizon on endoscopic images.
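    The accelerometer-based rotation estimate can be sketched as follows. The axis conventions and the filter constant are illustrative assumptions, not the authors' calibration or filtering code:

```python
import math

def roll_angle_deg(ax, ay):
    """Roll (image rotation) angle from the gravity components measured by
    a tri-axial accelerometer on the endoscope tip. Assumed convention:
    x and y span the image plane, z is the viewing axis."""
    return math.degrees(math.atan2(ax, ay))

def smooth(prev, new, alpha=0.9):
    """Simple exponential low-pass standing in for the filtering step."""
    return alpha * prev + (1 - alpha) * new

# Sensor upright (gravity along +y): zero roll; gravity along +x: 90 degrees.
print(roll_angle_deg(0.0, 1.0), roll_angle_deg(1.0, 0.0))  # prints 0.0 90.0
```

    The estimated angle would then be used to digitally counter-rotate each video frame, restoring a stable horizon.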

  17. Temperature Corrected Bootstrap Algorithm

    NASA Technical Reports Server (NTRS)

    Comiso, Joey C.; Zwally, H. Jay

    1997-01-01

    A temperature corrected Bootstrap Algorithm has been developed using Nimbus-7 Scanning Multichannel Microwave Radiometer data in preparation to the upcoming AMSR instrument aboard ADEOS and EOS-PM. The procedure first calculates the effective surface emissivity using emissivities of ice and water at 6 GHz and a mixing formulation that utilizes ice concentrations derived using the current Bootstrap algorithm but using brightness temperatures from 6 GHz and 37 GHz channels. These effective emissivities are then used to calculate surface ice which in turn are used to convert the 18 GHz and 37 GHz brightness temperatures to emissivities. Ice concentrations are then derived using the same technique as with the Bootstrap algorithm but using emissivities instead of brightness temperatures. The results show significant improvement in the area where ice temperature is expected to vary considerably such as near the continental areas in the Antarctic, where the ice temperature is colder than average, and in marginal ice zones.
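    The first two steps, emissivity mixing and the brightness-temperature-to-emissivity conversion, can be sketched as follows. All numeric values are illustrative, and the linear Tb = e·Ts relation is a simplification of the actual retrieval:

```python
def effective_emissivity(ice_conc, eps_ice, eps_water):
    """Linear mixing of ice and open-water emissivities by ice
    concentration, as in the first step described above."""
    return ice_conc * eps_ice + (1.0 - ice_conc) * eps_water

def brightness_to_emissivity(tb, surface_temp):
    """Invert Tb = e * Ts for the emissivity, given an estimated physical
    surface temperature; an illustrative simplification of the conversion
    applied to the 18 and 37 GHz channels."""
    return tb / surface_temp

# Example with illustrative (not measured) values:
print(effective_emissivity(0.5, 0.92, 0.60))      # half ice, half open water
print(brightness_to_emissivity(225.0, 250.0))     # Tb = 225 K, Ts = 250 K
```

    Working in emissivities rather than brightness temperatures is what removes the ice-temperature dependence that biases the uncorrected algorithm near the continents.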

  18. Anomaly corrected heterotic horizons

    NASA Astrophysics Data System (ADS)

    Fontanella, A.; Gutowski, J. B.; Papadopoulos, G.

    2016-10-01

    We consider supersymmetric near-horizon geometries in heterotic supergravity up to two-loop order in sigma model perturbation theory. We identify the conditions for the horizons to admit enhancement of supersymmetry. We show that solutions which undergo supersymmetry enhancement exhibit an sl(2,ℝ) symmetry, and we describe the geometry of their horizon sections. We also prove a modified Lichnerowicz-type theorem, incorporating α' corrections, which relates Killing spinors to zero modes of near-horizon Dirac operators. Furthermore, we demonstrate that there are no AdS2 solutions in heterotic supergravity up to second order in α' for which the fields are smooth and the internal space is smooth and compact without boundary. We investigate a class of nearly supersymmetric horizons, for which the gravitino Killing spinor equation is satisfied on the spatial cross sections but not the dilatino one, and present a description of their geometry.

  20. Statistically accurate simulations for atmospheric flows

    NASA Astrophysics Data System (ADS)

    Dubinkina, S.

    2009-04-01

    A Hamiltonian particle-mesh method for quasi-geostrophic potential vorticity flow is proposed. The microscopic vorticity field at any time is an area- and energy-conserving rearrangement of the initial field. We construct a statistical mechanics theory to explain the long-time behavior of the numerical solution. The statistical theory correctly predicts the spatial distribution of particles as a function of their point vorticity. A nonlinear relation between the coarse grained mean stream function and mean vorticity fields is predicted, consistent with the preservation of higher moments of potential vorticity reported in [R. V. Abramov, A. J. Majda 2003, PNAS 100 3841--3846].

  1. Accurate verification of the conserved-vector-current and standard-model predictions

    SciTech Connect

    Sirlin, A.; Zucchini, R.

    1986-10-20

    An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by ≲1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.

  2. Accurately Mapping M31's Microlensing Population

    NASA Astrophysics Data System (ADS)

    Crotts, Arlin

    2004-07-01

    We propose to augment an existing microlensing survey of M31 with source identifications provided by a modest amount of ACS {and WFPC2 parallel} observations to yield an accurate measurement of the masses responsible for microlensing in M31, and presumably much of its dark matter. The main benefit of these data is the determination of the physical {or "Einstein"} timescale of each microlensing event, rather than an effective {"FWHM"} timescale, allowing masses to be determined more than twice as accurately as without HST data. The Einstein timescale is the ratio of the lensing cross-sectional radius to the relative velocity. Velocities are known from kinematics, and the cross-section is directly proportional to the {unknown} lensing mass. We cannot easily measure these quantities without knowing the amplification, hence the baseline magnitude, which requires the resolution of HST to find the source star. This makes a crucial difference because M31 lens mass determinations can be more accurate than those towards the Magellanic Clouds through our Galaxy's halo {for the same number of microlensing events} due to the better constrained geometry in the M31 microlensing situation. Furthermore, our larger survey, just completed, should yield at least 100 M31 microlensing events, more than any Magellanic survey. A small amount of ACS+WFPC2 imaging will deliver the potential of this large database {about 350 nights}. For the whole survey {and a delta-function mass distribution} the mass error should approach only about 15%, or about 6% error in slope for a power-law distribution. These results will better allow us to pinpoint the lens halo fraction, and the shape of the halo lens spatial distribution, and allow generalization/comparison of the nature of halo dark matter in spiral galaxies. In addition, we will be able to establish the baseline magnitude for about 50,000 variable stars, as well as measure an unprecedentedly detailed color-magnitude diagram and luminosity
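
    The mass-timescale relation at the heart of the proposal can be sketched with the standard point-lens formulas; the constants and example distances below are illustrative, not the survey's adopted values:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def einstein_radius(mass_kg, d_lens, d_source):
    """Einstein radius in the lens plane (all distances in metres):
    R_E = sqrt(4GM/c^2 * D_L * D_LS / D_S)."""
    d_ls = d_source - d_lens
    return math.sqrt(4.0 * G * mass_kg / C**2 * d_lens * d_ls / d_source)

def einstein_timescale(mass_kg, d_lens, d_source, v_rel):
    """Physical ("Einstein") timescale: the crossing time of the Einstein
    radius at the lens-source relative velocity. Scales as sqrt(mass)."""
    return einstein_radius(mass_kg, d_lens, d_source) / v_rel
```

    Since the timescale scales as the square root of the lens mass, a measured Einstein timescale pins down the mass once the geometry and kinematics are fixed, which is why M31's well-constrained geometry helps.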

  3. Climate Models have Accurately Predicted Global Warming

    NASA Astrophysics Data System (ADS)

    Nuccitelli, D. A.

    2016-12-01

    Climate model projections of global temperature changes over the past five decades have proven remarkably accurate, and yet the myth that climate models are inaccurate or unreliable has formed the basis of many arguments denying anthropogenic global warming and the risks it poses to the climate system. Here we compare average global temperature predictions made by both mainstream climate scientists using climate models, and by contrarians using less physically-based methods. We also explore the basis of the myth by examining specific arguments against climate model accuracy and their common characteristics of science denial.

  4. The first accurate description of an aurora

    NASA Astrophysics Data System (ADS)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  5. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

    Determining accurate distances to nearby or distant galaxies is a conceptually simple, yet practically complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, with ~100 RR Lyrae found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields.
I finally present photometry for the Wolf-Rayet binary WR 20a

  6. New law requires 'medically accurate' lesson plans.

    PubMed

    1999-09-17

    The California Legislature has passed a bill requiring that all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  7. Rapid and Accurate C-V Measurements

    PubMed Central

    Kim, Ji-Hong; Shrestha, Pragya R.; Campbell, Jason P.; Ryan, Jason T.; Nminibapiel, David; Kopanski, Joseph J.

    2017-01-01

    We report a new technique for the rapid measurement of full capacitance-voltage (C-V) characteristic curves. The displacement current from a 100 MHz applied sine-wave, which swings from accumulation to strong inversion, is digitized directly using an oscilloscope from the metal-oxide-semiconductor (MOS) capacitor under test. A C-V curve can be constructed directly from this data but is severely distorted due to non-ideal behavior of real measurement systems. The key advance of this work is to extract the system response function using the same measurement set-up and a known MOS capacitor. The system response correction to the measured C-V curve of the unknown MOS capacitor can then be done by simple deconvolution. No de-skewing and/or leakage current correction is necessary, making it a very simple and quick measurement. Excellent agreement between the new fast C-V method and C-V measured conventionally by an LCR meter is achieved. The total time required for measurement and analysis is approximately 2 seconds, which is limited by our equipment. PMID:28579633
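
    The correction-by-deconvolution step described above can be sketched in a few lines; the synthetic distortion kernel below is a hypothetical stand-in for the real measurement chain and the known MOS capacitor:

```python
import numpy as np

def system_response(measured_ref, true_ref):
    """Estimate the measurement chain's transfer function from a known
    reference device: H = FFT(measured) / FFT(true)."""
    return np.fft.fft(measured_ref) / np.fft.fft(true_ref)

def correct_cv(measured, H, eps=1e-12):
    """Remove the system response by simple deconvolution (division in
    the Fourier domain); eps guards against division by tiny values."""
    return np.real(np.fft.ifft(np.fft.fft(measured) / (H + eps)))

# Demo with a synthetic distortion kernel standing in for the real setup:
rng = np.random.default_rng(0)
kernel = np.exp(-np.arange(64) / 5.0)
kernel /= kernel.sum()
true_ref = rng.random(64) + 1.0                      # "known" C-V curve
blur = lambda s: np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(kernel)))
H = system_response(blur(true_ref), true_ref)
unknown = rng.random(64) + 1.0                       # device under test
recovered = correct_cv(blur(unknown), H)             # close to `unknown`
```

    Because the same linear, time-invariant response is divided out in one step, no separate de-skewing or leakage correction is needed, which is the point made in the abstract.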

  8. Social contagion of correct and incorrect information in memory.

    PubMed

    Rush, Ryan A; Clark, Steven E

    2014-01-01

    The present study examines how discussion between individuals regarding a shared memory affects their subsequent individual memory reports. In three experiments pairs of participants recalled items from photographs of common household scenes, discussed their recall with each other, and then recalled the items again individually. Results showed that after the discussion, individuals recalled more correct items and more incorrect items, with very small non-significant increases, or no change, in recall accuracy. The information people were exposed to during the discussion was generally accurate, although not as accurate as individuals' initial recall. Individuals incorporated correct exposure items into their subsequent recall at a higher rate than incorrect exposure items. Participants who were initially more accurate became less accurate, and initially less-accurate participants became more accurate as a result of their discussion. Comparisons to no-discussion control groups suggest that the effects were not simply the product of repeated recall opportunities or self-cueing, but rather reflect the transmission of information between individuals.

  9. Conductivity Cell Thermal Inertia Correction Revisited

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2012-12-01

    Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of water temperature within their conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to apply to flow speeds spanning well over an order of magnitude, both within and outside a conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as tethered platforms. The length scale formed as the product of cell encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but at an order of magnitude in energy savings. Flow conditions around a cell's exterior are found to be of comparable importance to thermal inertia response as flushing speed. Simplification of cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises from both neglect of higher modes and numerical discretization of the correction scheme, both of which can be easily quantified.
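
    A minimal sketch of the widely used discrete form of the Lueck and Picklo (1990) two-parameter correction; the default α and τ are illustrative placeholders, not values recommended for any particular cell:

```python
import numpy as np

def thermal_inertia_correction(temp, dt, alpha=0.03, tau=7.0):
    """Recursive thermal-inertia correction in the form commonly used in
    CTD processing (sketch, assuming the standard discretization):
        c[n] = -b*c[n-1] + a*(T[n] - T[n-1])
    with a = 4*fn*alpha*tau / (1 + 4*fn*tau), b = 1 - 2*a/alpha, and
    fn = 1/(2*dt) the Nyquist frequency. Returns the corrected series."""
    fn = 1.0 / (2.0 * dt)
    a = 4.0 * fn * alpha * tau / (1.0 + 4.0 * fn * tau)
    b = 1.0 - 2.0 * a / alpha
    c = np.zeros_like(temp, dtype=float)
    for n in range(1, len(temp)):
        c[n] = -b * c[n - 1] + a * (temp[n] - temp[n - 1])
    return temp + c
```

    The correction is zero in uniform water and largest when the cell crosses sharp thermal gradients, consistent with the length-scale argument in the abstract.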

  10. Accurate taxonomic assignment of short pyrosequencing reads.

    PubMed

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy-dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
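
    The best-precision-and-recall mapping can be sketched directly; this is a naive linear scan over nodes rather than the paper's linear-time suffix-array algorithm, with a toy dict-based taxonomy:

```python
def best_assignment(taxonomy, matches):
    """taxonomy: dict mapping each node to the set of reference sequences
    below it. matches: set of sequences matching the read. Returns the
    node maximizing the F-measure (harmonic mean of precision and recall),
    one natural way to combine the two scores."""
    best_node, best_f = None, -1.0
    for node, leaves in taxonomy.items():
        hit = len(matches & leaves)
        if hit == 0:
            continue
        precision = hit / len(leaves)
        recall = hit / len(matches)
        f = 2 * precision * recall / (precision + recall)
        if f > best_f:
            best_node, best_f = node, f
    return best_node, best_f
```

    When all matches fall in one small clade, that clade wins over the lowest common ancestor; when matches are split across clades, a broader node is preferred, exactly the trade-off the abstract describes.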

  11. Accurate pose estimation for forensic identification

    NASA Astrophysics Data System (ADS)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  12. Accurate Stellar Parameters for Exoplanet Host Stars

    NASA Astrophysics Data System (ADS)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host is relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  13. Fast and accurate propagation of coherent light

    PubMed Central

    Lewis, R. D.; Beylkin, G.; Monzón, L.

    2013-01-01

    We describe a fast algorithm to propagate, for any user-specified accuracy, a time-harmonic electromagnetic field between two parallel planes separated by a linear, isotropic and homogeneous medium. The analytical formulation of this problem (ca 1897) requires the evaluation of the so-called Rayleigh–Sommerfeld integral. If the distance between the planes is small, this integral can be accurately evaluated in the Fourier domain; if the distance is very large, it can be accurately approximated by asymptotic methods. In the large intermediate region of practical interest, where the oscillatory Rayleigh–Sommerfeld kernel must be applied directly, current numerical methods can be highly inaccurate without indicating this fact to the user. In our approach, for any user-specified accuracy ϵ>0, we approximate the kernel by a short sum of Gaussians with complex-valued exponents, and then efficiently apply the result to the input data using the unequally spaced fast Fourier transform. The resulting algorithm has computational complexity , where we evaluate the solution on an N×N grid of output points given an M×M grid of input samples. Our algorithm maintains its accuracy throughout the computational domain. PMID:24204184
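
    The small-separation Fourier-domain evaluation mentioned above is the standard angular spectrum method; a minimal sketch follows (this is not the authors' Gaussian-sum algorithm for the intermediate regime):

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, dx, z):
    """Propagate field u0 (N x N samples, spacing dx) a distance z by
    multiplying its Fourier transform by the transfer function
    exp(i*kz*z), kz = sqrt(k^2 - kx^2 - ky^2). Spatial frequencies with
    kx^2 + ky^2 > k^2 are evanescent and decay exponentially."""
    n = u0.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(2 * np.pi * fx, 2 * np.pi * fx)
    kz = np.sqrt((k**2 - kx**2 - ky**2).astype(complex))
    return np.fft.ifft2(np.fft.fft2(u0) * np.exp(1j * kz * z))
```

    As the abstract notes, this approach is accurate only for small plane separations; at large distances sampling the rapidly oscillating transfer function becomes the limitation the Gaussian-sum algorithm is designed to avoid.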

  14. Accurate basis set truncation for wavefunction embedding

    NASA Astrophysics Data System (ADS)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  15. Radiometric correction of scatterometric wind measurements

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Use of a spaceborne scatterometer to determine the ocean-surface wind vector requires accurate measurement of radar backscatter from the ocean. Such measurements are hindered by the effect of attenuation in the precipitating regions over the sea. The attenuation can be estimated reasonably well with the knowledge of brightness temperatures observed by a microwave radiometer. The NASA SeaWinds scatterometer is to be flown on the Japanese ADEOS2. The AMSR multi-frequency radiometer on ADEOS2 will be used to correct errors due to attenuation in the SeaWinds scatterometer measurements. Here we investigate the errors in the attenuation corrections. Errors would be quite small if the radiometer and scatterometer footprints were identical and filled with uniform rain. However, the footprints are not identical, and because of their size one cannot expect uniform rain across each cell. Simulations were performed with the SeaWinds scatterometer (13.4 GHz) and AMSR (18.7 GHz) footprints with gradients of attenuation. The study shows that the resulting wind speed errors after correction (using the radiometer) are small for most cases. However, variations in the degree of overlap between the radiometer and scatterometer footprints affect the accuracy of the wind speed measurements.
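
    The correction itself is simple once the one-way attenuation is known; the brightness-temperature inversion below is a deliberately crude single-layer illustration, not the AMSR retrieval algorithm:

```python
import math

def correct_sigma0(sigma0_meas_db, one_way_atten_db):
    """Radar backscatter crosses the rain layer twice, so the correction
    adds twice the one-way attenuation (working in dB)."""
    return sigma0_meas_db + 2.0 * one_way_atten_db

def atten_from_tb(tb_obs, tb_clear, t_eff=275.0):
    """Crude single-layer inversion (illustrative assumption only): for an
    absorbing layer at effective temperature t_eff,
        Tb = t_eff - (t_eff - tb_clear) * exp(-tau),
    so solve for the one-way optical depth tau and convert it to dB."""
    transmittance = (t_eff - tb_obs) / (t_eff - tb_clear)
    tau = -math.log(transmittance)
    return 10.0 * tau / math.log(10.0)
```

    The footprint-mismatch problem studied in the abstract enters here: if the radiometer samples a different rain cell than the scatterometer, the attenuation fed into the correction is biased.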

  16. Aperiodicity Correction for Rotor Tip Vortex Measurements

    NASA Technical Reports Server (NTRS)

    Ramasamy, Manikandan; Paetzel, Ryan; Bhagwat, Mahendra J.

    2011-01-01

    The initial roll-up of a tip vortex trailing from a model-scale, hovering rotor was measured using particle image velocimetry. The unique feature of the measurements was that a microscope was attached to the camera to allow much higher spatial resolution than hitherto possible. This also posed some unique challenges. In particular, the existing methodologies to correct for aperiodicity in the tip vortex locations could not be easily extended to the present measurements. The difficulty stemmed from the inability to accurately determine the vortex center, which is a prerequisite for the correction procedure. A new method is proposed for determining the vortex center, as well as the vortex core properties, using a least-squares fit approach. This approach has the obvious advantage that the properties are derived from not just a few points near the vortex core, but from a much larger area of flow measurements. Results clearly demonstrate the advantage in the form of reduced variation in the estimated core properties, and also the self-consistent results obtained using three different aperiodicity correction methods.
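
    A whole-field least-squares fit of this kind can be sketched as below, using a Lamb-Oseen vortex as an assumed core model (the abstract does not specify the authors' model):

```python
import numpy as np
from scipy.optimize import least_squares

def lamb_oseen_velocity(x, y, xc, yc, gamma, rc):
    """(u, v) of a Lamb-Oseen vortex (assumed illustrative model):
    V_theta(r) = Gamma/(2*pi*r) * (1 - exp(-r^2/rc^2))."""
    dx, dy = x - xc, y - yc
    r2 = dx**2 + dy**2
    vt_over_r = gamma / (2 * np.pi * r2) * (1.0 - np.exp(-r2 / rc**2))
    return -vt_over_r * dy, vt_over_r * dx

def fit_vortex(x, y, u, v, p0):
    """Fit centre (xc, yc), circulation gamma and core radius rc to all
    measured vectors at once, so the estimate draws on the whole flow
    field rather than a few points near the core."""
    def residual(p):
        um, vm = lamb_oseen_velocity(x, y, *p)
        return np.concatenate([(um - u).ravel(), (vm - v).ravel()])
    return least_squares(residual, p0).x
```

    Fitting the centre and core properties jointly is what reduces the scatter in the estimated properties and makes the aperiodicity correction self-consistent.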

  17. Atmospheric correction with the Bayesian empirical line.

    PubMed

    Thompson, David R; Roberts, Dar A; Gao, Bo Cai; Green, Robert O; Guild, Liane; Hayashi, Kendra; Kudela, Raphael; Palacios, Sherry

    2016-02-08

    Atmospheric correction of visible/infrared spectra traditionally involves either (1) physics-based methods using Radiative Transfer Models (RTMs), or (2) empirical methods using in situ measurements. Here a more general probabilistic formulation unifies the approaches and enables combined solutions. The technique is simple to implement and provides stable results from one or more reference spectra. This makes empirical corrections practical for large or remote environments where it is difficult to acquire coincident field data. First, we use a physics-based solution to define a prior distribution over reflectances and their correction coefficients. We then incorporate reference measurements via Bayesian inference, leading to a Maximum A Posteriori estimate which is generally more accurate than pure physics-based methods yet more stable than pure empirical methods. Gaussian assumptions enable a closed form solution based on Tikhonov regularization. We demonstrate performance in atmospheric simulations and historical data from the "Classic" Airborne Visible Infrared Imaging Spectrometer (AVIRIS-C) acquired during the HyspIRI mission preparatory campaign.
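
    For a linear empirical-line model with Gaussian prior and noise, the MAP estimate has the closed Tikhonov form described above; the names, shapes and per-channel framing below are illustrative, not the paper's code:

```python
import numpy as np

def map_correction(radiance, reflectance, prior_coef, prior_cov, noise_var):
    """MAP estimate of empirical-line coefficients (gain, offset) for one
    channel, combining a physics-based prior (from an RTM) with in-situ
    reference pairs. Linear-Gaussian model => closed form:
        x = (A'A/s^2 + P)^-1 (A'y/s^2 + P*x0),  P = prior_cov^-1."""
    A = np.column_stack([radiance, np.ones_like(radiance)])
    P = np.linalg.inv(prior_cov)
    lhs = A.T @ A / noise_var + P
    rhs = A.T @ reflectance / noise_var + P @ prior_coef
    return np.linalg.solve(lhs, rhs)
```

    A weak prior recovers the pure empirical line; a strong prior falls back on the physics-based coefficients, which is exactly the stabilizing behaviour the abstract claims.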

  18. Accurate fundamental parameters for 23 bright solar-type stars

    NASA Astrophysics Data System (ADS)

    Bruntt, H.; Bedding, T. R.; Quirion, P.-O.; Lo Curto, G.; Carrier, F.; Smalley, B.; Dall, T. H.; Arentoft, T.; Bazot, M.; Butler, R. P.

    2010-07-01

    We combine results from interferometry, asteroseismology and spectroscopy to determine accurate fundamental parameters of 23 bright solar-type stars, from spectral type F5 to K2 and luminosity classes III-V. For some stars we can use direct techniques to determine the mass, radius, luminosity and effective temperature, and we compare with indirect methods that rely on photometric calibrations or spectroscopic analyses. We use the asteroseismic information available in the literature to infer an indirect mass with an accuracy of 4-15 per cent. From indirect methods we determine luminosity and radius to 3 per cent. We find evidence that the luminosity from the indirect method is slightly overestimated (~ 5 per cent) for the coolest stars, indicating that their bolometric corrections (BCs) are too negative. For Teff we find a slight offset of -40 +/- 20K between the spectroscopic method and the direct method, meaning the spectroscopic temperatures are too high. From the spectroscopic analysis we determine the detailed chemical composition for 13 elements, including Li, C and O. The metallicity ranges from [Fe/H] = -1.7 to +0.4, and there is clear evidence for α-element enhancement in the metal-poor stars. We find no significant offset between the spectroscopic surface gravity and the value from combining asteroseismology with radius estimates. From the spectroscopy we also determine v sin i and we present a new calibration of macroturbulence and microturbulence. From the comparison between the results from the direct and spectroscopic methods we claim that we can determine Teff, log g and [Fe/H] with absolute accuracies of 80K, 0.08 and 0.07dex. Photometric calibrations of Strömgren indices provide accurate results for Teff and [Fe/H] but will be more uncertain for distant stars when interstellar reddening becomes important. The indirect methods are important to obtain reliable estimates of the fundamental parameters of relatively faint stars when interferometry

  19. Accurate Classification of RNA Structures Using Topological Fingerprints

    PubMed Central

    Li, Kejie; Gribskov, Michael

    2016-01-01

    While RNAs are well known to possess complex structures, functionally similar RNAs often have little sequence similarity. While the exact size and spacing of base-paired regions vary, functionally similar RNAs have pronounced similarity in the arrangement, or topology, of base-paired stems. Furthermore, predicted RNA structures often lack pseudoknots (a crucial aspect of biological activity), and are only partially correct, or incomplete. A topological approach addresses all of these difficulties. In this work we describe each RNA structure as a graph that can be converted to a topological spectrum (RNA fingerprint). The set of subgraphs in an RNA structure, its RNA fingerprint, can be compared with the fingerprints of other RNA structures to identify and correctly classify functionally related RNAs. Topologically similar RNAs can be identified even when a large fraction, up to 30%, of the stems are omitted, indicating that highly accurate structures are not necessary. We investigate the performance of the RNA fingerprint approach on a set of eight highly curated RNA families, with diverse sizes and functions, containing pseudoknots, and with little sequence similarity–an especially difficult test set. In spite of the difficult test set, the RNA fingerprint approach is very successful (ROC AUC > 0.95). Due to the inclusion of pseudoknots, the RNA fingerprint approach both covers a wider range of possible structures than methods based only on secondary structure, and its tolerance for incomplete structures suggests that it can be applied even to predicted structures. Source code is freely available at https://github.rcac.purdue.edu/mgribsko/XIOS_RNA_fingerprint. PMID:27755571

  20. Building dynamic population graph for accurate correspondence detection.

    PubMed

    Du, Shaoyi; Guo, Yanrong; Sanroma, Gerard; Ni, Dong; Wu, Guorong; Shen, Dinggang

    2015-12-01

    In medical imaging studies, there is an increasing trend for discovering the intrinsic anatomical difference across individual subjects in a dataset, such as hand images for skeletal bone age estimation. Pair-wise matching is often used to detect correspondences between each individual subject and a pre-selected model image with manually-placed landmarks. However, the large anatomical variability across individual subjects can easily compromise such pair-wise matching step. In this paper, we present a new framework to simultaneously detect correspondences among a population of individual subjects, by propagating all manually-placed landmarks from a small set of model images through a dynamically constructed image graph. Specifically, we first establish graph links between models and individual subjects according to pair-wise shape similarity (the forward step). Next, we detect correspondences for the individual subjects with direct links to any of model images, which is achieved by a new multi-model correspondence detection approach based on our recently-published sparse point matching method. To correct those inaccurate correspondences, we further apply an error detection mechanism to automatically detect wrong correspondences and then update the image graph accordingly (the backward step). After that, all subject images with detected correspondences are included in the set of model images, and the above two steps of graph expansion and error correction are repeated until accurate correspondences for all subject images are established. Evaluations on real hand X-ray images demonstrate that our proposed method using a dynamic graph construction approach can achieve much higher accuracy and robustness, when compared with the state-of-the-art pair-wise correspondence detection methods as well as a similar method but using static population graph.

  1. Motor equivalence during multi-finger accurate force production

    PubMed Central

    Mattos, Daniela; Schöner, Gregor; Zatsiorsky, Vladimir M.; Latash, Mark L.

    2014-01-01

    We explored the stability of multi-finger cyclical accurate force production by analysis of responses to small perturbations applied to one of the fingers and inter-cycle analysis of variance. Healthy subjects performed two versions of the cyclical task, with and without an explicit target. The “inverse piano” apparatus was used to lift/lower a finger by 1 cm over 0.5 s; the subjects were always instructed to perform the task as accurately as they could at all times. Deviations in the spaces of finger forces and modes (hypothetical commands to individual fingers) were quantified in directions that did not change total force (motor equivalent) and in directions that changed the total force (non-motor equivalent). Motor equivalent deviations started immediately with the perturbation and increased progressively with time. After a sequence of lifting-lowering perturbations leading to the initial conditions, motor equivalent deviations were dominating. These phenomena were less pronounced for analysis performed on the total moment of force about an axis parallel to the forearm/hand. Analysis of inter-cycle variance showed consistently higher variance in a subspace that did not change the total force as compared to the variance that affected total force. We interpret the results as reflections of task-specific stability of the redundant multi-finger system. Large motor equivalent deviations suggest that reactions of the neuromotor system to a perturbation involve large changes of neural commands that do not affect salient performance variables, even during actions with the purpose of correcting those salient variables. Consistency of the analyses of motor equivalence and variance analysis provides additional support for the idea of task-specific stability ensured at a neural level. PMID:25344311
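
    The motor-equivalent/non-motor-equivalent split is a null-space projection with respect to the task Jacobian; for total force over four fingers the Jacobian is simply a row of ones. A minimal sketch:

```python
import numpy as np

def decompose_deviation(dev, jacobian):
    """Split a deviation vector in finger-force space into a
    motor-equivalent part (lies in the null space of the task Jacobian,
    so it leaves total force unchanged) and a non-motor-equivalent part
    (the component that changes total force)."""
    J = np.atleast_2d(jacobian).astype(float)
    # Orthogonal projector onto the row space of J:
    P_range = J.T @ np.linalg.pinv(J @ J.T) @ J
    non_me = P_range @ dev
    me = dev - non_me
    return me, non_me
```

    Comparing the variance of the two components across cycles is the inter-cycle analysis described above: task-specific stability predicts larger variance in the motor-equivalent subspace.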

  2. Microsatellites are molecular clocks that support accurate inferences about history.

    PubMed

    Sun, James X; Mullikin, James C; Patterson, Nick; Reich, David E

    2009-05-01

    Microsatellite length mutations are often modeled using the generalized stepwise mutation process, which is a type of random walk. If this model is sufficiently accurate, one can estimate the coalescence time between alleles of a locus after a mathematical transformation of the allele lengths. When large-scale microsatellite genotyping first became possible, there was substantial interest in using this approach to make inferences about time and demography, but that interest has waned because it has not been possible to empirically validate the clock by comparing it with data in which the mutation process is well understood. We analyzed data from 783 microsatellite loci in human populations and 292 loci in chimpanzee populations, and compared them with up to one gigabase of aligned sequence data, where the molecular clock based upon nucleotide substitutions is believed to be reliable. We empirically demonstrate a remarkable linearity (r² > 0.95) between the microsatellite average square distance statistic and sequence divergence. We demonstrate that microsatellites are accurate molecular clocks for coalescent times of at least 2 million years (My). We apply this insight to confirm that the African populations San, Biaka Pygmy, and Mbuti Pygmy have the deepest coalescent times among populations in the Human Genome Diversity Project. Furthermore, we show that microsatellites support unbiased estimates of population differentiation (FST) that are less subject to ascertainment bias than single nucleotide polymorphism (SNP) FST. These results raise the prospect of using microsatellite data sets to determine parameters of population history. When genotyped along with SNPs, microsatellite data can also be used to correct for SNP ascertainment bias.
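
    The clock statistic can be sketched directly; the E[ASD] = 2μt relation is the generalized-stepwise-model assumption discussed above, and the mutation rate in the example is purely illustrative:

```python
import numpy as np

def average_square_distance(alleles_a, alleles_b):
    """ASD between two populations: the mean squared difference in repeat
    length over all cross-population pairs of alleles."""
    a = np.asarray(alleles_a, float)[:, None]
    b = np.asarray(alleles_b, float)[None, :]
    return np.mean((a - b) ** 2)

def divergence_time(asd, mutation_rate):
    """Under the generalized stepwise model E[ASD] = 2*mu*t, so a point
    estimate of the divergence time is t = ASD / (2*mu) generations."""
    return asd / (2.0 * mutation_rate)
```

    The linearity between ASD and sequence divergence reported in the abstract is what licenses reading ASD as a clock in the first place.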

  3. Accurate microfour-point probe sheet resistance measurements on small samples.

    PubMed

    Thorsteinsson, Sune; Wang, Fei; Petersen, Dirch H; Hansen, Torben Mikael; Kjaer, Daniel; Lin, Rong; Kim, Jang-Yong; Nielsen, Peter F; Hansen, Ole

    2009-05-01

    We show that accurate sheet resistance measurements on small samples may be performed using microfour-point probes without applying correction factors. Using dual configuration measurements, the sheet resistance may be extracted with high accuracy when the microfour-point probes are in proximity of a mirror plane on small samples with dimensions of a few times the probe pitch. We calculate theoretically the size of the "sweet spot," where sufficiently accurate sheet resistances result and show that even for very small samples it is feasible to do correction free extraction of the sheet resistance with sufficient accuracy. As an example, the sheet resistance of a 40 microm (50 microm) square sample may be characterized with an accuracy of 0.3% (0.1%) using a 10 microm pitch microfour-point probe and assuming a probe alignment accuracy of +/-2.5 microm.
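The dual-configuration idea can be illustrated with the classic van der Pauw relation between two four-point resistance configurations and the sheet resistance; the geometry factors for a micro four-point probe near a sample edge differ, so this is only a sketch of the numerical solve:

```python
import math

def sheet_resistance(r_a, r_b, tol=1e-12):
    """Solve the van der Pauw relation
        exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1
    for the sheet resistance R_s by Newton iteration."""
    r_s = math.pi * (r_a + r_b) / (2.0 * math.log(2.0))  # exact when r_a == r_b
    for _ in range(100):
        ea = math.exp(-math.pi * r_a / r_s)
        eb = math.exp(-math.pi * r_b / r_s)
        f = ea + eb - 1.0
        # derivative of f with respect to r_s
        df = (ea * math.pi * r_a + eb * math.pi * r_b) / r_s ** 2
        step = f / df
        r_s -= step
        if abs(step) < tol * r_s:
            break
    return r_s
```

For equal configuration resistances the relation reduces to the familiar R_s = pi*R/ln 2.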

  4. Airborne experiment results for spaceborne atmospheric synchronous correction system

    NASA Astrophysics Data System (ADS)

    Cui, Wenyu; Yi, Weining; Du, Lili; Liu, Xiao

    2015-10-01

The image quality of optical remote sensing satellites is affected by the atmosphere, so the images need to be corrected. Because atmospheric conditions vary in space and time, correction using synchronously measured atmospheric parameters can effectively improve remote sensing image quality. For this reason, a small, lightweight spaceborne instrument, the atmospheric synchronous correction device (airborne prototype), was developed by AIOFM of CAS (Anhui Institute of Optics and Fine Mechanics, Chinese Academy of Sciences). With this instrument, whose detection mode provides temporal synchronization and spatial coverage, atmospheric parameters consistent in time and space with the images to be corrected can be obtained, and the correction is then achieved with a radiative transfer model. To verify the technical process and performance of the spaceborne atmospheric correction system, the first airborne experiment was designed and completed. The experiment followed a "satellite-airborne-ground" synchronous measurement method. A high-resolution (0.4 m) camera and the atmospheric correction device were mounted on the aircraft, which photographed the ground simultaneously with the satellite observing from above. Aerosol optical depth (AOD) and columnar water vapor (CWV) over the imaged area were also acquired and used for the atmospheric correction of the satellite and aerial images. Experimental results show that correcting aviation and satellite images with the AOD and CWV retrieved from the device's data improves image definition and contrast by more than 30% and more than doubles the MTF, demonstrating that atmospheric correction of satellite images using data from the spaceborne atmospheric synchronous correction device is accurate and effective.
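The correction step amounts to inverting a radiative transfer model in which the synchronously retrieved AOD and CWV determine the path radiance and transmittance. A simplified single-band inversion (the parameter names `l_path`, `transmittance`, `e_sun` are illustrative assumptions, not the device's actual model):

```python
import math

def surface_reflectance(l_sensor, l_path, transmittance, e_sun, sun_zenith_deg, d=1.0):
    """Invert the simplified single-band radiative transfer equation
        L_sensor = L_path + transmittance * rho * E_sun * cos(theta) / (pi * d^2)
    for the surface reflectance rho (d = Earth-Sun distance in AU)."""
    cos_theta = math.cos(math.radians(sun_zenith_deg))
    return (math.pi * d ** 2 * (l_sensor - l_path)) / (transmittance * e_sun * cos_theta)
```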

  5. Radiometrically accurate FTS for atmospheric emission observations

    NASA Technical Reports Server (NTRS)

    Revercomb, H. E.; Smith, W. L.; Stromovsky, L. A.; Knuteson, R. O.; Buijs, H.

    1989-01-01

    The calibration and operational performance of an FTIR-based airborne high-resolution interferometer sounder (HIS) for use in broadband measurements of atmospheric emission at 3.8-16.6 microns are described. The radiometric and wavelength calibration procedures in the laboratory involved the use of reference black bodies at 300 and 245 K and the known wavelength of the HIS HeNe laser (corrected for FOV effects), respectively. The atmospheric verification program included downlooking observations from the NASA U2/ER2 aircraft (where resolving power of 1800-3800 was demonstrated) and uplooking observations from the ground; good agreement with data from balloon-borne radiosondes is obtained, with absolute temperature uncertainties of less than 0.5 K and reproducibilities of 0.1-0.2 K over most of the measurement domain.
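The two-blackbody approach is a two-point linear calibration against Planck radiances at the reference temperatures. A minimal sketch (illustrative, not the HIS processing code; it ignores the complex-spectrum phase handling a real FTS calibration requires):

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wavenumber_cm, temp_k):
    """Blackbody spectral radiance (per Hz) at a wavenumber given in cm^-1."""
    nu = wavenumber_cm * 100.0 * C  # convert to frequency in Hz
    return (2.0 * H * nu ** 3 / C ** 2) / math.expm1(H * nu / (KB * temp_k))

def calibrate(counts, counts_hot, counts_cold, wavenumber_cm,
              t_hot=300.0, t_cold=245.0):
    """Two-point radiometric calibration: map raw counts to radiance using
    views of hot and cold blackbody references."""
    b_hot = planck(wavenumber_cm, t_hot)
    b_cold = planck(wavenumber_cm, t_cold)
    gain = (b_hot - b_cold) / (counts_hot - counts_cold)
    return b_cold + gain * (counts - counts_cold)
```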

  6. Accurately Diagnosing and Treating Borderline Personality Disorder

    PubMed Central

    Gentile, Julie P.; Correll, Terry L.

    2010-01-01

    The high prevalence of comorbid bipolar and borderline personality disorders and some diagnostic criteria similar to both conditions present both diagnostic and therapeutic challenges. This article delineates certain symptoms which, by careful history taking, may be attributed more closely to one of these two disorders. Making the correct primary diagnosis along with comorbid psychiatric conditions and choosing the appropriate type of psychotherapy and pharmacotherapy are critical steps to a patient's recovery. In this article, we will use a case example to illustrate some of the challenges the psychiatrist may face in diagnosing and treating borderline personality disorder. In addition, we will explore treatment strategies, including various types of therapy modalities and medication classes, which may prove effective in stabilizing or reducing a broad range of symptomotology associated with borderline personality disorder. PMID:20508805

  7. A modified impression technique for accurate registration of peri-implant soft tissues.

    PubMed

    Attard, Nikolai; Barzilay, Izchak

    2003-02-01

    Replacement of single missing teeth with an implant-supported restoration is recognized as a highly successful treatment. An impression technique for peri-implant soft-tissue replication in an anterior zone is described. The technique involves use of an interim restoration as an abutment for the final impression. This allows accurate duplication of the soft tissues and fabrication of a final restoration with the correct emergence profile.

  8. How to conduct and interpret ITC experiments accurately for cyclodextrin-guest interactions.

    PubMed

    Bouchemal, Kawthar; Mazzaferro, Silvia

    2012-06-01

Isothermal titration calorimetry (ITC) is one of the most interesting methods for the characterization of the interaction mechanisms of cyclodextrins (CDs) with drugs. In this review we explain how to conduct ITC experiments correctly for CD-guest interactions, how to choose an appropriate fitting model for the titration curve and how to carefully interpret the ITC results. Finally, the use of ITC for the characterization of CD-containing nanoparticles is discussed.
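For the common 1:1 CD-guest model, the fitted quantities (association constant K and enthalpy dH) predict the heat through the exact quadratic solution for the complex concentration. A hedged sketch of that core relation (not a full titration-curve fitter):

```python
import math

def complex_concentration(cd_total, guest_total, k_assoc):
    """Equilibrium 1:1 complex concentration, from the exact quadratic
    [CDG]^2 - (C0 + G0 + 1/K)[CDG] + C0*G0 = 0 (smaller root is physical)."""
    kd = 1.0 / k_assoc
    b = cd_total + guest_total + kd
    return (b - math.sqrt(b * b - 4.0 * cd_total * guest_total)) / 2.0

def cumulative_heat(cd_total, guest_total, k_assoc, delta_h, cell_volume):
    """Total heat evolved for a single-site model: Q = V * dH * [CDG]."""
    return cell_volume * delta_h * complex_concentration(cd_total, guest_total, k_assoc)
```

The per-injection heat measured by the calorimeter is the difference in cumulative Q between successive injections.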

  9. Correction, improvement and model verification of CARE 3, version 3

    NASA Technical Reports Server (NTRS)

    Rose, D. M.; Manke, J. W.; Altschul, R. E.; Nelson, D. L.

    1987-01-01

An independent verification of the CARE 3 mathematical model and computer code was conducted and reported in NASA Contractor Report 166096, Review and Verification of CARE 3 Mathematical Model and Code: Interim Report. The study uncovered some implementation errors that were corrected and are reported in this document. The corrected CARE 3 program is called version 4. The document Correction, Improvement, and Model Verification of CARE 3, Version 3 was written in April 1984. It is being published now because it has been determined to contain a more accurate representation of CARE 3 than the preceding document of April 1983. This edition supersedes NASA-CR-166122, 'Correction and Improvement of CARE 3, Version 3,' April 1983.

  10. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
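Symmetric orthogonalisation replaces a matrix of ROI time-courses with the closest matrix having orthonormal columns (the orthogonal polar factor), so that, unlike sequential Gram-Schmidt, no ROI is privileged by its position. A pure-Python sketch using the Newton-Schulz polar iteration (an assumed numerical route chosen here for self-containedness, not necessarily the authors'):

```python
def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def transpose(a):
    return [list(row) for row in zip(*a)]

def symmetric_orthogonalise(x, iters=50):
    """Closest matrix with orthonormal columns (orthogonal polar factor
    U*V^T of the SVD), computed by Newton-Schulz iteration.  The result
    is invariant to the ordering of the columns (ROIs)."""
    # Pre-scale so all singular values lie in (0, sqrt(3)) for convergence
    norm = sum(v * v for row in x for v in row) ** 0.5
    x = [[v / norm for v in row] for row in x]
    n = len(x[0])
    for _ in range(iters):
        g = matmul(transpose(x), x)  # n x n Gram matrix
        m = [[(3.0 * (i == j) - g[i][j]) / 2.0 for j in range(n)] for i in range(n)]
        x = matmul(x, m)             # x <- x * (3I - x^T x) / 2
    return x
```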

  11. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  12. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  13. Micron Accurate Absolute Ranging System: Range Extension

    NASA Technical Reports Server (NTRS)

    Smalley, Larry L.; Smith, Kely L.

    1999-01-01

The purpose of this research is to investigate Fresnel diffraction as a means of obtaining absolute distance measurements with micron or greater accuracy. It is believed that such a system would prove useful to the Next Generation Space Telescope (NGST) as a non-intrusive, non-contact measuring system for use with secondary concentrator station-keeping systems. The present research attempts to validate past experiments and develop ways to apply the phenomenon of Fresnel diffraction to micron-accurate measurement. This report discusses past research on the phenomenon and the basis of using Fresnel diffraction for distance metrology. The apparatus used in the recent investigations, the experimental procedures, and preliminary results are discussed in detail. Continued research and equipment requirements for extending the effective range of Fresnel diffraction systems are also described.

  14. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  16. Practical aspects of spatially high accurate methods

    NASA Technical Reports Server (NTRS)

    Godfrey, Andrew G.; Mitchell, Curtis R.; Walters, Robert W.

    1992-01-01

    The computational qualities of high order spatially accurate methods for the finite volume solution of the Euler equations are presented. Two dimensional essentially non-oscillatory (ENO), k-exact, and 'dimension by dimension' ENO reconstruction operators are discussed and compared in terms of reconstruction and solution accuracy, computational cost and oscillatory behavior in supersonic flows with shocks. Inherent steady state convergence difficulties are demonstrated for adaptive stencil algorithms. An exact solution to the heat equation is used to determine reconstruction error, and the computational intensity is reflected in operation counts. Standard MUSCL differencing is included for comparison. Numerical experiments presented include the Ringleb flow for numerical accuracy and a shock reflection problem. A vortex-shock interaction demonstrates the ability of the ENO scheme to excel in simulating unsteady high-frequency flow physics.
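The abstract compares ENO reconstruction against standard MUSCL differencing. As a point of reference, a minimal MUSCL reconstruction with the minmod limiter (a textbook sketch, not the paper's ENO or k-exact operators):

```python
def minmod(a, b):
    """Minmod slope limiter: zero at extrema, smaller-magnitude slope otherwise."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_faces(cells):
    """Reconstruct (left, right) states at each interior cell face from
    cell averages, second-order accurate on smooth data yet
    non-oscillatory near discontinuities."""
    n = len(cells)
    slopes = [0.0] * n
    for i in range(1, n - 1):
        slopes[i] = minmod(cells[i] - cells[i - 1], cells[i + 1] - cells[i])
    # Face between cells i and i+1: extrapolate from both sides
    return [(cells[i] + 0.5 * slopes[i], cells[i + 1] - 0.5 * slopes[i + 1])
            for i in range(1, n - 2)]
```

On linear data the left and right face states coincide, recovering exact second-order reconstruction; at a jump the limiter drops to first order instead of oscillating.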

  17. Accurate Telescope Mount Positioning with MEMS Accelerometers

    NASA Astrophysics Data System (ADS)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent of other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field of view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented as part of a telescope control system.
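The stateless part of such positioning comes from the fact that a static accelerometer measures the gravity vector, from which mount tilt follows directly. A minimal sketch (assumed axis convention: x forward, y left, z up; not the authors' calibration pipeline):

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Pitch and roll (degrees) of a static mount from the measured
    gravity direction in the accelerometer frame."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```

Real systems must additionally handle sensor bias, scale-factor error and noise averaging to get below an arcminute.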

  18. Obtaining accurate translations from expressed sequence tags.

    PubMed

    Wasmuth, James; Blaxter, Mark

    2009-01-01

    The genomes of an increasing number of species are being investigated through the generation of expressed sequence tags (ESTs). However, ESTs are prone to sequencing errors and typically define incomplete transcripts, making downstream annotation difficult. Annotation would be greatly improved with robust polypeptide translations. Many current solutions for EST translation require a large number of full-length gene sequences for training purposes, a resource that is not available for the majority of EST projects. As part of our ongoing EST programs investigating these "neglected" genomes, we have developed a polypeptide prediction pipeline, prot4EST. It incorporates freely available software to produce final translations that are more accurate than those derived from any single method. We describe how this integrated approach goes a long way to overcoming the deficit in training data.

  19. LSM: perceptually accurate line segment merging

    NASA Astrophysics Data System (ADS)

    Hamid, Naila; Khan, Nazar

    2016-11-01

    Existing line segment detectors tend to break up perceptually distinct line segments into multiple segments. We propose an algorithm for merging such broken segments to recover the original perceptually accurate line segments. The algorithm proceeds by grouping line segments on the basis of angular and spatial proximity. Then those line segment pairs within each group that satisfy unique, adaptive mergeability criteria are successively merged to form a single line segment. This process is repeated until no more line segments can be merged. We also propose a method for quantitative comparison of line segment detection algorithms. Results on the York Urban dataset show that our merged line segments are closer to human-marked ground-truth line segments compared to state-of-the-art line segment detection algorithms.
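The grouping-and-merging step can be sketched for a single pair of segments: merge when orientations and endpoint gap are within thresholds, keeping the two most distant endpoints. The thresholds here are fixed illustrative values, whereas the paper's mergeability criteria are adaptive:

```python
import math

def try_merge(seg1, seg2, max_angle_deg=5.0, max_gap=10.0):
    """Merge two nearly collinear segments ((x1,y1),(x2,y2)) into one,
    or return None if they fail the angular/spatial proximity tests."""
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.atan2(y2 - y1, x2 - x1) % math.pi  # undirected orientation
    da = abs(angle(seg1) - angle(seg2))
    da = min(da, math.pi - da)
    if math.degrees(da) > max_angle_deg:
        return None
    if min(math.dist(p, q) for p in seg1 for q in seg2) > max_gap:
        return None
    # The merged segment spans the two farthest-apart endpoints
    pts = [*seg1, *seg2]
    return max(((p, q) for p in pts for q in pts),
               key=lambda pq: math.dist(*pq))
```

Iterating this test over segment pairs within each proximity group, until no pair merges, mirrors the repeated-merging loop described above.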

  20. Magnetic ranging tool accurately guides replacement well

    SciTech Connect

    Lane, J.B.; Wesson, J.P. )

    1992-12-21

    This paper reports on magnetic ranging surveys and directional drilling technology which accurately guided a replacement well bore to intersect a leaking gas storage well with casing damage. The second well bore was then used to pump cement into the original leaking casing shoe. The repair well bore kicked off from the surface hole, bypassed casing damage in the middle of the well, and intersected the damaged well near the casing shoe. The repair well was subsequently completed in the gas storage zone near the original well bore, salvaging the valuable bottom hole location in the reservoir. This method would prevent the loss of storage gas, and it would prevent a potential underground blowout that could permanently damage the integrity of the storage field.

  1. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, Douglas D.

    1985-01-01

The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  2. Apparatus for accurately measuring high temperatures

    DOEpatents

    Smith, D.D.

The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  3. Accurate Spacecraft Positioning by VLBI Imaging

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Tong, Fengxian; Shu, Fengchun

    2016-12-01

VLBI is a radio astronomy technique with very high angular resolution, and the Chinese VLBI Network has played an important role in the Chang'E series of lunar missions. In the upcoming Chinese lunar and deep space missions, the ability to achieve higher-resolution angular positions will be necessary. For these reasons, we have carried out research into accurate spacecraft positioning and have conducted several space vehicle phase-referencing positioning experiments using the Chinese VLBI Network and other VLBI antennas. This paper shows the VLBI spacecraft imaging position experiment results for the Chang'E lunar probes, the Mars Express probe, and the Rosetta probe. The results have validated phase-referencing VLBI with milli-arcsecond position resolution for deep space probes.

  4. Accurate radio positions with the Tidbinbilla interferometer

    NASA Technical Reports Server (NTRS)

    Batty, M. J.; Gulkis, S.; Jauncey, D. L.; Rayner, P. T.

    1979-01-01

    The Tidbinbilla interferometer (Batty et al., 1977) is designed specifically to provide accurate radio position measurements of compact radio sources in the Southern Hemisphere with high sensitivity. The interferometer uses the 26-m and 64-m antennas of the Deep Space Network at Tidbinbilla, near Canberra. The two antennas are separated by 200 m on a north-south baseline. By utilizing the existing antennas and the low-noise traveling-wave masers at 2.29 GHz, it has been possible to produce a high-sensitivity instrument with a minimum of capital expenditure. The north-south baseline ensures that a good range of UV coverage is obtained, so that sources lying in the declination range between about -80 and +30 deg may be observed with nearly orthogonal projected baselines of no less than about 1000 lambda. The instrument also provides high-accuracy flux density measurements for compact radio sources.

  5. Highly accurate articulated coordinate measuring machine

    DOEpatents

    Bieg, Lothar F.; Jokiel, Jr., Bernhard; Ensz, Mark T.; Watson, Robert D.

    2003-12-30

    Disclosed is a highly accurate articulated coordinate measuring machine, comprising a revolute joint, comprising a circular encoder wheel, having an axis of rotation; a plurality of marks disposed around at least a portion of the circumference of the encoder wheel; bearing means for supporting the encoder wheel, while permitting free rotation of the encoder wheel about the wheel's axis of rotation; and a sensor, rigidly attached to the bearing means, for detecting the motion of at least some of the marks as the encoder wheel rotates; a probe arm, having a proximal end rigidly attached to the encoder wheel, and having a distal end with a probe tip attached thereto; and coordinate processing means, operatively connected to the sensor, for converting the output of the sensor into a set of cylindrical coordinates representing the position of the probe tip relative to a reference cylindrical coordinate system.

  6. General lysosomal hydrolysis can process prorenin accurately.

    PubMed

    Xa, Lucie K; Lacombe, Marie-Josée; Mercure, Chantal; Lazure, Claude; Reudelhuber, Timothy L

    2014-09-01

    Renin, an aspartyl protease that catalyzes the rate-limiting step of the renin-angiotensin system, is first synthesized as an inactive precursor, prorenin. Prorenin is activated by the proteolytic removal of an amino terminal prosegment in the dense granules of the juxtaglomerular (JG) cells of the kidney by one or more proteases whose identity is uncertain but commonly referred to as the prorenin-processing enzyme (PPE). Because several extrarenal tissues secrete only prorenin, we tested the hypothesis that the unique ability of JG cells to produce active renin might be explained by the existence of a PPE whose expression is restricted to JG cells. We found that inducing renin production by the mouse kidney by up to 20-fold was not associated with the concomitant induction of candidate PPEs. Because the renin-containing granules of JG cells also contain several lysosomal hydrolases, we engineered mouse Ren1 prorenin to be targeted to the classical vesicular lysosomes of cultured HEK-293 cells, where it was accurately processed and stored. Furthermore, we found that HEK cell lysosomes hydrolyzed any artificial extensions placed on the protein and that active renin was extraordinarily resistant to proteolytic degradation. Altogether, our results demonstrate that accurate processing of prorenin is not restricted to JG cells but can occur in classical vesicular lysosomes of heterologous cells. The implication is that renin production may not require a specific PPE but rather can be achieved by general hydrolysis in the lysosome-like granules of JG cells. Copyright © 2014 the American Physiological Society.

  7. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  8. Comparing uncorrected and corrected bottom-hole temperatures using published correction methods for the onshore U.S. Gulf of Mexico

    USGS Publications Warehouse

    Kinney, Scott A.; Pearson, Ofori N.

    2016-01-01

Wireline logging temperature readings are known to be imprecise and must be corrected to more accurately reflect the true formation temperature. One issue with correcting logging temperatures is deciding which correction factor to use. Because the many published correction factors are based on different types of data and different locations, choosing one for a particular study area can be challenging. Some previous work has factored in only depth, while other work includes time since circulation as a major component. This data set comprises bottom-hole temperature, depth, and time since circulation, with seven different correction factors applied to the data. The data were consolidated into 6x6 mile cells, and a least-squares algorithm was used to create one temperature gradient (per correction factor) for each cell.
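The per-cell fitting step can be sketched as a least-squares geothermal gradient through corrected bottom-hole temperature points. This assumes a fixed-intercept linear model T = T_surface + g*depth (an illustrative simplification; the study's exact regression form is not specified in the abstract):

```python
def geothermal_gradient(depths_m, temps_c, surface_temp_c=20.0):
    """Least-squares geothermal gradient (degC per metre) for one grid
    cell, fitting T = surface_temp + g * depth through the cell's
    corrected bottom-hole temperature measurements."""
    num = sum(d * (t - surface_temp_c) for d, t in zip(depths_m, temps_c))
    den = sum(d * d for d in depths_m)
    return num / den
```

Running this once per correction factor, per cell, yields the gradient grids described above.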

  9. Accurate and precise determination of isotopic ratios by MC-ICP-MS: a review.

    PubMed

    Yang, Lu

    2009-01-01

For many decades the accurate and precise determination of isotope ratios has remained of strong interest to many researchers due to its important applications in earth, environmental, biological, archeological, and medical sciences. Traditionally, thermal ionization mass spectrometry (TIMS) has been the technique of choice for achieving the highest accuracy and precision. However, recent developments in multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) have brought a new dimension to this field. In addition to its simple and robust sample introduction, high sample throughput, and high mass resolution, the flat-topped peaks generated by this technique provide for accurate and precise determination of isotope ratios with precision reaching 0.001%, comparable to that achieved with TIMS. These features, in combination with the ability of the ICP source to ionize nearly all elements in the periodic table, have resulted in an increased use of MC-ICP-MS for such measurements in various sample matrices. To determine accurate and precise isotope ratios with MC-ICP-MS, utmost care must be exercised during sample preparation, optimization of the instrument, and mass bias corrections. Unfortunately, there are inconsistencies and errors evident in many MC-ICP-MS publications, including errors in mass bias correction models. This review examines "state-of-the-art" methodologies presented in the literature for achievement of precise and accurate determinations of isotope ratios by MC-ICP-MS. Some general rules for such accurate and precise measurements are suggested, and calculations of combined uncertainty of the data using a few common mass bias correction models are outlined.
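Among the common mass bias correction models mentioned above is the exponential law, which calibrates a bias factor on a reference isotope pair of known ratio and applies it to the measured pair. A minimal sketch:

```python
import math

def mass_bias_factor(r_measured_ref, r_true_ref, m_num, m_den):
    """Exponential-law mass bias factor f from a reference isotope pair
    with known true ratio:  R_true = R_measured * (m_num / m_den) ** f."""
    return math.log(r_true_ref / r_measured_ref) / math.log(m_num / m_den)

def correct_ratio(r_measured, m_num, m_den, f):
    """Apply the exponential-law mass bias correction to a measured ratio."""
    return r_measured * (m_num / m_den) ** f
```

In practice f is determined from an admixed or internal reference pair and applied to the analyte pair at its own masses; the numeric values below are illustrative only.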

  10. Gravitational correction to vacuum polarization

    NASA Astrophysics Data System (ADS)

    Jentschura, U. D.

    2015-02-01

    We consider the gravitational correction to (electronic) vacuum polarization in the presence of a gravitational background field. The Dirac propagators for the virtual fermions are modified to include the leading gravitational correction (potential term) which corresponds to a coordinate-dependent fermion mass. The mass term is assumed to be uniform over a length scale commensurate with the virtual electron-positron pair. The on-mass shell renormalization condition ensures that the gravitational correction vanishes on the mass shell of the photon, i.e., the speed of light is unaffected by the quantum field theoretical loop correction, in full agreement with the equivalence principle. Nontrivial corrections are obtained for off-shell, virtual photons. We compare our findings to other works on generalized Lorentz transformations and combined quantum-electrodynamic gravitational corrections to the speed of light which have recently appeared in the literature.

  11. Real-time lens distortion correction: speed, accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Bax, Michael R.; Shahidi, Ramin

    2014-11-01

    Optical lens systems suffer from nonlinear geometrical distortion. Optical imaging applications such as image-enhanced endoscopy and image-based bronchoscope tracking require correction of this distortion for accurate localization, tracking, registration, and measurement of image features. Real-time capability is desirable for interactive systems and live video. The use of a texture-mapping graphics accelerator, which is standard hardware on current motherboard chipsets and add-in video graphics cards, to perform distortion correction is proposed. Mesh generation for image tessellation, an error analysis, and performance results are presented. It is shown that distortion correction using commodity graphics hardware is substantially faster than using the main processor and can be performed at video frame rates (faster than 30 frames per second), and that the polar-based method of mesh generation proposed here is more accurate than a conventional grid-based approach. Using graphics hardware to perform distortion correction is not only fast and accurate but also efficient as it frees the main processor for other tasks, which is an important issue in some real-time applications.
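The paper corrects distortion via texture-mapped meshes on graphics hardware; as a hedged illustration of the underlying per-point correction (not the paper's polar mesh method), a simple two-coefficient radial polynomial model, with coefficients k1 and k2 assumed known from a prior calibration, can be sketched as:

```python
import numpy as np

def undistort_points(pts, center, k1, k2):
    """Map measured (distorted) image points toward corrected positions
    using a two-coefficient radial polynomial model. The coefficients
    k1, k2 and the distortion center are assumed to come from a prior
    lens calibration."""
    pts = np.asarray(pts, dtype=float)
    d = pts - center                               # offsets from center
    r2 = np.sum(d * d, axis=1, keepdims=True)      # squared radius
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2           # radial scaling
    return center + d * scale
```

With both coefficients zero the mapping is the identity; a positive k1 pushes points outward, which is the sense needed to undo barrel distortion.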

  12. Corrected ROC analysis for misclassified binary outcomes.

    PubMed

    Zawistowski, Matthew; Sussman, Jeremy B; Hofer, Timothy P; Bentley, Douglas; Hayward, Rodney A; Wiitala, Wyndy L

    2017-06-15

    Creating accurate risk prediction models from Big Data resources such as Electronic Health Records (EHRs) is a critical step toward achieving precision medicine. A major challenge in developing these tools is accounting for imperfect aspects of EHR data, particularly the potential for misclassified outcomes. Misclassification, the swapping of case and control outcome labels, is well known to bias effect size estimates for regression prediction models. In this paper, we study the effect of misclassification on accuracy assessment for risk prediction models and find that it leads to bias in the area under the curve (AUC) metric from standard ROC analysis. The extent of the bias is determined by the false positive and false negative misclassification rates as well as disease prevalence. Notably, we show that simply correcting for misclassification while building the prediction model is not sufficient to remove the bias in AUC. We therefore introduce an intuitive misclassification-adjusted ROC procedure that accounts for uncertainty in observed outcomes and produces bias-corrected estimates of the true AUC. The method requires that misclassification rates are either known or can be estimated, quantities typically required for the modeling step. The computational simplicity of our method is a key advantage, making it ideal for efficiently comparing multiple prediction models on very large datasets. Finally, we apply the correction method to a hospitalization prediction model from a cohort of over 1 million patients from the Veterans Health Administration's EHR. Implementations of the ROC correction are provided for Stata and R. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
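A minimal simulation, not the paper's adjusted ROC procedure, illustrates the bias the authors describe: flipping a fraction of case and control labels shrinks the rank-based AUC toward 0.5 even though the risk score itself is unchanged. The flip rates and prevalence below are arbitrary illustrative values.

```python
import numpy as np

def auc(score, label):
    """Rank-based (Mann-Whitney) estimate of the AUC."""
    m = len(score)
    order = np.argsort(score)
    rank = np.empty(m)
    rank[order] = np.arange(1, m + 1)
    pos = label.astype(bool)
    n1, n0 = pos.sum(), (~pos).sum()
    return (rank[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

rng = np.random.default_rng(0)
n = 20000
y = rng.random(n) < 0.3               # true labels, prevalence 0.3
score = y + rng.normal(0.0, 1.0, n)   # informative risk score

# flip 10% of cases and 5% of controls to mimic outcome misclassification
flip = np.where(y, rng.random(n) < 0.10, rng.random(n) < 0.05)
y_obs = np.where(flip, ~y, y)

auc_true, auc_obs = auc(score, y), auc(score, y_obs)
```

The observed AUC computed against the misclassified labels sits strictly between 0.5 and the AUC against the true labels, which is the gap the paper's bias-corrected estimator is designed to close.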

  13. Determining spherical lens correction for astronaut training underwater

    PubMed Central

    Porter, Jason; Gibson, C. Robert; Strauss, Samuel

    2013-01-01

    Purpose To develop a model that will accurately predict the distance spherical lens correction needed to be worn by National Aeronautics and Space Administration (NASA) astronauts while training underwater. The replica space suit’s helmet contains curved visors that induce refractive power when submersed in water. Methods Anterior surface powers and thicknesses were measured for the helmet’s protective and inside visors. The impact of each visor on the helmet’s refractive power in water was analyzed using thick lens calculations and Zemax optical design software. Using geometrical optics approximations, a model was developed to determine the optimal distance spherical power needed to be worn underwater based on the helmet’s total induced spherical power underwater and the astronaut’s manifest spectacle plane correction in air. The validity of the model was tested using data from both eyes of 10 astronauts who trained underwater. Results The helmet visors induced a total power of −2.737 D when placed underwater. The required underwater spherical correction (FW) was linearly related to the spectacle plane spherical correction in air (FAir): FW = FAir + 2.356 D. The mean magnitude of the difference between the actual correction worn underwater and the calculated underwater correction was 0.20 ± 0.11 D. The actual and calculated values were highly correlated (R = 0.971) with 70% of eyes having a difference in magnitude of < 0.25 D between values. Conclusions We devised a model to calculate the spherical spectacle lens correction needed to be worn underwater by National Aeronautics and Space Administration astronauts. The model accurately predicts the actual values worn underwater and can be applied (more generally) to determine a suitable spectacle lens correction to be worn behind other types of masks when submerged underwater. PMID:21623249
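The paper's final model is a one-line linear relation; a minimal sketch (the 2.356 D offset is the value reported in the abstract, and the function name is ours):

```python
def underwater_correction(f_air_diopters):
    """Distance spherical correction (in diopters) to wear underwater,
    per the paper's linear model F_W = F_Air + 2.356 D, where F_Air is
    the manifest spectacle-plane correction in air."""
    return f_air_diopters + 2.356
```

For example, an astronaut with a -1.00 D correction in air would need about +1.36 D underwater; the 2.356 D offset compensates for the -2.737 D the submerged visors induce, transferred to the spectacle plane.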

  14. Cool Cluster Correctly Correlated

    SciTech Connect

    Varganov, Sergey Aleksandrovich

    2005-01-01

    Atomic clusters are unique objects, which occupy an intermediate position between atoms and condensed matter systems. For a long time it was thought that physical and chemical properties of atomic clusters monotonically change with increasing size of the cluster from a single atom to a condensed matter system. However, recently it has become clear that many properties of atomic clusters can change drastically with the size of the clusters. Because physical and chemical properties of clusters can be adjusted simply by changing the cluster's size, different applications of atomic clusters were proposed. One example is the catalytic activity of clusters of specific sizes in different chemical reactions. Another example is a potential application of atomic clusters in microelectronics, where their band gaps can be adjusted by simply changing cluster sizes. In recent years significant advances in experimental techniques allow one to synthesize and study atomic clusters of specified sizes. However, the interpretation of the results is often difficult. The theoretical methods are frequently used to help in interpretation of complex experimental data. Most of the theoretical approaches have been based on empirical or semiempirical methods. These methods allow one to study large and small clusters using the same approximations. However, since empirical and semiempirical methods rely on simple models with many parameters, it is often difficult to estimate the quantitative and even qualitative accuracy of the results. On the other hand, because of significant advances in quantum chemical methods and computer capabilities, it is now possible to do high quality ab-initio calculations not only on systems of few atoms but on clusters of practical interest as well. In addition to accurate results for specific clusters, such methods can be used for benchmarking of different empirical and semiempirical approaches. The atomic clusters studied in this work contain from a few atoms to

  15. Processor register error correction management

    DOEpatents

    Bose, Pradip; Cher, Chen-Yong; Gupta, Meeta S.

    2016-12-27

    Processor register protection management is disclosed. In embodiments, a method of processor register protection management can include determining a sensitive logical register for executable code generated by a compiler, generating an error-correction table identifying the sensitive logical register, and storing the error-correction table in a memory accessible by a processor. The processor can be configured to generate a duplicate register of the sensitive logical register identified by the error-correction table.

  16. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when the information is delayed. With accurate information, travelers prefer the route in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality helps improve efficiency in terms of capacity, oscillations, and the gap from system equilibrium.
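The boundedly rational choice rule described above can be sketched in a few lines (a simplified illustration, not the paper's full two-route simulation model):

```python
import random

def choose_route(t1, t2, br):
    """Boundedly rational route choice. Given reported travel times t1
    and t2 for routes 0 and 1, a traveler is indifferent when the
    difference is within the threshold br and then picks either route
    with equal probability; otherwise the faster route is taken."""
    if abs(t1 - t2) <= br:
        return random.randint(0, 1)   # indifferent: equal probability
    return 0 if t1 < t2 else 1        # otherwise take the faster route
```

Setting br = 0 recovers the "always take the best route" behavior that, with delayed feedback, drives the oscillations the paper describes.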

  17. Accurate calculated optical properties of substituted quaterphenylene nanofibers.

    PubMed

    Finnerty, Justin J; Koch, Rainer

    2010-01-14

    The accurate prediction of both excitation and emission energies of substituted p-quaterphenylenes using a variety of established and newly developed density functional methods is evaluated and compared against experimental data, both from single molecules and from nanofibers. For calculation of the UV-vis excitation the MPW1K functional is the best performing method (with the employed TZVP basis set). After a linear scaling factor is applied, mPW2-PLYP, CIS and the very fast INDO/S also reproduce the experimental data correctly. For the fluorescence relaxation energies MPW1K, mPW2-PLYP, and INDO/S give good results, even without scaling. However, mPW2-PLYP involves second-order perturbation to introduce nonlocal electron correlation and therefore requires significantly more resources, so the recommended level of theory for a single methodology to investigate the optical properties of substituted phenylenes and related systems is MPW1K/6-311+G(2d,p), followed by INDO/S as a low-cost alternative. As an extension of a previous work on predicting first hyperpolarisabilities, we can now demonstrate that the chosen approach (HF/6-31G(d)//B3LYP/6-31G(d)) produces data that correlate well with the susceptibilities derived from measurements on nanofibers.

  18. Accurate inference of local phased ancestry of modern admixed populations.

    PubMed

    Ma, Yamin; Zhao, Jian; Wong, Jian-Syuan; Ma, Li; Li, Wenzhi; Fu, Guoxing; Xu, Wei; Zhang, Kui; Kittles, Rick A; Li, Yun; Song, Qing

    2014-07-23

    Population stratification is a growing concern in genetic-association studies. Averaged ancestry at the genome level (global ancestry) is insufficient for detecting the population substructures and correcting population stratifications in association studies. Local and phased ancestry information is needed for human genetic studies, but current technologies cannot be applied to entire-genome data due to various technical caveats. Here we developed a novel approach (aMAP, ancestry of Modern Admixed Populations) for inferring local phased ancestry. It took about 3 seconds on a desktop computer to finish a local ancestry analysis for each human genome with 1.4 million SNPs. This method also exhibits scalability to larger datasets with respect to the number of SNPs, the number of samples, and the size of reference panels. It can detect when the reference panels lack a suitable proxy. The accuracy was 99.4%. The aMAP software has the capacity to analyze 6-way admixed individuals. As the biomedical community continues to expand its efforts to increase the representation of diverse populations, and as the number of large whole-genome sequence datasets continues to grow rapidly, there is an increasing demand for rapid and accurate local ancestry analysis in genetics, pharmacogenomics, population genetics, and clinical diagnosis.

  19. Accurate Completion of Medical Report on Diagnosing Death.

    PubMed

    Savić, Slobodan; Alempijević, Djordje; Andjelić, Sladjana

    2015-01-01

    Diagnosing death and issuing a Death Diagnosing Form (DDF) represents an activity that carries a great deal of public responsibility for medical professionals of the Emergency Medical Services (EMS) and is perpetually exposed to the control of the general public. Diagnosing death is necessary to confirm true death, to exclude apparent death, and consequently to avoid burying alive a person who is only apparently dead. These expert-methodological guidelines, based on the most up-to-date medical evidence, have the goal of helping the physicians of the EMS to accurately fill out a medical report on diagnosing death. If the outcome of applied cardiopulmonary resuscitation measures is negative, or when the person is found dead, the physician is under obligation to diagnose death and correctly fill out the DDF. It is also recommended to perform electrocardiography (EKG) and record asystole in at least two leads. In the process of diagnostics and treatment, it is a moral obligation of each Belgrade EMS physician to apply all available achievements and knowledge of modern medicine acquired from extensive international studies, which have been indeed the major theoretical basis for the creation of these expert-methodological guidelines. Those acting differently do so in accordance with their conscience and risk professional and even criminal sanctions.

  20. Quantitative proteomic analysis by accurate mass retention time pairs.

    PubMed

    Silva, Jeffrey C; Denny, Richard; Dorschel, Craig A; Gorenstein, Marc; Kass, Ignatius J; Li, Guo-Zhong; McKenna, Therese; Nold, Michael J; Richardson, Keith; Young, Phillip; Geromanos, Scott

    2005-04-01

    Current methodologies for protein quantitation include 2-dimensional gel electrophoresis techniques, metabolic labeling, and stable isotope labeling methods to name only a few. The current literature illustrates both pros and cons for each of the previously mentioned methodologies. Keeping with the teachings of William of Ockham, "with all things being equal the simplest solution tends to be correct", a simple LC/MS based methodology is presented that allows relative changes in abundance of proteins in highly complex mixtures to be determined. Utilizing a reproducible chromatographic separations system along with the high mass resolution and mass accuracy of an orthogonal time-of-flight mass spectrometer, the quantitative comparison of tens of thousands of ions emanating from identically prepared control and experimental samples can be made. Using this configuration, we can determine the change in relative abundance of a small number of ions between the two conditions solely by accurate mass and retention time. Employing standard operating procedures for both sample preparation and ESI-mass spectrometry, one typically obtains under 5 ppm mass precision and quantitative variations between 10 and 15%. The principal focus of this paper will demonstrate the quantitative aspects of the methodology and continue with a discussion of the associated, complementary qualitative capabilities.
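The core idea of matching ions across runs solely by accurate mass and retention time can be sketched as follows (a simplified illustration; the 5 ppm mass tolerance echoes the precision quoted in the abstract, while the retention-time tolerance and function name are our assumptions):

```python
def match_features(control, experiment, ppm_tol=5.0, rt_tol=0.25):
    """Pair LC/MS features (mass, retention time, intensity) across a
    control and an experimental run by accurate mass (ppm tolerance)
    and retention time (minutes), returning (mass, rt, fold_change)
    tuples for the matched pairs."""
    pairs = []
    for m1, rt1, i1 in control:
        for m2, rt2, i2 in experiment:
            ppm = abs(m2 - m1) / m1 * 1e6          # mass difference in ppm
            if ppm <= ppm_tol and abs(rt2 - rt1) <= rt_tol:
                pairs.append((m1, rt1, i2 / i1))   # relative abundance
    return pairs
```

In practice the two feature lists would be sorted by mass so matching is not quadratic, but the tolerance test itself is the essence of the accurate mass-retention time pair approach.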

  1. Accurate measurement of RF exposure from emerging wireless communication systems

    NASA Astrophysics Data System (ADS)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation, but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), or for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  2. In Situ Mosaic Brightness Correction

    NASA Technical Reports Server (NTRS)

    Deen, Robert G.; Lorre, Jean J.

    2012-01-01

    In situ missions typically have pointable, mast-mounted cameras, which are capable of taking panoramic mosaics comprised of many individual frames. These frames are mosaicked together. While the mosaic software applies radiometric correction to the images, in many cases brightness/contrast seams still exist between frames. This is largely due to errors in the radiometric correction, and the absence of correction for photometric effects in the mosaic processing chain. The software analyzes the overlaps between adjacent frames in the mosaic and determines correction factors for each image in an attempt to reduce or eliminate these brightness seams.
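A hedged sketch of one common way to derive such per-frame correction factors (our formulation, not the NASA software's algorithm): treat each image's correction as a multiplicative gain, require matched overlap regions to agree in brightness, and solve the resulting system in log space with the mean log-gain pinned to zero.

```python
import numpy as np

def overlap_gains(overlaps, n_images):
    """Solve for per-image gain factors g_i from overlap statistics.
    `overlaps` holds tuples (i, j, mean_i, mean_j): the mean brightness
    of the shared region as seen in image i and in image j. We ask for
    g_i * mean_i == g_j * mean_j in least squares, i.e. in log space
    log g_i - log g_j = log(mean_j / mean_i)."""
    rows, rhs = [], []
    for i, j, mi, mj in overlaps:
        r = np.zeros(n_images)
        r[i], r[j] = 1.0, -1.0
        rows.append(r)
        rhs.append(np.log(mj / mi))
    rows.append(np.ones(n_images))   # gauge fix: sum of log gains = 0
    rhs.append(0.0)
    logg, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(logg)
```

The gauge-fixing row keeps the overall mosaic brightness unchanged; without it the system is underdetermined, since scaling every gain by the same constant satisfies all overlap equations.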

  3. Higher order accurate partial implicitization: An unconditionally stable fourth-order-accurate explicit numerical technique

    NASA Technical Reports Server (NTRS)

    Graves, R. A., Jr.

    1975-01-01

    The previously obtained second-order-accurate partial implicitization numerical technique used in the solution of fluid dynamic problems was modified with little complication to achieve fourth-order accuracy. The von Neumann stability analysis demonstrated the unconditional linear stability of the technique. The order of the truncation error was deduced from the Taylor series expansions of the linearized difference equations and was verified by numerical solutions to Burgers' equation. For comparison, results were also obtained for Burgers' equation using a second-order-accurate partial-implicitization scheme, as well as the fourth-order scheme of Kreiss.

  4. Exploring Hindu Indian emotion expressions: evidence for accurate recognition by Americans and Indians.

    PubMed

    Hejmadi, A; Davidson, R J; Rozin, P

    2000-05-01

    Subjects were presented with videotaped expressions of 10 classic Hindu emotions. The 10 emotions were (in rough translation from Sanskrit) anger, disgust, fear, heroism, humor-amusement, love, peace, sadness, shame-embarrassment, and wonder. These emotions (except for shame) and their portrayal were described about 2,000 years ago in the Natyasastra, and are enacted in the contemporary Hindu classical dance. The expressions are dynamic and include both the face and the body, especially the hands. Three different expressive versions of each emotion were presented, along with 15 neutral expressions. American and Indian college students responded to each of these 45 expressions using either a fixed-response format (10 emotion names and "neutral/no emotion") or a totally free response format. Participants from both countries were quite accurate in identifying emotions correctly using both fixed-choice (65% correct, expected value of 9%) and free-response (61% correct, expected value close to zero) methods.

  5. Accurate method for including solid-fluid boundary interactions in mesoscopic model fluids

    SciTech Connect

    Berkenbos, A.; Lowe, C.P.

    2008-04-20

    Particle models are attractive methods for simulating the dynamics of complex mesoscopic fluids. Many practical applications of this methodology involve flow through a solid geometry. As the system is modeled using particles whose positions move continuously in space, one might expect that implementing the correct stick boundary condition exactly at the solid-fluid interface is straightforward. After all, unlike discrete methods there is no mapping onto a grid to contend with. In this article we describe a method that, for axisymmetric flows, imposes both the no-slip condition and continuity of stress at the interface. We show that the new method then accurately reproduces correct hydrodynamic behavior right up to the location of the interface. As such, computed flow profiles are correct even using a relatively small number of particles to model the fluid.

  6. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  7. New orbit correction method uniting global and local orbit corrections

    NASA Astrophysics Data System (ADS)

    Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.

    2006-01-01

    A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions such as light source points, simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both cases of the Super-SOR light source and the Advanced Light Source (ALS) that have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and also compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
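The "exact at selected positions, least-squares everywhere else" idea can be illustrated with a constrained least-squares solve (a simplified sketch of the problem the EVC addresses, not the paper's eigenvector formulation): minimize the global residual orbit subject to zero residual at the constrained BPMs, via the KKT system.

```python
import numpy as np

def corrected_kicks(R, x, constrained):
    """Steering kicks theta minimizing ||x + R @ theta||^2 subject to
    exact zeroing of the residual orbit at the BPM indices in
    `constrained`. R is the orbit response matrix (BPMs x steerers),
    x the measured closed-orbit distortion."""
    C = R[constrained]                       # rows that must be exact
    A = R.T @ R
    k = len(constrained)
    kkt = np.block([[A, C.T],                # KKT system for equality-
                    [C, np.zeros((k, k))]])  # constrained least squares
    rhs = np.concatenate([-R.T @ x, -x[constrained]])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:R.shape[1]]                  # drop Lagrange multipliers
```

With three BPMs, two steerers, and BPM 0 constrained, the solver zeroes the orbit exactly at BPM 0 while minimizing the distortion at the remaining monitors.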

  8. Reference module selection criteria for accurate testing of photovoltaic (PV) panels

    SciTech Connect

    Roy, J.N.; Gariki, Govardhan Rao; Nagalakhsmi, V.

    2010-01-15

    It is shown that for accurate testing of PV panels the correct selection of reference modules is important. A detailed description of the test methodology is given. Three different types of reference modules, having different short-circuit currents (I_SC) and power ratings (in Wp), have been used for this study. These reference modules have been calibrated by NREL. It has been found that for accurate testing, both the I_SC and the power of the reference module must be similar to or exceed those of the modules under test. If the corresponding values of the test modules fall below a particular limit, the measurements may not be accurate. The experimental results obtained have been modeled using a simple equivalent circuit model and the associated I-V equations. (author)

  9. Development of a Drosophila cell-based error correction assay.

    PubMed

    Salemi, Jeffrey D; McGilvray, Philip T; Maresca, Thomas J

    2013-01-01

    Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due, in part, to kinesin-14 (Ncd) activity when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.

  10. Learning-Based Topological Correction for Infant Cortical Surfaces

    PubMed Central

    Hao, Shijie; Li, Gang; Wang, Li; Meng, Yu

    2017-01-01

    Reconstruction of topologically correct and accurate cortical surfaces from infant MR images is of great importance in neuroimaging mapping of early brain development. However, due to rapid growth and ongoing myelination, infant MR images exhibit extremely low tissue contrast and dynamic appearance patterns, thus leading to many more topological errors (holes and handles) in the cortical surfaces derived from tissue segmentation results, in comparison to adult MR images, which typically have good tissue contrast. Existing methods for topological correction either rely on the minimal correction criteria or on ad hoc rules based on image intensity priors, thus often resulting in erroneous correction and large anatomical errors in reconstructed infant cortical surfaces. To address these issues, we propose to correct topological errors by learning information from the anatomical references, i.e., manually corrected images. Specifically, in our method, we first locate candidate voxels of topologically defected regions by using a topology-preserving level set method. Then, by leveraging rich information of the corresponding patches from reference images, we build region-specific dictionaries from the anatomical references and infer the correct labels of candidate voxels using sparse representation. Notably, we further integrate these two steps into an iterative framework to enable gradual correction of large topological errors, which frequently occur in infant images and cannot be completely corrected using one-shot sparse representation. Extensive experiments on infant cortical surfaces demonstrate that our method not only effectively corrects the topological defects, but also leads to better anatomical consistency, compared to the state-of-the-art methods.

  11. Correcting for sequencing error in maximum likelihood phylogeny inference.

    PubMed

    Kuhner, Mary K; McGill, James

    2014-11-04

    Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue.
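The kind of correction tested here amounts to changing the per-tip emission probabilities in the likelihood: instead of a 0/1 indicator for the observed base, each true base gets a probability of producing the observation under an error model. A minimal sketch, assuming the simplest uniform error model (any of the three wrong bases equally likely):

```python
def observation_prob(true_base, obs_base, error_rate):
    """P(observed base | true base) under a uniform sequencing-error
    model: the read shows the true base with probability 1 - e, or one
    of the three other bases with probability e/3 each."""
    if obs_base == true_base:
        return 1.0 - error_rate
    return error_rate / 3.0

def tip_partial(obs_base, error_rate, bases="ACGT"):
    """Tip partial-likelihood vector for one site, replacing the usual
    0/1 indicator vector in pruning-algorithm likelihood calculations."""
    return [observation_prob(t, obs_base, error_rate) for t in bases]
```

With error_rate = 0 this reduces to the standard indicator vector; a nonzero rate softens the tip, which is what lets the branch-length estimates absorb less of the sequencing noise.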

  12. PET measurements of cerebral metabolism corrected for CSF contributions

    SciTech Connect

    Chawluk, J.; Alavi, A.; Dann, R.; Kushner, M.J.; Hurtig, H.; Zimmerman, R.A.; Reivich, M.

    1984-01-01

    Thirty-three subjects have been studied with PET and anatomic imaging (proton-NMR and/or CT) in order to determine the effect of cerebral atrophy on calculations of metabolic rates. Subgroups of neurologic disease investigated include stroke, brain tumor, epilepsy, psychosis, and dementia. Anatomic images were digitized through a Vidicon camera and analyzed volumetrically. Relative areas for ventricles, sulci, and brain tissue were calculated. Preliminary analysis suggests that ventricular volumes as determined by NMR and CT are similar, while sulcal volumes are larger on NMR scans. Metabolic rates (18F-FDG) were calculated before and after correction for CSF spaces, with initial focus upon dementia and normal aging. Correction for atrophy led to a greater increase (%) in global metabolic rates in demented individuals (18.2 ± 5.3) compared to elderly controls (8.3 ± 3.0, p < .05). A trend towards significantly lower glucose metabolism in demented subjects before CSF correction was not seen following correction for atrophy. These data suggest that volumetric analysis of NMR images may more accurately reflect the degree of cerebral atrophy, since NMR does not suffer from beam hardening artifact due to bone-parenchyma juxtapositions. Furthermore, appropriate correction for CSF spaces should be employed if current resolution PET scanners are to accurately measure residual brain tissue metabolism in various pathological states.
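The arithmetic behind such a CSF correction is a simple partial-volume rescaling (a hedged sketch of the general idea, not the exact procedure of this study): since CSF contributes essentially no glucose metabolism, dividing the measured regional rate by the tissue fraction recovers the rate per unit of brain tissue.

```python
def atrophy_corrected_cmr(measured_cmr, csf_fraction):
    """Rescale a measured cerebral metabolic rate for the CSF content
    of the region: CSF has ~zero metabolism, so the measured value is
    diluted by the factor (1 - csf_fraction)."""
    if not 0.0 <= csf_fraction < 1.0:
        raise ValueError("csf_fraction must be in [0, 1)")
    return measured_cmr / (1.0 - csf_fraction)
```

For instance, a region measured at 4.0 mg/100 g/min that is 20% CSF by volume corresponds to 5.0 mg/100 g/min in the remaining tissue, which is the direction of the larger post-correction increase reported for the atrophic (demented) group.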

  13. Optimal arbitrarily accurate composite pulse sequences

    NASA Astrophysics Data System (ADS)

    Low, Guang Hao; Yoder, Theodore J.; Chuang, Isaac L.

    2014-02-01

    Implementing a single-qubit unitary is often hampered by imperfect control. Systematic amplitude errors ɛ, caused by incorrect duration or strength of a pulse, are an especially common problem. But a sequence of imperfect pulses can provide a better implementation of a desired operation, as compared to a single primitive pulse. We find optimal pulse sequences consisting of L primitive π or 2π rotations that suppress such errors to arbitrary order O(ɛ^n) on arbitrary initial states. Optimality is demonstrated by proving an L = O(n) lower bound and saturating it with L = 2n solutions. Closed-form solutions for arbitrary rotation angles are given for n = 1, 2, 3, 4. Perturbative solutions for any n are proven for small angles, while arbitrary angle solutions are obtained by analytic continuation up to n = 12. The derivation proceeds by a novel algebraic and nonrecursive approach, in which finding amplitude error correcting sequences can be reduced to solving polynomial equations.

  14. Optimal arbitrarily accurate composite pulse sequences

    NASA Astrophysics Data System (ADS)

    Low, Guang Hao; Yoder, Theodore

    2014-03-01

    Implementing a single qubit unitary is often hampered by imperfect control. Systematic amplitude errors ɛ, caused by incorrect duration or strength of a pulse, are an especially common problem. But a sequence of imperfect pulses can provide a better implementation of a desired operation, as compared to a single primitive pulse. We find optimal pulse sequences consisting of L primitive π or 2π rotations that suppress such errors to arbitrary order O(ɛ^n) on arbitrary initial states. Optimality is demonstrated by proving an L = O(n) lower bound and saturating it with L = 2n solutions. Closed-form solutions for arbitrary rotation angles are given for n = 1, 2, 3, 4. Perturbative solutions for any n are proven for small angles, while arbitrary angle solutions are obtained by analytic continuation up to n = 12. The derivation proceeds by a novel algebraic and non-recursive approach, in which finding amplitude error correcting sequences can be reduced to solving polynomial equations.

  15. Highly accurate fast lung CT registration

    NASA Astrophysics Data System (ADS)

    Rühaak, Jan; Heldmann, Stefan; Kipshagen, Till; Fischer, Bernd

    2013-03-01

    Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.

  16. Accurate attitude determination of the LACE satellite

    NASA Technical Reports Server (NTRS)

    Miglin, M. F.; Campion, R. E.; Lemos, P. J.; Tran, T.

    1993-01-01

    The Low-power Atmospheric Compensation Experiment (LACE) satellite, launched in February 1990 by the Naval Research Laboratory, uses a magnetic damper on a gravity gradient boom and a momentum wheel with its axis perpendicular to the plane of the orbit to stabilize and maintain its attitude. Satellite attitude is determined using three types of sensors: a conical Earth scanner, a set of sun sensors, and a magnetometer. The Ultraviolet Plume Instrument (UVPI), on board LACE, consists of two intensified CCD cameras and a gimballed pointing mirror. The primary purpose of the UVPI is to image rocket plumes from space in the ultraviolet and visible wavelengths. Secondary objectives include imaging stars, atmospheric phenomena, and ground targets. The problem facing the UVPI experimenters is that the sensitivity of the LACE satellite attitude sensors is not always adequate to correctly point the UVPI cameras. Our solution is to point the UVPI cameras at known targets and use the information thus gained to improve attitude measurements. This paper describes the three methods developed to determine improved attitude values using the UVPI for both real-time operations and post observation analysis.

  17. Accurate Measurement of Bone Density with QCT

    NASA Technical Reports Server (NTRS)

    Cleek, Tammy M.; Beaupre, Gary S.; Matsubara, Miki; Whalen, Robert T.; Dalton, Bonnie P. (Technical Monitor)

    2002-01-01

    The objective of this study was to determine the accuracy of bone density measurement with a new QCT technology. A phantom was fabricated using two materials, a water-equivalent compound and hydroxyapatite (HA), combined in precise proportions (QRM GmbH, Germany). The phantom was designed to have the approximate physical size and range in bone density of a human calcaneus, with regions of 0, 50, 100, 200, 400, and 800 mg/cc HA. The phantom was scanned at 80, 120 and 140 kVp with a GE CT/i HiSpeed Advantage scanner. A ring of highly attenuating material (polyvinyl chloride or Teflon) was slipped over the phantom to alter the image by introducing non-axisymmetric beam hardening. Images were corrected with the new QCT technology using an estimate of the effective X-ray beam spectrum to eliminate beam hardening artifacts. The algorithm computes the volume fraction of HA and water-equivalent matrix in each voxel. We found excellent agreement between expected and computed HA volume fractions. Results were insensitive to beam hardening ring material, HA concentration, and scan voltage settings. Data from all 3 voltages with a best-fit linear regression are displayed.

  18. A gene expression biomarker accurately predicts estrogen ...

    EPA Pesticide Factsheets

    The EPA’s vision for the Endocrine Disruptor Screening Program (EDSP) in the 21st Century (EDSP21) includes utilization of high-throughput screening (HTS) assays coupled with computational modeling to prioritize chemicals with the goal of eventually replacing current Tier 1 screening tests. The ToxCast program currently includes 18 HTS in vitro assays that evaluate the ability of chemicals to modulate estrogen receptor α (ERα), an important endocrine target. We propose microarray-based gene expression profiling as a complementary approach to predict ERα modulation and have developed computational methods to identify ERα modulators in an existing database of whole-genome microarray data. The ERα biomarker consisted of 46 ERα-regulated genes with consistent expression patterns across 7 known ER agonists and 3 known ER antagonists. The biomarker was evaluated as a predictive tool using the fold-change rank-based Running Fisher algorithm by comparison to annotated gene expression data sets from experiments in MCF-7 cells. Using 141 comparisons from chemical- and hormone-treated cells, the biomarker gave a balanced accuracy for prediction of ERα activation or suppression of 94% or 93%, respectively. The biomarker was able to correctly classify 18 out of 21 (86%) OECD ER reference chemicals including “very weak” agonists and replicated predictions based on 18 in vitro ER-associated HTS assays. For 114 chemicals present in both the HTS data and the MCF-7 c

  20. A time-accurate adaptive grid method and the numerical simulation of a shock-vortex interaction

    NASA Technical Reports Server (NTRS)

    Bockelie, Michael J.; Eiseman, Peter R.

    1990-01-01

    A time accurate, general purpose, adaptive grid method is developed that is suitable for multidimensional steady and unsteady numerical simulations. The grid point movement is performed in a manner that generates smooth grids which resolve the severe solution gradients and the sharp transitions in the solution gradients. The temporal coupling of the adaptive grid and the PDE solver is performed with a grid prediction correction method that is simple to implement and ensures the time accuracy of the grid. Time accurate solutions of the 2-D Euler equations for an unsteady shock vortex interaction demonstrate the ability of the adaptive method to accurately adapt the grid to multiple solution features.

  1. Accurate thermoplasmonic simulation of metallic nanoparticles

    NASA Astrophysics Data System (ADS)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially for the case where many incident excitations are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometric configuration of the array, beam direction, and light wavelength.

  2. Accurate lineshape spectroscopy and the Boltzmann constant

    PubMed Central

    Truong, G.-W.; Anstie, J. D.; May, E. F.; Stace, T. M.; Luiten, A. N.

    2015-01-01

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m. PMID:26465085

  3. Fast and accurate exhaled breath ammonia measurement.

    PubMed

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.

  4. Accurate, reliable prototype earth horizon sensor head

    NASA Technical Reports Server (NTRS)

    Schwarz, F.; Cohen, H.

    1973-01-01

    The design and performance of an accurate and reliable prototype earth sensor head (ARPESH) are described. The ARPESH employs a detection logic 'locator' concept and horizon sensor mechanization which should lead to high accuracy horizon sensing that is minimally degraded by spatial or temporal variations in sensing attitude from a satellite in orbit around the earth at altitudes near 500 km. An accuracy of horizon location to within 0.7 km has been predicted, independent of meteorological conditions; this corresponds to an error of 0.015 deg at 500 km altitude. Laboratory evaluation of the sensor indicates that this accuracy is achieved. First, the basic operating principles of ARPESH are described; next, detailed design and construction data are presented; then the performance of the sensor is reported under laboratory conditions in which the sensor is installed in a simulator that permits it to scan over a blackbody source against a background representing the earth-space interface for various equivalent plant temperatures.

  5. Accurate Fission Data for Nuclear Safety

    NASA Astrophysics Data System (ADS)

    Solders, A.; Gorelov, D.; Jokinen, A.; Kolhinen, V. S.; Lantz, M.; Mattera, A.; Penttilä, H.; Pomp, S.; Rakopoulos, V.; Rinta-Antila, S.

    2014-05-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyväskylä. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons for benchmarking and to study the energy dependence of fission yields. The scientific program is extensive and is planned to start in 2013 with a measurement of isomeric yield ratios of proton induced fission in uranium. This will be followed by studies of independent yields of thermal and fast neutron induced fission of various actinides.

  6. Noninvasive hemoglobin monitoring: how accurate is enough?

    PubMed

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  7. Accurate upper body rehabilitation system using kinect.

    PubMed

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect depth and RGB information to search for the joint center location that satisfies constraints on body segment length as well as orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in range of motion (ROM) angle, enabling more accurate measurements for upper limb exercises.
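
The segment-length constraint described above can be sketched independently of the full depth/RGB optimization, which the abstract does not specify in detail. A hypothetical helper (an illustrative assumption, not the paper's method) simply projects a noisy child-joint estimate back onto the known segment length:

```python
import math

def enforce_segment_length(parent, child, length):
    """Project a noisy child-joint estimate onto the sphere of radius
    `length` centred on the parent joint, preserving the direction of
    the measured segment."""
    d = [c - p for p, c in zip(parent, child)]
    norm = math.sqrt(sum(x * x for x in d))
    if norm == 0.0:
        raise ValueError("parent and child coincide; direction undefined")
    return [p + length * x / norm for p, x in zip(parent, d)]

shoulder = [0.0, 0.0, 0.0]
noisy_elbow = [0.25, -0.20, 0.05]            # jittered skeleton estimate (metres)
elbow = enforce_segment_length(shoulder, noisy_elbow, 0.30)  # known upper-arm length
```

The paper's optimization additionally constrains segment orientation using image information; this sketch shows only why fixing the segment length removes the time-varying length artifact.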

  8. Fast and Accurate Exhaled Breath Ammonia Measurement

    PubMed Central

    Solga, Steven F.; Mudalel, Matthew L.; Spacek, Lisa A.; Risby, Terence H.

    2014-01-01

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations. PMID:24962141

  9. Accurate methods for large molecular systems.

    PubMed

    Gordon, Mark S; Mullin, Jonathan M; Pruitt, Spencer R; Roskop, Luke B; Slipchenko, Lyudmila V; Boatz, Jerry A

    2009-07-23

    Three exciting new methods that address the accurate prediction of processes and properties of large molecular systems are discussed. The systematic fragmentation method (SFM) and the fragment molecular orbital (FMO) method both decompose a large molecular system (e.g., protein, liquid, zeolite) into small subunits (fragments) in very different ways that are designed to both retain the high accuracy of the chosen quantum mechanical level of theory while greatly reducing the demands on computational time and resources. Each of these methods is inherently scalable and is therefore eminently capable of taking advantage of massively parallel computer hardware while retaining the accuracy of the corresponding electronic structure method from which it is derived. The effective fragment potential (EFP) method is a sophisticated approach for the prediction of nonbonded and intermolecular interactions. Therefore, the EFP method provides a way to further reduce the computational effort while retaining accuracy by treating the far-field interactions in place of the full electronic structure method. The performance of the methods is demonstrated using applications to several systems, including benzene dimer, small organic species, pieces of the alpha helix, water, and ionic liquids.

  10. MEMS accelerometers in accurate mount positioning systems

    NASA Astrophysics Data System (ADS)

    Mészáros, László; Pál, András; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures can exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted in a telescope control system. Although this attainable precision is less than both the resolution of telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
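
The spherical constraint mentioned above can be expressed as a least-squares residual: in any static orientation, the calibrated acceleration vector must have magnitude g. A minimal sketch follows, assuming a six-parameter per-axis gain/offset model (the paper's actual calibration procedure may differ):

```python
import math

G = 9.81  # m/s^2, local gravitational acceleration (assumed)

def spherical_residuals(params, raw_samples):
    """Residuals for a per-axis gain/offset model: in every static pose
    the calibrated acceleration vector must lie on a sphere of radius g."""
    sx, sy, sz, bx, by, bz = params
    res = []
    for rx, ry, rz in raw_samples:
        ax, ay, az = sx * (rx - bx), sy * (ry - by), sz * (rz - bz)
        res.append(math.sqrt(ax * ax + ay * ay + az * az) - G)
    return res
```

Collected over many static mount orientations, these residuals could be minimized with an off-the-shelf nonlinear least-squares routine (e.g. scipy.optimize.least_squares) to recover gains and offsets without any external attitude reference.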

  11. Does a pneumotach accurately characterize voice function?

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.

  12. Accurate lineshape spectroscopy and the Boltzmann constant.

    PubMed

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-10-14

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs, and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.

  13. Accurate method for computing correlated color temperature.

    PubMed

    Li, Changjun; Cui, Guihua; Melgosa, Manuel; Ruan, Xiukai; Zhang, Yaoju; Ma, Long; Xiao, Kaida; Luo, M Ronnier

    2016-06-27

    For the correlated color temperature (CCT) of a light source to be estimated, a nonlinear optimization problem must be solved. In all previous methods available to compute CCT, the objective function has only been approximated, and their predictions have achieved limited accuracy. For example, different unacceptable CCT values have been predicted for light sources located on the same isotemperature line. In this paper, we propose to compute CCT using the Newton method, which requires the first and second derivatives of the objective function. Following the current recommendation by the International Commission on Illumination (CIE) for the computation of tristimulus values (summations at 1 nm steps from 360 nm to 830 nm), the objective function and its first and second derivatives are explicitly given and used in our computations. Comprehensive tests demonstrate that the proposed method, together with an initial estimation of CCT using Robertson's method [J. Opt. Soc. Am. 58, 1528-1535 (1968)], gives highly accurate predictions, with errors below 0.0012 K for light sources with CCTs ranging from 500 K to 10^6 K.
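
The Newton iteration at the heart of the method above can be sketched generically. This is not the paper's colorimetric objective (which involves 1 nm tristimulus summations and its explicit derivatives); it only shows the update rule, with a toy quadratic standing in for the squared chromaticity distance to the Planckian locus:

```python
def newton_minimize(f1, f2, t0, tol=1e-6, max_iter=50):
    """1-D Newton minimization: iterate T_{k+1} = T_k - f'(T_k) / f''(T_k)
    until the step size drops below tol.

    f1, f2: first and second derivative of the objective function.
    """
    t = t0
    for _ in range(max_iter):
        step = f1(t) / f2(t)
        t -= step
        if abs(step) < tol:
            return t
    raise RuntimeError("Newton iteration did not converge")

# Toy objective (T - 6500)^2 with its derivatives; Newton converges in one
# step for a quadratic, mirroring the fast convergence reported for CCT.
f1 = lambda t: 2.0 * (t - 6500.0)
f2 = lambda t: 2.0
print(newton_minimize(f1, f2, 3000.0))  # → 6500.0
```

In the actual method, t0 would come from Robertson's isotemperature-line estimate, so only a few Newton steps are needed.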

  14. Accurate electromagnetic modeling of terahertz detectors

    NASA Technical Reports Server (NTRS)

    Focardi, Paolo; McGrath, William R.

    2004-01-01

    Twin slot antennas coupled to superconducting devices have been developed over the years as single pixel detectors in the terahertz (THz) frequency range for space-based astronomy applications. Used either for mixing or direct detection, they have been the object of several investigations, and are currently being developed for several missions funded or co-funded by NASA. Although they have shown promising performance in terms of noise and sensitivity, so far they have usually also shown considerable disagreement between calculated and measured performance, especially in center frequency and bandwidth. In this paper we present a thorough and accurate electromagnetic model of the complete detector and we compare the results of calculations with measurements. Starting from a model of the embedding circuit, the effect of all the other elements in the detector on the coupled power has been analyzed. An extensive variety of measured and calculated data, as presented in this paper, demonstrates the effectiveness and reliability of the electromagnetic model at frequencies between 600 GHz and 2.5 THz.

  15. Error analysis and correction for laser speckle photography

    SciTech Connect

    Song, Y.Z.; Kulenovic, R.; Groll, M.

    1995-12-31

    This paper deals with error analysis of experimental data from a laser speckle photography (LSP) application which measures the temperature field of natural convection around a heated cylindrical tube. A method for error correction is proposed and presented in detail. Experimental and theoretical investigations have shown that errors in the measurements arise from four causes. These error sources are discussed and suggestions to avoid the errors are given. With this error analysis and the introduced correction methods, the temperature distribution, and hence the temperature gradient in the thermal boundary layer, can be obtained more accurately.

  16. The installation and correction of compasses in airplanes

    NASA Technical Reports Server (NTRS)

    Schoeffel, M F

    1927-01-01

    The saving of time that results from flying across country on compass headings is beginning to be widely recognized. At the same time the general use of steel tube fuselages has made a knowledge of compass correction much more necessary than was the case when wooden fuselages were the rule. This paper has been prepared primarily for the benefit of the pilot who has never studied navigation and who does not desire to go into the subject more deeply than to be able to fly compass courses with confidence. It also contains material for the designer who wishes to install his compasses with the expectation that they may be accurately corrected.

  17. Correction of deformities in children using the Taylor spatial frame.

    PubMed

    Eidelman, Mark; Bialik, Viktor; Katzman, Alexander

    2006-11-01

    The Taylor spatial frame is a unique external fixator. Despite its growing popularity, few reports on its use have been published. We evaluated the effectiveness of the Taylor spatial frame in the treatment of various deformities in 31 children and adolescents. All but one patient were anatomically corrected. Complications included superficial pin tract infection (45%), three fractures of the femoral regenerate, transient peroneal palsy, and injury to the genicular artery. Despite many challenging problems, our results compared favorably with the results achieved by others. We believe that the Taylor spatial frame is a very capable and accurate fixator for the precise correction of complex deformities.

  18. Robust statistical extension to TRELLIS motion correction in MRI

    NASA Astrophysics Data System (ADS)

    Bones, Philip J.; Maclaren, Julian R.

    2008-08-01

    Bulk motion occurring during the acquisition of data in magnetic resonance imaging (MRI) causes serious artifacts in the reconstructed images. This paper presents an extension to TRELLIS, a recently developed method of detecting and correcting for bulk motion. While TRELLIS detects and corrects for both bulk translation and rotation, only rotation is considered here. Accurate determination of the relative orientations of overlapping strips of k-space is demonstrated using a robust statistical approach to aid least squares estimation. Reconstructions for both simulated and actual MRI acquisitions are presented.
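
    The "robust statistical approach to aid least squares estimation" can be illustrated with iteratively reweighted least squares (IRLS) using Huber weights, a standard way to keep a fit resistant to outliers. The TRELLIS-specific orientation model is not given in the abstract, so a plain linear fit with gross outliers stands in here purely as an illustration.

```python
import numpy as np

# Robust least squares via IRLS with Huber weights: large residuals are
# progressively downweighted so outliers barely influence the final fit.
def huber_irls(X, y, k=1.345, n_iter=20):
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        sw = np.sqrt(w)
        # Weighted least squares: minimize sum of w_i * residual_i^2
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ beta
        # Robust scale estimate from the median absolute deviation
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
        u = np.abs(r) / s
        w = np.where(u <= k, 1.0, k / u)  # Huber weights
    return beta

# Line fit with 10% gross outliers: ordinary least squares would be pulled
# toward the outliers, while the Huber fit recovers the true parameters.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
y = 2.0 * x + 0.5 + rng.normal(0.0, 0.01, 100)
y[::10] += 5.0                       # contaminate every 10th point
X = np.column_stack([x, np.ones_like(x)])
slope, intercept = huber_irls(X, y)  # close to 2.0 and 0.5
```

    The same weighting idea transfers directly to an orientation estimate: residuals between overlapping k-space strips that disagree wildly (e.g. because motion occurred mid-strip) receive small weights and stop corrupting the least squares solution.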

  19. Long-range correction for dipolar fluids at planar interfaces

    NASA Astrophysics Data System (ADS)

    Werth, Stephan; Horsch, Martin; Hasse, Hans

    2015-12-01

    A slab-based long-range correction for dipolar interactions in molecular dynamics simulation of systems with a planar geometry is presented and applied to simulate vapour-liquid interfaces. The present approach is validated with respect to the saturated liquid density and the surface tension of the Stockmayer fluid and a molecular model for ethylene oxide. The simulation results exhibit no dependence on the cut-off radius for radii down to 1 nm, proving that the long-range correction accurately captures the influence of the dipole moment on the intermolecular interaction energies and forces as well as the virial and the surface tension.

  20. Exposed and embedded corrections in aphasia therapy: issues of voice and identity.

    PubMed

    Simmons-Mackie, Nina; Damico, Jack S

    2008-01-01

    focusing on accurate productions versus communicative intents, therapy runs the risk of reducing self-esteem and communicative confidence, as well as reinforcing a sense of 'helplessness' and disempowerment among people with aphasia. The results suggest that clinicians should carefully calibrate the use of exposed and embedded corrections to balance linguistic and psychosocial goals.

  1. Feature Referenced Error Correction Apparatus.

    DTIC Science & Technology

    A feature referenced error correction apparatus utilizing the multiple images of the interstage level image format to compensate for positional...images and by the generation of an error correction signal in response to the sub-frame registration errors. (Author)

  2. Diamagnetic Corrections and Pascal's Constants

    ERIC Educational Resources Information Center

    Bain, Gordon A.; Berry, John F.

    2008-01-01

    Measured magnetic susceptibilities of paramagnetic substances must typically be corrected for their underlying diamagnetism. This correction is often accomplished by using tabulated values for the diamagnetism of atoms, ions, or whole molecules. These tabulated values can be problematic since many sources contain incomplete and conflicting data.…
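
    The tabulated-constant correction described above amounts to summing atomic contributions (plus constitutive corrections for certain bonds). A minimal sketch follows; the constants are representative literature values in units of 10^-6 emu/mol and should be verified against a current table, which is precisely where the incomplete and conflicting data the abstract mentions become a problem.

```python
# Sketch of a Pascal-constant diamagnetic correction:
# chi_dia ~ sum over atoms of n_i * chi_i. Constitutive corrections for
# bonds (e.g. C=C) are omitted for brevity. Values in 1e-6 emu/mol.
PASCAL = {"H": -2.93, "C": -6.00, "N_ring": -4.61}

def chi_dia(formula):
    """Sum atomic Pascal constants for a formula given as {atom: count}."""
    return sum(n * PASCAL[atom] for atom, n in formula.items())

# Pyridine, C5H5N (ring nitrogen):
chi = chi_dia({"C": 5, "H": 5, "N_ring": 1})  # about -49.3 (x 1e-6 emu/mol)
```

    Real tables distinguish bonding environments (ring versus amide nitrogen, ether versus carbonyl oxygen), so the lookup key must encode the environment, not just the element.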

  4. Corrections Education Evaluation System Model.

    ERIC Educational Resources Information Center

    Nelson, Orville; And Others

    The purpose of this project was to develop an evaluation system for the competency-based vocational program developed by Wisconsin's Division of Corrections, Department of Public Instruction (DPI), and the Vocational, Technical, and Adult Education System (VTAE). Site visits were conducted at five correctional institutions in March and April of…

  5. 75 FR 70951 - Notice, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-19

    ... From the Federal Register Online via the Government Publishing Office NATIONAL COUNCIL ON DISABILITY (NCD) Sunshine Act Meetings Notice, Correction Type: Quarterly Meeting. Summary: NCD published a...., Suite 850, Washington, DC 20004; 202-272-2004 (voice), 202-272-2074 TTY; 202-272-2022 Fax. Correction...

  6. Error Correction, Revision, and Learning

    ERIC Educational Resources Information Center

    Truscott, John; Hsu, Angela Yi-ping

    2008-01-01

    Previous research has shown that corrective feedback on an assignment helps learners reduce their errors on that assignment during the revision process. Does this finding constitute evidence that learning resulted from the feedback? Differing answers play an important role in the ongoing debate over the effectiveness of error correction,…

  7. Correcting Slightly Less Simple Movements

    ERIC Educational Resources Information Center

    Aivar, M. P.; Brenner, E.; Smeets, J. B. J.

    2005-01-01

    Many studies have analysed how goal directed movements are corrected in response to changes in the properties of the target. However, only simple movements to single targets have been used in those studies, so little is known about movement corrections under more complex situations. Evidence from studies that ask for movements to several targets…

  8. Barometric and Earth Tide Correction

    SciTech Connect

    Toll, Nathaniel J.

    2005-11-10

    BETCO corrects for barometric and earth tide effects in long-term water level records. A regression deconvolution method is used to solve a series of linear equations to determine an impulse response function for the well pressure head. Using the response function, a pressure head correction is calculated and applied.
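
    The regression-deconvolution step can be sketched as an ordinary least squares problem: stack lagged barometric-pressure changes into a design matrix, solve for the impulse response, then convolve and subtract. This is a minimal illustration in the spirit of BETCO, not its actual implementation, and it omits the earth-tide terms.

```python
import numpy as np

# Each row of the design matrix holds lagged barometric-pressure changes;
# least squares yields the well's impulse-response function.
def barometric_response(head, baro, n_lags):
    """Estimate the head's impulse response to barometric changes."""
    dh, db = np.diff(head), np.diff(baro)
    n = len(dh) - n_lags
    # Column j holds the pressure change lagged by j samples
    X = np.column_stack([db[n_lags - j:n_lags - j + n]
                         for j in range(n_lags + 1)])
    phi, *_ = np.linalg.lstsq(X, dh[n_lags:], rcond=None)
    return phi

def corrected_head(head, baro, phi):
    """Subtract the barometrically predicted component from the head record."""
    db = np.diff(baro)
    predicted = np.convolve(db, phi)[:len(db)]
    out = head.astype(float)
    out[1:] -= np.cumsum(predicted)
    return out

# Synthetic check: build a head record that responds to pressure with a
# known 3-term impulse response, then recover that response.
rng = np.random.default_rng(0)
baro = np.cumsum(rng.normal(0.0, 1.0, 500))
true_phi = np.array([0.6, 0.25, 0.1])
db = np.diff(baro)
head = np.concatenate([[0.0], np.cumsum(np.convolve(db, true_phi)[:len(db)])])
phi = barometric_response(head, baro, n_lags=2)
```

    With the estimated response, `corrected_head(head, baro, phi)` removes essentially the whole barometric signal from this synthetic record; on real data the residual also contains the earth-tide component BETCO models separately.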

  9. Improved T1 mapping by motion correction and template based B1 correction in 3T MRI brain studies

    NASA Astrophysics Data System (ADS)

    Castro, Marcelo A.; Yao, Jianhua; Lee, Christabel; Pang, Yuxi; Baker, Eva; Butman, John; Thomasson, David

    2009-02-01

    Accurate estimation of the relaxation time T1 from MRI images is increasingly important for some clinical applications. Low-noise, high-resolution, fast, and accurate T1 maps of the brain can be produced using a dual flip angle method. However, accuracy is limited by the scanner's ability to deliver the prescribed flip angle in the presence of B1 inhomogeneity, particularly at high field strengths (e.g., 3T). One of the most accurate methods to correct that inhomogeneity is to acquire a subject-specific B1 map. However, since B1 map acquisition takes up precious scanning time and most retrospective studies lack a B1 map, it would be desirable to perform that correction from a template. In this work a dual repetition time method was used for B1 map acquisition in five normal subjects. Inaccuracies due to misregistration of the acquired T1-weighted images were corrected by rigid registration, and the effects of misalignment were compared to those of B1 inhomogeneity. T1-intensity histograms were produced, and three Gaussian curves were fitted to every fully, partially, and non-corrected histogram in order to estimate and compare the white and gray matter peaks. In addition, to reduce scanning time we designed a template-based correction strategy. Images from different subjects were aligned using a twelve-parameter affine registration, and B1 maps were aligned according to that transformation. Recomputed T1 maps showed a significant improvement with respect to non-corrected ones. These results are very promising and have potential for clinical application.
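
    The dual flip angle estimate can be sketched with the standard variable-flip-angle (SPGR) relation, linearized so that T1 follows from two measurements in closed form. The flip angles, TR, and T1 below are illustrative values, not those used in the study; the B1 error the abstract discusses enters as a multiplicative scaling of the nominal flip angles.

```python
import math

def spgr_signal(M0, T1, TR, alpha_deg):
    """Steady-state spoiled gradient-echo signal model."""
    a = math.radians(alpha_deg)
    E1 = math.exp(-TR / T1)
    return M0 * math.sin(a) * (1 - E1) / (1 - E1 * math.cos(a))

def t1_dual_flip(S1, S2, a1_deg, a2_deg, TR):
    """T1 from two SPGR signals via the linearized relation
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), so E1 is the slope."""
    a1, a2 = math.radians(a1_deg), math.radians(a2_deg)
    x1, y1 = S1 / math.tan(a1), S1 / math.sin(a1)
    x2, y2 = S2 / math.tan(a2), S2 / math.sin(a2)
    E1 = (y2 - y1) / (x2 - x1)
    return -TR / math.log(E1)

# Round trip with illustrative white-matter-like values (times in ms):
TR, T1_true = 15.0, 1100.0
S1 = spgr_signal(1.0, T1_true, TR, 3.0)
S2 = spgr_signal(1.0, T1_true, TR, 17.0)
T1_est = t1_dual_flip(S1, S2, 3.0, 17.0, TR)  # recovers 1100 ms
```

    A B1 correction, whether from a measured map or from a template as proposed above, simply replaces the nominal angles with `B1 * alpha` before inversion; without it, the estimated T1 is biased wherever the delivered flip angle deviates from the prescription.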

  10. Quantum corpuscular corrections to the Newtonian potential

    NASA Astrophysics Data System (ADS)

    Casadio, Roberto; Giugno, Andrea; Giusti, Andrea; Lenzi, Michele

    2017-08-01

    We study an effective quantum description of the static gravitational potential for spherically symmetric systems up to the first post-Newtonian order. We start by obtaining a Lagrangian for the gravitational potential coupled to a static matter source from the weak field expansion of the Einstein-Hilbert action. By analyzing a few classical solutions of the resulting field equation, we show that our construction leads to the expected post-Newtonian expressions. Next, we show that one can reproduce the classical Newtonian results very accurately by employing a coherent quantum state, and modifications to include the first post-Newtonian corrections are considered. Our findings establish a connection between the corpuscular model of black holes and post-Newtonian gravity, and set the stage for further investigations of these quantum models.

  11. Geometric Correction System Capabilities, Processing, and Application

    SciTech Connect

    Brewster, S.B.

    1999-06-30

    The U.S. Department of Energy's Remote Sensing Laboratory developed the geometric correction system (GCS) as a state-of-the-art solution for removing distortions from multispectral line scanner data caused by aircraft motion. The system operates on Daedalus AADS-1268 scanner data acquired from fixed-wing and helicopter platforms. The aircraft attitude, altitude, acceleration, and location are recorded and applied to the data, thereby determining each pixel's location on the earth with respect to a given datum and projection. The GCS has yielded a positional accuracy of 0.5 meters when used with a 1-meter digital elevation model. Data at this level of accuracy are invaluable in making precise areal estimates and as input into a geographic information system. The combination of high spatial resolution and accurate geo-rectification makes the GCS a unique tool for identifying and locating environmental conditions, finding targets of interest, and detecting changes as they occur over time.

  12. Progress toward accurate high spatial resolution actinide analysis by EPMA

    NASA Astrophysics Data System (ADS)

    Jercinovic, M. J.; Allaz, J. M.; Williams, M. L.

    2010-12-01

    High-precision, high-spatial-resolution EPMA of actinides is a significant issue for geochronology, resource geochemistry, and studies involving the nuclear fuel cycle. Particular interest focuses on understanding the behavior of Th and U in the growth and breakdown reactions relevant to actinide-bearing phases (monazite, zircon, thorite, allanite, etc.), and on geochemical fractionation processes involving Th and U in fluid interactions. Unfortunately, the measurement of minor and trace concentrations of U in the presence of major concentrations of Th and/or REEs is particularly problematic, especially in complexly zoned phases with large compositional variation on the micro- or nanoscale, spatial resolutions now accessible with modern instruments. Sub-micron, high-precision compositional analysis of minor components is feasible in very high-Z phases where scattering is limited at lower kV (15 kV or less) and where the beam diameter can be kept below 400 nm at high current (e.g., 200-500 nA). High-collection-efficiency spectrometers and high-performance electron optics in EPMA now allow the use of lower overvoltage through an exceptional range in beam current, facilitating higher-spatial-resolution quantitative analysis. The U LIII edge at 17.2 kV precludes L-series analysis at low kV (high spatial resolution), requiring careful measurements of the actinide M series. Also, U Lα detection (wavelength 0.9 Å) requires the use of LiF (220) or (420), not generally available on most instruments. Strong peak overlaps of Th on U make highly accurate interference correction mandatory, with problems compounded by the Th MIV and Th MV absorption edges affecting peak, background, and interference calibration measurements (especially the interference of the Th M line family on U Mβ). Complex REE-bearing phases such as monazite, zircon, and allanite have particularly complex interference issues due to multiple peak and background overlaps from elements present in the activation

  13. Diagnostic Limitations to Accurate Diagnosis of Cholera▿

    PubMed Central

    Alam, Munirul; Hasan, Nur A.; Sultana, Marzia; Nair, G. Balakrish; Sadique, A.; Faruque, A. S. G.; Endtz, Hubert P.; Sack, R. B.; Huq, A.; Colwell, R. R.; Izumiya, Hidemasa; Morita, Masatomo; Watanabe, Haruo; Cravioto, Alejandro

    2010-01-01

    The treatment regimen for diarrhea depends greatly on correct diagnosis of its etiology. Recent diarrhea outbreaks in Bangladesh showed Vibrio cholerae to be the predominant cause, although more than 40% of the suspected cases failed to show cholera etiology by conventional culture methods (CMs). In the present study, suspected cholera stools collected from every 50th patient during an acute diarrheal outbreak were analyzed extensively using different microbiological and molecular tools to determine their etiology. Of 135 stools tested, 86 (64%) produced V. cholerae O1 by CMs, while 119 (88%) tested positive for V. cholerae O1 by rapid cholera dipstick (DS) assay; all but three samples positive for V. cholerae O1 by CMs were also positive for V. cholerae O1 by DS assay. Of 49 stools that lacked CM-based cholera etiology despite most being positive for V. cholerae O1 by DS assay, 25 (51%) had coccoid V. cholerae O1 cells as confirmed by direct fluorescent antibody (DFA) assay, 36 (73%) amplified primers for the genes wbe O1 and ctxA by multiplex-PCR (M-PCR), and 31 (63%) showed El Tor-specific lytic phage on plaque assay (PA). Each of these methods allowed the cholera etiology to be confirmed for 97% of the stool samples. The results suggest that suspected cholera stools that fail to show etiology by CMs during acute diarrhea outbreaks may be due to the inactivation of V. cholerae by in vivo vibriolytic action of the phage and/or nonculturability induced as a host response. PMID:20739485

  15. Toward Accurate Adsorption Energetics on Clay Surfaces

    PubMed Central

    2016-01-01

    Clay minerals are ubiquitous in nature, and the manner in which they interact with their surroundings has important industrial and environmental implications. Consequently, a molecular-level understanding of the adsorption of molecules on clay surfaces is crucial. In this regard computer simulations play an important role, yet the accuracy of widely used empirical force fields (FF) and density functional theory (DFT) exchange-correlation functionals is often unclear in adsorption systems dominated by weak interactions. Herein we present results from quantum Monte Carlo (QMC) for water and methanol adsorption on the prototypical clay kaolinite. To the best of our knowledge, this is the first time QMC has been used to investigate adsorption at a complex, natural surface such as a clay. As well as being valuable in their own right, the QMC benchmarks obtained provide reference data against which the performance of cheaper DFT methods can be tested. Indeed using various DFT exchange-correlation functionals yields a very broad range of adsorption energies, and it is unclear a priori which evaluation is better. QMC reveals that in the systems considered here it is essential to account for van der Waals (vdW) dispersion forces since this alters both the absolute and relative adsorption energies of water and methanol. We show, via FF simulations, that incorrect relative energies can lead to significant changes in the interfacial densities of water and methanol solutions at the kaolinite interface. Despite the clear improvements offered by the vdW-corrected and the vdW-inclusive functionals, absolute adsorption energies are often overestimated, suggesting that the treatment of vdW forces in DFT is not yet a solved problem. PMID:27917256

  16. Algorithmic scatter correction in dual-energy digital mammography

    SciTech Connect

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.; Lau, Beverly A.; Chan, Suk-tak; Zhang, Lei

    2013-11-15

    background DE calcification signals obtained with scatter-uncorrected data were reduced by 58% when scatter-corrected data from the algorithmic method were used. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When the proposed algorithmic scatter correction is applied to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method for scatter correction in DEDM; moreover, this method is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge its effectiveness.

  17. Important Nearby Galaxies without Accurate Distances

    NASA Astrophysics Data System (ADS)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis for which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  18. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic time) have seen significant improvements, and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized, and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al., 2014). The Multispecimen approach was validated, although the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units, an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  19. Accurate orbit propagation with planetary close encounters

    NASA Astrophysics Data System (ADS)

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic, and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both for the dynamical stability of the formulation and for the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step size (given the order of the multistep method, the formulation, and the initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee numerical stability during the propagation; 3) a new definition of the region of influence in phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky effect, and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and a propagator consisting of a variable step-size and order multistep method combined with Cowell's formulation (i.e., direct integration of position and velocity in either the physical or a fictitious time).

  20. Towards Accurate Application Characterization for Exascale (APEX)

    SciTech Connect

    Hammond, Simon David

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made; particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users and developers.

  1. ACES: Accurate Cervical Evaluation With Sonography.

    PubMed

    Chory, Margaret K; Schnettler, William T; March, Melissa; Hacker, Michele R; Modest, Anna M; Rodriguez, Diana

    2016-01-01

    Transvaginal sonographic cervical length screening is an important tool for the evaluation of preterm labor. However, a structured curriculum is lacking in obstetrics and gynecology residency programs. The Accurate Cervical Evaluation with Sonography (ACES) program was developed to address this deficiency and combines an online didactic course with a standardized performance assessment of live scans. We sought to evaluate the effectiveness of the ACES program to teach residents sonographic cervical length assessment. All obstetrics and gynecology residents at our institution were invited to participate from 2012 to 2013. The program consisted of an initial supervised transvaginal cervical evaluation, an online didactic course and written examination, and 5 subsequent supervised scans. The instructor performed an independent cervical length measurement at each encounter. The primary outcome was the difference in cervical length measurement between the resident and instructor. We hypothesized that this difference would decrease over time. At each visit, a 10-item checklist was used for skill assessment. Comparisons of checklist scores over time were also performed. Seventeen of 20 residents completed at least some of the training, and 10 completed the entire program. The median difference in cervical length measurement between residents and instructors at posttests 3, 4, and 5 improved significantly compared to the pretest scan (all P ≤ .02). Similarly, the checklist scores improved over time (all P ≤ .0008). Transvaginal cervical sonography is an important tool in the evaluation of preterm labor. The ACES program provides residents a structured curriculum for cervical evaluation and supervisors a standardized means of evaluating trainees' skills. © 2016 by the American Institute of Ultrasound in Medicine.

  2. Accurate Biomass Estimation via Bayesian Adaptive Sampling

    NASA Astrophysics Data System (ADS)

    Wheeler, K.; Knuth, K.; Castle, P.

    2005-12-01

    and IKONOS imagery and the 3-D volume estimates. The combination of these then allow for a rapid and hopefully very accurate estimation of biomass.

  3. Accurate glucose detection in a small etalon

    NASA Astrophysics Data System (ADS)

    Martini, Joerg; Kuebler, Sebastian; Recht, Michael; Torres, Francisco; Roe, Jeffrey; Kiesel, Peter; Bruce, Richard

    2010-02-01

    We are developing a continuous glucose monitor for subcutaneous long-term implantation. This detector contains a double-chamber Fabry-Perot etalon that measures the differential refractive index (RI) between a reference and a measurement chamber at 850 nm. The etalon chambers have wavelength-dependent transmission maxima which depend linearly on the RI of their contents. An RI difference of Δn = 1.5×10^-6 shifts the spectral position of a transmission maximum by 1 pm in our measurement. By sweeping the wavelength of a single-mode Vertical-Cavity Surface-Emitting Laser (VCSEL) linearly in time and detecting the maximum transmission peaks of the etalon, we are able to measure the RI of a liquid. We have demonstrated an accuracy of Δn = ±3.5×10^-6 over a Δn range of 0 to 1.75×10^-4, and an accuracy of 2% over a Δn range of 1.75×10^-4 to 9.8×10^-4. The accuracy is primarily limited by the reference measurement. The RI difference between the etalon chambers is made specific to glucose by the competitive, reversible release of Concanavalin A (ConA) from an immobilized dextran matrix. The matrix, and the ConA bound to it, is positioned outside the optical detection path. ConA is released from the matrix by reacting with glucose and diffuses into the optical path to change the RI in the etalon. Factors such as temperature affect the RI in both etalon chambers equally and therefore do not affect the differential measurement. A typical standard deviation in RI is ±1.4×10^-6 over the range 32°C to 42°C. The detector enables an accurate glucose-specific concentration measurement.
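
    The quoted sensitivity makes the readout a simple linear conversion: a peak shift of 1 pm corresponds to Δn = 1.5×10^-6. A minimal sketch, assuming only that stated linearity:

```python
# Differential refractive index from the measured transmission-peak shift,
# using the sensitivity quoted in the abstract (1 pm per dn = 1.5e-6).
SENSITIVITY = 1.5e-6 / 1e-12   # delta-n per metre of peak shift

def delta_n(peak_shift_m):
    """Differential RI between the chambers from the peak shift (metres)."""
    return SENSITIVITY * peak_shift_m

# A 10 pm shift between measurement and reference chamber:
dn = delta_n(10e-12)   # 1.5e-5
```

    Because both chambers share temperature and other common-mode influences, only this differential shift, and not the absolute peak position, carries the glucose signal.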

  4. Error Correction: Report on a Study

    ERIC Educational Resources Information Center

    Dabaghi, Azizollah

    2006-01-01

    This article reports on a study which investigated the effects of correction of learners' grammatical errors on acquisition. Specifically, it compared the effects of timing of correction (immediate versus delayed correction) and manner of correction (explicit versus implicit correction). It also investigated the relative effects of correction of…

  5. Causal instrument corrections for short-period and broadband seismometers

    USGS Publications Warehouse

    Haney, Matthew M.; Power, John; West, Michael; Michaels, Paul

    2012-01-01

    Of all the filters applied to recordings of seismic waves, which include source, path, and site effects, the one we know most precisely is the instrument filter. Therefore, it behooves seismologists to accurately remove the effect of the instrument from raw seismograms. Applying instrument corrections allows analysis of the seismogram in terms of physical units (e.g., displacement or particle velocity of the Earth’s surface) instead of the output of the instrument (e.g., digital counts). The instrument correction can be considered the most fundamental processing step in seismology since it relates the raw data to an observable quantity of interest to seismologists. Complicating matters is the fact that, in practice, the term “instrument correction” refers to more than simply the seismometer. The instrument correction compensates for the complete recording system including the seismometer, telemetry, digitizer, and any anti‐alias filters. Knowledge of all these components is necessary to perform an accurate instrument correction. The subject of instrument corrections has been covered extensively in the literature (Seidl, 1980; Scherbaum, 1996). However, the prospect of applying instrument corrections still evokes angst among many seismologists—the authors of this paper included. There may be several reasons for this. For instance, the seminal paper by Seidl (1980) exists in a journal that is not currently available in electronic format and cannot be accessed online. Also, a standard method for applying instrument corrections involves the programs TRANSFER and EVALRESP in the Seismic Analysis Code (SAC) package (Goldstein et al., 2003). The exact mathematical methods implemented in these codes are not thoroughly described in the documentation accompanying SAC.
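The core of an instrument correction is a frequency-domain deconvolution of the recording system's response from the raw trace. A minimal sketch, assuming the response is already evaluated at the FFT frequencies; a water level stabilises division where the response is small (production tools such as SAC's TRANSFER add tapering, band-limiting and unit handling that are omitted here):

```python
import numpy as np

def remove_instrument_response(trace, response_fft, water_level=1e-8):
    """Deconvolve an instrument response in the frequency domain.

    trace: real-valued seismogram samples.
    response_fft: complex response at the rfft frequencies of the trace.
    Bins where |response| falls below the water level are clipped to a
    real floor, a deliberately simplified stabilisation.
    """
    spec = np.fft.rfft(trace)
    resp = np.asarray(response_fft, dtype=complex)
    floor = water_level * np.max(np.abs(resp))
    resp_stable = np.where(np.abs(resp) < floor, floor, resp)
    return np.fft.irfft(spec / resp_stable, n=len(trace))
```

With a flat (all-ones) response the trace is returned unchanged, which is a quick sanity check on the transform round trip.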

  6. Blue: correcting sequencing errors using consensus and context.

    PubMed

    Greenfield, Paul; Duesing, Konsta; Papanicolaou, Alexie; Bauer, Denis C

    2014-10-01

    Bioinformatics tools, such as assemblers and aligners, are expected to produce more accurate results when given better quality sequence data as their starting point. This expectation has led to the development of stand-alone tools whose sole purpose is to detect and remove sequencing errors. A good error-correcting tool would be a transparent component in a bioinformatics pipeline, simply taking sequence data in any of the standard formats and producing a higher quality version of the same data containing far fewer errors. It should not only be able to correct all of the types of errors found in real sequence data (substitutions, insertions, deletions and uncalled bases), but it has to be both fast enough and scalable enough to be usable on the large datasets being produced by current sequencing technologies, and work on data derived from both haploid and diploid organisms. This article presents Blue, an error-correction algorithm based on k-mer consensus and context. Blue can correct substitution, deletion and insertion errors, as well as uncalled bases. It accepts both FASTQ and FASTA formats, and corrects quality scores for corrected bases. Blue also maintains the pairing of reads, both within a file and between pairs of files, making it compatible with downstream tools that depend on read pairing. Blue is memory efficient, scalable and faster than other published tools, and usable on large sequencing datasets. On the tests undertaken, Blue also proved to be generally more accurate than other published algorithms, resulting in more accurately aligned reads and the assembly of longer contigs containing fewer errors. One significant feature of Blue is that its k-mer consensus table does not have to be derived from the set of reads being corrected. This decoupling makes it possible to correct one dataset, such as a small set of 454 mate-pair reads, with the consensus derived from another dataset, such as Illumina reads derived from the same DNA sample. Such cross-correction…
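The k-mer consensus idea behind tools like Blue can be illustrated with a toy corrector that handles substitution errors only; the parameters, table structure and search strategy below are simplifications for illustration, not Blue's actual implementation:

```python
from collections import Counter

def correct_substitutions(reads, k=5, min_count=2):
    """Toy k-mer consensus corrector (substitutions only).

    A k-mer seen at least min_count times is trusted; an untrusted
    k-mer is repaired by trying single-base substitutions until a
    trusted k-mer is found. Real tools also handle insertions,
    deletions and uncalled bases, and use compact k-mer tables.
    """
    table = Counter(r[i:i + k] for r in reads for i in range(len(r) - k + 1))
    corrected = []
    for read in reads:
        read = list(read)
        for i in range(len(read) - k + 1):
            kmer = "".join(read[i:i + k])
            if table[kmer] >= min_count:
                continue
            for j in range(k):
                for base in "ACGT":
                    cand = kmer[:j] + base + kmer[j + 1:]
                    if table[cand] >= min_count:
                        read[i + j] = base
                        break
                else:
                    continue
                break
        corrected.append("".join(read))
    return corrected
```

Note that nothing restricts the table to the reads being corrected: building `table` from one dataset and correcting another is exactly the cross-correction decoupling the abstract describes.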

  7. PIXImus DXA with different software needs individual calibration to accurately predict fat mass.

    PubMed

    Johnston, Sarah L; Peacock, Wendy L; Bell, Lynn M; Lonchampt, Michel; Speakman, John R

    2005-09-01

    To validate GE PIXImus2 DXA fat mass (FM) estimates by chemical analysis, to compare previously published correction equations with an equation from our machine, and to determine intermachine variation. C57BL/6J (n = 16) and Aston (n = 14) mice (including ob/ob), Siberian hamsters (Phodopus sungorus) (n = 15), and bank voles (Clethrionomys glareolus) (n = 37) were DXA scanned postmortem, dried, then fat extracted using a Soxhlet apparatus. We compared extracted FM with DXA-predicted FM corrected using an equation designed using wild-type animals from split-sample validation and multiple regression and two previously published equations. Sixteen animals were scanned on both a GE PIXImus2 DXA in France and a second machine in the United Kingdom. DXA underestimated FM of obese C57BL/6J by 1.4 +/- 0.19 grams but overestimated FM for wild-type C57BL/6J (2.0 +/- 0.11 grams), bank voles (1.1 +/- 0.09 grams), and hamsters (1.1 +/- 0.13 grams). DXA-predicted FM corrected using our equation accurately predicted extracted FM (accuracy 0.02 grams), but the other equations did not (accuracy, -1.3 and -1.8 grams; paired Student's t test, p < 0.001). Two similar DXA instruments gave the same FM for obese mutant but not lean wild-type animals. DXA using the same software could use the same correction equation to accurately predict FM for obese mutant but not lean wild-type animals. PIXImus machines purchased with new software need validating to accurately predict FM.

  8. Thermal correction to the molar polarizability of a Boltzmann gas

    SciTech Connect

    Jentschura, U. D.; Puchalski, M.; Mohr, P. J.

    2011-12-15

    Metrology in atomic physics has been crucial for a number of advanced determinations of fundamental constants. In addition to very precise frequency measurements, the molar polarizability of an atomic gas has recently also been measured very accurately. Part of the motivation for the measurements is due to ongoing efforts to redefine the International System of Units (SI), for which an accurate value of the Boltzmann constant is needed. Here we calculate the dominant shift of the molar polarizability in an atomic gas due to thermal effects. It is given by the relativistic correction to the dipole interaction, which emerges when the probing electric field is Lorentz transformed into the rest frame of the atoms that undergo thermal motion. While this effect is small when compared to currently available experimental accuracy, the relativistic correction to the dipole interaction is much larger than the thermal shift of the polarizability induced by blackbody radiation.

  9. Isospin-mixing correction for fp-shell Fermi transitions

    SciTech Connect

    Ormand, W.E.; Brown, B.A.

    1995-10-01

    Isospin-mixing corrections for superallowed Fermi transitions in fp-shell nuclei are computed within the framework of the shell model. The study includes a re-evaluation of three nuclei that are part of the set of nine accurately measured transitions and five new cases that are expected to be measured in the future at radioactive-beam facilities. For the heavier fp-shell nuclei, both the configuration-mixing term, δ_IM, and the radial-overlap mismatch correction, δ_RO, are much larger than in the case of the previous nine transitions. For the nine accurately measured transitions, excellent agreement with the CVC hypothesis is found, but the CKM matrix is found to violate the unitarity condition at the level of 3σ.
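The corrections named above enter the corrected Ft value in the standard form used in the superallowed-decay literature (stated here from general knowledge of that literature, not quoted from this record):

```latex
\mathcal{F}t = ft\,(1+\delta_R)\,(1-\delta_C),
\qquad \delta_C = \delta_{IM} + \delta_{RO},
```

where CVC requires \(\mathcal{F}t\) to be the same for all superallowed transitions, which is why constancy of the nine measured values is the test quoted above.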

  10. Surface consistent finite frequency phase corrections

    NASA Astrophysics Data System (ADS)

    Kimman, W. P.

    2016-07-01

    Static time-delay corrections are frequency independent and ignore velocity variations away from the assumed vertical ray path through the subsurface. There is therefore a clear potential for improvement if the finite frequency nature of wave propagation can be properly accounted for. Such a method is presented here based on the Born approximation, the assumption of surface consistency and the misfit of instantaneous phase. The concept of instantaneous phase lends itself very well to sweep-like signals, hence these are the focus of this study. Analytical sensitivity kernels are derived that accurately predict frequency-dependent phase shifts due to P-wave anomalies in the near surface. They are quick to compute and robust near the source and receivers. An additional correction is presented that re-introduces the nonlinear relation between model perturbation and phase delay, which becomes relevant for stronger velocity anomalies. The phase shift as a function of frequency is a slowly varying signal; its computation therefore does not require fine sampling even for broad-band sweeps. The kernels reveal interesting features of the sensitivity of seismic arrivals to the near surface: small anomalies can have a relatively large impact resulting from the medium field term that is dominant near the source and receivers. Furthermore, even simple velocity anomalies can produce a distinct frequency-dependent phase behaviour. Unlike statics, the predicted phase corrections are smooth in space. Verification with spectral element simulations shows an excellent match for the predicted phase shifts over the entire seismic frequency band. Applying the phase shift to the reference sweep corrects for wavelet distortion, making the technique akin to surface consistent deconvolution, even though no division in the spectral domain is involved. As long as multiple scattering is mild, surface consistent finite frequency phase corrections outperform traditional statics for moderately large…
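The instantaneous phase whose misfit drives these corrections is the angle of the analytic signal. A minimal NumPy sketch (equivalent to `np.angle(scipy.signal.hilbert(x))`, written out here so the construction is explicit):

```python
import numpy as np

def instantaneous_phase(x):
    """Unwrapped instantaneous phase of a real signal.

    Builds the analytic signal by zeroing negative frequencies and
    doubling positive ones in the FFT, then takes the unwrapped angle.
    """
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0          # Nyquist bin kept once
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spectrum * h)
    return np.unwrap(np.angle(analytic))
```

For a pure sweep or tone the unwrapped phase is smooth and slowly varying, which is the property the abstract exploits when it notes that fine frequency sampling is unnecessary.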

  11. Ion recombination correction in carbon ion beams.

    PubMed

    Rossomme, S; Hopfgartner, J; Lee, N D; Delor, A; Thomas, R A S; Romano, F; Fukumura, A; Vynckier, S; Palmans, H

    2016-07-01

    In this work, ion recombination is studied as a function of energy and depth in carbon ion beams. Measurements were performed in three different passively scattered carbon ion beams with energies of 62 MeV/n, 135 MeV/n, and 290 MeV/n using various types of plane-parallel ionization chambers. Experimental results were compared with two analytical models for initial recombination. One model is generally used for photon beams and the other model, developed by Jaffé, takes into account the ionization density along the ion track. An investigation was carried out to ascertain the effect on the ion recombination correction with varying ionization chamber orientation with respect to the direction of the ion tracks. The variation of the ion recombination correction factors as a function of depth was studied for a Markus ionization chamber in the 62 MeV/n nonmodulated carbon ion beam. This variation can be related to the depth distribution of linear energy transfer. Results show that the theory for photon beams is not applicable to carbon ion beams. On the other hand, by optimizing the value of the ionization density and the initial mean-square radius, good agreement is found between Jaffé's theory and the experimental results. As predicted by Jaffé's theory, the results confirm that ion recombination corrections strongly decrease with an increasing angle between the ion tracks and the electric field lines. For the Markus ionization chamber, the variation of the ion recombination correction factor with depth was modeled adequately by a sigmoid function, which is approximately constant in the plateau and strongly increasing in the Bragg peak region to values of up to 1.06. Except in the distal edge region, all experimental results are accurately described by Jaffé's theory. Experimental results confirm that ion recombination in the investigated carbon ion beams is dominated by initial recombination. Ion recombination corrections are found to be significant and cannot be
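The sigmoid depth dependence reported for the Markus chamber can be modelled as below; the plateau value, the amplitude (rising to about 1.06 in the Bragg peak, per the abstract), and the depth parameters are illustrative placeholders, not fitted values from the paper:

```python
import math

def recombination_correction(z_mm, k_plateau=1.0, amplitude=0.06,
                             z_half=28.0, width=1.5):
    """Sigmoid model of the ion recombination correction factor vs depth.

    Approximately k_plateau in the plateau region, rising smoothly to
    k_plateau + amplitude around the Bragg peak at depth z_half.
    """
    return k_plateau + amplitude / (1.0 + math.exp(-(z_mm - z_half) / width))
```
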

  12. On the accurate simulation of tsunami wave propagation

    NASA Astrophysics Data System (ADS)

    Castro, C. E.; Käser, M.; Toro, E. F.

    2009-04-01

    A very important part of any tsunami early warning system is the numerical simulation of the wave propagation in the open sea and close to geometrically complex coastlines, respecting bathymetric variations. Here we are interested in improving the numerical tools available to accurately simulate tsunami wave propagation on a Mediterranean basin scale. To this end, we need to accomplish some targets, such as: high-order numerical simulation in space and time, preserving steady state conditions to avoid spurious oscillations, and describing complex geometries due to bathymetry and coastlines. We use the Arbitrary accuracy DERivatives Riemann problem method together with the Finite Volume method (ADER-FV) over non-structured triangular meshes. The novelty of this method is the improvement of the ADER-FV scheme, introducing the well-balanced property when geometrical sources are considered for unstructured meshes and arbitrary high-order accuracy. In previous work by Castro and Toro [1], the authors mention that ADER-FV schemes approach the well-balanced condition asymptotically, which was true for the test case mentioned in [1]. However, new evidence [2] shows that for real-scale problems such as the Mediterranean basin, and considering realistic bathymetry such as ETOPO-2 [3], this asymptotic behavior is not enough. Under these realistic conditions the standard ADER-FV scheme fails to accurately describe the propagation of gravity waves without being contaminated with spurious oscillations, also known as numerical waves. The main problem here is that at the discrete level, i.e. from a numerical point of view, the numerical scheme does not correctly balance the influence of the fluxes and the sources. Numerical schemes that retain this balance are said to satisfy the well-balanced property or the exact C-property. This imbalance is reduced as we refine the spatial discretization or increase the order of the numerical method. However, the computational cost increases considerably this way

  13. Elevation correction factor for absolute pressure measurements

    NASA Technical Reports Server (NTRS)

    Panek, Joseph W.; Sorrells, Mark R.

    1996-01-01

    With the arrival of highly accurate multi-port pressure measurement systems, conditions that previously did not affect overall system accuracy must now be scrutinized closely. Errors caused by elevation differences between pressure sensing elements and model pressure taps can be quantified and corrected. With multi-port pressure measurement systems, the sensing elements are connected to pressure taps that may be many feet away. The measurement system may be at a different elevation than the pressure taps due to laboratory space or test article constraints. This difference produces a pressure gradient that is inversely proportional to height within the interface tube. The pressure at the bottom of the tube will be higher than the pressure at the top due to the weight of the tube's column of air. Tubes with higher pressures will exhibit larger absolute errors due to the higher air density. The above effect is well documented but has generally been taken into account with large elevations only. With error analysis techniques, the loss in accuracy from elevation can be easily quantified. Correction factors can be applied to maintain the high accuracies of new pressure measurement systems.
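The correction described is the hydrostatic pressure offset of the air column in the interface tube, ΔP = ρ·g·Δh. A minimal sketch assuming a constant air density (sea-level standard by default); a full treatment would use the density of the gas actually in the tube:

```python
def elevation_correction_pa(delta_h_m, air_density=1.225, g=9.80665):
    """Pressure offset (Pa) between a tap and a sensor delta_h_m metres lower.

    Weight of the tube's air column: the lower end reads higher by
    rho * g * delta_h. Positive delta_h_m means the sensor sits below
    the tap.
    """
    return air_density * g * delta_h_m
```

At sea-level density this is roughly 12 Pa per metre of elevation difference, small in absolute terms but significant against the accuracy of modern multi-port systems.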

  14. [Atmospheric adjacency effect correction of ETM images].

    PubMed

    Liu, Cheng-yu; Chen, Chun; Zhang, Shu-qing; Gao, Ji-yue

    2010-09-01

    Accurate retrieval of the ground surface reflectance is an important precondition for improving subsequent remote sensing image products and the quantitative application of remote sensing. However, because the electromagnetic wave is scattered by the atmosphere during its transmission from the ground surface to the sensor, the signal of the target received by the sensor contains the signal of the background, and the adjacency effect emerges. Because of the adjacency effect, remote sensing images become blurry and their contrast is reduced, so the ground surface reflectance retrieved from them is also inaccurate. Ultimately, the quality of subsequent remote sensing image products and the accuracy of quantitative applications of remote sensing may decrease. In the present paper, according to the radiative transfer equation, an atmospheric adjacency effect correction experiment on ETM images was carried out using the point spread function method. The result of the experiment indicated that the contrast of the corrected ETM images increased, and the ground surface reflectance retrieved from those images was more accurate.

  15. Ovarian Cancer Incidence Corrected for Oophorectomy

    PubMed Central

    Baldwin, Lauren A.; Chen, Quan; Tucker, Thomas C.; White, Connie G.; Ore, Robert N.; Huang, Bin

    2017-01-01

    Current reported incidence rates for ovarian cancer may significantly underestimate the true rate because of the inclusion of women in the calculations who are not at risk for ovarian cancer due to prior benign salpingo-oophorectomy (SO). We have considered prior SO to more realistically estimate risk for ovarian cancer. Kentucky Health Claims Data, International Classification of Disease 9 (ICD-9) codes, Current Procedure Terminology (CPT) codes, and Kentucky Behavioral Risk Factor Surveillance System (BRFSS) Data were used to identify women who have undergone SO in Kentucky, and these women were removed from the at-risk pool in order to re-assess incidence rates to more accurately represent ovarian cancer risk. The protective effect of SO on the population was determined on an annual basis for ages 5–80+ using data from the years 2009–2013. The corrected age-adjusted rates of ovarian cancer that considered SO ranged from 33% to 67% higher than age-adjusted rates from the standard population. Correction of incidence rates for ovarian cancer by accounting for women with prior SO gives a better understanding of risk for this disease faced by women. The rates of ovarian cancer were substantially higher when SO was taken into consideration than estimates from the standard population. PMID:28368298
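The correction amounts to shrinking the at-risk denominator by the number of women with prior SO. A sketch with made-up numbers (not the Kentucky data):

```python
def corrected_incidence_per_100k(cases, population, prior_so):
    """Incidence rate with prior-SO women removed from the at-risk pool."""
    at_risk = population - prior_so
    return 1e5 * cases / at_risk

# Illustrative only: removing 25% of the population from the denominator
# raises the rate by a third, matching the low end of the 33-67% range
# reported above.
```
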

  16. Are patients referred to rehabilitation diagnosed accurately?

    PubMed

    Tederko, Piotr; Krasuski, Marek; Nyka, Izabella; Mycielski, Jerzy; Tarnacka, Beata

    2017-07-17

    An accurate diagnosis of the leading health condition and comorbidities is a prerequisite for safe and effective rehabilitation. The problem of diagnostic errors in physical and rehabilitation medicine (PRM) has not been addressed sufficiently. The responsibility of a referring physician is to determine indications and contraindications for rehabilitation. To assess the rate of and risk factors for inaccurate referral diagnoses (RD) in patients referred to a rehabilitation facility. We hypothesized that inaccurate RD would be more common in patients 1) referred by non-PRM physicians; 2) waiting longer for admission; 3) of older age. Retrospective observational study. 1000 randomly selected patients admitted between 2012 and 2016 to a day-rehabilitation center (DRC). University DRC specialized in musculoskeletal diseases. On admission all cases underwent clinical verification of RD. Inappropriateness regarding primary diagnoses and comorbidities was noted. Influence of several factors affecting probability of inaccurate RD was analyzed with a multiple binary regression model applied to 6 categories of diseases. The rate of inaccurate RD was 25.2%. Higher frequency of inaccurate RD was noted among patients referred by non-PRM specialists (30.3% vs 17.3% in cases referred by PRM specialists). Application of logit regression showed a highly significant influence of the specialty of the referring physician on the odds of inaccurate RD (joint Wald test chi2(6) = 38.98, p-value = 0.000), controlling for the influence of other variables. This may reflect a suboptimal knowledge of the rehabilitation process and a tendency to neglect comorbidities by non-PRM specialists. The rate of inaccurate RD did not correlate with time between referral and admission (joint Wald test of all odds ratios equal to 1, chi2(6) = 5.62, p-value = 0.467); however, mean and median waiting times were relatively short (35.7 and 25 days respectively). A high risk of overlooked multimorbidity was

  17. An automated method for accurate vessel segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm’s growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) at vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions achieved by our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008

  18. Corrections of satellite positions in near-real time.

    NASA Technical Reports Server (NTRS)

    Poularikas, A. D.

    1973-01-01

    The influence of sudden increases of electron content on the accurate determination of the position of a satellite is investigated based on a spherically stratified ionospheric model. Using the total electron content information from Faraday rotation measurements, a procedure is presented whereby the corrections of satellite position due to the unpredicted electron increase can be accounted for without the need to know the spatial distribution of the additional electrons.
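The first-order ionospheric effect at the heart of such corrections follows the standard group-delay relation Δr = 40.3·TEC/f² (excess range in metres, TEC in electrons/m², frequency in Hz); the function below is a generic sketch of that relation, not the paper's specific procedure:

```python
def ionospheric_range_error_m(tec_el_per_m2, freq_hz):
    """First-order ionospheric group delay expressed as excess range (m).

    Faraday rotation measurements, as in the record above, are one way
    to obtain the total electron content (TEC) along the path.
    """
    return 40.3 * tec_el_per_m2 / freq_hz ** 2
```

The inverse-square frequency dependence is why a sudden electron-content increase matters far more at lower tracking frequencies.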

  19. Radiative corrections to 0+ → 0+ β transitions

    SciTech Connect

    Jaus, W.; Rasche, G.

    1987-06-01

    We reexamine and refine our former analysis of electromagnetic corrections to 0+ → 0+ β transitions. The disagreement with a recent approximate calculation of Sirlin and Zucchini is due to an error in our earlier numerical computation. The new results lead to much better agreement between the Ft values of the eight accurately studied decays. We find an average value of Ft = 3072.4 ± 1.6 s.

  20. Radiative corrections to 0+ → 0+ β transitions

    NASA Astrophysics Data System (ADS)

    Jaus, W.; Rasche, G.

    1987-06-01

    We reexamine and refine our former analysis of electromagnetic corrections to 0+ → 0+ β transitions. The disagreement with a recent approximate calculation of Sirlin and Zucchini is due to an error in our earlier numerical computation. The new results lead to much better agreement between the Ft values of the eight accurately studied decays. We find an average value of Ft = 3072.4 ± 1.6 s.

  1. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image which is corrected for scattering effects by canopies and a reconstructed sky image from the raw format image. To test the sensitivity of the gap fraction derived by the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method resulted in accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
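At its core, gap fraction is the proportion of sky pixels in the photograph. A toy sketch with a fixed brightness threshold (threshold choice being precisely the subjective step the method above seeks to remove):

```python
def gap_fraction(pixels, threshold):
    """Fraction of sky (gap) pixels in a cover photo.

    pixels: iterable of brightness values; values above the threshold
    are counted as sky. A fixed threshold is used purely for
    illustration.
    """
    pixels = list(pixels)
    gaps = sum(1 for p in pixels if p > threshold)
    return gaps / len(pixels)
```
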

  2. Singularity Correction for Long-Range-Corrected Density Functional Theory with Plane-Wave Basis Sets.

    PubMed

    Kawashima, Yukio; Hirao, Kimihiko

    2017-03-09

    We introduced two methods to correct the singularity in the calculation of long-range Hartree-Fock (HF) exchange for long-range-corrected density functional theory (LC-DFT) calculations in plane-wave basis sets. The first method introduces an auxiliary function to cancel out the singularity. The second method introduces a truncated long-range Coulomb potential, which has no singularity. We assessed the introduced methods using the LC-BLYP functional by applying it to isolated systems of naphthalene and pyridine. We first compared the total energies and the HOMO energies of the singularity-corrected and uncorrected calculations and confirmed that singularity correction is essential for LC-DFT calculations using plane-wave basis sets. The LC-DFT calculation results converged rapidly with respect to the cell size, as did the other functionals, and their results were in good agreement with the calculated results obtained using Gaussian basis sets. LC-DFT succeeded in obtaining accurate orbital energies and excitation energies. We next applied LC-DFT with singularity correction methods to the electronic structure calculations of the extended systems, Si and SiC. We confirmed that singularity correction is important for calculations of extended systems as well. The calculation results of the valence and conduction bands by LC-BLYP showed good convergence with respect to the number of k points sampled. The introduced methods succeeded in overcoming the singularity problem in HF exchange calculation. We investigated the effect of the singularity correction on the excitation state calculation and found that more careful treatment of the singularities is required than for ground-state calculations. We finally examined the excitonic effect on the band gap of the extended systems. We calculated the excitation energies to the first excited state of the extended systems using a supercell model at the Γ point and found that the excitonic binding energy, supposed to be small for

  3. Retrospective 3D motion correction using spherical navigator echoes.

    PubMed

    Johnson, Patricia M; Liu, Junmin; Wade, Trevor; Tavallaei, Mohammad Ali; Drangova, Maria

    2016-11-01

    To develop and evaluate a rapid spherical navigator echo (SNAV) motion correction technique, then apply it for retrospective correction of brain images. The pre-rotated, template matching SNAV method (preRot-SNAV) was developed in combination with a novel hybrid baseline strategy, which includes acquired and interpolated templates. Specifically, the SNAV templates are only rotated around the X- and Y-axes; for each rotated SNAV, simulated baseline templates that mimic object rotation about the Z-axis were interpolated. The new method was first evaluated with phantom experiments. Then, a customized SNAV-interleaved gradient echo sequence was used to image three volunteers performing directed head motion. The SNAV motion measurements were used to retrospectively correct the brain images. Experiments were performed using a 3.0 T whole-body MRI scanner and both single and 8-channel head coils. Phantom rotations and translations measured using the hybrid baselines agreed to within 0.9° and 1 mm compared to those measured with the original preRot-SNAV method. Retrospective motion correction of in vivo images using the hybrid preRot-SNAV effectively corrected for head rotation up to 4° and 4 mm. The presented hybrid approach enables the acquisition of pre-rotated baseline templates in as little as 2.5 s, and results in accurate measurement of rotations and translations. Retrospective 3D motion correction successfully reduced motion artifacts in vivo. Copyright © 2016. Published by Elsevier Inc.

  4. How well does multiple OCR error correction generalize?

    NASA Astrophysics Data System (ADS)

    Lund, William B.; Ringger, Eric K.; Walker, Daniel D.

    2013-12-01

    As the digitization of historical documents, such as newspapers, becomes more common, the need of the archive patron for accurate digital text from those documents increases. Building on our earlier work, the contributions of this paper are: 1. in demonstrating the applicability of novel methods for correcting optical character recognition (OCR) on disparate data sets, including a new synthetic training set, 2. enhancing the correction algorithm with novel features, and 3. assessing the data requirements of the correction learning method. First, we correct errors using conditional random fields (CRF) trained on synthetic training data sets in order to demonstrate the applicability of the methodology to unrelated test sets. Second, we show the strength of lexical features from the training sets on two unrelated test sets, yielding a relative reduction in word error rate on the test sets of 6.52%. New features capture the recurrence of hypothesis tokens and yield an additional relative reduction in WER of 2.30%. Further, we show that only 2.0% of the full training corpus of over 500,000 feature cases is needed to achieve correction results comparable to those using the entire training corpus, effectively reducing both the complexity of the training process and the learned correction model.
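The word error rate (WER) figures quoted are word-level edit distance normalised by the number of reference words, which can be computed as:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Single-row dynamic programme over words.
    d = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev_diag, d[0] = d[0], i
        for j in range(1, len(hyp) + 1):
            cur = d[j]
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[j] = min(d[j] + 1,          # deletion
                       d[j - 1] + 1,      # insertion
                       prev_diag + cost)  # substitution or match
            prev_diag = cur
    return d[-1] / len(ref)
```

A "relative reduction in WER of 6.52%", as reported above, compares two such rates, e.g. a drop from 0.200 to 0.187.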

  5. Corrective optics space telescope axial replacement alignment system

    NASA Astrophysics Data System (ADS)

    Slusher, Robert B.; Satter, Michael J.; Kaplan, Michael L.; Martella, Mark A.; Freymiller, Ed D.; Buzzetta, Victor

    1993-10-01

    To facilitate the accurate placement and alignment of the corrective optics space telescope axial replacement (COSTAR) structure, mechanisms, and optics, the COSTAR Alignment System (CAS) has been designed and assembled. It consists of a 20-foot optical bench, support structures for holding and aligning the COSTAR instrument at various stages of assembly, a focal plane target fixture (FPTF) providing an accurate reference to the as-built Hubble Space Telescope (HST) focal plane, two alignment translation stages with interchangeable alignment telescopes and alignment lasers, and a Zygo Mark IV interferometer with a reference sphere custom designed to allow accurate double-pass operation of the COSTAR correction optics. The system is used to align the fixed optical bench (FOB), the track, the deployable optical bench (DOB), the mechanisms, and the optics to ensure that the correction mirrors are all located in the required positions and orientations on-orbit after deployment. In this paper, the layout of the CAS is presented and the various alignment operations are listed along with the relevant alignment requirements. In addition, calibration of the necessary support structure elements and alignment aids is described, including the two-axis translation stages, the latch positions, the FPTF, and the COSTAR-mounted alignment cubes.

  6. Parents' beliefs about condoms and oral contraceptives: are they medically accurate?

    PubMed

    Eisenberg, Marla E; Bearinger, Linda H; Sieving, Renee E; Swain, Carolyne; Resnick, Michael D

    2004-01-01

    Parents are encouraged to be the primary sex educators for their children; however, little is known about the accuracy of parents' views about condoms and oral contraceptives. Telephone surveys using validated measures provided data on beliefs about the effectiveness, safety and usability of condoms and the pill among 1,069 parents of 13-17-year-olds in Minnesota and Wisconsin in 2002. Pearson chi-square tests and multivariate logistic regression models were used to compare beliefs according to sex, age, race, religion, education, income and political orientation. Substantial proportions of parents underestimated the effectiveness of condoms for preventing pregnancy and sexually transmitted diseases (STDs). Only 47% believed that condoms are very effective for STD prevention, and 40% for pregnancy prevention. Fifty-two percent thought that pill use prevents pregnancy almost all the time; 39% thought that the pill is very safe. Approximately one-quarter of parents thought that most teenagers are capable of using condoms correctly; almost four in 10 thought that most teenagers can use the pill correctly. Fathers tended to have more accurate views about condoms than mothers did; mothers' views of the pill were generally more accurate than fathers'. Whites were more likely than nonwhites to hold accurate beliefs about the pill's safety and effectiveness; conservatives were less likely than liberals to hold accurate views about the effectiveness of condoms. Campaigns encouraging parents to talk with their teenagers about sexuality should provide parents with medically accurate information on the effectiveness, safety and usability of condoms and the pill.

  7. Biomarker Surrogates Do Not Accurately Predict Sputum Eosinophils and Neutrophils in Asthma

    PubMed Central

    Hastie, Annette T.; Moore, Wendy C.; Li, Huashi; Rector, Brian M.; Ortega, Victor E.; Pascual, Rodolfo M.; Peters, Stephen P.; Meyers, Deborah A.; Bleecker, Eugene R.

    2013-01-01

Background Sputum eosinophils (Eos) are a strong predictor of airway inflammation and exacerbations, and aid asthma management, whereas sputum neutrophils (Neu) indicate a different severe asthma phenotype, potentially less responsive to TH2-targeted therapy. Variables such as blood Eos, total IgE, fractional exhaled nitric oxide (FeNO) or FEV1% predicted may predict airway Eos, while age, FEV1% predicted, or blood Neu may predict sputum Neu. Availability and ease of measurement are useful characteristics, but accuracy in predicting airway Eos and Neu, individually or combined, is not established. Objectives To determine whether blood Eos, FeNO, and IgE accurately predict sputum eosinophils, and age, FEV1% predicted, and blood Neu accurately predict sputum neutrophils (Neu). Methods Subjects in the Wake Forest Severe Asthma Research Program (N=328) were characterized by blood and sputum cells, healthcare utilization, lung function, FeNO, and IgE. Multiple analytical techniques were utilized. Results Despite significant association with sputum Eos, blood Eos, FeNO and total IgE did not accurately predict sputum Eos, and combinations of these variables failed to improve prediction. Age, FEV1% predicted and blood Neu were similarly unsatisfactory for prediction of sputum Neu. Factor analysis and stepwise selection found that FeNO, IgE and FEV1% predicted, but not blood Eos, correctly predicted 69% of sputum Eos; a parallel model correctly predicted 64% of sputum Neu; and combined prediction accurately assigned only 41% of samples. Conclusion Despite statistically significant associations, FeNO, IgE, blood Eos and Neu, FEV1% predicted, and age are poor surrogates, separately and combined, for accurately predicting sputum eosinophils and neutrophils. PMID:23706399

  8. 2012 Technical Corrections Fact Sheet

    EPA Pesticide Factsheets

Final Rule: 2012 Technical Corrections, Clarifying and Other Amendments to the Greenhouse Gas Reporting Rule, and Confidentiality Determinations for Certain Data Elements of the Fluorinated Gas Source Category

  9. Correction of the crooked nose.

    PubMed

    Potter, Jason K

    2012-02-01

    Correction of the deviated nose is one of the most difficult tasks in rhinoplasty surgery and should be approached in a systematic manner to ensure a satisfied patient and surgeon. Correction of the deviated nose is unique in that the patient's complaints frequently include aesthetic and functional characteristics. Equal importance should be given to the preoperative, intraoperative, and postoperative aspects of the patient's treatment to ensure a favorable outcome.

  10. On Navigation Sensor Error Correction

    NASA Astrophysics Data System (ADS)

    Larin, V. B.

    2016-01-01

The navigation problem for the simplest wheeled robotic vehicle is solved by measuring only kinematical parameters, doing without accelerometers and angular-rate sensors. It is supposed that the steerable-wheel angle sensor has a bias that must be corrected. The navigation parameters are corrected using GPS. The approach proposed regards the wheeled robot as a system with nonholonomic constraints. The performance of such a navigation system is demonstrated by way of an example.

  11. Quantum error correction for beginners.

    PubMed

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2013-07-01

Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future.
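In the example-driven spirit the authors describe, the classical skeleton of the simplest QEC code, the three-qubit bit-flip code, can be sketched as repetition encoding with majority-vote decoding (this is only a sketch: superposition, syndrome extraction, and phase errors are all omitted):

```python
import numpy as np

def encode(bit):
    # repetition encoding, the classical analogue of |0> -> |000>, |1> -> |111>
    return np.array([bit, bit, bit])

def apply_bit_flips(codeword, flip_mask):
    # XOR with a 0/1 mask models independent bit-flip errors
    return codeword ^ flip_mask

def decode(codeword):
    # majority vote corrects any single bit flip
    return int(codeword.sum() >= 2)

# every single-position flip is corrected for both logical values
for bit in (0, 1):
    for pos in range(3):
        mask = np.zeros(3, dtype=int)
        mask[pos] = 1
        assert decode(apply_bit_flips(encode(bit), mask)) == bit
```

Two simultaneous flips defeat the code, which is why larger codes and fault-tolerant constructions are needed at scale.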

  12. Robust distortion correction of endoscope

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Nie, Sixiang; Soto-Thompson, Marcelo; Chen, Chao-I.; A-Rahim, Yousif I.

    2008-03-01

Endoscopic images suffer from a fundamental spatial distortion due to the wide angle design of the endoscope lens. This barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be corrected. Various methods and research models for barrel-type distortion correction have been proposed and studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types of endoscopes in an easy-to-use way. The correction area shall be large enough to cover all the regions that the physicians need to see. In this paper, we present our endoscope distortion correction procedure, which includes data acquisition, distortion center estimation, distortion coefficient calculation, and look-up table (LUT) generation. We investigate different polynomial models used for modeling the distortion and propose a new one which provides correction results with better visual quality. The method has been verified with four types of colonoscopes. The correction procedure is currently being applied to human subject data and the coefficients are being utilized in a subsequent 3D reconstruction project of the colon.
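The polynomial-model-plus-LUT idea can be sketched roughly as follows. This is not the paper's model: the distortion center and the coefficients `k1`, `k2` are hypothetical stand-ins for values a real calibration procedure would estimate, and nearest-neighbour resampling replaces the interpolation a production system would use:

```python
import numpy as np

def build_undistort_lut(shape, center, k1, k2):
    """For each corrected pixel, compute the distorted source pixel via a
    polynomial radial model r_d = r_u * (1 + k1*r_u**2 + k2*r_u**4)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    dx, dy = xs - center[0], ys - center[1]
    r = np.hypot(dx, dy)
    scale = 1.0 + k1 * r**2 + k2 * r**4
    # the LUT stores fractional source coordinates for every output pixel
    return center[0] + dx * scale, center[1] + dy * scale

def undistort(image, lut):
    src_x, src_y = lut
    # nearest-neighbour lookup keeps the sketch short
    xi = np.clip(np.round(src_x).astype(int), 0, image.shape[1] - 1)
    yi = np.clip(np.round(src_y).astype(int), 0, image.shape[0] - 1)
    return image[yi, xi]
```

Since the LUT depends only on the lens, it is computed once per endoscope at calibration time and then applied per frame, which is why LUT generation is the final step of the procedure.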

  13. Static corrections in mountainous areas using Fresnel-wavepath tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Jianzhong; Shi, Tai-kun; Zhao, Yasheng; Zhou, Hua-wei

    2014-12-01

We propose a 3-D Fresnel-wavepath tomography based on the simultaneous iterative reconstruction technique (SIRT) with adaptive relaxation factors, in order to obtain effective near-surface velocity models for static corrections. We derived a formula to calculate the optimal relaxation factor for tomographic inversion to increase the convergence rate and thus the efficiency of the Fresnel-wavepath tomography. A forward method based on bilinear traveltime interpolation and wavefront group marching is applied to achieve fast and accurate computation of the wavefront traveltimes in 3-D heterogeneous models. The new method is able to achieve near-surface velocity models effective in estimating long-period static corrections, and the remaining traveltime residuals after the tomographic inversion are used to estimate the short-period static corrections via a surface-consistent decomposition. The new method is tested using 3-D synthetic data and a 3-D field dataset acquired in a complex mountainous area in southwestern China.
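A plain SIRT-style iteration can be sketched as below. The paper's adaptive optimal relaxation formula is not reproduced here; `relax` is a fixed stand-in, and the row/column normalizations are one common choice for this family of methods:

```python
import numpy as np

def sirt(A, b, n_iter=200, relax=1.0):
    """Iteratively solve A @ x ~= b for slowness perturbations x.
    A: ray-path matrix (rays x cells), b: observed traveltimes."""
    # inverse row/column sums act as preconditioners (zero-safe)
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # per-ray normalization
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # per-cell normalization
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = b - A @ x                      # traveltime misfit
        x = x + relax * C * (A.T @ (R * residual))
    return x
```

In the Fresnel-wavepath variant, the rows of `A` spread each ray's sensitivity over its Fresnel volume rather than a pencil-thin path, and the relaxation factor controls how fast the update converges.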

  14. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2003-01-01

Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropic adapted and uniformly refined grids.

  15. Reflection error correction of gas turbine blade temperature

    NASA Astrophysics Data System (ADS)

    Kipngetich, Ketui Daniel; Feng, Chi; Gao, Shan

    2016-03-01

Accurate measurement of gas turbine blades' temperature is one of the greatest challenges encountered in gas turbine temperature measurements. Within an enclosed gas turbine environment with surfaces of varying temperature and low emissivities, a new challenge is introduced into the use of radiation thermometers due to the problem of reflection error. A method for correcting this error has been proposed and demonstrated in this work through computer simulation and experiment. The method assumed that emissivities of all surfaces exchanging thermal radiation are known. Simulations were carried out considering targets with low and high emissivities of 0.3 and 0.8 respectively, while experimental measurements were carried out on blades with an emissivity of 0.76. Simulated results showed the possibility of achieving an error of less than 1%, while the experimental result corrected the error to 1.1%. It was thus concluded that the method is appropriate for correcting reflection error commonly encountered in temperature measurement of gas turbine blades.
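A minimal sketch of this style of reflection correction, assuming a gray opaque target, a known reflected radiance, and a total-radiance (Stefan-Boltzmann) model rather than the band-limited radiometry a real thermometer involves:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def corrected_temperature(L_measured, emissivity, L_reflected):
    """Remove the reflected component from a radiance measurement of an
    opaque gray surface, then invert the total-radiance model
    L_measured = eps * sigma * T**4 + (1 - eps) * L_reflected."""
    L_emitted = (L_measured - (1.0 - emissivity) * L_reflected) / emissivity
    return (L_emitted / SIGMA) ** 0.25
```

The lower the target emissivity, the larger the reflected term (1 - eps) * L_reflected, which is why the low-emissivity (0.3) case is the harder test.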

  16. Robustness of channel-adapted quantum error correction

    SciTech Connect

    Ballo, Gabor; Gurin, Peter

    2009-07-15

    A quantum channel models the interaction between the system we are interested in and its environment. Such a model can capture the main features of the interaction, but, because of the complexity of the environment, we cannot assume that it is fully accurate. We study the robustness of quantum error correction operations against completely unexpected and subsequently undetermined type of channel uncertainties. We find that a channel-adapted optimal error correction operation does not only give the best possible channel fidelity but it is more robust against channel alterations than any other error correction operation. Our results are valid for Pauli channels and stabilizer codes, but based on some numerical results, we believe that very similar conclusions can be drawn also in the general case.

  17. Centerline correction of incorrectly segmented coronary arteries in CT angiography

    NASA Astrophysics Data System (ADS)

    Fu, Ling; Kang, Yan

    2013-03-01

    For computer-aided diagnosis of cardiovascular diseases, accurately extracted centerlines of coronary arteries are important. However, centerlines extracted from incorrectly segmented vessels are usually unsatisfactory. For this reason, we propose two automatic centerline correction methods in this paper. First, a method based on the local volume comparison and the morphological comparison is presented to remove false centerlines from over-segmented tissues. Second, another method based on the judgment of vessel identity and the gradient-SDF (source distance field) calculation is presented to add missing centerlines of under-segmented vessels. We have validated the proposed centerline correction methods on real CT angiographic datasets of coronary arteries. The quantitative evaluation results show that the proposed methods can effectively correct centerline errors arising from erroneous vessel segmentation in most cases.

  18. Baseline correction for NMR spectroscopic metabolomics data analysis.

    PubMed

    Xi, Yuanxin; Rocke, David M

    2008-07-29

    We propose a statistically principled baseline correction method, derived from a parametric smoothing model. It uses a score function to describe the key features of baseline distortion and constructs an optimal baseline curve to maximize it. The parameters are determined automatically by using LOWESS (locally weighted scatterplot smoothing) regression to estimate the noise variance. We tested this method on 1D NMR spectra with different forms of baseline distortions, and demonstrated that it is effective for both regular 1D NMR spectra and metabolomics spectra with over-crowded peaks. Compared with the automatic baseline correction function in XWINNMR 3.5, the penalized smoothing method provides more accurate baseline correction for high-signal density metabolomics spectra.
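As a generic illustration of baseline estimation (iterative polynomial clipping, a simple standard approach, not the penalized-smoothing score-function method the paper proposes):

```python
import numpy as np

def estimate_baseline(spectrum, degree=3, n_iter=20):
    """Iteratively fit a low-order polynomial, clipping points above the
    fit so that peaks stop pulling the baseline estimate upward."""
    x = np.arange(spectrum.size)
    y = spectrum.copy()
    for _ in range(n_iter):
        fit = np.polyval(np.polyfit(x, y, degree), x)
        y = np.minimum(y, fit)   # suppress peak contributions
    return np.polyval(np.polyfit(x, y, degree), x)
```

Subtracting the estimated baseline from the spectrum leaves the peaks on an approximately flat background; the over-crowded metabolomics case is harder precisely because little of the spectrum is peak-free, which motivates the paper's statistically principled alternative.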

  19. Baseline Correction for NMR Spectroscopic Metabolomics Data Analysis

    PubMed Central

    Xi, Yuanxin; Rocke, David M

    2008-01-01

    Background We propose a statistically principled baseline correction method, derived from a parametric smoothing model. It uses a score function to describe the key features of baseline distortion and constructs an optimal baseline curve to maximize it. The parameters are determined automatically by using LOWESS (locally weighted scatterplot smoothing) regression to estimate the noise variance. Results We tested this method on 1D NMR spectra with different forms of baseline distortions, and demonstrated that it is effective for both regular 1D NMR spectra and metabolomics spectra with over-crowded peaks. Conclusion Compared with the automatic baseline correction function in XWINNMR 3.5, the penalized smoothing method provides more accurate baseline correction for high-signal density metabolomics spectra. PMID:18664284

  20. Correcting biases in ICOADS sea surface temperature measurements

    NASA Astrophysics Data System (ADS)

    Chan, D.; Huybers, P. J.

    2016-12-01

    Sea-surface temperature (SSTs) estimates based on the International Comprehensive Ocean-Atmosphere Data Set (ICOADS) combine records across various measurement types that must be corrected for biases. For example, bucket measurements are known to be colder relative to engine room intake (ERI) measurements. Here, to further examine biases amongst ERI, bucket, and buoy measurements, we examine the data according to groups of ships that can be distinctly identified within the ICOADS dataset. Bias corrections are estimated according to classes of data for each season based on collocated records using a multiple linear regression approach. Accurate inter-comparison also benefits from accounting for diurnal variability for each measurement type. Results are compared with existing bias correction estimates.
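The collocated-regression idea can be illustrated on synthetic data. Here the design matrix is just an intercept column, where the paper's multiple regression would add predictors such as season or region; all numbers below are invented:

```python
import numpy as np

# hypothetical collocated SST records: each row pairs a bucket and an ERI
# reading of the same "true" SST; bucket readings run cold by `true_bias`
rng = np.random.default_rng(0)
true_sst = rng.uniform(5, 25, size=500)
true_bias = -0.3                                   # bucket cold bias, deg C
bucket = true_sst + true_bias + rng.normal(0, 0.1, size=500)
eri = true_sst + rng.normal(0, 0.1, size=500)

# regress the collocated difference on an intercept-only design matrix;
# extra columns (season, region, diurnal terms) would turn this into the
# multiple linear regression the abstract describes
X = np.ones((500, 1))
beta, *_ = np.linalg.lstsq(X, bucket - eri, rcond=None)
estimated_bias = beta[0]
```

The recovered intercept is the bucket-minus-ERI offset; applying it as a correction brings the two measurement classes onto a common scale before they are merged into a single SST record.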

  1. Tilt correction method of text image based on wavelet pyramid

    NASA Astrophysics Data System (ADS)

    Yu, Mingyang; Zhu, Qiguo

    2017-04-01

Text images captured by camera may be tilted and distorted, which is unfavorable for document character recognition. Therefore, a method of text image tilt correction based on a wavelet pyramid is proposed in this paper. The first step is to convert the text images captured by cameras to binary images. After binarization, the images are layered by wavelet transform to achieve noise reduction, enhancement, and compression of the image. Afterwards, edges are detected with the Canny operator, and straight lines are extracted by Radon transform. In the final step, the method calculates the intersections of the straight lines and obtains the corrected text image from these intersection points via a perspective transformation. The experimental results show this method can correct text images accurately.
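The line-extraction and rotation steps can be caricatured with a straight-line fit standing in for the Radon transform; the wavelet pyramid, binarization, and perspective steps are omitted, and the synthetic "text baseline" below is invented:

```python
import numpy as np

def estimate_tilt_deg(ys, xs):
    """Fit a straight line to foreground pixel coordinates and return its
    angle to the horizontal; a stand-in for the Radon-transform step."""
    slope, _ = np.polyfit(xs, ys, 1)
    return np.degrees(np.arctan(slope))

def rotation_matrix(angle_deg):
    a = np.radians(angle_deg)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

# synthetic text baseline tilted by 5 degrees
xs = np.arange(0, 200, dtype=float)
ys = np.tan(np.radians(5.0)) * xs
tilt = estimate_tilt_deg(ys, xs)
# rotating the points by -tilt flattens the baseline
pts = rotation_matrix(-tilt) @ np.vstack([xs, ys])
```

In the paper, lines found by the Radon transform play the role of the fitted baseline, and a full perspective transform (rather than a pure rotation) handles the camera-induced distortion.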

  2. Correction techniques for the truncation of the source field in acoustic analogies.

    PubMed

    Martínez-Lera, Paula; Schram, Christophe

    2008-12-01

    The truncation of the source field may induce large overpredictions in the acoustic field computed through acoustic analogies. A comparative study of different correction approaches proposed in the literature is carried out, considering three different techniques: correction terms based on a convection assumption, use of model extensions, and windowing techniques. It is shown that convection-based correction terms need to take into account noncompactness effects of the source field in order to yield accurate results. A modified correction term that includes these effects is derived, and its equivalence to the method of model extensions in the case of purely convected flows is highlighted. Moreover, the performance of different windowing techniques is investigated.

  3. Surface corrections to the moment of inertia and shell structure in finite Fermi systems

    NASA Astrophysics Data System (ADS)

    Gorpinchenko, D. V.; Magner, A. G.; Bartel, J.; Blocki, J. P.

    2016-02-01

    The moment of inertia for nuclear collective rotations is derived within a semiclassical approach based on the Inglis cranking and Strutinsky shell-correction methods, improved by surface corrections within the nonperturbative periodic-orbit theory. For adiabatic (statistical-equilibrium) rotations it was approximated by the generalized rigid-body moment of inertia accounting for the shell corrections of the particle density. An improved phase-space trace formula allows to express the shell components of the moment of inertia more accurately in terms of the free-energy shell correction. Evaluating their ratio within the extended Thomas-Fermi effective-surface approximation, one finds good agreement with the quantum calculations.

  4. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... COMMISSION Accurate NDE & Inspection, LLC; Confirmatory Order In the Matter of Accurate NDE & Docket: 150... request ADR with the NRC in an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28, 2011...

  5. Accurate radiocarbon age estimation using "early" measurements: a new approach to reconstructing the Paleolithic absolute chronology

    NASA Astrophysics Data System (ADS)

    Omori, Takayuki; Sano, Katsuhiro; Yoneda, Minoru

    2014-05-01

This paper presents new correction approaches for "early" radiocarbon ages to reconstruct the Paleolithic absolute chronology. In order to discuss the time-space distribution of the replacement of archaic humans, including Neanderthals in Europe, by modern humans, a massive dataset covering a wide area is needed. Today, several radiocarbon databases focused on the Paleolithic have been published and used for chronological studies. From the viewpoint of current analytical technology, however, these databases contain unreliable results that make interpretation of radiocarbon dates difficult. Most of these unreliable ages had been published in the early days of radiocarbon analysis. In recent years, new analytical methods to determine highly accurate dates have been developed. Ultrafiltration and ABOx-SC methods, as new sample pretreatments for bone and charcoal respectively, have attracted attention because they can remove imperceptible contaminants and yield reliably accurate ages. In order to evaluate the reliability of "early" data, we investigated the differences and variabilities of radiocarbon ages under different pretreatments, and attempted to develop correction functions for the assessment of reliability. The corrected ages are expected to be more reliable and can be applied to chronological research together with recently measured ages. Here, we introduce the methodological frameworks and archaeological applications.

  6. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yanq, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

To develop a more precise and accurate method, and to identify a procedure for measuring whether an acupoint has been correctly located. On the face, we used acupoint locations from different acupuncture experts and obtained the most precise and accurate values of acupoint location based on a consistency information fusion algorithm, through a virtual simulation of the facial orientation coordinate system. Because of inconsistencies in each acupuncture expert's original data, systematic error affects the general weight calculation. First, we corrected each expert's systematic acupoint-location error to obtain a rational quantification of each expert's consistent support degree for acupuncture and moxibustion acupoint location, yielding pointwise variable-precision fusion results and reducing every expert's acupoint-location fusion error to pointwise variable precision. Then, we used the measured characteristics of the different experts' acupoint locations more effectively, improving the utilization efficiency of the measurement information and the precision and accuracy of acupoint location. Based on applying the consistency matrix pointwise fusion method to the experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.

  7. Benchmark data base for accurate van der Waals interaction in inorganic fragments

    NASA Astrophysics Data System (ADS)

    Brndiar, Jan; Stich, Ivan

    2012-02-01

A range of inorganic materials, such as Sb, As, P, S, and Se, are built from van der Waals (vdW) interacting units forming the crystals, which neither the standard DFT-GGA description nor cheap quantum chemistry methods, such as MP2, describe correctly. We use this data base, for which we have performed ultra-accurate CCSD(T) calculations in the complete basis set limit, to test alternative approximate theories, such as Grimme [1], Langreth-Lundqvist [2], and Tkachenko-Scheffler [3]. While none of these theories gives an entirely correct description, Grimme consistently provides more accurate results than Langreth-Lundqvist, which tends to overestimate the distances and underestimate the interaction energies for this set of systems. In contrast, Tkachenko-Scheffler appears to yield a surprisingly accurate, computationally cheap, and convenient description, applicable also to systems with appreciable charge transfer. [1] S. Grimme, J. Comp. Chem. 27, 1787 (2006); [2] K. Lee et al., Phys. Rev. B 82, 081101(R) (2010); [3] Tkachenko and M. Scheffler, Phys. Rev. Lett. 102, 073005 (2009).
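As an illustration of the simplest of the tested schemes, Grimme's pairwise damped-dispersion form [1] for a single atom pair can be sketched as follows; the C6, r0, and s6 values used in the test are placeholders, not parameters for any specific element pair:

```python
import numpy as np

def grimme_d2_pair(r, c6, r0, s6=0.75, d=20.0):
    """Damped pairwise dispersion energy in Grimme's D2 form:
    E = -s6 * c6 / r**6 * f_damp(r), with
    f_damp = 1 / (1 + exp(-d * (r / r0 - 1)))."""
    f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0 - 1.0)))
    return -s6 * c6 / r**6 * f_damp
```

The damping function switches the -C6/r^6 tail off at short range, where the underlying density functional already describes the interaction; the total correction is the sum of this term over all atom pairs.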

  8. An accurate and practical method for inference of weak gravitational lensing from galaxy images

    NASA Astrophysics Data System (ADS)

    Bernstein, Gary M.; Armstrong, Robert; Krawiec, Christina; March, Marisa C.

    2016-07-01

We demonstrate highly accurate recovery of weak gravitational lensing shear using an implementation of the Bayesian Fourier Domain (BFD) method proposed by Bernstein & Armstrong, extended to correct for selection biases. The BFD formalism is rigorously correct for Nyquist-sampled, background-limited, uncrowded images of background galaxies. BFD does not assign shapes to galaxies, instead compressing the pixel data D into a vector of moments M, such that we have an analytic expression for the probability P(M|g) of obtaining the observations with gravitational lensing distortion g along the line of sight. We implement an algorithm for conducting BFD's integrations over the population of unlensed source galaxies which measures ≈10 galaxies s^-1 core^-1 with good scaling properties. Initial tests of this code on ≈10^9 simulated lensed galaxy images recover the simulated shear to a fractional accuracy of m = (2.1 ± 0.4) × 10^-3, substantially more accurate than has been demonstrated previously for any generally applicable method. Deep sky exposures generate a sufficiently accurate approximation to the noiseless, unlensed galaxy population distribution assumed as input to BFD. Potential extensions of the method include simultaneous measurement of magnification and shear; multiple-exposure, multiband observations; and joint inference of photometric redshifts and lensing tomography.

  9. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    NASA Astrophysics Data System (ADS)

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

for inclusion in standard atmospheric and planetary spectroscopic databases. The methods involved in computing the ab initio potential energy and dipole moment surfaces involved minor corrections to the equilibrium S-O distance, which produced a good agreement with experimentally determined rotational energies. However, the purely ab initio method was not able to reproduce an equally spectroscopically accurate representation of vibrational motion. We therefore present an empirical refinement to this original, ab initio potential surface, based on the experimental data available. This will not only be used to reproduce the room-temperature spectrum to a greater degree of accuracy, but is essential in the production of the larger, accurate line list necessary for the simulation of higher-temperature spectra: we aim for coverage suitable for T ≤ 800 K. Our preliminary studies on SO3 have also shown it to exhibit an interesting "forbidden" rotational spectrum and "clustering" of rotational states; to our knowledge this phenomenon has not been observed in other examples of trigonal planar molecules and is also an investigative avenue we wish to pursue. Finally, the IR absorption bands for SO2 and SO3 exhibit a strong overlap, and the inclusion of SO2 as a complement to our studies is something that we will be interested in doing in the near future.

  10. Surgical options for correction of refractive error following cataract surgery.

    PubMed

    Abdelghany, Ahmed A; Alio, Jorge L

    2014-01-01

    Refractive errors are frequently found following cataract surgery and refractive lens exchange. Accurate biometric analysis, selection and calculation of the adequate intraocular lens (IOL) and modern techniques for cataract surgery all contribute to achieving the goal of cataract surgery as a refractive procedure with no refractive error. However, in spite of all these advances, residual refractive error still occasionally occurs after cataract surgery and laser in situ keratomileusis (LASIK) can be considered the most accurate method for its correction. Lens-based procedures, such as IOL exchange or piggyback lens implantation are also possible alternatives especially in cases with extreme ametropia, corneal abnormalities, or in situations where excimer laser is unavailable. In our review, we have found that piggyback IOL is safer and more accurate than IOL exchange. Our aim is to provide a review of the recent literature regarding target refraction and residual refractive error in cataract surgery.

11. Correction of Doppler Radar Data for Aircraft Motion Using Surface Measurements and Recursive Least-Squares Estimation

    NASA Technical Reports Server (NTRS)

    Durden, S.; Haddad, Z.

    1998-01-01

Observations of Doppler velocity of hydrometeors from airborne Doppler weather radars normally contain a component due to the aircraft motion. Accurate hydrometeor velocity measurements thus require correction by subtracting this velocity from the observed velocity.
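The correction amounts to projecting the aircraft velocity onto the radar beam direction and subtracting. A sketch under an assumed east-north-down convention, with hypothetical geometry and numbers (the paper's surface-measurement and recursive least-squares refinements are not shown):

```python
import numpy as np

def platform_motion_component(v_aircraft, beam_unit):
    """Project the aircraft velocity vector onto the antenna beam direction."""
    return np.dot(v_aircraft, beam_unit)

def correct_doppler(v_observed, v_aircraft, beam_unit):
    """Remove the platform-motion contribution from the observed Doppler velocity."""
    return v_observed - platform_motion_component(v_aircraft, beam_unit)

# aircraft flying east at 120 m/s; beam pointing 30 degrees off nadir in the
# east-down plane (hypothetical geometry, east-north-down components)
v_ac = np.array([120.0, 0.0, 0.0])
beam = np.array([np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))])
v_obs = 65.0                                   # observed Doppler (m/s)
v_hydro = correct_doppler(v_obs, v_ac, beam)   # hydrometeor motion along beam
```

In practice the beam direction must itself be corrected for aircraft attitude (pitch, roll, drift) before the projection, which is where residual pointing errors enter.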

  12. 75 FR 9100 - Proxy Disclosure Enhancements; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-01

    ... COMMISSION 17 CFR Part 249 RIN 3235-AK28 Proxy Disclosure Enhancements; Correction AGENCY: Securities and Exchange Commission. ACTION: Final rule; correction. SUMMARY: We are making technical corrections to..., we are making three corrections to Form 8-K. We are correcting Form 8-K to add an instruction,...

  13. Fully 3D refraction correction dosimetry system.

    PubMed

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller diameter dosimeters are scanned with dry air scanning by using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners as it is not possible to identify refracted rays in the sinogram space.

  14. Fully 3D refraction correction dosimetry system

    NASA Astrophysics Data System (ADS)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    medium is 71.8%, an increase of 6.4% compared to that achieved using the conventional ART algorithm. Smaller-diameter dosimeters are scanned in dry air using a wide-angle lens that collects refracted light. The images reconstructed using cone beam geometry are seen to deteriorate in some planes, as those regions are not scanned. Refraction correction is important and needs to be taken into consideration to achieve quantitatively accurate dose reconstructions. Refraction modeling is crucial in array-based scanners, as it is not possible to identify refracted rays in the sinogram space.

  15. Accurate Event-Driven Motion Compensation in High-Resolution PET Incorporating Scattered and Random Events

    PubMed Central

    Dinelle, Katie; Cheng, Ju-Chieh; Shilov, Mikhail A.; Segars, William P.; Lidstone, Sarah C.; Blinder, Stephan; Rousset, Olivier G.; Vajihollahi, Hamid; Tsui, Benjamin M. W.; Wong, Dean F.; Sossi, Vesna

    2010-01-01

    With continuing improvements in spatial resolution of positron emission tomography (PET) scanners, small patient movements during PET imaging become a significant source of resolution degradation. This work develops and investigates a comprehensive formalism for accurate motion-compensated reconstruction which is at the same time very feasible in the context of high-resolution PET. In particular, this paper proposes an effective method to incorporate the presence of scattered and random coincidences in the context of motion (which is similarly applicable to various other motion correction schemes). The overall reconstruction framework takes into consideration missing projection data which are not detected due to motion, and additionally incorporates information from all detected events, including those which fall outside the field-of-view following motion correction. The proposed approach has been extensively validated using phantom experiments as well as realistic simulations of a new mathematical brain phantom developed in this work, and the results for a dynamic patient study are also presented. PMID:18672420
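    The core of event-driven motion compensation is mapping each detected event back into a common reference frame by inverting the measured rigid-body motion. A simplified sketch in 2-D (the function and the 2-D restriction are illustrative assumptions; the paper's method operates on full 3-D lines of response with scatter and randoms handled in the reconstruction):

    ```python
    import math

    def undo_rigid_motion(point, angle_deg, translation):
        """Map a detected event coordinate back to the reference frame by
        inverting a measured 2-D rigid motion.

        Forward motion model: p' = R p + t  (rotation R by angle_deg, then
        translation t). The inverse is p = R^T (p' - t).
        """
        a = math.radians(angle_deg)
        # Undo the translation first.
        x = point[0] - translation[0]
        y = point[1] - translation[1]
        # Apply the transpose (inverse) rotation.
        return (math.cos(a) * x + math.sin(a) * y,
                -math.sin(a) * x + math.cos(a) * y)
    ```

    Applying the inverse transform event by event, rather than frame by frame, is what allows the correction to track arbitrary head movement at the timing resolution of the motion-tracking system.
    
    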

  16. Reliable Spectroscopic Constants for CCH-, NH2- and Their Isotopomers from an Accurate Potential Energy Function

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Dateo, Christopher E.; Schwenke, David W.; Chaban, Galina M.

    2005-01-01

    Accurate quartic force fields have been determined for the CCH- and NH2- molecular anions using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, CCSD(T). Very large one-particle basis sets have been used, including diffuse functions and up through g-type functions. Correlation of the nitrogen and carbon core electrons has been included, as well as other "small" effects, such as the diagonal Born-Oppenheimer correction, basis set extrapolation, and corrections for higher-order correlation effects and scalar relativistic effects. Fundamental vibrational frequencies have been computed using standard second-order perturbation theory as well as variational methods. Comparison with the available experimental data is presented and discussed. The implications of our research for the astronomical observation of molecular anions will be discussed.
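    For context, in standard second-order vibrational perturbation theory (VPT2) the fundamental frequencies mentioned above follow from the harmonic frequencies and the anharmonic constants derived from the quartic force field. This is the textbook relation, not a formula quoted from the paper:

    ```latex
    % Anharmonic vibrational term values and the resulting fundamentals:
    E(v) = \sum_i \omega_i \left(v_i + \tfrac{1}{2}\right)
         + \sum_{i \le j} x_{ij} \left(v_i + \tfrac{1}{2}\right)\left(v_j + \tfrac{1}{2}\right),
    \qquad
    \nu_i = \omega_i + 2 x_{ii} + \tfrac{1}{2} \sum_{j \ne i} x_{ij}
    ```

    Here $\omega_i$ are the harmonic frequencies and $x_{ij}$ the anharmonicity constants obtained from the cubic and quartic force constants.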

  17. Toward an Accurate Estimate of the Exfoliation Energy of Black Phosphorus: A Periodic Quantum Chemical Approach.

    PubMed

    Sansone, Giuseppe; Maschio, Lorenzo; Usvyat, Denis; Schütz, Martin; Karttunen, Antti

    2016-01-07

    The black phosphorus (black-P) crystal is formed of covalently bound layers of phosphorene stacked together by weak van der Waals interactions. An experimental measurement of the exfoliation energy of black-P is not available presently, making theoretical studies the most important source of information for the optimization of phosphorene production. Here, we provide an accurate estimate of the exfoliation energy of black-P on the basis of multilevel quantum chemical calculations, which include the periodic local Møller-Plesset perturbation theory of second order, augmented by higher-order corrections, which are evaluated with finite clusters mimicking the crystal. Very similar results are also obtained by density functional theory with the D3-version of Grimme's empirical dispersion correction. Our estimate of the exfoliation energy for black-P of -151 meV/atom is substantially larger than that of graphite, suggesting the need for different strategies to generate isolated layers for these two systems.

  18. Reliable Spectroscopic Constants for CCH-, NH2- and Their Isotopomers from an Accurate Potential Energy Function

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Dateo, Christopher E.; Schwenke, David W.; Chaban, Galina M.

    2005-01-01

    Accurate quartic force fields have been determined for the CCH- and NH2- molecular anions using the singles and doubles coupled-cluster method that includes a perturbational estimate of the effects of connected triple excitations, CCSD(T). Very large one-particle basis sets have been used, including diffuse functions and up through g-type functions. Correlation of the nitrogen and carbon core electrons has been included, as well as other "small" effects, such as the diagonal Born-Oppenheimer correction, basis set extrapolation, and corrections for higher-order correlation effects and scalar relativistic effects. Fundamental vibrational frequencies have been computed using standard second-order perturbation theory as well as variational methods. Comparison with the available experimental data is presented and discussed. The implications of our research for the astronomical observation of molecular anions will be discussed.

  19. Correction

    NASA Astrophysics Data System (ADS)

    2014-08-01

    In the About AGU article "AGU Union Fellows elected for 2014," published in the 29 July 2014 issue of Eos (95(30), 272, doi:10.1002/2014EO300008), a joint research group affiliation was inadvertently omitted for one Fellow. Antje Boetius is with the Alfred Wegener Institute, Bremerhaven, Germany, and the Max Planck Institute for Marine Microbiology, Bremen, Germany.

  20. Correction

    NASA Astrophysics Data System (ADS)

    1999-11-01

    Synsedimentary deformation in the Jurassic of southeastern Utah—A case of impact shaking? COMMENT Geology, v. 27, p. 661 (July 1999) The sentence on p. 661, first column, second paragraph, line one, should read: The 1600 m of Pennsylvanian Paradox Formation is 75-90% salt in Arches National Park. The sentence on p. 661, second column, third paragraph, line seven, should read: This high-pressure hydrothermal solution created the clastic dikes, chert nodules from reprecipitated siliceous cement that have been called “siliceous impactites” (Kriens et al., 1997), and much of the present structure at Upheaval Dome by further faulting.