Sample records for simple correction factor

  1. Simple, Fast and Effective Correction for Irradiance Spatial Nonuniformity in Measurement of IVs of Large Area Cells at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Tom

    The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. Irradiance spatial nonuniformity can be a large contributor to errors and uncertainty in Jsc, Imax, Pmax, and efficiency. Correcting for this nonuniformity through its precise and frequent measurement can be very time consuming. This paper explains a simple, fast, and effective method, based on bicubic interpolation, for determining and correcting for spatial nonuniformity, and verifies the method's efficacy.
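
    A minimal sketch of the idea (not NREL's code): fit a bicubic surface through a coarse measured irradiance grid and normalize the measured Jsc by the mean relative irradiance over the cell aperture. The grid values, aperture size, and Jsc reading below are hypothetical stand-ins.

    ```python
    import numpy as np
    from scipy.interpolate import RectBivariateSpline

    # Coarse 5 x 5 relative-irradiance map; values are synthetic stand-ins.
    x = np.linspace(0.0, 10.0, 5)                    # cm
    y = np.linspace(0.0, 10.0, 5)
    X, Y = np.meshgrid(x, y, indexing="ij")
    irr = 1.0 + 0.03 * np.sin(0.3 * X) * np.cos(0.2 * Y)

    # Bicubic surface through the measured points (kx = ky = 3).
    spline = RectBivariateSpline(x, y, irr, kx=3, ky=3)

    # Mean relative irradiance over a hypothetical 4 cm x 4 cm cell aperture.
    xf = np.linspace(3.0, 7.0, 200)
    yf = np.linspace(3.0, 7.0, 200)
    mean_irr = spline(xf, yf).mean()

    jsc_measured = 35.2                              # mA/cm^2, hypothetical reading
    jsc_corrected = jsc_measured / mean_irr          # normalize to uniform irradiance
    print(f"nonuniformity factor = {mean_irr:.4f}; corrected Jsc = {jsc_corrected:.2f} mA/cm^2")
    ```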

  2. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
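
    The slope correction can be illustrated with a toy simulation. This is a generic regression-dilution sketch, not the authors' software; the reliability ratio is estimated, as in related work, by the slope of the second measurement on the first, and all data are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Main study: true risk factor t, observed with random error; outcome y.
    n = 500
    t = rng.normal(0.0, 1.0, n)
    x_obs = t + rng.normal(0.0, 0.6, n)        # error-prone measurement
    y = 0.8 * t + rng.normal(0.0, 0.5, n)

    # Naive slope from the error-prone predictor is biased towards zero.
    beta_obs = np.cov(x_obs, y, ddof=1)[0, 1] / np.var(x_obs, ddof=1)

    # Reliability study: a repeated measurement on a separate subsample.
    m = 150
    t_r = rng.normal(0.0, 1.0, m)
    r1 = t_r + rng.normal(0.0, 0.6, m)
    r2 = t_r + rng.normal(0.0, 0.6, m)

    # Reliability ratio: slope of the second measurement on the first.
    lam = np.cov(r1, r2, ddof=1)[0, 1] / np.var(r1, ddof=1)

    beta_corrected = beta_obs / lam            # dilution-corrected slope
    print(f"naive {beta_obs:.3f}, lambda {lam:.3f}, corrected {beta_corrected:.3f}")
    ```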

  3. A Novel Simple Phantom for Verifying the Dose of Radiation Therapy

    PubMed Central

    Lee, J. H.; Chang, L. T.; Shiau, A. C.; Chen, C. W.; Liao, Y. J.; Li, W. J.; Lee, M. S.; Hsu, S. M.

    2015-01-01

    A standard protocol of dosimetric measurements is used by the organizations responsible for verifying that the doses delivered in radiation-therapy institutions are within authorized limits. This study evaluated a self-designed simple auditing phantom for use in verifying the dose of radiation therapy; the phantom design, dose audit system, and clinical tests are described. Thermoluminescent dosimeters (TLDs) were used as postal dosimeters, and mailable phantoms were produced for use in postal audits. Correction factors are important for converting TLD readout values from phantoms into the absorbed dose in water. The phantom scatter correction factor was used to quantify the difference in the scattered dose between a solid water phantom and homemade phantoms; its value ranged from 1.031 to 1.084. The energy-dependence correction factor was used to compare the TLD readout of the unit dose irradiated by audit beam energies with 60Co in the solid water phantom; its value ranged from 0.99 to 1.01. The setup-condition factor was used to correct for differences in dose-output calibration conditions. Clinical tests of the device calibrating the dose output revealed that the dose deviation was within 3%. Therefore, our homemade phantoms and dosimetric system can be applied for accurately verifying the doses applied in radiation-therapy institutions. PMID:25883980

  4. Student Understanding of the Boltzmann Factor

    ERIC Educational Resources Information Center

    Smith, Trevor I.; Mountcastle, Donald B.; Thompson, John R.

    2015-01-01

    We present results of our investigation into student understanding of the physical significance and utility of the Boltzmann factor in several simple models. We identify various justifications, both correct and incorrect, that students use when answering written questions that require application of the Boltzmann factor. Results from written data…

  5. Corrected formula for the polarization of second harmonic plasma emission

    NASA Technical Reports Server (NTRS)

    Melrose, D. B.; Dulk, G. A.; Gary, D. E.

    1980-01-01

    Corrections for the theory of polarization of second harmonic plasma emission are proposed. The nontransversality of the magnetoionic waves was not taken into account correctly and is here corrected. The corrected and uncorrected results are compared for two simple cases of parallel and isotropic distributions of Langmuir waves. It is found that whereas with the uncorrected formula plausible values of the coronal magnetic fields were obtained from the observed polarization of the second harmonic, the present results imply fields which are stronger by a factor of three to four.

  6. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposing step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle-diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole-size in photonic crystal structures was clearly improved.

  7. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity which are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.

  8. Investigation of the ionospheric Faraday rotation for use in orbit corrections

    NASA Technical Reports Server (NTRS)

    Llewellyn, S. K.; Bent, R. B.; Nesterczuk, G.

    1974-01-01

    The possibility of mapping the Faraday factors on a worldwide basis was examined as a simple method of representing the conversion factors for any possible user. However, this does not seem feasible. The complex relationship between the true magnetic coordinates and the geographic latitude, longitude, and azimuth angles eliminates the possibility of setting up some simple tables that would yield worldwide results of sufficient accuracy. Tabular results for specific stations can easily be produced or could be represented in graphic form.

  9. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy.

    PubMed

    Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C

    2010-02-01

    To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6/100 [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.

  10. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet's orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and limits of its applicability are given.

  11. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy

    PubMed Central

    GURKA, MATTHEW J; KUPERMINC, MICHELLE N; BUSBY, MARJORIE G; BENNIS, JACEY A; GROSSBERG, RICHARD I; HOULIHAN, CHRISTINE M; STEVENSON, RICHARD D; HENDERSON, RICHARD C

    2010-01-01

    AIM To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). METHOD Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I–V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. RESULTS Slaughter’s equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat −9.6/100 [SD 6.2]; 95% confidence interval [CI] −11.0 to −8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI −1.0 to 1.3) than existing equations. INTERPRETATION A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP. PMID:19811518

  12. Correction factors for self-selection when evaluating screening programmes.

    PubMed

    Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H

    2016-03-01

    In screening programmes there is recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include Intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All methods are prone to bias through self-selection. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias. Simple calculations revealed that these corrections are actually all identical, and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using as an example the German Neuroblastoma Screening Study, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach, however, in this situation not all required data are available. External data or further assumptions may be required for estimation. © The Author(s) 2015.

  13. Choosing Actions

    PubMed Central

    Rosenbaum, David A.; Chapman, Kate M.; Coelho, Chase J.; Gong, Lanyun; Studenka, Breanna E.

    2013-01-01

    Actions that are chosen have properties that distinguish them from actions that are not. Of the nearly infinite possible actions that can achieve any given task, many of the unchosen actions are irrelevant, incorrect, or inappropriate. Others are relevant, correct, or appropriate but are disfavored for other reasons. Our research focuses on the question of what distinguishes actions that are chosen from actions that are possible but are not. We review studies that use simple preference methods to identify factors that contribute to action choices, especially for object-manipulation tasks. We can determine which factors are especially important through simple behavioral experiments. PMID:23761769

  14. Alternate corrections for estimating actual wetland evapotranspiration from potential evapotranspiration

    USGS Publications Warehouse

    Shoemaker, W. Barclay; Sumner, D.M.

    2006-01-01

    Corrections can be used to estimate actual wetland evapotranspiration (AET) from potential evapotranspiration (PET) as a means to define the hydrology of wetland areas. Many alternate parameterizations for correction coefficients for three PET equations are presented, covering a wide range of possible data-availability scenarios. At nine sites in the wetland Everglades of south Florida, USA, the relatively complex PET Penman equation was corrected to daily total AET with smaller standard errors than the PET simple and Priestley-Taylor equations. The simpler equations, however, required less data (and thus less funding for instrumentation), with the possibility of being corrected to AET with slightly larger, comparable, or even smaller standard errors. Air temperature generally corrected PET simple most effectively to wetland AET, while wetland stage and humidity generally corrected PET Priestley-Taylor and Penman most effectively to wetland AET. Stage was identified for PET Priestley-Taylor and Penman as the data type with the most correction ability at sites that are dry part of each year or dry part of some years. Finally, although surface water generally was readily available at each monitoring site, AET was not occurring at potential rates, as conceptually expected under well-watered conditions. Apparently, factors other than water availability, such as atmospheric and stomata resistances to vapor transport, also were limiting the PET rate. © 2006, The Society of Wetland Scientists.

  15. Technical note: Design flood under hydrological uncertainty

    NASA Astrophysics Data System (ADS)

    Botto, Anna; Ganora, Daniele; Claps, Pierluigi; Laio, Francesco

    2017-07-01

    Planning and verification of hydraulic infrastructures require a design estimate of hydrologic variables, usually provided by frequency analysis, and neglecting hydrologic uncertainty. However, when hydrologic uncertainty is accounted for, the design flood value for a specific return period is no longer a unique value, but is represented by a distribution of values. As a consequence, the design flood is no longer univocally defined, making the design process undetermined. The Uncertainty Compliant Design Flood Estimation (UNCODE) procedure is a novel approach that, starting from a range of possible design flood estimates obtained in uncertain conditions, converges to a single design value. This is obtained through a cost-benefit criterion with additional constraints that is numerically solved in a simulation framework. This paper contributes to promoting a practical use of the UNCODE procedure without resorting to numerical computation. A modified procedure is proposed by using a correction coefficient that modifies the standard (i.e., uncertainty-free) design value on the basis of sample length and return period only. The procedure is robust and parsimonious, as it does not require additional parameters with respect to the traditional uncertainty-free analysis. Simple equations to compute the correction term are provided for a number of probability distributions commonly used to represent the flood frequency curve. The UNCODE procedure, when coupled with this simple correction factor, provides a robust way to manage the hydrologic uncertainty and to go beyond the use of traditional safety factors. With all the other parameters being equal, an increase in the sample length reduces the correction factor, and thus the construction costs, while still keeping the same safety level.

  16. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by 1) detector-specific nonstandard field correction factors as described by the formalism of Alfonso et al., or 2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple/fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector have been found to have corrections under 1% over a broad range of field modulations (0 – 0.12 for MF and 0 – 0.5 for UI). Other detectors have been shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  17. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors as the concentrations were always reported with the passing curves which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied into other quantitative analysis techniques using calibration curves with weighted least-squares regression algorithm.
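
    A sketch of how the three candidate weighting factors enter a weighted least-squares calibration fit; the standards and responses below are made-up numbers, not assay data from the paper.

    ```python
    import numpy as np

    # Hypothetical calibration standards (concentration x, instrument response y).
    x = np.array([1, 2, 5, 10, 50, 100, 500, 1000], dtype=float)
    y = np.array([0.011, 0.019, 0.052, 0.098, 0.51, 0.97, 5.2, 9.8])

    def weighted_linear_fit(x, y, w):
        """Weighted least squares for y = a + b*x, minimizing sum w*(y - a - b*x)**2."""
        W = np.sum(w)
        xbar = np.sum(w * x) / W
        ybar = np.sum(w * y) / W
        b = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar) ** 2)
        a = ybar - b * xbar
        return a, b

    for name, w in [("1", np.ones_like(x)), ("1/x", 1.0 / x), ("1/x^2", 1.0 / x**2)]:
        a, b = weighted_linear_fit(x, y, w)
        # Back-calculated concentration at the lowest standard shows how the
        # weighting controls relative accuracy at the low end of the curve.
        rel_err = (y[0] - a) / b / x[0] - 1.0
        print(f"w = {name:5s}: slope {b:.5f}, intercept {a:+.5f}, low-end error {rel_err:+.2%}")
    ```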

  18. A simple model for the critical mass of a nuclear weapon

    NASA Astrophysics Data System (ADS)

    Reed, B. Cameron

    2018-07-01

    A probability-based model for estimating the critical mass of a fissile isotope is developed. The model requires introducing some concepts from nuclear physics and incorporating some approximations, but gives results correct to about a factor of two for uranium-235 and plutonium-239.

  19. A simple enrichment correction factor for improving erosion estimation by rare earth oxide tracers

    USDA-ARS?s Scientific Manuscript database

    Spatially distributed soil erosion data are needed to better understand soil erosion processes and validate distributed erosion models. Rare earth element (REE) oxides were used to generate spatial erosion data. However, a general concern on the accuracy of the technique arose due to selective ...

  20. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation, in which the solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of iterations for the Gauss-Seidel iterations is independent of the mesh size, (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution directly computed from the original eikonal equation, especially for point sources.
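
    In standard notation (ours, not necessarily the authors'), the factorization reads: the traveltime T is written as a known factor T₀ times a correction factor τ, and the eikonal equation becomes an equation for τ.

    ```latex
    % Factored eikonal equation: T = T_0 \tau, with s the slowness field.
    \begin{align}
      |\nabla T(\mathbf{x})| &= s(\mathbf{x}), \qquad T = T_0\,\tau, \\
      |\nabla T|^2 &= \tau^2\,|\nabla T_0|^2
        + 2\,T_0\,\tau\,\nabla T_0 \cdot \nabla\tau
        + T_0^2\,|\nabla\tau|^2 = s^2(\mathbf{x}).
    \end{align}
    ```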

  1. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  2. Estimating aquifer transmissivity from specific capacity using MATLAB.

    PubMed

    McLin, Stephen G

    2005-01-01

    Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
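
    The paper's MATLAB program is not reproduced here; the following sketch shows only the core fixed-point idea under the Cooper-Jacob approximation, without the paper's partial-penetration and well-efficiency corrections, and with hypothetical test values.

    ```python
    import math

    def transmissivity_from_specific_capacity(Q, s, t, r, S, tol=1e-8):
        """Fixed-point solution of the Cooper-Jacob relation
        s = (Q / (4*pi*T)) * ln(2.25*T*t / (r**2 * S)) for T.
        Units must be consistent (e.g., m and s)."""
        T = Q / (4.0 * math.pi * s)          # crude starting guess
        for _ in range(100):
            T_new = (Q / (4.0 * math.pi * s)) * math.log(2.25 * T * t / (r * r * S))
            if abs(T_new - T) < tol:
                return T_new
            T = T_new
        return T

    # Hypothetical well test: 0.01 m^3/s pumped, 5 m drawdown after 1 day,
    # 0.1 m well radius, storage coefficient 1e-4.
    T = transmissivity_from_specific_capacity(Q=0.01, s=5.0, t=86400.0, r=0.1, S=1e-4)
    print(f"T ~ {T:.2e} m^2/s")
    ```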

  3. Pediatric and adolescent applications of the Taylor Spatial Frame.

    PubMed

    Paloski, Michael; Taylor, Benjamin C; Iobst, Christopher; Pugh, Kevin J

    2012-06-01

    Limb deformity can occur in the pediatric and adolescent populations from multiple etiologies: congenital, traumatic, posttraumatic sequelae, oncologic, and infection. Correcting these deformities is important for many reasons. Ilizarov popularized external fixation to accomplish this task. Taylor expanded on this by designing an external fixator in 1994 with 6 telescoping struts that can be sequentially manipulated to achieve multiaxial correction of deformity without the need for hinges or operative frame alterations. This frame can be used to correct deformities in children and has shown good anatomic correction with minimal morbidity. The nature of the construct and length of treatment affects psychosocial factors that the surgeon and family must be aware of prior to treatment. An understanding of applications of the Taylor Spatial Frame gives orthopedic surgeons an extra tool to correct simple and complex deformities in pediatric and adolescent patients. Copyright 2012, SLACK Incorporated.

  4. A simple bias correction in linear regression for quantitative trait association under two-tail extreme selection.

    PubMed

    Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C

    2011-09-01

    Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.

  5. Graphical representation of QT rate correction formulae: an aid facilitating the use of a given formula and providing a visual comparison of the impact of different formulae.

    PubMed

    Rowlands, Derek J

    2012-01-01

    The QT interval on the electrocardiogram is an increasingly important measurement, especially in relation to drug action and interaction. The QT interval varies inversely as the heart rate and numerous rate correction formulae have been proposed. It is difficult to compare the effect of applying different formulae at different heart rates and for different measured QT intervals. A simple graphical display of the results from different formulae is proposed. This display is dependent on the concept of the absolute correction factor. This graphical presentation is useful (a) in comparing the effect of the application of different formulae and (b) in directly reading the correction produced by any individual formula. Copyright © 2012 Elsevier Inc. All rights reserved.
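
    As an illustration of the absolute correction factor idea (the amount each formula adds to or subtracts from the measured QT at a given heart rate), a sketch comparing three widely used formulae; the measured QT value is hypothetical and the article's graphical display is not reproduced.

    ```python
    import numpy as np

    # Three common rate-correction formulae (QT and RR both in seconds).
    def bazett(qt, rr):      return qt / np.sqrt(rr)
    def fridericia(qt, rr):  return qt / np.cbrt(rr)
    def framingham(qt, rr):  return qt + 0.154 * (1.0 - rr)

    qt = 0.36                                  # measured QT, s (hypothetical)
    for hr in (50, 60, 80, 100, 120):          # heart rate, beats/min
        rr = 60.0 / hr
        # Absolute correction factor = QTc - QT for each formula, in ms.
        row = {f.__name__: (f(qt, rr) - qt) * 1000 for f in (bazett, fridericia, framingham)}
        print(hr, {k: f"{v:+.0f} ms" for k, v in row.items()})
    ```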

  6. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.

  7. Modified Hitschfeld-Bordan Equations for Attenuation-Corrected Radar Rain Reflectivity: Application to Nonuniform Beamfilling at Off-Nadir Incidence

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Liao, Liang

    2013-01-01

    As shown by Takahashi et al., multiple path attenuation estimates over the field of view of an airborne or spaceborne weather radar are feasible for off-nadir incidence angles. This follows from the fact that the surface reference technique, which provides path attenuation estimates, can be applied to each radar range gate that intersects the surface. This study builds on this result by showing that three of the modified Hitschfeld-Bordan estimates for the attenuation-corrected radar reflectivity factor can be generalized to the case where multiple path attenuation estimates are available, thereby providing a correction to the effects of nonuniform beamfilling. A simple simulation is presented showing some strengths and weaknesses of the approach.

  8. A simple theory of back-surface-field /BSF/ solar cells

    NASA Technical Reports Server (NTRS)

    Von Roos, O.

    1979-01-01

    An earlier calculation of the I-V characteristics of solar cells contains a mistake. The current generated by light within the depletion layer is too large by a factor of 2. When this mistake is corrected, not only are all previous conclusions unchanged, but the agreement with experiment becomes better. Results are presented in graphical form of new computations which not only take account of the factor of 2, but also include more recent data on material parameters.

  9. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.

  10. Proton dose distribution measurements using a MOSFET detector with a simple dose‐weighted correction method for LET effects

    PubMed Central

    Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-01-01

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth‐dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high‐bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L‐shaped bolus. The dose reproducibility, angular dependence and depth‐dose response were evaluated using a 190 MeV proton beam. Depth‐output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose‐weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L‐shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). A thinner oxide layer thickness improved the LET in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors. PACS number: 87.56.‐v

  11. Recurrent aphthous stomatitis: clinical characteristics and associated systemic disorders.

    PubMed

    Rogers, R S

    1997-12-01

    Recurrent aphthous stomatitis (RAS), commonly known as canker sores, has been reported as recurrent oral ulcers, recurrent aphthous ulcers, or simple or complex aphthosis. RAS is the most common inflammatory ulcerative condition of the oral mucosa in North American patients. One of its variants is the most painful condition of the oral mucosa. Recurrent aphthous stomatitis has been the subject of active investigation along multiple lines of research, including epidemiology, immunology, clinical correlations, and therapy. Clinical evaluation of the patient requires correct diagnosis of RAS and classification of the disease based on morphology (MiAU, MjAU, HU) and severity (simple versus complex). The natural history of individual lesions of RAS is important, because it is the benchmark against which treatment benefits are measured. The lesions of RAS are not caused by a single factor but occur in an environment that is permissive for development of lesions. These factors include trauma, smoking, stress, hormonal state, family history, food hypersensitivity and infectious or immunologic factors. The clinician should consider these elements of a multifactorial process leading to the development of lesions of RAS. To properly diagnose and treat a patient with lesions of RAS, the clinician must identify or exclude associated systemic disorders or "correctable causes." Behçet's disease and complex aphthosis variants, such as ulcus vulvae acutum, mouth and genital ulcers with inflamed cartilage (MAGIC) syndrome, fever, aphthosis, pharyngitis, and adenitis (FAPA) syndrome, and cyclic neutropenia, should be considered. The aphthous-like oral ulcerations of patients with human immunodeficiency virus (HIV) disease represent a challenging differential diagnosis. The association of lesions of RAS with hematinic deficiencies and gastrointestinal diseases provides an opportunity to identify a "correctable cause," which, with appropriate treatment, can result in a remission or substantial lessening of disease activity.

  12. Single-stage three-phase boost power factor correction circuit for AC-DC converter

    NASA Astrophysics Data System (ADS)

    Azazi, Haitham Z.; Ahmed, Sayed M.; Lashine, Azza E.

    2018-01-01

    This article presents a single-stage three-phase power factor correction (PFC) circuit for an AC-to-DC converter using a single-switch boost regulator, improving the input power factor (PF), reducing the input current harmonics, and decreasing the number of required active switches. A novel PFC control strategy, characterised by a simple and low-cost control circuit, was adopted to achieve good dynamic performance and unity input PF and to minimise the harmonic content of the input current; it can be applied to low/medium-power converters. Detailed analytical, simulation, and experimental studies were therefore conducted. The effectiveness of the proposed controller algorithm is validated by the simulation results, which were obtained using the MATLAB/SIMULINK environment. The proposed system was built and tested in the laboratory using a DSP-DS1104 digital control board for an inductive load. The results revealed that the total harmonic distortion in the supply current was very low. Finally, good agreement between simulation and experimental results was achieved.

  13. A simple modern correctness condition for a space-based high-performance multiprocessor

    NASA Technical Reports Server (NTRS)

    Probst, David K.; Li, Hon F.

    1992-01-01

    A number of U.S. national programs, including space-based detection of ballistic missile launches, envisage putting significant computing power into space. Given sufficient progress in low-power VLSI, multichip-module packaging and liquid-cooling technologies, we will see design of high-performance multiprocessors for individual satellites. In very high speed implementations, performance depends critically on tolerating large latencies in interprocessor communication; without latency tolerance, performance is limited by the vastly differing time scales in processor and data-memory modules, including interconnect times. The modern approach to tolerating remote-communication cost in scalable, shared-memory multiprocessors is to use a multithreaded architecture, and alter the semantics of shared memory slightly, at the price of forcing the programmer either to reason about program correctness in a relaxed consistency model or to agree to program in a constrained style. The literature on multiprocessor correctness conditions has become increasingly complex, and sometimes confusing, which may hinder its practical application. We propose a simple modern correctness condition for a high-performance, shared-memory multiprocessor; the correctness condition is based on a simple interface between the multiprocessor architecture and the parallel programming system.

  14. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  15. A simple model for deep tissue attenuation correction and large organ analysis of Cerenkov luminescence imaging

    NASA Astrophysics Data System (ADS)

    Habte, Frezghi; Natarajan, Arutselvan; Paik, David S.; Gambhir, Sanjiv S.

    2014-03-01

    Cerenkov luminescence imaging (CLI) is an emerging cost-effective modality that uses conventional small animal optical imaging systems and clinically available radionuclide probes for light emission. CLI has shown good correlation with PET for organs of high uptake such as kidney, spleen, thymus and subcutaneous tumors in mouse models. However, CLI has limitations for deep tissue quantitative imaging since the blue-weighted spectral characteristics of Cerenkov radiation attenuate strongly in mammalian tissue. Large organs such as the liver have also shown higher signal due to the contribution of emission of light from a greater thickness of tissue. In this study, we developed a simple model that estimates the effective tissue attenuation coefficient in order to correct the CLI signal intensity with a priori estimated depth and thickness of specific organs. We used several thin slices of ham to build a phantom with realistic attenuation. We placed radionuclide sources inside the phantom at different tissue depths and imaged it using an IVIS Spectrum (Perkin-Elmer, Waltham, MA, USA) and an Inveon microPET (Preclinical Solutions Siemens, Knoxville, TN). We also performed CLI and PET of mouse models and applied the proposed attenuation model to correct CLI measurements. Using calibration factors obtained from the phantom study that convert the corrected CLI measurements to %ID/g, we obtained an average difference of less than 10% for spleen and less than 35% for liver compared to conventional PET measurements. Hence, the proposed model is capable of correcting the CLI signal to provide measurements comparable with PET data.
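
    A minimal sketch of a depth correction of this general form, assuming simple Beer-Lambert attenuation averaged over the organ thickness; all numeric values are hypothetical placeholders, not the study's fitted coefficients.

    ```python
    import math

    mu_eff = 0.55      # effective tissue attenuation coefficient, 1/mm (assumed)
    depth = 3.0        # a priori organ depth below the surface, mm (assumed)
    thickness = 6.0    # organ thickness along the optical axis, mm (assumed)
    signal = 2.4e5     # measured CLI radiance, arbitrary units (assumed)

    # Mean optical transmission over the organ thickness:
    # (1/L) * integral of exp(-mu*z) dz from depth to depth+L, in closed form.
    mean_T = (math.exp(-mu_eff * depth) - math.exp(-mu_eff * (depth + thickness))) \
             / (mu_eff * thickness)

    corrected = signal / mean_T
    cal = 1.0e-6       # hypothetical phantom-derived calibration factor to %ID/g
    print(f"corrected signal = {corrected:.3e} -> {corrected * cal:.2f} %ID/g")
    ```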

  16. Maximum likelihood estimation of correction for dilution bias in simple linear regression using replicates from subjects with extreme first measurements.

    PubMed

    Berglund, Lars; Garmo, Hans; Lindbäck, Johan; Svärdsudd, Kurt; Zethelius, Björn

    2008-09-30

    The least-squares estimator of the slope in a simple linear regression model is biased towards zero when the predictor is measured with random error. A corrected slope may be estimated by adding data from a reliability study, which comprises a subset of subjects from the main study. The precision of this corrected slope depends on the design of the reliability study and estimator choice. Previous work has assumed that the reliability study constitutes a random sample from the main study. A more efficient design is to use subjects with extreme values on their first measurement. Previously, we published a variance formula for the corrected slope, when the correction factor is the slope in the regression of the second measurement on the first. In this paper we show that both designs improve by maximum likelihood estimation (MLE). The precision gain is explained by the inclusion of data from all subjects for estimation of the predictor's variance and by the use of the second measurement for estimation of the covariance between response and predictor. The gain of MLE enhances with stronger true relationship between response and predictor and with lower precision in the predictor measurements. We present a real data example on the relationship between fasting insulin, a surrogate marker, and true insulin sensitivity measured by a gold-standard euglycaemic insulin clamp, and simulations, where the behavior of profile-likelihood-based confidence intervals is examined. MLE was shown to be a robust estimator for non-normal distributions and efficient for small sample situations. Copyright (c) 2008 John Wiley & Sons, Ltd.

  17. Research on the Application of Fast-steering Mirror in Stellar Interferometer

    NASA Astrophysics Data System (ADS)

    Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.

    2017-07-01

    For a stellar interferometer, the fast-steering mirror (FSM) is widely utilized to correct wavefront tilt caused by atmospheric turbulence and internal instrumental vibration due to its high resolution and fast response frequency. In this study, the non-coplanar error between the FSM and the actuator deflection axis introduced by manufacture, assembly, and adjustment is analyzed. Via a numerical method, the additional optical path difference (OPD) caused by the above factors is studied, and its effects on the tracking accuracy of the stellar interferometer are also discussed. On the other hand, the starlight parallelism between the beams of the two arms is one of the main factors in the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme for starlight parallelism is proposed based on a single array detector. The feasibility of this scheme is demonstrated by laboratory experiment. The results show that, after correction by the fast-steering mirror, the starlight parallelism preliminarily meets the requirements of the stellar interferometer for wavefront tilt.

  18. Simple and Efficient Technique for Correction of Unilateral Scissor Bite Using Straight Wire.

    PubMed

    Dolas, Siddhesh Gajanan; Chitko, Shrikant Shrinivas; Kerudi, Veerendra Virupaxappa; Patil, Harshal Ashok; Bonde, Prasad Vasudeo

    2016-03-01

    Unilateral scissor bite is a relatively rare malocclusion. However, its correction is often difficult and a challenge for the clinician. This article presents a simple and efficient technique for the correction of severe unilateral scissor bite in a 14-year-old boy, using 0.020 S.S. A. J. Wilcock wire (premium plus) out of the spool, with minimal adjustments, placed in the mandibular arch. After about six weeks, a good amount of correction was seen in the lower arch and the lower molar had been relieved of scissor bite.

  19. Real-Gas Correction Factors for Hypersonic Flow Parameters in Helium

    NASA Technical Reports Server (NTRS)

    Erickson, Wayne D.

    1960-01-01

    The real-gas hypersonic flow parameters for helium have been calculated for stagnation temperatures from 0 F to 600 F and stagnation pressures up to 6,000 pounds per square inch absolute. The results of these calculations are presented in the form of simple correction factors which must be applied to the tabulated ideal-gas parameters. It has been shown that the deviations from the ideal-gas law which exist at high pressures may cause a corresponding significant error in the hypersonic flow parameters when calculated as an ideal gas. For example the ratio of the free-stream static to stagnation pressure as calculated from the thermodynamic properties of helium for a stagnation temperature of 80 F and pressure of 4,000 pounds per square inch absolute was found to be approximately 13 percent greater than that determined from the ideal-gas tabulation with a specific heat ratio of 5/3.
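
    For reference, the ideal-gas value being corrected is the standard isentropic relation (with γ = 5/3 for helium); writing the report's correction as a multiplicative factor C of the stagnation conditions is our notation, not a formula quoted from the report.

    ```latex
    % Ideal-gas isentropic pressure ratio and the real-gas value expressed as
    % a correction factor C(p_0, T_0) applied to the tabulated ideal-gas value.
    \begin{equation}
      \left.\frac{p}{p_0}\right|_{\text{ideal}}
        = \left(1 + \frac{\gamma - 1}{2}\,M^2\right)^{-\gamma/(\gamma-1)},
      \qquad
      \left.\frac{p}{p_0}\right|_{\text{real}}
        = C(p_0, T_0)\left.\frac{p}{p_0}\right|_{\text{ideal}},
      \qquad \gamma = \tfrac{5}{3}.
    \end{equation}
    ```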

  20. Fuzzy cluster analysis of simple physicochemical properties of amino acids for recognizing secondary structure in proteins.

    PubMed Central

    Mocz, G.

    1995-01-01

    Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues with approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882

  1. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I(μb)AC with Chang's attenuation correction factor. The scatter component image is estimated by convolving I(μb)AC with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
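
    A rough sketch of the IBSC structure as described (convolve the attenuation-corrected image with a scatter function, scale by a scatter fraction, subtract), assuming a Gaussian stand-in for the scatter function and a spatially uniform scatter fraction; both are hypothetical, not the paper's fitted functions.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(1)
    # Fake attenuation-corrected SPECT slice, counts per voxel.
    img_ac = np.clip(rng.normal(100.0, 10.0, (64, 64)), 0.0, None)

    scatter_sigma = 4.0        # pixels; assumed kernel width of the scatter function
    scatter_fraction = 0.3     # assumed, spatially uniform here

    # Estimate the scatter component and subtract it from the image.
    scatter_est = scatter_fraction * gaussian_filter(img_ac, scatter_sigma)
    img_sc = np.clip(img_ac - scatter_est, 0.0, None)
    print(f"mean counts before {img_ac.mean():.1f}, after {img_sc.mean():.1f}")
    ```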

  2. A vibration correction method for free-fall absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s-2, has been playing an important role in the areas of metrology, geophysics, and geodetics. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.

  3. Rewriting History: Historical Research With the Digital Plan

    DTIC Science & Technology

    2009-10-01

    significant enough to warrant a change or further analysis of how cadets take notes. The simple nature of the digital pen, akin to writing with a pen on... significant . By the third week the larger the percentage of handwriting correctly recognized, the higher the performance rating for the pen. Related...the pen at all. The current data showed two significant correlations between percent of handwriting recognized and the factors of pen performance and

  4. On the measurement of turbulent fluctuations in high-speed flows using hot wires and hot films

    NASA Technical Reports Server (NTRS)

    Acharya, M.

    1978-01-01

    A hot wire has a limited life in high-speed wind-tunnel flows because it is typically subjected to large dynamic loads. As a consequence, hot films and modified hot wires are frequently used for turbulence measurements in such flows. However, the fluctuation sensitivities of such probes are reduced because of various factors, leading to erroneous results. This paper describes the results of tests on some sensors in both subsonic and supersonic boundary-layer flows. A simple technique to determine dynamic calibration correction factors for the sensitivities is also presented.

  5. Simple and Efficient Technique for Correction of Unilateral Scissor Bite Using Straight Wire

    PubMed Central

    Dolas, Siddhesh Gajanan; Chitko, Shrikant Shrinivas; Kerudi, Veerendra Virupaxappa; Bonde, Prasad Vasudeo

    2016-01-01

    Unilateral scissor bite is a relatively rare malocclusion. However, its correction is often difficult and a challenge for the clinician. This article presents a simple and efficient technique for the correction of severe unilateral scissor bite in a 14-year-old boy, using 0.020" S.S. A. J. Wilcock wire (Premium Plus) out of the spool, with minimal adjustments, placed in the mandibular arch. After about six weeks, a good amount of correction was seen in the lower arch and the lower molar had been relieved of the scissor bite. PMID:27231682

  6. Ground temperature measurement by PRT-5 for maps experiment

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm and computer program were developed for determining the actual surface temperature from the effective brightness temperature as measured remotely by a radiation thermometer called PRT-5. This procedure allows the computation of atmospheric correction to the effective brightness temperature without performing detailed radiative transfer calculations. Model radiative transfer calculations were performed to compute atmospheric corrections for several values of the surface and atmospheric parameters individually and in combination. Polynomial regressions were performed between the magnitudes or deviations of these parameters and the corresponding computed corrections to establish simple analytical relations between them. Analytical relations were also developed to represent combined correction for simultaneous variation of parameters in terms of their individual corrections.
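
    The regression step described above can be illustrated with a short sketch: fit a low-order polynomial between parameter deviations and computed corrections, then apply the predicted correction to a measured brightness temperature. All numbers below are hypothetical stand-ins for the model radiative-transfer results.

    ```python
    import numpy as np

    # Hypothetical training data: deviations of an atmospheric parameter
    # (e.g. column water vapour) and the corresponding radiative-transfer
    # corrections to brightness temperature (kelvin). Values are illustrative.
    param_dev = np.array([-1.0, -0.5, 0.0, 0.5, 1.0, 1.5])
    correction = np.array([-0.8, -0.45, 0.0, 0.5, 1.1, 1.8])

    # Fit a low-order polynomial relating parameter deviation to correction,
    # mirroring the regression step described in the abstract.
    coeffs = np.polyfit(param_dev, correction, deg=2)
    predict = np.poly1d(coeffs)

    # Surface temperature = brightness temperature + predicted correction.
    t_brightness = 288.4   # measured effective brightness temperature (K)
    t_surface = t_brightness + predict(0.8)
    print(round(t_surface, 2))
    ```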

  7. Matching factorization theorems with an inverse-error weighting

    NASA Astrophysics Data System (ADS)

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; Pisano, Cristian; Signori, Andrea

    2018-06-01

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell-Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins-Soper-Sterman subtraction scheme. It is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.
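
    A minimal sketch of the inverse-error weighting itself, assuming two predictions with known power-correction uncertainties in a single bin; the function and numbers are illustrative, not the paper's implementation:

    ```python
    import numpy as np

    def matched(sigma_tmd, delta_tmd, sigma_coll, delta_coll):
        """Inverse-error-weighted average of two factorization-theorem
        predictions, following the weighting idea described above.

        sigma_* : cross-section values in some bin
        delta_* : their estimated power-correction uncertainties
        """
        w_tmd = 1.0 / delta_tmd**2
        w_coll = 1.0 / delta_coll**2
        combo = (w_tmd * sigma_tmd + w_coll * sigma_coll) / (w_tmd + w_coll)
        err = 1.0 / np.sqrt(w_tmd + w_coll)  # naive combined uncertainty
        return combo, err

    # Toy numbers (illustrative only): at low transverse momentum the TMD
    # result dominates because its uncertainty is small there.
    print(matched(sigma_tmd=10.0, delta_tmd=0.2, sigma_coll=8.0, delta_coll=2.0))
    ```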

  8. Matching factorization theorems with an inverse-error weighting

    DOE PAGES

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe; ...

    2018-04-03

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.

  9. Matching factorization theorems with an inverse-error weighting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Echevarria, Miguel G.; Kasemets, Tomas; Lansberg, Jean-Philippe

    We propose a new fast method to match factorization theorems applicable in different kinematical regions, such as the transverse-momentum-dependent and the collinear factorization theorems in Quantum Chromodynamics. At variance with well-known approaches relying on their simple addition and subsequent subtraction of double-counted contributions, ours simply builds on their weighting using the theory uncertainties deduced from the factorization theorems themselves. This allows us to estimate the unknown complete matched cross section from an inverse-error-weighted average. The method is simple and provides an evaluation of the theoretical uncertainty of the matched cross section associated with the uncertainties from the power corrections to the factorization theorems (additional uncertainties, such as the nonperturbative ones, should be added for a proper comparison with experimental data). Its usage is illustrated with several basic examples, such as Z boson, W boson, H0 boson and Drell–Yan lepton-pair production in hadronic collisions, and compared to the state-of-the-art Collins–Soper–Sterman subtraction scheme. In conclusion, it is also not limited to the transverse-momentum spectrum, and can straightforwardly be extended to match any (un)polarized cross section differential in other variables, including multi-differential measurements.

  10. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digitally repairing damaged film is now a mainstream practice. In this paper, we propose a sparse-color-correspondence-based technique to remove fading flicker from old films. Our model, which combines multiple frames to establish a simple correction model, includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, Tuna; Mayeda, Kevin; Hofstetter, Abraham; Gok, Rengin; Orgulu, Gonca; Turkelli, Niyazi

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., L{sub g} and surface waves) by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, they found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction. After calibrating the stations ISP, ISKB and MALT for local and regional distances, single-station moment-magnitude estimates (M{sub W}) derived from the coda spectra were in excellent agreement with those determined from multistation waveform modeling inversions, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M{sub W} estimates to significantly smaller events which could not otherwise be waveform modeled. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  12. Read Code quality assurance: from simple syntax to semantic stability.

    PubMed

    Schulz, E B; Barrett, J W; Price, C

    1998-01-01

    As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with "business rules" declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short.

  13. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization.

    PubMed

    Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf

    2018-06-01

    The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), and T1, and a T2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor C_Δϕ(α, TR, T1) = α'/α were determined. C_Δϕ was found to be independent of T1 and was fitted as a polynomial C_Δϕ(α, TR), allowing calculation of α' for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
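
    A brief sketch of how such a correction plugs into a standard VFA fit: the nominal flip angles are replaced by corrected angles α' = C·α before the usual linearized Ernst-equation fit. The correction factors are inputs here (they would come from the published polynomials); the fitting routine and synthetic test below are generic VFA practice, not the authors' code.

    ```python
    import numpy as np

    def t1_vfa(signals, flips_deg, tr_ms, corr=None):
        """Variable-flip-angle T1 estimate from spoiled gradient-echo signals.

        corr : optional per-angle correction factors C(alpha, TR) that replace
               each nominal flip angle by alpha' = C * alpha, emulating the
               spoiling correction described above. The factors would come
               from the published polynomials; they are inputs to this sketch.
        """
        a = np.radians(np.asarray(flips_deg, float))
        if corr is not None:
            a = a * np.asarray(corr, float)    # alpha' = C * alpha
        s = np.asarray(signals, float)
        # Linearised Ernst equation: S/sin(a) = E1 * S/tan(a) + M0*(1 - E1)
        x, y = s / np.tan(a), s / np.sin(a)
        e1, _ = np.polyfit(x, y, 1)            # slope is E1 = exp(-TR/T1)
        return -tr_ms / np.log(e1)             # T1 in the units of TR

    # Synthetic check: ideal signals for T1 = 1000 ms, TR = 20 ms.
    t1_true, tr = 1000.0, 20.0
    e1 = np.exp(-tr / t1_true)
    flips = [4.0, 10.0, 18.0, 25.0]
    sig = [np.sin(np.radians(f)) * (1 - e1) / (1 - e1 * np.cos(np.radians(f)))
           for f in flips]
    print(round(t1_vfa(sig, flips, tr), 1))    # ~ 1000.0
    ```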

  14. Generative models for discovering sparse distributed representations.

    PubMed Central

    Hinton, G E; Ghahramani, Z

    1997-01-01

    We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom-up, top-down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations. PMID:9304685

  15. Structure of amplitude correlations in open chaotic systems

    NASA Astrophysics Data System (ADS)

    Ericson, Torleif E. O.

    2013-02-01

    The Verbaarschot-Weidenmüller-Zirnbauer (VWZ) model is believed to correctly represent the correlations of two S-matrix elements for an open quantum chaotic system, but the solution has considerable complexity and is presently only accessible numerically. Here a procedure is developed to deduce its features over the full range of the parameter space in a transparent and simple analytical form, preserving accuracy to a considerable degree. The bulk of the VWZ correlations are described by the Gorin-Seligman expression for the two-amplitude correlations of the Ericson-Gorin-Seligman model. The structure of the remaining correction factors for correlation functions is discussed with special emphasis on the rôle of the level correlation hole for both inelastic and elastic correlations.

  16. SU-E-T-24: A Simple Correction-Based Method for Independent Monitor Unit (MU) Verification in Monte Carlo (MC) Lung SBRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pokhrel, D; Badkul, R; Jiang, H

    2014-06-01

    Purpose: Lung SBRT uses hypo-fractionated doses in small non-IMRT fields with tissue-heterogeneity-corrected plans. An independent MU verification is mandatory for safe and effective delivery of the treatment plan. This report compares planned MUs obtained from the iPlan XVMC algorithm against spreadsheet-based hand calculation using the most commonly used simple TMR-based method. Methods: Treatment plans of 15 patients who underwent MC-based lung SBRT to 50 Gy in 5 fractions (PTV V100% = 95%) were studied. The ITV was delineated on MIP images based on 4D-CT scans. PTVs (ITV + 5 mm margins) ranged from 10.1 to 106.5 cc (average = 48.6 cc). MC SBRT plans were generated using a combination of non-coplanar conformal arcs/beams with the iPlan XVMC algorithm (BrainLAB iPlan ver. 4.1.2) for a Novalis-TX unit consisting of micro-MLCs and a 6 MV SRS (1000 MU/min) beam. These plans were recomputed using the heterogeneity-corrected Pencil Beam (PB-hete) algorithm without changing any beam parameters, such as MLCs/MUs. The dose ratio PB-hete/MC gave beam-by-beam inhomogeneity correction factors (ICFs): individual correction. For an independent second check, MC MUs were verified using TMR-based hand calculation, which systematically underestimated MC MUs by ∼5%; an average ICF was obtained: average correction. The first 10 MC plans were also verified with ion-chamber measurements in a homogeneous phantom. Results: For both beams and arcs, the mean PB-hete dose was systematically overestimated by 5.5±2.6% and the mean hand-calculated MUs were systematically underestimated by 5.5±2.5% compared to XVMC. With individual correction, mean hand-calculated MUs matched XVMC within -0.3±1.4%/0.4±1.4% for beams/arcs, respectively. After the average 5% correction, hand-calculated MUs matched XVMC within 0.5±2.5%/0.6±2.0% for beams/arcs, respectively. A small dependence on tumor volume (TV)/field size (FS) was also observed. Ion-chamber measurements were within ±3.0%. Conclusion: PB-hete overestimates the dose to lung tumors relative to XVMC. The XVMC algorithm is more complex and more accurate in the presence of tissue heterogeneities. Measurement at the machine is time consuming and needs extra resources, and direct measurement of dose for heterogeneous treatment plans is not yet clinically practiced. This simple correction-based method is very helpful for the independent second check of MC lung SBRT plans and is routinely used in our clinic. A look-up table can be generated to include TV/FS dependence in the ICFs.

  17. A multi-institutional study of independent calculation verification in inhomogeneous media using a simple and effective method of heterogeneity correction integrated with the Clarkson method.

    PubMed

    Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori

    2018-05-21

    In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
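
    The lateral-heterogeneity step described above hinges on computing a radiological path length through a density map derived from CT. Below is a minimal sketch of such an RPL computation, with a toy water/lung slab standing in for real CT data; it is not the authors' code.

    ```python
    import numpy as np

    def radiological_path_length(density_2d, start, end, n_steps=200):
        """Radiological path length (RPL) along a ray through a 2-D map of
        relative density, sampled by simple stepping."""
        start, end = np.asarray(start, float), np.asarray(end, float)
        step = (end - start) / n_steps
        step_len = np.linalg.norm(step)        # geometric length per step (pixels)
        rpl = 0.0
        for i in range(n_steps):
            r, c = (start + (i + 0.5) * step).astype(int)
            rpl += density_2d[r, c] * step_len # density-weighted path element
        return rpl

    # Toy slab geometry: water with a low-density "lung" band.
    grid = np.ones((100, 100))
    grid[40:60, :] = 0.25                      # relative density of lung
    print(radiological_path_length(grid, (0, 50), (99, 50)))  # < 100 pixels
    ```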

  18. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulations with respect to observations is discussed in detail. Non-linear statistical bias correction is a suitable method for climate change data because it is simple and does not add artificial uncertainties to the impact assessment of climate change scenarios for climate change application studies (e.g., agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to another. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and the two methods are comparable. The bias correction is applied to temperature and precipitation for the present climate and for future projected data, which are then used in a simple statistical model to understand future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for the Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project changes in agricultural yield under the RCP4.5 and RCP8.5 scenarios. The results revealed a better convergence of model projections in the bias-corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding changes in agricultural productivity in the future with an agro-economic or a simple statistical model. The statistical model indicated that total food grain yield will increase over the Indian region in the future: by approximately 50 kg/ha under the RCP4.5 scenario from 2001 until the end of 2100, and by approximately 90 kg/ha under the RCP8.5 scenario over the same period. There are many studies using bias correction techniques, but this study applies the bias correction technique to future climate scenario data from CMIP5 models and applies it to crop statistics to find future crop yield changes over the Indian region.
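
    A compact sketch of the percentile-based quantile-mapping variant, assuming simple empirical percentiles and synthetic gamma-distributed rainfall; the CDF (Weibull) variant would replace the empirical quantiles with fitted Weibull quantiles.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_future):
        """Percentile-based quantile mapping: each future model value is
        assigned its percentile within the historical model distribution
        and replaced by the observed value at the same percentile."""
        q = np.linspace(0, 100, 101)
        model_q = np.percentile(model_hist, q)
        obs_q = np.percentile(obs_hist, q)
        # Map model value -> percentile -> observed quantile.
        pct = np.interp(model_future, model_q, q)
        return np.interp(pct, q, obs_q)

    # Toy example: model runs 2 mm/day too wet with inflated variance.
    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 3.0, 5000)            # "observed" monsoon rainfall
    mod = rng.gamma(2.0, 3.0, 5000) * 1.3 + 2.0
    fut = rng.gamma(2.0, 3.5, 5000) * 1.3 + 2.0
    corrected = quantile_map(mod, obs, fut)
    print(round(mod.mean(), 2), round(corrected.mean(), 2), round(obs.mean(), 2))
    ```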

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eken, T; Mayeda, K; Hofstetter, A

    A recently developed coda magnitude methodology was applied to selected broadband stations in Turkey for the purpose of testing the coda method in a large, laterally complex region. As found in other, albeit smaller regions, coda envelope amplitude measurements are significantly less variable than distance-corrected direct wave measurements (i.e., L{sub g} and surface waves) by roughly a factor of 3 to 4. Despite strong lateral crustal heterogeneity in Turkey, we found that the region could be adequately modeled assuming a simple 1-D, radially symmetric path correction for 10 narrow frequency bands ranging between 0.02 and 2.0 Hz. For higher frequencies, however, 2-D path corrections will be necessary and will be the subject of a future study. After calibrating the stations ISP, ISKB, and MALT for local and regional distances, single-station moment-magnitude estimates (M{sub w}) derived from the coda spectra were in excellent agreement with those determined from multi-station waveform modeling inversions of long-period data, exhibiting a data standard deviation of 0.17. Though the calibration was validated using large events, the results of the calibration will extend M{sub w} estimates to significantly smaller events which could not otherwise be waveform modeled due to poor signal-to-noise ratio at long periods and sparse station coverage. The successful application of the method is remarkable considering the significant lateral complexity in Turkey and the simple assumptions used in the coda method.

  20. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    NASA Astrophysics Data System (ADS)

    Kilcrease, D. P.; Brookes, S.

    2013-12-01

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. A simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is by using the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert-Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that from using the more time consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. We also look at some further modifications to our Born Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximate convergent close coupling calculations.

  1. Ion Thermal Conductivity and Ion Distribution Function in the Banana Regime

    DTIC Science & Technology

    1988-04-01

    approximate collision operator which is more general than the model operator derived by HIRSHMAN and SIGMAR is presented. By use of this collision... by HIRSHMAN and SIGMAR (1976). The finite aspect ratio correction is shown to increase the ion thermal conductivity by a factor of two in the... operator (12) is more general than that of Hirshman and Sigmar which can be derived by approximating Ct(l=0,1,2) in (12) by more simple forms. Let us

  2. A simple but fully nonlocal correction to the random phase approximation

    NASA Astrophysics Data System (ADS)

    Ruzsinszky, Adrienn; Perdew, John P.; Csonka, Gábor I.

    2011-03-01

    The random phase approximation (RPA) stands on the top rung of the ladder of ground-state density functional approximations. The simple or direct RPA has been found to predict accurately many isoelectronic energy differences. A nonempirical local or semilocal correction to this direct RPA leaves isoelectronic energy differences almost unchanged, while improving total energies, ionization energies, etc., but fails to correct the RPA underestimation of molecular atomization energies. Direct RPA and its semilocal correction may miss part of the middle-range multicenter nonlocality of the correlation energy in a molecule. Here we propose a fully nonlocal, hybrid-functional-like addition to the semilocal correction. The added full nonlocality is important in molecules, but not in atoms. Under uniform-density scaling, this fully nonlocal correction scales like the second-order-exchange contribution to the correlation energy, an important part of the correction to direct RPA, and like the semilocal correction itself. For the atomization energies of ten molecules, and with the help of one fit parameter, it performs much better than the elaborate second-order screened exchange correction.

  3. A new technique for correction of simple congenital earlobe clefts: diametric hinge flaps method.

    PubMed

    Qing, Yong; Cen, Ying; Xu, Xuewen; Chen, Junjie

    2013-06-01

    The earlobe plays an important part in the aesthetic appearance of the auricle. Congenital cleft earlobe may vary considerably in severity, from simple notching to extensive tissue deficiency. Most patients with cleft earlobe require surgical correction because of the abnormal appearance. In this article, a new surgical technique for correcting congenital simple cleft earlobe using diametric hinge flaps is introduced. We retrospectively reviewed 4 patients diagnosed with congenital cleft earlobe between 2008 and 2010, all of whom received this new surgical method. The patients were followed up for 3 to 6 months. All patients attained relatively full-bodied earlobes with smooth contours and inconspicuous scars, and found their reconstructed earlobes aesthetically satisfactory. One patient experienced hypoesthesia in the operated area but recovered 3 months later. No other complications were noted. This simple method not only makes full use of the surrounding tissues to reconstruct full-bodied earlobes but also avoids the small notch formation caused by linear scar contraction sometimes seen with more traditional methods.

  4. Examination of multi-model ensemble seasonal prediction methods using a simple climate system

    NASA Astrophysics Data System (ADS)

    Kang, In-Sik; Yoo, Jin Ho

    2006-02-01

    A simple climate model was designed as a proxy for the real climate system, and a number of prediction models were generated by slightly perturbing the physical parameters of the simple model. A set of long (240-year) historical hindcast predictions was performed with the various prediction models and used to examine various issues of multi-model ensemble seasonal prediction, such as the best way of blending multiple models and the selection of models. Based on these results, we suggest a feasible way of maximizing the benefit of using multiple models in seasonal prediction. In particular, three types of multi-model ensemble prediction systems, i.e., the simple composite, the superensemble, and the composite after statistically correcting individual predictions (corrected composite), are examined and compared to each other. The superensemble has more of an overfitting problem than the others, especially for small training samples and/or weak external forcing, and the corrected composite produces the best prediction skill among the multi-model systems.
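
    The "corrected composite" scheme can be sketched in a few lines: each model's hindcasts are regressed onto observations, the resulting linear correction is applied to its forecast, and the corrected forecasts are averaged. The toy two-model system below is an assumption for demonstration, not the paper's proxy model.

    ```python
    import numpy as np

    def corrected_composite(hindcasts, obs, forecasts):
        """Regress each model's hindcasts onto the observations, apply that
        correction to its forecast, then average (equal weights).

        hindcasts : (n_models, n_years) array of historical predictions
        obs       : (n_years,) observed values over the same period
        forecasts : (n_models,) new predictions to be combined
        """
        corrected = []
        for h, f in zip(hindcasts, forecasts):
            slope, intercept = np.polyfit(h, obs, 1)  # per-model correction
            corrected.append(slope * f + intercept)
        return np.mean(corrected)

    # Toy system: two differently biased models of the same signal.
    rng = np.random.default_rng(1)
    truth = rng.normal(0, 1, 40)
    h = np.stack([1.5 * truth + 0.5 + rng.normal(0, 0.3, 40),
                  0.7 * truth - 1.0 + rng.normal(0, 0.3, 40)])
    print(corrected_composite(h, truth, forecasts=h[:, -1]))
    ```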

  5. Correction of Atmospheric Haze in RESOURCESAT-1 LISS-4 MX Data for Urban Analysis: AN Improved Dark Object Subtraction Approach

    NASA Astrophysics Data System (ADS)

    Mustak, S.

    2013-09-01

    The correction of atmospheric effects is essential because visible bands of shorter wavelength are strongly affected by atmospheric scattering, especially Rayleigh scattering. The objectives of this paper are to find the haze values present in all spectral bands and to correct them for urban analysis. In this paper, the Improved Dark Object Subtraction method of P. Chavez (1988) is applied to correct atmospheric haze in a Resourcesat-1 LISS-4 multispectral satellite image. Dark Object Subtraction is a very simple image-based method of atmospheric haze correction which assumes that at least a few pixels within an image should be black (0% reflectance); such black pixels, termed dark objects, are clear water bodies and shadows whose DN values are zero (0) or close to zero in the image. The Simple Dark Object Subtraction method is a first-order atmospheric correction, whereas the Improved Dark Object Subtraction method corrects the haze in terms of atmospheric scattering and path radiance based on a power law for the relative scattering effect of the atmosphere. The haze values extracted using the Simple Dark Object Subtraction method for the green band (band 2), red band (band 3) and NIR band (band 4) are 40, 34 and 18, while those extracted using the Improved Dark Object Subtraction method are 40, 18.02 and 11.80 for the aforesaid bands. It is concluded that the haze values extracted by the Improved Dark Object Subtraction method provide more realistic results than the Simple Dark Object Subtraction method.
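
    A sketch of the improvement's core idea, under the assumption of a power-law relative-scattering model (amplitude ∝ λ^-n): the haze is measured once in a short-wavelength starting band and predicted for the other bands, rather than read off each band independently. The band centres and exponent below are illustrative, so the predicted values will not reproduce the paper's exact numbers.

    ```python
    # Band centres in micrometres (illustrative values for green/red/NIR).
    wavelengths = {"green": 0.555, "red": 0.650, "nir": 0.815}

    def improved_dos(start_band, start_haze, n=4.0):
        """Predict per-band haze from the starting band's dark-object value,
        assuming scattering amplitude proportional to wavelength**-n."""
        ref = wavelengths[start_band] ** (-n)
        return {band: start_haze * (wl ** (-n)) / ref
                for band, wl in wavelengths.items()}

    # Starting haze of 40 DN in the green band, very-clear-atmosphere exponent.
    haze = improved_dos("green", 40.0)
    print({b: round(v, 1) for b, v in haze.items()})
    # Corrected DN = raw DN - haze[band], clipped at zero, for each pixel.
    ```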

  6. Sum-rule corrections: a route to error cancellations in correlation matrix renormalisation theory

    NASA Astrophysics Data System (ADS)

    Liu, C.; Liu, J.; Yao, Y. X.; Wang, C. Z.; Ho, K. M.

    2017-03-01

    We recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  7. Percolation in three-dimensional fracture networks for arbitrary size and shape distributions

    NASA Astrophysics Data System (ADS)

    Thovert, J.-F.; Mourzenko, V. V.; Adler, P. M.

    2017-04-01

    The percolation threshold of fracture networks is investigated by extensive direct numerical simulations. The fractures are randomly located and oriented in three-dimensional space. A very wide range of regular, irregular, and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. The results are rationalized in terms of a dimensionless density. A simple model involving a new shape factor is proposed, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy in monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions.

  8. An improved pi/4-QPSK with nonredundant error correction for satellite mobile broadcasting

    NASA Technical Reports Server (NTRS)

    Feher, Kamilo; Yang, Jiashi

    1991-01-01

    An improved pi/4-quadrature phase-shift keying (QPSK) receiver that incorporates a simple nonredundant error correction (NEC) structure is proposed for satellite and land-mobile digital broadcasting. The bit-error-rate (BER) performance of the pi/4-QPSK with NEC is analyzed and evaluated in a fast Rician fading and additive white Gaussian noise (AWGN) environment using computer simulation. It is demonstrated that with simple electronics the performance of a noncoherently detected pi/4-QPSK signal in both AWGN and fast Rician fading can be improved. When the K-factor (a ratio of average power of multipath signal to direct path power) of the Rician channel decreases, the improvement increases. An improvement of 1.2 dB could be obtained at a BER of 0.0001 in the AWGN channel. This performance gain is achieved without requiring any signal redundancy and additional bandwidth. Three types of noncoherent detection schemes of pi/4-QPSK with NEC structure, such as IF band differential detection, baseband differential detection, and FM discriminator, are discussed. It is concluded that the pi/4-QPSK with NEC is an attractive scheme for power-limited satellite land-mobile broadcasting systems.

  9. Is Directivity Still Effective in a PSHA Framework?

    NASA Astrophysics Data System (ADS)

    Spagnuolo, E.; Herrero, A.; Cultrera, G.

    2008-12-01

    Source rupture parameters, like directivity, modulate the energy release, causing variations in the radiated signal amplitude. They thus affect the empirical predictive equations and, as a consequence, the seismic hazard assessment. Classical probabilistic hazard evaluations, e.g. Cornell (1968), use very simple predictive equations based only on magnitude and distance, which do not account for variables describing the rupture process. Nowadays, however, a few predictive equations (e.g. Somerville 1997, Spudich and Chiou 2008) take rupture directivity into account, and a few implementations have been made in a PSHA framework (e.g. Convertito et al. 2006, Rowshandel 2006). In practice, these new empirical predictive models incorporate the rupture propagation effects quantitatively through the introduction of variables like rake, azimuth, rupture velocity and laterality. The contribution of all these variables is summarized in corrective factors derived from measuring differences between the real data and the predicted ones. It is therefore possible to keep the older computation, making use of a simple predictive model, while incorporating the directivity effect through the corrective factors. Each supplementary variable implies a new integral over the parametric space; however, the difficulty lies in constraining the parameter distribution functions. We present preliminary results for ad hoc distributions (Gaussian, uniform) in order to test the impact of incorporating directivity into PSHA models. We demonstrate that incorporating directivity in PSHA by means of the new predictive equations may lead to strong percentage variations in the hazard assessment.

  10. A simple method for correcting spatially resolved solar intensity oscillation observations for variations in scattered light

    NASA Technical Reports Server (NTRS)

    Jefferies, S. M.; Duvall, T. L., Jr.

    1991-01-01

    A measurement of the intensity distribution in an image of the solar disk will be corrupted by a spatial redistribution of the light that is caused by the earth's atmosphere and the observing instrument. A simple correction method is introduced here that is applicable for solar p-mode intensity observations obtained over a period of time in which there is a significant change in the scattering component of the point spread function. The method circumvents the problems incurred with an accurate determination of the spatial point spread function and its subsequent deconvolution from the observations. The method only corrects the spherical harmonic coefficients that represent the spatial frequencies present in the image and does not correct the image itself.

  11. A simple and accurate method for calculation of the structure factor of interacting charged spheres.

    PubMed

    Wu, Chu; Chan, Derek Y C; Tabor, Rico F

    2014-07-15

    Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented on a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparisons between experimental structure factors with the pioneering theoretical model of Hayter-Penfold that uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q) and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Superconducting qubit in a waveguide cavity with a coherence time approaching 0.1 ms

    NASA Astrophysics Data System (ADS)

    Rigetti, Chad; Gambetta, Jay M.; Poletto, Stefano; Plourde, B. L. T.; Chow, Jerry M.; Córcoles, A. D.; Smolin, John A.; Merkel, Seth T.; Rozen, J. R.; Keefe, George A.; Rothwell, Mary B.; Ketchen, Mark B.; Steffen, M.

    2012-09-01

    We report a superconducting artificial atom with a coherence time of T2*=92 μs and energy relaxation time T1=70 μs. The system consists of a single Josephson junction transmon qubit on a sapphire substrate embedded in an otherwise empty copper waveguide cavity whose lowest eigenmode is dispersively coupled to the qubit transition. We attribute the factor of four increase in the coherence quality factor relative to previous reports to device modifications aimed at reducing qubit dephasing from residual cavity photons. This simple device holds promise as a robust and easily produced artificial quantum system whose intrinsic coherence properties are sufficient to allow tests of quantum error correction.

  13. The influence of operator position, height and body orientation on eye lens dose in interventional radiology and cardiology: Monte Carlo simulations versus realistic clinical measurements.

    PubMed

    Principi, S; Farah, J; Ferrari, P; Carinou, E; Clairand, I; Ginjaume, M

    2016-09-01

    This paper aims to provide some practical recommendations to reduce eye lens dose for workers exposed to X-rays in interventional cardiology and radiology and also to propose an eye lens correction factor when lead glasses are used. Monte Carlo simulations are used to study the variation of eye lens exposure with operator position, height and body orientation with respect to the patient and the X-ray tube. The paper also looks into the efficiency of wraparound lead glasses using simulations. Computation results are compared with experimental measurements performed in Spanish hospitals using eye lens dosemeters as well as with data from available literature. Simulations showed that left eye exposure is generally higher than the right eye, when the operator stands on the right side of the patient. Operator height can induce a strong dose decrease by up to a factor of 2 for the left eye for 10-cm-taller operators. Body rotation of the operator away from the tube by 45°-60° reduces eye exposure by a factor of 2. The calculation-based correction factor of 0.3 for wraparound type lead glasses was found to agree reasonably well with experimental data. Simple precautions, such as the positioning of the image screen away from the X-ray source, lead to a significant reduction of the eye lens dose. Measurements and simulations performed in this work also show that a general eye lens correction factor of 0.5 can be used when lead glasses are worn regardless of operator position, height and body orientation. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  14. Measurement and modeling of out-of-field doses from various advanced post-mastectomy radiotherapy techniques

    NASA Astrophysics Data System (ADS)

    Yoon, Jihyung; Heins, David; Zhao, Xiaodong; Sanders, Mary; Zhang, Rui

    2017-12-01

    More and more advanced radiotherapy techniques have been adopted for post-mastectomy radiotherapy (PMRT). Patient dose reconstruction is challenging for these advanced techniques because they increase the low out-of-field dose area, while the accuracy of out-of-field dose calculations by current commercial treatment planning systems (TPSs) is poor. We aim to measure and model the out-of-field radiation doses from various advanced PMRT techniques. PMRT treatment plans for an anthropomorphic phantom were generated, including volumetric modulated arc therapy with standard and flattening-filter-free photon beams, mixed beam therapy, 4-field intensity modulated radiation therapy (IMRT), and tomotherapy. We measured doses in the phantom where the TPS-calculated doses were lower than 5% of the prescription dose using thermoluminescent dosimeters (TLDs). The TLD measurements were corrected by two additional energy correction factors, namely an out-of-beam out-of-field (OBOF) correction factor K_OBOF and an in-beam out-of-field (IBOF) correction factor K_IBOF, which were determined by separate measurements using an ion chamber and TLDs. A simple analytical model was developed to predict out-of-field dose as a function of distance from the field edge for each PMRT technique. The root mean square discrepancies between measured and calculated out-of-field doses were within 0.66 cGy/Gy for all techniques. The IBOF doses were highly scattered and should be evaluated case by case. One can easily combine the out-of-field dose measured here with the in-field dose calculated by the local TPS to reconstruct organ doses for a specific PMRT patient if the same treatment apparatus and technique were used.
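
    The analytical model described above can be sketched as a simple fit of dose versus distance from the field edge. The exponential form, the TLD values, and the correction-factor value below are illustrative assumptions, not the paper's fitted results:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def oof_dose(distance_cm, a, b):
        """Assumed out-of-field dose model: exponential fall-off with
        distance from the field edge (dose per prescribed Gy)."""
        return a * np.exp(-b * distance_cm)

    dist = np.array([5, 10, 15, 20, 30])          # cm from field edge
    tld = np.array([2.0, 1.1, 0.62, 0.35, 0.12])  # cGy/Gy, hypothetical TLDs
    k_obof = 0.95                                 # hypothetical OBOF correction

    # Fit the model to energy-corrected TLD readings.
    (a, b), _ = curve_fit(oof_dose, dist, tld * k_obof, p0=(3.0, 0.1))
    print(f"dose(25 cm) ~ {oof_dose(25.0, a, b):.2f} cGy/Gy")
    ```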

  15. Relapses vs. reactions in multibacillary leprosy: proposal of new relapse criteria.

    PubMed

    Linder, Katharina; Zia, Mutaher; Kern, Winfried V; Pfau, Ruth K M; Wagner, Dirk

    2008-03-01

    To compare a new scoring system for multibacillary (MB) leprosy relapses, which combines time factor, risk factors and clinical presentation at relapse, to the WHO criteria. Data were collected on all relapses diagnosed between 1998 and 2004 at the Marie-Adelaide-Centre in Karachi, Pakistan, including case histories, clinical manifestations, follow-up, bacterial indices, treatment and contacts. For the diagnosis of MB relapses, a simple scoring system was developed and validated on a data set of mouse footpad (MFP)-confirmed relapses (Leprosy Reviews, 76, 2005, 241). Its sensitivity was further evaluated in the Karachi relapse cohort. The P-value was calculated with McNemar's test with continuity correction. The new scoring system that combines time factor, risk factors and clinical presentation at relapse had a higher sensitivity in MFP-confirmed relapses than the WHO criteria (95% vs. 65%, P < 0.01). The sensitivity of the scoring system was also significantly higher than the WHO criteria in the 57 cases of MB relapses diagnosed in Karachi (72% vs. 54%, P < 0.05). This new simple scoring system for diagnosing MB relapses in leprosy should be further validated in a prospective study to confirm its superior sensitivity and to evaluate the specificity of these criteria by using MFP confirmation for patients presenting with signs of activity after treatment.

  16. Correction of the near threshold behavior of electron collisional excitation cross-sections in the plane-wave Born approximation

    DOE PAGES

    Kilcrease, D. P.; Brookes, S.

    2013-08-19

    The modeling of NLTE plasmas requires the solution of population rate equations to determine the populations of the various atomic levels relevant to a particular problem. The equations require many cross sections for excitation, de-excitation, ionization and recombination. Additionally, a simple and computationally fast way to calculate electron collisional excitation cross-sections for ions is by using the plane-wave Born approximation. This is essentially a high-energy approximation, and the cross section suffers from the unphysical problem of going to zero near threshold. Various remedies for this problem have been employed with varying degrees of success. We present a correction procedure for the Born cross-sections that employs the Elwert–Sommerfeld factor to correct for the use of plane waves instead of Coulomb waves, in an attempt to produce a cross-section similar to that from using the more time consuming Coulomb Born approximation. We compare this new approximation with other, often employed correction procedures. Furthermore, we also look at some further modifications to our Born Elwert procedure and its combination with Y.K. Kim's correction of the Coulomb Born approximation for singly charged ions, which more accurately approximate convergent close coupling calculations.

  17. A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors.

    PubMed

    Cao, Zhaoliang; Mu, Quanquan; Hu, Lifa; Lu, Xinghai; Xuan, Li

    2009-09-28

    A simple method for evaluating the wavefront compensation error of diffractive liquid-crystal wavefront correctors (DLCWFCs) for atmospheric turbulence correction is reported. A simple formula describing the relationship between pixel number, DLCWFC aperture, quantization level, and atmospheric coherence length was derived from atmospheric turbulence wavefronts calculated using Kolmogorov atmospheric turbulence theory. It was found that the pixel number across the DLCWFC aperture is a linear function of the telescope aperture and the quantization level, and an exponential function of the atmospheric coherence length. These results are useful for applying DLCWFCs to atmospheric turbulence correction for large-aperture telescopes.

  18. Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory

    DOE PAGES

    Liu, C.; Liu, J.; Yao, Y. X.; ...

    2017-01-16

    Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  19. Sum-rule corrections: A route to error cancellations in correlation matrix renormalisation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, C.; Liu, J.; Yao, Y. X.

    Here, we recently proposed the correlation matrix renormalisation (CMR) theory to efficiently and accurately calculate the ground state total energy of molecular systems, based on the Gutzwiller variational wavefunction (GWF) to treat the electronic correlation effects. To help reduce numerical complications and better adapt the CMR to infinite lattice systems, we need to further refine the way we minimise the error originating from the approximations in the theory. This conference proceeding reports our recent progress on this key issue: namely, we obtained a simple analytical functional form for the one-electron renormalisation factors, and introduced a novel sum-rule correction for a more accurate description of the intersite electron correlations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.

  20. Using "Tracker" to Prove the Simple Harmonic Motion Equation

    ERIC Educational Resources Information Center

    Kinchin, John

    2016-01-01

    Simple harmonic motion (SHM) is a common topic for many students to study. Using the free yet versatile motion-tracking software "Tracker", we can extend the students' experience and show that the general equation for SHM does lead to the correct period of a simple pendulum.
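
    A computational analogue of the exercise: estimate the period from a tracked angle-time series and compare it with T = 2π√(L/g). The synthetic series below stands in for a real Tracker export; length and amplitude are arbitrary choices.

    ```python
    import numpy as np

    g, L = 9.81, 0.80                    # m/s^2, pendulum length in metres
    t = np.linspace(0, 6, 600)
    theta = 0.08 * np.cos(np.sqrt(g / L) * t)   # small-angle SHM solution

    # Estimate the period from zero crossings of the tracked angle:
    # consecutive crossings are half a period apart.
    crossings = t[np.where(np.diff(np.sign(theta)) != 0)]
    T_measured = 2 * np.mean(np.diff(crossings))
    T_theory = 2 * np.pi * np.sqrt(L / g)
    print(round(T_measured, 3), round(T_theory, 3))   # both ~ 1.795 s
    ```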

  1. Identifying a key physical factor sensitive to the performance of Madden-Julian oscillation simulation in climate models

    NASA Astrophysics Data System (ADS)

    Kim, Go-Un; Seo, Kyong-Hwan

    2018-01-01

    A key physical factor in regulating the performance of Madden-Julian oscillation (MJO) simulation is examined by using 26 climate model simulations from the World Meteorological Organization's Working Group for Numerical Experimentation/Global Energy and Water Cycle Experiment Atmospheric System Study (WGNE and MJO-Task Force/GASS) global model comparison project. For this, the intraseasonal moisture budget equation is analyzed and a simple, efficient physical quantity is developed. The result shows that MJO skill is most sensitive to the vertically integrated intraseasonal zonal wind convergence (ZC). In particular, a specific threshold value of the strength of the ZC can be used to distinguish between good and poor models. An additional finding is that good models exhibit the correct simultaneous phase relationship between convection and the large-scale circulation. In poor models, however, the peak circulation response appears 3 days after peak rainfall, suggesting unfavorable coupling between convection and circulation. To improve the simulation of the MJO in climate models, we propose that this delay of the circulation response to convection needs to be corrected in the cumulus parameterization scheme.

  2. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response.

    PubMed

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-04-29

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, analog to digital converter (ADC) quantization (SNRQ), etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.
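
    A toy version of the calibration idea: compute the ideal PSD position estimate from the electrode currents, then absorb the residual electronic distortions into a fitted correction polynomial. The distortion model and all numbers are synthetic stand-ins for bench measurements, not the authors' procedure.

    ```python
    import numpy as np

    def psd_position(i1, i2, length_mm):
        """Ideal 1-D PSD position estimate from the two electrode currents."""
        return 0.5 * length_mm * (i2 - i1) / (i1 + i2)

    # Spot 1 mm right of centre on a 10 mm PSD (currents in amperes).
    print(psd_position(0.4e-3, 0.6e-3, 10.0))   # -> 1.0 mm

    # Calibration step: measure known spot positions, then fit a low-order
    # polynomial mapping raw estimates to true positions so that residual
    # electronic errors (gain, offset, mild nonlinearity) are absorbed.
    true_pos = np.linspace(-4, 4, 9)                   # mm, known positions
    raw = 1.03 * true_pos + 0.15 + 0.02 * true_pos**2  # distorted readout
    cal = np.poly1d(np.polyfit(raw, true_pos, 2))      # correction polynomial

    corrected = cal(raw)
    print(np.max(np.abs(corrected - true_pos)))        # residual error ~ 0
    ```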

  3. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

    PubMed Central

    Rodríguez-Navarro, David; Lázaro-Galilea, José Luis; Bravo-Muñoz, Ignacio; Gardel-Vicente, Alfredo; Tsirigotis, Georgios

    2016-01-01

    In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system's response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor's optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, analog to digital converter (ADC) quantization (SNRQ), etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section. PMID:27136562

  4. Digital particle image velocimetry measurements of the downwash distribution of a desert locust Schistocerca gregaria

    PubMed Central

    Bomphrey, Richard J; Taylor, Graham K; Lawson, Nicholas J; Thomas, Adrian L.R

    2005-01-01

    Actuator disc models of insect flight are concerned solely with the rate of momentum transfer to the air that passes through the disc. These simple models assume that an even pressure is applied across the disc, resulting in a uniform downwash distribution. However, a correction factor, k, is often included to correct for the difference in efficiency between the assumed even downwash distribution, and the real downwash distribution. In the absence of any empirical measurements of the downwash distribution behind a real insect, the values of k used in the literature have been necessarily speculative. Direct measurement of this efficiency factor is now possible, and could be used to compare the relative efficiencies of insect flight across the Class. Here, we use Digital Particle Image Velocimetry to measure the instantaneous downwash distribution, mid-downstroke, of a tethered desert locust (Schistocerca gregaria). By integrating the downwash distribution, we are thereby able to provide the first direct empirical measurement of k for an insect. The measured value of k=1.12 corresponds reasonably well with that predicted by previous theoretical studies. PMID:16849240

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessard, Francois; Archambault, Louis; Plamondon, Mathieu

    Purpose: Photon dosimetry in the kilovolt (kV) energy range represents a major challenge for diagnostic and interventional radiology and superficial therapy. Plastic scintillation detectors (PSDs) are potentially good candidates for this task. This study proposes a simple way to obtain accurate correction factors to compensate for the response of PSDs to photon energies between 80 and 150 kVp. The performance of PSDs is also investigated to determine their potential usefulness in the diagnostic energy range. Methods: A 1-mm-diameter, 10-mm-long PSD was irradiated by a Therapax SXT 150 unit using five different beam qualities made of tube potentials ranging from 80 to 150 kVp and filtration thickness ranging from 0.8 to 0.2 mmAl + 1.0 mmCu. The light emitted by the detector was collected using an 8-m-long optical fiber and a polychromatic photodiode, which converted the scintillation photons to an electrical current. The PSD response was compared with the reference free air dose rate measured with a calibrated Farmer NE2571 ionization chamber. PSD measurements were corrected using spectra-weighted corrections, accounting for mass energy-absorption coefficient differences between the sensitive volumes of the ionization chamber and the PSD, as suggested by large cavity theory (LCT). Beam spectra were obtained from x-ray simulation software and validated experimentally using a CdTe spectrometer. Correction factors were also obtained using Monte Carlo (MC) simulations. Percent depth dose (PDD) measurements were compensated for beam hardening using the LCT correction method. These PDD measurements were compared with uncorrected PSD data, PDD measurements obtained using Gafchromic films, Monte Carlo simulations, and previous data. Results: For each beam quality used, the authors observed an increase of the energy response with effective energy when no correction was applied to the PSD response. Using the LCT correction, the PSD response was almost energy independent, with a residual 2.1% coefficient of variation (COV) over the 80-150-kVp energy range. Monte Carlo corrections reduced the COV to 1.4% over this energy range. All PDD measurements were in good agreement with one another except for the uncorrected PSD data, in which an over-response was observed with depth (13% at 10 cm with a 100 kVp beam), showing that beam hardening had a non-negligible effect on the PSD response. A correction based on LCT compensated very well for this effect, reducing the over-response to 3%. Conclusion: In the diagnostic energy range, PSDs show high-energy dependence, which can be corrected using spectra-weighted mass energy-absorption coefficients, showing no considerable sign of quenching between these energies. Correction factors obtained by Monte Carlo simulations confirm that the approximations made by LCT corrections are valid. Thus, PSDs could be useful for real-time dosimetry in radiology applications.
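
    The spectra-weighted correction described above amounts to weighting the ratio of mass energy-absorption coefficients by the measured energy-fluence spectrum. The sketch below assumes placeholder spectrum and coefficient values; real ones would come from the spectrometer and standard μen/ρ tables.

    ```python
    import numpy as np

    # Placeholder beam spectrum and mass energy-absorption coefficients
    # (cm^2/g); real values would come from a CdTe spectrometer and
    # published tables for air and the scintillator material.
    energy_kev = np.array([30, 50, 70, 90, 110])
    fluence = np.array([0.10, 0.30, 0.35, 0.20, 0.05])          # relative
    mu_en_air = np.array([0.150, 0.041, 0.027, 0.024, 0.023])
    mu_en_scint = np.array([0.160, 0.045, 0.028, 0.024, 0.023])

    # Large-cavity-theory style factor: energy-fluence-weighted average of
    # the air-to-scintillator coefficient ratio.
    energy_fluence = fluence * energy_kev
    k = np.sum(energy_fluence * mu_en_air / mu_en_scint) / np.sum(energy_fluence)
    print(round(k, 3))   # spectrum-weighted correction factor
    ```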

  6. 49 CFR 325.75 - Ground surface correction factors

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    Title 49 (Transportation), 2010 edition: § 325.75 of the Motor Carrier Noise Emission Standards (Correction Factors subpart) prescribes ground surface correction factors; corrections to measured noise levels must take into account both the distance correction factors contained in § 325.73 and the ground surface correction factors of this section.

  7. 49 CFR 325.75 - Ground surface correction factors

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    Title 49 (Transportation), 2011 edition: § 325.75 of the Motor Carrier Noise Emission Standards (Correction Factors subpart) prescribes ground surface correction factors; corrections to measured noise levels must take into account both the distance correction factors contained in § 325.73 and the ground surface correction factors of this section.

  8. Sagittal imbalance in patients with lumbar spinal stenosis and outcomes after simple decompression surgery.

    PubMed

    Shin, E Kyung; Kim, Chi Heon; Chung, Chun Kee; Choi, Yunhee; Yim, Dahae; Jung, Whei; Park, Sung Bae; Moon, Jung Hyeon; Heo, Won; Kim, Sung-Mi

    2017-02-01

    Lumbar spinal stenosis (LSS) is the most common lumbar degenerative disease, and sagittal imbalance is uncommon. Forward-bending posture, which is primarily caused by buckling of the ligamentum flavum, may be improved via simple decompression surgery. The objectives of this study were to identify the risk factors for sagittal imbalance and to describe the outcomes of simple decompression surgery. This is a retrospective nested case-control study. The patient sample comprised 83 consecutive patients (M:F = 46:37; mean age, 68.5±7.7 years) who underwent decompression surgery and a minimum of 12 months of follow-up. The primary end point was normalization of sagittal imbalance after decompression surgery. Sagittal imbalance was defined as a C7 sagittal vertical axis (SVA) ≥40 mm on a 36-inch-long lateral whole spine radiograph. Logistic regression analysis was used to identify the risk factors for sagittal imbalance. Bilateral decompression was performed via a unilateral approach with a tubular retractor. The SVA was measured on serial radiographs performed 1, 3, 6, and 12 months postoperatively. The prognostic factors for sagittal balance recovery were determined based on various clinical and radiological parameters. Sagittal imbalance was observed in 54% (45/83) of patients, and its risk factors were old age and a large mismatch between pelvic incidence and lumbar lordosis. The 1-year normalization rate was 73% after decompression surgery, and the median time to normalization was 1 to 3 months. Patients who did not experience SVA normalization exhibited low thoracic kyphosis (hazard ratio [HR], 1.04; 95% confidence interval [CI], 1.02-1.10) (p<.01) and spondylolisthesis (HR, 0.33; 95% CI, 0.17-0.61) before surgery. Sagittal imbalance was observed in more than 50% of LSS patients, but this imbalance was correctable via simple decompression surgery in 70% of patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Physical condition for elimination of ambiguity in conditionally convergent lattice sums

    NASA Astrophysics Data System (ADS)

    Young, K.

    1987-02-01

    The conditional convergence of the lattice sum defining the Madelung constant gives rise to an ambiguity in its value. It is shown that this ambiguity is related, through a simple and universal integral, to the average charge density on the crystal surface. The physically correct value is obtained by setting the charge density to zero. A simple and universally applicable formula for the Madelung constant is derived as a consequence. It consists of adding up dipole-dipole energies together with a nontrivial correction term.
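
    The paper's formula itself is not reproduced here, but the ambiguity it addresses is easy to exhibit numerically: a neutralized-cell scheme such as Evjen's classic method fixes the surface term and converges to the physical NaCl value. A minimal sketch:

    ```python
    import numpy as np

    def madelung_evjen(n):
        """Estimate the NaCl Madelung constant by Evjen's neutralized-cube method.

        Sites on the faces/edges/corners of the (2n+1)^3 cube get fractional
        weights so every partial sum corresponds to a charge-neutral cluster,
        which removes the ambiguity of the conditionally convergent lattice sum.
        """
        total = 0.0
        for i in range(-n, n + 1):
            for j in range(-n, n + 1):
                for k in range(-n, n + 1):
                    if i == j == k == 0:
                        continue
                    q = (-1) ** (i + j + k)     # alternating point charges
                    w = 1.0
                    for c in (i, j, k):         # halve weight on each boundary plane
                        if abs(c) == n:
                            w *= 0.5
                    total += q * w / np.sqrt(i * i + j * j + k * k)
        return total

    print(madelung_evjen(8))   # ~ -1.7476 (the NaCl value, up to sign convention)
    ```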

  10. A technique for correcting ERTS data for solar and atmospheric effects

    NASA Technical Reports Server (NTRS)

    Rogers, R. H.; Peacock, K.

    1973-01-01

    A technique is described by which an ERTS investigator can obtain absolute target reflectances by correcting spacecraft radiance measurements for variable target irradiance, atmospheric attenuation, and atmospheric backscatter. A simple measuring instrument and the necessary atmospheric measurements are discussed, and examples demonstrate the nature and magnitude of the atmospheric corrections.

  11. Extraction of the proton radius from electron-proton scattering data

    DOE PAGES

    Lee, Gabriel; Arrington, John R.; Hill, Richard J.

    2015-07-27

    We perform a new analysis of electron-proton scattering data to determine the proton electric and magnetic radii, enforcing model-independent constraints from form factor analyticity. A wide-ranging study of possible systematic effects is performed. An improved analysis is developed that rebins data taken at identical kinematic settings and avoids a scaling assumption of systematic errors with statistical errors. Employing standard models for radiative corrections, our improved analysis of the 2010 Mainz A1 Collaboration data yields a proton electric radius rE = 0.895(20) fm and magnetic radius rM = 0.776(38) fm. A similar analysis applied to world data (excluding Mainz data) implies rE = 0.916(24) fm and rM = 0.914(35) fm. The Mainz and world values of the charge radius are consistent, and a simple combination yields a value rE = 0.904(15) fm that is 4σ larger than the CREMA Collaboration muonic hydrogen determination. The Mainz and world values of the magnetic radius differ by 2.7σ, and a simple average yields rM = 0.851(26) fm. Finally, the circumstances under which published muonic hydrogen and electron scattering data could be reconciled are discussed, including a possible deficiency in the standard radiative correction model, which requires further analysis.
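
    The quoted combinations follow from simple inverse-variance weighting, and the sketch below reproduces the numbers in the abstract; the muonic hydrogen radius 0.84087(39) fm is the published CREMA value.

    ```python
    import numpy as np

    def combine(vals, errs):
        """Inverse-variance weighted mean and its uncertainty."""
        w = 1.0 / np.asarray(errs) ** 2
        return np.sum(w * np.asarray(vals)) / w.sum(), w.sum() ** -0.5

    rE, s_rE = combine([0.895, 0.916], [0.020, 0.024])  # -> 0.904(15) fm
    rM, s_rM = combine([0.776, 0.914], [0.038, 0.035])  # -> 0.851(26) fm

    # Tension with the CREMA muonic hydrogen radius, 0.84087(39) fm:
    n_sigma = (rE - 0.84087) / np.hypot(s_rE, 0.00039)  # ~4 sigma
    ```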

  12. NNLO computational techniques: The cases H→γγ and H→gg

    NASA Astrophysics Data System (ADS)

    Actis, Stefano; Passarino, Giampiero; Sturm, Christian; Uccirati, Sandro

    2009-04-01

    A large set of techniques needed to compute decay rates at the two-loop level is derived and systematized. The main emphasis of the paper is on the two Standard Model decays H→γγ and H→gg. The techniques, however, have a much wider range of application: they give practical examples of general rules for two-loop renormalization; they introduce simple recipes for handling internal unstable particles in two-loop processes; they illustrate simple procedures for the extraction of collinear logarithms from the amplitude. The latter is particularly relevant to show cancellations, e.g. cancellation of collinear divergences. Furthermore, the paper deals with the proper treatment of non-enhanced two-loop QCD and electroweak contributions to different physical (pseudo-)observables, showing how they can be transformed in a way that allows for a stable numerical integration. Numerical results for the two-loop percentage corrections to H→γγ,gg are presented and discussed. When applied to the process pp→gg+X→H+X, the results show that the electroweak scaling factor for the cross section is between -4% and +6% in the range 100 GeV

  13. Anthropometry-corrected exposure modeling as a method to improve trunk posture assessment with a single inclinometer.

    PubMed

    Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay

    2013-01-01

    Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.

  14. Simple Pixel Structure Using Video Data Correction Method for Nonuniform Electrical Characteristics of Polycrystalline Silicon Thin-Film Transistors and Differential Aging Phenomenon of Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    Hai-Jung In; Oh-Kyong Kwon

    2010-03-01

    A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.
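
    As a toy illustration of the idea (not the paper's circuit-level algorithm), per-pixel maps sensed once and stored in external memory can be used to pre-distort incoming video data; the map values, panel size, and linear correction below are all hypothetical.

    ```python
    import numpy as np

    # Hypothetical stored calibration: driving-TFT current gain variation and
    # relative OLED luminance (1.0 = fresh, < 1.0 = degraded).
    rng = np.random.default_rng(0)
    tft_gain = rng.normal(1.0, 0.05, (480, 640))
    oled_aging = np.clip(rng.normal(0.9, 0.03, (480, 640)), 0.5, 1.0)

    def correct_frame(video):
        """Modulate 8-bit video data so the emitted luminance approaches the
        target despite TFT nonuniformity and OLED aging (linear toy model)."""
        return np.clip(video / (tft_gain * oled_aging), 0, 255).astype(np.uint8)

    frame = rng.integers(0, 256, (480, 640)).astype(float)
    corrected = correct_frame(frame)
    ```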

  15. Effect of lensing non-Gaussianity on the CMB power spectra

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, Antony; Pratten, Geraint, E-mail: antony@cosmologist.info, E-mail: geraint.pratten@gmail.com

    2016-12-01

    Observed CMB anisotropies are lensed, and the lensed power spectra can be calculated accurately assuming the lensing deflections are Gaussian. However, the lensing deflections are actually slightly non-Gaussian due to both non-linear large-scale structure growth and post-Born corrections. We calculate the leading correction to the lensed CMB power spectra from the non-Gaussianity, which is determined by the lensing bispectrum. Assuming no primordial non-Gaussianity, the lowest-order result gives ∼0.3% corrections to the BB and EE polarization spectra on small scales. However, we show that the effect on EE is reduced by about a factor of two by higher-order Gaussian lensing smoothing, rendering the total effect safely negligible for the foreseeable future. We give a simple analytic model for the signal expected from skewness of the large-scale lensing field; the effect is similar to a net demagnification and hence a small change in acoustic scale (and therefore out of phase with the dominant lensing smoothing that predominantly affects the peaks and troughs of the power spectrum).

  16. SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, R; Wang, J; Zhong, H

    2016-06-15

    Purpose: To investigate whether CBCT and CT can be used in radiomics analysis simultaneously, and to establish a batch correction method for radiomics in two similar image modalities. Methods: Four sites including rectum, bladder, femoral head and lung were considered as regions of interest (ROIs) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images from the same site of the same patient were acquired at the first radiotherapy fraction. 253 radiomics features, which were selected by our test-retest study of rectum cancer CT (ICC>0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT data sets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency of the clustering result and site grouping was verified by a chi-square test for the different datasets. Results: For simple scaling, 234 of the 253 features have correlation coefficient ρ>0.8, among which 154 features have ρ>0.9. For radiomics data after nonlinear correction, 240 of the 253 features have ρ>0.8, among which 220 features have ρ>0.9. Cluster analysis of mixed data shows that data from the four sites were almost precisely separated for simple scaling (p=1.29 × 10⁻⁷, χ² test) and nonlinear correction (p=5.98 × 10⁻⁷, χ² test), similar to the cluster result for CT data alone (p=4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
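
    A minimal sketch of the "simple scaling" (z-score) batch correction, assuming feature matrices with one row per patient and one column per radiomics feature; the nonlinear variant would replace the linear rescaling with a fitted mapping.

    ```python
    import numpy as np

    def scale_cbct_to_ct(cbct, ct):
        """z-score the CBCT features, then rescale them to the CT feature
        distribution so both modalities live on a common scale."""
        z = (cbct - cbct.mean(axis=0)) / cbct.std(axis=0)
        return z * ct.std(axis=0) + ct.mean(axis=0)

    # cbct_feats, ct_feats: (n_patients, 253) arrays of radiomics features
    # merged = np.vstack([ct_feats, scale_cbct_to_ct(cbct_feats, ct_feats)])
    ```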

  17. Auto Emission Testing

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The photos show automobile engines being tested for oxides of nitrogen (NOx) emissions, as required by the Environmental Protection Agency (EPA), at the Research and Engineering Division of Ford Motor Company, Dearborn, Michigan. NASA technical information helped the company develop a means of calculating emissions test results. NOx emission readings vary with relative humidity in the test facility. EPA uses a standard humidity measurement, but the agency allows manufacturers to test under different humidity conditions, then apply a correction factor to adjust the results to the EPA standard. NASA's Dryden Flight Research Center developed analytic equations which provide a simple, computer-programmable method of correcting for humidity variations. A Ford engineer read a NASA Tech Brief describing the Dryden development and requested more detailed information in the form of a technical support package, which NASA routinely supplies to industry on request. Ford's Emissions Test Laboratory now uses the Dryden equations for humidity-adjusted emissions data reported to EPA.
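
    For context, the NOx humidity correction used in EPA light-duty testing commonly takes the form below (40 CFR part 86), adjusting measured values to the 75 grains-per-pound standard humidity; the Dryden contribution described here was a set of analytic, computer-programmable equations for performing such corrections, whose exact form may differ.

    ```python
    def kh_nox(h_grains_per_lb):
        """NOx humidity correction factor relative to the 75 gr/lb standard."""
        return 1.0 / (1.0 - 0.0047 * (h_grains_per_lb - 75.0))

    # Example: a reading of 1.23 g/mi taken at 90 gr/lb absolute humidity
    nox_standard = 1.23 * kh_nox(90.0)   # adjusted to EPA standard conditions
    ```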

  18. [The property and applications of the photovoltaic solar panel in the region of diagnostic X-ray].

    PubMed

    Hirota, Jun'ichi; Tarusawa, Kohetsu; Kudo, Kohsei

    2010-10-20

    In this study, the sensitivity in the diagnostic X-ray region of the single-crystalline Si photovoltaic solar panel, a technology expected to grow further, was measured using an X-ray tube. The output voltage of the solar panel was clearly proportional to the tube voltage, and a good time response to the irradiation time setting of the tube was measured. The factor which converts measured voltage to irradiation dose was extracted experimentally using a correction filter, to investigate the ability of the solar panel as a dose monitor. The obtained conversion factors were N(S) = 13 ± 1 µV/µSv/s for the serially connected and N(P) = 58 ± 2 µV/µSv/s for the parallel-connected solar panels, both with the 1 mm Al + 0.1 mm Cu correction filter. A good dose dependence of the conversion factor was then confirmed by varying the distance between the X-ray tube and the solar panel with that filter. In conclusion, a simple extension of our results points to new measurement concepts using the photovoltaic solar panel, such as direct dose measurement at the X-ray tube and real-time estimation of exposed dose in interventional radiology (IVR).

  19. Refined Use of Satellite Aerosol Optical Depth Snapshots to Constrain Biomass Burning Emissions in the GOCART Model

    NASA Technical Reports Server (NTRS)

    Petrenko, Mariya; Kahn, Ralph; Chin, Mian; Limbacher, James

    2017-01-01

    Simulations of biomass burning (BB) emissions in global chemistry and aerosol transport models depend on external inventories, which provide the location and strength of burning aerosol sources. Our previous work (Petrenko et al., 2012) shows that satellite snapshots of aerosol optical depth (AOD) near the emitted smoke plume can be used to constrain model-simulated AOD and, effectively, the assumed source strength. We now refine the satellite-snapshot method and investigate whether applying simple multiplicative emission correction factors to the widely used Global Fire Emission Database version 3 (GFEDv3) emission inventory can achieve regional-scale consistency between MODIS AOD snapshots and the Goddard Chemistry Aerosol Radiation and Transport (GOCART) model. The model and satellite AOD are compared over a set of more than 900 BB cases observed by the MODIS instrument during the 2004 and 2006-2008 biomass burning seasons. The AOD comparison presented here shows that regional discrepancies between the model and satellite are diverse around the globe yet quite consistent within most ecosystems. Additional analysis including a small-fire emission correction shows the complementary nature of correcting for source strength and adding missing sources, and also indicates that in some regions other factors may be significant in explaining model-satellite discrepancies. This work sets the stage for a larger intercomparison within the Aerosol Inter-comparisons between Observations and Models (AeroCom) multi-model biomass burning experiment. We discuss here some of the other possible factors affecting the remaining discrepancies between model simulations and observations, but await comparisons with other AeroCom models to draw further conclusions.
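
    The multiplicative constraint itself is simple; as a sketch (variable names and values hypothetical), the per-case correction factor is the ratio of observed to modeled AOD near the plume, applied back to the inventoried source strength:

    ```python
    def emission_correction(gfed_emission, aod_modis, aod_gocart):
        """Scale a GFEDv3 source so the modeled AOD matches the MODIS snapshot."""
        cf = aod_modis / aod_gocart          # multiplicative correction factor
        return cf * gfed_emission, cf

    corrected, cf = emission_correction(gfed_emission=5.2e3,  # hypothetical units
                                        aod_modis=0.8, aod_gocart=0.5)
    ```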

  20. Alteration of a motor learning rule under mirror-reversal transformation does not depend on the amplitude of visual error.

    PubMed

    Kasuga, Shoko; Kurata, Makiko; Liu, Meigen; Ushiba, Junichi

    2015-05-01

    The human motor learning system paradoxically interferes with motor performance when visual information is mirror-reversed (MR), because normal movement error correction further aggravates the error. This error-increasing mechanism makes performing even a simple reaching task difficult, but it can be overcome by altering the error-correction rule during the trials. To isolate factors that trigger learners to change the error-correction rule, we manipulated the gain of visual angular errors when participants made arm-reaching movements with mirror-reversed visual feedback, and compared the timing of the rule alteration between groups with normal or reduced gain. Trial-by-trial changes in the visual angular error were tracked to explain the timing of the change in the error-correction rule. Under both gain conditions, visual angular errors increased under the MR transformation and suddenly decreased after 3-5 trials of increase. The increase slowed at a different amplitude in each group, nearly proportional to the visual gain. The findings suggest that the alteration of the error-correction rule does not depend on the amplitude of visual angular errors, and is possibly determined by the number of trials over which the errors increased or by statistical properties of the environment. The current results encourage future intensive studies focusing on the exact rule-change mechanism. Copyright © 2014 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  1. The gravitational self-interaction of the Earth's tidal bulge

    NASA Astrophysics Data System (ADS)

    Norsen, Travis; Dreese, Mackenzie; West, Christopher

    2017-09-01

    According to a standard, idealized analysis, the Moon would produce a 54 cm equilibrium tidal bulge in the Earth's oceans. This analysis omits many factors (beyond the scope of the simple idealized model) that dramatically influence the actual height and timing of the tides at different locations, but it is nevertheless an important foundation for more detailed studies. Here, we show that the standard analysis also omits another factor—the gravitational interaction of the tidal bulge with itself—which is entirely compatible with the simple, idealized equilibrium model and which produces a surprisingly non-trivial correction to the predicted size of the tidal bulge. Our analysis uses ideas and techniques that are familiar from electrostatics, and should thus be of interest to teachers and students of undergraduate E&M, Classical Mechanics (and/or other courses that cover the tides), and geophysics courses that cover the closely related topic of Earth's equatorial bulge.
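
    A plausible back-of-the-envelope version of such a self-attraction correction: for a thin uniform ocean on a rigid spherical Earth, self-gravitation amplifies the equilibrium tide by 1/(1 - 3ρw/5ρ̄). This is a standard textbook result; whether it matches the paper's exact coefficient is an assumption.

    ```python
    rho_w = 1025.0    # seawater density, kg/m^3
    rho_e = 5510.0    # mean Earth density, kg/m^3
    h0 = 0.54         # standard equilibrium bulge without self-attraction, m

    amplification = 1.0 / (1.0 - 3.0 * rho_w / (5.0 * rho_e))
    print(amplification, h0 * amplification)   # ~1.13, ~0.61 m
    ```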

  2. Bending and buckling formulation of graphene sheets based on nonlocal simple first-order shear deformation theory

    NASA Astrophysics Data System (ADS)

    Golmakani, M. E.; Malikan, M.; Sadraee Far, M. N.; Majidi, H. R.

    2018-06-01

    This paper presents a formulation based on simple first-order shear deformation theory (S-FSDT) for large deflection and buckling of orthotropic single-layered graphene sheets (SLGSs). The S-FSDT has many advantages over the classical plate theory (CPT) and conventional FSDT, such as not requiring a shear correction factor, containing fewer unknowns than the existing FSDT, and retaining strong similarities with the CPT. Governing equations and boundary conditions are derived based on Hamilton's principle using the nonlocal differential constitutive relations of Eringen and the von Kármán geometrical model. Numerical results are obtained using the differential quadrature (DQ) method and the Newton-Raphson iterative scheme. Finally, comparison studies are carried out to show the high accuracy and reliability of the present formulations relative to the nonlocal CPT and FSDT for different thicknesses, elastic foundations and nonlocal parameters.

  3. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

    Reported quantities include octave-band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, and overall in-helmet dB(A) levels.

  4. Computational and Matrix Isolation Studies of (2- and 3-Furyl)methylene

    DTIC Science & Technology

    1994-01-01

    Simple HF calculations using the 6-31G basis set with zero-point energy (ZPE) corrections predict 2.2 to be more stable in both ... QCISD(T)/6-311G** + ZPE predicts the triplet to be more stable by 2.9 kcal/mol, whereas MP4SDTQ/6-311G + ZPE predicts the singlet to be more stable. Calculated frequencies were scaled by a factor of 0.9. [Table 2.30 gives the calculated ZPE of 2-oxabicyclo[3.1.0]hexa-3,5-diene as 49.9 kcal/mol.]

  5. Estimation of surface temperature in remote pollution measurement experiments

    NASA Technical Reports Server (NTRS)

    Gupta, S. K.; Tiwari, S. N.

    1978-01-01

    A simple algorithm has been developed for estimating the actual surface temperature by applying corrections to the effective brightness temperature measured by radiometers mounted on remote sensing platforms. Corrections to effective brightness temperature are computed using an accurate radiative transfer model for a 'basic atmosphere' and for several modifications of it caused by deviations of the various atmospheric and surface parameters from their base-model values. Model calculations are employed to establish simple analytical relations between the deviations of these parameters and the additional temperature corrections required to compensate for them. Effects of the simultaneous variation of two parameters are also examined. Using these analytical relations instead of detailed radiative transfer calculations for routine data analysis results in a severalfold reduction in computation costs.

  6. Relativistic Corrections to the Bohr Model of the Atom

    ERIC Educational Resources Information Center

    Kraft, David W.

    1974-01-01

    Presents a simple means for extending the Bohr model to include relativistic corrections using a derivation similar to that for the non-relativistic case, except that the relativistic expressions for mass and kinetic energy are employed. (Author/GS)
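
    One compact reconstruction of such a derivation (assuming circular orbits with relativistic momentum; not the article's own text): force balance γmv²/r = Ze²/(4πε₀r²) combined with the quantization condition L = γmvr = nħ gives β = Zα/n, and the total energy becomes

    ```latex
    E_n = (\gamma - 1)mc^2 - \frac{Ze^2}{4\pi\epsilon_0 r}
        = mc^2\left[\sqrt{1 - \left(\frac{Z\alpha}{n}\right)^{2}} - 1\right]
        \approx -\frac{(Z\alpha)^2 mc^2}{2n^2}
          \left[1 + \frac{(Z\alpha)^2}{4n^2} + \cdots\right].
    ```

    The leading term is the usual Bohr energy; the next term is the relativistic correction for circular orbits (the full Sommerfeld/Dirac fine structure additionally depends on the angular quantum number).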

  7. Titan's Surface Composition from Cassini VIMS Solar Occultation Observations

    NASA Astrophysics Data System (ADS)

    McCord, Thomas; Hayne, Paul; Sotin, Christophe

    2013-04-01

    Titan's surface is obscured by a thick absorbing and scattering atmosphere, allowing direct observation of the surface within only a few spectral windows in the near-infrared and complicating efforts to identify and map geologically important materials using remote sensing IR spectroscopy. We therefore investigate the atmosphere's infrared transmission with direct measurements using Titan's occultation of the Sun, as well as Titan's reflectance measured at differing illumination and observation angles, observed by Cassini's Visual and Infrared Mapping Spectrometer (VIMS). We use two important spectral windows: the 2.7-2.8-µm "double window" and the broad 5-µm window. By estimating atmospheric attenuation within these windows, we seek an empirical correction factor that can be applied to VIMS measurements to estimate the true surface reflectance and map inferred compositional variations. Applying the empirical corrections, we correct the VIMS data for the viewing geometry-dependent atmospheric effects to derive the 5-µm reflectance and the 2.8/2.7-µm reflectance ratio. We then compare the corrected reflectances to compounds proposed to exist on Titan's surface. We propose a simple correction to VIMS Titan data to account for atmospheric attenuation and diffuse scattering in the 5-µm and 2.7-2.8-µm windows, generally applicable for airmass < 3.0. The narrow 2.75-µm absorption feature, dividing the window into two sub-windows and present in all on-planet measurements, is not present in the occultation data, and its strength is reduced at the cloud tops, suggesting the responsible molecule is concentrated in the lower troposphere or on the surface. Our empirical correction to Titan's surface reflectance yields properties shifted closer to water ice for the majority of the low-to-mid latitude area covered by VIMS measurements. Four compositional units are defined and mapped on Titan's surface based on the positions of data clusters in 5-µm vs. 2.8/2.7-µm scatter plots; a simple ternary mixture of H2O, hydrocarbons and CO2 might explain the reflectance properties of these surface units. The vast equatorial "dune seas" are compositionally very homogeneous, perhaps suggesting transport and mixing of particles over very large distances and/or a very consistent formation process and source material. The compositional branch characterizing Tui Regio and Hotei Regio is consistent with a mixture of typical Titan hydrocarbons and CO2, or possibly methane/ethane; the proposed concentration mechanism is something similar to a terrestrial playa-lake evaporite deposit, based on the fact that river channels are known to feed into at least Hotei Regio.

  8. Insights into Inpatients with Poor Vision: A High Value Proposition

    PubMed Central

    Press, Valerie G.; Matthiesen, Madeleine I.; Ranadive, Alisha; Hariprasad, Seenu M.; Meltzer, David O.; Arora, Vineet M.

    2015-01-01

    Background: Vision impairment is an under-recognized risk factor for adverse events among hospitalized patients, yet vision is neither routinely tested nor documented for inpatients. Low-cost ($8 and up) non-prescription 'readers' may be a simple, high-value intervention to improve inpatients' vision. We aimed to study the initial feasibility and efficacy of screening and correcting inpatients' vision. Methods: From June 2012 through January 2014, research assistants (RAs) screened participants' vision with a Snellen chart and tested whether non-prescription lenses corrected the vision of eligible participants who failed the screen. Descriptive statistics and tests of comparison, including t-tests and chi-squared tests, were used when appropriate. All analyses were performed using Stata version 12 (StataCorp, College Station, TX). Results: Over 800 participants' vision was screened (n=853). Older (≥65 years; 56%) participants were more likely to have insufficient vision than younger (<65 years; 28%; p<0.001) participants. Non-prescription readers corrected the majority of eligible participants' vision (82%, 95/116). Discussion: Among an easily identified sub-group of inpatients with poor vision, low-cost 'readers' successfully corrected most participants' vision. Hospitalists and other clinicians working in the inpatient setting can play an important role in identifying opportunities to provide high-value care related to patients' vision. PMID:25755206

  9. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting more and more research into its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty severely limit the further utilization of CT for dimensional metrology due to many factors, among which the beam hardening (BH) effect plays a vital role. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a punishment term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is of general feasibility.
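
    A rough sketch of entropy-minimizing beam-hardening correction in the spirit described above; the one-parameter correction family, the filtered back-projection stub, and the contrast "punishment" weight are stand-ins, not the authors' exact exponential model.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from skimage.transform import iradon   # filtered back-projection

    def gray_entropy(img, bins=256):
        hist, _ = np.histogram(img, bins=bins)
        p = hist[hist > 0].astype(float)
        p /= p.sum()
        return -np.sum(p * np.log(p))

    def cost(params, sino, theta, ref_contrast, lam=1.0):
        a, b = params
        rec = iradon(sino + a * sino ** b, theta=theta)   # corrected projections
        # The entropy term sharpens the gray-value clusters; the penalty term
        # keeps overall contrast (and threshold-based dimensions) stable.
        return gray_entropy(rec) + lam * abs(rec.std() - ref_contrast)

    # sino: measured sinogram (n_detector_bins x n_angles); theta: angles in degrees
    # rec0 = iradon(sino, theta=theta)
    # res = minimize(cost, x0=[0.0, 2.0], args=(sino, theta, rec0.std()),
    #                method='Nelder-Mead')
    ```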

  10. The serial use of child neurocognitive tests: development versus practice effects.

    PubMed

    Slade, Peter D; Townes, Brenda D; Rosenbaum, Gail; Martins, Isabel P; Luis, Henrique; Bernardo, Mario; Martin, Michael D; Derouen, Timothy A

    2008-12-01

    When serial neurocognitive assessments are performed, 2 main factors are of importance: test-retest reliability and practice effects. With children, however, there is a third, developmental factor, which occurs as a result of maturation. Child tests recognize this factor through the provision of age-corrected scaled scores. Thus, a ready-made method for estimating the relative contribution of developmental versus practice effects is the comparison of raw (developmental and practice) and scaled (practice only) scores. Data from a pool of 507 Portuguese children enrolled in a study of dental amalgams (T. A. DeRouen, B. G. Leroux, et al., 2002; T. A. DeRouen, M. D. Martin, et al., 2006) showed that practice effects over a 5-year period varied on 8 neurocognitive tests. Simple regression equations are provided for calculating individual retest scores from initial test scores. (c) 2008 APA, all rights reserved.

  11. A new dual-collimation batch reactor for determination of ultraviolet inactivation rate constants for microorganisms in aqueous suspensions

    PubMed Central

    Martin, Stephen B.; Schauer, Elizabeth S.; Blum, David H.; Kremer, Paul A.; Bahnfleth, William P.; Freihaut, James D.

    2017-01-01

    We developed, characterized, and tested a new dual-collimation aqueous UV reactor to improve the accuracy and consistency of aqueous k-value determinations. This new system is unique because it collimates UV energy from a single lamp in two opposite directions. The design provides two distinct advantages over traditional single-collimation systems: 1) real-time UV dose (fluence) determination; and 2) simple actinometric determination of a reactor factor that relates measured irradiance levels to actual irradiance levels experienced by the microbial suspension. This reactor factor replaces three of the four typical correction factors required for single-collimation reactors. Using this dual-collimation reactor, Bacillus subtilis spores demonstrated inactivation following the classic multi-hit model with k = 0.1471 cm2/mJ (with 95% confidence bounds of 0.1426 to 0.1516). PMID:27498232

  12. Neuropsychological functioning in older people with type 2 diabetes: the effect of controlling for confounding factors.

    PubMed

    Asimakopoulou, K G; Hampson, S E; Morrish, N J

    2002-04-01

    Neuropsychological functioning was examined in a group of 33 older (mean age 62.40 +/- 9.62 years) people with Type 2 diabetes (Group 1) and 33 non-diabetic participants matched with Group 1 on age, sex, premorbid intelligence, and presence of hypertension and cardio/cerebrovascular conditions (Group 2). Data from the diabetic group, statistically corrected for confounding factors, were compared with data from the matched control group. The results suggested small cognitive deficits in the diabetic participants' verbal memory and mental flexibility (Logical Memory A and SS7). No differences were seen between the two samples in simple and complex visuomotor attention, sustained complex visual attention, attention efficiency, mental double tracking, implicit memory, and self-reported memory problems. These findings indicate minimal cognitive impairment in relatively uncomplicated Type 2 diabetes and demonstrate the importance of control and matching for confounding factors.

  13. The quantitative role of flexor sheath incision in correcting Dupuytren proximal interphalangeal joint contractures.

    PubMed

    Blazar, P E; Floyd, E W; Earp, B E

    2016-07-01

    Controversy exists regarding intra-operative treatment of residual proximal interphalangeal joint contractures after Dupuytren's fasciectomy. We test the hypothesis that a simple release of the digital flexor sheath can correct residual fixed flexion contracture after subtotal fasciectomy. We prospectively enrolled 19 patients (22 digits) with Dupuytren's contracture of the proximal interphalangeal joint. The average pre-operative extension deficit of the proximal interphalangeal joints was 58° (range 30-90). The flexion contracture of the joint was corrected to an average of 28° after fasciectomy. In most digits (20 of 21), subsequent incision of the flexor sheath further corrected the contracture by an average of 23°, resulting in correction to an average flexion contracture of 4.7° (range 0-40). Our results support that contracture of the tendon sheath is a contributor to Dupuytren's contracture of the joint and that sheath release is a simple, low morbidity addition to correct Dupuytren's contractures of the proximal interphalangeal joint. Additional release of the proximal interphalangeal joint after fasciectomy, after release of the flexor sheath, is not necessary in many patients. IV (Case Series, Therapeutic). © The Author(s) 2015.

  14. A probabilistic model for deriving soil quality criteria based on secondary poisoning of top predators. I. Model description and uncertainty analysis.

    PubMed

    Traas, T P; Luttik, R; Jongbloed, R H

    1996-08-01

    In previous studies, the risk of toxicant accumulation in food chains was used to calculate quality criteria for surface water and soil. A simple algorithm was used to calculate maximum permissible concentrations [MPC = no-observed-effect concentration/bioconcentration factor (NOEC/BCF)]. These studies were limited to simple food chains. This study presents a method to calculate MPCs for more complex food webs of predators, expanding the previous method. First, toxicity data (NOECs) for several compounds were corrected for differences between laboratory animals and animals in the wild. Second, for each compound, these NOECs were assumed to be a sample from a log-logistic distribution of mammalian and avian NOECs. Third, bioaccumulation factors (BAFs) for major food items of predators were collected and were assumed to derive from different log-logistic distributions of BAFs. Fourth, MPCs for each compound were calculated using Monte Carlo sampling from the NOEC and BAF distributions. An uncertainty analysis for cadmium was performed to identify the most uncertain parameters of the model. Model analysis indicated that most of the prediction uncertainty of the model can be ascribed to the uncertainty in species sensitivity as expressed by the NOECs. A very small proportion of model uncertainty is contributed by BAFs from food webs. Correction factors for the conversion of NOECs from laboratory conditions to the field have some influence on the final value of the MPC5, but the total prediction uncertainty of the MPC is quite large. It is concluded that the uncertainty in species sensitivity is quite large; to avoid unethical toxicity testing with mammalian or avian predators, this uncertainty cannot be avoided in the proposed method for calculating MPC distributions. The fifth percentile of the MPC distribution (MPC5) is suggested as a safe value for top predators.
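
    A minimal sketch of the Monte Carlo step described above, with made-up log-logistic parameters (the paper fits these distributions to the corrected NOECs and the food-web BAFs):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def log_logistic(median, shape_b, size):
        """Draw from a log-logistic distribution: median * (U/(1-U))**shape_b."""
        u = rng.uniform(size=size)
        return median * (u / (1.0 - u)) ** shape_b

    n = 100_000
    noec = log_logistic(median=1.0, shape_b=0.4, size=n)  # corrected NOECs (hypothetical)
    baf = log_logistic(median=2.0, shape_b=0.3, size=n)   # food-web BAFs (hypothetical)

    mpc = noec / baf                 # MPC = NOEC / BAF, sampled per draw
    mpc5 = np.percentile(mpc, 5)     # fifth percentile, the proposed safe value
    ```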

  15. Open EFTs, IR effects & late-time resummations: systematic corrections in stochastic inflation

    DOE PAGES

    Burgess, C. P.; Holman, R.; Tasinato, G.

    2016-01-26

    Though simple inflationary models describe the CMB well, their corrections are often plagued by infrared effects that obstruct a reliable calculation of late-time behaviour. Here we adapt to cosmology tools designed to address similar issues in other physical systems, with the goal of making reliable late-time inflationary predictions. The main such tool is Open EFTs, which reduce in the inflationary case to Stochastic Inflation plus calculable corrections. We apply this to a simple inflationary model that is complicated enough to have dangerous IR behaviour yet simple enough to allow the inference of late-time behaviour. We find corrections to standard Stochastic Inflationary predictions for the noise and drift, and we find these corrections ensure the IR finiteness of both these quantities. The late-time probability distribution, P(Φ), for super-Hubble field fluctuations is obtained as a function of the noise and drift, and so it too is IR finite. We compare our results to other methods (such as large-N models) and find they agree when these models are reliable. In all cases we can explore in detail, we find IR secular effects describe the slow accumulation of small perturbations to give a big effect: a significant distortion of the late-time probability distribution for the field. But the energy density associated with this is only of order H⁴ at late times and so does not generate a dramatic gravitational back-reaction.

  16. Proof of concept of a simple computer-assisted technique for correcting bone deformities.

    PubMed

    Ma, Burton; Simpson, Amber L; Ellis, Randy E

    2007-01-01

    We propose a computer-assisted technique for correcting bone deformities using the Ilizarov method. Our technique is an improvement over prior art in that it does not require a tracking system, navigation hardware and software, or intraoperative registration. Instead, we rely on a postoperative CT scan to obtain all of the information necessary to plan the correction and compute a correction schedule for the patient. Our laboratory experiments using plastic phantoms produced deformity corrections accurate to within 3.0 degrees of rotation and 1 mm of lengthening.

  17. Hexa Helix: Modified Quad Helix Appliance to Correct Anterior and Posterior Crossbites in Mixed Dentition

    PubMed Central

    Yaseen, Syed Mohammed; Acharya, Ravindranath

    2012-01-01

    The crossbite is among the most commonly encountered dental irregularities constituting a developing malocclusion. Crossbites are seen very often during the primary and mixed dentition phases, and if left untreated during these phases, a simple problem may be transformed into a more complex one. Different techniques have been used to correct anterior and posterior crossbites in the mixed dentition. This case report describes the use of the hexa helix, a modified version of the quad helix, for the management of an anterior crossbite and bilateral posterior crossbite in the early mixed dentition. Correction was achieved within 15 weeks with no damage to the tooth or the marginal periodontal tissue. The procedure is a simple and effective method for treating anterior and bilateral posterior crossbites simultaneously. PMID:23119188

  18. THE CALCULATION OF BURNABLE POISON CORRECTION FACTORS FOR PWR FRESH FUEL ACTIVE COLLAR MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croft, Stephen; Favalli, Andrea; Swinhoe, Martyn T.

    2012-06-19

    Verification of commercial low enriched uranium light water reactor fuel takes place at the fuel fabrication facility as part of the overall international nuclear safeguards solution to the civilian use of nuclear technology. The fissile mass per unit length is determined nondestructively by active neutron coincidence counting using a neutron collar. A collar comprises four slabs of high density polyethylene that surround the assembly. Three of the slabs contain ³He-filled proportional counters to detect time-correlated fission neutrons induced by an AmLi source placed in the fourth slab. Historically, the response of a particular collar design to a particular fuel assembly type has been established by careful cross-calibration to experimental absolute calibrations. Traceability exists to sources and materials held at Los Alamos National Laboratory for over 35 years. This simple yet powerful approach has ensured consistency of application. Since the 1980s there has been a steady improvement in fuel performance. The trend has been to higher burn-up. This requires the use of both higher initial enrichment and greater concentrations of burnable poisons. The original analytical relationships to correct for varying fuel composition are consequently being challenged, because the experimental basis for them made use of fuels of lower enrichment and lower poison content than is in use today and is envisioned for use in the near term. Thus a reassessment of the correction factors is needed. Experimental reassessment is expensive and time consuming given the great variation between fuel assemblies in circulation. Fortunately, current modeling methods enable relative response functions to be calculated with high accuracy. Hence modeling provides a more convenient and cost-effective means to derive correction factors which are fit for purpose with confidence. In this work we use the Monte Carlo code MCNPX with neutron coincidence tallies to calculate the influence of Gd₂O₃ burnable poison on the measurement of fresh pressurized water reactor fuel. To empirically determine the response function over the range of historical and future use, we have considered enrichments up to 5 wt% ²³⁵U/U and Gd weight fractions of up to 10% Gd/UO₂. Parameterized correction factors are presented.

  19. Our method of correcting cryptotia.

    PubMed

    Yanai, A; Tange, I; Bandoh, Y; Tsuzuki, K; Sugino, H; Nagata, S

    1988-12-01

    Our technique for the correction of cryptotia using both Z-plasty and the advancement flap is described. The main advantages are the simple design of the skin incision and the possibility of its application to cryptotia other than severe cartilage deformity and extreme lack of skin.

  1. Further applications of Archimedes' principle in the correction of asymmetrical breasts.

    PubMed

    Schultz, R C; Dolezal, R F; Nolan, J

    1986-02-01

    Archimedes' law of buoyancy has been extended to the preoperative bedside assessment of volume differences between breasts, whatever their cause. The simple method described has proved to be a helpful aid in surgical procedures for the correction of breast asymmetry.

  2. Endoscopic-assisted osteotomies for the treatment of craniosynostosis.

    PubMed

    Hinojosa, J; Esparza, J; Muñoz, M J

    2007-12-01

    The development of multidisciplinary units for craniofacial surgery has led to better postoperative results and a considerable decrease in morbidity in the treatment of complex craniofacial patients. Standard correction of craniosynostosis involves calvarial remodeling, often considerable blood losses that need to be replaced, and a lengthy hospital stay. The use of minimally invasive techniques for the correction of some of these malformations is widespread and allows the surgeon to minimize the incidence of complications by means of decreased surgical time, blood salvage, and a shorter postoperative hospitalization in comparison with conventional craniofacial techniques. Simple and milder craniosynostoses are best approached by endoscopy-assisted osteotomies and render the best results. Extended procedures beyond simple suturectomies have been described for more severely affected patients. Different osteotomies resembling the standard fronto-orbital ones have been developed for the correction, and the use of a postoperative cranial orthosis may improve the final cosmetic appearance. Thus, endoscopic-assisted procedures differ from the simple strategy of single-suture resection that rendered insufficient results in the past, and different approaches can be tailored to solve these cases on a case-by-case basis.

  3. RNA structure in splicing: An evolutionary perspective.

    PubMed

    Lin, Chien-Ling; Taggart, Allison J; Fairbrother, William G

    2016-09-01

    Pre-mRNA splicing is a key post-transcriptional regulation process in which introns are excised and exons are ligated together. A novel class of structured intron was recently discovered in fish. Simple expansions of complementary AC and GT dimers at opposite boundaries of an intron were found to form a bridging structure, thereby enforcing correct splice site pairing across the intron. In some fish introns, the RNA structures are strong enough to bypass the need of regulatory protein factors for splicing. Here, we discuss the prevalence and potential functions of highly structured introns. In humans, structured introns usually arise through the co-occurrence of C and G-rich repeats at intron boundaries. We explore the potentially instructive example of the HLA receptor genes. In HLA pre-mRNA, structured introns flank the exons that encode the highly polymorphic β sheet cleft, making the processing of the transcript robust to variants that disrupt splicing factor binding. While selective forces that have shaped HLA receptor are fairly atypical, numerous other highly polymorphic genes that encode receptors contain structured introns. Finally, we discuss how the elevated mutation rate associated with the simple repeats that often compose structured intron can make structured introns themselves rapidly evolving elements.

  4. Evaluation of dead-time corrections for post-radionuclide-therapy (177)Lu quantitative imaging with low-energy high-resolution collimators.

    PubMed

    Celler, Anna; Piwowarska-Bilska, Hanna; Shcherbinin, Sergey; Uribe, Carlos; Mikolajczak, Renata; Birkenfeld, Bozena

    2014-01-01

    Dead-time (DT) effects rarely cause problems in diagnostic single-photon emission computed tomography (SPECT) studies; however, in post-radionuclide-therapy imaging, DT can be substantial. Therefore, corrections may be necessary if quantitative images are used in image-based dosimetry or for evaluation of therapy outcomes. This task is particularly challenging if low-energy collimators are used. Our goal was to design a simple method to determine the dead-time correction factor (DTCF) without the need for phantom experiments and complex calculations. Planar and SPECT/CT scans of a water phantom containing a 70 ml bottle filled with lutetium-177 (177Lu) were acquired over 60 days. Two small 177Lu markers were used in all scans. The DTCF based on the ratio of observed to true count rates measured over the entire spectrum and using photopeak primary photons only was estimated for phantom (DT present) and marker (no DT) scans. In addition, variations in counts in SPECT projections (potentially caused by varying bremsstrahlung and scatter) were investigated. For count rates that were about two-fold higher than typically seen in post-therapy 177Lu scans, the maximum DTCF reached a level of about 17%. The DTCF values determined directly from the phantom experiments using the total energy spectrum and photopeak counts only were equal to 13 and 16%, respectively. They were closely matched by those from the proposed marker-based method, which uses only two energy windows and measures photopeak primary photons (15-17%). A simple, marker-based method allowing for determination of the DTCF in high-activity 177Lu imaging studies has been proposed and validated using phantom experiments.
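
    In essence the marker-based method reduces to a count-rate ratio; a minimal sketch with hypothetical rates:

    ```python
    def dtcf_from_marker(marker_rate_ref, marker_rate_obs):
        """Dead-time correction factor from a small marker of fixed activity.

        marker_rate_ref: photopeak count rate of the marker imaged alone
                         (negligible dead time)
        marker_rate_obs: count rate of the same marker seen during the
                         high-activity scan (dead time present)
        """
        return marker_rate_ref / marker_rate_obs

    # Hypothetical decay-corrected rates (counts/s):
    corrected_counts = 1.0e6 * dtcf_from_marker(120.5, 104.8)  # ~15% correction
    ```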

  5. Experimental investigation of factors affecting the absolute recovery coefficients in iodine-124 PET lesion imaging

    NASA Astrophysics Data System (ADS)

    Jentzen, Walter

    2010-04-01

    The use of recovery coefficients (RCs) in 124I PET lesion imaging is a simple method to correct the imaged activity concentration (AC) primarily for the partial-volume effect and, to a minor extent, for the prompt gamma coincidence effect. The aim of this phantom study was to experimentally investigate a number of factors affecting the 124I RCs. Three RC-based correction approaches were considered. These approaches differ with respect to the volume of interest (VOI) drawn, which determines the imaged AC and the RCs: a single voxel VOI containing the maximum value (maximum RC), a spherical VOI with a diameter of the scanner resolution (resolution RC) and a VOI equaling the physical object volume (isovolume RC). Measurements were performed mainly using a stand-alone PET scanner (EXACT HR+) and a latest-generation PET/CT scanner (BIOGRAPH mCT). The RCs were determined using a cylindrical phantom containing spheres or rotational ellipsoids and were derived from images acquired with a reference acquisition protocol. For each type of RC, the influence of the following factors on the RC was assessed: object shape, background activity spill-in, and iterative image reconstruction parameters. To evaluate the robustness of the RC-based correction approaches, the percentage deviation between RC-corrected and true ACs was determined from images acquired with a clinical acquisition protocol of different AC regimes. The observed results of the shape and spill-in effects were compared with simulation data derived from a convolution-based model. The study demonstrated that the shape effect was negligible and, therefore, was in agreement with theoretical expectations. In contradiction to the simulation results, the observed spill-in effect was unexpectedly small. To avoid variations in the determination of RCs due to reconstruction parameter changes, image reconstruction with a pixel length of about one-third or less of the scanner resolution and an OSEM 1 × 32 algorithm, or one with a somewhat higher number of effective iterations, is recommended. Using the clinical acquisition protocol, the phantom study indicated that the resolution- or isovolume-based recovery-correction approaches appeared to be more appropriate to recover the ACs from patient data; however, the application of the three RC-based correction approaches to small lesions containing low ACs was, in particular, associated with large underestimations. The phantom study had several limitations, which were discussed in detail.

  6. Average luminosity distance in inhomogeneous universes

    NASA Astrophysics Data System (ADS)

    Kostov, Valentin Angelov

    Using numerical ray tracing, this study examines how the average distance modulus in an inhomogeneous universe differs from its homogeneous counterpart. The averaging is over all directions from a fixed observer, not over all possible observers (cosmic averaging), and is thus more directly applicable to our observations. Unlike previous studies, the averaging is exact, non-perturbative, and includes all possible non-linear effects. The inhomogeneous universes are represented by Swiss-cheese models containing random and simple cubic lattices of mass-compensated voids. The Earth observer is in the homogeneous cheese, which has an Einstein-de Sitter metric. For the first time, the averaging is widened to include the supernovae inside the voids by assuming the probability for supernova emission from any comoving volume is proportional to the rest mass in it. For voids aligned in a certain direction, there is a cumulative gravitational lensing correction to the distance modulus that increases with redshift. That correction is present even for small voids and depends on the density contrast of the voids, not on their radius. Averaging over all directions destroys the cumulative correction even in a non-randomized simple cubic lattice of voids. Despite the well-known argument for photon flux conservation, the average distance modulus correction at low redshifts is not zero, due to the peculiar velocities. A formula for the maximum possible average correction as a function of redshift is derived and shown to be in excellent agreement with the numerical results. The formula applies to voids of any size that: (1) have approximately constant densities in their interior and walls, and (2) are not in a deep nonlinear regime. The actual average correction calculated in random and simple cubic void lattices is severely damped below the predicted maximum. That is traced to cancellations between the corrections coming from the fronts and backs of different voids at the same redshift from the observer. The calculated correction at low redshifts allows one to readily predict the redshift at which the averaged fluctuation in the Hubble diagram is below a required precision, and suggests a method to extract the background Hubble constant from low-redshift data without the need to correct for peculiar velocities.

  7. Response Functions for Neutron Skyshine Analyses

    NASA Astrophysics Data System (ADS)

    Gui, Ah Auu

    Neutron and associated secondary-photon line-beam response functions (LBRFs) for point monodirectional neutron sources, and related conical line-beam response functions (CBRFs) for azimuthally symmetric neutron sources, are generated using the MCNP Monte Carlo code for use in neutron skyshine analyses employing the integral line-beam and integral conical-beam methods. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 degrees. The CBRFs are evaluated at 13 neutron source energies in the same energy range and at 13 source polar angles (1 to 89 degrees). The response functions are approximated by a three-parameter formula that is made continuous in source energy and angle using a double linear interpolation scheme. These response-function approximations are available for source-to-detector ranges up to 2450 m and, for the first time, give dose-equivalent responses, which are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and photons are calculated and approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, a simple correction procedure for humidity effects on the neutron skyshine dose is proposed. The approximate LBRFs are used with the integral line-beam method to analyze four neutron skyshine problems with simple geometries: (1) an open silo, (2) an infinite wall, (3) a roofless rectangular building, and (4) an infinite air medium. In addition, two simple neutron skyshine problems involving an open source silo are analyzed using the integral conical-beam method. The results obtained using the LBRFs and the CBRFs are then compared with MCNP results and with the results of previous studies.
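
    The double linear interpolation in energy and angle can be illustrated with a small sketch: the three fit parameters are tabulated on an (energy, angle) grid and interpolated bilinearly at an arbitrary query point. The grid and parameter values below are placeholders standing in for the actual fitted tables.

    ```python
    # Minimal sketch of bilinear interpolation of three-parameter response
    # function fits over a (source energy, emission angle) grid.
    # The tables here are random placeholders, not the real fitted parameters.
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    energies = np.array([0.01, 0.1, 1.0, 14.0])     # MeV (placeholder grid)
    angles   = np.array([1.0, 45.0, 90.0, 170.0])   # degrees
    params = {name: np.random.rand(len(energies), len(angles)) for name in "abc"}
    interps = {name: RegularGridInterpolator((energies, angles), tab)
               for name, tab in params.items()}

    def response_params(E, theta):
        """Bilinearly interpolate the three fit parameters at (E, theta)."""
        return {name: float(f([[E, theta]])[0]) for name, f in interps.items()}

    print(response_params(2.5, 30.0))
    ```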

  8. Wall proximity corrections for hot-wire readings in turbulent flows

    NASA Technical Reports Server (NTRS)

    Hebbar, K. S.

    1980-01-01

    This note describes recent successful attempts at wall-proximity correction for hot-wire measurements performed in a three-dimensional incompressible turbulent boundary layer. A simple and quite satisfactory method for estimating wall-proximity effects on hot-wire readings is suggested.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M.; Al-Dayeh, L.; Patel, P.

    It is well known that even small movements of the head can lead to artifacts in fMRI. Corrections for these movements are usually made by a registration algorithm that accounts for translational and rotational motion of the head under a rigid-body assumption. The brain, however, is not entirely rigid, and images are prone to local deformations due to CSF motion, susceptibility effects, local changes in blood flow, and inhomogeneities in the magnetic and gradient fields. Since nonrigid-body motion is not adequately corrected by approaches relying on simple rotational and translational corrections, we have investigated a general approach where an nth-order polynomial is used to map all images onto a common reference image. The coefficients of the polynomial transformation were determined through minimization of the ratio of the variance to the mean of each pixel. Simulation studies were conducted to validate the technique. Results of experimental studies using polynomial transformation for 2D and 3D registration show a lower variance-to-mean ratio compared to simple rotational and translational corrections.
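
    The idea of fitting a polynomial warp by minimizing the variance-to-mean ratio can be sketched as below. This is a toy reconstruction under stated assumptions: the second-order warp, the Powell optimizer, and all names are illustrative choices, not the paper's implementation.

    ```python
    # Toy sketch: warp frames with a 2D second-order polynomial and choose
    # coefficients minimizing the per-pixel variance-to-mean ratio.
    import numpy as np
    from scipy.ndimage import map_coordinates
    from scipy.optimize import minimize

    def warp(img, c):
        """Polynomial coordinate transform (6 coefficients per axis)."""
        y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]].astype(float)
        basis = [np.ones_like(x), x, y, x * x, y * y, x * y]
        xs = x + sum(ci * b for ci, b in zip(c[:6], basis))
        ys = y + sum(ci * b for ci, b in zip(c[6:], basis))
        return map_coordinates(img, [ys, xs], order=1, mode="nearest")

    def cost(c, series, ref):
        warped = np.stack([ref] + [warp(im, c) for im in series])
        return np.mean(warped.var(axis=0) / (warped.mean(axis=0) + 1e-9))

    ref = np.random.rand(32, 32)
    series = [np.roll(ref, 1, axis=0)]          # one toy "moved" frame
    res = minimize(cost, np.zeros(12), args=(series, ref), method="Powell",
                   options={"maxiter": 50})
    print(res.fun)
    ```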

  10. Method for correction of measured polarization angles from motional Stark effect spectroscopy for the effects of electric fields

    DOE PAGES

    Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...

    2016-11-02

    An approximate method to correct motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric-field effects into between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric-field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data are treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared, using DIII-D data in a variety of plasma conditions, to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.

  11. Separation, identification, quantification, and method validation of anthocyanins in botanical supplement raw materials by HPLC and HPLC-MS.

    PubMed

    Chandra, A; Rana, J; Li, Y

    2001-08-01

    A method has been established and validated for identification and quantification of individual, as well as total, anthocyanins by HPLC and LC/ES-MS in botanical raw materials used in the herbal supplement industry. The anthocyanins were separated and identified on the basis of their respective M(+) (cation) using LC/ES-MS. Separated anthocyanins were individually calculated against one commercially available anthocyanin external standard (cyanidin-3-glucoside chloride) and expressed as its equivalents. Amounts of each anthocyanin calculated as external-standard equivalents were then multiplied by a molecular-weight correction factor to afford their specific quantities. The experimental procedures and the use of molecular-weight correction factors are substantiated and validated using Balaton tart cherry and elderberry as templates. Cyanidin-3-glucoside chloride has been widely used in the botanical industry to calculate total anthocyanins. In our studies on tart cherry and elderberry, its use as an external standard followed by the use of molecular-weight correction factors should provide relatively accurate results for total anthocyanins, because of the presence of cyanidin as their major anthocyanidin backbone. The method proposed here is simple and has a direct sample-preparation procedure without any solid-phase extraction. It enables the selection and use of commercially available anthocyanins as external standards for quantification of specific anthocyanins in the sample matrix, irrespective of the latter's commercial availability as analytical standards. It can be used as a template and applied to similar quantifications in several anthocyanin-containing raw materials for routine quality-control procedures, thus providing consistency in analytical testing of botanical raw materials used for manufacturing efficacious and true-to-the-label nutritional supplements.
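
    The molecular-weight correction amounts to rescaling the external-standard equivalent by the ratio of molecular weights, as in this minimal sketch. The molecular weights shown are approximate illustrative values.

    ```python
    # Minimal sketch of the molecular-weight correction: external-standard
    # equivalents rescaled by the analyte/standard molecular-weight ratio.
    # Molecular weights are approximate and for illustration only.

    MW_STANDARD = 484.8  # cyanidin-3-glucoside chloride, g/mol (approx.)

    def corrected_amount(equiv_amount_mg, analyte_mw):
        """Convert external-standard equivalents to analyte-specific amount."""
        return equiv_amount_mg * (analyte_mw / MW_STANDARD)

    # e.g. a cyanidin-3-rutinoside peak (MW ~ 631 g/mol, approximate)
    print(corrected_amount(equiv_amount_mg=2.4, analyte_mw=631.0))
    ```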

  12. Effects of Gel Thickness on Microscopic Indentation Measurements of Gel Modulus

    PubMed Central

    Long, Rong; Hall, Matthew S.; Wu, Mingming; Hui, Chung-Yuen

    2011-01-01

    In vitro, animal cells are mostly cultured on a gel substrate. It was recently shown that substrate stiffness affects cellular behaviors in significant ways, including adhesion, differentiation, and migration. Therefore, an accurate method is needed to characterize the modulus of the substrate. In situ microscopic measurements of the gel substrate modulus are based on Hertz contact mechanics, where Young's modulus is derived from the indentation force and displacement measurements. In Hertz theory, the substrate is modeled as a linear elastic half-space with infinite depth, whereas in practice the thickness of the substrate, h, can be comparable to the contact radius and other relevant dimensions, such as the radius of the indenter (a steel ball), R. As a result, measurements based on Hertz theory overestimate the Young's modulus. In this work, we discuss the limitations of Hertz theory and then modify it, taking into consideration the nonlinearity of the material and large deformation, using a finite-element method. We present our results as a simple correction factor, ψ, the ratio of the corrected Young's modulus to the Hertz modulus, in the parameter regime δ/h ≤ min(0.6, R/h) and 0.3 ≤ R/h ≤ 12.7. The ψ factor depends on two dimensionless parameters, R/h and δ/h (where δ is the indentation depth), both of which are easily accessible to experiments. This correction factor agrees with experimental observations obtained with polyacrylamide gel and a microsphere indentation method in the parameter range 0.1 ≤ δ/h ≤ 0.4 and 0.3 ≤ R/h ≤ 6.2. The effect of adhesion on the use of Hertz theory at small indentation depths is also discussed. PMID:21806932
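
    A minimal sketch of how ψ enters the workflow: compute the standard Hertz modulus from force, depth, and ball radius, then multiply by ψ looked up for the measured R/h and δ/h. The ψ value below is a placeholder; the paper tabulates it from finite-element results.

    ```python
    # Minimal sketch: Hertz modulus for a spherical indenter, then the
    # finite-thickness correction E_corrected = psi * E_Hertz.
    # The psi value is a hypothetical placeholder.
    import math

    def hertz_modulus(F, delta, R, nu=0.5):
        """E from Hertz contact: F = (4/3) * E/(1-nu^2) * sqrt(R) * delta^1.5."""
        return 3.0 * F * (1.0 - nu**2) / (4.0 * math.sqrt(R) * delta**1.5)

    def corrected_modulus(F, delta, R, psi):
        """Apply psi = E_corrected / E_Hertz from the tabulated factor."""
        return psi * hertz_modulus(F, delta, R)

    # Example: 10 nN force, 5 um depth, 50 um ball, psi looked up as 0.7
    print(corrected_modulus(F=1e-8, delta=5e-6, R=5e-5, psi=0.7))
    ```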

  13. Energy level alignment and quantum conductance of functionalized metal-molecule junctions: Density functional theory versus GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Chengjun; Markussen, Troels; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk

    We study the effect of functional groups (CH3 ×4, OCH3, CH3, Cl, CN, F ×4) on the electronic transport properties of 1,4-benzenediamine molecular junctions using the non-equilibrium Green function method. Exchange and correlation effects are included at various levels of theory, namely density functional theory (DFT), energy-level-corrected DFT (DFT+Σ), Hartree-Fock, and the many-body GW approximation. All methods reproduce the expected trends for the energy of the frontier orbitals according to the electron-donating or -withdrawing character of the substituent group. However, only the GW method predicts the correct ordering of the conductance amongst the molecules. The absolute GW (DFT) conductance is within a factor of two (three) of the experimental values. Correcting the DFT orbital energies by a simple, physically motivated scissors operator, Σ, can bring the DFT conductances close to experiments, but does not improve the relative ordering. We ascribe this to a too-strong pinning of the molecular energy levels to the metal Fermi level by DFT, which suppresses the variation in orbital energy with functional group.

  14. Low Speed and High Speed Correlation of SMART Active Flap Rotor Loads

    NASA Technical Reports Server (NTRS)

    Kottapalli, Sesi B. R.

    2010-01-01

    Measured open-loop and closed-loop data from the SMART rotor test in the NASA Ames 40- by 80-Foot Wind Tunnel are compared with CAMRAD II calculations. One open-loop high-speed case and four closed-loop cases are considered. The closed-loop cases include three high-speed cases and one low-speed case; two of the high-speed cases are a 2-deg, 5P flap-deflection case and a maximum-airspeed test case. This study follows a recent open-loop correlation effort that used a simple correction factor for the airfoil pitching-moment Mach number. Compared to the earlier effort, the current open-loop study considers more fundamental corrections based on advancing-blade aerodynamic conditions. The airfoil tables themselves have been studied. Selected modifications to the HH-06 section flap-airfoil pitching-moment table are implemented. For the closed-loop condition, the effect of the flap actuator is modeled by increased flap hinge stiffness. Overall, the open-loop correlation is reasonable, confirming the basic correctness of the current semi-empirical modifications; the closed-loop correlation is also reasonable, considering that the current flap model is a first-generation model. Detailed correlation results are given in the paper.

  15. In-place recalibration technique applied to a capacitance-type system for measuring rotor blade tip clearance

    NASA Technical Reports Server (NTRS)

    Barranger, J. P.

    1978-01-01

    The rotor blade tip clearance measurement system consists of a capacitance sensing probe with self-contained tuning elements, a connecting coaxial cable, and remotely located electronics. Tests show that the accuracy of the system suffers from a strong dependence on probe tip temperature and humidity. A novel in-place recalibration technique is presented that partly overcomes this problem through a simple modification of the electronics that permits a scale-factor correction. This technique, when applied to a commercial system, significantly reduced errors under varying conditions of humidity and temperature. Equations were also found that characterize the important cable and probe design quantities.

  16. Magnetometer bias determination and attitude determination for near-earth spacecraft

    NASA Technical Reports Server (NTRS)

    Lerner, G. M.; Shuster, M. D.

    1979-01-01

    A simple linear-regression algorithm is used to determine simultaneously magnetometer biases, misalignments, and scale factor corrections, as well as the dependence of the measured magnetic field on magnetic control systems. This algorithm has been applied to data from the Seasat-1 and the Atmosphere Explorer Mission-1/Heat Capacity Mapping Mission (AEM-1/HCMM) spacecraft. Results show that complete inflight calibration as described here can improve significantly the accuracy of attitude solutions obtained from magnetometer measurements. This report discusses the difficulties involved in obtaining attitude information from three-axis magnetometers, briefly derives the calibration algorithm, and presents numerical results for the Seasat-1 and AEM-1/HCMM spacecraft.
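
    The linear-regression calibration can be sketched as a joint least-squares fit of a 3x3 matrix (scale factors plus misalignments) and a bias vector relating measured fields to reference-model fields. The synthetic data and parameter values below are illustrative, not flight data.

    ```python
    # Minimal sketch of linear least-squares magnetometer calibration:
    # fit A and b in B_measured ~= A @ B_model + b, using synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    B_model = rng.normal(size=(500, 3))                   # reference field, a.u.
    A_true = np.eye(3) + 0.01 * rng.normal(size=(3, 3))   # small misalignments
    b_true = np.array([30.0, -12.0, 5.0])                 # biases
    B_meas = B_model @ A_true.T + b_true + 0.5 * rng.normal(size=(500, 3))

    # Augment with a ones column so the bias is estimated jointly.
    X = np.hstack([B_model, np.ones((len(B_model), 1))])
    coef, *_ = np.linalg.lstsq(X, B_meas, rcond=None)     # shape (4, 3)
    A_est, b_est = coef[:3].T, coef[3]
    print(np.round(b_est, 1))   # recovered biases
    ```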

  17. An analytical approach to γ-ray self-shielding effects for radioactive bodies encountered in nuclear decommissioning scenarios.

    PubMed

    Gamage, K A A; Joyce, M J

    2011-10-01

    A novel analytical approach is described that accounts for self-shielding of γ radiation in decommissioning scenarios. The approach is developed with plutonium-239, cobalt-60, and caesium-137 as examples; stainless steel and concrete have been chosen as the media for cobalt-60 and caesium-137, respectively. The analytical methods have been compared with MCNPX 2.6.0 simulations. A simple, linear correction factor relates the analytical results to the simulated estimates. This has the potential to greatly simplify the estimation of self-shielding effects in decommissioning activities. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. A simple approach to detect and correct signal faults of Hall position sensors for brushless DC motors at steady speed

    NASA Astrophysics Data System (ADS)

    Shi, Yongli; Wu, Zhong; Zhi, Kangyi; Xiong, Jun

    2018-03-01

    In order to realize reliable commutation of brushless DC motors (BLDCMs), a simple approach is proposed in this paper to detect and correct signal faults of Hall position sensors. First, the time instant of the next jumping edge of the Hall signals is predicted using prior information about the pulse intervals in the last electrical period. Considering the possible errors between the predicted instant and the real one, a confidence interval is set using the predicted value and a suitable tolerance for the next pulse edge. According to the relationship between the real pulse edge and the confidence interval, Hall signals can be judged and signal faults can be corrected. Experimental results for a BLDCM at steady speed demonstrate the effectiveness of the approach.
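
    A minimal sketch of the predict-and-validate idea: the next edge instant is extrapolated from the last period's pulse intervals, and an observed edge outside the confidence interval is treated as a fault and replaced by the prediction. The timing and tolerance values are illustrative.

    ```python
    # Minimal sketch of Hall-edge prediction and confidence-interval checking.
    # Values are illustrative; a real controller works in timer ticks.

    def predict_next_edge(last_edges):
        """Predict the next edge time from the mean interval of the last period."""
        intervals = [t2 - t1 for t1, t2 in zip(last_edges, last_edges[1:])]
        return last_edges[-1] + sum(intervals) / len(intervals)

    def validate_edge(observed, predicted, tolerance):
        """Outside [predicted - tol, predicted + tol]: fault, use prediction."""
        if abs(observed - predicted) <= tolerance:
            return observed, False      # edge accepted
        return predicted, True          # fault detected, corrected

    edges = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]   # ms, six edges per electrical period
    pred = predict_next_edge(edges)
    print(validate_edge(observed=6.6, predicted=pred, tolerance=0.2))
    ```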

  19. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran (AEOI). This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors, such as the electron-loss correction factor (ke) and the photon-scattering correction factor (ksc), are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation and calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.

  20. Simple Correction of Alar Retraction by Conchal Cartilage Extension Grafts.

    PubMed

    Jang, Yong Jun; Kim, Sung Min; Lew, Dae Hyun; Song, Seung Yong

    2016-11-01

    Alar retraction is a challenging condition in rhinoplasty marked by exaggerated nostril exposure and awkwardness. Although various methods for correcting alar retraction have been introduced, none is without drawbacks. Herein, we report a simple procedure that is both effective and safe for correcting alar retraction using only conchal cartilage grafting. Between August 2007 and August 2009, 18 patients underwent conchal cartilage extension grafting to correct alar retraction. Conchal cartilage extension grafts were fixed to the caudal margins of the lateral crura and covered with vestibular skin advancement flaps. Preoperative and postoperative photographs were reviewed and analyzed. Patient satisfaction was surveyed and categorized into 4 groups (very satisfied, satisfied, moderate, or unsatisfied). According to the survey, 8 patients were very satisfied, 9 were satisfied, and 1 considered the outcome moderate, resulting in satisfaction for most patients. The average distance from the alar rim to the long axis of the nostril was reduced by 1.4 mm (3.6 to 2.2 mm). There were no complications, except in 2 cases with palpable cartilage step-off that resolved without any aesthetic problems. Conchal cartilage alar extension graft is a simple, effective method of correcting alar retraction that can be combined with aesthetic rhinoplasty conveniently, utilizing conchal cartilage, which is the most similar cartilage to alar cartilage, and requiring a lesser volume of cartilage harvest compared to previously devised methods. However, the current procedure lacks efficacy for severe alar retraction and a longer follow-up period may be required to substantiate the enduring efficacy of the current procedure.

  1. Comments on baseline correction of digital strong-motion data: Examples from the 1999 Hector Mine, California, earthquake

    USGS Publications Warehouse

    Boore, D.M.; Stephens, C.D.; Joyner, W.B.

    2002-01-01

    Residual displacements for large earthquakes can sometimes be determined from recordings on modern digital instruments, but baseline offsets of unknown origin make it difficult in many cases to do so. To recover the residual displacement, we suggest tailoring a correction scheme by studying the character of the velocity obtained by integration of zeroth-order-corrected acceleration, and then checking whether the residual displacements are stable when the various parameters of the particular correction scheme are varied. For many seismological and engineering purposes, however, the residual displacements are of lesser importance than ground motions at periods less than about 20 sec. These ground motions are often recoverable with simple baseline correction and low-cut filtering. In this largely empirical study, we illustrate the consequences of various correction schemes, drawing primarily from digital recordings of the 1999 Hector Mine, California, earthquake. We show that with simple processing the displacement waveforms for this event are very similar for stations separated by as much as 20 km. We also show that a strong pulse on the transverse component was radiated from the Hector Mine earthquake and propagated with little distortion to distances exceeding 170 km; this pulse leads to large response spectral amplitudes around 10 sec.

  2. A Simple Noise Correction Scheme for Diffusional Kurtosis Imaging

    PubMed Central

    Glenn, G. Russell; Tabesh, Ali; Jensen, Jens H.

    2014-01-01

    Purpose Diffusional kurtosis imaging (DKI) is sensitive to the effects of signal noise due to strong diffusion weightings and higher order modeling of the diffusion weighted signal. A simple noise correction scheme is proposed to remove the majority of the noise bias in the estimated diffusional kurtosis. Methods Weighted linear least squares (WLLS) fitting together with a voxel-wise, subtraction-based noise correction from multiple, independent acquisitions are employed to reduce noise bias in DKI data. The method is validated in phantom experiments and demonstrated for in vivo human brain for DKI-derived parameter estimates. Results As long as the signal-to-noise ratio (SNR) for the most heavily diffusion weighted images is greater than 2.1, errors in phantom diffusional kurtosis estimates are found to be less than 5 percent with noise correction, but as high as 44 percent for uncorrected estimates. In human brain, noise correction is also shown to improve diffusional kurtosis estimates derived from measurements made with low SNR. Conclusion The proposed correction technique removes the majority of noise bias from diffusional kurtosis estimates in noisy phantom data and is applicable to DKI of human brain. Features of the method include computational simplicity and ease of integration into standard WLLS DKI post-processing algorithms. PMID:25172990
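
    The subtraction-based correction can be illustrated generically: with two independent acquisitions per diffusion weighting, the noise variance is estimated voxel-wise from their difference and the Rician-type bias is removed from the averaged squared magnitude. This is a sketch of the general idea under stated assumptions, not the paper's exact pipeline.

    ```python
    # Generic sketch of subtraction-based noise-bias removal from magnitude
    # data, assuming two independent repeats S1, S2 per diffusion weighting.
    import numpy as np

    def noise_corrected_signal(S1, S2):
        """Estimate sigma^2 from the repeat difference (E[(S1-S2)^2] = 2*sigma^2),
        then remove the bias E[S^2] = A^2 + 2*sigma^2 from the mean square."""
        sigma2 = np.square(S1 - S2) / 2.0     # per-voxel noise variance estimate
        mean_sq = (np.square(S1) + np.square(S2)) / 2.0
        return np.sqrt(np.clip(mean_sq - 2.0 * sigma2, 0.0, None))

    S1 = np.array([10.0, 4.0, 2.5])   # heavily diffusion-weighted magnitudes
    S2 = np.array([9.5, 4.4, 2.1])
    print(noise_corrected_signal(S1, S2))
    ```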

  3. Super-global distortion correction for a rotational C-arm x-ray image intensifier.

    PubMed

    Liu, R R; Rudin, S; Bednarek, D R

    1999-09-01

    Image intensifier (II) distortion changes as a function of C-arm rotation angle because of changes in the orientation of the II with respect to the earth's or other stray magnetic fields. For cone-beam computed tomography (CT), distortion correction at all angles is essential. The new super-global distortion correction consists of a model that continuously corrects II distortion not only at each location in the image but at every rotational angle of the C-arm. Calibration bead images were acquired with a standard C-arm in 9-in. II mode. The super-global (SG) model is obtained from the single-plane global correction of selected calibration images at a given sampling-angle interval. The fifth-order single-plane global corrections yielded a residual rms error of 0.20 pixels, while the SG model yielded an rms error of 0.21 pixels, a negligibly small difference. We evaluated the dependence of the SG model's accuracy on various factors, such as the single-plane global fitting order, the SG order, and the angular sampling interval. We found that a good SG model can be obtained using a sixth-order SG polynomial fit based on the fifth-order single-plane global correction, and that a 10-degree sampling interval is sufficient. Thus, the SG model saves processing resources and storage space. The residual errors from the mechanical errors of the x-ray system were also investigated and found to be comparable with the SG residual error. Additionally, a single-plane global correction was done in the cylindrical coordinate system, and physical information about pincushion distortion and S distortion was observed and analyzed; however, this method is not recommended due to its computational inefficiency. In conclusion, the SG model provides an accurate, fast, and simple correction for rotational C-arm images, which may be used for cone-beam CT.

  4. Potential of bias correction for downscaling passive microwave and soil moisture data

    USDA-ARS?s Scientific Manuscript database

    Passive microwave satellites such as SMOS (Soil Moisture and Ocean Salinity) or SMAP (Soil Moisture Active Passive) observe brightness temperature (TB) and retrieve soil moisture at a spatial resolution greater than most hydrological processes. Bias correction is proposed as a simple method to disag...

  5. Asymptotic One-Point Functions in Gauge-String Duality with Defects.

    PubMed

    Buhl-Mortensen, Isak; de Leeuw, Marius; Ipsen, Asger C; Kristjansen, Charlotte; Wilhelm, Matthias

    2017-12-29

    We take the first step in extending the integrability approach to one-point functions in AdS/dCFT to higher loop orders. More precisely, we argue that the formula encoding all tree-level one-point functions of SU(2) operators in the defect version of N=4 supersymmetric Yang-Mills theory, dual to the D5-D3 probe-brane system with flux, has a natural asymptotic generalization to higher loop orders. The asymptotic formula correctly encodes the information about the one-loop correction to the one-point functions of nonprotected operators once dressed by a simple flux-dependent factor, as we demonstrate by an explicit computation involving a novel object denoted as an amputated matrix product state. Furthermore, when applied to the Berenstein-Maldacena-Nastase vacuum state, the asymptotic formula gives a result for the one-point function which in a certain double-scaling limit agrees with that obtained in the dual string theory up to wrapping order.

  6. Viscous compressible flow direct and inverse computation and illustrations

    NASA Technical Reports Server (NTRS)

    Yang, T. T.; Ntone, F.

    1986-01-01

    An algorithm for laminar and turbulent viscous compressible two-dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow-field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations for Couette flow, developing pipe flow, an isolated airfoil, two-dimensional compressor cascade flow, and segmental compressor blade design are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size, and the choice of the artificial viscosity. The design feature of the algorithm, an iterative scheme to correct geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required for transonic flows, where local shock waves may be involved.

  7. Limitations of pH-potentiometric titration for the determination of the degree of deacetylation of chitosan.

    PubMed

    Balázs, Nándor; Sipos, Pál

    2007-01-15

    The degree of deacetylation (DDA) of chitosan determines the biopolymer's physico-chemical properties and technological applications. pH-Potentiometric titration seems to offer a simple and convenient means of determining DDA. However, to obtain accurate pH-potentiometric DDA values, several factors have to be taken into consideration. We found that the moisture content of the air-dry chitosan samples can be as high as 15%, and a reasonable fraction of this humidity cannot be removed by ordinary drying. Corrections have to be made for the ash content, as in some samples it can be as high as 1% by weight. The method of equivalence point determination was also found to cause systematic variations in the results and in some samples extra acid as high as 1 mol% of the free amino content was also identified. To compensate for the latter effect, the second equivalence point of the titration has to be determined separately and the analytical concentration of the acid be corrected for it. All the corrections listed here are necessary to obtain DDA values that are in reasonable agreement with those obtained from (1)H NMR and IR spectroscopic measurements. The need for these corrections severely limits the usefulness of pH-metry for determining accurate DDA values and thus potentiometry is hardly able to compete with other standard spectroscopic procedures, that is, (1)H NMR spectroscopy.

  8. On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation

    NASA Astrophysics Data System (ADS)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-02-01

    Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
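
    The single-exponential saturation correction the paper evaluates can be written down directly: for a spoiled steady-state acquisition with repetition time TR and flip angle α, the measured signal is scaled by a factor built from E1 = exp(-TR/T1), and chemical exchange is ignored. The sketch below implements that standard factor; the numerical values are illustrative.

    ```python
    # Minimal sketch of a single-exponential partial-saturation correction
    # (chemical exchange ignored, which is the approximation tested above).
    import math

    def saturation_factor(TR, T1, alpha_deg):
        """Steady-state factor: sin(a) * (1 - E1) / (1 - cos(a) * E1)."""
        E1 = math.exp(-TR / T1)
        a = math.radians(alpha_deg)
        return math.sin(a) * (1.0 - E1) / (1.0 - math.cos(a) * E1)

    def correct_partial_saturation(S_measured, TR, T1, alpha_deg):
        """Recover the equilibrium (fully relaxed) signal estimate."""
        return S_measured / saturation_factor(TR, T1, alpha_deg)

    # e.g. PCr with T1 ~ 4 s observed at TR = 2 s, 60-degree flip (illustrative)
    print(correct_partial_saturation(S_measured=1.0, TR=2.0, T1=4.0, alpha_deg=60.0))
    ```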

  9. Correcting the SIMPLE Model of Free Recall

    ERIC Educational Resources Information Center

    Lee, Michael D.; Pooley, James P.

    2013-01-01

    The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…

  10. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared with other correction factors, i.e., F_ASTM and F_JIS.
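
    To show where such a correction factor enters, here is a minimal sketch of a collinear four-probe measurement: the infinite-sheet formula is scaled by a geometry-dependent F. The F value below is a hypothetical placeholder, not one of the paper's tabulated factors.

    ```python
    # Minimal sketch: four-probe sheet resistance and resistivity with a
    # geometric correction factor F (placeholder value).
    import math

    def sheet_resistance(V, I, F=1.0):
        """Collinear four-probe sheet resistance: F * (pi/ln 2) * V/I."""
        return F * (math.pi / math.log(2.0)) * (V / I)

    def resistivity(V, I, thickness_m, F=1.0):
        return sheet_resistance(V, I, F) * thickness_m

    # 1 mA forced, 2.3 mV read on a disk sample with hypothetical F = 0.92
    print(resistivity(V=2.3e-3, I=1.0e-3, thickness_m=0.5e-3, F=0.92))
    ```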

  11. Surface roughness effects on the solar reflectance of cool asphalt shingles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akbari, Hashem; Berdahl, Paul

    2008-02-17

    We analyze the solar reflectance of asphalt roofing shingles that are covered with pigmented mineral roofing granules. The reflecting surface is rough, with a total area approximately twice the nominal area. We introduce a simple analytical model that relates the 'micro-reflectance' of a small surface region to the 'macro-reflectance' of the shingle. This model uses a mean-field approximation to account for multiple-scattering effects. The model is then used to compute the reflectance of shingles with a mixture of different colored granules, when the reflectances of the corresponding mono-color shingles are known. Simple linear averaging works well, with small corrections to linear averaging derived for highly reflective materials. Reflective base granules and reflective surface coatings aid the achievement of high solar reflectance. Other factors that influence the solar reflectance are the size distribution of the granules, coverage of the asphalt substrate, and orientation of the granules as affected by rollers during fabrication.

  12. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user-interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher-order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using think-aloud usability (TAU) analysis and computing them from stored interface-event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121

  13. Quantum Loop Expansion to High Orders, Extended Borel Summation, and Comparison with Exact Results

    NASA Astrophysics Data System (ADS)

    Noreen, Amna; Olaussen, Kåre

    2013-07-01

    We compare predictions of the quantum loop expansion to (essentially) infinite orders with (essentially) exact results in a simple quantum mechanical model. We find that there are exponentially small corrections to the loop expansion, which cannot be explained by any obvious “instanton”-type corrections. It is not the mathematical occurrence of exponential corrections but their seeming lack of any physical origin which we find surprising and puzzling.

  14. Simple wavefront correction framework for two-photon microscopy of in-vivo brain

    PubMed Central

    Galwaduge, P. T.; Kim, S. H.; Grosberg, L. E.; Hillman, E. M. C.

    2015-01-01

    We present an easily implemented wavefront correction scheme that has been specifically designed for in-vivo brain imaging. The system can be implemented with a single liquid crystal spatial light modulator (LCSLM), which makes it compatible with existing patterned illumination setups, and provides measurable signal improvements even after a few seconds of optimization. The optimization scheme is signal-based and does not require exogenous guide-stars, repeated image acquisition or beam constraint. The unconstrained beam approach allows the use of Zernike functions for aberration correction and Hadamard functions for scattering correction. Low order corrections performed in mouse brain were found to be valid up to hundreds of microns away from the correction location. PMID:26309763

  15. A simple second-order digital phase-locked loop.

    NASA Technical Reports Server (NTRS)

    Tegnelia, C. R.

    1972-01-01

    A simple second-order digital phase-locked loop has been designed for the Viking Orbiter 1975 command system. Excluding analog-to-digital conversion, implementation of the loop requires only an adder/subtractor, two registers, and a correctable counter with control logic. The loop considers only the polarity of phase error and corrects system clocks according to a filtered sequence of this polarity. The loop is insensitive to input gain variation, and therefore offers the advantage of stable performance over long life. Predictable performance is guaranteed by extreme reliability of acquisition, yet in the steady state the loop produces only a slight degradation with respect to analog loop performance.
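
    The polarity-driven, second-order behavior described above resembles a "bang-bang" loop with proportional and integral paths, sketched below. The gains and toy error sequence are illustrative assumptions, not the Viking design values.

    ```python
    # Minimal sketch of a second-order, polarity-only digital PLL filter:
    # only sign(phase error) drives a proportional + integral correction.

    def sign(x):
        return (x > 0) - (x < 0)

    def run_loop(phase_errors, kp=0.3, ki=0.05):
        """Accumulate a frequency (integral) term and add a phase
        (proportional) term, both driven by the error polarity alone."""
        freq_corr, corrections = 0.0, []
        for err in phase_errors:
            s = sign(err)
            freq_corr += ki * s            # second-order (frequency) memory
            corrections.append(kp * s + freq_corr)
        return corrections

    print(run_loop([0.9, 0.7, 0.4, 0.1, -0.1, -0.05]))
    ```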

  16. Newberry Combined Gravity 2016

    DOE Data Explorer

    Kelly Rose

    2016-01-22

    Newberry combined gravity from Zonge Int'l, processed for the EGS stimulation project at well 55-29. Includes data from both the Davenport 2006 collection and the OSU/4D EGS monitoring 2012 collection. Locations are NAD83, UTM Zone 10 North, meters. Elevation is NAVD88. Gravity is in milligals. Free-air and observed gravity are included, along with the simple Bouguer anomaly and the terrain-corrected Bouguer anomaly. SBA230 means the simple Bouguer anomaly computed at 2.30 g/cc; CBA230 means the terrain-corrected Bouguer anomaly at 2.30 g/cc. The following densities are included (g/cc): 2.00, 2.10, 2.20, 2.30, 2.40, 2.50, 2.67.

  17. Correction to the paper “A simple model to determine the interrelation between the integral characteristics of Hall thrusters” [Plasma Physics Reports 40, 229 (2014)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumilin, V. P.; Shumilin, A. V.; Shumilin, N. V., E-mail: vladimirshumilin@yahoo.com

    2015-11-15

    The paper is devoted to comparison of experimental data with theoretical predictions concerning the dependence of the current of accelerated ions on the operating voltage of a Hall thruster with an anode layer. The error made in the paper published by the authors in Plasma Phys. Rep. 40, 229 (2014) occurred because of a misprint in the Encyclopedia of Low-Temperature Plasma. In the present paper, this error is corrected. It is shown that the simple model proposed in the above-mentioned paper is in qualitative and quantitative agreement with experimental results.

  18. Nature of collective decision-making by simple yes/no decision units.

    PubMed

    Hasegawa, Eisuke; Mizumoto, Nobuaki; Kobayashi, Kazuya; Dobata, Shigeto; Yoshimura, Jin; Watanabe, Saori; Murakami, Yuuka; Matsuura, Kenji

    2017-10-31

    The study of collective decision-making spans various fields such as brain and behavioural sciences, economics, management sciences, and artificial intelligence. Despite these interdisciplinary applications, little is known regarding how a group of simple 'yes/no' units, such as neurons in the brain, can select the best option among multiple options. One prerequisite for achieving such correct choices by the brain is correct evaluation of relative option quality, which enables a collective decision maker to efficiently choose the best option. Here, we applied a sensory discrimination mechanism using yes/no units with differential thresholds to a model for making a collective choice among multiple options. The performance corresponding to the correct choice was shown to be affected by various parameters. High performance can be achieved by tuning the threshold distribution with the options' quality distribution. The number of yes/no units allocated to each option and its variability profoundly affects performance. When this variability is large, a quorum decision becomes superior to a majority decision under some conditions. The general features of this collective decision-making by a group of simple yes/no units revealed in this study suggest that this mechanism may be useful in applications across various fields.

  19. A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.

    PubMed

    Morin, Jean-Benoît; Belli, Alain

    2004-01-01

    The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was derived from a basic postulate of a linear deceleration-time evolution during the initial phase (until maximal power) of a sprint, and involves simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity, and friction force. The validity of the model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against friction loads of 0.6-1 N kg(-1). Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasize the usefulness of power correction in studies of anaerobic power that do not account for inertia, and also the interest of this simple post-hoc method.
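
    The physical content of the inertia correction can be illustrated generically: while the flywheel accelerates, the pedals work against the friction load plus the inertial force, so instantaneous power exceeds the friction-only estimate. The sketch below is a generic reconstruction of that idea with illustrative values, not the paper's exact post-hoc formula.

    ```python
    # Generic sketch of a flywheel-inertia power correction:
    # P = (F_friction + I * a / r^2) * v, with v, a the flywheel rim
    # velocity and acceleration, I its moment of inertia, r its radius.

    def corrected_power(v, a, F_friction, I_flywheel, r_flywheel):
        F_inertia = I_flywheel * a / r_flywheel**2   # equivalent rim force
        return (F_friction + F_inertia) * v

    # Illustrative values: rim speed 8 m/s, still accelerating at 2 m/s^2
    print(corrected_power(v=8.0, a=2.0, F_friction=50.0,
                          I_flywheel=0.9, r_flywheel=0.26))
    ```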

  20. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated based on continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematic analysis. After verification by comparing single-pass flow stress correction with traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and validated by multistage relative softening calculations. This research provides a broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and friction optimization in finite element analysis.

  1. Monitoring robot actions for error detection and recovery

    NASA Technical Reports Server (NTRS)

    Gini, M.; Smith, R.

    1987-01-01

    Reliability is a serious problem in computer-controlled robot systems. Although robots serve successfully in relatively simple applications such as painting and spot welding, their potential in areas such as automated assembly is hampered by programming problems. A program for assembling parts may be logically correct, execute correctly on a simulator, and even execute correctly on a robot most of the time, yet still fail unexpectedly in the face of real-world uncertainties. Recovery from such errors is far more complicated than recovery from simple controller errors, since even expected errors can often manifest themselves in unexpected ways. Here, a novel approach is presented for improving robot reliability. Instead of anticipating errors, the researchers use knowledge-based programming techniques so that the robot can autonomously exploit knowledge about its task and environment to detect and recover from failures. A preliminary experiment with a system they designed and constructed is described.

  2. Tooth loss caused by displaced elastic during simple preprosthetic orthodontic treatment

    PubMed Central

    Dianiskova, Simona; Calzolari, Chiara; Migliorati, Marco; Silvestrini-Biavati, Armando; Isola, Gaetano; Savoldi, Fabio; Dalessandri, Domenico; Paganelli, Corrado

    2016-01-01

    The use of elastics to close a diastema or correct tooth malpositions can create unintended consequences if not properly controlled. The American Association of Orthodontists recently issued a consumer alert, warning of “a substantial risk for irreparable damage” from a new trend called “do-it-yourself” orthodontics, consisting of patients autonomously using elastics to correct tooth position. The elastics can work their way below the gums and around the roots of the teeth, causing damage to the periodontium and even resulting in tooth loss. The cost of implants to replace these teeth would well exceed the cost of proper orthodontic care. This damage could also occur in a dental office, when a general dentist tries to perform a simplified orthodontic correction of a minor tooth malposition. The present case report describes a case of tooth loss caused by a displaced intraoral elastic, which occurred during a simple preprosthetic orthodontic treatment. PMID:27672645

  3. Zweig-rule-satisfying inelastic rescattering in B decays to pseudoscalar mesons

    NASA Astrophysics Data System (ADS)

    Łach, P.; Żenczykowski, P.

    2002-09-01

    We discuss all contributions from Zweig-rule-satisfying SU(3)-symmetric inelastic final-state interaction (FSI)-induced corrections in B decays to ππ, πK, KK̄, πη(η'), and Kη(η'). We show how all of these FSI corrections lead to a simple redefinition of the amplitudes, permitting the use of a simple diagram-based description in which, however, weak phases may enter in a modified way. The inclusion of FSI corrections admitted by the present data allows an arbitrary relative phase between the penguin and tree short-distance amplitudes. The FSI-induced error of the method, in which the value of the weak phase γ is to be determined by combining future results from B+, B0d, and B0s decays to Kπ, is estimated to be of the order of 5° for γ ~ 50°-60°.

  4. A Very Low Cost BCH Decoder for High Immunity of On-Chip Memories

    NASA Astrophysics Data System (ADS)

    Seo, Haejun; Han, Sehwan; Heo, Yoonseok; Cho, Taewon

    BCH (Bose-Chaudhuri-Hocquenghem) codes, a class of cyclic block codes, have very strong error-correcting ability, which is vital for error protection in memory systems. Many decoding algorithms exist for BCH codes; among them, the PGZ (Peterson-Gorenstein-Zierler) algorithm is advantageous in that it corrects errors through simple calculations for a given t value. However, it is problematic in that a division by zero can occur when the number of actual errors ν differs from t. In this paper, the circuit is simplified by a proposed multi-mode hardware architecture that handles ν = 0-3. First, production cost is lowered thanks to the smaller number of gates. Second, reduced power consumption lengthens the recharging period. The very low cost and simple datapath make our design a good choice as the ECC (error correction code/circuit) in a small-footprint SoC (system-on-chip) memory system.

  5. Bias of shear wave elasticity measurements in thin layer samples and a simple correction strategy.

    PubMed

    Mo, Jianqiang; Xu, Hao; Qiang, Bo; Giambini, Hugo; Kinnick, Randall; An, Kai-Nan; Chen, Shigao; Luo, Zongping

    2016-01-01

    Shear wave elastography (SWE) is an emerging technique for measuring biological tissue stiffness. However, the application of SWE in thin layer tissues is limited by bias due to the influence of geometry on measured shear wave speed. In this study, we investigated the bias of Young's modulus measured by SWE in thin layer gelatin-agar phantoms, and compared the result with finite element method and Lamb wave model simulation. The result indicated that the Young's modulus measured by SWE decreased continuously when the sample thickness decreased, and this effect was more significant for smaller thickness. We proposed a new empirical formula which can conveniently correct the bias without the need of using complicated mathematical modeling. In summary, we confirmed the nonlinear relation between thickness and Young's modulus measured by SWE in thin layer samples, and offered a simple and practical correction strategy which is convenient for clinicians to use.

  6. Hubble Space Telescope COSTAR asphere verification with a modified computer-generated hologram interferometer. [Corrective Optics Space Telescope Axial Replacement

    NASA Technical Reports Server (NTRS)

    Feinberg, L.; Wilson, M.

    1993-01-01

    To correct for the spherical aberration in the Hubble Space Telescope primary mirror, five anamorphic aspheric mirrors representing correction for three scientific instruments have been fabricated as part of the development of the corrective-optics space telescope axial-replacement instrument (COSTAR). During the acceptance tests of these mirrors at the vendor, a quick and simple method for verifying the asphere surface figure was developed. The technique has been used on three of the aspheres relating to the three instrument prescriptions. Results indicate that the three aspheres are correct to the limited accuracy expected of this test.

  7. Power corrections in the N -jettiness subtraction scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boughezal, Radja; Liu, Xiaohui; Petriello, Frank

    We discuss the leading-logarithmic power corrections in the N-jettiness subtraction scheme for higher-order perturbative QCD calculations. We compute the next-to-leading-order power corrections for an arbitrary N-jet process, and we explicitly calculate the power correction through next-to-next-to-leading order for color-singlet production for both $q\bar{q}$- and gg-initiated processes. Our results are compact and simple to implement numerically. Including the leading power correction in the N-jettiness subtraction scheme substantially improves its numerical efficiency. Finally, we discuss which features of our techniques extend to processes containing final-state jets.

  9. Simple Tidal Prism Models Revisited

    NASA Astrophysics Data System (ADS)

    Luketina, D.

    1998-01-01

    Simple tidal prism models for well-mixed estuaries have been in use for some time and are discussed in most text books on estuaries. The appeal of this model is its simplicity. However, there are several flaws in the logic behind the model. These flaws are pointed out and a more theoretically correct simple tidal prism model is derived. In doing so, it is made clear which effects can, in theory, be neglected and which can not.

  10. Directional errors of movements and their correction in a discrete tracking task. [pilot reaction time and sensorimotor performance

    NASA Technical Reports Server (NTRS)

    Jaeger, R. J.; Agarwal, G. C.; Gottlieb, G. L.

    1978-01-01

    Subjects can correct their own errors of movement more quickly than they can react to external stimuli, by using three general categories of feedback: (1) knowledge of results, primarily visually mediated; (2) proprioceptive or kinaesthetic feedback, such as from muscle spindles and joint receptors; and (3) corollary discharge or efference copy within the central nervous system. The effects of these feedbacks on simple reaction time, choice reaction time, and error correction time were studied in four normal human subjects. The movement used was plantarflexion and dorsiflexion of the ankle joint. The feedback loops were modified by changing the sign of the visual display, to alter the subject's perception of results, and by applying vibration at 100 Hz simultaneously to both the agonist and antagonist muscles of the ankle joint. Central processing was interfered with by giving the subjects moderate doses of alcohol (blood alcohol concentration levels of up to 0.07%). Vibration and alcohol increased both the simple and choice reaction times but not the error correction time.

  11. Towards a better prediction of peak concentration, volume of distribution and half-life after oral drug administration in man, using allometry.

    PubMed

    Sinha, Vikash K; Vaarties, Karin; De Buck, Stefan S; Fenu, Luca A; Nijsen, Marjoleen; Gilissen, Ron A H J; Sanderson, Wendy; Van Uytsel, Kelly; Hoeben, Eva; Van Peer, Achiel; Mackie, Claire E; Smit, Johan W

    2011-05-01

    It is imperative that new drugs demonstrate adequate pharmacokinetic properties, allowing an optimal safety margin and convenient dosing regimens in clinical practice, which then lead to better patient compliance. Such pharmacokinetic properties include suitable peak (maximum) plasma drug concentration (C(max)), area under the plasma concentration-time curve (AUC) and a suitable half-life (t(½)). The C(max) and t(½) following oral drug administration are functions of the oral clearance (CL/F) and apparent volume of distribution during the terminal phase by the oral route (V(z)/F), each of which may be predicted and combined to estimate C(max) and t(½). Allometric scaling is a widely used methodology in the pharmaceutical industry to predict human pharmacokinetic parameters such as clearance and volume of distribution. In our previous published work, we have evaluated the use of allometry for prediction of CL/F and AUC. In this paper we describe the evaluation of different allometric scaling approaches for the prediction of C(max), V(z)/F and t(½) after oral drug administration in man. Twenty-nine compounds developed at Janssen Research and Development (a division of Janssen Pharmaceutica NV), covering a wide range of physicochemical and pharmacokinetic properties, were selected. The C(max) following oral dosing of a compound was predicted using (i) simple allometry alone; (ii) simple allometry along with correction factors such as plasma protein binding (PPB), maximum life-span potential or brain weight (reverse rule of exponents, unbound C(max) approach); and (iii) an indirect approach using allometrically predicted CL/F and V(z)/F and absorption rate constant (k(a)). The k(a) was estimated from (i) in vivo pharmacokinetic experiments in preclinical species; and (ii) predicted effective permeability in man (P(eff)), using a Caco-2 permeability assay. The V(z)/F was predicted using allometric scaling with or without PPB correction. The t(½) was estimated from the allometrically predicted parameters CL/F and V(z)/F. Predictions were deemed adequate when errors were within a 2-fold range. C(max) and t(½) could be predicted within a 2-fold error range for 59% and 66% of the tested compounds, respectively, using allometrically predicted CL/F and V(z)/F. The best predictions for C(max) were obtained when k(a) values were calculated from the Caco-2 permeability assay. The V(z)/F was predicted within a 2-fold error range for 72% of compounds when PPB correction was applied as the correction factor for scaling. We conclude that (i) C(max) and t(½) are best predicted by indirect scaling approaches (using allometrically predicted CL/F and V(z)/F and accounting for k(a) derived from permeability assay); and (ii) the PPB is an important correction factor for the prediction of V(z)/F by using allometric scaling. Furthermore, additional work is warranted to understand the mechanisms governing the processes underlying determination of C(max) so that the empirical approaches can be fine-tuned further.

  12. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 Gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the deviation between the two detectors from 14.8% to 3.4%.
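    The correction step described in the Methods is essentially a multiplication of each measured, daisy-chained output factor by a correction factor evaluated from a fitted function of field size. A minimal sketch follows; the cone sizes, measured values and fit coefficients are placeholders, not the paper's data.

    ```python
    import numpy as np

    cone_mm = np.array([30.0, 20.0, 12.5, 7.5, 5.0])    # hypothetical cone diameters
    of_meas = np.array([0.87, 0.85, 0.81, 0.73, 0.66])  # hypothetical daisy-chained OFs

    def k_corr(field_mm, a=1.02, b=-0.004):
        """Toy stand-in for an equation fitted to published Monte Carlo
        correction factors as a function of field size (coefficients invented)."""
        return a + b * field_mm

    of_corr = of_meas * k_corr(cone_mm)
    for d, of in zip(cone_mm, of_corr):
        print(f"{d:5.1f} mm cone: corrected output factor {of:.3f}")
    ```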

  13. Fugacity ratio estimations for high-melting rigid aromatic compounds.

    PubMed

    Van Noort, Paul C M

    2004-07-01

    Prediction of the environmental fate of organic compounds requires knowledge of their tendency to stay in the gas and water phase. Vapor pressure and aqueous solubility are commonly used descriptors for these processes. Depending on the type of distribution process, values for either the pure solid state or the (subcooled) liquid state have to be used. Values for the (subcooled) liquid state can be calculated from those for the solid state, and vice versa, using the fugacity ratio. Fugacity ratios are usually calculated from the entropy of fusion and the melting point. For polycyclic aromatic hydrocarbons, chlorobenzenes, chlorodibenzofurans, and chlorodibenzo-p-dioxins, fugacity ratios calculated using experimental entropies of fusion were systematically less than those obtained from a thermodynamically more rigorous approach using heat capacity data. The deviation was more than an order of magnitude at the highest melting point. The use of a universal value for the entropy of fusion of 56 J/(mol·K) resulted in either over- or underestimation by up to more than an order of magnitude. A simple correction factor, based on the melting point only, was derived. This correction factor allowed the fugacity ratios to be estimated from experimental entropies of fusion and melting point with an accuracy better than 0.1-0.2 log units. Copyright 2004 Elsevier Ltd.
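    For orientation, the classic entropy-of-fusion estimate of the fugacity ratio that the paper takes as its starting point is ln F = -(ΔS_fus/R)(T_m/T - 1). The sketch below compares an experimental ΔS_fus with the universal 56 J/(mol·K) value; the melting-point-based correction term derived in the paper is not reproduced here, and the example values are illustrative.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def fugacity_ratio(ds_fus, t_melt_k, t_k=298.15):
        """ln F = -(dS_fus/R) * (Tm/T - 1); assumes a temperature-independent
        entropy of fusion and no heat-capacity term, valid for T < Tm."""
        return np.exp(-(ds_fus / R) * (t_melt_k / t_k - 1.0))

    # high-melting aromatic compound with Tm ~ 489 K (values illustrative)
    print(f"log F, experimental dS_fus:    {np.log10(fugacity_ratio(58.0, 489.0)):.2f}")
    print(f"log F, universal 56 J/(mol K): {np.log10(fugacity_ratio(56.0, 489.0)):.2f}")
    ```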

  14. How often should we expect to be wrong? Statistical power, P values, and the expected prevalence of false discoveries.

    PubMed

    Marino, Michael J

    2018-05-01

    There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
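    The arithmetic behind the abstract's claim can be made explicit: for significance level α, statistical power 1 - β and a fraction π of tested hypotheses that are real effects, the expected false discovery rate is α(1-π) / (α(1-π) + (1-β)π). A short sketch, with an illustrative prior:

    ```python
    def expected_fdr(power, alpha=0.05, prior_true=0.1):
        """Expected false discovery rate in an idealized experiment where a
        fraction `prior_true` of tested hypotheses are real effects."""
        false_pos = alpha * (1 - prior_true)
        true_pos = power * prior_true
        return false_pos / (false_pos + true_pos)

    for power in (0.2, 0.5, 0.8):
        print(f"power = {power:.1f}: expected FDR = {expected_fdr(power):.0%}")
    ```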

  15. Theoretical model of x-ray scattering as a dense matter probe.

    PubMed

    Gregori, G; Glenzer, S H; Rozmus, W; Lee, R W; Landen, O L

    2003-02-01

    We present analytical expressions for the dynamic structure factor, or form factor S(k,omega), which is the quantity describing the x-ray cross section from a dense plasma or a simple liquid. Our results, based on the random phase approximation for the treatment of the charged-particle coupling, can be applied to describe scattering from either weakly coupled classical plasmas or degenerate electron liquids. Our form factor correctly reproduces the Compton energy down-shift and the known Fermi-Dirac electron velocity distribution for S(k,omega) in the case of a cold degenerate plasma. The usual concept of the scattering parameter is also reinterpreted for the degenerate case in order to include the effect of Thomas-Fermi screening. The results shown in this work can be applied to interpreting x-ray scattering in warm dense plasmas occurring in inertial confinement fusion experiments or to the modeling of solid-density matter found in the interior of planets.

  16. A simple method for estimating frequency response corrections for eddy covariance systems

    Treesearch

    W. J. Massman

    2000-01-01

    A simple analytical formula is developed for estimating the frequency attenuation of eddy covariance fluxes due to sensor response, path-length averaging, sensor separation, signal processing, and flux averaging periods. Although it is an approximation based on flat terrain cospectra, this analytical formula should have broader applicability than just flat-terrain...

  17. 25+ Years of the Hubble Space Telescope and a Simple Error That Cost Millions

    ERIC Educational Resources Information Center

    Shakerin, Said

    2016-01-01

    A simple mistake in properly setting up a measuring device caused millions of dollars to be spent in correcting the initial optical failure of the Hubble Space Telescope (HST). This short article is intended as a lesson for a physics laboratory and discussion of errors in measurement.

  18. An efficient algorithm for automatic phase correction of NMR spectra based on entropy minimization

    NASA Astrophysics Data System (ADS)

    Chen, Li; Weng, Zhiqiang; Goh, LaiYoong; Garland, Marc

    2002-09-01

    A new algorithm for automatic phase correction of NMR spectra based on entropy minimization is proposed. The optimal zero-order and first-order phase corrections for an NMR spectrum are determined by minimizing entropy. The objective function is constructed using a Shannon-type information entropy measure, with the entropy computed on the normalized derivative of the NMR spectral data. The algorithm has been successfully applied to experimental 1H NMR spectra. The results of automatic phase correction are found to be comparable to, or perhaps better than, manual phase correction. The advantages of this automatic phase correction algorithm include its simple mathematical basis and its straightforward, reproducible, and efficient optimization procedure. The algorithm is implemented in the Matlab program ACME—Automated phase Correction based on Minimization of Entropy.
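    Although ACME itself is a Matlab program, the core idea (choose zero- and first-order phases that minimize a Shannon-type entropy of the derivative of the real spectrum) is compact enough to sketch. The version below omits ACME's penalty for negative spectral regions and is a simplified reading of the method, not the authors' code.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def derivative_entropy(phases, spec):
        """Shannon-type entropy of the normalized |first derivative| of the
        real part of the phased complex spectrum."""
        phi0, phi1 = phases
        n = spec.size
        ph = phi0 + phi1 * np.arange(n) / n          # zero- + first-order phase
        real = (spec * np.exp(1j * ph)).real
        h = np.abs(np.diff(real))
        p = h / (h.sum() + 1e-12)                    # normalize to a distribution
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    def auto_phase(spec):
        """Return (phi0, phi1) in radians minimizing the entropy objective."""
        res = minimize(derivative_entropy, x0=[0.0, 0.0], args=(spec,),
                       method="Nelder-Mead")
        return res.x

    # usage: spec = np.fft.fft(fid); phi0, phi1 = auto_phase(spec)
    ```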

  19. Tinea atypica.

    PubMed

    Atzori, L; Pau, M; Aste, N

    2013-12-01

    Although usually simple, the diagnosis of dermatophyte infection is sometimes missed. Variations in clinical presentation (tinea atypica) that mimic other skin diseases depend on many factors: partly the characteristics of the dermatophyte itself, and partly a combination of the patient's pathological and even ordinary physiological conditions, such as excessive washing or sun exposure. Misdiagnosis by the physician, and the eventual prescription of steroids or other inappropriate treatments, further induces pathomorphosis (tinea incognito), longstanding disease and delayed recovery. This review describes the morphology of some atypical dermatophyte infections, in an attempt to compare and correlate the changes with the normal features of the disease by site of involvement. The risk factors and predisposing conditions are also analysed to provide a reasoned interpretation of the morphology and thereby raise diagnostic suspicion in atypical cases. Periodic training is key to improving dermatologists' first-sight ability to make the diagnosis, perform the correct assessments and deliver the appropriate therapy in daily practice.

  20. Rapid correction of electron microprobe data for multicomponent metallic systems

    NASA Technical Reports Server (NTRS)

    Gupta, K. P.; Sivakumar, R.

    1973-01-01

    This paper describes an empirical relation for the correction of electron microprobe data for multicomponent metallic systems. It evaluates the empirical correction parameter a for each element in a binary alloy system using a modification of Colby's MAGIC III computer program, and outlines a simple and quick way of correcting the probe data. This technique has been tested on a number of multicomponent metallic systems, and the agreement with results obtained using theoretical expressions is found to be excellent. Limitations and suitability of this relation are discussed, and a model calculation is also presented in the Appendix.

  1. Limiting factors in atomic resolution cryo electron microscopy: No simple tricks

    PubMed Central

    Zhang, Xing; Zhou, Z. Hong

    2013-01-01

    To bring cryo electron microscopy (cryoEM) of large biological complexes to atomic resolution, several factors – in both cryoEM image acquisition and 3D reconstruction – that may be neglected at low resolution become significantly limiting. Here we present thorough analyses of four limiting factors: (a) electron-beam tilt, (b) inaccurate determination of defocus values, (c) focus gradient through particles, and (d) particularly for large particles, dynamic (multiple) scattering of electrons. We also propose strategies to cope with these factors: (a) the divergence and direction tilt components of electron-beam tilt could be reduced by maintaining parallel illumination and by using a coma-free alignment procedure, respectively. Moreover, the effect of all beam tilt components, including spiral tilt, could be eliminated by use of a spherical aberration corrector. (b) More accurate measurement of defocus value could be obtained by imaging areas adjacent to the target area at high electron dose and by measuring the image shift induced by tilting the electron beam. (c) Each known Fourier coefficient in the Fourier transform of a cryoEM image is the sum of two Fourier coefficients of the 3D structure, one on each of two curved ‘characteristic surfaces’ in 3D Fourier space. We describe a simple model-based iterative method that could recover these two Fourier coefficients on the two characteristic surfaces. (d) The effect of dynamic scattering could be corrected by deconvolution of a transfer function. These analyses and our proposed strategies offer useful guidance for future experimental designs targeting atomic resolution cryoEM reconstruction. PMID:21627992

  2. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary-layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing-angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend the model to include cloud-surface interaction using the Poisson model for broken clouds. We found that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km × 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  3. Efficient quantum pseudorandomness with simple graph states

    NASA Astrophysics Data System (ADS)

    Mezher, Rawad; Ghalbouni, Joe; Dgheim, Joseph; Markham, Damian

    2018-02-01

    Measurement-based (MB) quantum computation allows for universal quantum computing by measuring individual qubits prepared in entangled multipartite states, known as graph states. Unless corrected for, the randomness of the measurements leads to the generation of ensembles of random unitaries, where each random unitary is identified with a string of possible measurement results. We show that repeating an MB scheme an efficient number of times, on a simple graph state, with measurements at fixed angles and no feedforward corrections, produces a random unitary ensemble that is an ε-approximate t-design on n qubits. Unlike previous constructions, the graph is regular and is also a universal resource for measurement-based quantum computing, closely related to the brickwork state.

  4. The Potential of Automated Corrective Feedback to Remediate Cohesion Problems in Advanced Students' Writing

    ERIC Educational Resources Information Center

    Strobl, Carola

    2017-01-01

    This study explores the potential of a feedback environment using simple string-based pattern-matching technology for the provision of automated corrective feedback on cohesion problems. Thirty-eight high-frequency problems, including non-target-like use of connectives and co-references, were addressed, providing both direct and indirect feedback.…

  5. A Bayesian approach to truncated data sets: An application to Malmquist bias in Supernova Cosmology

    NASA Astrophysics Data System (ADS)

    March, Marisa Cristina

    2018-01-01

    A problem commonly encountered in statistical analysis is that of truncated data sets. A truncated data set is one in which a number of data points are completely missing from a sample; this is in contrast to a censored sample, in which partial information is missing from some data points. In astrophysics this problem is commonly seen in a magnitude-limited survey, where the survey is incomplete at fainter magnitudes; that is, certain faint objects are simply not observed. The effect of this 'missing data' is manifested as Malmquist bias and can result in biased parameter inference if it is not accounted for. In Frequentist methodologies the Malmquist bias is often corrected for by analysing many simulations and computing the appropriate correction factors. One problem with this methodology is that the corrections are model dependent. In this poster we derive a Bayesian methodology for accounting for truncated data sets in problems of parameter inference and model selection. We first demonstrate the methodology on a simple Gaussian linear model and then show how to account for a truncated data set in cosmological parameter inference with a magnitude-limited supernova Ia survey.
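    The key Bayesian ingredient for a truncated sample is that each observed point's likelihood must be renormalized by the probability that it survives the selection cut. A minimal Gaussian sketch (the poster's supernova application is not reproduced; the numbers are synthetic):

    ```python
    import numpy as np
    from scipy.stats import norm

    def truncated_loglike(mu, data, m_lim, sigma=1.0):
        """Log-likelihood for N(mu, sigma) data observed only when x < m_lim;
        dividing by the detection probability corrects the Malmquist-like bias."""
        return np.sum(norm.logpdf(data, mu, sigma) - norm.logcdf(m_lim, mu, sigma))

    rng = np.random.default_rng(1)
    full = rng.normal(24.0, 1.0, 100_000)       # true population of magnitudes
    seen = full[full < 24.5]                    # magnitude-limited sample
    grid = np.linspace(23.0, 25.0, 401)
    mle = grid[np.argmax([truncated_loglike(m, seen, 24.5) for m in grid])]
    print(f"naive mean = {seen.mean():.3f}, truncated-likelihood estimate = {mle:.3f}")
    ```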

  6. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    NASA Astrophysics Data System (ADS)

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-01

    In this article we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We discuss the limitations of the methods presented in these papers, specifically considering multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we describe the simplest and most straightforward methods to implement those corrections.
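    The baseline that both papers build on is the binomial unfolding of factorial moments, where a detection efficiency ε rescales the k-th factorial moment by ε^k. A minimal sketch for the first two cumulants (the multiplicity-dependent and nonbinomial generalizations discussed in the article are not attempted here):

    ```python
    import numpy as np

    def corrected_mean_var(counts, eff):
        """Unfold a constant binomial efficiency from the first two factorial
        moments: <N> = <n>/eff and <N(N-1)> = <n(n-1)>/eff**2."""
        n = np.asarray(counts, dtype=float)
        f1 = n.mean() / eff
        f2 = (n * (n - 1)).mean() / eff**2
        return f1, f2 + f1 - f1**2               # mean and variance of true N

    rng = np.random.default_rng(2)
    true_n = rng.poisson(10.0, 200_000)
    meas_n = rng.binomial(true_n, 0.65)          # binomial detection, eff = 0.65
    print(corrected_mean_var(meas_n, 0.65))      # ~(10.0, 10.0) for a Poisson source
    ```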

  7. Multiplicity-dependent and nonbinomial efficiency corrections for particle number cumulants

    DOE PAGES

    Bzdak, Adam; Holzmann, Romain; Koch, Volker

    2016-12-19

    Here, we extend previous work on efficiency corrections for cumulant measurements [Bzdak and Koch, Phys. Rev. C 86, 044904 (2012), 10.1103/PhysRevC.86.044904; Phys. Rev. C 91, 027901 (2015), 10.1103/PhysRevC.91.027901]. We then discuss the limitations of the methods presented in these papers, specifically considering multiplicity-dependent efficiencies as well as nonbinomial efficiency distributions, and we describe the simplest and most straightforward methods to implement those corrections.

  8. Quantum Corrections to the 'Atomistic' MOSFET Simulations

    NASA Technical Reports Server (NTRS)

    Asenov, Asen; Slavcheva, G.; Kaya, S.; Balasubramaniam, R.

    2000-01-01

    We have introduced, in a simple and efficient manner, quantum mechanical corrections in our 3D 'atomistic' MOSFET simulator using the density gradient formalism. We have studied, in comparison with classical simulations, the effect of the quantum mechanical corrections on the simulation of random-dopant-induced threshold voltage fluctuations, the effect of single-charge trapping on interface states and the effect of oxide thickness fluctuations in decanano MOSFETs with ultrathin gate oxides. The introduction of quantum corrections enhances the threshold voltage fluctuations but does not significantly affect the amplitude of the random telegraph noise associated with single-carrier trapping. The importance of the quantum corrections for proper simulation of oxide thickness fluctuation effects has also been demonstrated.

  9. Bioluminescence Risk Detection Aid

    DTIC Science & Technology

    2010-01-01

    Delivery Vehicle, or diver) bioluminescence, based on local environmental data, in-situ measurements, and simple radiative transfer models. This work...vehicle diving to 5.5 m. Green = REMUS vehicle diving to 6.5 m. Observations were corrected for the angle of observation. IMPACT/APPLICATIONS...will sense vehicle-stimulated bioluminescence, measure local environmental conditions and ingest the information to solve a simple radiative transfer

  10. A Simple Spreadsheet Program for the Calculation of Lattice-Site Distributions

    ERIC Educational Resources Information Center

    McCaffrey, John G.

    2009-01-01

    A simple spreadsheet program is presented that can be used by undergraduate students to calculate the lattice-site distributions in solids. A major strength of the method is the natural way in which the correct number of ions or atoms are present, or absent, at specific lattice distances. The expanding-cube method utilized is straightforward to…

  11. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:

      88 dB(A)   Uncorrected average of readings
      −3 dB(A)   Distance correction factor
      +2 dB(A)   Ground surface correction factor
      _________
      87 dB(A)   Corrected reading
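    The regulation's correction is purely additive in dB(A), as the worked figures above show:

    ```python
    # Worked example of 49 CFR 325.79: additive correction factors in dB(A).
    uncorrected_avg = 88      # uncorrected average of sound-level readings
    distance_corr   = -3      # distance correction factor
    ground_corr     = +2      # ground surface correction factor
    print(f"corrected reading = {uncorrected_avg + distance_corr + ground_corr} dB(A)")
    # -> corrected reading = 87 dB(A)
    ```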

  12. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:

      88 dB(A)   Uncorrected average of readings
      −3 dB(A)   Distance correction factor
      +2 dB(A)   Ground surface correction factor
      _________
      87 dB(A)   Corrected reading

  13. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:

      88 dB(A)   Uncorrected average of readings
      −3 dB(A)   Distance correction factor
      +2 dB(A)   Ground surface correction factor
      _________
      87 dB(A)   Corrected reading

  14. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental technique can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions, and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived from small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived from calculations with a geant4 Monte Carlo model of the detectors and the Linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors, except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.

  15. Good reasons to implement quality assurance in nationwide breast cancer screening programs in Croatia and Serbia: results from a pilot study.

    PubMed

    Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje

    2011-04-01

    The purpose of this study is to investigate the need for, and the possible achievements of, a comprehensive QA programme, and to examine the effects of simple corrective actions on image quality in Croatia and Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was an initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in a selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented, and the same parameters were re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as they related to working habits in the mammography units, such as film processing and darkroom conditions. It has been demonstrated how a simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters such as optical density (OD), gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to a larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.

  16. Partial volume correction using cortical surfaces

    NASA Astrophysics Data System (ADS)

    Blaasvær, Kamille R.; Haubro, Camilla D.; Eskildsen, Simon F.; Borghammer, Per; Otzen, Daniel; Ostergaard, Lasse R.

    2010-03-01

    Partial volume effect (PVE) in positron emission tomography (PET) leads to inaccurate estimation of regional metabolic activity among neighbouring tissues with different tracer concentrations. This may be one of the main limiting factors in the utilization of PET in clinical practice. Partial volume correction (PVC) methods have been widely studied to address this issue, and MRI-based PVC methods are well established [1]. Their performance depends on the quality of the co-registration of the MR and PET datasets, on the correctness of the estimated point-spread function (PSF) of the PET scanner and, largely, on the performance of the segmentation method that divides the brain into tissue compartments [1, 2]. In the present study a method for PVC is suggested that utilizes cortical surfaces to obtain detailed anatomical information. The objectives are to improve the performance of PVC, to facilitate a study of the relationship between metabolic activity in the cerebral cortex and cortical thickness, and to obtain an improved visualization of PET data. The gray matter metabolic activity after PVC was recovered to 99.7-99.8% of the true activity when testing on simple simulated data with different PSFs, and to 97.9-100% when testing on simulated brain PET data at different cortical thicknesses. When studying the relationship between metabolic activity and anatomical structures, it was shown on simulated brain PET data that it is important to correct for PVE in order to recover the true relationship.
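    One common way to realize such a correction is a recovery coefficient: smooth the anatomical tissue mask with the scanner PSF, and divide the measured regional mean by the fraction of signal retained. The sketch below is a generic region-based PVC illustration under that assumption, not the surface-based method of the paper.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pvc_region_mean(pet, mask, psf_sigma_vox):
        """Recovery-coefficient PVC: the mean of the PSF-smoothed mask inside
        the region approximates the retained fraction of true signal there."""
        rc = gaussian_filter(mask.astype(float), psf_sigma_vox)[mask > 0].mean()
        return pet[mask > 0].mean() / rc

    # toy 2D demo: a 4-voxel-wide 'cortex' strip with true uptake 1.0
    mask = np.zeros((64, 64)); mask[30:34, :] = 1
    pet = gaussian_filter(mask * 1.0, 2.0)        # scanner PSF blurs the truth
    print(pvc_region_mean(pet, mask, 2.0))        # ~1.0 after correction
    ```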

  17. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may be only weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian Truebeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian Truebeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. The percent change of the correction factors for the chambers is presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates 5% and 3% differences in the 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factor was substantially similar and may be useful for class-specific reference conditions.

  18. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research

    PubMed Central

    Golino, Hudson F.; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques noted above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study. PMID:28594839

  19. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.

    PubMed

    Golino, Hudson F; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques noted above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using walktrap, a random-walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.
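    The EGA pipeline is straightforward to prototype: estimate a sparse partial-correlation network with the graphical lasso, then count communities in the resulting graph. The sketch below substitutes cross-validated regularization for the paper's EBIC tuning and greedy modularity for walktrap, so it is an approximation of the idea rather than the authors' exact pipeline.

    ```python
    import numpy as np
    import networkx as nx
    from networkx.algorithms import community
    from sklearn.covariance import GraphicalLassoCV

    def ega_dimensions(data):
        """Estimate the number of dimensions as the number of communities in a
        regularized partial-correlation network (rows = subjects, cols = items)."""
        prec = GraphicalLassoCV().fit(data).precision_
        d = np.sqrt(np.diag(prec))
        pcor = -prec / np.outer(d, d)            # partial correlations
        np.fill_diagonal(pcor, 0.0)
        g = nx.from_numpy_array(np.abs(pcor))
        comms = community.greedy_modularity_communities(g, weight="weight")
        return len(comms)

    # usage: n_dims = ega_dimensions(item_scores)
    ```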

  20. Geometric correction of satellite data using curvilinear features and virtual control points

    NASA Technical Reports Server (NTRS)

    Algazi, V. R.; Ford, G. E.; Meyer, D. I.

    1979-01-01

    A simple, yet effective procedure for the geometric correction of partial Landsat scenes is described. The procedure is based on the acquisition of actual and virtual control points from the line printer output of enhanced curvilinear features. The accuracy of this method compares favorably with that of the conventional approach in which an interactive image display system is employed.

  1. Survey Response-Related Biases in Contingent Valuation: Concepts, Remedies, and Empirical Application to Valuing Aquatic Plant Management

    Treesearch

    Mark L. Messonnier; John C. Bergstrom; Chrisopher M. Cornwell; R. Jeff Teasley; H. Ken Cordell

    2000-01-01

    Simple nonresponse and selection biases that may occur in survey research such as contingent valuation applications are discussed and tested. Correction mechanisms for these types of biases are demonstrated. Results indicate the importance of testing and correcting for unit and item nonresponse bias in contingent valuation survey data. When sample nonresponse and...

  2. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are: simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005) and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, simple bias correction of the best model is commonly much more rewarding than using multimodel raw forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
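    The simplest of the compared techniques, additive bias correction, just removes the mean forecast error learned over a training period. A minimal sketch with synthetic numbers (the TIGGE data themselves are not reproduced):

    ```python
    import numpy as np

    def bias_correct(forecast, obs_train, fcst_train):
        """Subtract the mean training-period error from a new forecast."""
        return forecast - np.mean(fcst_train - obs_train)

    obs  = np.array([3.1, 4.0, 4.8, 5.2])       # ETo observations, mm/day
    fcst = obs + 0.7                            # model runs 0.7 mm/day too high
    print(bias_correct(5.9, obs, fcst))         # -> 5.2
    ```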

  3. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  4. Possible Explanation of the Different Temporal Behaviors of Various Classes of Sunspot Groups

    NASA Astrophysics Data System (ADS)

    Gao, Peng-Xin; Li, Ke-Jun; Li, Fu-Yu

    2017-09-01

    In order to investigate the periodicity and long-term trends of various classes of sunspot groups (SGs), we separated SGs into two categories: simple SGs (A/U ≤ 4.5, where A represents the total corrected whole spot area of the group in millionths of the solar hemisphere (msh), and U represents the total corrected umbral area of the group in msh); and complex SGs (A/U > 6.2). Based on the revised version of the Greenwich Photoheliographic Results sunspot catalogue, we investigated the periodic behaviors and long-term trends of simple and complex SGs from 1875 to 1976 using the Hilbert-Huang Transform method, and we confirm that the temporal behaviors of simple and complex SGs are quite different. Our main findings are as follows. i) For simple and complex SGs, the values of the Schwabe cycle wax and wane, following the solar activity cycle. ii) There are significant phase differences (almost antiphase) between the periodicity of 53.50 ± 3.79 years extracted from yearly simple SG numbers and the periodicity of 56.21 ± 2.92 years extracted from yearly complex SG numbers. iii) The adaptive trends of yearly simple and complex SG numbers are also quite different: for simple SGs, the values of the adaptive trend gradually increase during the time period of 1875 - 1949, then they decrease gradually from 1949 to 1976, similar to the rise and the maximum phase of a sine curve; for complex SGs, the values of the adaptive trend first slowly increase and then quickly increase, similar to the minimum and rise phase of a sine curve.

  5. The main beam correction term in kinetic energy release from metastable peaks.

    PubMed

    Petersen, Allan Christian

    2017-12-01

    The correction term for the precursor ion signal width in determination of kinetic energy release is reviewed, and the correction term is formally derived. The derived correction term differs from the traditionally applied term. An experimental finding substantiates the inaccuracy in the latter. The application of the "T-value" to study kinetic energy release is found preferable to kinetic energy release distributions when the metastable peaks are slim and simple Gaussians. For electronically predissociated systems, a "borderline zero" kinetic energy release can be directly interpreted in reaction dynamics with strong curvature in the reaction coordinate. Copyright © 2017 John Wiley & Sons, Ltd.

  6. Radiated BPF sound measurement of centrifugal compressor

    NASA Astrophysics Data System (ADS)

    Ohuchida, S.; Tanaka, K.

    2013-12-01

    A technique to measure the radiated BPF (blade-passing frequency) sound from an automotive turbocharger compressor impeller is proposed in this paper. Where there is high-level background noise in the measurement environment, it is difficult to discriminate the target component from the background. Since the BPF sound measurements in this study were made in a room with such conditions, no discrete BPF peak was initially found in the sound spectrum. Taking its directionality into consideration, a microphone covered with a parabolic cone was selected, and with this technique the discrete BPF peak was clearly observed. Since the level of the measured sound was amplified by the area-integration effect, a correction was needed to obtain the real level. To do so, sound measurements with and without the parabolic cone were conducted for a fixed source, and their level differences were used as correction factors. The sound propagation mechanism is considered using the measured BPF as well as the result of a simple model experiment. The present method is generally applicable to sound measurements conducted with a high level of background noise.

  7. Read Code Quality Assurance

    PubMed Central

    Schulz, Erich; Barrett, James W.; Price, Colin

    1998-01-01

    As controlled clinical vocabularies assume an increasing role in modern clinical information systems, so the issue of their quality demands greater attention. In order to meet the resulting stringent criteria for completeness and correctness, a quality assurance system comprising a database of more than 500 rules is being developed and applied to the Read Thesaurus. The authors discuss the requirement to apply quality assurance processes to their dynamic editing database in order to ensure the quality of exported products. Sources of errors include human, hardware, and software factors as well as new rules and transactions. The overall quality strategy includes prevention, detection, and correction of errors. The quality assurance process encompasses simple data specification, internal consistency, inspection procedures and, eventually, field testing. The quality assurance system is driven by a small number of tables and UNIX scripts, with “business rules” declared explicitly as Structured Query Language (SQL) statements. Concurrent authorship, client-server technology, and an initial failure to implement robust transaction control have all provided valuable lessons. The feedback loop for error management needs to be short. PMID:9670131

  8. More sound of church bells: Authors' correction

    NASA Astrophysics Data System (ADS)

    Vogt, Patrik; Kasper, Lutz; Burde, Jan-Philipp

    2016-01-01

    In the recently published article "The Sound of Church Bells: Tracking Down the Secret of a Traditional Arts and Crafts Trade," the bell frequencies were erroneously oversimplified. The problem affects Eqs. (2) and (3), which were derived from the elementary "coffee mug model" and in which we used the speed of sound in air. However, this does not make sense from a physical point of view, since air only acts as a sound carrier, not as a sound source, in the case of bells. Due to the excellent fit of the theoretical model to the empirical data, we unfortunately failed to notice this error before publication. All other equations, e.g., the introduction of the correction factor in Eq. (4) and the estimation of the mass in Eqs. (5) and (6), are not affected by this error, since they represent empirical models. It is nevertheless unfortunate to introduce the speed of sound in air as a constant in Eqs. (4) and (6). Instead, we suggest a simple rule of thumb relating the radius of a church bell R to its humming frequency f(hum).

  9. Interactions between Type of Instruction and Type of Language Feature: A Meta-Analysis

    ERIC Educational Resources Information Center

    Spada, Nina; Tomita, Yasuyo

    2010-01-01

    A meta-analysis was conducted to investigate the effects of explicit and implicit instruction on the acquisition of simple and complex grammatical features in English. The target features in the 41 studies contributing to the meta-analysis were categorized as simple or complex based on the number of criteria applied to arrive at the correct target…

  10. Preparing "Chameleon Balls" from Natural Plants: Simple Handmade pH Indicator and Teaching Material for Chemical Education

    NASA Astrophysics Data System (ADS)

    1996-05-01

    Some of the structures in Figure 1 from "Preparing "Chameleon Balls" from Natural Plants: Simple Handmade pH Indicator and Teaching Material for Chemical Education"[Kanda, N.; Asano, T.; Itoh, T.; Onoda, M. J. Chem. Educ. 1995, 72, 1131] were incorrect due to a staff error. The correct figure appears below.

  11. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    NASA Astrophysics Data System (ADS)

    Thieberger, P.; Gassner, D.; Hulsart, R.; Michnoff, R.; Miller, T.; Minty, M.; Sorrell, Z.; Bartnik, A.

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.
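    For the ideal geometry the abstract describes (a pencil beam inside a cylinder with four infinitesimal pickups), the wall image-charge density is S(φ) ∝ (1 − a²)/(1 + a² − 2a cos(φ − θ)) with a = r/R, and the usual difference-over-sum ratios invert in closed form. The sketch below implements that closed-form inversion; whether it coincides in detail with the paper's algorithm is an assumption.

    ```python
    import numpy as np

    def bpm_position(s_r, s_t, s_l, s_b, radius):
        """Exact pencil-beam inversion for an ideal cylindrical BPM with four
        infinitesimal PUEs at 0/90/180/270 degrees: u = 2a cos(t)/(1+a^2) and
        v = 2a sin(t)/(1+a^2) solve in closed form for the beam (x, y)."""
        u = (s_r - s_l) / (s_r + s_l)
        v = (s_t - s_b) / (s_t + s_b)
        s = np.hypot(u, v)
        if s == 0.0:
            return 0.0, 0.0
        a = (1.0 - np.sqrt(1.0 - s * s)) / s     # solves 2a/(1+a^2) = s
        scale = radius * (1.0 + a * a) / 2.0
        return scale * u, scale * v

    # check with a large-offset beam: signals from the image-charge density
    R, x0, y0 = 30.0, 12.0, -7.0
    a0, th = np.hypot(x0, y0) / R, np.arctan2(y0, x0)
    sig = lambda phi: (1 - a0**2) / (1 + a0**2 - 2 * a0 * np.cos(phi - th))
    print(bpm_position(sig(0), sig(np.pi/2), sig(np.pi), sig(3*np.pi/2), R))  # ~(12, -7)
    ```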

  12. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE PAGES

    Thieberger, Peter; Gassner, D.; Hulsart, R.; ...

    2018-04-25

    Here, a simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite-width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  13. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thieberger, Peter; Gassner, D.; Hulsart, R.

    Here, a simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite-width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, an FPGA-based BPM readout implementation of the new algorithm has been developed and characterized. Lastly, the algorithm is tested with BPM data from the Cornell Preinjector.

  14. Fast readout algorithm for cylindrical beam position monitors providing good accuracy for particle bunches with large offsets.

    PubMed

    Thieberger, P; Gassner, D; Hulsart, R; Michnoff, R; Miller, T; Minty, M; Sorrell, Z; Bartnik, A

    2018-04-01

    A simple, analytically correct algorithm is developed for calculating "pencil" relativistic beam coordinates using the signals from an ideal cylindrical particle beam position monitor (BPM) with four pickup electrodes (PUEs) of infinitesimal widths. The algorithm is then applied to simulations of realistic BPMs with finite width PUEs. Surprisingly small deviations are found. Simple empirically determined correction terms reduce the deviations even further. The algorithm is then tested with simulations for non-relativistic beams. As an example of the data acquisition speed advantage, a Field Programmable Gate Array-based BPM readout implementation of the new algorithm has been developed and characterized. Finally, the algorithm is tested with BPM data from the Cornell Preinjector.

  15. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers

    PubMed Central

    Dobie, Robert A; Wojcik, Nancy C

    2015-01-01

    Objectives: The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999–2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Methods: Regression analysis was used to derive new age-correction values using audiometric data from the 1999–2006 US NHANES. Using the NHANES median better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20–75 years. Results: The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20–75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61–75 years. Conclusions: Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. PMID:26169804

  16. Age correction in monitoring audiometry: method to update OSHA age-correction tables to include older workers.

    PubMed

    Dobie, Robert A; Wojcik, Nancy C

    2015-07-13

    The US Occupational Safety and Health Administration (OSHA) Noise Standard provides the option for employers to apply age corrections to employee audiograms to consider the contribution of ageing when determining whether a standard threshold shift has occurred. Current OSHA age-correction tables are based on 40-year-old data, with small samples and an upper age limit of 60 years. By comparison, recent data (1999-2006) show that hearing thresholds in the US population have improved. Because hearing thresholds have improved, and because older people are increasingly represented in noisy occupations, the OSHA tables no longer represent the current US workforce. This paper presents 2 options for updating the age-correction tables and extending values to age 75 years using recent population-based hearing survey data from the US National Health and Nutrition Examination Survey (NHANES). Both options provide scientifically derived age-correction values that can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Regression analysis was used to derive new age-correction values using audiometric data from the 1999-2006 US NHANES. Using the NHANES median, better-ear thresholds fit to simple polynomial equations, new age-correction values were generated for both men and women for ages 20-75 years. The new age-correction values are presented as 2 options. The preferred option is to replace the current OSHA tables with the values derived from the NHANES median better-ear thresholds for ages 20-75 years. The alternative option is to retain the current OSHA age-correction values up to age 60 years and use the NHANES-based values for ages 61-75 years. Recent NHANES data offer a simple solution to the need for updated, population-based, age-correction tables for OSHA. The options presented here provide scientifically valid and relevant age-correction values which can be easily adopted by OSHA to expand their regulatory guidance to include older workers. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
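    The mechanics of the proposed update are simple: fit the NHANES median better-ear thresholds to a polynomial in age, and take the age correction as the difference between the fitted threshold at the current and the baseline audiogram ages. The sketch below uses placeholder thresholds, not the NHANES values.

    ```python
    import numpy as np

    # hypothetical median better-ear thresholds at 4 kHz (dB HL) by age
    ages  = np.array([20, 30, 40, 50, 60, 70, 75])
    thr4k = np.array([1.0, 3.0, 7.0, 13.0, 21.0, 32.0, 38.0])

    coef = np.polyfit(ages, thr4k, 3)           # simple polynomial fit, as in the paper

    def age_correction(age_now, age_baseline):
        """Expected ageing-related threshold change between two audiogram ages."""
        return np.polyval(coef, age_now) - np.polyval(coef, age_baseline)

    print(f"correction, age 35 -> 62: {age_correction(62, 35):.1f} dB")
    ```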

  17. A single-scattering correction for the seismo-acoustic parabolic equation.

    PubMed

    Collins, Michael D

    2012-04-01

    An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.

  18. Array-based satellite phase bias sensing: theory and GPS/BeiDou/QZSS results

    NASA Astrophysics Data System (ADS)

    Khodabandeh, A.; Teunissen, P. J. G.

    2014-09-01

    Single-receiver integer ambiguity resolution (IAR) is a measurement concept that makes use of network-derived non-integer satellite phase biases (SPBs), among other corrections, to recover and resolve the integer ambiguities of the carrier-phase data of a single GNSS receiver. If it is realized, the very precise integer ambiguity-resolved carrier-phase data would then contribute to the estimation of the receiver’s position, thus making (near) real-time precise point positioning feasible. Proper definition and determination of the SPBs take a leading part in developing the idea of single-receiver IAR. In this contribution, the concept of array-based between-satellite single-differenced (SD) SPB determination is introduced, which is aimed to reduce the code-dominated precision of the SD-SPB corrections. The underlying model is realized by giving the role of the local reference network to an array of antennas, mounted on rigid platforms, that are separated by short distances so that the same ionospheric delay is assumed to be experienced by all the antennas. To that end, a closed-form expression of the array-aided SD-SPB corrections is presented, thereby proposing a simple strategy to compute the SD-SPBs. After resolving double-differenced ambiguities of the array’s data, the variance of the SD-SPB corrections is shown to be reduced by a factor equal to the number of antennas. This improvement in precision is also affirmed by numerical results of the three GNSSs GPS, BeiDou and QZSS. Experimental results demonstrate that the integer-recovered ambiguities converge to integers faster, upon increasing the number of antennas aiding the SD-SPB corrections.
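    The advertised precision gain is the familiar 1/n variance reduction from averaging n independent, equally noisy estimates: here, one code-dominated SD-SPB estimate per antenna of the array. A quick numerical confirmation (all numbers synthetic):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    true_spb, noise, n_trials = 0.123, 0.05, 20_000   # SPB in cycles, arbitrary

    for n_ant in (1, 4, 9):
        # each antenna contributes an independent SD-SPB estimate; the array
        # average has its variance reduced by the number of antennas
        est = true_spb + noise * rng.standard_normal((n_trials, n_ant))
        print(f"{n_ant} antennas: std = {est.mean(axis=1).std():.4f} "
              f"(1/sqrt(n) prediction: {noise / np.sqrt(n_ant):.4f})")
    ```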

  19. Improved multidimensional semiclassical tunneling theory.

    PubMed

    Wagner, Albert F

    2013-12-12

    We show that the analytic multidimensional semiclassical tunneling formula of Miller et al. [Miller, W. H.; Hernandez, R.; Handy, N. C.; Jayatilaka, D.; Willets, A. Chem. Phys. Lett. 1990, 172, 62] is qualitatively incorrect for deep tunneling at energies well below the top of the barrier. The origin of this deficiency is that the formula uses an effective barrier weakly related to the true energetics but correctly adjusted to reproduce the harmonic description and anharmonic corrections of the reaction path at the saddle point as determined by second order vibrational perturbation theory. We present an analytic improved semiclassical formula that correctly includes energetic information and allows a qualitatively correct representation of deep tunneling. This is done by constructing a three segment composite Eckart potential that is continuous everywhere in both value and derivative. This composite potential has an analytic barrier penetration integral from which the semiclassical action can be derived and then used to define the semiclassical tunneling probability. The middle segment of the composite potential by itself is superior to the original formula of Miller et al. because it incorporates the asymmetry of the reaction barrier produced by the known reaction exoergicity. Comparison of the semiclassical and exact quantum tunneling probability for the pure Eckart potential suggests a simple threshold multiplicative factor to the improved formula to account for quantum effects very near threshold not represented by semiclassical theory. The deep tunneling limitations of the original formula are echoed in semiclassical high-energy descriptions of bound vibrational states perpendicular to the reaction path at the saddle point. However, typically ab initio energetic information is not available to correct it. The Supporting Information contains a Fortran code, test input, and test output that implements the improved semiclassical tunneling formula.
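    For context, the exact transmission through the symmetric Eckart barrier V(x) = V0/cosh²(αx) is available in closed form, which is what makes Eckart segments attractive building blocks for an analytic barrier-penetration integral. The sketch below evaluates that textbook formula in arbitrary units; the paper's asymmetric three-segment composite potential is not reproduced.

    ```python
    import numpy as np

    HBAR = 1.0  # reduced Planck constant in the chosen unit system

    def eckart_transmission(energy, v0, alpha, mass=1.0):
        """Exact transmission for V(x) = v0 / cosh(alpha*x)**2 (textbook
        result; assumes 8*m*v0/(hbar*alpha)**2 > 1)."""
        k = np.sqrt(2.0 * mass * energy) / HBAR
        chi = 8.0 * mass * v0 / (HBAR * alpha) ** 2
        s = np.sinh(np.pi * k / alpha) ** 2
        c = np.cosh(0.5 * np.pi * np.sqrt(chi - 1.0)) ** 2
        return s / (s + c)

    for e_ratio in (0.2, 0.5, 1.0, 1.5):          # barrier height v0 = 1
        print(f"E/V0 = {e_ratio:.1f}: T = {eckart_transmission(e_ratio, 1.0, 1.0):.3e}")
    ```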

  20. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors ([Formula: see text]) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of [Formula: see text] to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work lead to important conclusions for the use of detector-specific beam correction factors ([Formula: see text]) in a treatment planning system. The use of [Formula: see text] for total scatter factors has an important impact on monitor unit calculation. In contrast, the use of [Formula: see text] for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.

  1. Simple motion correction strategy reduces respiratory-induced motion artifacts for k-t accelerated and compressed-sensing cardiovascular magnetic resonance perfusion imaging.

    PubMed

    Zhou, Ruixi; Huang, Wei; Yang, Yang; Chen, Xiao; Weller, Daniel S; Kramer, Christopher M; Kozerke, Sebastian; Salerno, Michael

    2018-02-01

    Cardiovascular magnetic resonance (CMR) stress perfusion imaging provides important diagnostic and prognostic information in coronary artery disease (CAD). Current clinical sequences have limited temporal and/or spatial resolution, and incomplete heart coverage. Techniques such as k-t principal component analysis (PCA) or k-t sparsity and low-rank structure (SLR), which rely on the high degree of spatiotemporal correlation in first-pass perfusion data, can significantly accelerate image acquisition, mitigating these problems. However, in the presence of respiratory motion, these techniques can suffer from significant degradation of image quality. A number of techniques based on non-rigid registration have been developed. However, to first approximation, breathing motion predominantly results in rigid motion of the heart. To this end, a simple robust motion correction strategy is proposed for k-t accelerated and compressed sensing (CS) perfusion imaging. A simple respiratory motion compensation (MC) strategy for k-t accelerated and compressed-sensing CMR perfusion imaging to selectively correct respiratory motion of the heart was implemented based on linear k-space phase shifts derived from rigid motion registration of a region-of-interest (ROI) encompassing the heart. A variable density Poisson disk acquisition strategy was used to minimize coherent aliasing in the presence of respiratory motion, and images were reconstructed using k-t PCA and k-t SLR with or without motion correction. The strategy was evaluated in a CMR-extended cardiac torso digital (XCAT) phantom and in prospectively acquired first-pass perfusion studies in 12 subjects undergoing clinically ordered CMR studies. Phantom studies were assessed using the Structural Similarity Index (SSIM) and Root Mean Square Error (RMSE). In patient studies, image quality was scored in a blinded fashion by two experienced cardiologists. In the phantom experiments, images reconstructed with the MC strategy had higher SSIM (p < 0.01) and lower RMSE (p < 0.01) in the presence of respiratory motion. For patient studies, the MC strategy improved k-t PCA and k-t SLR reconstruction image quality (p < 0.01). The performance of k-t SLR without motion correction demonstrated improved image quality as compared to k-t PCA in the setting of respiratory motion (p < 0.01), while with motion correction there was a trend toward better performance of k-t SLR compared with motion-corrected k-t PCA. Our simple and robust rigid motion compensation strategy greatly reduces motion artifacts and improves image quality for standard k-t PCA and k-t SLR techniques in the setting of respiratory motion due to imperfect breath-holding.
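
    The rigid-motion correction described above rests on the Fourier shift theorem: an in-plane translation of the heart corresponds to a linear phase ramp in k-space. A minimal numpy sketch of that one ingredient (not the authors' k-t PCA/SLR reconstruction pipeline):

        import numpy as np

        def kspace_shift(kspace, dy, dx):
            """Apply the linear phase ramp that shifts the image by (dy, dx) pixels."""
            ny, nx = kspace.shape
            ky = np.fft.fftfreq(ny)[:, None]          # cycles/pixel
            kx = np.fft.fftfreq(nx)[None, :]
            ramp = np.exp(-2j * np.pi * (ky * dy + kx * dx))
            return kspace * ramp

        # demo: shift an image 3 pixels down and 5 right, then undo it in k-space
        img = np.zeros((64, 64)); img[20:30, 20:30] = 1.0
        k = np.fft.fft2(np.roll(img, (3, 5), axis=(0, 1)))   # "respiratory" shift
        corrected = np.fft.ifft2(kspace_shift(k, -3, -5)).real
        print(np.allclose(corrected, img, atol=1e-9))        # True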

  2. Learning versus correct models: influence of model type on the learning of a free-weight squat lift.

    PubMed

    McCullagh, P; Meyer, K N

    1997-03-01

    It has been assumed that demonstrating the correct movement is the best way to impart task-relevant information. However, empirical verification with simple laboratory skills has shown that using a learning model (showing an individual in the process of acquiring the skill to be learned) may accelerate skill acquisition and increase retention more than using a correct model. The purpose of the present study was to compare the effectiveness of viewing correct versus learning models on the acquisition of a sport skill (free-weight squat lift). Forty female participants were assigned to four learning conditions: physical practice receiving feedback, learning model with model feedback, correct model with model feedback, and learning model without model feedback. Results indicated that viewing either a correct or learning model was equally effective in learning correct form in the squat lift.

  3. Dispersion-correcting potentials can significantly improve the bond dissociation enthalpies and noncovalent binding energies predicted by density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DiLabio, Gino A., E-mail: Gino.DiLabio@nrc.ca; Department of Chemistry, University of British Columbia, Okanagan, 3333 University Way, Kelowna, British Columbia V1V 1V7; Koleini, Mohammad

    2014-05-14

    Dispersion-correcting potentials (DCPs) are atom-centered Gaussian functions that are applied in a manner that is similar to effective core potentials. Previous work on DCPs has focussed on their use as a simple means of improving the ability of conventional density-functional theory methods to predict the binding energies of noncovalently bonded molecular dimers. We show in this work that DCPs developed for use with the LC-ωPBE functional along with 6-31+G(2d,2p) basis sets are capable of simultaneously improving predicted noncovalent binding energies of van der Waals dimer complexes and covalent bond dissociation enthalpies in molecules. Specifically, the DCPs developed herein for the C, H, N, and O atoms provide binding energies for a set of 66 noncovalently bonded molecular dimers (the “S66” set) with a mean absolute error (MAE) of 0.21 kcal/mol, which represents an improvement of more than a factor of 10 over unadorned LC-ωPBE/6-31+G(2d,2p) and almost a factor of two improvement over LC-ωPBE/6-31+G(2d,2p) used in conjunction with the “D3” pairwise dispersion energy corrections. In addition, the DCPs reduce the MAE of calculated X-H and X-Y (X,Y = C, H, N, O) bond dissociation enthalpies for a set of 40 species from 3.2 kcal/mol obtained with unadorned LC-ωPBE/6-31+G(2d,2p) to 1.6 kcal/mol. Our findings demonstrate that broad improvements to the performance of DFT methods may be achievable through the use of DCPs.

  4. Spatial scaling of net primary productivity using subpixel landcover information

    NASA Astrophysics Data System (ADS)

    Chen, X. F.; Chen, Jing M.; Ju, Wei M.; Ren, L. L.

    2008-10-01

    Gridding the land surface into coarse homogeneous pixels may introduce important biases into ecosystem model estimates of carbon budget components at local, regional and global scales. These biases result from overlooking subpixel variability of land surface characteristics. Vegetation heterogeneity is an important factor introducing biases in regional ecological modeling, especially when the modeling is done on large grids. This study suggests a simple algorithm that uses subpixel information on the spatial variability of land cover type to correct net primary productivity (NPP) estimates made at coarse spatial resolutions, where the land surface is considered homogeneous within each pixel. The algorithm operates in such a way that NPP values obtained from calculations made at coarse spatial resolutions are multiplied by simple functions that attempt to reproduce the effects of subpixel variability of land cover type on NPP. Its application to estimates of a coupled carbon-hydrology model (BEPS-TerrainLab) made at 1-km resolution over the Baohe River Basin, a watershed located in the southwestern part of the Qinling Mountains, Shaanxi Province, China, improved estimates of average NPP as well as its spatial variability.

  5. Energy shadowing correction of ultrasonic pulse-echo records by digital signal processing

    NASA Technical Reports Server (NTRS)

    Kishoni, D.; Heyman, J. S.

    1986-01-01

    Attention is given to a numerical algorithm that, via signal processing, enables the dynamic correction of the shadowing effect of reflections on ultrasonic displays. The algorithm was applied to experimental data from graphite-epoxy composite material immersed in a water bath. It is concluded that images of material defects with the shadowing corrections allow for a more quantitative interpretation of the material state. It is noted that the proposed algorithm is fast and simple enough to be adopted for real time applications in industry.

  6. Finite-size corrections to the excitation energy transfer in a massless scalar interaction model

    NASA Astrophysics Data System (ADS)

    Maeda, Nobuki; Yabuki, Tetsuo; Tobita, Yutaka; Ishikawa, Kenzo

    2017-05-01

    We study the excitation energy transfer (EET) for a simple model in which a massless scalar particle is exchanged between two molecules. We show that a finite-size effect appears in EET by the interaction energy due to overlapping of the quantum waves in a short time interval. The effect generates finite-size corrections to Fermi's golden rule and modifies EET probability from the standard formula in the Förster mechanism. The correction terms come from transition modes outside the resonance energy region and enhance EET probability substantially.

  7. Lattice constants of pure methane and carbon dioxide hydrates at low temperatures. Implementing quantum corrections to classical molecular dynamics studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costandy, Joseph; Michalis, Vasileios K.; Economou, Ioannis G., E-mail: i.tsimpanogiannis@qatar.tamu.edu, E-mail: ioannis.economou@qatar.tamu.edu

    2016-03-28

    We introduce a simple correction to the calculation of the lattice constants of fully occupied structure sI methane or carbon dioxide pure hydrates that are obtained from classical molecular dynamics simulations using the TIP4PQ/2005 water force field. The obtained corrected lattice constants are subsequently used in order to obtain isobaric thermal expansion coefficients of the pure gas hydrates that exhibit a trend that is significantly closer to the experimental behavior than previously reported classical molecular dynamics studies.

  8. Rearranging Pionless Effective Field Theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin Savage; Silas Beane

    2001-11-19

    We point out a redundancy in the operator structure of the pionless effective field theory which dramatically simplifies computations. This redundancy is best exploited by using dibaryon fields as fundamental degrees of freedom. In turn, this suggests a new power counting scheme which sums range corrections to all orders. We explore this method with a few simple observables: the deuteron charge form factor, n p -> d gamma, and Compton scattering from the deuteron. Higher dimension operators involving electroweak gauge fields are not renormalized by the s-wave strong interactions, and therefore do not scale with inverse powers of the renormalization scale. Thus, naive dimensional analysis of these operators is sufficient to estimate their contribution to a given process.

  9. System optimization on coded aperture spectrometer

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei

    2017-10-01

    To find a simple multiple-configuration solution, achieve higher refractive efficiency, and reduce the disturbance caused by FOV changes, especially in a two-dimensional spatial expansion, a coded aperture system is designed with a special structure that includes an objective, a coded component, a prism reflex system, a compensatory plate and an imaging lens. Correlative algorithms and imaging methods are available to ensure that the system can be corrected and optimized adequately. Simulation results show that the system can meet the application requirements in MTF, REA, RMS and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are the prototype selection and the system configuration.

  10. Influence of Misalignment on High-Order Aberration Correction for Normal Human Eyes

    NASA Astrophysics Data System (ADS)

    Zhao, Hao-Xin; Xu, Bing; Xue, Li-Xia; Dai, Yun; Liu, Qian; Rao, Xue-Jun

    2008-04-01

    Although a compensation device can correct the aberrations of human eyes, its effect is degraded by misalignment, especially for high-order aberration correction. We calculate the positioning tolerance of the correction device for high-order aberrations, i.e., the degree of misalignment within which the correction remains better than low-order (defocus and astigmatism) correction alone. With a fixed misalignment within the positioning tolerance, we calculate the residual wavefront rms aberration when the first 6 to the first 35 terms are corrected along with the 3rd-5th terms; the combined first 13 terms are also studied under the same misalignment. The correction of high-order aberrations does not necessarily improve as more high-order terms are included under some misalignment; moreover, correcting some simple combinations of terms can achieve results similar to complex combinations. These results suggest that it is unnecessary to correct many high-order terms, which is difficult to accomplish in practice, and they give confidence for correcting high-order aberrations outside the laboratory.

  11. Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems

    PubMed Central

    Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon

    2011-01-01

    Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In a multi-sensing environment, noisy signals are obtained because of the changing surroundings. In this paper we propose an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q variables is one of the most important design factors for better performance of the Kalman filter. For this reason, we propose an improved Kalman filter with enhanced noise-reduction ability. Only the measurement noise covariance was considered, because the system architecture is simple and R can be adjusted by a neural network. With this method, more accurate data can be obtained from smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments. PMID:22346641

  12. Improved Kalman filter method for measurement noise reduction in multi sensor RFID systems.

    PubMed

    Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon

    2011-01-01

    Recently, the range of available radio frequency identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from various sensors. In a multi-sensing environment, noisy signals are obtained because of the changing surroundings. In this paper we propose an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q variables is one of the most important design factors for better performance of the Kalman filter. For this reason, we propose an improved Kalman filter with enhanced noise-reduction ability. Only the measurement noise covariance was considered, because the system architecture is simple and R can be adjusted by a neural network. With this method, more accurate data can be obtained from smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less mean squared error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments.
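
    A scalar sketch of the role the measurement noise covariance R plays in the two records above (the papers tune it, e.g., via a neural network; here it is simply a constant the caller could replace with an adaptive estimate, and all values are hypothetical):

        import numpy as np

        def kalman_1d(z, Q=1e-4, R=0.25, x0=0.0, p0=1.0):
            """Scalar random-walk Kalman filter; R is the measurement noise covariance."""
            x, p, out = x0, p0, []
            for zk in z:
                p = p + Q                      # predict (state assumed nearly constant)
                k = p / (p + R)                # Kalman gain
                x = x + k * (zk - x)           # update with measurement zk
                p = (1.0 - k) * p
                out.append(x)
            return np.array(out)

        rng = np.random.default_rng(1)
        truth = 25.0                                   # e.g., temperature in deg C
        z = truth + 0.5 * rng.standard_normal(500)     # noisy sensor readings
        est = kalman_1d(z, R=0.25)                     # R matched to sensor variance
        print(np.mean((z - truth) ** 2), np.mean((est - truth) ** 2))  # MSE drops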

  13. Price corrected domestic technology assumption--a method to assess pollution embodied in trade using primary official statistics only. With a case on CO2 emissions embodied in imports to Europe.

    PubMed

    Tukker, Arnold; de Koning, Arjan; Wood, Richard; Moll, Stephan; Bouwmeester, Maaike C

    2013-02-19

    Environmentally extended input-output (EE IO) analysis is increasingly used to assess the carbon footprint of final consumption. Official EE IO data are, however, at best available for single countries or regions such as the EU27. This causes problems in assessing pollution embodied in imported products. The popular "domestic technology assumption" (DTA) leads to errors. Improved approaches based on life cycle inventory data, multiregional EE IO tables, etc. rely on unofficial research data and modeling, making them difficult to implement for statistical offices. The DTA can lead to errors for three main reasons: exporting countries can have higher impact intensities; may use more intermediate inputs for the same output; or may sell the imported products at lower or other prices than those produced domestically. The last factor is relevant for sustainable consumption policies of importing countries, whereas the first two are mainly a matter of making production in exporting countries more eco-efficient. We elaborate a simple correction for price differences between imports and domestic production using monetary and physical data from official import and export statistics. A case study for the EU27 shows that this "price-adjusted DTA" gives a partial but meaningful adjustment of pollution embodied in trade compared to multiregional EE IO studies.
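
    The price correction itself is simple arithmetic; a sketch with illustrative numbers (not the paper's data): value the imported physical quantity at the domestic unit price before applying the domestic emission intensity.

        # illustrative numbers only
        imports_value = 100.0e6        # EUR of an imported product group
        imports_mass = 50.0e6          # kg, from physical import statistics
        domestic_value = 400.0e6       # EUR of comparable domestic output
        domestic_mass = 100.0e6        # kg
        intensity = 0.8                # kg CO2 per EUR of domestic output (DTA)

        plain_dta = intensity * imports_value

        # price correction: re-value the imported mass at the domestic unit price
        price_ratio = (domestic_value / domestic_mass) / (imports_value / imports_mass)
        price_adjusted = intensity * imports_value * price_ratio

        print(plain_dta / 1e6, price_adjusted / 1e6)   # kt CO2: 80.0 vs 160.0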

  14. [The correction of one cause of the short nose: how to bring the retracted alar rim downwards?].

    PubMed

    Levet, Y

    2009-10-01

    The author reports an original procedure, called the "sliding flap", used to correct upward retraction of the nostril rim. The new position of the rim is stabilized by a simple resorbable thread through the skin, fixing the rim in its new position. This technique is efficient in both primary and secondary cases.

  15. Differences in Causal Estimates from Longitudinal Analyses of Residualized versus Simple Gain Scores: Contrasting Controls for Selection and Regression Artifacts

    ERIC Educational Resources Information Center

    Larzelere, Robert E.; Ferrer, Emilio; Kuhn, Brett R.; Danelia, Ketevan

    2010-01-01

    This study estimates the causal effects of six corrective actions for children's problem behaviors, comparing four types of longitudinal analyses that correct for pre-existing differences in a cohort of 1,464 4- and 5-year-olds from Canadian National Longitudinal Survey of Children and Youth (NLSCY) data. Analyses of residualized gain scores found…

  16. The Effectiveness of Written Corrective Feedback and the Impact Lao Learners' Beliefs Have on Uptake

    ERIC Educational Resources Information Center

    Rummel, Stephanie; Bitchener, John

    2015-01-01

    This article presents the results of a study examining the effectiveness of written corrective feedback (CF) on the simple past tense and the impact beliefs may have on students' uptake of the feedback they receive. A seven-week study was carried out with 42 advanced EFL learners in Vientiane, Laos. Students' beliefs about written CF were first…

  17. Correcting deformities of the aged earlobe.

    PubMed

    Connell, Bruce F

    2005-01-01

    An earlobe that appears aged or malpositioned can sabotage the results of a well performed face lift. The most frequently noted sign of a naturally aged earlobe is increased length. Improper planning of face lift incisions may also result in disfigurement of the ear. The author suggests simple excisional techniques to correct the aged earlobe, as well as methods to avoid subsequent earlobe distortion when performing a face lift.

  18. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...

  19. Speed of Sound versus Temperature Using PVC Pipes Open at Both Ends

    ERIC Educational Resources Information Center

    Bacon, Michael E.

    2012-01-01

    In this paper we investigate the speed of sound in air as a function of temperature using a simple and inexpensive apparatus. For this experiment it is essential that the appropriate end corrections be taken into account. In a recent paper the end corrections for 2-in i.d. (5.04-cm) PVC pipes open at both ends were investigated. The air column…
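
    A sketch of the underlying relation, assuming the textbook end correction of roughly 0.6 times the pipe radius per open end (the paper measures this correction; the readings below are hypothetical): for a pipe open at both ends, the fundamental is f1 = v / (2(L + 2*0.6r)).

        def speed_of_sound(f1, L, inner_diameter, k=0.6):
            """Solve f1 = v / (2 * (L + 2*k*r)) for v, pipe open at both ends."""
            r = inner_diameter / 2.0
            return 2.0 * f1 * (L + 2.0 * k * r)

        # hypothetical measurement: 1.00 m pipe, 5.04 cm i.d., fundamental at 165.8 Hz
        v = speed_of_sound(165.8, 1.00, 0.0504)
        T = (v / 331.3) ** 2 * 273.15 - 273.15   # invert v = 331.3*sqrt(1 + T/273.15)
        print(v, T)                               # ~342 m/s, ~17 deg C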

  20. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.
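
    In generic four-probe practice, the correction factor F multiplies the ideal infinite-sheet result; a sketch of that usage (shown with the collinear-array constant pi/ln 2 for simplicity, whereas the paper treats a square array, and with a hypothetical F rather than the paper's values):

        import math

        def sheet_resistance(V, I, F=1.0):
            """Ideal infinite-sheet four-probe formula times a finite-size factor F."""
            return F * (math.pi / math.log(2.0)) * (V / I)   # ohm per square

        def resistivity(V, I, t, F=1.0):
            return sheet_resistance(V, I, F) * t             # ohm*m, thickness t in m

        # hypothetical reading on a small rectangular sample; F (here 0.92) corrects
        # the infinite-sheet assumption and comes from tables or derivation
        print(resistivity(V=1.2e-3, I=1.0e-3, t=100e-9, F=0.92))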

  1. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation of diode over-response as a function of the field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and the field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to be strongly dependent on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.

  2. The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces

    NASA Astrophysics Data System (ADS)

    Vuik, C.; Saghir, A.; Boerstoel, G. P.

    2000-08-01

    Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.

  3. Performance of HADDOCK and a simple contact-based protein-ligand binding affinity predictor in the D3R Grand Challenge 2

    NASA Astrophysics Data System (ADS)

    Kurkcuoglu, Zeynep; Koukos, Panagiotis I.; Citro, Nevia; Trellet, Mikael E.; Rodrigues, J. P. G. L. M.; Moreira, Irina S.; Roel-Touris, Jorge; Melquiond, Adrien S. J.; Geng, Cunliang; Schaarschmidt, Jörg; Xue, Li C.; Vangone, Anna; Bonvin, A. M. J. J.

    2018-01-01

    We present the performance of HADDOCK, our information-driven docking software, in the second edition of the D3R Grand Challenge. In this blind experiment, participants were requested to predict the structures and binding affinities of complexes between the Farnesoid X nuclear receptor and 102 different ligands. The models obtained in Stage 1 with HADDOCK and a ligand-specific protocol show an average ligand RMSD of 5.1 Å from the crystal structure. Only 6/35 targets were within 2.5 Å RMSD of the reference, which prompted us to investigate the limiting factors and revise our protocol for Stage 2. The choice of the receptor conformation appeared to have the strongest influence on the results. Our Stage 2 models were of higher quality (13 out of 35 were within 2.5 Å), with an average RMSD of 4.1 Å. The docking protocol was applied to all 102 ligands to generate poses for binding affinity prediction. We developed a modified version of our contact-based binding affinity predictor PRODIGY, using the number of interatomic contacts classified by their type and the intermolecular electrostatic energy. This simple structure-based binding affinity predictor shows a Kendall's tau correlation of 0.37 in ranking the ligands (7th best out of 77 methods, 5th out of 25 groups). Those results were obtained from the average prediction over the top 10 poses, irrespective of their similarity/correctness, underscoring the robustness of our simple predictor. This results in an enrichment factor of 2.5 compared to a random predictor for ranking ligands within the top 25%, making it a promising approach for identifying lead compounds in virtual screening.
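
    A toy rendering of the contact-based idea, not PRODIGY itself: count interatomic receptor-ligand contacts within a cutoff, bin them by element pair, and feed the counts plus an electrostatic term into a linear score. The cutoff, weights and demo coordinates are placeholders.

        import numpy as np
        from itertools import product

        CUTOFF = 5.5   # Angstrom, hypothetical contact cutoff

        def contact_counts(rec_xyz, rec_types, lig_xyz, lig_types,
                           classes=("C", "N", "O")):
            """Count receptor-ligand atom pairs closer than CUTOFF, per element pair."""
            d = np.linalg.norm(rec_xyz[:, None, :] - lig_xyz[None, :, :], axis=-1)
            counts = {}
            for a, b in product(classes, repeat=2):
                mask = ((np.array(rec_types)[:, None] == a)
                        & (np.array(lig_types)[None, :] == b))
                counts[a + b] = int(np.sum((d < CUTOFF) & mask))
            return counts

        # linear score over contact classes (weights are placeholders, not PRODIGY's)
        WEIGHTS = {"CC": -0.1, "CN": -0.2, "CO": -0.2, "NC": -0.2,
                   "NN": -0.3, "NO": -0.3, "OC": -0.2, "ON": -0.3, "OO": -0.4}

        def score(counts, elec=0.0, w_elec=0.05):
            return sum(WEIGHTS[k] * v for k, v in counts.items()) + w_elec * elec

        rec = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
        lig = np.array([[0.0, 0.0, 3.0]])
        print(score(contact_counts(rec, ["C", "N"], lig, ["O"]), elec=-12.0))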

  4. Simple prediction method of lumbar lordosis for planning of lumbar corrective surgery: radiological analysis in a Korean population.

    PubMed

    Lee, Chong Suh; Chung, Sung Soo; Park, Se Jun; Kim, Dong Min; Shin, Seong Kee

    2014-01-01

    This study aimed to derive a lordosis predictive equation using the pelvic incidence and to establish a simple prediction method of lumbar lordosis for planning lumbar corrective surgery in Asians. Eighty-six asymptomatic volunteers were enrolled in the study. The maximal lumbar lordosis (MLL), lower lumbar lordosis (LLL), pelvic incidence (PI), and sacral slope (SS) were measured. The correlations between the parameters were analyzed using Pearson correlation analysis. Predictive equations of lumbar lordosis through simple regression analysis of the parameters and simple predictive values of lumbar lordosis using PI were derived. The PI strongly correlated with the SS (r = 0.78), and a strong correlation was found between the SS and LLL (r = 0.89), and between the SS and MLL (r = 0.83). Based on these correlations, the predictive equations of lumbar lordosis were found: SS = 0.80 + 0.74 PI (r = 0.78, R² = 0.61), LLL = 5.20 + 0.87 SS (r = 0.89, R² = 0.80), MLL = 17.41 + 0.96 SS (r = 0.83, R² = 0.68). When PI was between 30° and 35°, 40° and 50°, and 55° and 60°, the equations predicted that MLL would be PI + 10°, PI + 5° and PI, and that LLL would be PI - 5°, PI - 10° and PI - 15°, respectively. This simple calculation method can provide a more appropriate and simpler prediction of lumbar lordosis for Asian populations. The prediction of lumbar lordosis should be used as a reference for surgeons planning to restore the lumbar lordosis in lumbar corrective surgery.
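
    The abstract's regression equations and the simplified PI-based rule translate directly into code (coefficients taken verbatim from the abstract; angles in degrees):

        def predict_lordosis(pi):
            """Regression equations from the abstract."""
            ss = 0.80 + 0.74 * pi      # sacral slope from pelvic incidence
            lll = 5.20 + 0.87 * ss     # lower lumbar lordosis
            mll = 17.41 + 0.96 * ss    # maximal lumbar lordosis
            return ss, lll, mll

        def simple_rule(pi):
            """Simplified rule quoted in the abstract for selected PI bands."""
            if 30 <= pi <= 35:
                return pi + 10, pi - 5     # (MLL, LLL)
            if 40 <= pi <= 50:
                return pi + 5, pi - 10
            if 55 <= pi <= 60:
                return pi, pi - 15
            return None                    # outside the quoted bands

        # e.g. PI = 45 deg: regression gives MLL ~ 50, LLL ~ 35, matching PI+5/PI-10
        print(predict_lordosis(45), simple_rule(45))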

  5. Prophylactic Z-plasty - correcting helical rim deformity from wedge excision.

    PubMed

    Kim, Peter

    2010-09-01

    Wedge excision is a popular and well documented surgical method for treating a wide range of skin lesions and cancers of the ear in the general practice setting. In the majority of cases, this is a simple and cosmetically pleasing treatment. However, it may create helical rim deformity. This article describes a simple method of preventing such deformity using prophylactic Z-plasty.

  6. The Influence of Knowledge-of-Results with Mental Retardation on a Simple Vigilance Task.

    ERIC Educational Resources Information Center

    Griffin, James Craig; And Others

    Twelve educable or trainable Ss, 14 to 22 years of age, who were institutionalized residents of a state school for the retarded, were examined in a simple vigilance test to determine effects of knowledge-of-results (KR) contingent upon correct responses. Each S in the KR group was instructed to press a switch upon seeing a light signal and to…

  7. Beam and tissue factors affecting Cherenkov image intensity for quantitative entrance and exit dosimetry on human tissue

    PubMed Central

    Zhang, Rongxiao; Glaser, Adam K.; Andreozzi, Jacqueline; Jiang, Shudong; Jarvis, Lesley A.; Gladstone, David J.; Pogue, Brian W.

    2017-01-01

    This study’s goal was to determine how Cherenkov radiation emission observed in radiotherapy is affected by predictable factors expected in patient imaging. Factors such as tissue optical properties, radiation beam properties, thickness of tissues, entrance/exit geometry, curved surface effects, curvature and imaging angles were investigated through Monte Carlo simulations. The largest physical cause of variation of the correlation factor between Cherenkov emission and dose was the entrance/exit geometry (~50%). The largest human tissue effect was from different optical properties (~45%). Beyond these, clinical beam energy varies the correlation factor significantly (~20% for x-ray beams), followed by curved surfaces (~15% for x-ray beams and ~8% for electron beams), and finally, the effect of field size (~5% for x-ray beams). Other investigated factors which caused variations less than 5% were tissue thicknesses and source to surface distance. The effect of non-Lambertian emission was negligible for imaging angles smaller than 60 degrees. The spectrum of Cherenkov emission tends to blue-shift along the curved surface. A simple normalization approach based on the reflectance image was experimentally validated by imaging a range of tissue phantoms, as a first order correction for different tissue optical properties. PMID:27507213

  8. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    Profile count method for estimating cell number in sectioned tissue applies a correction factor for double count (resulting from transection during sectioning) of count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The Neurotology and House Histological Temporal Bone Laboratory at University of California at Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.

  9. Experiment Clarifies Buoyancy

    ERIC Educational Resources Information Center

    Oguz, Ayse; Yurumezoglu, Kemal

    2008-01-01

    This article presents a simple activity using Archimedes' principle that helps students to develop their scientific thinking and also to identify and correct their misconceptions. The exercise consists of linear and reverse processes.

  10. Performance test and image correction of CMOS image sensor in radiation environment

    NASA Astrophysics Data System (ADS)

    Wang, Congzheng; Hu, Song; Gao, Chunming; Feng, Chang

    2016-09-01

    CMOS image sensors rival CCDs in strong radiation resistance as well as simple drive signals, so they are widely applied in high-energy radiation environments, such as space optical imaging and video monitoring of nuclear power equipment. However, the silicon of CMOS image sensors suffers from the ionizing dose effect under high-energy rays, and indicators of the sensor, such as signal-to-noise ratio (SNR), non-uniformity (NU) and bad points (BPs), are degraded by the radiation. The radiation environment for the tests was generated by a 60Co γ-ray source. A camera module based on the CMV2000 image sensor from CMOSIS Inc. was chosen as the research object. The rays were delivered at a dose rate of 20 krad/h. In the tests, the output signals of the sensor's pixels were measured at different total doses. The data analysis showed that with the accumulation of irradiation dose, the SNR of the sensor decreased, its NU increased, and the number of BPs grew. Correction of these indicators is necessary, since they are the main factors determining image quality. An image-processing algorithm combining a local threshold method with NU correction based on the non-local means (NLM) method was applied to the experimental data. The results show that this correction can effectively suppress the BPs, improve the SNR, and reduce the NU.
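
    A minimal sketch of a local-threshold bad-point correction of the kind the abstract combines with NLM-based NU correction (this is a generic median/MAD detector, not the authors' exact algorithm):

        import numpy as np
        from scipy.ndimage import median_filter

        def correct_bad_points(img, k=5, n_sigma=4.0):
            """Flag pixels deviating from the local median by > n_sigma * local MAD,
            then replace them with the local median (simple bad-point suppression)."""
            med = median_filter(img.astype(float), size=k)
            mad = median_filter(np.abs(img - med), size=k) + 1e-6
            bad = np.abs(img - med) > n_sigma * 1.4826 * mad
            out = img.astype(float).copy()
            out[bad] = med[bad]
            return out, bad

        rng = np.random.default_rng(2)
        frame = 100 + rng.normal(0, 2, (64, 64))
        frame[10, 10] = 4000                      # simulated radiation hit
        fixed, mask = correct_bad_points(frame)
        print(mask.sum(), round(fixed[10, 10], 1))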

  11. QT and JT dispersion and cardiac performance in children with neonatal Bartter syndrome: a pilot study.

    PubMed

    Hacihamdioglu, Duygu Ovunc; Fidanci, Kursat; Kilic, Ayhan; Gok, Faysal; Topaloglu, Rezan

    2013-10-01

    QT dispersion and JT dispersion are simple noninvasive arrhythmogenic markers that can be used to assess the homogeneity of cardiac repolarization. The aim of this study was to assess QT and JT dispersion and their relation to left ventricular systolic and diastolic function in children with Bartter syndrome (BS). Nine patients with neonatal BS (median age 9.7 years) and 20 controls (median age 8 years) were investigated at rest. Both groups underwent 12-lead electrocardiography (ECG), from which the R-R interval, QT interval, corrected QT, QT dispersion, corrected QT dispersion, JT interval, corrected JT, JT dispersion and corrected JT dispersion were measured. Two-dimensional, Doppler echocardiographic examinations were performed. Patients and controls did not differ in gender or in serum levels of potassium, magnesium and calcium (p > 0.05). Both groups had normal echocardiographic examinations and baseline myocardial performance indexes. The QT dispersion and JT dispersion were significantly prolonged in patients with BS compared to controls {37.5 ms [interquartile range (IQR) 32.5-40] vs. 25.5 ms (IQR 20-30), respectively, p = 0.014, and 37.5 ms (IQR 27.5-40) vs. 22.5 ms (IQR 20-30), respectively, p = 0.003}. Elevated QT and JT dispersion during asymptomatic and normokalemic periods may be risk factors for the development of cardiac complications and arrhythmias in children with BS. In these patients, a systematic cardiac screening and management protocol is extremely important for effective prevention.
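
    For reference, the standard definitions behind these markers: dispersion is the max-min spread of an interval across the 12 leads, and Bazett's formula divides by the square root of the R-R interval. A sketch with made-up intervals:

        def bazett(interval_ms, rr_s):
            """Bazett correction: divide by the square root of the R-R interval (s)."""
            return interval_ms / rr_s ** 0.5

        def dispersion(per_lead_ms):
            """Dispersion = max - min across the 12 leads."""
            return max(per_lead_ms) - min(per_lead_ms)

        # hypothetical 12-lead QT intervals (ms), RR = 0.75 s, QRS = 80 ms
        qt = [352, 360, 348, 365, 370, 355, 358, 362, 350, 368, 357, 361]
        rr, qrs = 0.75, 80
        jt = [q - qrs for q in qt]                   # JT = QT - QRS
        print(dispersion(qt), dispersion(jt))        # QTd, JTd (ms)
        print(round(dispersion([bazett(q, rr) for q in qt]), 1))   # corrected QTd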

  12. SU-F-T-584: Investigating Correction Methods for Ion Recombination Effects in OCTAVIUS 1000 SRS Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knill, C; Wayne State University School of Medicine, Detroit, MI; Snyder, M

    Purpose: PTW’s Octavius 1000 SRS array performs IMRT QA measurements with liquid-filled ionization chambers (LICs). Collection efficiencies of LICs have been shown to change during IMRT delivery as a function of LINAC pulse frequency and pulse dose, which affects QA results. In this study, two methods were developed to correct changes in collection efficiencies during IMRT QA measurements, and the effects of these corrections on QA pass rates were compared. Methods: For the first correction, Matlab software was developed that calculates pulse frequency and pulse dose for each detector, using measurement and DICOM RT Plan files. Pulse information is converted to collection efficiency and measurements are corrected by multiplying detector dose by ratios of calibration to measured collection efficiencies. For the second correction, MU/min in daily 1000 SRS calibration was chosen to match average MU/min of the VMAT plan. Usefulness of derived corrections were evaluated using 6MV and 10FFF SBRT RapidArc plans delivered to the OCTAVIUS 4D system using a TrueBeam equipped with an HD-MLC. Effects of the two corrections on QA results were examined by performing 3D gamma analysis comparing predicted to measured dose, with and without corrections. Results: After complex Matlab corrections, average 3D gamma pass rates improved by [0.07%,0.40%,1.17%] for 6MV and [0.29%,1.40%,4.57%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. Maximum changes in gamma pass rates were [0.43%,1.63%,3.05%] for 6MV and [1.00%,4.80%,11.2%] for 10FFF using [3%/3mm,2%/2mm,1%/1mm] criteria. On average, pass rates of simple daily calibration corrections were within 1% of complex Matlab corrections. Conclusion: Ion recombination effects can potentially be clinically significant for OCTAVIUS 1000 SRS measurements, especially for higher pulse dose unflattened beams when using tighter gamma tolerances. Matching daily 1000 SRS calibration MU/min to average planned MU/min is a simple correction that greatly reduces ion recombination effects, improving measurement accuracy and gamma pass rates. This work was supported by PTW.
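
    The first correction described above multiplies each detector's dose by the ratio of calibration to measured collection efficiency; as arithmetic (efficiencies below are hypothetical, the real values come from the pulse-dose/pulse-frequency model):

        import numpy as np

        def corrected_dose(dose, f_meas, f_cal):
            """Scale measured dose by the ratio of calibration to measured
            collection efficiencies, per detector (element-wise)."""
            return dose * (f_cal / f_meas)

        # hypothetical LIC collection efficiencies: f_cal from daily calibration,
        # f_meas inferred per detector from the plan's pulse dose and frequency
        dose = np.array([1.02, 0.98, 1.10])      # Gy, raw array readings
        f_meas = np.array([0.962, 0.955, 0.940])
        f_cal = 0.970
        print(corrected_dose(dose, f_meas, f_cal))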

  13. Reply to comment by Rannik on "A simple method for estimating frequency response corrections for eddy covariance systems"

    Treesearch

    W. J. Massman

    2001-01-01

    First, my thanks to Dr. Ullar Rannik for his interest and insights in my recent study of spectral corrections and associated eddy covariance flux loss (Massman, 2000, henceforth denoted by M2000). His comments are important and germane to the attenuation of low frequencies of the turbulent cospectra due to recursive filtering and block averaging. Dr. Rannik addresses...

  14. Process-conditioned bias correction for seasonal forecasting: a case-study with ENSO in Peru

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Gutiérrez, J. M.

    2018-05-01

    This work assesses the suitability of a first, simple attempt at process-conditioned bias correction in the context of seasonal forecasting. To do this, we focus on northwestern Peru and bias-correct 1- and 4-month lead seasonal predictions of boreal winter (DJF) precipitation from the ECMWF System4 forecasting system for the period 1981-2010. In order to include information about the underlying large-scale circulation, which may help to discriminate between precipitation affected by different processes, we introduce an empirical quantile-quantile mapping method that runs conditioned on the state of the Southern Oscillation Index (SOI), which is accurately predicted by System4 and is known to affect the local climate. Beyond the reduction of model biases, our results show that the SOI-conditioned method yields better ROC skill scores and reliability than the raw model output over the entire region of study, whereas the standard unconditioned implementation provides no added value for any of these metrics. This suggests that conditioning the bias correction on simple but well-simulated large-scale processes relevant to the local climate may be a suitable approach for seasonal forecasting. Further research is needed on the suitability of similar approaches for other regions, seasons and variables.
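
    A minimal rendering of the conditioned correction, assuming a two-phase split of the SOI: fit one empirical quantile map per phase and apply the map matching each forecast's SOI state. Synthetic data only, not the System4 setup.

        import numpy as np

        def fit_eqm(model, obs, n_q=99):
            q = np.linspace(1, 99, n_q)
            return np.percentile(model, q), np.percentile(obs, q)

        def apply_eqm(x, mq, oq):
            return np.interp(x, mq, oq)   # map model quantiles onto observed ones

        def soi_conditioned_bc(model, obs, soi, model_new, soi_new):
            """Fit one quantile map per SOI phase (negative/non-negative) and apply
            the map matching each new forecast's SOI state."""
            out = np.empty_like(model_new, dtype=float)
            for mask_fit, mask_new in (((soi < 0), (soi_new < 0)),
                                       ((soi >= 0), (soi_new >= 0))):
                mq, oq = fit_eqm(model[mask_fit], obs[mask_fit])
                out[mask_new] = apply_eqm(model_new[mask_new], mq, oq)
            return out

        rng = np.random.default_rng(3)
        soi = rng.normal(size=300)
        obs = 5 + 2 * soi + rng.gamma(2, 2, 300)       # "observed" precipitation
        model = 8 + soi + rng.gamma(2, 3, 300)         # biased forecast
        corr = soi_conditioned_bc(model, obs, soi, model, soi)
        print(round(obs.mean(), 2), round(model.mean(), 2), round(corr.mean(), 2))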

  15. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose

    PubMed Central

    Rahman, Mohammad Mizanur; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-01-01

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms having open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms. PMID:28895910

  16. A False Alarm Reduction Method for a Gas Sensor Based Electronic Nose.

    PubMed

    Rahman, Mohammad Mizanur; Charoenlarpnopparut, Chalie; Suksompong, Prapun; Toochinda, Pisanu; Taparugssanagorn, Attaphongse

    2017-09-12

    Electronic noses (E-Noses) are becoming popular for food and fruit quality assessment due to their robustness and repeated usability without fatigue, unlike human experts. An E-Nose equipped with classification algorithms having open-ended classification boundaries, such as the k-nearest neighbor (k-NN), support vector machine (SVM), and multilayer perceptron neural network (MLPNN), is found to suffer from false classification errors of irrelevant odor data. To reduce false classification and misclassification errors, and to improve correct rejection performance, algorithms with a hyperspheric boundary, such as a radial basis function neural network (RBFNN) and generalized regression neural network (GRNN) with a Gaussian activation function in the hidden layer, should be used. The simulation results presented in this paper show that GRNN has more correct classification efficiency and false alarm reduction capability compared to RBFNN. As the design of a GRNN and RBFNN is complex and expensive due to large numbers of neuron requirements, a simple hyperspheric classification method based on minimum, maximum, and mean (MMM) values of each class of the training dataset was presented. The MMM algorithm was simple and found to be fast and efficient in correctly classifying data of training classes, and correctly rejecting data of extraneous odors, and thereby reduced false alarms.
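
    A compact sketch of the MMM idea from the two records above: store per-class minimum/maximum (and mean) vectors, accept a sample only if it falls inside some class's bounds, and otherwise reject it as an extraneous odor. Implementation details beyond that are assumptions.

        import numpy as np

        class MMMClassifier:
            """Min/Max/Mean hyper-box classifier with explicit rejection."""
            def fit(self, X, y):
                self.boxes = {c: (X[y == c].min(0), X[y == c].max(0), X[y == c].mean(0))
                              for c in np.unique(y)}
                return self

            def predict(self, x):
                inside = [c for c, (lo, hi, _) in self.boxes.items()
                          if np.all(x >= lo) and np.all(x <= hi)]
                if not inside:
                    return None                  # reject: extraneous odor
                # tie-break by distance to the class mean
                return min(inside, key=lambda c: np.linalg.norm(x - self.boxes[c][2]))

        X = np.array([[1.0, 2.0], [1.2, 2.1], [5.0, 6.0], [5.2, 5.8]])
        y = np.array([0, 0, 1, 1])
        clf = MMMClassifier().fit(X, y)
        print(clf.predict(np.array([1.1, 2.05])), clf.predict(np.array([9.0, 9.0])))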

  17. Decodoku: Quantum error correction as a simple puzzle game

    NASA Astrophysics Data System (ADS)

    Wootton, James

    To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction (QEC). At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qubit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focusing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.

  18. Stroke Warning Signs

    MedlinePlus

    ... person to repeat a simple sentence, like "The sky is blue." Is the person able to correctly ...

  19. A computational framework for automation of point defect calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  20. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author
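
    A generic sketch of the haze-subtraction step (dark-object style): subtract the HAZE DN before converting to radiance, so any additive offset cancels, then form top-of-atmosphere reflectance. The calibration constants below are placeholders, not the actual TM band values.

        import math

        def dn_to_reflectance(dn, haze_dn, gain, esun, d_au, sun_elev_deg):
            """rho = pi * L * d^2 / (ESUN * cos(theta_s)), with radiance L computed
            from the haze-corrected DN (additive offsets cancel in the difference)."""
            L = gain * (dn - haze_dn)                 # W m-2 sr-1 um-1
            theta = math.radians(90.0 - sun_elev_deg) # solar zenith angle
            return math.pi * L * d_au ** 2 / (esun * math.cos(theta))

        # placeholder constants for a single band and pixel
        print(dn_to_reflectance(dn=87, haze_dn=12, gain=0.67,
                                esun=1554.0, d_au=1.01, sun_elev_deg=45.0))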

  1. A computational framework for automation of point defect calculations

    DOE PAGES

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; ...

    2017-01-13

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  2. An interface finite element model can be used to predict healing outcome of bone fractures.

    PubMed

    Alierta, J A; Pérez, M A; García-Aznar, J M

    2014-01-01

    After fractures, bone can experience different outcomes: successful bone consolidation, non-union and bone failure. Although there are many factors that influence fracture healing, experimental studies have shown that the interfragmentary movement (IFM) is one of the main regulators of the course of bone healing. In this sense, computational models may help to improve the development of mechanically based treatments for bone fracture healing. Based on this fact, we propose a combined repair-failure mechanistic computational model to describe bone fracture healing. Despite being a simple model, it correctly estimates the time-course evolution of the IFM compared with in vivo measurements under different mechanical conditions. Therefore, this mathematical approach is especially suitable for modeling the healing response of bone to fractures treated with different mechanical fixators, simulating realistic clinical conditions. This model will be a useful tool to identify factors and define targets for patient-specific therapeutic interventions. © 2013 Published by Elsevier Ltd.

  3. Accurate thermodynamics for short-ranged truncations of Coulomb interactions in site-site molecular models

    NASA Astrophysics Data System (ADS)

    Rodgers, Jocelyn M.; Weeks, John D.

    2009-12-01

    Coulomb interactions are present in a wide variety of all-atom force fields. Spherical truncations of these interactions permit fast simulations but are problematic due to their incorrect thermodynamics. Herein we demonstrate that simple analytical corrections for the thermodynamics of uniform truncated systems are possible. In particular, results for the simple point charge/extended (SPC/E) water model treated with spherically truncated Coulomb interactions suggested by local molecular field theory [J. M. Rodgers and J. D. Weeks, Proc. Natl. Acad. Sci. U.S.A. 105, 19136 (2008)] are presented. We extend the results developed by Chandler [J. Chem. Phys. 65, 2925 (1976)] so that we may treat the thermodynamics of mixtures of flexible charged and uncharged molecules simulated with spherical truncations. We show that the energy and pressure of spherically truncated bulk SPC/E water are easily corrected using exact second-moment-like conditions on long-ranged structure. Furthermore, applying the pressure correction as an external pressure removes the density errors observed by other research groups in NPT simulations of spherically truncated bulk species.

  4. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta’s Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated by using the equation CFrev =Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) and CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3 respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev= Rapp,raw/(CFIVB*Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
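
    The abstract's relations as arithmetic, with hypothetical raw well-chamber readings; the 1.073 factor is the Valencia V2 value quoted above:

        def cf_rev(r_app_raw, r_cal_raw, cf_ivb=1.0):
            """Correspondence factor; cf_ivb corrects IVB 1000 readings so they
            can be compared with values published for the HDR 1000 Plus."""
            return r_app_raw / (cf_ivb * r_cal_raw)

        # hypothetical raw readings (nC) at dwell position 1121
        r_cal, r_app = 18.50, 1.95
        print(cf_rev(r_app, r_cal))                 # as measured on the IVB 1000
        print(cf_rev(r_app, r_cal, cf_ivb=1.073))   # Valencia V2 correction applied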

  5. Field and laboratory determination of water-surface elevation and velocity using noncontact measurements

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.

    2016-01-01

    Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
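
    The relation between surface velocity and discharge can be illustrated in a few lines. The snippet below is a minimal sketch assuming the common index-velocity relation v_mean ≈ k·v_surface with k ≈ 0.85, a typical textbook value; the "standard relations" used in the paper may differ:

        def discharge(surface_velocities, depths, widths, k=0.85):
            """Estimate Q by summing k * v_surf * depth * width per segment."""
            return sum(k * v * d * w
                       for v, d, w in zip(surface_velocities, depths, widths))

        # Three hypothetical cross-section segments: velocities (m/s),
        # depths (m), widths (m).
        print(discharge([1.2, 1.5, 1.1], [0.8, 1.0, 0.7], [2.0, 2.0, 2.0]))  # m^3/s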

  6. Nonlinear Local Bending Response and Bulging Factors for Longitudinal and Circumferential Cracks in Pressurized Cylindrical Shells

    NASA Technical Reports Server (NTRS)

    Young, Richard D.; Rose, Cheryl A.; Starnes, James H., Jr.

    2000-01-01

    Results of a geometrically nonlinear finite element parametric study to determine curvature correction factors or bulging factors that account for increased stresses due to curvature for longitudinal and circumferential cracks in unstiffened pressurized cylindrical shells are presented. Geometric parameters varied in the study include the shell radius, the shell wall thickness, and the crack length. The major results are presented in the form of contour plots of the bulging factor as a function of two nondimensional parameters: the shell curvature parameter, lambda, which is a function of the shell geometry, Poisson's ratio, and the crack length; and a loading parameter, eta, which is a function of the shell geometry, material properties, and the applied internal pressure. These plots identify the ranges of the shell curvature and loading parameters for which the effects of geometric nonlinearity are significant. Simple empirical expressions for the bulging factor are then derived from the numerical results and shown to predict accurately the nonlinear response of shells with longitudinal and circumferential cracks. The numerical results are also compared with analytical solutions based on linear shallow shell theory for thin shells, and with some other semi-empirical solutions from the literature, and limitations on the use of these other expressions are suggested.

  7. Honeywell Technical Order Transfer Tests.

    DTIC Science & Technology

    1987-06-12

    …of simple corrections, a reasonable reproduction of the original could be generated. The quality was not good enough for a production environment. Lack of automated quality control (AQC) tools could account for the errors.

  8. Variations and Regularities in the Hemispheric Distributions in Sunspot Groups of Various Classes

    NASA Astrophysics Data System (ADS)

    Gao, Peng-Xin

    2018-05-01

    The present study investigates the variations and regularities in the distributions in sunspot groups (SGs) of various classes in the northern and southern hemispheres from Solar Cycles (SCs) 12 to 23. Here, we use the separation scheme that was introduced by Gao, Li, and Li (Solar Phys. 292, 124, 2017), which is based on A/U (A is the corrected area of the SG, and U is the corrected umbral area of the SG), in order to separate SGs into simple SGs (A/U ≤ 4.5) and complex SGs (A/U > 6.2). The time series of Greenwich photoheliographic results from 1875 to 1976 (corresponding to complete SCs 12 - 20) and Debrecen photoheliographic data during the period 1974 - 2015 (corresponding to complete SCs 21 - 23) are used to show the distributions of simple and complex SGs in the northern and southern hemispheres. The main results we obtain are reported as follows: i) the larger of the maximum annual simple SG numbers in the two hemispheres and the larger of the maximum annual complex SG numbers in the two hemispheres occur in different hemispheres during SCs 12, 14, 18, and 19; ii) the relative changing trends of two curves - cumulative SG numbers in the northern and southern hemispheres - for simple SGs are different from those for complex SGs during SCs 12, 14, 18, and 21; and iii) there are discrepancies between the dominant hemispheres of simple and complex SGs for SCs 12, 14, 18, and 21.
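
    The A/U separation scheme quoted above is a simple threshold rule; the following sketch encodes it directly. The "unclassified" branch for ratios between 4.5 and 6.2 is an assumption, since the abstract names no category for that gap:

        def classify_sunspot_group(area, umbral_area):
            """Gao, Li, and Li (2017) scheme: A/U <= 4.5 simple, A/U > 6.2 complex."""
            ratio = area / umbral_area
            if ratio <= 4.5:
                return "simple"
            if ratio > 6.2:
                return "complex"
            return "unclassified"

        print(classify_sunspot_group(90.0, 25.0))   # ratio 3.6 -> simple
        print(classify_sunspot_group(140.0, 20.0))  # ratio 7.0 -> complex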

  9. Simple cloning strategy using GFPuv gene as positive/negative indicator.

    PubMed

    Miura, Hiromi; Inoko, Hidetoshi; Inoue, Ituro; Tanaka, Masafumi; Sato, Masahiro; Ohtsuka, Masato

    2011-09-15

    Because construction of expression vectors is the first requisite in the functional analysis of genes, development of simple cloning systems is a major requirement during the postgenomic era. In the current study, we developed cloning vectors for gain- or loss-of-function studies by using the GFPuv gene as a positive/negative indicator of cloning. These vectors allow us to easily detect correct clones and obtain expression vectors from a simple procedure by means of the combined use of the GFPuv gene and a type IIS restriction enzyme. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Simple measurement of lenticular lens quality for autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Gray, Stuart; Boudreau, Robert A.

    2013-03-01

    Lenticular lens based autostereoscopic 3D displays are finding many applications in digital signage and consumer electronics devices. A high quality 3D viewing experience requires the lenticular lens be properly aligned with the pixels on the display device so that each eye views the correct image. This work presents a simple and novel method for rapidly assessing the quality of a lenticular lens to be used in autostereoscopic displays. Errors in lenticular alignment across the entire display are easily observed with a simple test pattern where adjacent views are programmed to display different colors.

  11. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  12. Automatic recognition of light source from color negative films using sorting classification techniques

    NASA Astrophysics Data System (ADS)

    Sanger, Demas S.; Haneishi, Hideaki; Miyake, Yoichi

    1995-08-01

    This paper proposes a simple and automatic method for recognizing the light source from various color negative film brands by means of digital image processing. First, we stretched the image obtained from a negative based on the standardized scaling factors, then extracted the dominant color component among the red, green, and blue components of the stretched image. The dominant color component became the discriminator for the recognition. The experimental results verified that any one of the three techniques could recognize the light source from negatives of a single film brand and of all brands with greater than 93.2% and 96.6% correct recognition, respectively. This method is significant for the automation of color quality control in color reproduction from color negative film in mass-processing and printing machines.
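
    A minimal sketch of the stretch-then-dominant-component idea is given below. The min-max stretch stands in for the paper's "standardized scaling factors", which the abstract does not specify, so treat it as an assumption:

        import numpy as np

        def dominant_component(image):
            """image: H x W x 3 array in R, G, B order; returns 'R', 'G' or 'B'."""
            stretched = np.empty(image.shape, dtype=float)
            for c in range(3):
                ch = image[..., c].astype(float)
                stretched[..., c] = (ch - ch.min()) / (ch.max() - ch.min() + 1e-12)
            means = stretched.mean(axis=(0, 1))       # mean of each stretched channel
            return "RGB"[int(np.argmax(means))]

        rng = np.random.default_rng(0)
        print(dominant_component(rng.random((64, 64, 3))))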

  13. What did Triplett really find? A contemporary analysis of the first experiment in social psychology.

    PubMed

    Strube, Michael J

    2005-01-01

    In 1898, Norman Triplett published what has been called the first experiment in social psychology and sports psychology. Claiming to demonstrate "the dynamogenic factors in pacemaking and competition," this oft-cited article began the serious investigation of social facilitation. This area of research now numbers in the hundreds of published works, includes the study of humans and other animals, and encompasses basic research and applied settings. But what did Triplett really find? I examine Triplett's original data and show that very little evidence existed for the social facilitation of the simple task he investigated. These analyses indicate the need to correct contemporary accounts of Triplett's work and underscore the differences in how research was evaluated at that time compared with today.

  14. A symmetric multivariate leakage correction for MEG connectomes

    PubMed Central

    Colclough, G.L.; Brookes, M.J.; Smith, S.M.; Woolrich, M.W.

    2015-01-01

    Ambiguities in the source reconstruction of magnetoencephalographic (MEG) measurements can cause spurious correlations between estimated source time-courses. In this paper, we propose a symmetric orthogonalisation method to correct for these artificial correlations between a set of multiple regions of interest (ROIs). This process enables the straightforward application of network modelling methods, including partial correlation or multivariate autoregressive modelling, to infer connectomes, or functional networks, from the corrected ROIs. Here, we apply the correction to simulated MEG recordings of simple networks and to a resting-state dataset collected from eight subjects, before computing the partial correlations between power envelopes of the corrected ROI time-courses. We show accurate reconstruction of our simulated networks, and in the analysis of real MEG resting-state connectivity, we find dense bilateral connections within the motor and visual networks, together with longer-range direct fronto-parietal connections. PMID:25862259
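
    The core of the method is the classical symmetric (Löwdin) orthogonalisation: the set of orthonormal time-courses closest, in the least-squares sense, to the originals. A minimal sketch, omitting the rescaling step of the full pipeline, is:

        import numpy as np

        def symmetric_orthogonalise(X):
            """X: time x ROI matrix; returns the closest matrix with orthonormal columns."""
            U, _, Vt = np.linalg.svd(X, full_matrices=False)
            return U @ Vt

        rng = np.random.default_rng(1)
        X = rng.standard_normal((1000, 5))
        X[:, 1] += 0.8 * X[:, 0]             # inject a leakage-like correlation
        Y = symmetric_orthogonalise(X)
        print(np.round(Y.T @ Y, 6))          # identity: zero static correlation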

  15. Spec Rekindled-A Simple Torque Correction Mechanics for Transposed Teeth in Conjunction with Pre-adjusted Edgewise Appliance System.

    PubMed

    Singh, Harpreet; Maurya, Raj Kumar; Thakkar, Surbhi

    2016-12-01

    Complete transposition of teeth is a rather rare phenomenon. After correction of transposed and malaligned lateral incisor and canine, attainment of appropriate individual antagonistic tooth torque is indispensable, which many orthodontists consider to be a herculean task. Here, a novel method is proposed which demonstrates the use of the Spec reverse torquing auxiliary as an effective adjunctive aid in conjunction with pre-adjusted edgewise brackets.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Knill, C

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.

  17. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues and focuses on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in the fast zone: high (90%) together with medium (36%) enriched uranium fuel, medium (36%) enriched fuel only, or low (21%) enriched uranium fuel.

  18. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without taking magnetic field effects into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analyses were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  19. Kinetic and metabolic isotope effects in coral skeletal carbon isotopes: A re-evaluation using experimental coral bleaching as a case study

    NASA Astrophysics Data System (ADS)

    Schoepf, Verena; Levas, Stephen J.; Rodrigues, Lisa J.; McBride, Michael O.; Aschaffenburg, Matthew D.; Matsui, Yohei; Warner, Mark E.; Hughes, Adam D.; Grottoli, Andréa G.

    2014-12-01

    Coral skeletal δ13C can be a paleo-climate proxy for light levels (i.e., cloud cover and seasonality) and for photosynthesis to respiration (P/R) ratios. The usefulness of coral δ13C as a proxy depends on metabolic isotope effects (related to changes in photosynthesis) being the dominant influence on skeletal δ13C. However, it is also influenced by kinetic isotope effects (related to calcification rate) which can overpower metabolic isotope effects and thus compromise the use of coral skeletal δ13C as a proxy. Heikoop et al. (2000) proposed a simple data correction to remove kinetic isotope effects from coral skeletal δ13C, as well as an equation to calculate P/R ratios from coral isotopes. However, despite having been used by other researchers, the data correction has never been directly tested, and isotope-based P/R ratios have never been compared to P/R ratios measured using respirometry. Experimental coral bleaching represents a unique environmental scenario to test this because bleaching produces large physiological responses that influence both metabolic and kinetic isotope effects in corals. Here, we tested the δ13C correction and the P/R calculation using three Pacific and three Caribbean coral species from controlled temperature-induced bleaching experiments where both the stable isotopes and the physiological variables that cause isotopic fractionation (i.e., photosynthesis, respiration, and calcification) were simultaneously measured. We show for the first time that the data correction proposed by Heikoop et al. (2000) does not effectively remove kinetic effects in the coral species studied here, and did not improve the metabolic signal of bleached and non-bleached corals. In addition, isotope-based P/R ratios were in poor agreement with measured P/R ratios, even when the data correction was applied. This suggests that additional factors influence δ13C and δ18O, which are not accounted for by the data correction. We therefore recommend that the data correction not be routinely applied for paleo-climate reconstruction, and that P/R ratios should only be obtained by direct measurement by respirometry.

  20. Hand Washing

    MedlinePlus

    ... study, only 58% of female and 48% of male middle- and high-school students washed their hands after using the bathroom. Yuck! How to Wash Your Hands Correctly There's a right way to wash your hands. Follow these simple ...

  1. [Endoscopically assisted fronto-orbitary correction in trigonocephaly].

    PubMed

    Hinojosa, J; Esparza, J; García-Recuero, I; Romance, A

    2007-01-01

    The development of multidisciplinary units for craniofacial surgery has led to a considerable decrease in morbidity, even in the cases of the more complex craniofacial syndromes. The use of minimally invasive techniques for the correction of some of these malformations allows the surgeon to minimize the incidence of complications by means of a decrease in surgical time, blood salvage and shortened postoperative hospitalization in comparison to conventional craniofacial techniques. Simple and milder craniosynostoses are best approached by these techniques and render the best results. Different osteotomies resembling standard fronto-orbital remodelling, besides simple suturectomies and the use of postoperative cranial orthoses, may improve the final aesthetic appearance. In the endoscopic treatment of trigonocephaly, the use of preauricular incisions achieves complete pterional resection, lower lateral orbital osteotomies and successful precoronal frontal osteotomies to obtain long-lasting and satisfactory outcomes.

  2. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

    In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams by using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, P_cav, the displacement correction factor, P_dis, the wall correction factor, P_wall, the stem correction factor, P_stem, the central electrode correction factor, P_cel, and the overall perturbation correction factor, P_Q. The calculated P_dis values for PTW30010/30013 chambers were 0.9967 ± 0.0017, 0.9983 ± 0.0019, and 0.9980 ± 0.0019, respectively, for 60Co, 4 MV, and 10 MV photon beams. The value for a 60Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The P_dis values had a substantial discrepancy compared to those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P_wall values were from 0.9994 ± 0.0020 to 1.0031 ± 0.0020 for PTW30010 and from 0.9961 ± 0.0018 to 0.9991 ± 0.0017 for PTW30011/30012, in the range of 60Co-10 MV. The P_wall values for PTW30011/30012 were around 0.3% lower than those of the IAEA TRS-398. Also, the chamber response with and without a 1 mm PMMA water-proofing sleeve agreed within their combined uncertainty. The calculated P_stem values ranged from 0.9945 ± 0.0014 to 0.9965 ± 0.0014, but they are not considered in current dosimetry protocols. The values showed no significant dependence on beam quality. P_cel for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those for IAEA TRS-398.
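
    By the usual convention in cylindrical ion-chamber dosimetry, the individual perturbation factors combine multiplicatively into the overall factor. A toy sketch using the abstract's 60Co values where given (P_cav and P_cel are set to unity as placeholders):

        def overall_perturbation(p_cav, p_dis, p_wall, p_stem, p_cel):
            """P_Q as the product of the individual perturbation factors."""
            return p_cav * p_dis * p_wall * p_stem * p_cel

        p_q = overall_perturbation(p_cav=1.0, p_dis=0.9967, p_wall=0.9994,
                                   p_stem=0.9945, p_cel=1.0)
        print(round(p_q, 4))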

  3. MX Siting Investigation. Gravity Survey - Southern Snake Valley (Ferguson Desert), Utah.

    DTIC Science & Technology

    1980-03-28

    Topographic Center (DMAHTC), headquartered in Cheyenne, Wyoming. DMAHTC reduces the data to Simple Bouguer Anomaly (see Section A1.4, Appendix A1.0). The report includes Complete Bouguer Anomaly Contours and Interpreted Gravity Profiles SE-3 and SE-4. …observations and reduced them to Simple Bouguer Anomalies (SBA) for each station as described in Appendix A1.0. Up to three levels of terrain corrections were…

  4. [Constricted ear therapy with free auricular composite grafts].

    PubMed

    Liu, Tun; Zhang, Lian-sheng; Zhuang, Hong-xing; Zhang, Ke-yuan

    2004-03-01

    A simple and effective therapy for single-sided constricted ear is presented. Free composite auricular grafts from the normal side were transplanted to the constricted ear (15 patients, 15 ears), lengthening the helix, exposing the scapha, and correcting the deformity. All 15 composite grafts survived. The helix was lengthened, the scapha exposed, the normal ear reduced, and the constricted ear augmented, so that the two ears became symmetric. This method is simple and the results are satisfactory.

  5. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  6. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin/M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% for the 8 mm and 4 mm collimators, respectively, needed to be applied to PTW microDiamond measurements for LGK Perfexion. Finally, the PTW microDiamond M_Qclin^fclin/M_Qmsr^fmsr for the linear accelerator varied from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size with 1.3% deviation). Regarding the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
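
    In the Alfonso formalism used here, the detector reading ratio is multiplied by the correction factor to yield the field output factor. A hedged sketch follows; the numbers are illustrative, not the paper's data:

        def field_output_factor(m_clin, m_msr, k):
            """Omega = (M_Qclin^fclin / M_Qmsr^fmsr) * k_Qclin,Qmsr^fclin,fmsr."""
            return (m_clin / m_msr) * k

        # A detector that over-responds in a small field needs k < 1; the 2.0%
        # correction quoted for the 4 mm collimator corresponds to k ~ 0.980.
        print(field_output_factor(m_clin=0.82, m_msr=1.00, k=0.980))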

  7. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogeneous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSXYZnrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library; therefore it only approximates the composition of TLD, composite cork and Plastic Water, contributing to uncertainties in TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.

  8. DS02R1: Improvements to Atomic Bomb Survivors' Input Data and Implementation of Dosimetry System 2002 (DS02) and Resulting Changes in Estimated Doses.

    PubMed

    Cullings, H M; Grant, E J; Egbert, S D; Watanabe, T; Oda, T; Nakamura, F; Yamashita, T; Fuchi, H; Funamoto, S; Marumo, K; Sakata, R; Kodama, Y; Ozasa, K; Kodama, K

    2017-01-01

    Individual dose estimates calculated by Dosimetry System 2002 (DS02) for the Life Span Study (LSS) of atomic bomb survivors are based on input data that specify location and shielding at the time of the bombing (ATB). A multi-year effort to improve information on survivors' locations ATB has recently been completed, along with comprehensive improvements in their terrain shielding input data and several improvements to computational algorithms used in combination with DS02 at RERF. Improvements began with a thorough review and prioritization of original questionnaire data on location and shielding that were taken from survivors or their proxies in the period 1949-1963. Related source documents varied in level of detail, from relatively simple lists to carefully-constructed technical drawings of structural and other shielding and surrounding neighborhoods. Systematic errors were reduced in this work by restoring the original precision of map coordinates that had been truncated due to limitations in early data processing equipment and by correcting distortions in the old (WWII-era) maps originally used to specify survivors' positions, among other improvements. Distortion errors were corrected by aligning the old maps and neighborhood drawings to orthophotographic mosaics of the cities that were newly constructed from pre-bombing aerial photographs. Random errors that were reduced included simple transcription errors and mistakes in identifying survivors' locations on the old maps. Terrain shielding input data that had been originally estimated for limited groups of survivors using older methods and data sources were completely re-estimated for all survivors using new digital terrain elevation data. Improvements to algorithms included a fix to an error in the DS02 code for coupling house and terrain shielding, a correction for elevation at the survivor's location in calculating angles to the horizon used for terrain shielding input, an improved method for truncating high dose estimates to 4 Gy to reduce the effect of dose error, and improved methods for calculating averaged shielding transmission factors that are used to calculate doses for survivors without detailed shielding input data. Input data changes are summarized and described here in some detail, along with the resulting changes in dose estimates and a simple description of changes in risk estimates for solid cancer mortality. This and future RERF publications will refer to the new dose estimates described herein as "DS02R1 doses."

  9. EXTINCTION AND DUST GEOMETRY IN M83 H II REGIONS: AN HUBBLE SPACE TELESCOPE/WFC3 STUDY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Guilin; Calzetti, Daniela; Hong, Sungryong

    We present Hubble Space Telescope/WFC3 narrow-band imaging of the starburst galaxy M83 targeting the hydrogen recombination lines (Hβ, Hα, and Paβ), which we use to investigate the dust extinction in the H II regions. We derive extinction maps with 6 pc spatial resolution from two combinations of hydrogen lines (Hα/Hβ and Hα/Paβ), and show that the longer wavelengths probe larger optical depths, with A_V values larger by ≳1 mag than those derived from the shorter wavelengths. This difference leads to a factor ≳2 discrepancy in the extinction-corrected Hα luminosity, a significant effect when studying extragalactic H II regions. By comparing these observations to a series of simple models, we conclude that a large diversity of absorber/emitter geometric configurations can account for the data, implying a more complex physical structure than the classical foreground "dust screen" assumption. However, most data points are bracketed by the foreground screen and a model where dust and emitters are uniformly mixed. When averaged over large (≳100-200 pc) scales, the extinction becomes consistent with a "dust screen", suggesting that other geometries tend to be restricted to more local scales. Moreover, the extinction in any region can be described by a combination of the foreground screen and the uniform mixture model with weights of 1/3 and 2/3 in the center (≲2 kpc), respectively, and 2/3 and 1/3 for the rest of the disk. This simple prescription significantly improves the accuracy of the dust extinction corrections and can be especially useful for pixel-based analyses of galaxies similar to M83.

  10. A forecasting model for dengue incidence in the District of Gampaha, Sri Lanka.

    PubMed

    Withanage, Gayan P; Viswakula, Sameera D; Nilmini Silva Gunawardena, Y I; Hapugoda, Menaka D

    2018-04-24

    Dengue is one of the major health problems in Sri Lanka, causing an enormous social and economic burden to the country. An accurate early warning system can enhance the efficiency of preventive measures. The aim of the study was to develop and validate a simple, accurate forecasting model for the District of Gampaha, Sri Lanka. Three time-series regression models were developed using monthly rainfall, rainy days, temperature, humidity, wind speed and retrospective dengue incidences over the period January 2012 to November 2015 for the District of Gampaha, Sri Lanka. Various lag times were analyzed to identify optimum forecasting periods, including interactions of multiple lags. The models were validated using epidemiological data from December 2015 to November 2017. Prepared models were compared based on Akaike's information criterion, Bayesian information criterion and residual analysis. The selected model forecasted correctly with mean absolute errors of 0.07 and 0.22, and root mean squared errors of 0.09 and 0.28, for the training and validation periods, respectively. There were no dengue epidemics observed in the district during the training period, and nine outbreaks occurred during the forecasting period. The proposed model captured five outbreaks and correctly rejected 14 within the testing period of 24 months. The Peirce skill score of the model was 0.49, with a receiver operating characteristic of 86% and 92% sensitivity. The developed weather-based forecasting model allows warnings of impending dengue outbreaks and epidemics one month in advance with high accuracy. Depending upon climatic factors, the previous month's dengue cases had a significant effect on the dengue incidences of the current month. The simple, precise and understandable forecasting model developed could be used to manage limited public health resources effectively for patient management, vector surveillance and intervention programmes in the district.
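
    The modelling approach, regressing this month's incidence on lagged weather and the previous month's cases, can be sketched in a few lines. Everything below (variable choice, the one-month lag, the synthetic data) is an assumption for illustration only:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(7)
        n = 48  # four years of monthly data
        df = pd.DataFrame({
            "cases": rng.poisson(30, n).astype(float),
            "rainfall": rng.gamma(2.0, 50.0, n),
            "temperature": rng.normal(28.0, 1.5, n),
        })
        df["cases_lag1"] = df["cases"].shift(1)
        df["rainfall_lag1"] = df["rainfall"].shift(1)
        df = df.dropna()

        # Ordinary least squares fit of the lagged regression.
        X = np.column_stack([np.ones(len(df)), df["cases_lag1"],
                             df["rainfall_lag1"], df["temperature"]])
        beta, *_ = np.linalg.lstsq(X, df["cases"].to_numpy(), rcond=None)
        print(np.round(beta, 3))  # intercept and fitted coefficients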

  11. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  12. Mapping Affected Area after a Flash-Flooding Storm Using Multi Criteria Analysis and Spectral Indices

    NASA Astrophysics Data System (ADS)

    Al-Akad, S.; Akensous, Y.; Hakdaoui, M.

    2017-11-01

    This research article summarizes the applications of remote sensing and GIS to study urban flood risk in Al Mukalla. Satellite acquisition of a flood event of October 2015 in Al Mukalla (Yemen), combined with flood risk mapping techniques, illustrates the potential risk present in this city. Satellite images (Landsat and DEM data, atmospherically and radiometrically corrected, with geometric and topographic distortions rectified) are used for flood risk mapping to produce a hazard (vulnerability) map. This map is obtained by applying image-processing techniques in a geographic information system (GIS) environment, together with the NDVI and NDWI indices and a method to estimate flood-hazard areas. Five factors were considered in order to estimate the spatial distribution of the hazardous areas: flow accumulation, slope, land use, geology, and elevation. The multi-criteria analysis makes it possible to assess vulnerability to flooding and to map the areas of the city of Al Mukalla at risk of flooding. The main object of this research is to provide a simple and rapid method to reduce and manage the risks caused by floods in Yemen, taking the city of Al Mukalla as an example.
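
    The two spectral indices named above have standard definitions, sketched below on numpy band arrays. The band choice follows the usual Landsat conventions; the paper's exact band math is not given in the abstract:

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index."""
            return (nir - red) / (nir + red + 1e-12)

        def ndwi(green, nir):
            """McFeeters-style Normalized Difference Water Index for open water."""
            return (green - nir) / (green + nir + 1e-12)

        rng = np.random.default_rng(3)
        nir, red, green = (rng.random((4, 4)) for _ in range(3))
        print(ndvi(nir, red).round(2))
        print(ndwi(green, nir).round(2))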

  13. Accuracy of 1D microvascular flow models in the limit of low Reynolds numbers.

    PubMed

    Pindera, Maciej Z; Ding, Hui; Athavale, Mahesh M; Chen, Zhijian

    2009-05-01

    We describe results of numerical simulations of steady flows in tubes with branch bifurcations using fully 3D and reduced 1D geometries. The intent is to delineate the range of validity of reduced models used for simulations of flows in microcapillary networks, as a function of the flow Reynolds number Re. Results from model problems indicate that for Re less than 1 and possibly as high as 10, vasculatures may be represented by strictly 1D Poiseuille flow geometries with flow variation in the axial dimensions only. In that range flow rate predictions in the different branches generated by 1D and 3D models differ by a constant factor, independent of Re. When the cross-sectional areas of the branches are constant these differences are generally small and appear to stem from an uncertainty of how the individual branch lengths are defined. This uncertainty can be accounted for by a simple geometrical correction. For non-constant cross-sections the differences can be much more significant. If additional corrections for the presence of branch junctions and flow area variations are not taken into account in 1D models of complex vasculatures, the resultant flow predictions should be interpreted with caution.
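
    The 1D representation discussed above reduces each branch to a Hagen-Poiseuille resistance, with flow dividing between parallel daughter branches in inverse proportion to resistance. A minimal sketch with made-up geometry:

        import math

        def poiseuille_resistance(mu, length, radius):
            """Hydraulic resistance of a cylindrical branch: R = 8*mu*L/(pi*r^4)."""
            return 8.0 * mu * length / (math.pi * radius**4)

        mu = 3.5e-3                  # blood-like viscosity, Pa*s (assumed)
        R1 = poiseuille_resistance(mu, length=1.0e-3, radius=50e-6)
        R2 = poiseuille_resistance(mu, length=1.5e-3, radius=40e-6)
        q_total = 1e-10              # parent-branch flow rate, m^3/s (assumed)
        q1 = q_total * (1 / R1) / (1 / R1 + 1 / R2)
        print(q1, q_total - q1)      # flow split between the two daughters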

  14. Linear positioning laser calibration setup of CNC machine tools

    NASA Astrophysics Data System (ADS)

    Sui, Xiulin; Yang, Congjing

    2002-10-01

    The linear positioning laser calibration setup of CNC machine tools is capable of executing machine tool laser calibration and backlash compensation. Using this setup, hole locations on CNC machine tools will be correct, and machine tool geometry will be evaluated and adjusted. Machine tool laser calibration and backlash compensation is a simple and straightforward process. First, the setup 'finds' the stroke limits of the axis, and the laser head is brought into correct alignment. Second, the machine axis is moved to the other extreme, and the laser head is aligned using rotation and elevation adjustments. Finally, the machine is moved to the start position and final alignment is verified. The stroke of the machine and the machine compensation interval dictate the amount of data required for each axis. These factors determine the amount of time required for a thorough compensation of the linear positioning accuracy. The Laser Calibrator System monitors the material temperature and the air density; this takes into consideration machine thermal growth and laser beam frequency. This linear positioning laser calibration setup can be used on CNC machine tools, CNC lathes, horizontal centers and vertical machining centers.

  15. Measurement of microchannel fluidic resistance with a standard voltage meter.

    PubMed

    Godwin, Leah A; Deal, Kennon S; Hoepfner, Lauren D; Jackson, Louis A; Easley, Christopher J

    2013-01-03

    A simplified method for measuring the fluidic resistance (R_fluidic) of microfluidic channels is presented, in which the electrical resistance (R_elec) of a channel filled with a conductivity standard solution can be measured and directly correlated to R_fluidic using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine R_fluidic to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied using these particular channels. When compared to standard flow rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ~40 to 600 kPa s mm^-3) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic R_fluidic measurements were possible in more complex microfluidic designs. Microchannel R_elec was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems. Copyright © 2012 Elsevier B.V. All rights reserved.
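
    The analogy exploited here is that, for a channel of uniform cross section, both electrical and hydraulic resistance scale as length over area, so R_fluidic is proportional to R_elec with a constant set by viscosity, solution conductivity, and channel shape. The "simple equation" is therefore a one-line conversion; the constant below is a placeholder, not the paper's calibration:

        def fluidic_from_electrical(r_elec_ohm, c_kpa_s_mm3_per_ohm):
            """R_fluidic = C * R_elec, with C bundling fluid and channel properties."""
            return c_kpa_s_mm3_per_ohm * r_elec_ohm

        # e.g. 1.2 MOhm measured with an assumed C of 1e-4 kPa s mm^-3 per ohm:
        print(fluidic_from_electrical(1.2e6, 1e-4))  # -> 120.0 kPa s mm^-3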

  16. Measurement of Microchannel Fluidic Resistance with a Standard Voltage Meter

    PubMed Central

    Godwin, Leah A.; Deal, Kennon S.; Hoepfner, Lauren D.; Jackson, Louis A.; Easley, Christopher J.

    2012-01-01

    A simplified method for measuring the fluidic resistance (Rfluidic) of microfluidic channels is presented, in which the electrical resistance (Relec) of a channel filled with a conductivity standard solution can be measured and directly correlated to Rfluidic using a simple equation. Although a slight correction factor could be applied in this system to improve accuracy, results showed that a standard voltage meter could be used without calibration to determine Rfluidic to within 12% error. Results accurate to within 2% were obtained when a geometric correction factor was applied using these particular channels. When compared to standard flow rate measurements, such as meniscus tracking in outlet tubing, this approach provided a more straightforward alternative and resulted in lower measurement error. The method was validated using 9 different fluidic resistance values (from ~40 – 600 kPa s mm−3) and over 30 separately fabricated microfluidic devices. Furthermore, since the method is analogous to resistance measurements with a voltage meter in electrical circuits, dynamic Rfluidic measurements were possible in more complex microfluidic designs. Microchannel Relec was shown to dynamically mimic pressure waveforms applied to a membrane in a variable microfluidic resistor. The variable resistor was then used to dynamically control aqueous-in-oil droplet sizes and spacing, providing a unique and convenient control system for droplet-generating devices. This conductivity-based method for fluidic resistance measurement is thus a useful tool for static or real-time characterization of microfluidic systems. PMID:23245901

  17. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding window based daily correction factor derivations that help build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was considered to test the effectiveness of the proposed bias correction method. The quantile-quantile (Q-Q) plots and Nash Sutcliffe efficiency (NSE) were employed for evaluation of different methods of bias correction. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. The methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet day frequencies, performed superior compared to the other methods, which did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE value above 0.81 over most parts of India. Hydrological simulations forced using the bias corrected rainfall (distribution mapping and modified power transformation methods that used the proposed daily correction factors) was similar to those simulated by the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a larger impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that the distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data irrespective of seasons or climate zones for realistic simulation of streamflow.
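
    The key construction, per-day correction factors pooled over a sliding window, can be sketched directly. The ±15-day window and the simple linear scaling below are assumptions for illustration; the paper applies the same windowing idea to five different correction methods:

        import numpy as np

        def daily_correction_factors(obs, mod, half_window=15):
            """obs, mod: (years, 365) daily rainfall; returns 365 daily factors."""
            factors = np.empty(365)
            for d in range(365):
                idx = np.arange(d - half_window, d + half_window + 1) % 365
                pooled_mod = mod[:, idx].mean()
                factors[d] = obs[:, idx].mean() / pooled_mod if pooled_mod > 0 else 1.0
            return factors

        rng = np.random.default_rng(5)
        obs = rng.gamma(0.8, 6.0, size=(20, 365))
        mod = 1.3 * rng.gamma(0.8, 6.0, size=(20, 365))   # wet-biased model
        corrected = mod * daily_correction_factors(obs, mod)
        print(round(obs.mean(), 2), round(corrected.mean(), 2))  # now comparable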

  18. Three-Color Chromosome Painting as Seen through the Eyes of mFISH: Another Look at Radiation-Induced Exchanges and Their Conversion to Whole-Genome Equivalency

    PubMed Central

    Loucas, Bradford D.; Shuryak, Igor; Cornforth, Michael N.

    2016-01-01

    Whole-chromosome painting (WCP) typically involves the fluorescent staining of a small number of chromosomes. Consequently, it is capable of detecting only a fraction of exchanges that occur among the full complement of chromosomes in a genome. Mathematical corrections are commonly applied to WCP data in order to extrapolate the frequency of exchanges occurring in the entire genome [whole-genome equivalency (WGE)]. However, the reliability of WCP to WGE extrapolations depends on underlying assumptions whose conditions are seldom met in actual experimental situations, in particular the presumed absence of complex exchanges. Using multi-fluor fluorescence in situ hybridization (mFISH), we analyzed the induction of simple exchanges produced by graded doses of 137Cs gamma rays (0–4 Gy), and also 1.1 GeV 56Fe ions (0–1.5 Gy). In order to represent cytogenetic damage as it would have appeared to the observer following standard three-color WCP, all mFISH information pertaining to exchanges that did not specifically involve chromosomes 1, 2, or 4 was ignored. This allowed us to reconstruct dose–responses for three-color apparently simple (AS) exchanges. Using extrapolation methods similar to those derived elsewhere, these were expressed in terms of WGE for comparison to mFISH data. Based on AS events, the extrapolated frequencies systematically overestimated those actually observed by mFISH. For gamma rays, these errors were practically independent of dose. When constrained to a relatively narrow range of doses, the WGE corrections applied to both 56Fe and gamma rays predicted genome-equivalent damage with a level of accuracy likely sufficient for most applications. However, the apparent accuracy associated with WCP to WGE corrections is both fortuitous and misleading. This is because (in normal practice) such corrections can only be applied to AS exchanges, which are known to include complex aberrations in the form of pseudosimple exchanges. When WCP to WGE corrections are applied to true simple exchanges, the results are less than satisfactory, leading to extrapolated values that underestimate the true WGE response by unacceptably large margins. Likely explanations for these results are discussed, as well as their implications for radiation protection. Thus, in seeming contradiction to the notion that complex aberrations be avoided altogether in WGE corrections (and in violation of the assumptions upon which these corrections are based), their inadvertent inclusion in three-color WCP data is actually required in order for them to yield even marginally acceptable results. PMID:27014627
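
    The generic WCP-to-WGE extrapolation under scrutiny here is the standard painting correction (the Lucas et al. formula): if the painted chromosomes cover a fraction fp of the genome, the genomic exchange frequency is estimated as F_G = F_p / (2.05 * fp * (1 - fp)). A sketch, with an assumed coverage fraction:

        def whole_genome_equivalent(f_painted, fp):
            """Lucas-style extrapolation: F_G = F_p / (2.05 * fp * (1 - fp))."""
            return f_painted / (2.05 * fp * (1.0 - fp))

        # Chromosomes 1, 2 and 4 cover roughly 20% of the genome (assumed value).
        print(round(whole_genome_equivalent(0.05, 0.20), 3))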

  19. Individual risk of cutaneous melanoma in New Zealand: developing a clinical prediction aid.

    PubMed

    Sneyd, Mary Jane; Cameron, Claire; Cox, Brian

    2014-05-22

    New Zealand and Australia have the highest melanoma incidence rates worldwide. In New Zealand, both the incidence and thickness have been increasing. Clinical decisions require accurate risk prediction, but a simple list of genetic, phenotypic and behavioural risk factors is inadequate to estimate individual risk, as the risk factors for melanoma have complex interactions. In order to offer tailored clinical management strategies, we developed a New Zealand prediction model to estimate individual 5-year absolute risk of melanoma. A population-based case-control study (368 cases and 270 controls) of melanoma risk factors provided estimates of relative risks for fair-skinned New Zealanders aged 20-79 years. Model selection techniques and multivariate logistic regression were used to determine the important predictors. The relative risks for predictors were combined with baseline melanoma incidence rates and non-melanoma mortality rates to calculate individual probabilities of developing melanoma within 5 years. For women, the best model included skin colour, number of moles ≥5 mm on the right arm, having a 1st degree relative with large moles, and a personal history of non-melanoma skin cancer (NMSC). The model correctly classified 68% of participants; the C-statistic was 0.74. For men, the best model included age, place of occupation up to age 18 years, number of moles ≥5 mm on the right arm, birthplace, and a history of NMSC. The model correctly classified 67% of cases; the C-statistic was 0.71. We have developed the first New Zealand risk prediction model that calculates individual absolute 5-year risk of melanoma. This model will aid physicians to identify individuals at high risk, allowing them to individually target surveillance and other management strategies, and thereby reduce the high melanoma burden in New Zealand.
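
    Turning a fitted relative risk into a 5-year absolute risk, combining the baseline melanoma hazard with competing non-melanoma mortality, can be sketched with constant hazards. All numbers and the exponential form below are assumptions, not the paper's model:

        import math

        def five_year_absolute_risk(rel_risk, baseline_annual_rate,
                                    annual_competing_mortality):
            """Constant-hazard competing-risk probability of melanoma within 5 years."""
            h = rel_risk * baseline_annual_rate   # individual's melanoma hazard
            m = annual_competing_mortality
            return (h / (h + m)) * (1.0 - math.exp(-5.0 * (h + m)))

        print(round(five_year_absolute_risk(2.5, 0.0005, 0.01), 4))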

  20. Quantum Computer Science

    NASA Astrophysics Data System (ADS)

    Mermin, N. David

    2007-08-01

    Preface; 1. Cbits and Qbits; 2. General features and some simple examples; 3. Breaking RSA encryption with a quantum computer; 4. Searching with a quantum computer; 5. Quantum error correction; 6. Protocols that use just a few Qbits; Appendices; Index.

  1. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  2. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation together with a priori information about aerosols and about NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude over boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. Combining the aerosol and shape factor corrections together results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
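
    The AMF construction described above is, at heart, a vertical integral of the model NO2 shape factor weighted by radiative-transfer scattering weights. The toy numbers below stand in for GEOS-Chem and LIDORT output; only the structure of the calculation is meant to be illustrative:

        import numpy as np

        def air_mass_factor(shape_factor, scattering_weights, layer_thickness):
            """AMF = sum over layers of w(z) * S(z) * dz, with S normalized to 1."""
            return float(np.sum(scattering_weights * shape_factor * layer_thickness))

        z = np.array([0.5, 1.5, 3.0, 6.0, 10.0])      # layer midpoints, km (toy)
        dz = np.array([1.0, 1.0, 2.0, 4.0, 4.0])      # layer thicknesses, km (toy)
        sf = np.array([0.5, 0.25, 0.1, 0.03, 0.005])  # NO2 shape factor, km^-1
        sf /= np.sum(sf * dz)                          # normalize the profile
        w = 1.0 - np.exp(-z / 2.0)                     # weights grow with altitude
        print(round(air_mass_factor(sf, w, dz), 3))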

  3. Crater property in two-particle bound states: When and why

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, Chi-Keung

    2000-06-01

    Crater has shown that, for two particles (with masses m_1 and m_2) in a Coulombic bound state, the charge distribution is equal to the sum of the two charge distributions obtained by taking m_1 → ∞ and m_2 → ∞, respectively, while keeping the same Coulombic potential. We provide a simple scaling criterion to determine whether an arbitrary Hamiltonian possesses this property. In particular, we show that, for a Coulombic system, fine structure corrections preserve this Crater property while two-particle relativistic corrections and/or hyperfine corrections may destroy it. (c) 2000 American Association of Physics Teachers.

  4. Two flaps and Z-plasty technique for correction of longitudinal ear lobe cleft.

    PubMed

    Lee, Paik-Kwon; Ju, Hong-Sil; Rhie, Jong-Won; Ahn, Sang-Tae

    2005-06-01

    Various surgical techniques have been reported for the correction of congenital ear lobe deformities. We present our method, the two-flaps-and-Z-plasty technique, for correcting the longitudinal ear lobe cleft. The technique is simple and easy to perform. It preserves the bulk of the ear lobe with minimal tissue sacrifice and leaves a shorter operative scar. The small Z-plasty at the free ear lobe margin avoids a notching deformity and makes the contour of the ear lobe smoother. The result is satisfactory in terms of matching the contralateral normal ear lobe in shape and symmetry.

  5. A booklet on participants' rights to improve consent for clinical research: a randomized trial.

    PubMed

    Benatar, Jocelyne R; Mortimer, John; Stretton, Matthew; Stewart, Ralph A H

    2012-01-01

    Information on the rights of subjects in clinical trials has become increasingly complex and difficult to understand. This study evaluates whether a simple booklet, relevant to all research studies, improves the understanding of rights needed for subjects to provide informed consent. Twenty-one currently used informed consent forms (ICFs) from international clinical trials were separated into information related to the specific research study and general information on participants' rights. A booklet designed to provide information on participants' rights in simple language was developed to replace this information in current ICFs. Readability of each component of the ICFs and of the booklet was then assessed using the Flesch-Kincaid Reading Ease score (FK). To further evaluate the booklet, 282 hospital inpatients were randomised to one of three ways of presenting research information: a standard ICF, the booklet combined with a short ICF, or the booklet combined with a simplified ICF. Comprehension of information related to the research proposal and to participants' rights was assessed by questionnaire. Information related to participants' rights contributed an average of 44% of the words in standard ICFs and was harder to read than information describing the clinical trial (FK 25 versus (vs.) 41 respectively, p = 0.0003). The booklet reduced the number of words and improved the FK score from 25 to 42. The simplified ICF had a slightly higher FK score than the standard ICF (50 vs. 42). Comprehension was better for the booklet with a short ICF (62% correct; 95% confidence interval (CI) 56 to 67) and for the booklet with a simplified ICF (62%; CI 58 to 68) than for the standard ICF (52%; CI 47 to 57), p = 0.009. This was due to better understanding of questions on rights (62% vs. 49% correct, p = 0.0008). Comprehension of study-related information was similar for the simplified and standard ICFs (60% vs. 64% correct, p = 0.68). A booklet provides a simple, consistent approach to providing information on participants' rights that is relevant to all research studies, and it improves comprehension in patients who typically participate in clinical trials.
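
    The Flesch-Kincaid Reading Ease score used above is a simple function of sentence and word length. A minimal sketch follows, with a crude vowel-group syllable counter (the published score uses proper syllable counts, so treat the output as approximate); higher scores mean easier text.

    ```python
    import re

    def count_syllables(word):
        """Crude syllable estimate: runs of vowels, minus a silent final 'e'."""
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def flesch_reading_ease(text):
        """FK Reading Ease = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return (206.835 - 1.015 * len(words) / len(sentences)
                - 84.6 * syllables / len(words))

    print(flesch_reading_ease("You may stop taking part in the study at any time."))
    ```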

  6. A Booklet on Participants’ Rights to Improve Consent for Clinical Research: A Randomized Trial

    PubMed Central

    Benatar, Jocelyne R.; Mortimer, John; Stretton, Matthew; Stewart, Ralph A. H.

    2012-01-01

    Objective Information on the rights of subjects in clinical trials has become increasingly complex and difficult to understand. This study evaluates whether a simple booklet, relevant to all research studies, improves the understanding of rights needed for subjects to provide informed consent. Methods Twenty-one currently used informed consent forms (ICFs) from international clinical trials were separated into information related to the specific research study and general information on participants’ rights. A booklet designed to provide information on participants’ rights in simple language was developed to replace this information in current ICFs. Readability of each component of the ICFs and of the booklet was then assessed using the Flesch-Kincaid Reading Ease score (FK). To further evaluate the booklet, 282 hospital inpatients were randomised to one of three ways of presenting research information: a standard ICF, the booklet combined with a short ICF, or the booklet combined with a simplified ICF. Comprehension of information related to the research proposal and to participants’ rights was assessed by questionnaire. Results Information related to participants’ rights contributed an average of 44% of the words in standard ICFs and was harder to read than information describing the clinical trial (FK 25 versus (vs.) 41 respectively, p = 0.0003). The booklet reduced the number of words and improved the FK score from 25 to 42. The simplified ICF had a slightly higher FK score than the standard ICF (50 vs. 42). Comprehension was better for the booklet with a short ICF (62% correct; 95% confidence interval (CI) 56 to 67) and for the booklet with a simplified ICF (62%; CI 58 to 68) than for the standard ICF (52%; CI 47 to 57), p = 0.009. This was due to better understanding of questions on rights (62% vs. 49% correct, p = 0.0008). Comprehension of study-related information was similar for the simplified and standard ICFs (60% vs. 64% correct, p = 0.68). Conclusions A booklet provides a simple, consistent approach to providing information on participants’ rights that is relevant to all research studies, and it improves comprehension in patients who typically participate in clinical trials. PMID:23094034

  7. Spec Rekindled-A Simple Torque Correction Mechanics for Transposed Teeth in Conjunction with Pre-adjusted Edgewise Appliance System

    PubMed Central

    Singh, Harpreet; Thakkar, Surbhi

    2016-01-01

    Complete transposition of teeth is a rather rare phenomenon. After correction of a transposed and malaligned lateral incisor and canine, attainment of appropriate individual antagonistic tooth torque is indispensable, which many orthodontists consider a herculean task. Here, a novel method is proposed that demonstrates the use of the Spec reverse torquing auxiliary as an effective adjunctive aid in conjunction with pre-adjusted edgewise brackets. PMID:28209017

  8. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE PAGES

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...

    2017-08-12

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution but under-predicts the macro-scale slip velocity, even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster sizes and lower slip velocities with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution still differs from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradients and particle fluctuations may improve the current predictions of cluster distribution.

  9. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of the coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution but under-predicts the macro-scale slip velocity, even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster sizes and lower slip velocities with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution still differs from DNS: both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than that of CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradients and particle fluctuations may improve the current predictions of cluster distribution.

  10. Canopy interception variability in changing climate

    NASA Astrophysics Data System (ADS)

    Kalicz, Péter; Herceg, András; Kisfaludi, Balázs; Csáki, Péter; Gribovszki, Zoltán

    2017-04-01

    Tree canopies play an important role in forest hydrology. They intercept a significant amount of precipitation and evaporate it back into the atmosphere during and after precipitation events. This process determines the net water intake of forest soils and is therefore an important factor in the hydrological processes of forested catchments. The average interception loss is determined by the storage capacity of the tree canopies and by the rainfall distribution. Canopy storage capacity depends on several factors; it shows a strong correlation with the leaf area index (LAI), and some equations are available to quantify this dependence. LAI varies significantly on both spatial and temporal scales, and several methods exist to derive it from remotely sensed data, which makes it possible to follow its changes. In this study, MODIS sensor based LAI time series are used to estimate changes in the storage capacity. The rainfall distribution is derived from the FORESEE database, which was developed for climate change impact studies in the Carpathian Basin; it contains observation-based precipitation data for the past and applies a bias correction method to the climate projections. A site-based estimate is developed for the Sopron Hills area, located at the eastern foothills of the Alps in Hungary. The study site, the Hidegvíz Valley experimental catchment, lies in the central valley of the Sopron Hills, where long-term interception measurements are available for several forest sites. Combining the ground-based observations with the MODIS LAI datasets, a simple function is developed to describe the average yearly variation in canopy storage. The interception measurements and the CREMAP evapotranspiration data are used to calibrate a simple interception loss equation based on Merriam's work. From this equation and the FORESEE bias-corrected precipitation data, an estimate is developed for a better understanding of the feedback of the forest canopy on the hydrological cycle. This research has been supported by the Agroclimate.2 VKSZ_12-1-2013-0034 project, and the corresponding author's work was also supported by the János Bolyai Scholarship of the Hungarian Academy of Sciences.
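
    The calibrated equation itself is not given in the abstract; below is a minimal sketch of a Merriam-type interception model with a hypothetical LAI-dependent storage capacity (the coefficient s_leaf is illustrative, not a value from the study).

    ```python
    import math

    def interception_loss(P, S, k=0.0):
        """Merriam-type event interception loss (mm): canopy storage fills
        asymptotically toward capacity S as gross rainfall P grows; the
        optional k*P term crudely represents evaporation during the event."""
        return S * (1.0 - math.exp(-P / S)) + k * P

    # Hypothetical LAI-dependent storage capacity; s_leaf (mm per unit LAI)
    # is an illustrative coefficient, not a calibrated value from the study.
    s_leaf = 0.2
    for lai in (2.0, 4.0, 6.0):
        print(lai, round(interception_loss(P=10.0, S=s_leaf * lai), 2))
    ```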

  11. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

    …peak shaving, conducting power factor correction, matching critical load to the most efficient distributed resource, and islanding a system during… …photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD…

  12. Simulating Freshwater Availability under Future Climate Conditions

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Zeng, N.; Motesharrei, S.; Gustafson, K. C.; Rivas, J.; Miralles-Wilhelm, F.; Kalnay, E.

    2013-12-01

    Freshwater availability is a key factor for regional development. Precipitation, evaporation, and river inflow and outflow are the major terms in the estimate of regional water supply. In this study, we aim to obtain realistic estimates of these variables from 1901 to 2100. First we calculated the ensemble mean precipitation using the 2011-2100 RCP4.5 output (re-sampled to half-degree spatial resolution) from 16 General Circulation Models (GCMs) participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). The projections are then combined with the half-degree 1901-2010 Climate Research Unit (CRU) TS3.2 dataset after bias correction. We then used the combined data to drive our UMD Earth System Model (ESM) in order to generate evaporation and runoff. We also developed a River-Routing Scheme, based on the idea of Taikan Oki, as part of the ESM. It is capable of calculating river inflow and outflow for any region, driven by the gridded runoff output. River direction and slope information from the Global Dominant River Tracing (DRT) dataset are included in our scheme. The effects of reservoirs/dams are parameterized based on a few simple factors such as soil moisture, population density and geographic region. Simulated river flow is validated against river gauge measurements for the world's major rivers. We have applied our river flow calculation to two data-rich watersheds in the United States: the Phoenix AMA watershed and the Potomac River Basin. The results are used in our SImple WAter model (SIWA) to explore water management options.
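
    As a back-of-envelope illustration of the budget named above, the sketch below combines the four major terms; the numbers are illustrative only, not results from the study.

    ```python
    def freshwater_availability(precip, evap, inflow, outflow):
        """Regional water supply from the four major budget terms
        (all in the same units, e.g. km^3/yr)."""
        return precip - evap + inflow - outflow

    # Illustrative numbers only:
    print(freshwater_availability(precip=12.4, evap=9.1, inflow=3.2, outflow=4.0))
    ```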

  13. Internal (Annular) and Compressible External (Flat Plate) Turbulent Flow Heat Transfer Correlations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dechant, Lawrence; Smith, Justin

    Here we provide a discussion regarding the applicability of a family of traditional heat transfer correlation based models for several (unit level) heat transfer problems associated with flight heat transfer estimates and internal flow heat transfer associated with an experimental simulation design (Dobranich 2014). Variability between semi-empirical free-flight models suggests relative differences for heat transfer coefficients on the order of 10%, while the internal annular flow behavior is larger, with differences on the order of 20%. We emphasize that these expressions are strictly valid only for the geometries they have been derived for, e.g. the fully developed annular flow or simple external flow problems. Though the application of flat-plate skin friction estimates to cylindrical bodies is a traditional procedure for estimating skin friction and heat transfer, an over-prediction bias is often observed when using these approximations for missile-type bodies. As a correction for this over-estimate trend, we discuss a simple scaling reduction factor for flat-plate turbulent skin friction and heat transfer solutions (correlations) applied to blunt bodies of revolution at zero angle of attack. The method estimates the ratio between axisymmetric and 2-d stagnation point heat transfer skin friction and Stanton number solution expressions for sub-turbulent Reynolds numbers < 1×10^4. This factor is assumed to also directly influence the flat-plate results applied to the cylindrical portion of the flow, and the flat-plate correlations are modified by…

  14. The role of noise in self-organized decision making by the true slime mold Physarum polycephalum.

    PubMed

    Meyer, Bernd; Ansorge, Cedrick; Nakagaki, Toshiyuki

    2017-01-01

    Self-organized mechanisms are frequently encountered in nature and known to achieve flexible, adaptive control and decision-making. Noise plays a crucial role in such systems: it can enable a self-organized system to reliably adapt to short-term changes in the environment while maintaining a generally stable behavior. This is fundamental in biological systems because they must strike a delicate balance between stable and flexible behavior. In the present paper we analyse the role of noise in the decision-making of the true slime mold Physarum polycephalum, an important model species for the investigation of computational abilities in simple organisms. We propose a simple biological experiment to investigate the reaction of P. polycephalum to time-variant risk factors and present a stochastic extension of an established mathematical model for P. polycephalum to analyze this experiment. It predicts that, due to the mechanism of stochastic resonance, noise can enable P. polycephalum to correctly assess time-variant risk factors, while the corresponding noise-free system fails to do so. Beyond the study of P. polycephalum we demonstrate that the influence of noise on self-organized decision-making is not tied to a specific organism. Rather it is a general property of the underlying process dynamics, which appears to be universal across a wide range of systems. Our study thus provides further evidence that stochastic resonance is a fundamental component of the decision-making in self-organized macroscopic and microscopic groups and organisms.

  15. The role of noise in self-organized decision making by the true slime mold Physarum polycephalum

    PubMed Central

    Ansorge, Cedrick; Nakagaki, Toshiyuki

    2017-01-01

    Self-organized mechanisms are frequently encountered in nature and known to achieve flexible, adaptive control and decision-making. Noise plays a crucial role in such systems: It can enable a self-organized system to reliably adapt to short-term changes in the environment while maintaining a generally stable behavior. This is fundamental in biological systems because they must strike a delicate balance between stable and flexible behavior. In the present paper we analyse the role of noise in the decision-making of the true slime mold Physarum polycephalum, an important model species for the investigation of computational abilities in simple organisms. We propose a simple biological experiment to investigate the reaction of P. polycephalum to time-variant risk factors and present a stochastic extension of an established mathematical model for P. polycephalum to analyze this experiment. It predicts that—due to the mechanism of stochastic resonance—noise can enable P. polycephalum to correctly assess time-variant risk factors, while the corresponding noise-free system fails to do so. Beyond the study of P. polycephalum we demonstrate that the influence of noise on self-organized decision-making is not tied to a specific organism. Rather it is a general property of the underlying process dynamics, which appears to be universal across a wide range of systems. Our study thus provides further evidence that stochastic resonance is a fundamental component of the decision-making in self-organized macroscopic and microscopic groups and organisms. PMID:28355213

  16. Circuit Impedance Could Be a Crucial Factor Influencing Radiofrequency Ablation Efficacy and Safety: A Myocardial Phantom Study of the Problem and its Correction.

    PubMed

    Bhaskaran, Abhishek; Barry, M A; Pouliopoulos, Jim; Nalliah, Chrishan; Qian, Pierre; Chik, William; Thavapalachandran, Sujitha; Davis, Lloyd; McEwan, Alistair; Thomas, Stuart; Kovoor, Pramesh; Thiagalingam, Aravinda

    2016-03-01

    Circuit impedance could affect the safety and efficacy of radiofrequency (RF) ablation. Our aim was to perform irrigated RF ablations with graded impedance to compare lesion dimensions and overheated dimensions in (1) fixed-power ablations and (2) power-corrected ablations. Ablations were performed with an irrigated Navistar Thermocool catheter and a Stockert EP shuttle generator at settings of 40 W power for 60 seconds, in a previously validated myocardial phantom. The impedance of the circuit was set at 60 Ω, 80 Ω, 100 Ω, 120 Ω, 140 Ω, and 160 Ω. The lesion and overheated dimensions were measured at the 53 °C and 80 °C isotherms, respectively. In the second set of ablations, power was corrected according to circuit impedance. In total, 70 ablations were performed. The lesion volume was 72.0 ± 4.8% and 44.7 ± 4.6% higher at 80 Ω and 100 Ω, respectively, compared to that at 120 Ω, and it was 15.4 ± 1.2%, 28.1 ± 2.0%, and 38.0 ± 1.8% lower at 140 Ω, 160 Ω, and 180 Ω, respectively. The overheated volume was four times larger when impedance was reduced to 80 Ω from 100 Ω; it was absent at 120 Ω and above. In the power-corrected ablations, the lesion volumes were similar to those of the 40 W/120 Ω ablations and there was no evidence of overheating. The lesion and overheated dimensions were significantly larger with lower circuit impedance during irrigated RF ablation, and the lesion size was smaller in high-impedance ablations. Power delivery adjusted to impedance using a simple equation improved the consistency of lesion formation and prevented overheating. © 2015 Wiley Periodicals, Inc.
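
    The study's equation is not reproduced in the abstract; one plausible form, sketched below, scales the set power with circuit impedance so that the delivered current (I² = P/Z) matches the 40 W/120 Ω reference condition. Treat both the functional form and the reference values as assumptions.

    ```python
    def corrected_power(z_ohm, p_ref=40.0, z_ref=120.0):
        """Hypothetical impedance correction: scale the set power with circuit
        impedance so that delivered current (I^2 = P/Z) matches the 40 W/120 ohm
        reference condition. The abstract confirms power was adjusted to
        impedance by a simple equation but does not give its exact form."""
        return p_ref * (z_ohm / z_ref)

    for z in (80, 100, 120, 140, 160):
        print(z, "ohm ->", round(corrected_power(z), 1), "W")
    ```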

  17. Integrability in AdS/CFT correspondence: quasi-classical analysis

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay

    2009-06-01

    In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure—the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method reproduces perfectly the earlier results obtained by expanding the string action around some simple classical solutions. The construction is explicitly covariant, is not based on a particular parameterization of the fields, and as a result is free from ambiguities. On the other hand, the finite size corrections in a particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor, where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum over all physical fluctuations and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of the fluctuations, with exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector are derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.

  18. Optically buffered Jones-matrix-based multifunctional optical coherence tomography with polarization mode dispersion correction

    PubMed Central

    Hong, Young-Joo; Makita, Shuichi; Sugiyama, Satoshi; Yasuno, Yoshiaki

    2014-01-01

    Polarization mode dispersion (PMD) degrades the performance of Jones-matrix-based polarization-sensitive multifunctional optical coherence tomography (JM-OCT). The problem is especially acute for optically buffered JM-OCT, because the long fiber in the optical buffering module induces a large amount of PMD. This paper presents a method to correct the effect of PMD in JM-OCT. We first mathematically model the PMD in JM-OCT and then derive a method to correct it. The method combines a simple hardware modification with subsequent software correction. The hardware modification is the introduction of two polarizers, which transform the PMD into a global complex modulation of the Jones matrix; the software correction then demodulates this global modulation. The method is validated with an experimentally obtained point spread function from a mirror sample, as well as by in vivo measurement of a human retina. PMID:25657888

  19. Correcting for batch effects in case-control microbiome studies

    PubMed Central

    Gibbons, Sean M.; Duvallet, Claire

    2018-01-01

    High-throughput data generation platforms such as mass spectrometry, microarrays, and second-generation sequencing are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure in which features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We examine how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
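
    A minimal sketch of the percentile-normalization idea described above: each taxon in each case sample is re-expressed as its percentile within the same study's control distribution before pooling across studies. Function and variable names are ours, not from the paper's code.

    ```python
    import numpy as np
    from scipy.stats import percentileofscore

    def percentile_normalize(cases, controls):
        """Convert each feature (column) of the case samples to its percentile
        within the within-study control distribution, prior to pooling.
        cases: (n_cases, n_taxa); controls: (n_controls, n_taxa)."""
        out = np.empty(cases.shape, dtype=float)
        for j in range(cases.shape[1]):
            for i in range(cases.shape[0]):
                out[i, j] = percentileofscore(controls[:, j], cases[i, j])
        return out

    rng = np.random.default_rng(0)
    cases = rng.poisson(8.0, size=(5, 3)).astype(float)
    controls = rng.poisson(5.0, size=(20, 3)).astype(float)
    print(percentile_normalize(cases, controls))
    ```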

  20. Bias correction of nutritional status estimates when reported age is used for calculating WHO indicators in children under five years of age.

    PubMed

    Quezada, Amado D; García-Guerra, Armando; Escobar, Leticia

    2016-06-01

    To assess the performance of a simple correction method for nutritional status estimates in children under five years of age when exact age is not available from the data. The proposed method was based on the assumption of symmetry of age distributions within a given month of age and validated in a large population-based survey sample of Mexican preschool children. The main distributional assumption was consistent with the data. All prevalence estimates derived from the correction method showed no statistically significant bias. In contrast, failing to correct attained age resulted in an underestimation of stunting in general and an overestimation of overweight or obesity among the youngest. The proposed method performed remarkably well in terms of bias correction of estimates and could be easily applied in situations in which either birth or interview dates are not available from the data.
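
    The correction formula is not spelled out in the abstract; the sketch below is one minimal reading of the symmetry assumption, in which a child reported as m completed months old is assigned the month midpoint before WHO z-scores are computed. The days-per-month constant is a convention, not a value from the paper.

    ```python
    AVG_DAYS_PER_MONTH = 30.4375  # calendar-average convention, assumed here

    def corrected_age_days(reported_months):
        """Under the within-month symmetry assumption, children reported as m
        completed months old are on average m + 0.5 months old; using the month
        midpoint removes the systematic downward offset of attained age."""
        return (reported_months + 0.5) * AVG_DAYS_PER_MONTH

    print(corrected_age_days(23))  # ~715 days rather than ~700
    ```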

  1. Dual-beam manually-actuated distortion-corrected imaging (DMDI) with micromotor catheters.

    PubMed

    Lee, Anthony M D; Hohert, Geoffrey; Angkiriwang, Patricia T; MacAulay, Calum; Lane, Pierre

    2017-09-04

    We present a new paradigm for performing two-dimensional scanning called dual-beam manually-actuated distortion-corrected imaging (DMDI). DMDI operates by imaging the same object with two spatially-separated beams that are being mechanically scanned rapidly in one dimension with slower manual actuation along a second dimension. Registration of common features between the two imaging channels allows remapping of the images to correct for distortions due to manual actuation. We demonstrate DMDI using a 4.7 mm OD rotationally scanning dual-beam micromotor catheter (DBMC). The DBMC requires a simple, one-time calibration of the beam paths by imaging a patterned phantom. DMDI allows for distortion correction of non-uniform axial speed and rotational motion of the DBMC. We show the utility of this technique by demonstrating en face OCT image distortion correction of a manually-scanned checkerboard phantom and fingerprint scan.

  2. Quantum error-correction failure distributions: Comparison of coherent and stochastic error models

    NASA Astrophysics Data System (ADS)

    Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.

    2017-06-01

    We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of a fault-tolerant quantum error correcting circuit for the d = 3 Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between the two error models exists. Coherent errors create very broad and heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric that can be utilized when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.

  3. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses three free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.
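
    For orientation, the sketch below shows how such chamber-specific factors enter the conversion from collected charge to air kerma. The constant W/e = 33.97 J/C is the standard value for dry air; the factor values, charge, and air mass are illustrative placeholders, not NIST results.

    ```python
    def air_kerma(charge_C, air_mass_kg, corrections, w_over_e=33.97, g=0.0):
        """Free-air chamber conversion: charge per unit air mass, times the mean
        energy per ion pair W/e (J/C) for dry air, over (1 - g) with g the
        radiative fraction, times the product of chamber correction factors."""
        k_total = 1.0
        for k in corrections.values():
            k_total *= k
        return charge_C / air_mass_kg * w_over_e / (1.0 - g) * k_total

    # Factor values and masses below are illustrative placeholders:
    K = air_kerma(2.5e-10, 1.2e-5, {
        "diaphragm": 0.9995, "scatter": 0.9970, "electron_loss": 1.0012,
        "fluorescence": 0.9990, "bremsstrahlung": 1.0002})
    print(f"{K:.6f} Gy")
    ```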

  4. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behavior of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with negative or no correction loss during follow-up; thoracic curves in group B deteriorated after spontaneous correction, with a positive correction loss. Univariate and multivariate analyses were performed to identify risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately, a spontaneous correction of 48.5%. At final follow-up it was 14°, with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in the 23 cases in group A, while 22 cases in group B showed thoracic curve progression. In the multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and a better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during follow-up, and close attention must be paid to these patients to watch for decompensation. Level of Evidence: 4 PMID:27831989

  5. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Correction factors computed with constant properties are compared against those computed with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant.

  6. Correction tool for Active Shape Model based lumbar muscle segmentation.

    PubMed

    Valenzuela, Waldo; Ferguson, Stephen J; Ignasiak, Dominika; Diserens, Gaelle; Vermathen, Peter; Boesch, Chris; Reyes, Mauricio

    2015-08-01

    In the clinical environment, the accuracy and speed of the image segmentation process play a key role in the analysis of pathological regions. Despite advances in anatomic image segmentation, time-effective correction tools are commonly needed to improve segmentation results; such tools must provide fast corrections with a low number of interactions and a user-independent solution. In this work we present a new interactive method for correcting image segmentations. Given an initial segmentation and the original image, our tool provides a 2D/3D environment that enables 3D shape correction through simple 2D interactions. Our scheme is based on direct manipulation of free-form deformation adapted to a 2D environment. This approach enables an intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated for the task of lumbar muscle segmentation from Magnetic Resonance Images. Experimental results show that a full segmentation correction could be performed within an average correction time of 6±4 minutes and an average of 68±37 interactions, while maintaining the quality of the final segmentation result at an average Dice coefficient of 0.92±0.03.

  7. Simple estimation of Förster Resonance Energy Transfer (FRET) orientation factor distribution in membranes.

    PubMed

    Loura, Luís M S

    2012-11-19

    Because of its acute sensitivity to distance in the nanometer scale, Förster resonance energy transfer (FRET) has found a large variety of applications in many fields of chemistry, physics, and biology. One important issue regarding the correct usage of FRET is its dependence on the donor-acceptor relative orientation, expressed as the orientation factor κ². Different donor/acceptor conformations can lead to κ² values in the 0 ≤ κ² ≤ 4 range. Because the characteristic distance for FRET, R0, is proportional to (κ²)^(1/6), uncertainties in the orientation factor are reflected in the quality of information that can be retrieved from a FRET experiment. In most cases, the average value of κ² corresponding to the dynamic isotropic limit (⟨κ²⟩ = 2/3) is used for computation of R0 and hence donor-acceptor distances and acceptor concentrations. However, this can lead to significant error in unfavorable cases. This issue is more critical in membrane systems, because of their intrinsically anisotropic nature and their reduced fluidity in comparison to most common solvents. Here, a simple numerical simulation method for estimation of the probability density function of κ² for membrane-embedded donor and acceptor fluorophores in the dynamic regime is presented. In the simplest form, the proposed procedure uses as input the most probable orientations of the donor and acceptor transition dipoles, obtained by experimental (including linear dichroism) or theoretical (such as molecular dynamics simulation) techniques. Optionally, information about the widths of the donor and/or acceptor angular distributions may be incorporated. The methodology is illustrated for special limiting cases and common membrane FRET pairs.
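
    As a sanity check on the quantities involved, the sketch below Monte Carlo samples donor and acceptor dipoles in the fully isotropic limit and recovers ⟨κ²⟩ = 2/3. The paper's method instead biases the sampling around the most probable membrane orientations, which this sketch does not implement.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def random_unit_vectors(n):
        v = rng.normal(size=(n, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    # Isotropic limiting case: donor and acceptor dipoles uniform on the sphere,
    # donor-acceptor separation along z.  kappa = d.a - 3*(d.r)*(a.r).
    n = 200_000
    d = random_unit_vectors(n)
    a = random_unit_vectors(n)
    r = np.array([0.0, 0.0, 1.0])

    kappa2 = (np.sum(d * a, axis=1) - 3.0 * (d @ r) * (a @ r)) ** 2
    print(kappa2.mean())  # ~2/3, the dynamic isotropic limit
    ```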

  8. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-03-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modeled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modeled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. Other factors including leaf area index (LAI), elevation and aspect have small and additive effects on improving the spatial scaling between these two resolutions.

  9. Effects of vegetation heterogeneity and surface topography on spatial scaling of net primary productivity

    NASA Astrophysics Data System (ADS)

    Chen, J. M.; Chen, X.; Ju, W.

    2013-07-01

    Due to the heterogeneous nature of the land surface, spatial scaling is an inevitable issue in the development of land models coupled with low-resolution Earth system models (ESMs) for predicting land-atmosphere interactions and carbon-climate feedbacks. In this study, a simple spatial scaling algorithm is developed to correct errors in net primary productivity (NPP) estimates made at a coarse spatial resolution based on sub-pixel information of vegetation heterogeneity and surface topography. An eco-hydrological model BEPS-TerrainLab, which considers both vegetation and topographical effects on the vertical and lateral water flows and the carbon cycle, is used to simulate NPP at 30 m and 1 km resolutions for a 5700 km2 watershed with an elevation range from 518 m to 3767 m in the Qinling Mountain, Shaanxi Province, China. Assuming that the NPP simulated at 30 m resolution represents the reality and that at 1 km resolution is subject to errors due to sub-pixel heterogeneity, a spatial scaling index (SSI) is developed to correct the coarse resolution NPP values pixel by pixel. The agreement between the NPP values at these two resolutions is improved considerably from R2 = 0.782 to R2 = 0.884 after the correction. The mean bias error (MBE) in NPP modelled at the 1 km resolution is reduced from 14.8 g C m-2 yr-1 to 4.8 g C m-2 yr-1 in comparison with NPP modelled at 30 m resolution, where the mean NPP is 668 g C m-2 yr-1. The range of spatial variations of NPP at 30 m resolution is larger than that at 1 km resolution. Land cover fraction is the most important vegetation factor to be considered in NPP spatial scaling, and slope is the most important topographical factor for NPP spatial scaling especially in mountainous areas, because of its influence on the lateral water redistribution, affecting water table, soil moisture and plant growth. Other factors including leaf area index (LAI) and elevation have small and additive effects on improving the spatial scaling between these two resolutions.

  10. Stainless hooks to bond lower lingual retainer.

    PubMed

    Durgekar, Sujala G; Nagaraj, K

    2011-01-01

    We introduce a simple and economical technique for the precise placement of lower lingual retainers. Two stainless steel hooks made of 0.6 mm wire are placed interdentally in the embrasure area between the canine and lateral incisor bilaterally to lock the retainer wire in the correct position. Etch, rinse, and dry the enamel surfaces with the retainer passively in place, then bond the retainer with light-cured adhesive. The hooks are simple to fabricate and eliminate the need for a transfer tray.

  11. Reliable Channel-Adapted Error Correction: Bacon-Shor Code Recovery from Amplitude Damping

    NASA Astrophysics Data System (ADS)

    Piedrafita, Álvaro; Renes, Joseph M.

    2017-12-01

    We construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve exact correction to a desired order in the damping rate. The first, employing one-bit teleportation and single-qubit measurements, needs only one-fourth as many physical qubits, while the second, using just stabilizer measurements and Pauli corrections, needs only half. The improvements stem from the fact that damping events need only be detected, not corrected, and that effective phase errors arising due to undamped qubits occur at a lower rate than damping errors. For error correction that is itself subject to damping noise, we show that existing fault-tolerance methods can be employed for the latter scheme, while the former can be made to avoid potential catastrophic errors and can easily cope with damping faults in ancilla qubits.

  12. Universal state-selective corrections to multireference coupled-cluster theories with single and double excitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brabec, Jiri; van Dam, Hubertus JJ; Pittner, Jiri

    2012-03-28

    The recently proposed Universal State-Selective (USS) corrections [K. Kowalski, J. Chem. Phys. 134, 194107 (2011)] to approximate Multi-Reference Coupled Cluster (MRCC) energies can be commonly applied to any type of MRCC theory based on the Jeziorski-Monkhorst [B. Jeziorski, H.J. Monkhorst, Phys. Rev. A 24, 1668 (1981)] exponential Ansatz. In this letter we report on the performance of a simple USS correction to the Brillouin-Wigner MRCC (BW-MRCC) formalism employing single and double excitations (BW-MRCCSD). It is shown that the resulting formalism (USS-BW-MRCCSD), which uses the manifold of single and double excitations to construct the correction, can be related to a posteriori corrections utilized in routine BW-MRCCSD calculations. In several benchmark calculations we compare the results of the USS-BW-MRCCSD method with results of the BW-MRCCSD approach employing a posteriori corrections and with results obtained with the Full Configuration Interaction (FCI) method.

  13. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  14. Simple Anchoring of the Penopubic Skin to the Prepubic Deep Fascia in Surgical Correction of Buried Penis

    PubMed Central

    Jung, Eun-hong; Jang, Seok-heun; Lee, Jae-won

    2011-01-01

    Purpose The aim of this study was to categorize concealed penis and buried penis by preoperative physical examination including the manual prepubic compression test and to describe a simple surgical technique to correct buried penis that was based on surgical experience and comprehension of the anatomical components. Materials and Methods From March 2007 to November 2010, 17 patients were diagnosed with buried penis after differentiation of this condition from concealed penis. The described surgical technique consisted of a minimal incision and simple fixation of the penile shaft skin and superficial fascia to the prepubic deep fascia, without degloving the penile skin. Results The mean age of the patients was 10.2 years, ranging from 8 years to 15 years. The median follow-up was 19 months (range, 5 to 49 months). The mean penile lengths were 1.8 cm (range, 1.1 to 2.5 cm) preoperatively and 4.5 cm (range, 3.3 to 5.8 cm) postoperatively. The median difference between preoperative and postoperative penile lengths was 2.7 cm (range, 2.1 to 3.9 cm). There were no serious intra- or postoperative complications. Conclusions With the simple anchoring of the penopubic skin to the prepubic deep fascia, we obtained successful subjective and objective outcomes without complications. We suggest that this is a promising surgical method for selected patients with buried penis. PMID:22195270

  15. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require the measurement of the air kerma strength of seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P_TP, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically used seeds (IsoAid ADVANTAGE™ 103Pd and Nucletron selectSeed 125I) for which empirical altitude correction factors do not yet exist in the literature when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pressurized or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P_TP, were acquired for each seed at these pressures and normalized to the reading at 'standard' pressure (1013.25 mbar). Results: Measurements in this study have shown that utilization of P_TP can overcompensate the corrected current reading by up to 20% and 17% for the IsoAid Pd-103 and the Nucletron I-125 seed, respectively. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium, respectively, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ 103Pd and Nucletron selectSeed 125I) with the HDR 1000 Plus well chamber.
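
    For reference, the standard temperature-pressure correction factor named above has a well-known closed form, sketched below; the reference conditions of 22 °C and 1013.25 mbar are a common calibration convention and an assumption here. The abstract's point is that applying this factor alone can over-correct by up to ~20% for these low-energy seeds, hence the empirical seed-specific altitude factors.

    ```python
    def p_tp(temp_C, press_mbar, t_ref=22.0, p_ref=1013.25):
        """Standard temperature-pressure correction for a vented ion chamber;
        reference conditions (22 C, 1013.25 mbar) are assumed here."""
        return ((273.15 + temp_C) / (273.15 + t_ref)) * (p_ref / press_mbar)

    # At roughly 1500 m altitude (about 845 mbar) the factor is ~1.2, yet the
    # fully P_TP-corrected reading can still be off by up to ~20% per the study:
    print(p_tp(22.0, 845.0))
    ```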

  16. Influence of Ametropia and Its Correction on Measurement of Accommodation.

    PubMed

    Bernal-Molina, Paula; Vargas-Martín, Fernando; Thibos, Larry N; López-Gil, Norberto

    2016-06-01

    Amplitude of accommodation (AA) is reportedly greater for myopic eyes than for hyperopic eyes. We investigated potential explanations for this difference. Analytical calculations and computer ray tracing were performed on two schematic eye models of axial ametropia. Using paraxial and nonparaxial approaches, AA was specified for the naked and the corrected eye using the anterior corneal surface as the reference plane. Assuming that axial myopia is due entirely to an increase in vitreous chamber depth, AA increases with the amount of myopia for two reasons that have not always been taken into account. The first is the choice of reference location for specifying refractive error and AA in diopters: when specified relative to the cornea, AA increases with the degree of myopia more than when specified relative to the eye's first Gaussian principal plane. The second factor is movement of the eye's second Gaussian principal plane toward the retina during accommodation, which has a larger dioptric effect in shorter eyes. Using the corneal plane (placed at the corneal vertex) as the reference plane for specifying accommodation, AA depends slightly on the axial length of the eye's vitreous chamber. This dependency can be reduced significantly by using a reference plane located 4 mm posterior to the corneal plane. A simple formula is provided to help clinicians and researchers obtain a value of AA that closely reflects power changes of the crystalline lens, independent of axial ametropia and its correction with lenses.
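
    The paper's "simple formula" is not reproduced in the abstract; the sketch below only illustrates the standard vergence-transfer step that re-references a dioptric value to a plane a distance d downstream, which is the operation implied by moving the reference plane 4 mm behind the cornea. Sign conventions and the example values are assumptions.

    ```python
    def transfer_vergence(F, d_m):
        """Propagate a dioptric value F to a reference plane d_m metres
        downstream (toward the retina): F' = F / (1 - d_m * F)."""
        return F / (1.0 - d_m * F)

    def amplitude_of_accommodation(far_D, near_D, d_m=0.004):
        """AA referenced to a plane d_m behind the cornea, computed from
        corneal-plane far- and near-point refractions (assumed convention)."""
        return transfer_vergence(far_D, d_m) - transfer_vergence(near_D, d_m)

    # Re-referenced AA for a corneal-plane far point of 0 D, near point of -8 D:
    print(amplitude_of_accommodation(0.0, -8.0))
    ```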

  17. Measurement of g with a pendulum not in the chaotic regime

    NASA Astrophysics Data System (ADS)

    Sigismondi, Costantino

    2017-02-01

    The measurement of the gravitational acceleration g with a pendulum is a basic experiment in Newtonian physics, but the correct choice of wire and suspended weight is needed to avoid obtaining a chaotic rather than a simple pendulum.

  18. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through analytic computation, the next-to-leading order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections are large for the form factors, which makes the branching ratios B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  19. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  20. Testing the Two-Layer Model for Correcting Clear Sky Reflectance near Clouds

    NASA Technical Reports Server (NTRS)

    Wen, Guoyong; Marshak, Alexander; Evans, Frank; Varnai, Tamas; Levy, Rob

    2015-01-01

    A two-layer model (2LM) was developed in our earlier studies to estimate the clear sky reflectance enhancement due to cloud-molecular radiative interaction for MODIS observations at 0.47 micrometers. Recently, we extended the model to include cloud-surface and cloud-aerosol radiative interactions. We use LES/SHDOM-simulated 3D "true" radiation fields to test the 2LM reflectance enhancement at 0.47 micrometers. We find that the simple model captures the viewing angle dependence of the reflectance enhancement near clouds, suggesting the physics of the model is correct; the cloud-molecular interaction alone accounts for 70 percent of the enhancement; the cloud-surface interaction accounts for 16 percent; and the cloud-aerosol interaction accounts for an additional 13 percent. We conclude that the 2LM is simple to apply and unbiased.

  1. Use of the Wigner representation in scattering problems

    NASA Technical Reports Server (NTRS)

    Bemler, E. A.

    1975-01-01

    The basic equations of quantum scattering are translated into the Wigner representation, putting quantum mechanics in the form of a stochastic process in phase space, with real-valued probability distributions and source functions. The interpretative picture associated with this representation is developed and stressed, and results used in applications published elsewhere are derived. The form of the integral equation for scattering, as well as its multiple scattering expansion in this representation, is derived. Quantum corrections to classical propagators are briefly discussed. The basic approximation used in the Monte-Carlo method is derived in a fashion which allows for future refinement and which includes bound state production. Finally, as a simple illustration of some of the formalism, scattering from a bound two-body system is treated. Simple expressions for the single and double scattering contributions to total and differential cross-sections, as well as for all necessary shadow corrections, are obtained.

  2. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    NASA Astrophysics Data System (ADS)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object's complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm, and the method is numerically stable in the presence of noise.
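
    As a sketch of how such a correction term can sit inside a Paganin-type single-material filter, the low-pass denominator below gains an extra k² term standing in for source blur. The exact published form may differ; the sigma term and its placement here are assumptions.

    ```python
    import numpy as np

    def paganin_with_blur(I, I0, dx, delta, mu, z, sigma_src=0.0):
        """Single-material (Paganin-type) phase retrieval. The 0.5*sigma^2*k^2
        term is a hypothetical stand-in for the source-blur correction; the
        published form may differ. I: flat-corrected intensity image,
        dx: pixel size (m), delta, mu: refractive index decrement and linear
        attenuation coefficient (1/m), z: propagation distance (m),
        sigma_src: effective source-blur width at the detector (m)."""
        ny, nx = I.shape
        u = np.fft.fftfreq(nx, d=dx)                      # cycles/m
        v = np.fft.fftfreq(ny, d=dx)
        k2 = (2 * np.pi) ** 2 * (u[None, :] ** 2 + v[:, None] ** 2)
        denom = 1.0 + (delta * z / mu) * k2 + 0.5 * sigma_src ** 2 * k2
        filtered = np.fft.ifft2(np.fft.fft2(I / I0) / denom).real
        return -np.log(filtered) / mu                     # projected thickness (m)
    ```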

  3. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to TMD factorization formula for Z-boson production and Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading-power TMDs by QCD equations of motion.

  4. Power corrections to TMD factorization for Z-boson production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, I.; Tarasov, A.

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to TMD factorization formula for Z-boson production and Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading-power TMDs by QCD equations of motion.

  5. Recurrent candidal intertrigo: challenges and solutions

    PubMed Central

    Metin, Ahmet; Dilek, Nursel; Bilgili, Serap Gunes

    2018-01-01

    Intertrigo is a common inflammatory dermatosis of opposing skin surfaces that can be caused by a variety of infectious agents, most notably candida, under the effect of mechanical and environmental factors. Symptoms such as pain and itching significantly decrease quality of life, leading to high morbidity. A multitude of predisposing factors, particularly obesity, diabetes mellitus, and immunosuppressive conditions, facilitate both the occurrence and recurrence of the disease. The diagnosis of candidal intertrigo is usually based on clinical appearance. However, a range of laboratory studies from simple tests to advanced methods can be carried out to confirm the diagnosis. Such tests are especially useful in treatment-resistant or recurrent cases for establishing a differential diagnosis. The first and key step of management is identification and correction of predisposing factors. Patients should be encouraged to lose weight and followed up properly after endocrinologic treatment; intestinal colonization and periorificial infections should be medically managed, especially in recurrent and resistant cases. Medical treatment of candidal intertrigo usually requires topical administration of nystatin and azole group antifungals. In this context, it is also possible to use magistral remedies safely and effectively. In case of predisposing immunosuppressive conditions or generalized infections, novel systemic agents with higher potency may be required. PMID:29713190

  6. Identification and Correction of Additive and Multiplicative Spatial Biases in Experimental High-Throughput Screening.

    PubMed

    Mazoure, Bogdan; Caraus, Iurie; Nadon, Robert; Makarenkov, Vladimir

    2018-06-01

    Data generated by high-throughput screening (HTS) technologies are prone to spatial bias. Traditionally, bias correction methods used in HTS assume either a simple additive or, more recently, a simple multiplicative spatial bias model. These models do not, however, always provide an accurate correction of measurements in wells located at the intersection of rows and columns affected by spatial bias. The measurements in these wells depend on the nature of interaction between the involved biases. Here, we propose two novel additive and two novel multiplicative spatial bias models accounting for different types of bias interactions. We describe a statistical procedure that allows for detecting and removing different types of additive and multiplicative spatial biases from multiwell plates. We show how this procedure can be applied by analyzing data generated by the four HTS technologies (homogeneous, microorganism, cell-based, and gene expression HTS), the three high-content screening (HCS) technologies (area, intensity, and cell-count HCS), and the only small-molecule microarray technology available in the ChemBank small-molecule screening database. The proposed methods are included in the AssayCorrector program, implemented in R, and available on CRAN.
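
    As a minimal illustration of additive spatial-bias removal on a multiwell plate, the sketch below uses a generic median-polish estimate of row and column effects; it is a stand-in under stated assumptions, not the authors' AssayCorrector procedure or their novel interaction models.

    ```python
    import numpy as np

    def median_polish(plate, n_iter=10):
        """Estimate additive row/column biases of a plate by iterative
        median polish (a generic additive model, not the paper's method)."""
        residual = plate.astype(float)
        row_eff = np.zeros(plate.shape[0])
        col_eff = np.zeros(plate.shape[1])
        for _ in range(n_iter):
            row_med = np.median(residual, axis=1)
            row_eff += row_med
            residual -= row_med[:, None]
            col_med = np.median(residual, axis=0)
            col_eff += col_med
            residual -= col_med[None, :]
        return residual, row_eff, col_eff

    # Toy 8x12 plate with an additive bias injected into one column
    rng = np.random.default_rng(0)
    plate = rng.normal(100, 5, (8, 12))
    plate[:, 3] += 25                      # biased column
    corrected, r, c = median_polish(plate)
    print(c.round(1))                      # column effects; column 3 stands out
    ```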

  7. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    NASA Astrophysics Data System (ADS)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas will have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. The edge-enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts, however. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge-enhancement reconstruction algorithm. The next step involves computing the antero-posterior density gradient caused by gravity and correcting for it. Motion artefacts are corrected in a third step by use of normalized averaging, thresholding and region growing. Twenty volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved separation of the two groups considerably. Our algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.
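
    The density mask step the authors build on can be sketched in a few lines. The -950 HU cutoff and the crude lung mask below are illustrative assumptions, not the paper's algorithm:

    ```python
    import numpy as np

    def emphysema_index(hu_slice, threshold=-950):
        """Simple density mask: fraction of lung voxels below a HU
        threshold (-950 HU is a common choice; the paper targets
        holes approaching -1000 HU)."""
        lung = hu_slice[hu_slice < -300]   # crude lung mask: exclude soft tissue
        return np.mean(lung < threshold)

    # Synthetic slice: mostly normal lung around -850 HU plus a small hole
    rng = np.random.default_rng(1)
    sl = rng.normal(-850, 40, (64, 64))
    sl[20:24, 20:24] = -990                # emphysematous region
    print(f"emphysema index: {emphysema_index(sl):.3f}")
    ```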

  8. Simple vertex correction improves G W band energies of bulk and two-dimensional crystals

    NASA Astrophysics Data System (ADS)

    Schmidt, Per S.; Patrick, Christopher E.; Thygesen, Kristian S.

    2017-11-01

    The GW self-energy method has long been recognized as the gold standard for quasiparticle (QP) calculations of solids, in spite of the fact that the neglect of vertex corrections and the use of a density-functional theory starting point lack rigorous justification. In this work we remedy this situation by including a simple vertex correction that is consistent with a local-density approximation starting point. We analyze the effect of the self-energy by splitting it into short-range and long-range terms, which are shown to govern, respectively, the center and size of the band gap. The vertex mainly improves the short-range correlations and therefore has a small effect on the band gap, while it shifts the band gap center up in energy by around 0.5 eV, in good agreement with experiments. Our analysis also explains how the relative importance of short- and long-range interactions in structures of different dimensionality is reflected in their QP energies. Inclusion of the vertex comes at practically no extra computational cost and even improves the basis set convergence compared to GW. Taken together, the method provides an efficient and rigorous improvement over the GW approximation.

  9. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, e.g. single-phase systems (polymer melts and solutions) and multiphase systems (polymer blend and nanocomposite, and suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by the shear rate correction applicable to highly shear-thinning fluids. Distinct corrections were observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.

  10. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces.

  11. The power to detect linkage in complex disease by means of simple LOD-score analyses.

    PubMed Central

    Greenberg, D A; Abreu, P; Hodge, S E

    1998-01-01

    Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328

  12. The power to detect linkage in complex disease by means of simple LOD-score analyses.

    PubMed

    Greenberg, D A; Abreu, P; Hodge, S E

    1998-09-01

    Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage.

  13. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  14. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  15. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  16. Synchronizing movements with the metronome: nonlinear error correction and unstable periodic orbits.

    PubMed

    Engbert, Ralf; Krampe, Ralf Th; Kurths, Jürgen; Kliegl, Reinhold

    2002-02-01

    The control of human hand movements is investigated in a simple synchronization task. We propose and analyze a stochastic model based on nonlinear error correction; a mechanism which implies the existence of unstable periodic orbits. This prediction is tested in an experiment with human subjects. We find that our experimental data are in good agreement with numerical simulations of our theoretical model. These results suggest that feedback control of the human motor systems shows nonlinear behavior.
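
    A minimal sketch of a stochastic nonlinear error-correction map of the kind the abstract describes; the tanh form and all parameter values are illustrative assumptions, not the authors' fitted model:

    ```python
    import numpy as np

    # Illustrative map for tapping asynchronies (ms):
    #   e[n+1] = e[n] - alpha * tanh(beta * e[n]) + noise
    # The slope alpha*beta = 2 at e = 0 makes the in-phase state locally
    # unstable (an unstable periodic orbit), while the saturating
    # correction keeps asynchronies bounded.
    rng = np.random.default_rng(2)
    alpha, beta, sigma = 20.0, 0.1, 5.0    # gain (ms), nonlinearity (1/ms), noise (ms)
    e = np.zeros(500)
    for n in range(499):
        e[n + 1] = e[n] - alpha * np.tanh(beta * e[n]) + rng.normal(0, sigma)
    print(f"mean |asynchrony| = {np.abs(e).mean():.1f} ms")
    ```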

  17. Feedforward operation of a lens setup for large defocus and astigmatism correction

    NASA Astrophysics Data System (ADS)

    Verstraete, Hans R. G. W.; Almasian, MItra; Pozzi, Paolo; Bilderbeek, Rolf; Kalkman, Jeroen; Faber, Dirk J.; Verhaegen, Michel

    2016-04-01

    In this manuscript, we present a lens setup for large defocus and astigmatism correction. A deformable defocus lens and two rotational cylindrical lenses are used to control the defocus and astigmatism. The setup is calibrated using a simple model that allows the calculation of the lens inputs so that a desired defocus and astigmatism are actuated on the eye. The setup is tested by determining the feedforward prediction error, imaging a resolution target, and removing introduced aberrations.

  18. Standing on the shoulders of giants: improving medical image segmentation via bias correction.

    PubMed

    Wang, Hongzhi; Das, Sandhitsu; Pluta, John; Craige, Caryne; Altinay, Murat; Avants, Brian; Weiner, Michael; Mueller, Susanne; Yushkevich, Paul

    2010-01-01

    We propose a simple strategy to improve automatic medical image segmentation. The key idea is that without deep understanding of a segmentation method, we can still improve its performance by directly calibrating its results with respect to manual segmentation. We formulate the calibration process as a bias correction problem, which is addressed by machine learning using training data. We apply this methodology on three segmentation problems/methods and show significant improvements for all of them.
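
    A hedged sketch of the idea: learn a correction from paired automatic and manual labels, here with synthetic data and a generic logistic-regression stand-in rather than the paper's features or model:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 5000
    intensity = rng.normal(0, 1, n)
    auto_label = (intensity + rng.normal(0, 0.8, n) > 0).astype(int)  # biased automatic result
    manual = (intensity > 0.2).astype(int)                            # "manual" ground truth

    # Calibrate: predict the manual label from the automatic label plus features
    X = np.column_stack([auto_label, intensity])
    clf = LogisticRegression().fit(X, manual)
    corrected = clf.predict(X)
    print("auto agreement:     ", np.mean(auto_label == manual))
    print("corrected agreement:", np.mean(corrected == manual))
    ```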

  19. The constricted ear.

    PubMed

    Paredes, Alfredo A; Williams, J Kerwin; Elsahy, Nabil I

    2002-04-01

    The constricted ear may be described best as a pursestring closure of the ear. The deformity may include lidding of the upper pole with downward folding, protrusion of the concha, decreased vertical height, and low ear position relative to the face. The goals of surgical correction should include obtaining symmetry and correcting the intra-auricular anatomy. The degree of intervention is based on the severity of the deformity and may range from simple repositioning, soft tissue rearrangement, or manipulation of the cartilage. Multiple surgical techniques are described.

  20. Estimation of Skidding Offered by Ackermann Mechanism

    NASA Astrophysics Data System (ADS)

    Rao, Are Padma; Venkatachalam, Rapur

    2016-04-01

    Steering for a four-wheeler is provided by the Ackermann mechanism. Though it cannot always satisfy the correct steering condition, it is very popular because of its simple nature. Correct steering would avoid skidding of the tires and thereby enhance their lives, as tire wear is reduced. In this paper the Ackermann mechanism is analyzed for its performance. A method of estimating the skidding due to improper steering is proposed. Two parameters are identified with which the length of skidding can be estimated.
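
    For reference, the correct (skid-free) steering condition that the Ackermann linkage only approximates can be checked numerically; the geometry values below are illustrative assumptions:

    ```python
    import numpy as np

    # Correct steering condition: cot(delta_o) - cot(delta_i) = track / wheelbase.
    track, wheelbase = 1.5, 2.6            # metres (assumed)

    def correct_outer_angle(delta_i):
        """Outer-wheel angle satisfying the correct steering condition
        for a given inner-wheel angle."""
        cot_o = 1.0 / np.tan(delta_i) + track / wheelbase
        return np.arctan(1.0 / cot_o)

    for deg in (10, 20, 30):
        d_i = np.radians(deg)
        print(f"inner {deg:2d} deg -> correct outer "
              f"{np.degrees(correct_outer_angle(d_i)):5.2f} deg")
    # A real Ackermann linkage delivers an outer angle that deviates from
    # this value; the mismatch is what drives the skidding estimate.
    ```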

  1. Learning in tele-autonomous systems using Soar

    NASA Technical Reports Server (NTRS)

    Laird, John E.; Yager, Eric S.; Tuck, Christopher M.; Hucka, Michael

    1989-01-01

    Robo-Soar is a high-level robot arm control system implemented in Soar. Robo-Soar learns to perform simple block manipulation tasks using advice from a human. Following learning, the system is able to perform similar tasks without external guidance. It can also learn to correct its knowledge, using its own problem solving in addition to outside guidance. Robo-Soar corrects its knowledge by accepting advice about relevance of features in its domain, using a unique integration of analytic and empirical learning techniques.

  2. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water to graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, kfl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths predominantly due to the contributions from alpha particles and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by kfl = 0.9964 + 0.0024·zw-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by kfl = 0.9947 + 0.0024·zw-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms as well as for soft tissues.
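
    The fitted relation for the fluence correction factor can be applied directly. The sketch below assumes z_w-eq is given in cm of water-equivalent depth and is valid only over the 60 MeV proton depths studied:

    ```python
    # k_fl = 0.9964 + 0.0024 * z_w-eq, from the abstract's fluence-based fit
    # (the dose-ratio fit uses 0.9947 instead of 0.9964).
    def k_fl(z_weq_cm):
        return 0.9964 + 0.0024 * z_weq_cm

    for z in (0.0, 1.0, 2.0, 3.0):        # 60 MeV protons range to ~3 cm in water
        print(f"z = {z:3.1f} cm  k_fl = {k_fl(z):.4f}")
    ```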

  3. Selecting informative subsets of sparse supermatrices increases the chance to find correct trees.

    PubMed

    Misof, Bernhard; Meyer, Benjamin; von Reumont, Björn Marcus; Kück, Patrick; Misof, Katharina; Meusemann, Karen

    2013-12-03

    Character matrices with extensive missing data are frequently used in phylogenomics, with potentially detrimental effects on the accuracy and robustness of tree inference. Therefore, many investigators select taxa and genes with high data coverage. A drawback of these selections is their exclusive reliance on data coverage without consideration of actual signal in the data, so they may not deliver optimal data matrices in terms of potential phylogenetic signal. In order to circumvent this problem, we have developed a heuristic, implemented in a software tool called mare, which (1) assesses the information content of genes in supermatrices using a measure of potential signal combined with data coverage and (2) reduces supermatrices with a simple hill-climbing procedure to submatrices with high total information content. We conducted simulation studies using matrices of 50 taxa × 50 genes with heterogeneous phylogenetic signal among genes and data coverage between 10-30%; on these matrices, Maximum Likelihood (ML) tree reconstructions failed to recover correct trees. Selecting a data subset with the herein proposed approach increased the chance of recovering correct partial trees more than 10-fold. The selection of data subsets with the proposed simple hill-climbing procedure performed well whether it considered the information content or just simple presence/absence information of genes. We also applied our approach to an empirical data set addressing questions of vertebrate systematics. With this empirical dataset, selecting a data subset with high information content that supported a tree with high average bootstrap support was most successful when the information content of genes was considered. Our analyses of simulated and empirical data demonstrate that sparse supermatrices can be reduced on a formal basis, outperforming the usually used simple selections of taxa and genes with high data coverage.
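
    A minimal sketch of such a hill-climbing selection, with a toy signal-times-coverage measure standing in for mare's information criterion; sizes, thresholds, and the scoring function are illustrative assumptions:

    ```python
    import numpy as np

    def info_content(taxa, genes, signal, coverage):
        """Total information of a submatrix: potential signal weighted by
        data coverage, normalized by size (a stand-in for mare's measure)."""
        sub = (signal * coverage)[np.ix_(taxa, genes)]
        return sub.sum() / np.sqrt(taxa.sum() * genes.sum())

    def hill_climb(signal, coverage, n_steps=500, seed=4):
        """Greedy hill climbing: toggle one random taxon or gene and keep
        the move only if the information content increases."""
        rng = np.random.default_rng(seed)
        taxa = np.ones(signal.shape[0], bool)
        genes = np.ones(signal.shape[1], bool)
        best = info_content(taxa, genes, signal, coverage)
        for _ in range(n_steps):
            sel = taxa if rng.random() < 0.5 else genes
            i = rng.integers(sel.size)
            sel[i] = ~sel[i]
            score = (info_content(taxa, genes, signal, coverage)
                     if sel.sum() >= 4 else -np.inf)
            if score > best:
                best = score
            else:
                sel[i] = ~sel[i]      # revert the move
        return taxa, genes, best

    rng = np.random.default_rng(0)
    signal = rng.random((50, 50))                 # per-gene signal proxy
    coverage = rng.random((50, 50)) < 0.25        # ~25% data coverage
    taxa, genes, score = hill_climb(signal, coverage)
    print(taxa.sum(), "taxa x", genes.sum(), "genes kept, score", round(score, 2))
    ```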

  4. EASY: a simple tool for simultaneously removing background, deadtime and acoustic ringing in quantitative NMR spectroscopy--part I: basic principle and applications.

    PubMed

    Jaeger, Christian; Hemmann, Felix

    2014-01-01

    Elimination of Artifacts in NMR SpectroscopY (EASY) is a simple but very effective tool to remove simultaneously any real NMR probe background signal, any spectral distortions due to deadtime ringdown effects and, specifically, severe acoustic ringing artifacts in NMR spectra of low-gamma nuclei. EASY enables and maintains quantitative NMR (qNMR) as only a single pulse (preferably 90°) is used for data acquisition. After the acquisition of the first scan (it contains the wanted NMR signal and the background/deadtime/ringing artifacts), the same experiment is repeated immediately afterwards, before the T1 waiting delay. This second scan contains only the background/deadtime/ringing parts. Hence, the simple difference of both yields clean NMR line shapes free of artifacts. In this Part I, various examples of complete (1)H, (11)B, (13)C, (19)F probe background removal due to construction parts of the NMR probes are presented. Furthermore, (25)Mg EASY of Mg(OH)2 is presented, and this example shows how extremely strong acoustic ringing can be suppressed (by more than a factor of 200) such that phase and baseline correction for spectra acquired with a single pulse is no longer a problem. EASY is also a step towards deadtime-free data acquisition, as these effects are also canceled completely. EASY can be combined with any other NMR experiment, including 2D NMR, if baseline distortions are a big problem.

  5. Human versus automation in responding to failures: an expected-value analysis

    NASA Technical Reports Server (NTRS)

    Sheridan, T. B.; Parasuraman, R.

    2000-01-01

    A simple analytical criterion is provided for deciding whether a human or automation is best for a failure detection task. The method is based on expected-value decision theory in much the same way as is signal detection. It requires specification of the probabilities of misses (false negatives) and false alarms (false positives) for both human and automation being considered, as well as factors independent of the choice--namely, costs and benefits of incorrect and correct decisions as well as the prior probability of failure. The method can also serve as a basis for comparing different modes of automation. Some limiting cases of application are discussed, as are some decision criteria other than expected value. Actual or potential applications include the design and evaluation of any system in which either humans or automation are being considered.
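
    The expected-value criterion can be written out directly; the probabilities and payoffs below are illustrative assumptions, not values from the paper:

    ```python
    # Expected-value comparison of human vs. automation for failure detection.
    def expected_value(p_fail, p_miss, p_fa, v_hit, c_miss, c_fa, v_cr):
        """Expected value given miss/false-alarm probabilities and the
        payoffs of hits, misses, false alarms, and correct rejections."""
        return (p_fail * ((1 - p_miss) * v_hit + p_miss * c_miss)
                + (1 - p_fail) * (p_fa * c_fa + (1 - p_fa) * v_cr))

    p_fail = 0.01                        # prior probability of failure
    human = expected_value(p_fail, p_miss=0.10, p_fa=0.05,
                           v_hit=100, c_miss=-1000, c_fa=-50, v_cr=0)
    auto  = expected_value(p_fail, p_miss=0.02, p_fa=0.20,
                           v_hit=100, c_miss=-1000, c_fa=-50, v_cr=0)
    print(f"human EV {human:7.2f}   automation EV {auto:7.2f}")
    print("prefer:", "automation" if auto > human else "human")
    # Here the automation's lower miss rate is outweighed by its higher
    # false-alarm rate at this prior; changing p_fail can flip the choice.
    ```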

  6. Improved Convergence and Robustness of USM3D Solutions on Mixed Element Grids (Invited)

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2015-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Scheme (HANIS), has been developed and implemented. It provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier Stokes (RANS) equations and a nonlinear control of the solution update. Two variants of the new methodology are assessed on four benchmark cases, namely, a zero-pressure gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the baseline solver technology.

  7. A two-dimensional time domain near zone to far zone transformation

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Ryan, Deirdre; Beggs, John H.; Kunz, Karl S.

    1991-01-01

    A time domain transformation useful for extrapolating three dimensional near zone finite difference time domain (FDTD) results to the far zone was presented. Here, the corresponding two dimensional transform is outlined. While the three dimensional transformation produced a physically observable far zone time domain field, this is not convenient to do directly in two dimensions, since a convolution would be required. However, a representative two dimensional far zone time domain result can be obtained directly. This result can then be transformed to the frequency domain using a Fast Fourier Transform, corrected with a simple multiplicative factor, and used, for example, to calculate the complex wideband scattering width of a target. If an actual time domain far zone result is required, it can be obtained by inverse Fourier transform of the final frequency domain result.

  8. A two-dimensional time domain near zone to far zone transformation

    NASA Technical Reports Server (NTRS)

    Luebbers, Raymond J.; Ryan, Deirdre; Beggs, John H.; Kunz, Karl S.

    1991-01-01

    In a previous paper, a time domain transformation useful for extrapolating 3-D near zone finite difference time domain (FDTD) results to the far zone was presented. In this paper, the corresponding 2-D transform is outlined. While the 3-D transformation produced a physically observable far zone time domain field, this is not convenient to do directly in 2-D, since a convolution would be required. However, a representative 2-D far zone time domain result can be obtained directly. This result can then be transformed to the frequency domain using a Fast Fourier Transform, corrected with a simple multiplicative factor, and used, for example, to calculate the complex wideband scattering width of a target. If an actual time domain far zone result is required it can be obtained by inverse Fourier transform of the final frequency domain result.

  9. Dielectric properties of lung tissue as a function of air content.

    PubMed

    Nopp, P; Rapp, E; Pfützner, H; Nakesch, H; Ruhsam, C

    1993-06-01

    Dielectric measurements were made on lung samples with different electrode systems in the frequency range 5 kHz-100 kHz. In the case of plate electrodes and spot electrodes, the effects of electrode polarization were partly corrected. An air filling factor F is defined, which is determined from the mass and volume of the sample. The results indicate that the electrical properties of lung tissue are highly dependent on the condition of the tissue. Furthermore they show that the conductivity sigma as well as the relative permittivity epsilon r decreases with increasing F. This is discussed using histological material. Using a simple theoretical model, the decrease of sigma and epsilon r is explained by the thinning of the alveolar walls as well as by the deformation of the epithelial cells and blood vessels through the expansion of the alveoli.

  10. An improved version of NCOREL: A computer program for 3-D nonlinear supersonic potential flow computations

    NASA Technical Reports Server (NTRS)

    Siclari, Michael J.

    1988-01-01

    A computer code called NCOREL (for Nonconical Relaxation) has been developed to solve for supersonic full-potential flows over complex geometries. The method first solves for the conical flow at the apex and then marches downstream in a spherical coordinate system. Implicit relaxation techniques are used to numerically solve the full-potential equation at each subsequent crossflow plane. Many improvements have been made to the original code, including more reliable numerics for computing wing-body flows with multiple embedded shocks, inlet flow-through simulation, a wake model, and entropy corrections. Line relaxation or approximate factorization schemes are optionally available. Other new features include improved internal grid generation using analytic conformal mappings, a simple geometric Harris wave-drag input originally developed for panel methods, and an internal geometry package.

  11. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I’ve calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. Dmax dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ∼(0.8–1.0) and ∼(0.9–1.01) respectively for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ∼(3–14)% to ∼(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels, < 5%. While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
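
    A hedged sketch of how such factors would enter a TG-71-style hand calculation; all numeric factors are illustrative, chosen only to lie within the ranges quoted above, not the poster's tabulated values:

    ```python
    # MU = prescribed dose / (reference output x all correction factors).
    def monitor_units(dose_cGy, output_cGy_per_MU, cone_factor,
                      geom_factor, hetero_factor):
        """TG-71-style electron MU with extra geometry (curvature) and
        heterogeneity multipliers applied as in the abstract."""
        return dose_cGy / (output_cGy_per_MU * cone_factor
                           * geom_factor * hetero_factor)

    mu_flat = monitor_units(200, 1.0, 0.98, 1.0, 1.0)     # flat-phantom hand calc
    mu_corr = monitor_units(200, 1.0, 0.98, 0.92, 0.97)   # curved, heterogeneous patient
    print(f"uncorrected {mu_flat:.0f} MU, corrected {mu_corr:.0f} MU")
    # Factors below unity raise the corrected MU, moving the hand
    # calculation toward the (higher) Monte Carlo planning-system MU.
    ```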

  12. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(') → ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(') → γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η-η' mixing.

  13. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein, S-100B, based on concomitant creatine kinase (CK) levels. The CK- S-100B relationship in non-head injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (> or = 90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S100-B can further reduce the number of unnecessary brain CT scans performed after this injury.

  14. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.

  15. A simple randomisation procedure for validating discriminant analysis: a methodological note.

    PubMed

    Wastell, D G

    1987-04-01

    Because the goal of discriminant analysis (DA) is to optimise classification, it designedly exaggerates between-group differences. This bias complicates validation of DA. Jack-knifing has been used for validation but is inappropriate when stepwise selection (SWDA) is employed. A simple randomisation test is presented which is shown to give correct decisions for SWDA. The general superiority of randomisation tests over orthodox significance tests is discussed. Current work on non-parametric methods of estimating the error rates of prediction rules is briefly reviewed.
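
    A minimal sketch of the randomisation test: the selection step is repeated inside every permutation, so the null distribution carries the same optimisation bias as the observed fit. A toy best-feature selector stands in for stepwise discriminant analysis; all data are synthetic:

    ```python
    import numpy as np

    def permutation_test(X, y, fit_score, n_perm=999, seed=5):
        """Randomisation test: refit the (selection-based) classifier on
        label-permuted data to build the null distribution of its
        apparent accuracy. fit_score(X, y) -> accuracy."""
        rng = np.random.default_rng(seed)
        observed = fit_score(X, y)
        null = [fit_score(X, rng.permutation(y)) for _ in range(n_perm)]
        p = (1 + sum(s >= observed for s in null)) / (n_perm + 1)
        return observed, p

    def fit_score(X, y):
        """Toy stand-in for stepwise DA: pick the single best median-split
        feature, which inflates apparent accuracy just as selection does."""
        accs = [max(np.mean((x > np.median(x)) == y),
                    np.mean((x <= np.median(x)) == y)) for x in X.T]
        return max(accs)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 10))
    y = rng.integers(0, 2, 60)            # labels unrelated to X
    obs, p = permutation_test(X, y, fit_score, n_perm=199)
    print(f"apparent accuracy {obs:.2f}, permutation p = {p:.3f}")
    ```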

  16. A simple technique for correction of relapsed overjet.

    PubMed

    Kakkirala, Neelima; Saxena, Ruchi

    2014-01-01

    Class III malocclusions are usually growth-related discrepancies, which often become more severe when growth is completed. Orthognathic surgery can be a part of the treatment plan, although a good number of cases can be treated non-surgically by camouflage treatment. The purpose of this report is to review the relapse tendency in patients treated non-surgically. A simple technique is described to manage one such post-treatment relapse in an adult patient who had undergone orthodontic treatment with extraction of a single lower incisor.

  17. Influence of parameter changes to stability behavior of rotors

    NASA Technical Reports Server (NTRS)

    Fritzen, C. P.; Nordmann, R.

    1982-01-01

    The occurrence of unstable vibrations in rotating machinery requires corrective measures to improve the stability behavior. A simple approximate method is presented to find the influence of parameter changes on the stability behavior. The method is based on an expansion of the eigenvalues in terms of system parameters. Influence coefficients show the effect of structural modifications. The method was first applied to simple nonconservative rotor models. It was then verified on an unsymmetric rotor of a test rig.
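
    The first-order eigenvalue expansion underlying such influence coefficients can be sketched directly; the toy system matrix and perturbation below are assumptions for illustration:

    ```python
    import numpy as np

    def eigvalue_sensitivity(A, dA):
        """First-order eigenvalue shifts of A under a parameter change dA:
        d(lambda_k) = y_k^H dA x_k / (y_k^H x_k), with x_k, y_k the right
        and left eigenvectors (the textbook expansion behind influence
        coefficients)."""
        lam, X = np.linalg.eig(A)
        Y = np.linalg.inv(X).conj().T      # left eigenvectors as columns
        dlam = np.array([Y[:, k].conj() @ dA @ X[:, k]
                         / (Y[:, k].conj() @ X[:, k]) for k in range(len(lam))])
        return lam, dlam

    # Toy rotor-like system matrix; dA models a small damping increase.
    A = np.array([[0.0, 1.0], [-4.0, -0.1]])
    dA = np.array([[0.0, 0.0], [0.0, -0.2]])
    lam, dlam = eigvalue_sensitivity(A, dA)
    for l, d in zip(lam, dlam):
        print("lambda =", np.round(l, 3), " predicted shift =", np.round(d, 3))
    # A shift of Re(lambda) toward negative values indicates improved stability.
    ```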

  18. Impulse Noise and Neurosensory Hearing Loss—Relationship to Small Arms Fire

    PubMed Central

    Keim, Robert J.

    1970-01-01

    The problems of noise are not limited to the simple annoyance of an individual. Noise can produce a permanent hearing handicap. Many everyday activities and hobbies are associated with hazardous exposure to noise. The hunter and the sport shooter are potential subjects of severe and unresolvable hearing loss. Noise-induced hearing loss develops insidiously. The means of prevention are far simpler than correction of the loss. Wearing ear protectors, plugs or earmuffs, is advisable during exposure to hazardous noise. PMID:5460217

  19. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors in determining the speed of sound obtained by freshmen in an undergraduate physics laboratory. The results are then compared with the speed of sound determined by a senior student. Both used the same instrument, namely a resonance tube with apparatus. The speed of sound obtained by the senior was 333.38 m s-1, deviating from theory by about 3.98%. For the freshmen, the results were categorised into three groups: accurate values (52.63%), middle values (31.58%) and low values (15.79%). Based on the analysis, several correction factors are suggested: human error in determining the first and second harmonics, the end correction associated with the tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
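
    For reference, the end correction mentioned above enters the resonance-tube calculation as sketched below; the readings are illustrative, and 0.3d is the usual open-end correction for a tube of diameter d:

    ```python
    # Resonance-tube speed of sound, with and without the end correction.
    def speed_two_resonances(f, L1, L2):
        """v = 2 f (L2 - L1): using two successive resonance lengths
        cancels the end correction entirely."""
        return 2 * f * (L2 - L1)

    def speed_one_resonance(f, L1, d):
        """Single resonance: v = 4 f (L1 + 0.3 d), where 0.3 d corrects
        for the antinode lying slightly outside the open end."""
        return 4 * f * (L1 + 0.3 * d)

    f, L1, L2, d = 512.0, 0.158, 0.492, 0.035   # Hz, m, m, m (assumed readings)
    print(f"two-resonance: {speed_two_resonances(f, L1, L2):.1f} m/s")
    print(f"one-resonance: {speed_one_resonance(f, L1, d):.1f} m/s")
    ```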

  20. A new approach for modeling gravitational radiation from the inspiral of two neutron stars

    NASA Astrophysics Data System (ADS)

    Luke, Stephen A.

    In this dissertation, a new method of applying the ADM formalism of general relativity to model the gravitational radiation emitted from the realistic inspiral of a neutron star binary is described. A description of the conformally flat condition (CFC) is summarized, and the ADM equations are solved by use of the CFC approach for a neutron star binary. The advantages and limitations of this approach are discussed, and the need for a more accurate extension of this approach is described. To address this need, a linearized perturbation of the CFC spatial three-metric is then introduced. The general relativistic hydrodynamic equations are then allowed to evolve against this basis under the assumption that the first-order corrections to the hydrodynamic variables are negligible compared to their CFC values. As a first approximation, the linear corrections to the conformal factor, lapse function, and shift vector are also assumed to be small compared to the extrinsic curvature and the three-metric. A boundary matching method is then introduced as a way of computing the gravitational radiation of this relativistic system without use of the multipole expansion as employed by earlier applications of the CFC approach. It is assumed that at a location far from the source, the three-metric is accurately described by a linear correction to Minkowski spacetime. The two polarizations of gravitational radiation can then be computed at that point in terms of the linearized correction to the metric. The evolution equations obtained from the linearized perturbative correction to the CFC approach and the method for recovery of the gravity wave signal are then tested by use of a three-dimensional numerical simulation. This code is used to compute the gravity wave signal emitted by a pair of equal-mass neutron stars in quasi-stable circular orbits at a point early in their inspiral phase. From this simple numerical analysis, the correct general trend of gravitational radiation is recovered. Comparisons with (5/2) post-Newtonian solutions show a similar gravitational waveform, although inaccuracies remain in this computation. Finally, several areas for improvement and potential future applications of this technique are discussed.

  1. Photobleaching correction in fluorescence microscopy images

    NASA Astrophysics Data System (ADS)

    Vicente, Nathalie B.; Diaz Zamboni, Javier E.; Adur, Javier F.; Paravani, Enrique V.; Casco, Víctor H.

    2007-11-01

    Fluorophores are used to detect molecular expression by highly specific antigen-antibody reactions in fluorescence microscopy techniques. A portion of the fluorophore emits fluorescence when irradiated with electromagnetic waves of particular wavelengths, enabling its detection. Photobleaching irreversibly destroys fluorophores stimulated by radiation within the excitation spectrum, thus eliminating potentially useful information. Since this process may not be completely prevented, techniques have been developed to slow it down or to correct resulting alterations (mainly, the decrease in fluorescent signal). In the present work, the correction by photobleaching curve was studied using E-cadherin (a cell-cell adhesion molecule) expression in Bufo arenarum embryos. Significant improvements were observed when applying this simple, inexpensive and fast technique.
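
    A common form of photobleaching-curve correction fits a single exponential to the mean frame intensity and divides it out; the sketch below uses a synthetic stack, and the paper's exact procedure may differ:

    ```python
    import numpy as np

    def bleach_correct(stack, times):
        """Fit log(mean intensity) vs. time to a line (single-exponential
        decay) and divide each frame by the fitted, normalized curve."""
        mean_i = stack.reshape(len(times), -1).mean(axis=1)
        slope, intercept = np.polyfit(times, np.log(mean_i), 1)
        decay = np.exp(intercept + slope * times)
        return stack / (decay / decay[0])[:, None, None]

    # Synthetic time-lapse: fixed scene fading with tau = 12 frames + noise
    rng = np.random.default_rng(6)
    t = np.arange(20, dtype=float)
    truth = rng.uniform(50, 200, (32, 32))
    stack = truth[None] * np.exp(-t / 12.0)[:, None, None] \
            + rng.normal(0, 1, (20, 32, 32))
    corrected = bleach_correct(stack, t)
    print("frame-0 vs frame-19 mean after correction:",
          corrected[0].mean().round(1), corrected[-1].mean().round(1))
    ```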

  2. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
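
    A minimal sketch of how an RCF enters the four-probe evaluation: π/ln 2 is the standard infinite-sheet limit, finite samples need the geometry-dependent RCF the paper verifies, and the measured values below are illustrative:

    ```python
    import math

    def sheet_resistance(V, I, rcf=math.pi / math.log(2)):
        """Rs = RCF * V / I (ohms per square); the default RCF holds for
        a sample much larger than the probe spacing."""
        return rcf * V / I

    V, I = 2.6e-3, 1.0e-3            # measured volts, amps (assumed)
    Rs = sheet_resistance(V, I)
    print(f"Rs  = {Rs:.2f} ohm/sq")
    # Resistivity of a film of thickness t: rho = Rs * t
    print(f"rho = {Rs * 200e-9:.3e} ohm*m  (t = 200 nm, assumed)")
    ```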

  3. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω(f(clin), f(msr))(Q(clin), Q(msr)) with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k(f(clin), f(msr))(Q(clin), Q(msr)). The aim of this study is the determination of the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water to air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f(clin), f(msr))(Q(clin), Q(msr)) is of the order of 1.2 at a field size of 1 × 1 cm(2) for the large volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW 60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm(2), but below this field size there is a steep decrease of k(f(clin), f(msr))(Q(clin), Q(msr)) below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm(2), i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) is closest to unity in small fields and mainly independent of the electron beam spot size.
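
    Numerically, the formalism reduces to multiplying the detector reading ratio by the correction factor to obtain the field factor. The sketch below uses illustrative numbers chosen only to match the order of magnitude quoted in the abstract, not the paper's values:

    ```python
    # Omega(f_clin, f_msr) = (M_clin / M_msr) * k(f_clin, f_msr)(Q_clin, Q_msr)
    def field_factor(M_clin, M_msr, k_corr):
        """Field factor from detector readings in the clinical (small)
        and machine-specific reference fields, corrected by k."""
        return (M_clin / M_msr) * k_corr

    # Illustrative readings for a 1x1 cm2 field vs. the 10x10 cm2 reference
    print("Omega (unshielded diode, k ~ 0.99):", field_factor(0.62, 1.00, 0.99))
    print("Omega (large chamber,    k ~ 1.20):", field_factor(0.52, 1.00, 1.20))
    # The chamber's volume averaging depresses M_clin, which the large
    # k factor (order 1.2 at 1x1 cm2, per the abstract) must undo.
    ```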

  4. Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing.

    PubMed

    Hayashi, Masahito; Morimae, Tomoyuki

    2015-11-27

    We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.
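
    A minimal numerical illustration of the stabilizer check at the heart of the protocol, on a 3-qubit path-graph state: a brute-force state-vector verification that each stabilizer K_i = X_i (prod of Z on neighbors) fixes the honest state, not an implementation of the protocol itself:

    ```python
    import numpy as np
    from functools import reduce

    I = np.eye(2)
    X = np.array([[0, 1], [1, 0]])
    Z = np.diag([1.0, -1.0])

    def kron_all(ops):
        return reduce(np.kron, ops)

    # Path graph 0-1-2: |G> = CZ_01 CZ_12 |+++>
    n, edges = 3, [(0, 1), (1, 2)]
    state = np.ones(2 ** n) / np.sqrt(2 ** n)
    for (a, b) in edges:
        diag = np.ones(2 ** n)
        for idx in range(2 ** n):
            bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
            if bits[a] and bits[b]:
                diag[idx] = -1
        state = diag * state               # apply CZ on edge (a, b)

    # Honest graph state satisfies K_i |G> = |G> for every vertex i
    for i in range(n):
        ops = [I] * n
        ops[i] = X
        for (a, b) in edges:
            if i == a: ops[b] = Z @ ops[b]
            if i == b: ops[a] = Z @ ops[a]
        K = kron_all(ops)
        print(f"K_{i} |G> == |G>:", np.allclose(K @ state, state))
    ```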

  5. Reply to "Comment on `Simple improvements to classical bubble nucleation models'"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kyoko K.; Tanaka, Hidekazu; Angélil, Raymond; Diemand, Jürg

    2016-08-01

    We reply to the Comment by Schmelzer and Baidakov [Phys. Rev. E 94, 026801 (2016); doi:10.1103/PhysRevE.94.026801]. They suggest that a more modern approach than the classic description by Tolman is necessary to model the surface tension of curved interfaces. Therefore we now consider the higher-order Helfrich correction, rather than the simpler first-order Tolman correction. Using a recent parametrization of the Helfrich correction provided by Wilhelmsen et al. [J. Chem. Phys. 142, 064706 (2015); doi:10.1063/1.4907588], we test this description against measurements from our simulations, and find an agreement stronger than what the pure Tolman description offers. Our analyses suggest a necessary correction of order higher than the second for small bubbles with radius ≲1 nm. In addition, we respond to other minor criticism about our results.

  6. Verifiable Measurement-Only Blind Quantum Computing with Stabilizer Testing

    NASA Astrophysics Data System (ADS)

    Hayashi, Masahito; Morimae, Tomoyuki

    2015-11-01

    We introduce a simple protocol for verifiable measurement-only blind quantum computing. Alice, a client, can perform only single-qubit measurements, whereas Bob, a server, can generate and store entangled many-qubit states. Bob generates copies of a graph state, which is a universal resource state for measurement-based quantum computing, and sends Alice each qubit of them one by one. Alice adaptively measures each qubit according to her program. If Bob is honest, he generates the correct graph state, and, therefore, Alice can obtain the correct computation result. Regarding the security, whatever Bob does, Bob cannot get any information about Alice's computation because of the no-signaling principle. Furthermore, malicious Bob does not necessarily send the copies of the correct graph state, but Alice can check the correctness of Bob's state by directly verifying the stabilizers of some copies.

  7. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10^-10 m/cycle).

  8. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when this factor is considered. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  9. Airspace Technology Demonstration 3 (ATD-3): Dynamic Weather Routes (DWR) Technology Transfer Document Summary Version 1.0

    NASA Technical Reports Server (NTRS)

    Sheth, Kapil; Wang, Easter Mayan Chan

    2016-01-01

    Airspace Technology Demonstration #3 (ATD-3) is part of NASA's Airspace Operations and Safety Program (AOSP) - specifically, its Airspace Technology Demonstrations (ATD) Project. ATD-3 is a multiyear research and development effort which proposes to develop and demonstrate automation technologies and operating concepts that enable air navigation service providers and airspace users to continuously assess weather, winds, traffic, and other information to identify, evaluate, and implement workable opportunities for flight plan route corrections that can result in significant flight time and fuel savings in en route airspace. In order to ensure that the products of this tech-transfer are relevant and useful, NASA has created strong partnerships with the FAA and key industry stakeholders. This summary document and accompanying technology artifacts satisfy the first of three Research Transition Products (RTPs) defined in the Applied Traffic Flow Management (ATFM) Research Transition Team (RTT) Plan. This transfer consists of NASA's legacy Dynamic Weather Routes (DWR) work for efficient routing for en-route weather avoidance. DWR is a ground-based trajectory automation system that continuously and automatically analyzes active airborne aircraft in en route airspace to identify opportunities for simple corrections to flight plan routes that can save significant flying time, at least five minutes wind-corrected, while avoiding weather and considering traffic conflicts, airspace sector congestion, special use airspace, and FAA routing restrictions. The key benefit of the DWR concept is to let automation continuously and automatically analyze active flights to find those where simple route corrections can save significant time and fuel. Operators are busy during weather events. It is more effective to let automation find the opportunities for high-value route corrections.

  10. Automation in the graphic arts

    NASA Astrophysics Data System (ADS)

    Truszkowski, Walt

    1995-04-01

    The CHIMES (Computer-Human Interaction Models) tool was designed to help solve a simply stated but important problem, i.e., the problem of generating a user interface to a system that complies with established human factors standards and guidelines. Though designed for use in a fairly restricted user domain, i.e., spacecraft mission operations, the CHIMES system is essentially domain independent and applicable wherever graphical user interfaces or displays are encountered. The CHIMES philosophy and operating strategy are quite simple. Instead of requiring a human designer to actively maintain in his or her head the now encyclopedic knowledge that human factors and user interface specialists have evolved, CHIMES incorporates this information in its knowledge bases. When directed to evaluate a design, CHIMES determines and accesses the appropriate knowledge, performs an evaluation of the design against that information, determines whether the design is compliant with the selected guidelines, and suggests corrective actions if deviations from the guidelines are discovered. This paper provides an overview of the capabilities of the current CHIMES tool and discusses the potential integration of CHIMES-like technology in automated graphic arts systems.

  11. Perturbation theory of structure in classical liquid mixtures: Application to metallic systems near phase separation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Henderson, R. L.

    1974-01-01

    The partial structure factors of classical simple liquid mixtures near phase separation are discussed. The theory is developed for particles interacting through pair potentials, and is thus appropriate both to insulating fluids and to metallic systems, if these may be described by an effective ion-ion pair interaction. The motivation arose from consideration of metallic liquid mixtures, in which resistive anomalies have been observed near phase separation. A mean field theory correction, appropriate to a pair potential, for the effects of correlated motions in the reference fluid is studied. The work is cast in terms of functions which are closely related to the direct correlation functions of Ornstein and Zernike. The results are qualitatively in accord with physical expectations. Quantitative agreement with experiment seems to turn on the selection of the hard-core reference potential in terms of the metallic effective pair potential. It is suggested that the present effective pair potentials are perhaps not properly used to calculate the metallic structure factors at long wavelength.

  12. Convoluted Quasi Sturmian basis for the two-electron continuum

    NASA Astrophysics Data System (ADS)

    Ancarani, Lorenzo Ugo; Zaytsev, A. S.; Zaytsev, S. A.

    2016-09-01

    In the construction of solutions for the Coulomb three-body scattering problem one encounters a series of mathematical and numerical difficulties, one of which is the cumbersome boundary conditions the wave function should obey. We propose to describe a Coulomb three-body continuum with a set of two-particle functions, named Convoluted Quasi Sturmian (CQS) functions. They are built using the recently introduced Quasi Sturmian (QS) functions, which have the merit of possessing a closed form. Unlike a simple product of two one-particle functions, the CQS functions by construction look asymptotically like a six-dimensional outgoing spherical wave. The proposed CQS basis is tested through the study of the double ionization of helium by high-energy electron impact in the framework of the Temkin-Poet model. An adequate logarithmic-like phase factor is further included in order to take into account the Coulomb interelectronic interaction and formally build the correct asymptotic behavior when all interparticle distances are large. With such a phase factor (which can easily be extended to take into account higher partial waves), rapid convergence of the expansion can be obtained.

  13. Lymph node size as a simple prognostic factor in node negative colon cancer and an alternative thesis to stage migration.

    PubMed

    Märkl, Bruno; Schaller, Tina; Kokot, Yuriy; Endhardt, Katharina; Kretsinger, Hallie; Hirschbühl, Klaus; Aumann, Georg; Schenkirsch, Gerhard

    2016-10-01

    Stage migration is an accepted explanation for the association between lymph node (LN) yield and outcome in colon cancer. To investigate whether the alternative thesis of immune response is more likely, we performed a retrospective study. We enrolled 239 cases of node negative cancers, which were categorized according to the number of LNs with diameters larger than 5 mm (LN5) into the groups LN5-very low (0 to 1 LN5), LN5-low (2 to 5 LN5), and LN5-high (≥6 LN5). Significant differences were found in pT3/4 cancers with median survival times of 40, 57, and 71 months (P = .022) in the LN5-very low, LN5-low, and LN5-high groups, respectively. Multivariable analysis revealed that LN5 number and infiltration type were independent prognostic factors. LN size is prognostic in node negative colon cancer. The correct explanation for outcome differences associated with LN harvest is probably the activation status of LNs. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A case report on the remodelling technique for the earlobe using a soft splint.

    PubMed

    Vaiude, Partha N; Anthony, Edwin T; Syed, Mobin; Ilyas, Syed

    2008-01-01

    Correcting earlobe deformities often presents an aesthetic challenge to the surgeon. The described technique presents a simple, accurate and cost effective method of remodelling soft tissue defects of the earlobe using a soft splint.

  15. Binary phase lock loops for simplified OMEGA receivers

    NASA Technical Reports Server (NTRS)

    Burhans, R. W.

    1974-01-01

    A sampled binary phase lock loop is proposed for periodically correcting OMEGA receiver internal clocks. The circuit is particularly simple to implement and provides a means of generating long-range 3.4 kHz difference-frequency lanes from simultaneous pair measurements.
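
    A toy sketch of such a sampled binary ("bang-bang") loop, with illustrative parameters not taken from the report, shows why the circuit is simple: only the sign of the phase error is used, so the correction logic reduces to a fixed step up or down:

      # toy sampled binary phase-lock loop (all numbers illustrative)
      ref_phase = 0.25     # incoming reference phase, in cycles
      local = 0.40         # local clock phase estimate, in cycles
      step = 0.001         # fixed correction per sample, in cycles

      for _ in range(500):
          err = (ref_phase - local + 0.5) % 1.0 - 0.5   # wrap error to [-0.5, 0.5)
          local += step if err > 0 else -step           # sign-only ("binary") update

      print(f"locked phase estimate: {local:.3f} cycles")   # ~0.25, dithering +/- step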

  16. Completeness relations for Maass Laplacians and heat kernels on the super Poincaré upper half-plane

    NASA Astrophysics Data System (ADS)

    Oshima, Kazuto

    1990-12-01

    Simple completeness relations are proposed for Maass Laplacians. With the help of these completeness relations, correct heat kernels of (super) Maass Laplacians are derived on the (super) Poincaré upper half-plane.

  17. Evaluation and parameterization of ATCOR3 topographic correction method for forest cover mapping in mountain areas

    NASA Astrophysics Data System (ADS)

    Balthazar, Vincent; Vanacker, Veerle; Lambin, Eric F.

    2012-08-01

    A topographic correction of optical remote sensing data is necessary to improve the quality of quantitative forest cover change analyses in mountainous terrain. The implementation of semi-empirical correction methods requires the calibration of model parameters that are empirically defined. This study develops a method to improve the performance of topographic corrections for forest cover change detection in mountainous terrain through iterative tuning of model parameters based on a systematic evaluation of the performance of the correction. The latter was based on: (i) the general matching of reflectances between sunlit and shaded slopes and (ii) the occurrence of abnormal reflectance values, qualified as statistical outliers, in very low illuminated areas. The method was tested on Landsat ETM+ data for rough (Ecuadorian Andes) and very rough mountainous terrain (Bhutan Himalayas). Compared to a reference level (no topographic correction), the ATCOR3 semi-empirical correction method resulted in a considerable reduction of dissimilarities between reflectance values of forested sites in different topographic orientations. Our results indicate that optimal parameter combinations depend on the site, sun elevation and azimuth, and spectral conditions. We demonstrate that the results of relatively simple topographic correction methods can be greatly improved through a feedback loop between parameter tuning and evaluation of the performance of the correction model.
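
    ATCOR3's actual formulation is more involved, but the family of semi-empirical corrections it belongs to, and the kind of parameter the tuning loop adjusts, can be sketched with the classic C-correction (an illustration, not the ATCOR3 code; the evaluation criterion mirrors point (i) above):

      import numpy as np

      def c_correction(refl, cos_i, cos_sz, c):
          # semi-empirical C-correction: refl * (cos(sun zenith) + c) / (cos(illum) + c)
          return refl * (cos_sz + c) / (cos_i + c)

      rng = np.random.default_rng(0)
      cos_sz = np.cos(np.radians(40.0))          # scene sun zenith angle
      cos_i = rng.uniform(0.1, 1.0, 10_000)      # per-pixel local illumination
      refl = 0.12 * cos_i / cos_sz * rng.normal(1.0, 0.05, cos_i.size)  # toy forest band

      # tune c so that corrected reflectance no longer correlates with illumination,
      # i.e. sunlit and shaded slopes match
      cs = np.linspace(0.0, 1.0, 21)
      scores = [abs(np.corrcoef(c_correction(refl, cos_i, cos_sz, c), cos_i)[0, 1])
                for c in cs]
      print(f"best c = {cs[int(np.argmin(scores))]:.2f}")
      # (for this purely Lambertian toy scene the optimum sits near c = 0;
      #  real scenes with diffuse irradiance favour c > 0)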

  18. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution, combined with the spatial responsivity data of the sphere, is used for calculating the spatial non-uniformity correction for the lamp. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to 0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
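
    The combination step can be written compactly. A plausible discrete form (not necessarily the paper's exact formulation) weights the sphere's relative spatial responsivity by each lamp's angular intensity distribution and takes the ratio against the reference lamp:

      import numpy as np

      def spatial_correction_factor(I_test, I_ref, R):
          # responsivity-weighted means of the two distributions; their ratio
          # corrects the test lamp's reading relative to the reference lamp
          w_test = np.sum(I_test * R) / np.sum(I_test)
          w_ref = np.sum(I_ref * R) / np.sum(I_ref)
          return w_ref / w_test

      theta = np.linspace(0.0, np.pi, 181)        # polar angle grid
      w = np.sin(theta)                           # solid-angle weight (azimuthal symmetry)
      I_led = np.exp(-(theta / 0.6) ** 2) * w     # narrow-beam LED lamp (toy)
      I_iso = np.ones_like(theta) * w             # near-isotropic reference lamp
      R = 1.0 + 0.02 * np.cos(theta)              # 2% spatial responsivity variation (toy)
      print(f"correction factor = {spatial_correction_factor(I_led, I_iso, R):.4f}")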

  19. Student understanding of the Boltzmann factor

    NASA Astrophysics Data System (ADS)

    Smith, Trevor I.; Mountcastle, Donald B.; Thompson, John R.

    2015-12-01

    [This paper is part of the Focused Collection on Upper Division Physics Courses.] We present results of our investigation into student understanding of the physical significance and utility of the Boltzmann factor in several simple models. We identify various justifications, both correct and incorrect, that students use when answering written questions that require application of the Boltzmann factor. Results from written data as well as teaching interviews suggest that many students can neither recognize situations in which the Boltzmann factor is applicable nor articulate the physical significance of the Boltzmann factor as an expression for multiplicity, a fundamental quantity of statistical mechanics. The specific student difficulties seen in the written data led us to develop a guided-inquiry tutorial activity, centered around the derivation of the Boltzmann factor, for use in undergraduate statistical mechanics courses. We report on the development process of our tutorial, including data from teaching interviews and classroom observations of student discussions about the Boltzmann factor and its derivation during the tutorial development process. This additional information informed modifications that improved students' abilities to complete the tutorial during the allowed class time without sacrificing the effectiveness as we have measured it. These data also show an increase in students' appreciation of the origin and significance of the Boltzmann factor during the student discussions. Our findings provide evidence that working in groups to better understand the physical origins of the canonical probability distribution helps students gain a better understanding of when the Boltzmann factor is applicable and how to use it appropriately in answering relevant questions.
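
    The physical content students are asked to master is compact enough to state as a worked example: in the canonical ensemble, state probabilities are proportional to exp(-E/kT). A two-level illustration:

      import numpy as np

      k_B = 8.617e-5             # Boltzmann constant [eV/K]
      E = np.array([0.0, 0.1])   # two-level system: energies [eV]
      T = 300.0                  # temperature [K]

      weights = np.exp(-E / (k_B * T))   # Boltzmann factors
      p = weights / weights.sum()        # canonical probabilities
      print(f"ground: {p[0]:.4f}   excited: {p[1]:.4f}")   # excited state ~2% occupied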

  20. Glassy behaviour in simple kinetically constrained models: topological networks, lattice analogues and annihilation-diffusion

    NASA Astrophysics Data System (ADS)

    Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.

    2002-02-01

    We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the constrained dynamics, and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory; the comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation: we find no breakdown of the fluctuation-dissipation theorem (FDT) when the correct normalization is chosen.

  1. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

    The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measurement of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).

  2. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  3. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  4. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
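
    The interleaving itself reduces to a cyclic index assignment; a schematic sketch (the GA is elided behind a hypothetical ga_optimize callable, since the paper's optimizer internals are not reproduced here):

      import numpy as np

      def interleaved_groups(n_segments, n_groups):
          # group g gets segments g, g + n_groups, g + 2*n_groups, ...
          return [np.arange(g, n_segments, n_groups) for g in range(n_groups)]

      def isc_optimize(n_segments, n_groups, ga_optimize):
          # optimize each interleaved group in turn; ga_optimize(indices, mask)
          # is a hypothetical GA routine returning phases for those indices
          mask = np.zeros(n_segments)               # phase per SLM segment
          for idx in interleaved_groups(n_segments, n_groups):
              mask[idx] = ga_optimize(idx, mask)    # sequential group optimization
          return mask                               # combined correction phase mask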

  5. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents a numerical investigation performed to calculate the correction factor for Pitot tubes. Purely viscous non-Newtonian fluids obeying the power-law constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds number.
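
    Operationally the correction enters the usual Bernoulli velocity estimate as a multiplicative factor; a sketch with illustrative numbers (the paper supplies the factor via its neural-network correlation, which is not reproduced here):

      import math

      def pitot_velocity(delta_p, rho, C=1.0):
          # Bernoulli estimate u = C * sqrt(2*dp/rho); C depends on power-law
          # index, Reynolds number and impact/static tube spacing (per the paper)
          return C * math.sqrt(2.0 * delta_p / rho)

      # illustrative values only
      print(f"u = {pitot_velocity(delta_p=120.0, rho=1000.0, C=0.93):.3f} m/s")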

  6. Signals from the ventrolateral thalamus to the motor cortex during locomotion

    PubMed Central

    Marlinski, Vladimir; Nilaweera, Wijitha U.; Zelenin, Pavel V.; Sirota, Mikhail G.

    2012-01-01

    The activity of the motor cortex during locomotion is profoundly modulated in the rhythm of strides. The source of modulation is not known. In this study we examined the activity of one of the major sources of afferent input to the motor cortex, the ventrolateral thalamus (VL). Experiments were conducted in chronically implanted cats with an extracellular single-neuron recording technique. VL neurons projecting to the motor cortex were identified by antidromic responses. During locomotion, the activity of 92% of neurons was modulated in the rhythm of strides; 67% of cells discharged one activity burst per stride, a pattern typical for the motor cortex. The characteristics of these discharges in most VL neurons appeared to be well suited to contribute to the locomotion-related activity of the motor cortex. In addition to simple locomotion, we examined VL activity during walking on a horizontal ladder, a task that requires vision for correct foot placement. Upon transition from simple to ladder locomotion, the activity of most VL neurons exhibited the same changes that have been reported for the motor cortex, i.e., an increase in the strength of stride-related modulation and shortening of the discharge duration. Five modes of integration of simple and ladder locomotion-related information were recognized in the VL. We suggest that, in addition to contributing to the locomotion-related activity in the motor cortex during simple locomotion, the VL integrates and transmits signals needed for correct foot placement on a complex terrain to the motor cortex. PMID:21994259

  7. Ultra-high resolution electron microscopy

    DOE PAGES

    Oxley, Mark P.; Lupini, Andrew R.; Pennycook, Stephen J.

    2016-12-23

    The last two decades have seen dramatic advances in the resolution of the electron microscope brought about by the successful correction of lens aberrations that previously limited resolution for most of its history. Here we briefly review these advances, the achievement of sub-Ångstrom resolution and the ability to identify individual atoms, their bonding configurations and even their dynamics and diffusion pathways. We then present a review of the basic physics of electron scattering, lens aberrations and their correction, and an approximate imaging theory for thin crystals which provides physical insight into the various different imaging modes. Then we proceed to describe a more exact imaging theory starting from Yoshioka's formulation and covering full image simulation methods using Bloch waves, the multislice formulation and the frozen phonon/quantum excitation of phonons models. Delocalization of inelastic scattering has become an important limiting factor at atomic resolution. We therefore discuss this issue extensively, showing how the full-width-half-maximum is the appropriate measure for predicting image contrast, but the diameter containing 50% of the excitation is an important measure of the range of the interaction. These two measures can differ by a factor of 5, are not a simple function of binding energy, and full image simulations are required to match to experiment. The Z-dependence of annular dark field images is also discussed extensively, both for single atoms and for crystals, and we show that temporal incoherence must be included accurately if atomic species are to be identified through matching experimental intensities to simulations. Finally we mention a few promising directions for future investigation.

  8. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in-vivo dosimetry. Despite their popularity, an appropriate implementation of an in-vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, the dose at the reference depth of maximum dose, such as dmax for the 10×10 cm² field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field size, target-to-surface distance (TSD), and accessories (such as wedges and compensating filters). In some clinical practices, however, no correction factor is used; in this case, a diode-based in-vivo dosimetry program may not effectively serve its purpose, namely to provide an overall check of the dosimetry procedure. In this paper, we provide a formula relating the diode readout to the entrance dose. Correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n-type) and the newly available QED (p-type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage in silicon diodes: radiation damage has been shown to be more serious for n-type than for p-type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of the corrections for QED (p-type) diodes are smaller than for ISORAD detectors. PACS number(s): 87.66.-a, 87.52.-g
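
    The structure of such a formula (schematic form only; the paper's notation and measured factor values are not reproduced here) is a calibration factor multiplied by a chain of condition-specific corrections, each equal to 1 at the reference condition:

      def entrance_dose(reading, f_cal, cf_tsd=1.0, cf_field=1.0, cf_wedge=1.0):
          # diode readout -> entrance dose via a calibration factor and
          # multiplicative corrections for TSD, field size and wedge
          return reading * f_cal * cf_tsd * cf_field * cf_wedge

      # illustrative values only
      dose = entrance_dose(98.2, 1.02, cf_tsd=1.01, cf_field=0.99, cf_wedge=1.03)
      print(f"entrance dose = {dose:.2f} cGy")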

  9. Universal relations for range corrections to Efimov features

    DOE PAGES

    Ji, Chen; Braaten, Eric; Phillips, Daniel R.; ...

    2015-09-09

    In a three-body system of identical bosons interacting through a large S-wave scattering length a, there are several sets of features related to the Efimov effect that are characterized by discrete scale invariance. Effective field theory was recently used to derive universal relations between these Efimov features that include the first-order correction due to a nonzero effective range r_s. We reveal a simple pattern in these range corrections that had not been previously identified. The pattern is explained by the renormalization group for the effective field theory, which implies that the Efimov three-body parameter runs logarithmically with the momentum scale at a rate proportional to r_s/a. The running Efimov parameter also explains the empirical observation that range corrections can be largely taken into account by shifting the Efimov parameter by an adjustable parameter divided by a. Furthermore, the accuracy of universal relations that include first-order range corrections is verified by comparing them with various theoretical calculations using models with nonzero range.

  10. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial-interpolation and internal-standardization drift corrections on the statistical inter-measurement dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a range-based statistical merit function ω_m which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory- and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ±3 in the fifth significant figure could be routinely and reliably detected for 65Cu/63Cu and 61Ni/62Ni. One of the internal-standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves, not by two linearly correlated quantities, which is the usual interpretation of these graphs. The success of this particular internal-standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric curve effect.
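
    One common realization of the polynomial-interpolation correction is to bracket samples with standard measurements, fit the standard's apparent ratio against time, and divide it out; a generic sketch (not the authors' five optimized correction factors):

      import numpy as np

      def drift_corrected(sample_t, sample_ratio, std_t, std_ratio, deg=2):
          # fit the standard's measured ratio vs. time, then divide each
          # sample ratio by the interpolated instrumental drift (normalized
          # to the standard's mean so only the time variation is removed)
          coeffs = np.polyfit(std_t, std_ratio, deg)
          drift = np.polyval(coeffs, sample_t) / np.mean(std_ratio)
          return sample_ratio / drift

      std_t = np.array([0.0, 1.0, 2.0, 3.0])                 # hours
      std_ratio = np.array([1.000, 1.002, 1.003, 1.002])     # drifting standard
      print(drift_corrected(np.array([0.5, 2.5]),
                            np.array([0.8210, 0.8230]), std_t, std_ratio))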

  11. Method of absorbance correction in a spectroscopic heating value sensor

    DOEpatents

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
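
    A plausible reading of the mechanism (an assumption; the record above gives no equations) is a baseline subtraction in absorbance units, with the non-absorbing wavelength supplying the instrumental offset:

      import math

      def corrected_absorbance(I_ref, I_sample, I_ref_na, I_sample_na):
          A_meas = -math.log10(I_sample / I_ref)          # analytical wavelength
          A_offset = -math.log10(I_sample_na / I_ref_na)  # non-absorbing wavelength:
          return A_meas - A_offset                        # scattering/instrument only

      # illustrative intensities only
      print(f"A = {corrected_absorbance(1.00, 0.42, 1.00, 0.95):.3f}")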

  12. New Correction Factors Based on Seasonal Variability of Outdoor Temperature for Estimating Annual Radon Concentrations in UK.

    PubMed

    Daraktchieva, Z

    2017-06-01

    Indoor radon concentrations generally vary with season. Radon gas enters buildings from beneath owing to a small air pressure difference between the inside of a house and outdoors. This underpressure, which draws soil gas including radon into the house, depends on the difference between the indoor and outdoor temperatures. The variation in a typical house in the UK showed that the mean indoor radon concentration reaches a maximum in January and a minimum in July. Sine functions were used to model the indoor radon data and monthly average outdoor temperatures covering the period between 2005 and 2014. The analysis showed a strong negative correlation between the modelled indoor radon data and outdoor temperature. This correlation was used to calculate new correction factors for estimating the annual radon concentration in UK homes. A comparison between the results obtained with the new correction factors and the previously published correction factors showed that the new factors perform consistently better on the selected data sets. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
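
    The sine-modelling step is straightforward to reproduce in outline: fit a 12-month sinusoid to monthly means, then form each month's correction factor as the annual mean divided by the modelled monthly level (an illustrative construction with toy data; the published factors come from the radon-temperature correlation):

      import numpy as np

      months = np.arange(1, 13)
      radon = np.array([62, 58, 52, 45, 40, 36, 34, 37, 43, 50, 57, 61], float)  # toy Bq/m^3

      # least-squares fit of radon(m) = mean + a*sin(2*pi*m/12) + b*cos(2*pi*m/12)
      w = 2.0 * np.pi * months / 12.0
      A = np.column_stack([np.ones_like(w), np.sin(w), np.cos(w)])
      params = np.linalg.lstsq(A, radon, rcond=None)[0]
      model = A @ params

      correction = params[0] / model        # scale a one-month result to an annual mean
      print(np.round(correction, 2))        # >1 in summer (low radon), <1 in winter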

  13. Calibrating the ECCO ocean general circulation model using Green's functions

    NASA Technical Reports Server (NTRS)

    Menemenlis, D.; Fu, L. L.; Lee, T.; Fukumori, I.

    2002-01-01

    Green's functions provide a simple, yet effective, method to test and calibrate General-Circulation-Model(GCM) parameterizations, to study and quantify model and data errors, to correct model biases and trends, and to blend estimates from different solutions and data products.
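
    The core of the method fits in a few lines: perturb each parameter once, treat the model-minus-baseline responses as columns of a linear kernel (the Green's functions), and solve the data misfit by least squares. A schematic sketch under those assumptions:

      import numpy as np

      def greens_function_calibration(baseline, perturbed_runs, deltas, data):
          # baseline:       model output for the reference parameters (n_obs,)
          # perturbed_runs: one model output per perturbed parameter (n_par, n_obs)
          # deltas:         size of each parameter perturbation (n_par,)
          # data:           observations on the same grid (n_obs,)
          G = ((perturbed_runs - baseline) / deltas[:, None]).T   # linearized responses
          eta, *_ = np.linalg.lstsq(G, data - baseline, rcond=None)
          return eta    # parameter corrections to add to the baseline values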

  14. Gage for 3-d contours

    NASA Technical Reports Server (NTRS)

    Haynie, C. C.

    1980-01-01

    Simple gage, used with template, can help inspectors determine whether three-dimensional curved surface has correct contour. Gage was developed as aid in explosive forming of Space Shuttle emergency-escape hatch. For even greater accuracy, wedge can be made of metal and calibrated by indexing machine.

  15. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI

    PubMed Central

    Lyu, Mengye; Liu, Yilong; Xie, Victor B.; Feng, Yanqiu; Guo, Hua; Wu, Ed X.

    2017-01-01

    PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient. PMID:28205602

  16. A New Joint-Blade SENSE Reconstruction for Accelerated PROPELLER MRI.

    PubMed

    Lyu, Mengye; Liu, Yilong; Xie, Victor B; Feng, Yanqiu; Guo, Hua; Wu, Ed X

    2017-02-16

    PROPELLER technique is widely used in MRI examinations for being motion insensitive, but it prolongs scan time and is restricted mainly to T2 contrast. Parallel imaging can accelerate PROPELLER and enable more flexible contrasts. Here, we propose a multi-step joint-blade (MJB) SENSE reconstruction to reduce the noise amplification in parallel imaging accelerated PROPELLER. MJB SENSE utilizes the fact that PROPELLER blades contain sharable information and blade-combined images can serve as regularization references. It consists of three steps. First, conventional blade-combined images are obtained using the conventional simple single-blade (SSB) SENSE, which reconstructs each blade separately. Second, the blade-combined images are employed as regularization for blade-wise noise reduction. Last, with virtual high-frequency data resampled from the previous step, all blades are jointly reconstructed to form the final images. Simulations were performed to evaluate the proposed MJB SENSE for noise reduction and motion correction. MJB SENSE was also applied to both T2-weighted and T1-weighted in vivo brain data. Compared to SSB SENSE, MJB SENSE greatly reduced the noise amplification at various acceleration factors, leading to increased image SNR in all simulation and in vivo experiments, including T1-weighted imaging with short echo trains. Furthermore, it preserved motion correction capability and was computationally efficient.

  17. Application of modern radiative transfer tools to model laboratory quartz emissivity

    NASA Astrophysics Data System (ADS)

    Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.

    2005-08-01

    Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm⁻¹) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.

  18. Wavelet Monte Carlo dynamics: A new algorithm for simulating the hydrodynamics of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Dyer, Oliver T.; Ball, Robin C.

    2017-03-01

    We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest-range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N: as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order unity more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single-polymer systems, as well as verifying the effect of periodicity on the mobility tensor.

  19. Sampling errors in blunt dust samplers arising from external wall loss effects

    NASA Astrophysics Data System (ADS)

    Vincent, J. H.; Gibson, H.

    Evidence is given that, with some forms of blunt dust sampler under conditions relating to those encountered in practical occupational hygiene and environmental monitoring, particles which impact onto the outer surface of the sampler body may not adhere permanently, and may eventually enter the sampling orifice. The effect of such external wall loss is to bring about excess sampling, where errors as high as 100% could arise. The problem is particularly important in the sampling of dry airborne particulates of the type commonly found in practical situations. For a given sampler configuration, the effect becomes more marked as the particle size increases or as the ratio of sampling velocity to ambient wind speed increases. We would expect it to be greater for gritty, crystalline material than for smoother, amorphous material. Possible mechanisms controlling external wall losses were examined, and it was concluded that particle 'blow-off' (as opposed to particle 'bounce') is the most plausible. On the basis of simple experiments, it might be possible to make corrections for the sampling errors in question, but caution is recommended in doing so because of the unpredictable effects of environmental factors such as temperature and relative humidity. Of the possible practical solutions to the problem, it is felt that the best approach lies in the correct choice of sampler inlet design.

  20. Response functions for neutron skyshine analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gui, A.A.; Shultis, J.K.; Faw, R.E.

    1997-02-01

    Neutron and associated secondary photon line-beam response functions (LBRFs) for point monodirectional neutron sources are generated using the MCNP Monte Carlo code for use in neutron skyshine analysis employing the integral line-beam method. The LBRFs are evaluated at 14 neutron source energies ranging from 0.01 to 14 MeV and at 18 emission angles from 1 to 170 deg, as measured from the source-to-detector axis. The neutron and associated secondary photon conical-beam response functions (CBRFs) for azimuthally symmetric neutron sources are also evaluated at 13 neutron source energies in the same energy range and at 13 polar angles of source collimation from 1 to 89 deg. The response functions are approximated by an empirical three-parameter function of the source-to-detector distance. These response function approximations are available for a source-to-detector distance up to 2,500 m and, for the first time, give dose equivalent responses that are required for modern radiological assessments. For the CBRFs, ground correction factors for neutrons and secondary photons are calculated and also approximated by empirical formulas for use in air-over-ground neutron skyshine problems with azimuthal symmetry. In addition, simple procedures are proposed for humidity and atmospheric density corrections.

  1. Analysis of diffuse radiation data for Beer Sheva: Measured (shadow ring) versus calculated (global-horizontal beam) values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudish, A.I.; Ianetz, A.

    1993-12-01

    The authors have utilized concurrently measured global, normal incidence beam, and diffuse radiation data, the latter measured by means of a shadow ring pyranometer, to study the relative magnitude of the anisotropic contribution (circumsolar region and nonuniform sky conditions) to the diffuse radiation. In the case of Beer Sheva, the monthly average hourly anisotropic correction factor varies from 2.9 to 20.9%, whereas the "standard" geometric correction factor varies from 5.6 to 14.0%. The monthly average hourly overall correction factor (combined anisotropic and geometric factors) varies from 8.9 to 37.7%. The data have also been analyzed using a simple model of sky radiance developed by Steven in 1984. His anisotropic correction factor is a function of the relative strength and angular width of the circumsolar radiation region. The results of this analysis are in agreement with those previously reported for Quidron on the Dead Sea, viz. the anisotropy and relative strength of the circumsolar radiation are significantly greater than at any of the sites analyzed by Steven. In addition, the data have been utilized to validate a model developed by LeBaron et al. in 1990 for correcting shadow ring diffuse radiation data. The monthly average deviation between the corrected and true diffuse radiation values varies from 4.55 to 7.92%.

  2. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p,d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
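
    For reference, the Perey correction factor for a nonlocal potential of the Perey-Buck type with nonlocality range beta is usually written (standard notation in the transfer-reaction literature; mu is the reduced mass and U_LE the local-equivalent potential) as

      F(r) = \left[ 1 - \frac{\mu \beta^{2}}{2 \hbar^{2}} U_{\mathrm{LE}}(r) \right]^{-1/2},
      \qquad
      \psi_{\mathrm{NL}}(r) \approx F(r) \, \psi_{\mathrm{loc}}(r)

    Since U_LE vanishes at large r, F(r) tends to 1 in the surface and peripheral regions, which is precisely where the paper finds the correction to be least adequate.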

  3. S-NPP VIIRS thermal emissive band gain correction during the blackbody warm-up-cool-down cycle

    NASA Astrophysics Data System (ADS)

    Choi, Taeyoung J.; Cao, Changyong; Weng, Fuzhong

    2016-09-01

    The Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has onboard calibrators, a blackbody (BB) and a space view (SV), for Thermal Emissive Band (TEB) radiometric calibration. In normal operation the BB temperature is set to 292.5 K, providing a single radiance level. Trending from NOAA's Integrated Calibration and Validation System (ICVS) shows that the TEB calibration factors (F-factors) are very stable, whereas the BB warm-up-cool-down (WUCD) cycles provide measurements of detector gain and its temperature-dependent sensitivity. Since the launch of S-NPP, the NOAA Sea Surface Temperature (SST) group has noticed unexpected global SST anomalies during the WUCD cycles. In this study, the TEB F-factors are calculated during the WUCD cycle of June 17, 2015, and analyzed by identifying the VIIRS On-Board Calibrator Intermediate Product (OBCIP) files as warm-up or cool-down granules. To correct the SST anomaly, an F-factor correction parameter is calculated from modified C1 (or b1) values, which are derived from the linear portion of the C1 coefficient during the WUCD. The corrections are applied back to the original VIIRS SST bands, significantly reducing the F-factor changes. Obvious improvements are observed in M12 and M14, but correction effects are hardly seen in M16. Further investigation is needed to find the source of the F-factor oscillations during the WUCD.

  4. Experimental setup for the determination of the correction factors of the neutron doseratemeters in fast neutron fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Elena; Bercea, Sorin; Dudu, Dorin

    2013-12-16

    The use of the U-120 Cyclotron of IFIN-HH allowed a fast-neutron testing bench to be set up in order to determine the correction factors of doseratemeters dedicated to neutron measurement. This paper deals with the research performed to develop the irradiation facility based on the fast neutron flux generated at the cyclotron. This facility is presented, together with the results obtained in determining the correction factor for a doseratemeter dedicated to neutron dose equivalent rate measurement.

  5. Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.

    PubMed

    Webster, Eva M; Ellis, David A

    2012-09-01

    The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas phase concentrations were increased by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants with improved agreement to those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported, atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA. Copyright © 2012 SETAC.

  6. Slip Correction Measurements of Certified PSL Nanoparticles Using a Nanometer Differential Mobility Analyzer (Nano-DMA) for Knudsen Number From 0.5 to 83

    PubMed Central

    Kim, Jung Hyeun; Mulholland, George W.; Kukuck, Scott R.; Pui, David Y. H.

    2005-01-01

    The slip correction factor has been investigated at reduced pressures and high Knudsen number using polystyrene latex (PSL) particles. Nano-differential mobility analyzers (NDMA) were used in determining the slip correction factor by measuring the electrical mobility of 100.7 nm, 269 nm, and 19.90 nm particles as a function of pressure. The aerosol was generated via electrospray to avoid multiplets for the 19.90 nm particles and to reduce the contaminant residue on the particle surface. System pressure was varied down to 8.27 kPa, enabling slip correction measurements for Knudsen numbers as large as 83. A condensation particle counter was modified for low pressure application. The slip correction factor obtained for the three particle sizes is fitted well by the equation C = 1 + Kn (α + β exp(−γ/Kn)), with α = 1.165, β = 0.483, and γ = 0.997. The first quantitative uncertainty analysis for slip correction measurements was carried out. The expanded relative uncertainty (95% confidence interval) in measuring the slip correction factor was about 2% for the 100.7 nm SRM particles, about 3% for the 19.90 nm PSL particles, and about 2.5% for the 269 nm SRM particles. The major sources of uncertainty are the diameter of the particles, the geometric constant associated with the NDMA, and the voltage.
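
    The fitted form is directly usable as reported; for example (constants taken from the abstract above; converting pressure and particle size to Knudsen number is left to the caller):

      import math

      def slip_correction(Kn, alpha=1.165, beta=0.483, gamma=0.997):
          # C = 1 + Kn*(alpha + beta*exp(-gamma/Kn)), parameters as fitted here
          return 1.0 + Kn * (alpha + beta * math.exp(-gamma / Kn))

      for Kn in (0.5, 5.0, 83.0):   # span of Knudsen numbers covered by the study
          print(f"Kn = {Kn:5.1f}   C = {slip_correction(Kn):8.2f}")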

  7. Region of validity of the finite–temperature Thomas–Fermi model with respect to quantum and exchange corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyachkov, Sergey, E-mail: serj.dyachkov@gmail.com; Moscow Institute of Physics and Technology, 9 Institutskiy per., Dolgoprudny, Moscow Region 141700; Levashov, Pavel, E-mail: pasha@ihed.ras.ru

    We determine the region of applicability of the finite-temperature Thomas–Fermi model and its thermal part with respect to quantum and exchange corrections. Very high accuracy of the computations has been achieved by using a special approach to the solution of the boundary problem and to the numerical integration. We show that the thermal part of the model can be applied at lower temperatures than the full model. We also offer simple approximations of the boundaries of validity for practical applications.

  8. Lithographically encoded polymer microtaggant using high-capacity and error-correctable QR code for anti-counterfeiting of drugs.

    PubMed

    Han, Sangkwon; Bae, Hyung Jong; Kim, Junhoi; Shin, Sunghwan; Choi, Sung-Eun; Lee, Sung Hoon; Kwon, Sunghoon; Park, Wook

    2012-11-20

    A QR-coded microtaggant for the anti-counterfeiting of drugs is proposed that can provide high capacity and error-correction capability. It is fabricated lithographically in a microfluidic channel with special consideration of the island patterns in the QR Code. The microtaggant is incorporated in the drug capsule ("on-dose authentication") and can be read by a simple smartphone QR Code reader application when removed from the capsule and washed free of drug. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
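
    For illustration, the high error-correction setting the taggant relies on is available in commodity tooling, e.g. the Python qrcode package (this generates a screen-scale pattern, not the authors' lithographic fabrication pipeline; the payload string is hypothetical):

      import qrcode

      qr = qrcode.QRCode(
          version=None,                                       # let the library size the code
          error_correction=qrcode.constants.ERROR_CORRECT_H,  # highest level, ~30% recoverable
      )
      qr.add_data("DRUG-LOT-0042")       # hypothetical payload
      qr.make(fit=True)
      qr.make_image().save("taggant_pattern.png")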

  9. The Lagrangian Multiplier Method of Finding Upper and Lower Limits to Critical Stresses of Clamped Plates

    DTIC Science & Technology

    1946-01-01

    geometrical boundary conditions of the problem. (2) The energy of the load-plate system is computed for this deflection surface and is then minimized... and interpolating to find the k that makes the series vanish. The correct value of m is that which gives the lowest value of k. For two half waves (m=2)... the square plate, the present relatively simple upper- and lower-limit calculations show that his estimated limit of error is correct for this case

  10. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  11. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  12. Impaired smooth-pursuit in Parkinson's disease: normal cue-information memory, but dysfunction of extra-retinal mechanisms for pursuit preparation and execution

    PubMed Central

    Fukushima, Kikuro; Ito, Norie; Barnes, Graham R; Onishi, Sachiyo; Kobayashi, Nobuyoshi; Takei, Hidetoshi; Olley, Peter M; Chiba, Susumu; Inoue, Kiyoharu; Warabi, Tateo

    2015-01-01

    While retinal image motion is the primary input for smooth-pursuit, its efficiency depends on cognitive processes including prediction. Reports are conflicting on impaired prediction during pursuit in Parkinson's disease. By separating two major components of prediction (image motion direction memory and movement preparation) using a memory-based pursuit task, and by comparing tracking eye movements with those during a simple ramp-pursuit task that did not require visual memory, we examined smooth-pursuit in 25 patients with Parkinson's disease and compared the results with 14 age-matched controls. In the memory-based pursuit task, cue 1 indicated visual motion direction, whereas cue 2 instructed the subjects to prepare to pursue or not to pursue. Based on the cue-information memory, subjects were asked to pursue the correct spot from two oppositely moving spots or not to pursue. In 24/25 patients, the cue-information memory was normal, but movement preparation and execution were impaired. Specifically, unlike controls, most of the patients (18/24 = 75%) lacked initial pursuit during the memory task and started tracking the correct spot by saccades. Conversely, during simple ramp-pursuit, most patients (83%) exhibited initial pursuit. Popping-out of the correct spot motion during memory-based pursuit was ineffective for enhancing initial pursuit. The results were similar irrespective of levodopa/dopamine agonist medication. Our results indicate that the extra-retinal mechanisms of most patients are dysfunctional in initiating memory-based (not simple ramp) pursuit. A dysfunctional pursuit loop between frontal eye fields (FEF) and basal ganglia may contribute to the impairment of extra-retinal mechanisms, resulting in deficient pursuit commands from the FEF to brainstem. PMID:25825544

  13. Prevalence of pulmonary tuberculosis among adults in selected slums of Delhi city.

    PubMed

    Sarin, Rohit; Vohra, Vikram; Khalid, U K; Sharma, Prem Prakash; Chadha, Vineet; Sharada, M A

    2018-04-01

    A survey was carried out to estimate the point prevalence of bacteriologically positive pulmonary tuberculosis (PTB) among persons ≥15 years of age residing in Jhuggi-Jhopri (JJ) colonies, urban slums in Delhi, India, which have implemented the Directly Observed Treatment strategy since 1998. In 12 JJ colonies selected by simple random sampling, persons having a persistent cough for ≥2 weeks at the time of the survey, or a cough of any duration along with a history of contact, current anti-TB treatment, or known HIV-positive status, were subjected to sputum examination (two specimens) by smear microscopy for acid-fast bacilli and culture for Mycobacterium tuberculosis. Persons with at least one positive specimen were labelled as bacteriologically confirmed PTB. Prevalence was estimated after imputing missing values to correct the bias introduced by incompleteness of data, and corrected for non-screening by X-ray using a multiplication factor derived from recently conducted surveys. Of 40,756 persons registered, 40,529 (99.4%) were screened. Of them, 691 (2%) were eligible for sputum examination. Spot specimens were collected from 659 (99.2%) and early-morning sputum specimens from 647 (98.1%). Using screening by interview alone, the prevalence of bacteriologically positive PTB in persons ≥15 years of age was estimated at 160.4 (123.7-197.1) per 100,000 population, and at 210.0 (CI: 162.5-258.2) after correcting for non-screening by X-ray. The observed prevalence suggests further strengthening of the TB control program in urban slums. Copyright © 2017 Tuberculosis Association of India. Published by Elsevier B.V. All rights reserved.

  14. Treatment of post-traumatic elbow deformities in children with the Ilizarov distraction osteogenesis technique.

    PubMed

    Özkan, Cenk; Deveci, Mehmet Ali; Tekin, Mustafa; Biçer, Ömer Sunkar; Gökçe, Kadir; Gülşen, Mahir

    2017-01-01

    The present study assessed functional and radiographic outcomes of distraction osteogenesis treatment of post-traumatic elbow deformities in children. Eight children were treated between 2008 and 2013 for post-traumatic elbow deformities using distraction osteogenesis. Mean age at the time of operation was 10.9 years. Six patients had varus and 2 had valgus deformity. Magnitude of correction, fixator index, complications, carrying angle, and elbow range of motion were assessed. Functional results were graded according to the protocol of Bellemore et al. Mean follow-up was 43 months. Mean preoperative varus deformity in the 6 patients was 29.2°, and the valgus deformity in the 2 patients was 28.5°. Preoperative flexion and extension of the elbow were 123.8° and -10.6°, respectively. Mean carrying angle was 9° valgus at the last follow-up. Mean flexion and extension were 134.4° and -6.0°, respectively. The change in carrying angle was statistically significant (p = 0.002). There were 2 grade 1 pin-tract infections and 1 diaphyseal fracture of the humerus. Functional outcome was rated excellent in 7 patients and good in 1 patient. Ilizarov distraction osteogenesis is a valuable alternative in the treatment of elbow deformities in children. The surgical technique is simple and the correction is adjustable. Gradual correction prevents possible neurovascular complications, and minimally invasive surgery produces less scarring. Compliance of the patient and family is a key factor in the success of the outcome. Level IV, therapeutic study. Copyright © 2016 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.

  15. Intensity-Value Corrections for Integrating Sphere Measurements of Solid Samples Measured Behind Glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Timothy J.; Bernacki, Bruce E.; Redding, Rebecca L.

    2014-11-01

    Accurate and calibrated directional-hemispherical reflectance spectra of solids are important for both in situ and remote sensing. Many solids are in the form of powders or granules, and to measure their diffuse reflectance spectra in the laboratory, it is often necessary to place the samples behind a transparent medium such as glass for the ultraviolet (UV), visible, or near-infrared spectral regions. Using both experimental methods and a simple optical model, we demonstrate that glass (fused quartz in our case) leads to artifacts in the reflectance values. We report our observations that the measured reflectance values, for both hemispherical and diffuse reflectance, are distorted by the additional reflections arising at the air–quartz and sample–quartz interfaces. The values are dependent on the sample reflectance and are offset in intensity in the hemispherical case, leading to measured values up to ~6% too high for a 2% reflectance surface, ~3.8% too high for 10% reflecting surfaces, approximately correct for 40–60% diffuse-reflecting surfaces, and ~1.5% too low for 99% reflecting Spectralon® surfaces. For the case of diffuse-only reflectance, the measured values are uniformly too low due to the polished glass, with differences of nearly 6% for a 99% reflecting matte surface. The deviations arise from the added reflections from the quartz surfaces, as verified by both theory and experiment, and depend on sphere design. Finally, empirical correction factors were implemented into post-processing software to redress the artifact for hemispherical and diffuse reflectance data across the 300–2300 nm range.
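
    One way such an empirical correction could look in post-processing, as a hedged sketch: the quoted hemispherical offsets are treated as absolute reflectance-point errors at a few anchor reflectances and interpolated. The authors' actual software may use a different functional form, and the offset convention (absolute vs. relative) is an assumption here.

        import numpy as np

        # Anchor points from the abstract: measured-minus-true offsets (in
        # absolute reflectance units, assumed) at several true reflectances.
        R_true = np.array([0.02, 0.10, 0.50, 0.99])
        offset = np.array([+0.060, +0.038, 0.000, -0.015])

        def correct_hemispherical(R_measured):
            # Interpolate the offset at the measured value (measured = true +
            # offset, so the offset curve is indexed by R_true + offset) and
            # subtract it; adequate because the offset varies slowly.
            return R_measured - np.interp(R_measured, R_true + offset, offset)

        print(correct_hemispherical(0.08))   # recovers ~0.02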

  16. A comparison of quality of present-day heat flow obtained from BHTs, Horner Plots of Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waples, D.W.; Mahadir, R.

    1994-07-01

    Reconciling temperature data obtained from measurements of single BHTs, multiple BHTs at a single depth, RFTs, and DSTs is very difficult. Quality of data varied widely; however, DST data were assumed to be the most reliable. Data from 87 wells were used in this study, but only 47 wells have DST data. The BASINMOD program was used to calculate the present-day heat flow, using measured thermal conductivity and calibrated against the DST data. The heat flows obtained from the DST data were assumed to be correct and representative throughout the basin. Then, heat flows were calculated using (1) uncorrected RFT data, (2) multiple BHT data corrected by the Horner plot method, and (3) single BHT values corrected upward by a standard 10%. All three of these heat-flow populations had standard deviations identical to that of the DST data, but with significantly lower mean values. Correction factors were calculated to give each of the three erroneous populations the same mean value as the DST population. Heat flows calculated from RFT data had to be corrected upward by a factor of 1.12 to be equivalent to DST data; Horner plot data by a factor of 1.18, and single BHT data by a factor of 1.2. These results suggest that present-day subsurface temperatures using RFT, Horner plot, and BHT data are considerably lower than they should be. The authors suspect qualitatively similar results would be found in other areas. Hence, they recommend significant corrections be routinely made until local calibration factors are established.
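
    The mean-matching step described above is simple enough to state in a few lines; the numbers below are hypothetical heat flows, not the Malay Basin data.

        import numpy as np

        def calibration_factor(dst, biased):
            """Multiplicative factor giving the biased population the same
            mean as the DST reference population."""
            return np.mean(dst) / np.mean(biased)

        dst = np.array([52.0, 55.0, 50.0])    # mW/m^2, illustrative only
        rft = np.array([46.5, 49.0, 44.5])    # runs ~12% low
        print(round(calibration_factor(dst, rft), 2))   # ~1.12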

  17. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Bergstrom, P

    2015-06-15

    Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber's response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum. Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure that dose is accurately and consistently delivered to patients.
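
    The final step, folding the tabulated monoenergetic factors into a measured spectrum, might look like the following sketch; the k(E) table and toy spectrum are invented for illustration and are not NIST's data.

        import numpy as np

        E_grid = np.arange(2, 62, 2)                    # keV, as in the abstract
        k_grid = 1.0 + 0.002 * np.exp(-E_grid / 15.0)   # hypothetical k(E) table

        def spectrum_weighted_k(E_spec, phi_spec):
            """Correction factor weighted by the measured (air-kerma-weighted)
            x-ray spectrum phi_spec sampled at energies E_spec."""
            k = np.interp(E_spec, E_grid, k_grid)
            return np.sum(phi_spec * k) / np.sum(phi_spec)

        E = np.linspace(5, 50, 200)
        phi = np.exp(-(E - 30.0)**2 / 50.0)             # toy eBx spectrum
        print(spectrum_weighted_k(E, phi))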

  18. The magnetisation distribution of the Ising model - a new approach

    NASA Astrophysics Data System (ADS)

    Hakan Lundow, Per; Rosengren, Anders

    2010-03-01

    A completely new approach to the Ising model in 1 to 5 dimensions is developed. We employ a generalisation of the binomial coefficients to describe the magnetisation distributions of the Ising model. For the complete graph this distribution is exact. For simple lattices of dimensions d=1 and d=5 the magnetisation distributions are remarkably well fitted by the generalised binomial distributions. For d=4 we are only slightly less successful, while for d=2,3 we see some deviations (with exceptions!) between the generalised binomial and the Ising distribution. The results speak in favour of the generalised binomial distributions correctly capturing the general behaviour of the Ising model. A theoretical analysis of the distributions' moments also lends support to their being asymptotically correct, including the logarithmic corrections in d=4. The full extent to which they correctly model the Ising distribution, and for which graph families, is not settled though.

  19. On-board error correction improves IR earth sensor accuracy

    NASA Astrophysics Data System (ADS)

    Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.

    1989-10-01

    Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. The sources of error in a scanning infra-red earth sensor are analyzed in this paper. The systematic errors arising from the seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations are derived, using least-squares curve fitting, for on-board correction of these errors. Random errors arising from noise in the detector and amplifiers, instability of alignment, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on-board. It is possible to obtain an eightfold improvement in sensing accuracy, which is comparable with ground-based post-facto attitude refinement.
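
    A sketch of the kind of least-squares relation described, assuming a seasonal systematic error modeled as a low-order polynomial in day-of-year; the data are synthetic placeholders, not sensor calibration data.

        import numpy as np

        day = np.linspace(0, 365, 40)
        measured_error = 0.05 * np.sin(2 * np.pi * day / 365) + 0.01  # degrees

        coeffs = np.polyfit(day, measured_error, deg=4)  # simple on-board relation
        residual = measured_error - np.polyval(coeffs, day)
        print(f"rms residual after correction: {np.std(residual):.4f} deg")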

  20. Removal of ring artifacts in microtomography by characterization of scintillator variations.

    PubMed

    Vågberg, William; Larsson, Jakob C; Hertz, Hans M

    2017-09-18

    Ring artifacts reduce image quality in tomography and arise from faulty detector calibration. In microtomography, we have identified that ring artifacts can arise from high-spatial-frequency variations in the scintillator thickness. Such variations are normally removed by a flat-field correction. However, as the spectrum changes, e.g. due to beam hardening, the detector response varies non-uniformly, introducing ring artifacts that persist after flat-field correction. In this paper, we present a method to correct for ring artifacts from variations in scintillator thickness by using a simple method to characterize the local scintillator response. The method addresses the actual physical cause of the ring artifacts, in contrast to many other ring-artifact removal methods, which rely only on image post-processing. By applying the technique to an experimental phantom tomography, we show that ring artifacts are strongly reduced compared with a flat-field correction alone.

  1. Prevention of myopia by partial correction of hyperopia: a twins study.

    PubMed

    Medina, Antonio

    2018-04-01

    To confirm the prediction of emmetropization feedback theory that myopia can be prevented by correcting the hyperopia of a child at risk of becoming myopic. We conducted such myopia prevention treatment with twins at risk. Their hyperopia was partially corrected by one half at age 7 and in subsequent years until age 16. Hyperopia progressively decreased in all eyes as expected. None of the twins developed myopia. The spherical equivalent refractions of the followed eyes were +1 and +1.25 D at age 16. Feedback theory accurately predicted these values. The treatment of the twins with partial correction of their hyperopia was successful. Prevention of myopia with this technique is relatively simple and powerful. The use of this myopia prevention treatment has no adverse effects. This prevention treatment is indicated in children with a hyperopic reserve at risk of developing myopia.

  2. Efficient correction of wavefront inhomogeneities in X-ray holographic nanotomography by random sample displacement

    NASA Astrophysics Data System (ADS)

    Hubert, Maxime; Pacureanu, Alexandra; Guilloud, Cyril; Yang, Yang; da Silva, Julio C.; Laurencin, Jerome; Lefebvre-Joud, Florence; Cloetens, Peter

    2018-05-01

    In X-ray tomography, ring-shaped artifacts present in the reconstructed slices are an inherent problem degrading the global image quality and hindering the extraction of quantitative information. To overcome this issue, we propose a strategy for suppression of ring artifacts originating from the coherent mixing of the incident wave and the object. We discuss the limits of validity of the empty beam correction in the framework of a simple formalism. We then deduce a correction method based on two-dimensional random sample displacement, with minimal cost in terms of spatial resolution, acquisition, and processing time. The method is demonstrated on bone tissue and on a hydrogen electrode of a ceramic-metallic solid oxide cell. Compared to the standard empty beam correction, we obtain high quality nanotomography images revealing detailed object features. The resulting absence of artifacts allows straightforward segmentation and posterior quantification of the data.

  3. Simultaneous double-rod rotation technique in posterior instrumentation surgery for correction of adolescent idiopathic scoliosis.

    PubMed

    Ito, Manabu; Abumi, Kuniyoshi; Kotani, Yoshihisa; Takahata, Masahiko; Sudo, Hideki; Hojo, Yoshihiro; Minami, Akio

    2010-03-01

    The authors present a new posterior correction technique consisting of simultaneous double-rod rotation using 2 contoured rods and polyaxial pedicle screws with or without Nesplon tapes. The purpose of this study is to introduce the basic principles and surgical procedures of this new posterior surgery for correction of adolescent idiopathic scoliosis. Through gradual rotation of the concave-side rod by 2 rod holders, the convex-side rod simultaneously rotates with the concave-side rod. This procedure does not involve any force pushing down the spinal column around the apex. Since this procedure consists of upward pushing and lateral translation of the spinal column with simultaneous double-rod rotation maneuvers, it is simple and can obtain thoracic kyphosis as well as favorable scoliosis correction. This technique is applicable not only to a thoracic single curve but also to double major curves in cases of adolescent idiopathic scoliosis.

  4. Design of general apochromatic drift-quadrupole beam lines

    NASA Astrophysics Data System (ADS)

    Lindstrøm, C. A.; Adli, E.

    2016-07-01

    Chromatic errors are normally corrected using sextupoles in regions of large dispersion. In low-emittance linear accelerators, the use of sextupoles can be challenging. Apochromatic focusing is a lesser-known alternative approach, whereby chromatic errors of the Twiss parameters are corrected without the use of sextupoles; it has consequently been subject to renewed interest in advanced linear accelerator research. Proof-of-principle designs were first established by Montague and Ruggiero and developed more recently by Balandin et al. We describe a general method for designing drift-quadrupole beam lines with apochromatic correction of arbitrary order, including analytic expressions for emittance growth and other merit functions. Worked examples are shown for plasma wakefield accelerator staging optics and for a simple final-focus system.

  5. Prosthetic Correction of Postenucleation Socket Syndrome: A Case Report.

    PubMed

    Kamble, Vikas B

    2014-12-01

    Postenucleation socket syndrome is a frequent late complication of enucleation of the eye globe. Several pathophysiological mechanisms have been proposed to account for the symptoms of postenucleation socket syndrome, which include lost orbital volume, superior sulcus deformity, upper eyelid ptosis, lower eyelid laxity, and backward tilt of the prosthesis. The goal of postenucleation socket syndrome treatment is to achieve the best possible functional and esthetic result. The treatment can be either conservative or surgical. For the patient interested in a non-surgical correction, the conservative treatment is simple and non-invasive and can be done with prosthesis modification for good positioning, comfort, and mobility. This paper describes the prosthetic correction of a patient with postenucleation socket syndrome by means of a modified ocular prosthesis.

  6. Tuning dispersion correction in DFT-D2 for metal-molecule interactions: A tailored reparameterization strategy for the adsorption of aromatic systems on Ag(1 1 1)

    NASA Astrophysics Data System (ADS)

    Schiavo, Eduardo; Muñoz-García, Ana B.; Barone, Vincenzo; Vittadini, Andrea; Casarin, Maurizio; Forrer, Daniel; Pavone, Michele

    2018-02-01

    Common local and semi-local density functionals poorly describe the molecular physisorption on metal surfaces due to the lack of dispersion interactions. In the last decade, several correction schemes have been proposed to amend this fundamental flaw of Density Functional Theory. Using the prototypical case of aromatic molecules adsorbed on Ag(1 1 1), we discuss the accuracy of different dispersion-correction methods and present a reparameterization strategy for the simple and effective DFT-D2. For the adsorption of different aromatic systems on the same metallic substrate, good results at feasible computational costs are achieved by means of a fitting procedure against MP2 data.

  7. Mask process correction (MPC) modeling and its application to EUV mask for electron beam mask writer EBM-7000

    NASA Astrophysics Data System (ADS)

    Kamikubo, Takashi; Ohnishi, Takayuki; Hara, Shigehiro; Anze, Hirohito; Hattori, Yoshiaki; Tamamushi, Shuichi; Bai, Shufeng; Wang, Jen-Shiang; Howell, Rafael; Chen, George; Li, Jiangwei; Tao, Jun; Wiley, Jim; Kurosawa, Terunobu; Saito, Yasuko; Takigawa, Tadahiro

    2010-09-01

    In electron beam writing on EUV masks, it has been reported that CD linearity does not show the simple signatures observed with conventional COG (Cr on Glass) masks, because of scattered electrons from the EUV mask itself, which comprises stacked heavy metals and thick multi-layers. To resolve this issue, Mask Process Correction (MPC) is ideally applicable. Every pattern is reshaped in MPC; therefore, the number of shots does not increase and the writing time is kept within a reasonable range. In this paper, MPC is extended to modeling for the correction of CD linearity errors on EUV masks. Its effectiveness is verified with simulations and experiments through actual writing tests.

  8. Atmospheric correction of ocean color sensors: analysis of the effects of residual instrument polarization sensitivity.

    PubMed

    Gordon, H R; Du, T; Zhang, T

    1997-09-20

    We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity--replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere--is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.

  9. Time delay of critical images in the vicinity of cusp point of gravitational-lens systems

    NASA Astrophysics Data System (ADS)

    Alexandrov, A.; Zhdanov, V.

    2016-12-01

    We consider approximate analytical formulas for the time delays of critical images of a point source in the neighborhood of a cusp caustic. We discuss the zero, first and second approximations in powers of a parameter that defines the proximity of the source to the cusp. These formulas link the time delay with characteristics of the lens potential. The formula of the zero approximation was obtained by Congdon, Keeton & Nordgren (MNRAS, 2008). In the case of a general lens potential we derive a first-order correction to it. If the potential is symmetric with respect to the cusp axis, then this correction is identically equal to zero. For this case, we obtain a second-order correction. The relations found are illustrated by a simple model example.

  10. Fermi orbital self-interaction corrected electronic structure of molecules beyond local density approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, T., E-mail: torsten.hahn@physik.tu-freiberg.de; Liebing, S.; Kortus, J.

    2015-12-14

    The correction of the self-interaction error that is inherent to all standard density functional theory calculations is an object of increasing interest. In this article, we apply the very recently developed Fermi-orbital based approach for the self-interaction correction [M. R. Pederson et al., J. Chem. Phys. 140, 121103 (2014) and M. R. Pederson, J. Chem. Phys. 142, 064112 (2015)] to a set of different molecular systems. Our study covers systems ranging from simple diatomic to large organic molecules. We focus our analysis on the direct estimation of the ionization potential from orbital eigenvalues. Further, we show that the Fermi orbital positions in structurally similar molecules appear to be transferable.

  11. Modelling Thin Film Microbending: A Comparative Study of Three Different Approaches

    NASA Astrophysics Data System (ADS)

    Aifantis, Katerina E.; Nikitas, Nikos; Zaiser, Michael

    2011-09-01

    Constitutive models which describe crystal microplasticity in a continuum framework can be envisaged as average representations of the dynamics of dislocation systems. Thus, their performance needs to be assessed not only by their ability to correctly represent stress-strain characteristics on the specimen scale but also by their ability to correctly represent the evolution of internal stress and strain patterns. In the present comparative study we consider the bending of a free-standing thin film. We compare the results of 3D DDD simulations with those obtained from a simple 1D gradient plasticity model and a more complex dislocation-based continuum model. Both models correctly reproduce the nontrivial strain patterns predicted by DDD for the microbending problem.

  12. Heavy quark form factors at two loops

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; Falcioni, G.; De Freitas, A.; Marquard, P.; Rana, N.; Schneider, C.

    2018-05-01

    We compute the two-loop QCD corrections to the heavy quark form factors in the case of the vector, axial-vector, scalar and pseudoscalar currents up to second order in the dimensional parameter ɛ =(4 -D )/2 . These terms are required in the renormalization of the higher-order corrections to these form factors.

  13. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO were 0.25%, 0.6% and 1% for 90Sr/90Y, 85Kr and 147Pm sources, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Validation of the Simple Shoulder Test in a Portuguese-Brazilian population. Is the latent variable structure and validation of the Simple Shoulder Test Stable across cultures?

    PubMed

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. We also test the stability of the factor analysis across different cultures. The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed, including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. The test-retest reliability index as measured by the intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in comparisons with international patient samples.
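
    For reference, the internal-reliability statistic reported above (Cronbach's alpha = 0.82) is straightforward to compute from an item-score matrix; the scores below are made up for illustration.

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        scores = np.array([[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 4, 3]])
        print(round(cronbach_alpha(scores), 2))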

  15. Validation of the Simple Shoulder Test in a Portuguese-Brazilian Population. Is the Latent Variable Structure and Validation of the Simple Shoulder Test Stable across Cultures?

    PubMed Central

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    Background The validation of widely used scales facilitates the comparison across international patient samples. Objective The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. We also test the stability of the factor analysis across different cultures. Methods The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed, including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Results Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. The test-retest reliability index as measured by the intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. Conclusion The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in comparisons with international patient samples. PMID:23675436

  16. Possible Mechanism for the Generation of a Fundamental Unit of Charge (long version)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lestone, John Paul

    2017-06-16

    Various methods for calculating particle-emission rates from hot systems are reviewed. Semi-classically derived photon-emission rates often contain the term exp(-ε/T), which needs to be replaced with the corresponding Planckian factor [exp(ε/T)-1]⁻¹ to obtain the correct rate. This replacement is associated with the existence of stimulated emission. Simple arguments are used to demonstrate that black holes can also undergo stimulated emission, as previously determined by others. We extend these concepts to fundamental particles, and assume they can be stimulated to emit virtual photons with a cross section of πλ², in the case of an isolated particle, when the incident virtual-photon energy is < 2πmc². Stimulated virtual photons can be exchanged with other particles, generating a force. With the inclusion of near-field effects, the model choices presented give a calculated fundamental unit of charge of 1.6022×10⁻¹⁹ C. If these choices are corroborated by detailed calculations, then an understanding of the numerical value of the fine-structure constant may emerge. The present study suggests charge might be an emergent property generated by a simple interaction mechanism between point-like particles and the electromagnetic vacuum, similar to the process that generates the Lamb shift.

  17. Robust Bayesian linear regression with application to an analysis of the CODATA values for the Planck constant

    NASA Astrophysics Data System (ADS)

    Wübbeler, Gerd; Bodnar, Olha; Elster, Clemens

    2018-02-01

    Weighted least-squares estimation is commonly applied in metrology to fit models to measurements that are accompanied by quoted uncertainties. The weights are chosen according to the quoted uncertainties. However, when data and model are inconsistent in view of the quoted uncertainties, this procedure does not yield adequate results. When it can be assumed that all uncertainties ought to be rescaled by a common factor, weighted least-squares estimation may still be used, provided that a simple correction of the uncertainty obtained for the estimated model is applied. We show that these uncertainties and credible intervals are robust, as they do not rely on the assumption of a Gaussian distribution of the data. Hence, common software for weighted least-squares estimation may still safely be employed in such a case, followed by a simple modification of the uncertainties obtained by that software. We also provide means of checking the assumptions of such an approach. The Bayesian regression procedure is applied to analyze the CODATA values for the Planck constant published over the past decades in terms of three different models: a constant model, a straight-line model and a spline model. Our results indicate that the CODATA values may not have yet stabilized.
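
    For the constant-model case, the common-rescaling recipe reduces to the familiar Birge-ratio correction sketched below; the paper's Bayesian treatment differs in detail, so this is only the weighted-least-squares baseline it builds on, with toy numbers.

        import numpy as np

        def weighted_mean_rescaled(x, u):
            """Weighted mean with the uncertainty of the mean inflated by the
            Birge ratio when the data are mutually inconsistent."""
            w = 1.0 / u**2
            mean = np.sum(w * x) / np.sum(w)
            u_mean = np.sqrt(1.0 / np.sum(w))
            chi2 = np.sum(w * (x - mean)**2)
            birge = np.sqrt(chi2 / (len(x) - 1))
            return mean, u_mean * max(birge, 1.0)

        x = np.array([6.62607004, 6.62606957, 6.62607015])  # toy values
        u = np.array([0.00000008, 0.00000003, 0.00000010])
        print(weighted_mean_rescaled(x, u))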

  18. Assessment of Gaseous Oxidized Mercury Measurement Accuracy at an Atmospheric Mercury Network (AMNet) Site

    NASA Astrophysics Data System (ADS)

    Luke, W. T.

    2016-12-01

    Recent laboratory and field research has documented and explored the biases and inaccuracies of the measurement of gaseous oxidized mercury (GOM) compounds using KCl-coated denuders. We report on the development of a simple, automated GOM calibration source and its deployment at NOAA/Air Resources Laboratory's Atmospheric Mercury Network (AMNet) site at the Mauna Loa Observatory (MLO) on the island of Hawaii. NOAA/ARL has developed a permeation-tube based calibration source with an extremely simple flow path that minimizes surface adsorptive effects and losses. The source was used to inject HgBr2 into one of two side-by-side Tekran® mercury speciation systems at MLO to characterize GOM measurement accuracy under a variety of atmospheric conditions. Due to its unique topography and meteorology, MLO experiences katabatic (upslope/downslope) mesoscale flow superimposed on the synoptic trade wind circulation of the tropics. Water vapor, ozone, and other trace atmospheric constituents often display pronounced diurnal variations at the site, which frequently encounters air characteristic of the middle free troposphere at night, and of the tropical marine boundary layer during the day. Results presented here will assist in the better understanding of the biases underlying GOM measurements in global mercury monitoring networks and may allow the development of correction factors for ambient data.

  19. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    PubMed

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly, with 99% correct assemblies for four parts and 90% correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts, flanked by the same prefix and suffix sequences, and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly, including the construction of promoter, RBS, gene-variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple, robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  20. Hybrid seine for full fish community collections

    USGS Publications Warehouse

    McKenna, James E.; Waldt, Emily M.; Abbett, Ross; David, Anthony; Snyder, James

    2013-01-01

    Seines are simple and effective fish collection gears, but the net mesh size influences how well the catch represents the fish community. We designed and tested a hybrid seine with a dual-mesh bag (1/4″ and 1/8″) and compared the fish assemblage collected by each mesh. The fine-mesh net retained three times as many fish as the coarser mesh and collected more species (as many as eight more), including representatives of several rare species. The dual-mesh bag permitted us to compare both the sizes and the species retained by each layer and to develop species-specific abundance correction factors, which allowed comparison of catches with the coarse-mesh seine used for earlier collections. The results indicate that a hybrid seine with coarse-mesh wings and a fine-mesh bag would enhance future studies of fish communities, especially when small-bodied fishes or early life stages are the research focus.

  1. A model of phytoplankton blooms.

    PubMed

    Huppert, Amit; Blasius, Bernd; Stone, Lewi

    2002-02-01

    A simple model that describes the dynamics of nutrient-driven phytoplankton blooms is presented. Apart from complicated simulation studies, very few models reported in the literature have taken this "bottom-up" approach. Yet, as discussed and justified from a theoretical standpoint, many blooms are strongly controlled by nutrients rather than by higher trophic levels. The analysis identifies an important threshold effect: a bloom will only be triggered when nutrients exceed a certain defined level. This threshold effect should be generic to both natural blooms and most simulation models. Furthermore, predictions are given as to how the peak of the bloom Pmax is determined by initial conditions. A number of counterintuitive results are found. In particular, it is shown that increasing initial nutrient or phytoplankton levels can act to decrease Pmax. Correct predictions require an understanding of such factors as the timing of the bloom and the period of nutrient buildup before the bloom.
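
    The threshold effect is easy to reproduce with a generic nutrient-phytoplankton pair in the same spirit (not necessarily the authors' exact equations): phytoplankton P grows on nutrient N at rate beta*N and dies at rate mu, so growth is positive only while N exceeds mu/beta.

        # Forward-Euler integration of a minimal bloom model (illustrative).
        beta, alpha, mu = 1.0, 0.8, 0.2     # made-up rate constants
        N, P, dt = 0.5, 0.01, 0.001         # N starts above the threshold 0.2
        P_max = P
        for _ in range(int(100 / dt)):
            dP = (beta * N - mu) * P        # net phytoplankton growth
            dN = -alpha * N * P             # nutrient uptake
            P, N = P + dP * dt, N + dN * dt
            P_max = max(P_max, P)
        print(f"bloom peak P_max = {P_max:.3f}, residual N = {N:.3f}")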

  2. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme.

    PubMed

    Li, Shaohong L; Truhlar, Donald G

    2015-07-14

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  3. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Shaohong L.; Truhlar, Donald G.

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  4. Daniell method for power spectral density estimation in atomic force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Labuda, Aleksander

    An alternative method for power spectral density (PSD) estimation—the Daniell method—is revisited and compared to the most prevalent method used in the field of atomic force microscopy for quantifying cantilever thermal motion—the Bartlett method. Both methods are shown to underestimate the Q factor of a simple harmonic oscillator (SHO) by a predictable, and therefore correctable, amount in the absence of spurious deterministic noise sources. However, the Bartlett method is much more prone to spectral leakage, which can obscure the thermal spectrum in the presence of deterministic noise. By the significant reduction in spectral leakage, the Daniell method leads to a more accurate representation of the true PSD and enables clear identification and rejection of deterministic noise peaks. This benefit is especially valuable for the development of automated PSD fitting algorithms for robust and accurate estimation of SHO parameters from a thermal spectrum.
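
    The contrast between the two estimators is compact in code: Bartlett averages periodograms of short segments, while Daniell smooths one full-length periodogram across neighbouring frequency bins. A sketch of the latter, under the usual one-sided scaling conventions:

        import numpy as np

        def daniell_psd(x, fs, m=7):
            """Daniell PSD estimate: full-length periodogram smoothed by a
            moving average over m adjacent frequency bins."""
            n = len(x)
            f = np.fft.rfftfreq(n, 1 / fs)
            pxx = np.abs(np.fft.rfft(x))**2 / (fs * n)
            pxx[1:-1] *= 2                       # one-sided spectrum
            return f, np.convolve(pxx, np.ones(m) / m, mode="same")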

  5. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  6. Improved Convergence and Robustness of USM3D Solutions on Mixed-Element Grids

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Diskin, Boris; Thomas, James L.; Frink, Neal T.

    2016-01-01

    Several improvements to the mixed-element USM3D discretization and defect-correction schemes have been made. A new methodology for nonlinear iterations, called the Hierarchical Adaptive Nonlinear Iteration Method, has been developed and implemented. The Hierarchical Adaptive Nonlinear Iteration Method provides two additional hierarchies around a simple and approximate preconditioner of USM3D. The hierarchies are a matrix-free linear solver for the exact linearization of Reynolds-averaged Navier-Stokes equations and a nonlinear control of the solution update. Two variants of the Hierarchical Adaptive Nonlinear Iteration Method are assessed on four benchmark cases, namely, a zero-pressure-gradient flat plate, a bump-in-channel configuration, the NACA 0012 airfoil, and a NASA Common Research Model configuration. The new methodology provides a convergence acceleration factor of 1.4 to 13 over the preconditioner-alone method representing the baseline solver technology.

  7. Evaluation of thermal data for geologic applications

    NASA Technical Reports Server (NTRS)

    Kahle, A. B.; Palluconi, F. D.; Levine, C. J.; Abrams, M. J.; Nash, D. B.; Alley, R. E.; Schieldge, J. P.

    1982-01-01

    Sensitivity studies using thermal models indicated sources of error in the determination of thermal inertia from HCMM data. Apparent thermal inertia, with only simple atmospheric radiance corrections to the measured surface temperature, would be sufficient for most operational requirements for surface thermal inertia. Thermal data do contain additional information about the nature of surface materials that is not available in visible and near-infrared reflectance data. Color composites of daytime temperature, nighttime temperature, and albedo were often more useful than thermal inertia images alone for discrimination of lithologic boundaries. A modeling study, using the annual heating cycle, indicated the feasibility of looking for geologic features buried under as much as a meter of alluvial material. The spatial resolution of HCMM data is a major limiting factor in the usefulness of the data for geologic applications. Future thermal infrared satellite sensors should provide spatial resolution comparable to that of the LANDSAT data.

  8. Improving Rydberg Excitations within Time-Dependent Density Functional Theory with Generalized Gradient Approximations: The Exchange-Enhancement-for-Large-Gradient Scheme

    DOE PAGES

    Li, Shaohong L.; Truhlar, Donald G.

    2015-05-22

    Time-dependent density functional theory (TDDFT) with conventional local and hybrid functionals such as the local and hybrid generalized gradient approximations (GGA) seriously underestimates the excitation energies of Rydberg states, which limits its usefulness for applications such as spectroscopy and photochemistry. We present here a scheme that modifies the exchange-enhancement factor to improve GGA functionals for Rydberg excitations within the TDDFT framework while retaining their accuracy for valence excitations and for the thermochemical energetics calculated by ground-state density functional theory. The scheme is applied to a popular hybrid GGA functional and tested on data sets of valence and Rydberg excitations and atomization energies, and the results are encouraging. The scheme is simple and flexible. It can be used to correct existing functionals, and it can also be used as a strategy for the development of new functionals.

  9. Free-energy functional of the Debye-Hückel model of simple fluids

    NASA Astrophysics Data System (ADS)

    Piron, R.; Blenski, T.

    2016-12-01

    The Debye-Hückel approximation to the free energy of a simple fluid is written as a functional of the pair correlation function. This functional can be seen as the Debye-Hückel equivalent to the functional derived in the hypernetted chain framework by Morita and Hiroike, as well as by Lado. It allows one to obtain the Debye-Hückel integral equation through a minimization with respect to the pair correlation function, leads to the correct form of the internal energy, and fulfills the virial theorem.

  10. The Method of Curvatures.

    ERIC Educational Resources Information Center

    Greenslade, Thomas B., Jr.; Miller, Franklin, Jr.

    1981-01-01

    Describes method for locating images in simple and complex systems of thin lenses and spherical mirrors. The method helps students to understand differences between real and virtual images. It is helpful in discussing the human eye and the correction of imperfect vision by the use of glasses. (Author/SK)

  11. Analysis of Levene's Test under Design Imbalance.

    ERIC Educational Resources Information Center

    Keyes, Tim K.; Levy, Martin S.

    1997-01-01

    H. Levene (1960) proposed a heuristic test for heteroscedasticity in the case of a balanced two-way layout, based on analysis of variance of absolute residuals. Conditions under which design imbalance affects the test's characteristics are identified, and a simple correction involving leverage is proposed. (SLD)
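
    In its standard form, Levene's heuristic is an ANOVA on absolute deviations from the group centers and is available directly in SciPy; note that the leverage-based correction for unbalanced designs proposed in the article is not part of that routine.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        g1 = rng.normal(0.0, 1.0, size=30)   # deliberately unbalanced sizes
        g2 = rng.normal(0.0, 2.0, size=12)
        stat, p = stats.levene(g1, g2, center="median")
        print(f"W = {stat:.2f}, p = {p:.3f}")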

  12. Single photon and nonlocality

    NASA Astrophysics Data System (ADS)

    Drezet, Aurelien

    2007-03-01

    In a paper by Home and Agarwal [1], it is claimed that quantum nonlocality can be revealed in a simple interferometry experiment using only single particles. A critical analysis of the concept of hidden variable used by the authors of [1] shows that the reasoning is not correct.

  13. Simple Statistics: - Summarized!

    ERIC Educational Resources Information Center

    Blai, Boris, Jr.

    Statistics are an essential tool for making sound judgments and decisions. The field is concerned with probability distribution models, testing of hypotheses, significance tests, and other means of determining the correctness of deductions and the most likely outcome of decisions. Measures of central tendency include the mean, median and mode. A second…

  14. Use Over-the-Counter Medicines Wisely

    MedlinePlus

    ... correctly. Simply put, this means that when you buy or use an OTC medicine, remember to: ● Respect that OTCs are serious medicines ... simple steps: ● Read the label—every time you buy or use a nonprescription medicine, pay special attention to the ingredients and directions ...

  15. Babies and Math: A Meta-Analysis of Infants' Simple Arithmetic Competence

    ERIC Educational Resources Information Center

    Christodoulou, Joan; Lac, Andrew; Moore, David S.

    2017-01-01

    Wynn's (1992) seminal research reported that infants looked longer at stimuli representing "incorrect" versus "correct" solutions of basic addition and subtraction problems and concluded that infants have innate arithmetical abilities. Since then, infancy researchers have attempted to replicate this effect, yielding mixed…

  16. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

    Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na⁺ was found (R² = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na⁺ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was thus established to indirectly measure evaporation through a correlation with the level of Na⁺ ions in solution, together with a simple formula to account for liquid losses.
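
    If Na⁺ is conserved while water evaporates, the remaining volume follows directly from the concentration ratio; a sketch of the resulting compensation formula (variable names and values are illustrative, not from the paper):

        def water_to_add(v_start_ml, na_start_mM, na_now_mM):
            """Water volume restoring the starting volume, assuming Na+ is
            conserved so that evaporation only concentrates it."""
            v_now = v_start_ml * na_start_mM / na_now_mM
            return v_start_ml - v_now

        print(water_to_add(2.0, 100.0, 115.0))   # ~0.26 mL lost to evaporation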

  17. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random sampling [corrected] start point outside the structure of interest, and sampling relevant objects at [corrected] sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.

  18. Correcting the initialization of models with fractional derivatives via history-dependent conditions

    NASA Astrophysics Data System (ADS)

    Du, Maolin; Wang, Zaihua

    2016-04-01

    Fractional differential equations are more and more used in modeling memory (history-dependent, non-local, or hereditary) phenomena. Conventional initial values of fractional differential equations are defined at a point, while recent works define initial conditions over histories. Using a simple counter-example, we prove that the conventional initialization of fractional differential equations with a Riemann-Liouville derivative is wrong. The initial values were assumed to be arbitrarily given for a typical fractional differential equation, but we find that one of these values can only be zero. We show that fractional differential equations are of infinite dimension, and that the initial conditions, the initial histories, are defined as functions over intervals. We obtain the equivalent integral equation for the Caputo case. With a simple fractional model of materials, we illustrate that the recovery behavior is correct with the initial creep history, but wrong with initial values at the starting point of the recovery. We demonstrate the application of initial histories by solving a forced fractional Lorenz system numerically.

  19. Effective Algorithm for Detection and Correction of the Wave Reconstruction Errors Caused by the Tilt of Reference Wave in Phase-shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Xu, Xianfeng; Cai, Luzhong; Li, Dailin; Mao, Jieying

    2010-04-01

    In phase-shifting interferometry (PSI) the reference wave is usually assumed to be an on-axis plane wave. In practice, however, a slight tilt of the reference wave often occurs, and this tilt introduces unexpected errors in the reconstructed object wavefront. Usually the least-squares method with iterations, which is time consuming, is employed to analyze the phase errors caused by the tilt of the reference wave. Here a simple, effective algorithm is suggested to detect and then correct this kind of error. In this method, only simple mathematical operations are used, avoiding the least-squares equations needed in most previously reported methods. It can be used for generalized phase-shifting interferometry with two or more frames, for both smooth and diffusing objects, and its excellent performance has been verified by computer simulations. The numerical simulations show that the wavefront reconstruction errors can be reduced by 2 orders of magnitude.
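
    The article's algorithm is not spelled out here, but the effect it corrects is the linear phase ramp that a tilted reference wave imprints on the reconstruction; a generic least-squares plane-fit version of tilt removal on an unwrapped phase map (not the authors' exact method) looks like this:

        import numpy as np

        def remove_tilt(phase):
            """Fit a*x + b*y + c to an unwrapped phase map and subtract it."""
            ny, nx = phase.shape
            y, x = np.mgrid[0:ny, 0:nx]
            A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
            coeffs, *_ = np.linalg.lstsq(A, phase.ravel(), rcond=None)
            return phase - (A @ coeffs).reshape(ny, nx)

        yy, xx = np.mgrid[0:64, 0:64]
        print(np.abs(remove_tilt(0.01 * xx + 0.02 * yy)).max())  # ~0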

  20. Simulation and Correction of Triana-Viewed Earth Radiation Budget with ERBE/ISCCP Data

    NASA Technical Reports Server (NTRS)

    Huang, Jian-Ping; Minnis, Patrick; Doelling, David R.; Valero, Francisco P. J.

    2002-01-01

    This paper describes the simulation of the earth radiation budget (ERB) as viewed by Triana and the development of correction models for converting Triana-viewed radiances into a complete ERB. A full range of Triana views and global radiation fields is simulated using a combination of datasets from ERBE (Earth Radiation Budget Experiment) and ISCCP (International Satellite Cloud Climatology Project) and analyzed with a set of empirical correction factors specific to the Triana views. The results show that the accuracy of global correction factors used to estimate the ERB from Triana radiances is a function of the Triana position relative to the Lagrange-1 (L1) point, or the Sun location. Spectral analysis of the global correction factor indicates that both shortwave (SW; 0.2-5.0 microns) and longwave (LW; 5-50 microns) parameters undergo seasonal and diurnal cycles that dominate the periodic fluctuations. The diurnal cycle, especially its amplitude, is also strongly dependent on the seasonal cycle. Based on these results, models are developed to correct the radiances for unviewed areas and anisotropic emission and reflection. A preliminary assessment indicates that these correction models can be applied to Triana radiances to produce the most accurate global ERB to date.

  1. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.

  2. Genetic risk factors for ovarian cancer and their role for endometriosis risk.

    PubMed

    Burghaus, Stefanie; Fasching, Peter A; Häberle, Lothar; Rübner, Matthias; Büchner, Kathrin; Blum, Simon; Engel, Anne; Ekici, Arif B; Hartmann, Arndt; Hein, Alexander; Beckmann, Matthias W; Renner, Stefan P

    2017-04-01

    Several genetic variants have been validated as risk factors for ovarian cancer. Endometriosis has also been described as a risk factor for ovarian cancer. Identifying genetic risk factors that are common to the two diseases might help improve our understanding of the molecular pathogenesis potentially linking the two conditions. In a hospital-based case-control analysis, 12 single nucleotide polymorphisms (SNPs), validated by the Ovarian Cancer Association Consortium (OCAC) and the Collaborative Oncological Gene-environment Study (COGS) project, were genotyped using TaqMan® OpenArray™ analysis. The cases consisted of patients with endometriosis, and the controls were healthy individuals without endometriosis. A total of 385 cases and 484 controls were analyzed. Odds ratios and P values were obtained using simple logistic regression models, as well as from multiple logistic regression models with adjustment for clinical predictors. rs11651755 in HNF1B was found to be associated with endometriosis in this case-control study. The OR was 0.66 (95% CI, 0.51 to 0.84) and the P value after correction for multiple testing was 0.01. None of the other genotypes was associated with a risk for endometriosis. As rs11651755 in HNF1B modified both the ovarian cancer risk and the risk for endometriosis, HNF1B may be causally involved in the pathogenetic pathway leading from endometriosis to ovarian cancer.
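
    A minimal sketch of the per-SNP analysis on synthetic data (simple logistic regression per SNP; Bonferroni is used here for the multiple-testing step, though the paper does not state its exact adjustment method):

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(0)
    n_subjects, n_snps = 869, 12          # 385 cases + 484 controls, 12 SNPs
    genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))  # allele counts
    status = rng.integers(0, 2, size=n_subjects)  # 1 = endometriosis case

    pvals = []
    for j in range(n_snps):
        X = sm.add_constant(genotypes[:, j].astype(float))
        fit = sm.Logit(status, X).fit(disp=0)   # simple (unadjusted) model
        pvals.append(fit.pvalues[1])
        print(f"SNP {j}: OR = {np.exp(fit.params[1]):.2f}, p = {pvals[-1]:.3f}")

    # Correct for testing 12 SNPs (with random data no association is expected).
    rejected, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
    ```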

  3. Calibration of entrance dose measurement for an in vivo dosimetry programme.

    PubMed

    Ding, W; Patterson, W; Tremethick, L; Joseph, D

    1995-11-01

    An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying the dose at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLD) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD owing to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of error by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements using a Victoreen White Water-RW3 tissue-equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source-to-surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of diode response was also investigated. The SSD correction factor (CSSD) was found to increase by approximately 3% over the range of SSDs from 80 to 130 cm. The correction factor for collimator size (Cfield) also varied by approximately 3% between 5 x 5 and 40 x 40 cm2. The wedge correction factor (Cwedge) and plate correction factor (Cplate) were found to be functions of collimator size. Over the range of measurement, these factors varied by a maximum of 1% and 1.5%, respectively. The Cplate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. The diode sensitivity increased with temperature. A maximum variation of 2.5% in the directional dependence of diode response was observed for angles of +/- 60 degrees. In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve high accuracy in the dose delivered to the patient.
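
    Schematically, a diode reading is converted to entrance dose by multiplying the calibration factor and the chain of correction factors. All numbers below are illustrative, not the paper's calibration data:

    ```python
    # Hypothetical in vivo entrance-dose calculation for one diode reading.
    reading = 98.2           # diode signal (arbitrary units)
    f_cal   = 1.020          # calibration factor (cGy per unit reading)

    corrections = {
        "SSD":         1.012,  # SSD differs from the calibration SSD
        "field_size":  0.994,  # collimator setting differs from reference field
        "wedge":       1.008,  # wedged beam
        "tray":        1.005,  # blocking tray/plate in the beam
        "temperature": 0.990,  # diode sensitivity rises with temperature
    }

    entrance_dose = reading * f_cal
    for name, c in corrections.items():
        entrance_dose *= c
    print(f"entrance dose: {entrance_dose:.1f} cGy")
    ```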

  4. Edge-to-center plasma density ratios in two-dimensional plasma discharges

    NASA Astrophysics Data System (ADS)

    Lucken, R.; Croes, V.; Lafleur, T.; Raimbault, J.-L.; Bourdon, A.; Chabert, P.

    2018-03-01

    Edge-to-center plasma density ratios—so-called h factors—are important parameters for global models of plasma discharges as they are used to calculate the plasma losses at the reactor walls. There are well-established theories for h factors in the one-dimensional (1D) case. The purpose of this paper is to establish h factors in two-dimensional (2D) systems, with guidance from a 2D particle-in-cell (PIC) simulation. We derive analytical solutions of a 2D fluid theory that includes the effect of ion inertia, but assumes a constant (independent of space) ion collision frequency (using an average ion velocity) across the discharge. Predicted h factors from this 2D fluid theory have the same order of magnitude and the same trends as the PIC simulations when the average ion velocity used in the collision frequency is set equal to the ion thermal velocity. The best agreement is obtained when the average ion velocity varies with pressure (but remains independent of space), going from half the Bohm velocity at low pressure, to the thermal velocity at high pressure. The analysis also shows that a simple correction of the widely-used 1D heuristic formula may be proposed to accurately incorporate 2D effects.
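
    For reference, the widely used 1D heuristic that the paper proposes to correct is commonly written h_l ≈ 0.86 (3 + L/2λ_i)^(-1/2). A minimal sketch follows (the paper's 2D correction itself is not given in the abstract; example dimensions are assumed):

    ```python
    import numpy as np

    def h_l_1d(L, lambda_i):
        """Widely used 1D heuristic edge-to-center density ratio for low to
        intermediate pressures; L is the gap length and lambda_i the
        ion-neutral mean free path, both in metres."""
        return 0.86 / np.sqrt(3.0 + L / (2.0 * lambda_i))

    # Example: 5 cm gap, 3 mm ion mean free path (a few mTorr range, assumed).
    print(h_l_1d(0.05, 3e-3))   # ~0.26
    ```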

  5. Reconstituting factor concentrates: Defining Evidence of Coaching Non-Experts (DEVICE) in haemophilia--a prospective randomized feasibility study.

    PubMed

    Bidlingmaier, C; Kurnik, K; Hölscher, G; Kappler, M

    2007-09-01

    The introduction of new needleless devices, as demanded by the US Department of Labor Occupational Safety and Health Administration (OSHA), has caused problems with the reconstitution of antihaemophilic factor in emergency situations. Our aim therefore was to evaluate the feasibility of a needleless device for reconstitution of antihaemophilic factor by non-haemophilia experts, and to define evidence of the need for coaching these physicians by providing two additional photographs illustrating the two key points of the factor reconstitution process. Twenty-eight physicians of a tertiary care university children's hospital were randomized into two groups, one receiving no further explanation of the reconstitution device and the other receiving two additional photographs showing the two key steps of the procedure. Reconstitution of dummy factor concentrate was videotaped and evaluated by a blinded helper. The main outcome measures were successful reconstitution of the dummy factor concentrate and procedure failure, respectively. In the group without explanation of the reconstitution device, only two of 14 physicians were able to reconstitute the dummy factor concentrate. In the group receiving two photographs, nine of 14 completed the task successfully (P = 0.0068). The needleless device is not self-explanatory to non-haemophilia physicians involved in emergency services. Coaching via short, to-the-point instructions provided by simple visual educational material is therefore crucial to enable these physicians to dissolve this expensive emergency drug quickly and correctly. Companies producing devices to dissolve drugs, especially for the treatment of rare diseases such as haemophilia, should therefore take measures to simplify therapy.

  6. Knowledge about infertility risk factors, fertility myths and illusory benefits of healthy habits in young people.

    PubMed

    Bunting, Laura; Boivin, Jacky

    2008-08-01

    Previous research has highlighted a lack of fertility awareness in the general population, especially in relation to the optimal fertile period during the menstrual cycle, the incidence of infertility and the duration of the reproductive life span. The current study assessed fertility knowledge more broadly in young people and investigated three areas of knowledge, namely risk factors associated with female infertility (e.g. smoking), beliefs in false fertility myths (e.g. benefits of rural living) and beliefs in the illusory benefits of healthy habits (e.g. exercising regularly) for female fertility. The sample (n = 149) consisted of 110 female and 39 male postgraduate and undergraduate university students (average age 24.01 years, SD = 7.81). Knowledge scores were based on a simple task requiring the participants to estimate the effect a factor would have on a group of 100 women trying to get pregnant. Items (n = 21) were grouped according to three categories: risk factors (e.g. smoking; 7 items), myths (e.g. living in the countryside; 7 items) and healthy habits (e.g. being of normal weight; 7 items). An analysis of variance showed a significant main effect of factor (P < 0.001), and post hoc tests revealed that young people were significantly better at correctly identifying the effects of risks compared with the null effects of healthy habits (P < 0.001) or fertility myths (P < 0.001). Young people are aware that negative lifestyle factors reduce fertility but falsely believe in fertility myths and the benefits of healthy habits. We suggest that public education campaigns should be directed at erroneous beliefs about pseudo-protective factors.

  7. Nonsurgical correction of congenital ear abnormalities in the newborn: Case series.

    PubMed

    Smith, Wg; Toye, Jw; Reid, A; Smith, Rw

    2005-07-01

    To determine whether a simple, nonsurgical treatment for congenital ear abnormalities (lop-ear, Stahl's ear, protruding ear, cryptotia) improved the appearance of ear abnormalities in newborns at six weeks of age. This is a descriptive case series. All newborns with identified abnormalities were referred by their family physician to one paediatrician (WGS) in a small level 2 perinatal centre. The ears were waxed and taped in a standard manner within 10 days of birth. Pictures were taken before taping and at the end of taping (one month). All patients and pictures were assessed by one plastic surgeon (JWT) at six weeks of age and scored using a standard scoring system. A telephone survey of the nontreatment group was conducted. The total number of ears assessed was 90. Of this total, 69 ears were taped and fully evaluated in the study (77%). The refusal rate was 23%. In the treatment group, 59% had lop-ear, 19% had Stahl's ear, 17% had protruding ear and 3% had cryptotia. Overall correction (excellent/improved) for the treatment group was 90% (100% for lop-ear, 100% for Stahl's ear, 67% for protruding ear and 0% for cryptotia). In the nontreatment (refusal) group, 67% of the ears failed to correct spontaneously. No complications were recognized by the authors or parents by six weeks. The percentage of newborns in one year in the perinatal centre with recognized ear abnormalities was 6% (90 of 1600). A simple, nonsurgical treatment in a Caucasian population appeared to be very effective in correcting congenital ear abnormalities with no complications and high patient/parent satisfaction.

  8. Floristic composition and across-track reflectance gradient in Landsat images over Amazonian forests

    NASA Astrophysics Data System (ADS)

    Muro, Javier; doninck, Jasper Van; Tuomisto, Hanna; Higgins, Mark A.; Moulatlet, Gabriel M.; Ruokolainen, Kalle

    2016-09-01

    Remotely sensed image interpretation or classification of tropical forests can be severely hampered by the effects of the bidirectional reflection distribution function (BRDF). Even for narrow swath sensors like Landsat TM/ETM+, the influence of reflectance anisotropy can be sufficiently strong to introduce a cross-track reflectance gradient. If the BRDF could be assumed to be linear for the limited swath of Landsat, it would be possible to remove this gradient during image preprocessing using a simple empirical method. However, the existence of natural gradients in reflectance caused by spatial variation in floristic composition of the forest can restrict the applicability of such simple corrections. Here we use floristic information over Peruvian and Brazilian Amazonia acquired through field surveys, complemented with information from geological maps, to investigate the interaction of real floristic gradients and the effect of reflectance anisotropy on the observed reflectances in Landsat data. In addition, we test the assumption of linearity of the BRDF for a limited swath width, and whether different primary non-inundated forest types are characterized by different magnitudes of the directional reflectance gradient. Our results show that a linear function is adequate to empirically correct for view angle effects, and that the magnitude of the across-track reflectance gradient is independent of floristic composition in the non-inundated forests we studied. This makes a routine correction of view angle effects possible. However, floristic variation complicates the issue, because different forest types have different mean reflectances. This must be taken into account when deriving the correction function in order to avoid eliminating natural gradients.
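
    A minimal sketch of the simple empirical correction discussed here: fit a linear function of across-track position (a proxy for view angle) and subtract the ramp, optionally restricting the fit to one forest type (the function name and details are my assumptions):

    ```python
    import numpy as np

    def detrend_across_track(band, mask=None):
        """Remove a linear across-track (column-wise) reflectance gradient
        from a 2D band. `mask` can restrict the fit to a single forest type
        so that mean reflectance differences between types do not bias the
        fitted slope, a concern raised in the paper."""
        _, cols = np.indices(band.shape)
        sel = np.ones(band.shape, dtype=bool) if mask is None else mask
        slope, _ = np.polyfit(cols[sel], band[sel], 1)
        # Subtract the fitted ramp, pivoting about the swath centre so the
        # scene mean is essentially preserved.
        return band - slope * (cols - band.shape[1] / 2.0)
    ```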

  9. Control circuit maintains unity power factor of reactive load

    NASA Technical Reports Server (NTRS)

    Kramer, M.; Martinage, L. H.

    1966-01-01

    Circuit including feedback control elements automatically corrects the power factor of a reactive load. It maintains power supply efficiency as the load reactance changes by providing corrective error signals to the control windings of a power supply transformer.
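
    The NTRS abstract gives no circuit details; as background, the standard power-factor-correction arithmetic that such a controller effectively automates can be sketched as follows (illustrative numbers, not from the report):

    ```python
    import math

    # Bring a 10 kW load from pf = 0.70 lagging up to pf = 0.95.
    P = 10e3                      # real power, W
    phi1 = math.acos(0.70)        # load phase angle before correction
    phi2 = math.acos(0.95)        # target phase angle
    # Reactive power the compensator must supply, in var:
    Q_c = P * (math.tan(phi1) - math.tan(phi2))
    print(f"required compensating reactive power: {Q_c / 1e3:.2f} kvar")  # ~6.9
    ```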

  10. Experimental Verification of the Theory of Wind-Tunnel Boundary Interference

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore; Silverstein, Abe

    1935-01-01

    The results of an experimental investigation on the boundary-correction factor are presented in this report. The values of the boundary-correction factor from the theory, which at the present time is virtually completed, are given in the report for all conventional types of tunnels. With the isolation of certain disturbing effects, the experimental boundary-correction factor was found to be in satisfactory agreement with the theoretically predicted values, thus verifying the soundness and sufficiency of the theoretical analysis. The establishment of a considerable velocity distortion, in the nature of a unique blocking effect, constitutes a principal result of the investigation.

  11. Improving global estimates of syphilis in pregnancy by diagnostic test type: A systematic review and meta-analysis.

    PubMed

    Ham, D Cal; Lin, Carol; Newman, Lori; Wijesooriya, N Saman; Kamb, Mary

    2015-06-01

    "Probable active syphilis," is defined as seroreactivity in both non-treponemal and treponemal tests. A correction factor of 65%, namely the proportion of pregnant women reactive in one syphilis test type that were likely reactive in the second, was applied to reported syphilis seropositivity data reported to WHO for global estimates of syphilis during pregnancy. To identify more accurate correction factors based on test type reported. Medline search using: "Syphilis [Mesh] and Pregnancy [Mesh]," "Syphilis [Mesh] and Prenatal Diagnosis [Mesh]," and "Syphilis [Mesh] and Antenatal [Keyword]. Eligible studies must have reported results for pregnant or puerperal women for both non-treponemal and treponemal serology. We manually calculated the crude percent estimates of subjects with both reactive treponemal and reactive non-treponemal tests among subjects with reactive treponemal and among subjects with reactive non-treponemal tests. We summarized the percent estimates using random effects models. Countries reporting both reactive non-treponemal and reactive treponemal testing required no correction factor. Countries reporting non-treponemal testing or treponemal testing alone required a correction factor of 52.2% and 53.6%, respectively. Countries not reporting test type required a correction factor of 68.6%. Future estimates should adjust reported maternal syphilis seropositivity by test type to ensure accuracy. Published by Elsevier Ireland Ltd.

  12. Improved scatterer property estimates from ultrasound backscatter for small gate lengths using a gate-edge correction factor

    NASA Astrophysics Data System (ADS)

    Oelze, Michael L.; O'Brien, William D.

    2004-11-01

    Backscattered rf signals used to construct conventional ultrasound B-mode images contain frequency-dependent information that can be examined through the backscattered power spectrum. The backscattered power spectrum is found by taking the magnitude squared of the Fourier transform of a gated time segment corresponding to a region in the scattering volume. When a time segment is gated, the edges of the gated region change the frequency content of the backscattered power spectrum owing to truncation of the waveform. Tapered windows, like the Hanning window, and longer gate lengths reduce the relative contribution of these gate-edge effects. A new gate-edge correction factor was developed that partially accounts for the edge effects. The gate-edge correction factor gave more accurate estimates of scatterer properties at small gate lengths compared to conventional windowing functions, yielding estimates within 5% of actual values at very small gate lengths (less than 5 spatial pulse lengths) in both simulations and measurements on glass-bead phantoms. While the gate-edge correction factor gave higher accuracy of estimates at smaller gate lengths, the precision of estimates at small gate lengths was not improved over conventional windowing functions.
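
    For context, the conventional windowed estimate that the correction factor improves on looks like this (synthetic RF data; the gate-edge correction factor itself is not reproduced here):

    ```python
    import numpy as np

    def gated_power_spectrum(rf_line, start, gate_len, window="hann"):
        """Power spectrum of a gated RF segment, as used for scatterer-property
        estimation. Tapered windows such as Hann reduce (but do not remove)
        gate-edge effects."""
        segment = rf_line[start:start + gate_len].astype(float)
        if window == "hann":
            segment = segment * np.hanning(gate_len)
        return np.abs(np.fft.rfft(segment)) ** 2

    # Example with a synthetic RF line:
    rf = np.random.default_rng(1).standard_normal(2048)
    ps = gated_power_spectrum(rf, start=500, gate_len=128)
    ```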

  13. Air-braked cycle ergometers: validity of the correction factor for barometric pressure.

    PubMed

    Finn, J P; Maxwell, B F; Withers, R T

    2000-10-01

    Barometric pressure exerts by far the greatest influence of the three environmental factors (barometric pressure, temperature and humidity) on power outputs from air-braked ergometers. The barometric pressure correction factor for power outputs from air-braked ergometers is in widespread use but apparently has never been empirically validated. Our experiment validated this correction factor by calibrating two air-braked cycle ergometers in a hypobaric chamber using a dynamic calibration rig. The results showed that if the power output correction for changes in air resistance at barometric pressures corresponding to altitudes of 38, 600, 1,200 and 1,800 m above mean sea level was applied, then the coefficients of variation were 0.8-1.9% over the range of 160-1,597 W. The overall mean error was 3.0%, but this included up to 0.73% of propagated error associated with errors in the measurement of: a) temperature; b) relative humidity; c) barometric pressure; and d) force, distance and angular velocity by the dynamic calibration rig. The overall mean error therefore approximated the +/- 2.0% of true load specified by the Laboratory Standards Assistance Scheme of the Australian Sports Commission. The validity of the correction factor for barometric pressure on power output was therefore demonstrated over the altitude range of 38-1,800 m.
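
    The physical basis is that an air-braked ergometer's load scales with air density. A minimal dry-air sketch follows (humidity, a smaller effect, is ignored, and all numbers are assumed):

    ```python
    # Indicated power is corrected by the ratio of test-day to calibration-day
    # air density, computed here from the ideal gas law for dry air.
    R_DRY = 287.05                # J/(kg K), specific gas constant of dry air

    def air_density(pressure_pa, temp_c):
        return pressure_pa / (R_DRY * (temp_c + 273.15))

    rho_cal  = air_density(101325.0, 20.0)  # sea-level calibration conditions
    rho_test = air_density(81500.0, 15.0)   # ~1800 m altitude (assumed values)

    indicated_power = 400.0                 # W, read from the ergometer
    true_power = indicated_power * rho_test / rho_cal
    print(f"corrected power: {true_power:.0f} W")   # ~327 W
    ```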

  14. A stable second order method for training back propagation networks

    NASA Technical Reports Server (NTRS)

    Nachtsheim, Philip R.

    1993-01-01

    A simple method for improving the learning rate of the back-propagation algorithm is described. The basis of the method is that approximate second order corrections can be incorporated in the output units. The extended method leads to significant improvements in the convergence rate.
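
    The abstract does not give the exact update rule; as a hedged sketch of incorporating approximate second-order corrections at the output units, the example below scales each gradient component by a diagonal Gauss-Newton curvature estimate with damping (my assumptions, not necessarily the author's scheme):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))                    # training inputs
    t = (X @ rng.standard_normal(5) > 0).astype(float)   # binary targets
    w = np.zeros(5)                                      # output-unit weights
    eta, mu = 0.5, 1e-3                                  # step size, damping

    for epoch in range(100):
        y = 1.0 / (1.0 + np.exp(-(X @ w)))               # sigmoid output units
        err, fprime = y - t, y * (1.0 - y)
        grad = X.T @ (err * fprime) / len(X)             # first-order gradient
        # Diagonal Gauss-Newton curvature of the squared error per weight:
        curv = (X.T ** 2) @ (fprime ** 2) / len(X)
        w -= eta * grad / (curv + mu)                    # curvature-scaled step

    print("training accuracy:", np.mean((y > 0.5) == (t > 0.5)))
    ```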

  15. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
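
    A minimal sketch of the correction as usually applied: compute local contrast K = sigma/mean, then divide K² by a system factor beta that absorbs pixel-scale spatial averaging (the paper's result supports the linearity of exactly this kind of correction; the window size and beta estimation are assumptions):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(img, win=7):
        """Local speckle contrast K = sigma / mean over win x win windows."""
        img = img.astype(float)
        m = uniform_filter(img, win)
        m2 = uniform_filter(img ** 2, win)
        var = np.maximum(m2 - m ** 2, 0.0)
        return np.sqrt(var) / np.maximum(m, 1e-12)

    def corrected_contrast_sq(img, beta, win=7):
        """Linear system-factor correction: divide the measured K^2 by beta,
        which can be estimated from a static scatterer, where the corrected
        K^2 should approach 1."""
        return speckle_contrast(img, win) ** 2 / beta
    ```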

  16. An advanced method to assess the diet of free-ranging large carnivores based on scats.

    PubMed

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert

    2012-01-01

    The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared with the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
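
    A sketch of fitting such a saturating exponential for correction factor 1 (made-up data points, and a functional form chosen to match the description; the paper's exact parameterization may differ):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sat_model(m, a, b):
        """Consumed prey mass per scat, approaching asymptote a at large
        prey body mass m."""
        return a * (1.0 - np.exp(-b * m))

    # Illustrative feeding-experiment data (kg), not the paper's measurements.
    prey_mass     = np.array([2.0, 5.0, 15.0, 30.0, 60.0, 120.0])
    mass_per_scat = np.array([0.4, 0.9, 1.6, 2.0, 2.3, 2.4])

    params, _ = curve_fit(sat_model, prey_mass, mass_per_scat, p0=(2.5, 0.1))
    print("asymptote a = %.2f kg, rate b = %.3f per kg" % tuple(params))
    ```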

  17. An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats

    PubMed Central

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert

    2012-01-01

    Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared with the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373

  18. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter-corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose-calibrator-derived measurement was found to be <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo-based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
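
    For reference, the standard TEW estimate used as method (2) can be sketched as follows; the window widths and counts are illustrative:

    ```python
    def tew_scatter_corrected(c_peak, c_low, c_high, w_peak, w_low, w_high):
        """Triple-energy-window (TEW) scatter correction for one projection
        bin: scatter under the photopeak is estimated as the area of a
        trapezoid whose heights are the count densities in the two flanking
        narrow windows."""
        scatter = (c_low / w_low + c_high / w_high) * w_peak / 2.0
        return max(c_peak - scatter, 0.0)

    # Example (illustrative numbers): 364 keV photopeak window of width
    # 58 keV, flanked by two 6 keV windows.
    print(tew_scatter_corrected(1500, 40, 25, 58.0, 6.0, 6.0))
    ```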

  19. Simple statistical bias correction techniques greatly improve moderate resolution air quality forecast at station level

    NASA Astrophysics Data System (ADS)

    Curci, Gabriele; Falasca, Serena

    2017-04-01

    Deterministic air quality forecast is routinely carried out at many local Environmental Agencies in Europe and throughout the world by means of Eulerian chemistry-transport models. The skill of these models in predicting the ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the required quality level for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecasts at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observations from the European Environmental Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from the FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate differences and advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
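
    Minimal sketches of techniques (1) and (2); technique (3), MOS, is an ordinary regression and is omitted. The function names and the orientation of the ratio are my assumptions:

    ```python
    import numpy as np

    def multiplicative_correction(forecast_today, fcst_prev, obs_prev):
        """Technique (1): scale today's forecast by the mean observed-to-
        forecast ratio over the preceding days at the station (equivalently,
        divide by the model-to-observed ratio)."""
        return forecast_today * np.mean(obs_prev) / np.mean(fcst_prev)

    def quantile_mapping(forecast_today, fcst_clim, obs_clim):
        """Technique (2): map the forecast through the two empirical CDFs so
        that its distribution matches the observed one."""
        q = np.searchsorted(np.sort(fcst_clim), forecast_today) / len(fcst_clim)
        return np.quantile(obs_clim, np.clip(q, 0.0, 1.0))
    ```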

  20. Correction for the 17O interference in δ(13C) measurements when analyzing CO2 with stable isotope mass spectrometry

    USGS Publications Warehouse

    Coplen, Tyler B.; Brand, Willi A.; Assonov, Sergey S.

    2010-01-01

    Measurements of δ(13C) determined on CO2 with an isotope-ratio mass spectrometer (IRMS) must be corrected for the amount of 17O in the CO2. For data consistency, this must be done using identical methods by different laboratories. This report aims at unifying data treatment for CO2 IRMS by proposing (i) a unified set of numerical values, and (ii) a unified correction algorithm, based on a simple, linear approximation formula. Because the oxygen of natural CO2 is derived mostly from the global water pool, it is recommended that a value of 0.528 be employed for the factor λ, which relates differences in 17O and 18O abundances. With the currently accepted N(13C)/N(12C) of 0.011 180(28) in VPDB (Vienna Peedee belemnite), reevaluation of data yields a value of 0.000 393(1) for the oxygen isotope ratio N(17O)/N(16O) of the evolved CO2. The ratio of these quantities, a ratio of isotope ratios, is essential for the 17O abundance correction: [N(17O)/N(16O)]/[N(13C)/N(12C)] = 0.035 16(8). The equation δ(13C) ≈ 45δ(VPDB-CO2) + 2·(17R/13R)·[45δ(VPDB-CO2) − λ·46δ(VPDB-CO2)], where 45δ and 46δ are the delta values measured on the m/z 45 and 46 ion beams, closely approximates δ(13C) values with less than 0.010 ‰ deviation for normal oxygen-bearing materials and no more than 0.026 ‰ in extreme cases. Other materials containing oxygen of non-mass-dependent isotope composition require a more specific data treatment. A similar linear approximation is also suggested for δ(18O). The linear approximations are easy to implement in a data spreadsheet, and also help in generating a simplified uncertainty budget.
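
    The linear approximation with the recommended constants is straightforward to implement; a sketch (the example inputs are arbitrary):

    ```python
    # 17O correction with the recommended constants: lambda = 0.528 and
    # 17R/13R = 0.035 16. Delta values are in per mil vs. VPDB-CO2.
    LAMBDA = 0.528
    R17_OVER_R13 = 0.03516

    def delta13C(d45, d46):
        """Approximate delta(13C) from the measured m/z 45 and 46 deltas."""
        return d45 + 2.0 * R17_OVER_R13 * (d45 - LAMBDA * d46)

    print(f"{delta13C(-10.0, 0.5):.3f}")  # arbitrary example inputs
    ```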
