Sample records for correction factor method

  1. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale coarse-resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from observed climatological data and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods and tested on six watersheds in different climatic zones of India to assess the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data were used to test the effectiveness of the proposed bias correction method. Quantile-quantile (Q-Q) plots and the Nash-Sutcliffe efficiency (NSE) were employed to evaluate the different bias correction methods. The analysis suggested that the proposed method corrects the daily bias in rainfall more effectively than monthly factors. The methods that adjusted wet-day frequencies, such as local intensity scaling, modified power transformation and distribution mapping, performed better than the methods that did not. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of the observed data, with NSE values above 0.81 over most parts of India. Hydrological simulations forced with the bias-corrected rainfall (distribution mapping and modified power transformation methods using the proposed daily correction factors) were similar to those forced with the IMD rainfall. The results demonstrate that the methods and time scales used for bias correction of RCM rainfall data have a large impact on the accuracy of the daily rainfall and, consequently, the simulated streamflow. The analysis suggests that distribution mapping with daily correction factors is preferable for adjusting RCM rainfall data, irrespective of season or climate zone, for realistic simulation of streamflow.
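    The core idea of the record above — deriving a distribution-mapping correction for each calendar day from a window of surrounding days pooled across years — can be sketched as empirical quantile mapping. This is a minimal illustration of the sliding-window concept, not the authors' exact procedure; the window half-width, array layout, and function name are assumptions.

```python
import numpy as np

def sliding_window_quantile_map(obs, mod, half_window=15):
    """Bias-correct modelled daily rainfall by empirical quantile
    (distribution) mapping, deriving the mapping for each calendar day
    from a pool of values within +/- half_window days across all years.

    obs, mod : arrays of shape (n_years, 365), observed / modelled rainfall.
    Returns a bias-corrected array of the same shape as mod.
    """
    n_years, n_days = mod.shape
    corrected = np.empty_like(mod, dtype=float)
    for d in range(n_days):
        # calendar-day window, wrapping around the year boundary
        window = np.arange(d - half_window, d + half_window + 1) % n_days
        obs_pool = np.sort(obs[:, window].ravel())
        mod_pool = np.sort(mod[:, window].ravel())
        # empirical quantile of each modelled value in the model pool,
        # mapped onto the observed pool's CDF
        q = np.searchsorted(mod_pool, mod[:, d], side="right") / mod_pool.size
        corrected[:, d] = np.quantile(obs_pool, np.clip(q, 0.0, 1.0))
    return corrected
```

    A full implementation would additionally handle wet-day frequency adjustment, which the record identifies as important for the best-performing methods.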

  2. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.

  3. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed in a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. Combined with the spatial responsivity data of the sphere, this angular distribution is used to calculate the spatial non-uniformity correction for the lamp. The method was validated against a traditional goniophotometric approach by determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from -0.15% to +0.15%, with a mean magnitude of 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) of the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
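    The combination step described above — weighting the sphere's spatial responsivity by the lamp's relative angular intensity distribution — might look like the following sketch. The reciprocal-of-weighted-mean form and the normalisation convention are assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def spatial_correction_factor(intensity, responsivity):
    """Sketch: spatial non-uniformity correction as the reciprocal of the
    sphere's spatial responsivity averaged over directions, weighted by the
    lamp's relative angular intensity distribution. Responsivity is assumed
    normalised so that a perfectly uniform sphere gives a factor of 1."""
    w = np.asarray(intensity, dtype=float)
    r = np.asarray(responsivity, dtype=float)
    return float(np.sum(w) / np.sum(w * r))
```

    With a uniform responsivity the factor is exactly 1; a lamp that concentrates flux where the sphere under-responds yields a factor above 1.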

  4. Correction factors for the NMi free-air ionization chamber for medium-energy x-rays calculated with the Monte Carlo method.

    PubMed

    Grimbergen, T W; van Dijk, E; de Vries, W

    1998-11-01

    A new method is described for the determination of x-ray quality dependent correction factors for free-air ionization chambers. The method is based on weighting correction factors for mono-energetic photons, which are calculated using the Monte Carlo method, with measured air kerma spectra. With this method, correction factors for electron loss, scatter inside the chamber and transmission through the diaphragm and front wall have been calculated for the NMi free-air chamber for medium-energy x-rays for a wide range of x-ray qualities in use at NMi. The newly obtained correction factors were compared with the values in use at present, which are based on interpolation of experimental data for a specific set of x-ray qualities. For x-ray qualities which are similar to this specific set, the agreement between the correction factors determined with the new method and those based on the experimental data is better than 0.1%, except for heavily filtered x-rays generated at 250 kV. For x-ray qualities dissimilar to the specific set, differences up to 0.4% exist, which can be explained by uncertainties in the interpolation procedure of the experimental data. Since the new method does not depend on experimental data for a specific set of x-ray qualities, the new method allows for a more flexible use of the free-air chamber as a primary standard for air kerma for any x-ray quality in the medium-energy x-ray range.
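    The weighting step described in this record — combining Monte Carlo correction factors for mono-energetic photons with a measured air-kerma spectrum — reduces to a spectrum-weighted average. The sketch below uses binned sums in place of the integral; the function and variable names are illustrative.

```python
import numpy as np

def spectrum_weighted_factor(kerma_spectrum, k_mono):
    """Sketch: combine mono-energetic correction factors k(E) into a single
    factor for one x-ray quality by weighting with the measured air-kerma
    spectrum w(E) over the same energy bins:
        k = sum(w * k) / sum(w)."""
    w = np.asarray(kerma_spectrum, dtype=float)
    k = np.asarray(k_mono, dtype=float)
    return float(np.sum(w * k) / np.sum(w))
```

    Because the weights come from the measured spectrum of the quality in question, no interpolation between a fixed set of calibration qualities is needed, which is the flexibility the abstract highlights.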

  5. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media has been proven possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM), thereby improving the focusing quality. The correction phase is often found by global searching algorithms, among which the genetic algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually as the optimization progresses, causing the improvement factor to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor for the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization is performed individually and sequentially on each group of segments. The final correction phase mask is formed by applying the correction phases of all interleaved groups together on the SLM. The ISC method proves especially useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We also demonstrate that imaging quality improves as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demands on the dynamic range of detection devices. The proposed method holds potential for applications such as high-resolution imaging in deep tissue.
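    The grouping step of the ISC method — dividing all SLM phase segments into interleaved groups that are then optimized one after another — can be sketched in a few lines. The GA optimization itself is omitted, and a flat 1-D indexing of segments is an assumption (a real SLM mask is 2-D).

```python
import numpy as np

def interleaved_groups(n_segments, n_groups):
    """Split phase-segment indices into interleaved groups: group i holds
    segments i, i + n_groups, i + 2 * n_groups, ...  Each group is later
    optimized separately; the union of all groups covers every segment."""
    indices = np.arange(n_segments)
    return [indices[i::n_groups] for i in range(n_groups)]
```

    The final mask is then assembled by writing each group's optimized phases back into its interleaved positions.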

  6. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  7. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.

  8. Hypothesis Testing Using Factor Score Regression: A Comparison of Four Methods

    ERIC Educational Resources Information Center

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2016-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and…

  9. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation, and the extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for backscatter due to the collecting electrode and the guard ring, and the correction factor for bremsstrahlung in beta secondary standard radiation beams, are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for the ⁹⁰Sr/⁹⁰Y, ⁸⁵Kr and ¹⁴⁷Pm sources, respectively. The differences between the bremsstrahlung factors determined by the Monte Carlo method and those of the ISO standard were 0.25%, 0.6% and 1% for the ⁹⁰Sr/⁹⁰Y, ⁸⁵Kr and ¹⁴⁷Pm sources, respectively.

  10. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by (1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or (2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple, fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: The fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters such as the scintillating detector (Exradin W1) and a generic alanine detector were found to have corrections under 1% over a broad range of field modulations (0-0.12 for MF and 0-0.5 for UI). Other detectors showed corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  11. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of its diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system because the virtual inclinometer is incompatible with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without taking magnetic field effects into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and to test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analyses were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors, utilizing beam information from the ViewRay TPS, was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods, and the new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate for correcting measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  12. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.

  13. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to show large deviations as field sizes decrease, and no set standard exists to resolve this difference. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived, and the measured output factors were multiplied by the calculated correction factors. EBT3 gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3% to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small-field output factors for the EDGE and A16 detectors. Using this method, we were able to reduce the deviation between the two detectors from 14.8% to 3.4%.
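    The daisy-chaining plus Monte Carlo correction described above amounts to a short chain of ratios. The sketch below shows the arithmetic under common daisy-chaining practice; the variable names and the exact field sizes tied together are assumptions, not taken from the record.

```python
def daisy_chained_output_factor(m_cone, m_int_diode, m_int_chamber,
                                m_ref_chamber, k_mc):
    """Sketch of daisy-chained output factor determination: the diode's
    small-cone reading is referenced to an intermediate field measured
    with both detectors, tying it to the large-field ion-chamber
    reference, then multiplied by a published Monte Carlo derived
    correction factor k_mc for that cone/detector combination."""
    return (m_cone / m_int_diode) * (m_int_chamber / m_ref_chamber) * k_mc
```

    The intermediate field is chosen large enough for the ion chamber to be reliable yet small enough for the diode's field-size dependence to be modest, which is why the chaining reduces detector-specific bias.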

  14. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while the other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. The method is based on two reasonable assumptions in the evaluation of the compensated dose factor: first, that the relation between dose factors and circle diameters is linear in the range under consideration; and second, for simplicity, that the compensated dose factor is affected only by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate the method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved.
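    The two assumptions in this record (linear dose response, nearest-neighbor-only coupling) suggest a simple fixed-point scheme for the compensated dose. The coupling fraction alpha and the iteration below are illustrative assumptions, not the paper's actual calibration.

```python
import numpy as np

def compensate_doses(target_dose, neighbor_counts, alpha=0.1, n_iter=60):
    """Sketch of nearest-neighbor dose compensation: assume each feature
    receives a fraction alpha of each nearest neighbor's dose as proximity
    exposure, and iterate the assigned dose so that
        assigned + alpha * n_neighbors * assigned ~= target.
    The fixed point is target / (1 + alpha * n); iteration converges
    when alpha * n < 1."""
    n = np.asarray(neighbor_counts, dtype=float)
    d = np.full(n.shape, float(target_dose))
    for _ in range(n_iter):
        d = target_dose - alpha * n * d
    return d
```

    Features with more neighbors get a proportionally lower assigned dose, which is the qualitative behavior that evens out hole sizes in a dense photonic-crystal pattern.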

  15. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    The creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed, since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal correction method should account for hydration as well as the other factors, such as age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method, although, depending on the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of the ratio (for example, males), these ratios were higher for the model-based method; when estimated UCRs were lower for the numerator group (for example, NHW), the ratios were higher for the ratio-based method. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
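    The contrast between the two corrections can be made concrete with a toy sketch: the ratio-based value divides by creatinine directly, while a model-based value regresses the analyte on creatinine and keeps the residual. This is an illustrative simplification; the study's regression models also include covariates such as age, gender, and race/ethnicity.

```python
import numpy as np

def ratio_corrected(analyte, creatinine):
    """Conventional ratio-based correction: analyte per unit creatinine."""
    return np.asarray(analyte, dtype=float) / np.asarray(creatinine, dtype=float)

def model_corrected(analyte, creatinine):
    """Model-based correction (sketch): regress log analyte on log
    creatinine and keep the re-centered residuals, so that factors acting
    through creatinine are not folded into the corrected value."""
    x = np.log(np.asarray(creatinine, dtype=float))
    y = np.log(np.asarray(analyte, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return np.exp(y - (intercept + slope * x) + y.mean())
```

    When the analyte tracks creatinine exactly, the model-based value is constant (the geometric mean), whereas the ratio-based value still varies with hydration-independent creatinine differences between groups.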

  16. Method of absorbance correction in a spectroscopic heating value sensor

    DOEpatents

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.

  17. Correction factors for self-selection when evaluating screening programmes.

    PubMed

    Spix, Claudia; Berthold, Frank; Hero, Barbara; Michaelis, Jörg; Schilling, Freimut H

    2016-03-01

    In screening programmes there is a recognized bias introduced through participant self-selection (the healthy screenee bias). Methods used to evaluate screening programmes include intention-to-screen, per-protocol, and the "post hoc" approach in which, after introducing screening for everyone, the only evaluation option is participants versus non-participants. All of these methods are prone to self-selection bias. We present an overview of approaches to correct for this bias. We considered four methods to quantify and correct for self-selection bias; simple calculations revealed that these corrections are in fact all identical and can be converted into each other. Based on this, correction factors for further situations and measures were derived. The application of these correction factors requires a number of assumptions. Using the German Neuroblastoma Screening Study as an example, no relevant reduction in mortality or stage 4 incidence due to screening was observed. The largest bias (in favour of screening) was observed when comparing participants with non-participants. Correcting for bias is particularly necessary when using the post hoc evaluation approach; however, in this situation not all required data are available, and external data or further assumptions may be required for estimation.

  18. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses the particle size at formation as the reference state rather than the dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity and are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes depend only weakly on ambient temperature, these factors can be regarded as temperature independent. The correction factor w.r.t. the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor w.r.t. the size at formation has the advantage of being independent of dry size and of relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.

  19. An advanced method to assess the diet of free-ranging large carnivores based on scats.

    PubMed

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert

    2012-01-01

    The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method, but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass, because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between the consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies, which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model in which the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factors 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared with the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
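    The exponential regression for correction factor 1 — consumed prey mass per excreted scat rising to an asymptote with prey body mass — can be sketched as fitting m(p) = a·(1 − exp(−b·p)). The asymptotic form follows the abstract; the least-squares grid search over b below is an illustrative fitting choice, not the authors' procedure.

```python
import numpy as np

def fit_mass_per_scat(prey_mass, mass_per_scat, b_grid=None):
    """Fit m(p) = a * (1 - exp(-b * p)), where p is prey body mass and m
    is the consumed prey mass needed to excrete one scat. For each trial
    b, the best a has the closed form a = (y . g) / (g . g) with
    g = 1 - exp(-b * p); the (a, b) pair with the lowest SSE is kept."""
    p = np.asarray(prey_mass, dtype=float)
    y = np.asarray(mass_per_scat, dtype=float)
    if b_grid is None:
        b_grid = np.linspace(1e-4, 1.0, 2000)
    best = (np.inf, np.nan, np.nan)
    for b in b_grid:
        g = 1.0 - np.exp(-b * p)
        a = float(y @ g) / float(g @ g)   # closed-form least-squares a | b
        sse = float(np.sum((y - a * g) ** 2))
        if sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]               # (a_hat, b_hat)
```

    The asymptote a captures the biological point of the record: beyond some prey size, extra body mass is mostly digestible, so the mass-per-scat curve flattens instead of growing linearly.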

  20. An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats

    PubMed Central

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert

    2012-01-01

    Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373

  21. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  2. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in the measurement of a risk factor introduce a downward bias in the estimated association with a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies, with emphasis on the selection of individuals for a repeated measurement, the assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is continuous. We also describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity, measured with the euglycaemic insulin clamp technique, and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model, assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. We also supply programs for estimating the number of individuals needed in the reliability study and for choosing its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
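    The slope correction can be sketched with simulated data: the reliability ratio λ is estimated from a replicate measurement, and the naive slope is divided by it. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
x_true = rng.normal(0.0, 1.0, n)            # true risk factor
y = 0.5 * x_true + rng.normal(0.0, 1.0, n)  # outcome, true slope 0.5

sigma_e = 0.8                                # measurement-error SD
x1 = x_true + rng.normal(0.0, sigma_e, n)    # main-study measurement
x2 = x_true + rng.normal(0.0, sigma_e, n)    # replicate from the reliability study

# Naive slope from the error-prone measurement is attenuated (diluted).
b_naive = np.polyfit(x1, y, 1)[0]

# Reliability ratio lambda = var(true)/var(observed); var(true) is
# estimated by the covariance of the two independent replicates.
lam = np.cov(x1, x2)[0, 1] / np.var(x1)

b_corrected = b_naive / lam
```

    With these settings λ ≈ 0.61, so the naive slope recovers only about 60% of the true association before correction.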

  3. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the beam-quality correction factors arising from the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution, in which we modeled the variable and inhomogeneous energy fluence within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving the resulting system of equations on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors ($k_{Q_\text{msr},Q}^{f_\text{msr},f_\text{ref}}$) were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², $k_{Q_\text{msr},Q}^{f_\text{msr},f_\text{ref}}$ was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber, whereas the values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the $k_{Q_\text{msr},Q}^{f_\text{msr},f_\text{ref}}$ values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers. We thus devised a method for determining $k_{Q_\text{msr},Q}^{f_\text{msr},f_\text{ref}}$ from both measured output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not practicable in clinical settings.

  4. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  5. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  6. Attenuation correction factors for cylindrical, disc and box geometry

    NASA Astrophysics Data System (ADS)

    Agarwal, Chhavi; Poi, Sanhita; Mhatre, Amol; Goswami, A.; Gathibandhe, M.

    2009-08-01

    In the present study, attenuation correction factors have been experimentally determined for samples of cylindrical, disc and box geometry and compared with the attenuation correction factors calculated by the Hybrid Monte Carlo (HMC) method [C. Agarwal, S. Poi, A. Goswami, M. Gathibandhe, R.A. Agrawal, Nucl. Instr. and Meth. A 597 (2008) 198] and with the near-field and far-field formulations available in the literature. It has been observed that the near-field formulae, although said to be applicable at close sample-detector geometries, do not work at very close sample-detector configurations. The advantage of the HMC method is that it is found to be valid for all sample-detector geometries.
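    For the far-field case, a standard slab self-attenuation correction (the reciprocal of the mean photon transmission) can be written down directly; this is a generic textbook formula, not the HMC method of the paper:

```python
import numpy as np

def slab_attenuation_correction(mu, t):
    # Far-field self-attenuation correction for a slab sample of
    # thickness t and linear attenuation coefficient mu: the mean
    # photon transmission is (1 - exp(-mu t)) / (mu t), and the
    # correction factor is its reciprocal.
    x = mu * t
    if x < 1e-8:
        return 1.0  # thin-sample limit: no correction needed
    return x / (1.0 - np.exp(-x))
```

    Thicker or more strongly attenuating samples give larger factors, and the correction tends to 1 as μt → 0.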

  7. Impact of reconstruction parameters on quantitative I-131 SPECT

    NASA Astrophysics Data System (ADS)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high-energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction: the quantification error relative to a dose-calibrator-derived measurement was <1%, -26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction than with geometric Gaussian or no CDR modelling. Scatter correction had a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo-based scatter correction including accurately simulated CDR modelling is the most robust and reliable method to reconstruct accurate quantitative iodine-131 SPECT images.
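    The TEW scatter estimate used as a comparator above is simple to state; a per-pixel sketch with illustrative window widths (keV) follows:

```python
import numpy as np

def tew_correct(peak, lower, upper, w_peak, w_lower, w_upper):
    # Triple-energy-window estimate: scatter in the photopeak window is
    # approximated by a trapezoid whose heights are the count densities
    # in the two narrow windows flanking the photopeak.
    scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
    corrected = np.clip(peak - scatter, 0.0, None)  # forbid negative counts
    return corrected, scatter

# Illustrative pixel: 100 photopeak counts, flanking windows 6 keV wide,
# photopeak window 30 keV wide (all numbers hypothetical).
corrected, scatter = tew_correct(100.0, 10.0, 2.0, 30.0, 6.0, 6.0)
```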

  8. Ionization chamber-based reference dosimetry of intensity modulated radiation beams.

    PubMed

    Bouchard, Hugo; Seuntjens, Jan

    2004-09-01

    The present paper addresses reference dose measurements using thimble ionization chambers for quality assurance in IMRT fields. In these radiation fields, detector fluence perturbation effects invalidate the application of open-field dosimetry protocol data for the derivation of absorbed dose to water from ionization chamber measurements. We define a correction factor C_Q^IMRT that corrects the absorbed-dose-to-water calibration coefficient N_{D,w}^Q for fluence perturbation effects in individual segments of an IMRT delivery, and develop a calculation method to evaluate this factor. The method consists of precalculating, using accurate Monte Carlo techniques, the chamber-type-dependent cavity air dose and the in-phantom dose to water at the reference point for zero-width pencil beams, as a function of the position of the pencil beams impinging on the phantom surface. These precalculated kernels are convolved with the IMRT fluence distribution to arrive at the dose-to-water to cavity-air-dose ratio (D_w/D_a)_IMRT for IMRT fields, and with a 10 × 10 cm² open-field fluence to arrive at the same ratio (D_w/D_a)_Q for the 10 × 10 cm² reference field. The correction factor C_Q^IMRT is then calculated as the ratio of (D_w/D_a)_IMRT to (D_w/D_a)_Q. The calculation method was experimentally validated, and the magnitude of chamber correction factors in reference dose measurements in single static and dynamic IMRT fields was studied. The results show that, for thimble-type ionization chambers, the correction factor in a single, realistic dynamic IMRT field can be of the order of 10% or more. We therefore propose that for accurate reference dosimetry of complete n-beam IMRT deliveries, ionization chamber fluence perturbation correction factors must explicitly be taken into account.
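    The kernel-weighting step can be sketched with toy Gaussian kernels (all shapes and widths below are hypothetical; the paper uses Monte Carlo-calculated pencil-beam kernels):

```python
import numpy as np

def daw_ratio(fluence, k_water, k_cavity):
    # Dose-to-water / cavity-air-dose ratio at the reference point: each
    # precalculated pencil-beam kernel is weighted by the incident
    # fluence and summed (only the value at the chamber point is needed,
    # so the convolution reduces to a weighted sum).
    return np.sum(k_water * fluence) / np.sum(k_cavity * fluence)

# Toy kernels on a plane (hypothetical Gaussians): the cavity kernel is
# broader than the water kernel, mimicking chamber volume averaging.
x = np.linspace(-5.0, 5.0, 101)
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2
k_w = np.exp(-r2 / 2.0)   # dose-to-water kernel
k_a = np.exp(-r2 / 8.0)   # cavity air dose kernel (broader)

open_field = np.ones_like(k_w)                 # stands in for the 10 x 10 reference
imrt_field = (np.abs(X) < 1.0).astype(float)   # a single narrow sweep segment

c_imrt = daw_ratio(imrt_field, k_w, k_a) / daw_ratio(open_field, k_w, k_a)
```

    For the narrow segment the broad cavity kernel collects relatively less fluence than the water kernel, so the sketch reproduces the qualitative finding that the correction can be far from unity.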

  9. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  10. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; the results also contribute to the discussion on the water equivalence of the relatively new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more water-like solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2% ± 1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC-calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.

  12. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been included approximately by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction; this is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p,d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states the Perey-corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking the direct local equivalent solution; however, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
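    For a local-equivalent Woods-Saxon potential, the standard Perey factor $P(r) = [1 - \mu\beta^2 U(r)/(2\hbar^2)]^{-1/2}$, which multiplies the local-equivalent wave function, can be evaluated directly (the potential parameters below are hypothetical):

```python
import numpy as np

HBARC = 197.327   # MeV fm
AMU = 931.494     # MeV

def perey_factor(r, v0=-50.0, r0=1.25, a=0.65, A=48, beta=0.85, mu_amu=0.972):
    # Perey factor P(r) = [1 - mu beta^2 U(r) / (2 hbar^2)]^(-1/2) for a
    # local-equivalent Woods-Saxon potential U(r); beta is the
    # Perey-Buck non-locality range in fm. Parameter values are
    # illustrative, not taken from the paper.
    R = r0 * A ** (1.0 / 3.0)
    U = v0 / (1.0 + np.exp((r - R) / a))   # MeV
    mu = mu_amu * AMU                      # reduced mass, MeV
    return (1.0 - mu * beta**2 * U / (2.0 * HBARC**2)) ** -0.5
```

    For an attractive potential P(r) < 1 in the nuclear interior (the non-local wave function is damped there) and tends to 1 outside the range of the potential, which is exactly where the paper finds the correction least reliable.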

  13. Method and apparatus for providing pulse pile-up correction in charge quantizing radiation detection systems

    DOEpatents

    Britton, Jr., Charles L.; Wintenberg, Alan L.

    1993-01-01

    A radiation detection method and system for continuously correcting the quantization of detected charge during pulse pile-up conditions. Charge pulses from a radiation detector responsive to the energy of detected radiation events are converted, by means of a charge-sensitive preamplifier, to voltage pulses of predetermined shape whose peak amplitudes are proportional to the quantity of charge of each corresponding detected event. These peak amplitudes are sampled and stored sequentially in accordance with their respective times of occurrence. Based on the stored peak amplitudes and times of occurrence, a correction factor is generated which represents the fraction of a preceding pulse's influence on the following pulse's peak amplitude. This correction factor is subtracted from the following pulse's amplitude in a summing amplifier, whose output then represents the corrected charge quantity measurement.
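    A minimal digital sketch of the subtraction, assuming a single-exponential residual tail as the correction-factor model (the patent does not specify the pulse shape; tau and all values are hypothetical):

```python
import numpy as np

def correct_pileup(amplitudes, times, tau):
    # Subtract each preceding pulse's residual tail from the next
    # pulse's sampled peak amplitude. A single-exponential tail with
    # time constant tau is assumed as the correction-factor model.
    corrected = np.asarray(amplitudes, dtype=float).copy()
    for i in range(1, corrected.size):
        dt = times[i] - times[i - 1]
        # fraction of the (already corrected) previous pulse still present
        corrected[i] -= corrected[i - 1] * np.exp(-dt / tau)
    return corrected
```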

  14. Experimental determination of field factors ($\Omega_{Q_\text{clin},Q_\text{msr}}^{f_\text{clin},f_\text{msr}}$) for small radiotherapy beams using the daisy chain correction method

    NASA Astrophysics Data System (ADS)

    Lárraga-Gutiérrez, José Manuel

    2015-08-01

    Recently, Alfonso et al proposed a new formalism for the dosimetry of small and non-standard fields. The proposed formalism relies strongly on detector-specific beam correction factors calculated by Monte Carlo simulation, which account for the difference in the response of the detector between the small field and the machine-specific reference field. The correct calculation of these detector-specific beam correction factors demands accurate knowledge of the linear accelerator, the detector geometry and the composition materials. The present work shows that the field factors in water may be determined experimentally using the daisy chain correction method down to a field size of 1 cm × 1 cm for a specific set of detectors. The detectors studied were three mini-ionization chambers (PTW-31014, PTW-31006, IBA-CC01), three silicon-based diodes (PTW-60018, IBA-SFD and IBA-PFD) and one synthetic diamond detector (PTW-60019). Monte Carlo simulations and experimental measurements were performed for a 6 MV photon beam at 10 cm depth in water with a source-to-axis distance of 100 cm. The results show that the differences between the experimental and Monte Carlo calculated field factors are less than 0.5% (with the exception of the IBA-PFD) for field sizes between 1.5 cm × 1.5 cm and 5 cm × 5 cm. For the 1 cm × 1 cm field size, the differences are within 2%. By using the daisy chain correction method, it is possible to determine measured field factors in water. The results suggest that the daisy chain correction method is not suitable for measurements performed with the IBA-PFD detector, owing to the presence of tungsten powder in the detector encapsulation material. The use of Monte Carlo calculated $k_{Q_\text{clin},Q_\text{msr}}^{f_\text{clin},f_\text{msr}}$ is encouraged for field sizes less than or equal to 1 cm × 1 cm for the dosimeters used in this work.
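    The daisy chain normalization itself is a pair of ratios; a sketch with hypothetical readings, assuming an intermediate field in which the small-field detector still needs no correction:

```python
def daisy_chain_field_factor(m_det_clin, m_det_int, m_ic_int, m_ic_ref):
    # Daisy chain: the small-field detector (e.g. a diode) measures the
    # clinical field relative to an intermediate field where it still
    # behaves water-equivalently, and an ionization chamber links that
    # intermediate field to the 10 cm x 10 cm reference field.
    return (m_det_clin / m_det_int) * (m_ic_int / m_ic_ref)

# Hypothetical readings normalized to the reference field.
field_factor = daisy_chain_field_factor(0.55, 0.80, 0.78, 1.00)
```

    The chaining cancels the detector's small-field over-response only to the extent that it is negligible at the intermediate field size, which is why the method fails for the tungsten-loaded IBA-PFD above.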

  15. Timing Calibration in PET Using a Time Alignment Probe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moses, William W.; Thompson, Christopher J.

    2006-05-05

    We evaluate the Scanwell Time Alignment Probe for performing the timing calibration of the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods: the Time Alignment Probe (which measures the time difference between the probe and each detector module) and the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement: of the 80 correction factors, 62 agree exactly, 17 differ by 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance of the Time Alignment Probe and conventional methods is equivalent.

  16. Hypothesis Testing Using Factor Score Regression

    PubMed Central

    Devlieger, Ines; Mayer, Axel; Rosseel, Yves

    2015-01-01

    In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886

  17. WE-AB-207A-07: A Planning CT-Guided Scatter Artifact Correction Method for CBCT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Liu, T; Dong, X

    Purpose: Cone beam computed tomography (CBCT) imaging is in increasing demand for high-performance image-guided radiotherapy such as online tumor delineation and dose calculation. However, current CBCT imaging has severe scatter artifacts, and its clinical application is therefore limited to patient setup based mainly on bony structures. This study's purpose is to develop a CBCT artifact correction method. Methods: The proposed scatter correction method utilizes the planning CT to improve CBCT image quality. First, an image registration is used to match the planning CT with the CBCT to reduce the geometric difference between the two images. Then, the planning CT-based prior information is entered into a Bayesian deconvolution framework to iteratively perform scatter artifact correction of the CBCT images. This technique was evaluated using Catphan phantoms with multiple inserts. Contrast-to-noise ratios (CNR), signal-to-noise ratios (SNR), and the image spatial nonuniformity (ISN) in selected volumes of interest (VOIs) were calculated to assess the proposed correction method. Results: After scatter correction, the CNR increased by factors of 1.96, 3.22, 3.20, 3.46, 3.44, 1.97 and 1.65, and the SNR increased by factors of 1.05, 2.09, 1.71, 3.95, 2.52, 1.54 and 1.84 for the Air, PMP, LDPE, Polystyrene, Acrylic, Delrin and Teflon inserts, respectively. The ISN decreased from 21.1% to 4.7% in the corrected images. All values of CNR, SNR and ISN in the corrected CBCT images were much closer to those in the planning CT images. The results demonstrate that the proposed method reduces the relevant artifacts and recovers CT numbers. Conclusion: We have developed a novel CBCT artifact correction method based on the planning CT image and demonstrated that the proposed CT-guided correction method can significantly reduce scatter artifacts and improve image quality. This method has great potential to correct CBCT images, allowing their use in adaptive radiotherapy.

  18. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields, and diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be applied universally over a range of irradiation conditions, including beams of different qualities. A mathematical relation for diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air-core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared with those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to depend strongly on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams.
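    A sketch of the approach, assuming one plausible parameterization of the over-response (the functional form and data points below are invented for illustration, not the published relation):

```python
import numpy as np
from scipy.optimize import curve_fit

def over_response(w_mm, a, b):
    # Hypothetical parameterization: over-response grows as the field
    # width w shrinks and tends to 1 for broad fields.
    return 1.0 + a * np.exp(-w_mm / b)

# Illustrative diode/scintillator output-factor ratios vs field width (mm).
w = np.array([5.0, 7.5, 10.0, 15.0, 20.0, 30.0])
ratio = np.array([1.040, 1.027, 1.018, 1.008, 1.004, 1.001])

(a_hat, b_hat), _ = curve_fit(over_response, w, ratio, p0=(0.1, 5.0))

# The correction factor is the reciprocal of the fitted over-response.
k_6mm = 1.0 / over_response(6.0, a_hat, b_hat)
```

    Once fitted, the relation yields a correction factor for any field width in the validated range, which is the sense in which the factors transfer between beams.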

  19. Intermediate boundary conditions for LOD, ADI and approximate factorization methods

    NASA Technical Reports Server (NTRS)

    Leveque, R. J.

    1985-01-01

    A general approach to determining the correct intermediate boundary conditions for dimensional splitting methods is presented. The intermediate solution U* is viewed as a second-order accurate approximation to a modified equation. Deriving the modified equation and using the relationship between this equation and the original equation allows us to determine the correct boundary conditions for U*. This technique is illustrated by applying it to locally one-dimensional (LOD) and alternating direction implicit (ADI) methods for the heat equation in two and three space dimensions. The approximate factorization method is considered in slightly more generality.
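    For homogeneous Dirichlet data, where the intermediate boundary values are simply zero and the subtlety addressed in the paper does not arise, a Peaceman-Rachford ADI step for the 2D heat equation can be sketched as follows (dense solves stand in for tridiagonal ones for brevity):

```python
import numpy as np

def adi_step(u, L, dt):
    # One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on interior
    # grid values with homogeneous Dirichlet boundaries, so the
    # intermediate solution U* also takes zero boundary values.
    n = u.shape[0]
    A_im = np.eye(n) - 0.5 * dt * L   # implicit half-step operator
    A_ex = np.eye(n) + 0.5 * dt * L   # explicit half-step operator
    # x-implicit / y-explicit: (I - dt/2 Lx) U* = (I + dt/2 Ly) U
    u_star = np.linalg.solve(A_im, u @ A_ex)
    # y-implicit / x-explicit: (I - dt/2 Ly) U^{n+1} = (I + dt/2 Lx) U*
    return np.linalg.solve(A_im, (A_ex @ u_star).T).T

# Second-difference operator on the interior of the unit square.
n = 19
h = 1.0 / (n + 1)
L = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
     + np.diag(np.ones(n - 1), 1)) / h**2

x = h * np.arange(1, n + 1)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))  # mode decaying as exp(-2 pi^2 t)
u0_max = u.max()
dt, steps = 1.0e-3, 100
for _ in range(steps):
    u = adi_step(u, L, dt)
```

    With time-dependent boundary data, the paper's point is that U* must receive modified (not simply interpolated) boundary values to retain second-order accuracy.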

  20. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    PubMed

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects for scatter on the reconstructed image I_μb^AC, obtained with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_μb^AC with a scatter function, followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and with 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.
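    A minimal sketch of the IBSC estimate, with a Gaussian standing in for the scatter function and a constant standing in for the scatter fraction function (both hypothetical simplifications):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ibsc_correct(img_ac, sigma_vox, scatter_fraction):
    # The scatter component is modelled as a blurred copy of the
    # attenuation-corrected image scaled by a scatter fraction, then
    # subtracted from the image.
    scatter = scatter_fraction * gaussian_filter(img_ac, sigma_vox)
    return np.clip(img_ac - scatter, 0.0, None)
```

    Because the estimate needs only the reconstructed image, no extra energy windows have to be acquired, which is the practical advantage over TEW.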

  1. Method to determine the position-dependent metal correction factor for dose-rate equivalent laser testing of semiconductor devices

    DOEpatents

    Horn, Kevin M.

    2013-07-09

    A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.

  2. Robust incremental compensation of the light attenuation with depth in 3D fluorescence microscopy.

    PubMed

    Kervrann, C; Legland, D; Pardini, L

    2004-06-01

    Fluorescent signal intensities from confocal laser scanning microscopes (CLSM) suffer from several distortions inherent to the method: layers which lie deeper within the specimen appear relatively dark due to absorption and scattering of both excitation and fluorescent light, photobleaching, and/or other factors. Because of these effects, a quantitative analysis of images is not always possible without correction. Under certain assumptions, the decay of intensities can be estimated and used for a partial depth intensity correction. In this paper we propose an original robust incremental method for compensating the attenuation of intensity signals. Most previous correction methods are more or less empirical, based on fitting a decreasing parametric function to the section mean intensity curve computed by summing all pixel values in each section. The fitted curve is then used to calculate correction factors for each section, and a new compensated series of sections is computed. However, these methods do not perfectly correct the images. Hence, the algorithm we propose for the automatic correction of intensities relies on robust estimation, which automatically ignores pixels whose measurements deviate from the decay model. It is based on techniques adopted from the computer vision literature for image motion estimation. The resulting algorithm is used to correct volumes acquired by CLSM. An implementation of such a restoration filter is discussed and examples of successful restorations are given.
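    The non-robust baseline that the paper improves upon can be sketched as an exponential fit to the per-section means (the decay model and all values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def compensate_depth_decay(stack):
    # Non-robust baseline: fit an exponential decay to the per-section
    # mean intensity and rescale every section back to the fitted
    # surface level. The paper's robust method additionally down-weights
    # pixels that deviate from the decay model.
    z = np.arange(stack.shape[0], dtype=float)
    means = stack.mean(axis=(1, 2))
    decay = lambda z, i0, k: i0 * np.exp(-k * z)
    (i0, k), _ = curve_fit(decay, z, means, p0=(means[0], 0.01))
    gain = i0 / decay(z, i0, k)           # per-section correction factor
    return stack * gain[:, None, None]
```

    Bright or dark outlier structures bias the section means, and hence the fitted curve, which motivates the robust estimator proposed above.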

  3. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis

    2011-04-15

    Purpose: The purposes of this work were (1) to determine whether a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) in situations where the Cerenkov light is dominant over the scintillation light, and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must equal the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures, designed to respect this condition, for determining the calibration factors of PSDs. A PSD consisting of a cylindrical polystyrene scintillating fiber (1.6 mm³) coupled to a plastic optical fiber was calibrated using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect to an accuracy level of 1%. The results also indicate that PSDs measure output factors lower than those measured with ionization chambers for square field sizes larger than 25 × 25 cm², in general agreement with previously published Monte Carlo results. Conclusions: The authors conclude that the spectral method can be used to accurately correct the Cerenkov light effect in PSDs. The authors confirmed the importance of maximizing the difference in Cerenkov light production between calibration measurements. They also found that the attenuation of the optical fiber, which is assumed to be constant in the original formulation of the spectral method, may cause a variation of the calibration factors in some experimental setups.
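    The calibration step of the spectral method reduces to solving a 2 × 2 linear system for the two factors; a sketch with hypothetical window responses:

```python
import numpy as np

def calibrate_spectral(meas1, meas2, dose1, dose2):
    # Solve for the two calibration factors (a, b) of the spectral
    # method from two calibration irradiations with known doses but
    # very different Cerenkov contributions: dose = a*M1 + b*M2.
    M = np.array([meas1, meas2], dtype=float)
    return np.linalg.solve(M, np.array([dose1, dose2], dtype=float))

def windows(S, C):
    # Hypothetical responses of the two spectral windows to the
    # scintillation signal S (proportional to dose) and Cerenkov light C.
    return (S + 0.9 * C, 0.2 * S + 1.1 * C)

a, b = calibrate_spectral(windows(100.0, 10.0), windows(50.0, 200.0), 100.0, 50.0)

# A third, Cerenkov-dominated irradiation is then recovered exactly
# under this linear model.
m1, m2 = windows(80.0, 120.0)
dose = a * m1 + b * m2
```

    The system is well conditioned only when the two calibration irradiations have very different Cerenkov contributions, which is the condition the paper emphasizes.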

  4. Small field detector correction factors kQclin,Qmsr (fclin,fmsr) for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and more for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.
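The semiempirical factor in this formalism is a ratio of ratios. A minimal sketch (the numbers are invented for illustration, not taken from the study): the dose-to-water ratio between the clinical small field and the machine-specific reference field, divided by the detector's reading ratio for the same two fields.

```python
def k_qclin_qmsr(dose_clin, dose_msr, reading_clin, reading_msr):
    """Small-field detector correction factor: the dose-to-water ratio
    between the clinical (clin) and machine-specific reference (msr)
    fields, divided by the detector's reading ratio for those fields."""
    return (dose_clin / dose_msr) / (reading_clin / reading_msr)

# A detector that over-responds by 4% in a small field needs k < 1
# (hypothetical values: true output factor 0.70, measured ratio 0.728):
k = k_qclin_qmsr(dose_clin=0.70, dose_msr=1.00,
                 reading_clin=0.728, reading_msr=1.00)
```

In the semiempirical variant described in the abstract, the dose ratio comes from the Monte Carlo model while the reading ratio comes from measurement, so biases that affect only one of the two data sources are easier to isolate.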

  5. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretation, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light being used for remote sensing. Radiative transfer theory has often been applied to the study of densely packed particulate media like planetary regoliths and snow, but with difficulty, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute scattering properties of clusters of particles and capture the near-field effects important for dense packing. The scattering parameters from the T-matrix computations are then modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's law) is computed with the invariant-imbedding solution of the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing common mineralogical and particle size components of regoliths, in the mid-infrared wavelengths (5-50 µm). The modeled spectrum from the T-matrix method with static structure factor correction at moderate packing densities (filling factors of 0.1-0.2) produced better fits to the corresponding laboratory spectrum than the spectrum modeled by the equivalent method without static structure factor correction.
Future work will test the combination of the superposition T-matrix method and the static structure factor correction for larger particle sizes and polydisperse clusters in search of the most effective modeling of spectra of densely packed particulate media.

  6. Neural network approach to proximity effect corrections in electron-beam lithography

    NASA Astrophysics Data System (ADS)

    Frye, Robert C.; Cummings, Kevin D.; Rietman, Edward A.

    1990-05-01

    The proximity effect, caused by electron beam backscattering during resist exposure, is an important concern in writing submicron features. It can be compensated by appropriate local changes in the incident beam dose, but computation of the optimal correction usually requires a prohibitively long time. We present an example of such a computation on a small test pattern, which we performed by an iterative method. We then used this solution as a training set for an adaptive neural network. After training, the network computed the same correction as the iterative method, but in a much shorter time. Correcting the image with a software-based neural network decreased the computation time by a factor of 30, and a hardware-based network enhanced the computation speed by more than a factor of 1000. Both methods had an acceptably small error of 0.5% compared to the results of the iterative computation. Additionally, we verified that the neural network correctly generalized the solution of the problem to patterns not contained in its training set.
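The iterative correction that generates such a training set can be sketched with a toy one-dimensional model. This is an illustration only, assuming an invented backscatter kernel rather than the authors' actual exposure model: the incident dose is adjusted by Jacobi iteration until the deposited exposure (incident dose times the deposition matrix) matches the target.

```python
import numpy as np

n = 7
# Toy deposition matrix: 80% of the dose lands on the written pixel and
# 5% backscatters onto each pixel within two positions (invented numbers).
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j:
            A[i, j] = 0.80
        elif abs(i - j) <= 2:
            A[i, j] = 0.05

target = np.ones(n)          # desired exposure across the feature
dose = target.copy()         # initial guess: no correction

# Jacobi iteration: converges here because A is strictly diagonally dominant.
for _ in range(200):
    dose = dose + (target - A @ dose) / np.diag(A)

residual = np.max(np.abs(A @ dose - target))
# Edge pixels receive less backscatter, so they need a higher incident dose:
edge_boost = dose[0] / dose[n // 2]
```

A neural network trained on (pattern, corrected dose) pairs from such iterations then amortizes the cost: inference replaces the iteration loop, which is where the 30x-1000x speedups in the abstract come from.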

  7. A convolution model for obtaining the response of an ionization chamber in static non standard fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Castano, D. M.; Gonzalez, L. Brualla; Gago-Arias, M. A.

    2012-01-15

    Purpose: This work contains an alternative methodology for obtaining correction factors for ionization chamber (IC) dosimetry of small fields and composite fields such as IMRT. The method is based on the convolution/superposition (C/S) of an IC response function (RF) with the dose distribution in a certain plane which includes the chamber position. This method is an alternative to the full Monte Carlo (MC) approach that has been used previously by many authors for the same objective. Methods: The readout of an IC at a point inside a phantom irradiated by a certain beam can be obtained as the convolution of the dose spatial distribution caused by the beam and the IC two-dimensional RF. The proposed methodology has been applied successfully to predict the response of a PTW 30013 IC when measuring different nonreference fields, namely: output factors of 6 MV small fields, beam profiles of cobalt-60 narrow fields and 6 MV radiosurgery segments. The two-dimensional RF of a PTW 30013 IC was obtained by MC simulation of the absorbed dose to cavity air when the IC was scanned by a 0.6 x 0.6 mm{sup 2} cross section parallel pencil beam at low depth in a water phantom. For each of the cases studied, the results of the IC direct measurement were compared with the corresponding values obtained by the C/S method. Results: For all of the cases studied, the agreement between the IC direct measurement and the IC calculated response was excellent (better than 1.5%). Conclusions: This method could be implemented in a TPS in order to calculate dosimetry correction factors when an experimental IMRT treatment verification with an in-phantom ionization chamber is performed. The mis-response of the IC due to the nonreference conditions could be quickly corrected by this method rather than employing MC-derived correction factors.
This method can be considered as an alternative to the plan-class associated correction factors proposed recently as part of an IAEA work group on nonstandard field dosimetry.
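The core of the C/S idea fits in a few lines. A minimal sketch with an invented Gaussian response function (not the Monte Carlo-derived RF of the PTW 30013): the predicted chamber reading is the dose plane weighted by the normalized 2D RF, and the correction factor is the ratio of point dose to predicted reading.

```python
import numpy as np

# Invented 2D response function: normalized Gaussian, width ~ chamber size.
grid = np.arange(-15, 16)                        # 1 mm pixels
X, Y = np.meshgrid(grid, grid)
rf = np.exp(-(X**2 + Y**2) / (2 * 4.0**2))
rf /= rf.sum()                                   # unit-normalized RF

def predicted_reading(dose_plane):
    """Chamber reading at the plane centre: dose weighted by the RF."""
    return float(np.sum(rf * dose_plane))

# Broad field: dose is uniform over the RF support -> reading equals point dose.
broad = np.ones_like(rf, dtype=float)
# Narrow 5 mm strip field: volume averaging lowers the reading.
narrow = np.where(np.abs(X) <= 2, 1.0, 0.0)

k_broad = 1.0 / predicted_reading(broad)         # ~ 1.0
k_narrow = 1.0 / predicted_reading(narrow)       # > 1: chamber under-responds
```

Evaluating this weighted sum over the TPS dose plane is far cheaper than a per-field Monte Carlo run, which is the practical advantage claimed in the abstract.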

  8. A maintenance time prediction method considering ergonomics through virtual reality simulation.

    PubMed

    Zhou, Dong; Zhou, Xin-Xin; Guo, Zi-Yue; Lv, Chuan

    2016-01-01

    Maintenance time is a critical quantitative index in maintainability prediction, and an efficient maintenance time measurement methodology plays an important role in the early stage of maintainability design. However, traditional ways of measuring maintenance time ignore the differences between line production and maintenance actions. This paper proposes a corrective MOD method that considers several important ergonomics factors to predict maintenance time. With the help of the DELMIA analysis tools, the influence coefficients of several factors are discussed to correct the MOD value, and designers can measure maintenance time by summing the corrective MOD times of the individual maintenance therbligs. Finally, a case study is introduced: by maintaining the virtual prototype of an APU motor starter in DELMIA, the designer obtains the actual maintenance time with the proposed method, and the result verifies the effectiveness and accuracy of the proposed method.
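The corrective calculation reduces to a weighted sum over therbligs. A sketch with invented influence coefficients (the MODAPTS unit of 0.129 s per MOD is standard; the therblig values and ergonomic penalties below are illustrative, not the paper's):

```python
# One MOD = 0.129 s in the MODAPTS predetermined-time system.
MOD_SECONDS = 0.129

def corrected_time(therbligs):
    """Sum the MOD value of each maintenance therblig, scaled by its
    ergonomics influence coefficients (posture, visibility, access, ...).
    Coefficients here are hypothetical placeholders."""
    total = 0.0
    for mod_value, coefficients in therbligs:
        factor = 1.0
        for c in coefficients:
            factor *= c
        total += mod_value * MOD_SECONDS * factor
    return total

# Three maintenance therbligs with hypothetical ergonomic penalties:
task = [(5, [1.2]),        # awkward posture: 20% slower
        (3, [1.0]),        # nominal conditions
        (8, [1.1, 1.3])]   # poor visibility and restricted access
t = corrected_time(task)   # total predicted time in seconds
```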

  9. On the importance of appropriate precipitation gauge catch correction for hydrological modelling at mid to high latitudes

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Højberg, A. L.; Troldborg, L.; Refsgaard, J. C.; Christensen, B. S. B.; Olsen, M.; Henriksen, H. J.

    2012-11-01

    Precipitation gauge catch correction is often given very little attention in hydrological modelling compared to model parameter calibration. This is critical because significant precipitation biases often make the calibration exercise pointless, especially when supposedly physically-based models are in play. This study addresses the general importance of appropriate precipitation catch correction through a detailed modelling exercise. An existing precipitation gauge catch correction method addressing solid and liquid precipitation is applied, both as national mean monthly correction factors based on a historic 30 yr record and as gridded daily correction factors based on local daily observations of wind speed and temperature. The two methods, named the historic mean monthly (HMM) and the time-space variable (TSV) correction, resulted in different winter precipitation rates for the period 1990-2010. The resulting precipitation datasets were evaluated through the comprehensive Danish National Water Resources model (DK-Model), revealing major differences in both model performance and optimised model parameter sets. Simulated stream discharge is improved significantly when introducing the TSV correction, whereas the simulated hydraulic heads and multi-annual water balances performed similarly due to recalibration adjusting model parameters to compensate for input biases. The resulting optimised model parameters are much more physically plausible for the model based on the TSV correction of precipitation. A proxy-basin test where calibrated DK-Model parameters were transferred to another region without site specific calibration showed better performance for parameter values based on the TSV correction. Similarly, the performances of the TSV correction method were superior when considering two single years with a much dryer and a much wetter winter, respectively, as compared to the winters in the calibration period (differential split-sample tests). 
We conclude that TSV precipitation correction should be carried out for studies requiring a sound dynamic description of hydrological processes, and it is of particular importance when using hydrological models to make predictions for future climates when the snow/rain composition will differ from the past climate. This conclusion is expected to be applicable for mid to high latitudes, especially in coastal climates where winter precipitation types (solid/liquid) fluctuate significantly, causing climatological mean correction factors to be inadequate.
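The contrast between a climatological-mean and a time-space variable correction can be sketched with a generic wind-dependent transfer function. The coefficients below are invented for illustration; real corrections (including the Danish model used in the study) are gauge-specific and more elaborate.

```python
def catch_correction(precip_mm, wind_ms, temp_c):
    """Daily gauge catch correction -- illustrative transfer function only.
    Solid precipitation (below freezing) is undercaught far more strongly
    with wind than liquid; the coefficients are hypothetical, not the
    published correction model."""
    if temp_c <= 0.0:                       # snow
        factor = 1.0 + 0.10 * wind_ms
    else:                                   # rain
        factor = 1.0 + 0.02 * wind_ms
    return precip_mm * factor

# Same gauge total, same wind, different phase:
snow = catch_correction(5.0, wind_ms=5.0, temp_c=-2.0)   # corrected upward strongly
rain = catch_correction(5.0, wind_ms=5.0, temp_c=4.0)    # corrected only slightly
```

Because the factor depends on daily wind and temperature, two winters with the same monthly totals but different snow/rain splits get different corrected inputs; a fixed monthly factor cannot reproduce that, which is the core of the TSV argument.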

  10. A study on scattering correction for γ-photon 3D imaging test method

    NASA Astrophysics Data System (ADS)

    Xiao, Hui; Zhao, Min; Liu, Jiantang; Chen, Hao

    2018-03-01

    A pair of 511 keV γ-photons is generated during a positron annihilation, and their directions differ by 180°. The path and energy information of these photons can be utilized to form a 3D imaging test method in the industrial domain. However, scattered γ-photons are the major factor degrading the imaging precision of the test method. This study proposes a γ-photon single-scattering correction method from the perspective of spatial geometry. The method first determines the possible scattering points when a scattered γ-photon pair hits the detector pair. The range of scattering angles can then be calculated according to the energy window. Finally, the number of scattered γ-photons is estimated from the attenuation of the total scattered γ-photons along their path, and the corrected γ-photons are obtained by deducting the scattered γ-photons from the original ones. Two experiments were conducted to verify the effectiveness of the proposed scattering correction method. The results show that the proposed method can efficiently correct for scattered γ-photons and improve the test accuracy.

  11. Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy

    NASA Technical Reports Server (NTRS)

    Edwards, Lawrence G.; Haberbusch, Mark

    1993-01-01

    The inaccuracies of liquid nitrogen and liquid hydrogen level measurements by use of a coaxial capacitance probe were investigated as a function of fluid temperatures and pressures. Significant liquid level measurement errors were found to occur due to the changes in the fluids' dielectric constants that develop over the operating temperature and pressure ranges of the cryogenic storage tanks. The level measurement inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated based on the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from the fluid temperature profiles. Results show that the dielectric constant corrected liquid levels agreed within 0.5 percent of the temperature profile estimated liquid level. The uncorrected dielectric constant capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 fluids due to temperature and pressure effects on the dielectric constants over the tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived. The improved accuracy by use of the correction factors is experimentally verified by comparing liquid levels derived from fluid temperature profiles.
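The correction can be sketched with the standard linear model for a coaxial level probe: the wetted fraction of the probe contributes eps_r times the empty capacitance per unit length, so the level follows from inverting that relation with the dielectric constant appropriate to the measured temperature and pressure. The capacitance scale and the warm-state dielectric value below are hypothetical; the nominal LN2 value is approximately right but illustrative.

```python
def capacitance(level_frac, eps_r, c_empty=100.0):
    """Capacitance (pF) of a coaxial probe: the wetted fraction contributes
    eps_r times the empty-probe capacitance per unit length."""
    return c_empty * ((1.0 - level_frac) + eps_r * level_frac)

def level_from_capacitance(c_meas, eps_r, c_empty=100.0):
    """Invert the linear model: h = (C/C0 - 1) / (eps_r - 1)."""
    return (c_meas / c_empty - 1.0) / (eps_r - 1.0)

EPS_LN2_NOMINAL = 1.434    # LN2 near its normal boiling point (approximate)
eps_warm = 1.410           # hypothetical value at a warmer tank state

c = capacitance(0.60, eps_warm)                         # true level is 60%
h_corrected = level_from_capacitance(c, eps_warm)       # uses measured T, P
h_uncorrected = level_from_capacitance(c, EPS_LN2_NOMINAL)  # fixed nominal eps
```

Even this small dielectric shift moves the uncorrected reading by several percent of full scale, which is the size of error the paper reports for uncorrected measurements.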

  12. Method of Calculating the Correction Factors for Cable Dimensioning in Smart Grids

    NASA Astrophysics Data System (ADS)

    Simutkin, M.; Tuzikova, V.; Tlusty, J.; Tulsky, V.; Muller, Z.

    2017-04-01

    One of the main causes of overloading of electrical equipment by higher-harmonic currents is the rapid growth in the number of non-linear electricity consumers. Non-sinusoidal voltages and currents affect the operation of electrical equipment, reducing its lifetime, increasing voltage and power losses in the network, and reducing its capacity. Existing standards limit the emission of higher-harmonic currents but cannot guarantee that interference in the power grid stays at a safe level. The article presents a method for determining a correction factor to the long-term allowable current of the cable that accounts for this influence. Using mathematical models in the software Elcut, the thermal processes in the cable under non-sinusoidal current flow were described. The theoretical principles, methods, and mathematical models developed in the article allow the calculation of the correction factor that accounts for the effect of higher harmonics in the current spectrum for network equipment under any type of non-linear load.
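The shape of such a correction factor can be sketched from first principles. This is a simplification, not the article's finite-element (Elcut) thermal model: assuming the cable's AC resistance grows roughly as the square root of the harmonic order (skin effect), the allowable rms current is derated so that total heat generation matches that of a purely sinusoidal rated current.

```python
import math

def derating_factor(harmonic_spectrum):
    """Ampacity correction factor for a harmonic-rich current.
    harmonic_spectrum maps harmonic order h -> I_h / I_1 (fundamental = 1).
    Assumes AC resistance ~ sqrt(h) from skin effect -- a common
    simplification, not the article's model."""
    loss = sum((ratio ** 2) * math.sqrt(h)
               for h, ratio in harmonic_spectrum.items())
    rms_sq = sum(ratio ** 2 for ratio in harmonic_spectrum.values())
    # Scale the allowable rms current so I^2*R heating matches the
    # purely sinusoidal rated case:
    return math.sqrt(rms_sq / loss)

k_clean = derating_factor({1: 1.0})                         # no harmonics
k_dirty = derating_factor({1: 1.0, 3: 0.3, 5: 0.2, 7: 0.1})  # distorted load
```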

  13. Detecting and correcting the bias of unmeasured factors using perturbation analysis: a data-mining approach.

    PubMed

    Lee, Wen-Chung

    2014-02-05

    The randomized controlled study is the gold-standard research method in biomedicine. In contrast, the validity of a (nonrandomized) observational study is often questioned because of unknown/unmeasured factors, which may have confounding and/or effect-modifying potential. In this paper, the author proposes a perturbation test to detect the bias of unmeasured factors and a perturbation adjustment to correct for such bias. The proposed method circumvents the problem of measuring unknowns by collecting the perturbations of unmeasured factors instead. Specifically, a perturbation is a variable that is readily available (or can be measured easily) and is potentially associated, though perhaps only very weakly, with unmeasured factors. The author conducted extensive computer simulations to provide a proof of concept. Computer simulations show that, as the number of perturbation variables increases through data mining, the power of the perturbation test increases progressively, up to nearly 100%. In addition, after the perturbation adjustment, the bias decreases progressively, down to nearly 0%. The data-mining perturbation analysis described here is recommended for use in detecting and correcting the bias of unmeasured factors in observational studies.

  14. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determination of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian TrueBeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian TrueBeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm{sup 2} to 0.6×0.6 cm{sup 2}, normalized to values at 5×5 cm{sup 2}. Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. The percent change of the correction factors for the chambers is presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm{sup 2} fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates differences of 5% and 3% in 6 FFF and 6 MV fields at the same field size, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm{sup 2} demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers.
For diode systems, the correction factors were substantially similar between machines and may be useful for class-specific reference conditions.

  15. The Etiology of Presbyopia, Contributing Factors, and Future Correction Methods

    NASA Astrophysics Data System (ADS)

    Hickenbotham, Adam Lyle

    Presbyopia has been a complicated problem for clinicians and researchers for centuries. Defining what constitutes presbyopia and what its primary causes are has long been a struggle for the vision and scientific community. Although presbyopia is a normal aging process of the eye, the continuous and gradual loss of accommodation is often dreaded and feared. If presbyopia were to be considered a disease, its global burden would be enormous, as it affects more than a billion people worldwide. In this dissertation, I explore factors associated with presbyopia and develop a model for explaining its onset. In this model, the onset of presbyopia is associated primarily with three factors: depth of focus, focusing ability (accommodation), and habitual reading (or task) distance. If any of these three factors could be altered sufficiently, the onset of presbyopia could be delayed or prevented. Based on this model, I then examine possible optical methods that would be effective in correcting for presbyopia by expanding depth of focus. Two methods that have been shown to be effective at expanding depth of focus are utilizing a small pupil aperture and generating higher-order aberrations, particularly spherical aberration. I compare these two optical methods through the use of simulated designs, monitor testing, and visual performance metrics and then apply them in subjects through an adaptive optics system that corrects aberrations through a wavefront aberrometer and deformable mirror. I then summarize my findings and speculate about the future of presbyopia correction.

  16. The Empirical Verification of an Assignment of Items to Subtests: The Oblique Multiple Group Method versus the Confirmatory Common Factor Method

    ERIC Educational Resources Information Center

    Stuive, Ilse; Kiers, Henk A. L.; Timmerman, Marieke E.; ten Berge, Jos M. F.

    2008-01-01

    This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of items to subtests. Another method is the oblique…

  17. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    Profile count method for estimating cell number in sectioned tissue applies a correction factor for double count (resulting from transection during sectioning) of count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The Neurotology and House Histological Temporal Bone Laboratory at University of California at Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for nucleus count unit (0.91) and nucleolus count unit (0.92) matched the published factors. We discovered that nuclei and, therefore, spiral ganglion cells were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
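The double-count element of such correction factors is classically expressed by the Abercrombie formula: a count unit of diameter d in a section of thickness T is transected, and hence counted in two adjacent sections, with probability d/(T + d), so raw profile counts are scaled by T/(T + d). A minimal sketch (the 2 µm count-unit diameter is hypothetical, chosen only so the factor lands near the 0.91 quoted above; the study's final nucleolus factor additionally folds in the undercount element):

```python
def abercrombie_factor(section_thickness_um, unit_diameter_um):
    """Classic double-count correction for profile counts in sectioned
    tissue: scale raw counts by T / (T + d)."""
    T, d = section_thickness_um, unit_diameter_um
    return T / (T + d)

# 20-um sections, hypothetical 2-um count unit:
f = abercrombie_factor(20.0, 2.0)
corrected = round(1000 * f)   # a raw profile count of 1000 -> ~909 cells
```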

  18. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Bergstrom, P

    2015-06-15

    Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: The NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber’s response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum.
Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure that dose is accurately and consistently delivered to patients.
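The final integration step can be sketched as a spectrum-weighted average of the tabulated monoenergetic factors. This is a simplification under stated assumptions: the k(E) table and the triangular 50 kVp spectrum below are invented, and a fluence weighting is used, whereas the actual NIST procedure may weight by air kerma rather than fluence.

```python
import numpy as np

# Tabulated monoenergetic correction factors, 2-60 keV in 2 keV steps
# (invented smooth table standing in for the EGSnrc results):
energies = np.arange(2.0, 62.0, 2.0)
k_mono = 1.0 + 0.002 * (energies / 10.0)

def spectrum_weighted_k(spec_e, spec_phi):
    """Weight the tabulated k(E) by a measured x-ray spectrum
    (fluence-weighted average; linear interpolation onto spectrum bins)."""
    k_interp = np.interp(spec_e, energies, k_mono)
    return float(np.sum(k_interp * spec_phi) / np.sum(spec_phi))

# Hypothetical eBx spectrum: triangular, peaked near 30 keV:
spec_e = np.linspace(10.0, 50.0, 81)
spec_phi = np.maximum(0.0, 1.0 - np.abs(spec_e - 30.0) / 20.0)
k_eff = spectrum_weighted_k(spec_e, spec_phi)
```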

  19. A Full-Core Resonance Self-Shielding Method Using a Continuous-Energy Quasi–One-Dimensional Slowing-Down Solution that Accounts for Temperature-Dependent Fuel Subregions and Resonance Interference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yuxuan; Martin, William; Williams, Mark

    In this paper, a correction-based resonance self-shielding method is developed that allows annular subdivision of the fuel rod. The method performs the conventional iteration of the embedded self-shielding method (ESSM) without subdivision of the fuel to capture the interpin shielding effect. The resultant self-shielded cross sections are modified by correction factors incorporating the intrapin effects of radial variation of the shielded cross section, radial temperature distribution, and resonance interference. A quasi-one-dimensional slowing-down equation is developed to calculate such correction factors. The method is implemented in the DeCART code and compared with the conventional ESSM and the subgroup method against benchmark MCNP results. The new method yields substantially improved results for both spatially dependent reaction rates and eigenvalues for typical pressurized water reactor pin cell cases with uniform and nonuniform fuel temperature profiles. Finally, the new method also proves effective in treating assembly heterogeneity and complex material compositions such as mixed oxide fuel, where resonance interference is much more intense.

  20. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of the general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. An appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
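The sweeping idea is easiest to see on the plain (unfactored) eikonal equation in one dimension, where alternating-direction Gauss-Seidel sweeps with an upwind update recover the exact distance solution in a fixed number of sweeps. This sketch illustrates only the causality-enforcing sweeps; the paper's method applies the same machinery to the correction factor rather than to the traveltime itself.

```python
import numpy as np

def fast_sweep_1d(slowness, dx, source_idx, sweeps=2):
    """Gauss-Seidel sweeps in alternating directions for |T'(x)| = s(x),
    with T fixed to 0 at the point source. Plain 1D eikonal solver for
    illustration of the sweeping strategy."""
    n = len(slowness)
    T = np.full(n, np.inf)
    T[source_idx] = 0.0
    for _ in range(sweeps):
        for order in (range(1, n), range(n - 2, -1, -1)):
            for i in order:
                left = T[i - 1] if i > 0 else np.inf
                right = T[i + 1] if i < n - 1 else np.inf
                # Upwind update: information flows from the smaller neighbour,
                # which is what enforces causality.
                T[i] = min(T[i], min(left, right) + slowness[i] * dx)
    return T

# Uniform slowness: traveltime is just distance from the source.
T = fast_sweep_1d(np.ones(101), dx=0.1, source_idx=50)
```

Note the iteration count does not depend on the grid size: one left-to-right and one right-to-left pass suffice in 1D, mirroring the mesh-independence result (1) in the abstract.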

  1. HQET form factors for Bs → Kℓν decays beyond leading order

    NASA Astrophysics Data System (ADS)

    Banerjee, Debasish; Koren, Mateusz; Simma, Hubert; Sommer, Rainer

    2018-03-01

    We compute semi-leptonic Bs decay form factors using Heavy Quark Effective Theory on the lattice. To obtain good control of the 1/mb expansion, one has to take into account not only the leading static order but also the terms arising at O(1/mb): kinetic, spin and current insertions. We show results for these terms calculated through the ratio method, using our prior results for the static order. After combining them with non-perturbative HQET parameters, they can be continuum-extrapolated to give the QCD form factor correct up to O(1/mb^2) corrections and without O(αs(mb)^n) corrections.

  2. Biometrics encryption combining palmprint with two-layer error correction codes

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Qiu, Jian; Dong, Jiwen; Feng, Guang

    2017-07-01

    To bridge the gap between the fuzziness of biometrics and the exactitude of cryptography, a novel biometrics encryption method based on combining palmprint features with two-layer error correction codes is proposed. First, the randomly generated original keys are encoded by convolutional and cyclic two-layer coding: the first layer uses a convolutional code to correct burst errors, and the second layer uses a cyclic code to correct random errors. Then, palmprint features are extracted from the palmprint images and fused with the encoded keys by an XOR operation, and the result is stored in a smart card. Finally, to extract the original keys, the information in the smart card is XORed with the user's palmprint features and decoded with the convolutional and cyclic two-layer code. The experimental results and security analysis show that the method can recover the original keys completely. The proposed method is more secure than a single password factor and has higher accuracy than a single biometric factor.
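The bind-and-recover pattern (a fuzzy-commitment construction) can be shown in miniature. This sketch substitutes a simple repetition code for the paper's convolutional+cyclic two-layer code, and random bit strings for real palmprint features; only the XOR structure is the same.

```python
import random

def rep_encode(bits, r=5):
    """Repeat each key bit r times (toy stand-in for the two-layer ECC)."""
    return [b for b in bits for _ in range(r)]

def rep_decode(coded, r=5):
    """Majority vote per r-bit block: corrects up to (r-1)//2 flips per block."""
    return [int(sum(coded[i:i + r]) > r // 2) for i in range(0, len(coded), r)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

random.seed(7)
key = [random.randint(0, 1) for _ in range(16)]
template = [random.randint(0, 1) for _ in range(16 * 5)]  # enrollment features

stored = xor(rep_encode(key), template)   # what the smart card keeps

# At verification the palmprint features differ slightly (fuzzy biometrics):
noisy = template[:]
for i in (3, 17, 41, 66):                 # flips landing in distinct 5-bit blocks
    noisy[i] ^= 1

recovered = rep_decode(xor(stored, noisy))   # equals key despite the noise
```

XORing the stored value with the noisy template yields the codeword plus a sparse error pattern, which the decoder removes; neither the key nor the template is stored in the clear.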

  3. Intercomparison of methods for coincidence summing corrections in gamma-ray spectrometry--part II (volume sources).

    PubMed

    Lépy, M-C; Altzitzoglou, T; Anagnostakis, M J; Capogni, M; Ceccatelli, A; De Felice, P; Djurasevic, M; Dryak, P; Fazio, A; Ferreux, L; Giampaoli, A; Han, J B; Hurtado, S; Kandic, A; Kanisch, G; Karfopoulos, K L; Klemola, S; Kovar, P; Laubenstein, M; Lee, J H; Lee, J M; Lee, K B; Pierre, S; Carvalhal, G; Sima, O; Tao, Chau Van; Thanh, Tran Thien; Vidmar, T; Vukanac, I; Yang, M J

    2012-09-01

    The second part of an intercomparison of coincidence summing correction methods is presented. This exercise concerned three volume sources, filled with liquid radioactive solution. The same experimental spectra, decay scheme and photon emission intensities were used by all the participants. The results were expressed as coincidence summing correction factors for several energies of (152)Eu and (134)Cs, and different source-to-detector distances. They are presented and discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. High-resolution gamma ray attenuation density measurements on mining exploration drill cores, including cut cores

    NASA Astrophysics Data System (ADS)

    Ross, P.-S.; Bourke, A.

    2017-01-01

    Physical property measurements are increasingly important in mining exploration. For density determinations on rocks, one method applicable on exploration drill cores relies on gamma ray attenuation. This non-destructive method is ideal because each measurement takes only 10 s, making it suitable for high-resolution logging. However, calibration has been problematic. In this paper we present new empirical, site-specific correction equations for whole NQ and BQ cores. The corrections force the gamma densities back to the "true" values established by the immersion method. For the NQ core caliber, the density range extends to high values (massive pyrite, 5 g/cm3) and the correction is thought to be very robust. We also present additional empirical correction factors for cut cores which take into account the missing material. These "cut core correction factors", which are not site-specific, were established by making gamma density measurements on truncated aluminum cylinders of various residual thicknesses. Finally we show two examples of application for the Abitibi Greenstone Belt in Canada. The gamma ray attenuation measurement system is part of a multi-sensor core logger which also determines magnetic susceptibility, geochemistry and mineralogy on rock cores, and performs line-scan imaging.

  5. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and the experimental environment. This continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few of these have been applied in LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. In a background correction simulation, the spline interpolation method achieved the largest signal-to-background ratio (SBR) among spline interpolation, polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods yielded larger SBR values than before correction (the SBR before correction was 10.0992, whereas the SBR values after correction by spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method were 26.9576, 24.6828, 18.9770, and 25.6273 respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method retained a large SBR value, whereas polynomial fitting and the model-free method yielded low SBR values. All of the background correction methods improved the quantitative results for Cu relative to those obtained before correction (the linear correlation coefficient before correction was 0.9776, whereas after correction using spline interpolation, polynomial fitting, Lorentz fitting, and the model-free method it was 0.9998, 0.9915, 0.9895, and 0.9940 respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu than polynomial fitting, Lorentz fitting and the model-free method. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
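The background-subtraction idea in this abstract can be sketched as follows. This is a minimal stand-in that interpolates a baseline through line-free anchor channels (piecewise-linear here, to stay self-contained, rather than the cubic spline of the paper); the synthetic spectrum, anchor positions and line location are all illustrative.

```python
# Estimate a smooth continuous background from anchor points on
# line-free regions, then subtract it to isolate the emission line.

def baseline(x, anchors):
    """Piecewise-linear interpolation through sorted (x, y) anchor points."""
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return anchors[-1][1]

xs = range(100)
# Slowly varying continuum plus a single emission line at channel 50.
spectrum = [5.0 + 0.02 * x + (40.0 if x == 50 else 0.0) for x in xs]

# Anchors chosen on line-free regions of the spectrum.
anchors = [(0, spectrum[0]), (40, spectrum[40]),
           (60, spectrum[60]), (99, spectrum[99])]
corrected = [s - baseline(x, anchors) for x, s in zip(xs, spectrum)]
# The line at channel 50 survives; the continuum is removed elsewhere.
```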

  6. On Aethalometer measurement uncertainties and an instrument correction factor for the Arctic

    NASA Astrophysics Data System (ADS)

    Backman, John; Schmeisser, Lauren; Virkkula, Aki; Ogren, John A.; Asmi, Eija; Starkweather, Sandra; Sharma, Sangeeta; Eleftheriadis, Konstantinos; Uttal, Taneil; Jefferson, Anne; Bergin, Michael; Makshtas, Alexander; Tunved, Peter; Fiebig, Markus

    2017-12-01

    Several types of filter-based instruments are used to estimate aerosol light absorption coefficients. Two significant results are presented based on Aethalometer measurements at six Arctic stations from 2012 to 2014. First, an alternative method of post-processing the Aethalometer data is presented, which reduces measurement noise and lowers the detection limit of the instrument more effectively than boxcar averaging. The biggest benefit of this approach can be achieved if instrument drift is minimised. Moreover, by using an attenuation threshold criterion for data post-processing, the relative uncertainty from the electronic noise of the instrument is kept constant. This approach results in a time series with a variable collection time (Δt) but with a constant relative uncertainty with regard to electronic noise in the instrument. An additional advantage of this method is that the detection limit of the instrument will be lowered at small aerosol concentrations at the expense of temporal resolution, whereas there is little to no loss in temporal resolution at high aerosol concentrations ( > 2.1-6.7 Mm-1 as measured by the Aethalometers). At high aerosol concentrations, minimising the detection limit of the instrument is less critical. Additionally, utilising co-located filter-based absorption photometers, a correction factor is presented for the Arctic that can be used in Aethalometer corrections available in literature. The correction factor of 3.45 was calculated for low-elevation Arctic stations. This correction factor harmonises Aethalometer attenuation coefficients with light absorption coefficients as measured by the co-located light absorption photometers. Using one correction factor for Arctic Aethalometers has the advantage that measurements between stations become more inter-comparable.
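Applying the correction factor reported in this abstract amounts to a simple division. The factor 3.45 is the low-elevation Arctic value stated above; the attenuation coefficient value in the example is made up.

```python
# Harmonise an Aethalometer attenuation coefficient with a co-located
# reference absorption measurement using a multiple-scattering
# correction factor, as described in the abstract.

C_ARCTIC = 3.45  # reported factor for low-elevation Arctic stations

def absorption_from_attenuation(sigma_atn, c=C_ARCTIC):
    """Convert an attenuation coefficient (Mm^-1) to absorption (Mm^-1)."""
    return sigma_atn / c

print(absorption_from_attenuation(6.9))  # ~2.0 Mm^-1 for an illustrative input
```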

  7. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; Zhang, Lei; Kang, Qinjun

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that for the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed.

  8. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE PAGES

    Chen, Li; Zhang, Lei; Kang, Qinjun; ...

    2015-01-28

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that for the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed.

  9. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: permeability and diffusivity

    PubMed Central

    Chen, Li; Zhang, Lei; Kang, Qinjun; Viswanathan, Hari S.; Yao, Jun; Tao, Wenquan

    2015-01-01

    Porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. For the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regimes, with no Darcy flow regime observed. PMID:25627247
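A Knudsen-number-dependent permeability correction of the kind discussed in these three records can be sketched as below. The Beskok-Karniadakis-style form used here is one common empirical correlation, not necessarily the one fitted in the paper, and the intrinsic permeability value is illustrative.

```python
# Apparent permeability = f(Kn) * intrinsic permeability, where the
# correction factor f(Kn) >= 1 grows with Knudsen number, so slip and
# Knudsen diffusion enhance transport relative to Darcy flow.

def correction_factor(kn, alpha=1.2):
    """Beskok-Karniadakis-style correction factor; alpha is a fitted
    rarefaction coefficient (the value here is a placeholder)."""
    return (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))

k_intrinsic = 1.0e-21  # m^2, illustrative nanodarcy-scale shale value
for kn in (0.001, 0.1, 1.0, 10.0):  # slip through transition regimes
    print(kn, correction_factor(kn) * k_intrinsic)
```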

  10. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  11. SU-E-T-472: Improvement of IMRT QA Passing Rate by Correcting Angular Dependence of MatriXX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q; Watkins, W; Kim, T

    2015-06-15

    Purpose: Multi-channel planar detector arrays utilized for IMRT-QA, such as the MatriXX, exhibit an incident-beam angular dependent response which can result in false-positive gamma-based QA results, especially for helical tomotherapy plans, which encompass the full range of beam angles. Although the MatriXX can be used with a gantry angle sensor to apply the angular correction automatically, this sensor does not work with tomotherapy. The purpose of this study is to reduce IMRT-QA false-positives by correcting for the MatriXX angular dependence. Methods: The MatriXX angular dependence was characterized by comparing multiple fixed-angle irradiation measurements with the corresponding TPS-computed doses. For 81 tomo-helical IMRT-QA measurements, two different correction schemes were tested: (1) A Monte Carlo dose engine was used to compute the MatriXX signal based on the angular-response curve; the computed signal was then compared with measurement. (2) The uncorrected computed signal was compared with measurements uniformly scaled to account for the average angular dependence. Three scaling factors (+2%, +2.5%, +3%) were tested. Results: The MatriXX response is 8% less than predicted for a PA beam, even when the couch is fully accounted for. Without angular correction, only 67% of the cases pass the criterion of >90% of points with γ<1 (3%, 3 mm). After full angular correction, 96% of the cases pass the criterion. Of the three scaling factors, +2% gave the highest passing rate (89%), which is still less than that of the full angular correction method. With a stricter γ (2%, 3 mm) criterion, the full angular correction method was still able to achieve a 90% passing rate, while the scaling method gave only a 53% passing rate. Conclusion: Correcting for the MatriXX angular dependence reduced the false-positive rate of our IMRT-QA process. It is necessary to correct for the angular dependence to achieve the IMRT passing criteria specified in TG129.

  12. A vibration correction method for free-fall absolute gravimeters

    NASA Astrophysics Data System (ADS)

    Qian, J.; Wang, G.; Wu, K.; Wang, L. J.

    2018-02-01

    An accurate determination of gravitational acceleration, usually approximated as 9.8 m s-2, plays an important role in metrology, geophysics, and geodesy. Absolute gravimetry has been experiencing rapid developments in recent years. Most absolute gravimeters today employ a free-fall method to measure gravitational acceleration. Noise from ground vibration has become one of the most serious factors limiting measurement precision. Compared to vibration isolators, the vibration correction method is a simple and feasible way to reduce the influence of ground vibrations. A modified vibration correction method is proposed and demonstrated. A two-dimensional golden section search algorithm is used to search for the best parameters of the hypothetical transfer function. Experiments using a T-1 absolute gravimeter are performed. It is verified that for an identical group of drop data, the modified method proposed in this paper can achieve better correction effects with much less computation than previous methods. Compared to vibration isolators, the correction method applies to more hostile environments and even dynamic platforms, and is expected to be used in a wider range of applications.
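The two-dimensional golden section search mentioned in this abstract can be sketched as a coordinate-descent pair of one-dimensional searches. The objective function below is a simple stand-in; the paper fits transfer-function parameters to drop data.

```python
import math

# Golden-section search: shrink a bracketing interval by the inverse
# golden ratio each iteration, reusing one interior function evaluation.

PHI = (math.sqrt(5.0) - 1.0) / 2.0  # inverse golden ratio, ~0.618

def golden_1d(f, lo, hi, tol=1e-9):
    """Minimise a unimodal 1-D function on [lo, hi]."""
    a, b = lo, hi
    c = b - PHI * (b - a)
    d = a + PHI * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return 0.5 * (a + b)

def golden_2d(f, xlo, xhi, ylo, yhi, sweeps=20):
    """Alternate 1-D golden-section searches over each parameter."""
    x, y = 0.5 * (xlo + xhi), 0.5 * (ylo + yhi)
    for _ in range(sweeps):
        x = golden_1d(lambda u: f(u, y), xlo, xhi)
        y = golden_1d(lambda v: f(x, v), ylo, yhi)
    return x, y

# Stand-in objective: quadratic bowl with minimum at (1.5, -0.5).
x, y = golden_2d(lambda u, v: (u - 1.5) ** 2 + (v + 0.5) ** 2, 0, 3, -2, 1)
```

Coordinate descent like this is reliable when the objective is well-behaved in each parameter separately; a residual-versus-parameters surface from drop data would be searched the same way.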

  13. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  14. Determination of 210Pb concentration in NORM waste - An application of the transmission method for self-attenuation corrections for gamma-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Bonczyk, Michal

    2018-07-01

    This article deals with the problem of the self-attenuation of low-energy gamma-rays from the lead isotope 210Pb (46.5 keV) in industrial waste. A total of 167 samples of industrial waste, belonging to nine categories, were tested by means of gamma spectrometry in order to determine the 210Pb activity concentration. The experimental method for self-attenuation corrections for gamma rays emitted by the lead isotope was applied. Mass attenuation coefficients were determined for an energy of 46.5 keV. Correction factors were calculated based on mass attenuation coefficients, sample density and thickness. A mathematical formula for the correction calculation was derived. The 210Pb activity concentrations obtained varied from several Bq·kg-1 up to 19,810 Bq·kg-1. The mass attenuation coefficients varied across the range 0.19-4.42 cm2·g-1. However, the variation of the mass attenuation coefficient within some categories of waste was relatively small. The calculated corrections for self-attenuation were 0.98-6.97. Such high correction factors must not be neglected in radiation risk assessment.
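A self-attenuation correction built from the mass attenuation coefficient, density and thickness, as described in this abstract, can be sketched as below. The infinite-slab expression used here is one standard textbook form, not necessarily the formula derived in the paper, and the sample values are illustrative.

```python
import math

# Self-attenuation correction for a uniform sample: the measured count
# rate underestimates the emission because photons born deep in the
# sample are absorbed before reaching the detector. For a slab viewed
# along its thickness, C = mu*t / (1 - exp(-mu*t)) >= 1.

def self_attenuation_correction(mu_mass, density, thickness):
    """mu_mass in cm^2/g, density in g/cm^3, thickness in cm."""
    mu_t = mu_mass * density * thickness  # dimensionless optical depth
    return mu_t / (1.0 - math.exp(-mu_t))

# Illustrative values; mu_mass lies in the reported 0.19-4.42 cm^2/g range.
print(self_attenuation_correction(1.0, 1.5, 2.0))
```

For very thin or weakly absorbing samples the optical depth is small and the correction tends to 1, matching the lower end of the 0.98-6.97 range reported above.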

  15. A simplified analytical dose calculation algorithm accounting for tissue heterogeneity for low-energy brachytherapy sources.

    PubMed

    Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe

    2013-09-21

    The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seed brachytherapy dose calculation. But for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity for Pd-103 brachytherapy. This correction factor is calculated as a function of the medium's linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source's internal structure. Ultimately the dose in heterogeneous media can be calculated as the product of the dose in water as calculated by the TG-43 protocol and the ICF. To validate the ICF methodology, dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF method can easily be integrated into a clinical treatment planning system and does not require the detailed internal structure of the source or the photon phase-space.
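The multiplicative structure described in this abstract (heterogeneous dose = TG-43 water dose × ICF) can be sketched as below. The specific functional form of the ICF here, a primary-photon attenuation ratio times a mass energy-absorption ratio, is a hypothetical simplification for illustration, not the paper's actual construction, and all coefficient values are made up.

```python
import math

# Hypothetical inhomogeneity correction factor built from the tissue and
# water linear attenuation coefficients (cm^-1) and the ratio of mass
# energy-absorption coefficients, evaluated at radial distance r (cm).

def icf(mu_tissue, mu_water, mu_en_ratio, r_cm):
    """Attenuation-difference factor times energy-absorption ratio."""
    return math.exp(-(mu_tissue - mu_water) * r_cm) * mu_en_ratio

dose_water = 1.0  # Gy, as reported by a TG-43 calculation (illustrative)
dose_tissue = dose_water * icf(0.80, 0.77, 1.05, 2.0)
```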

  16. Impact of correction factors in human brain lesion-behavior inference.

    PubMed

    Sperber, Christoph; Karnath, Hans-Otto

    2017-03-01

    Statistical voxel-based lesion-behavior mapping (VLBM) in neurological patients with brain lesions is frequently used to examine the relationship between structure and function of the healthy human brain. Only recently, two simulation studies noted reduced anatomical validity of this method, observing the results of VLBM to be systematically misplaced by about 16 mm. However, both simulation studies differed from VLBM analyses of real data in that they lacked the proper use of two correction factors: lesion size and "sufficient lesion affection." In simulation experiments on a sample of 274 real stroke patients, we found that the use of these two correction factors reduced misplacement markedly compared to uncorrected VLBM. Apparently, the misplacement is due to physiological effects of brain lesion anatomy. Voxel-wise topographies of collateral damage in the real data were generated and used to compute a metric for the inter-voxel relation of brain damage. "Anatomical bias" vectors that were calculated solely from these inter-voxel relations in the patients' real anatomical data successfully predicted the VLBM misplacement. The latter has the potential to help in the development of new VLBM methods that provide even higher anatomical validity than currently available by the proper use of correction factors. Hum Brain Mapp 38:1692-1701, 2017. © 2017 Wiley Periodicals, Inc.

  17. Simple, Fast and Effective Correction for Irradiance Spatial Nonuniformity in Measurement of IVs of Large Area Cells at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Tom

    The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributing factor to errors and uncertainty in Jsc, Imax, Pmax and efficiency can be the irradiance spatial nonuniformity. Correcting for this nonuniformity through its precise and frequent measurement can be very time consuming. This paper explains a simple, fast and effective method based on bicubic interpolation for determining and correcting for spatial nonuniformity, and verifies the method's efficacy.

  18. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta's Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB, for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) to CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3, respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw/(CFIVB*Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
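The correspondence-factor arithmetic in this abstract is a simple ratio and can be sketched as below. The raw well-chamber readings are illustrative; 1.073 is the reported Valencia V2 correction factor for the SI IVB 1000.

```python
# CFrev = Rapp,raw / (CF_IVB * Rcal,raw); with CF_IVB = 1 this reduces
# to the published SI HDR 1000 Plus formula CFrev = Rapp,raw / Rcal,raw.

def cf_rev(r_app_raw, r_cal_raw, cf_ivb=1.0):
    """Correspondence factor from raw applicator and calibration readings."""
    return r_app_raw / (cf_ivb * r_cal_raw)

r_cal, r_app = 100.0, 4.2  # arbitrary illustrative raw readings
uncorrected = cf_rev(r_app, r_cal)          # as measured on the SI IVB 1000
corrected   = cf_rev(r_app, r_cal, 1.073)   # Valencia V2 factor applied
```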

  19. A propensity score approach to correction for bias due to population stratification using genetic and non-genetic factors.

    PubMed

    Zhao, Huaqing; Rebbeck, Timothy R; Mitra, Nandita

    2009-12-01

    Confounding due to population stratification (PS) arises when differences in both allele and disease frequencies exist in a population of mixed racial/ethnic subpopulations. Genomic control, structured association, principal components analysis (PCA), and multidimensional scaling (MDS) approaches have been proposed to address this bias using genetic markers. However, confounding due to PS can also be due to non-genetic factors. Propensity scores are widely used to address confounding in observational studies but have not been adapted to deal with PS in genetic association studies. We propose a genomic propensity score (GPS) approach to correct for bias due to PS that considers both genetic and non-genetic factors. We compare the GPS method with PCA and MDS using simulation studies. Our results show that GPS can adequately adjust and consistently correct for bias due to PS. Under no/mild, moderate, and severe PS, GPS yielded estimates with bias close to 0 (mean=-0.0044, standard error=0.0087). Under moderate or severe PS, the GPS method consistently outperforms the PCA method in terms of bias, coverage probability (CP), and type I error. Under moderate PS, the GPS method consistently outperforms the MDS method in terms of CP. PCA maintains relatively high power compared to both MDS and GPS methods under the simulated situations. GPS and MDS are comparable in terms of statistical properties such as bias, type I error, and power. The GPS method provides a novel and robust tool for obtaining less-biased estimates of genetic associations that can consider both genetic and non-genetic factors. 2009 Wiley-Liss, Inc.

  20. A simplified method for correcting contaminant concentrations in eggs for moisture loss.

    USGS Publications Warehouse

    Heinz, Gary H.; Stebbins, Katherine R.; Klimstra, Jon D.; Hoffman, David J.

    2009-01-01

    We developed a simplified and highly accurate method for correcting contaminant concentrations in eggs for the moisture that is lost from an egg during incubation. To make the correction, one injects water into the air cell of the egg until overflowing. The amount of water injected corrects almost perfectly for the amount of water lost during incubation or when an egg is left in the nest and dehydrates and deteriorates over time. To validate the new method we weighed freshly laid chicken (Gallus gallus) eggs and then incubated sets of fertile and dead eggs for either 12 or 19 d. We then injected water into the air cells of these eggs and verified that the weights after water injection were almost identical to the weights of the eggs when they were fresh. The advantages of the new method are its speed, accuracy, and simplicity: It does not require the calculation of a correction factor that has to be applied to each contaminant residue.

  1. Determination of velocity correction factors for real-time air velocity monitoring in underground mines.

    PubMed

    Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-12-01

    When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.
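Using a velocity correction factor of the kind studied in this record amounts to scaling the fixed-point sensor reading to an average velocity before multiplying by the entry's cross-sectional area. The numeric values below are illustrative, not from the study.

```python
# Volume flow rate from a fixed-point (e.g. centerline) sensor reading:
# Q = (sensor velocity * correction factor) * cross-sectional area.
# The correction factor converts the point reading to the cross-section
# average, since centerline velocity typically exceeds the average.

def volume_flow_rate(sensor_velocity, correction_factor, area_m2):
    """Return Q in m^3/s from velocity (m/s) and area (m^2)."""
    v_avg = sensor_velocity * correction_factor
    return v_avg * area_m2

q = volume_flow_rate(2.0, 0.85, 10.0)  # illustrative: 2.0 m/s centerline
print(q)  # m^3/s
```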

  2. Estimating Occupancy of Gopher Tortoise (Gopherus polyphemus) Burrows in Coastal Scrub and Slash Pine Flatwoods

    NASA Technical Reports Server (NTRS)

    Breininger, David R.; Schmalzer, Paul A.; Hinkle, C. Ross

    1991-01-01

    One hundred twelve plots were established in coastal scrub and slash pine flatwoods habitats on the John F. Kennedy Space Center (KSC) to evaluate relationships between the number of burrows and gopher tortoise (Gopherus polyphemus) density. All burrows were located within these plots and were classified according to tortoise activity. Depending on season, bucket trapping, a stick method, a gopher tortoise pulling device, and a camera system were used to estimate tortoise occupancy. Correction factors (% of burrows occupied) were calculated by season and habitat type. Our data suggest that less than 20% of the active and inactive burrows combined were occupied during seasons when gopher tortoises were active. Correction factors were higher in poorly-drained areas and lower in well-drained areas during the winter, when gopher tortoise activity was low. Correction factors differed from studies elsewhere, indicating that population estimates require correction factors specific to the site and season to accurately estimate population size.

  3. Determination of velocity correction factors for real-time air velocity monitoring in underground mines

    PubMed Central

    Yuan, Liming; Thomas, Rick; Iannacchione, Anthony

    2017-01-01

    When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed. PMID:29201495
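    The correction factor in this record is, at bottom, the ratio of the average cross-sectional air velocity to a single-point reading. A minimal sketch of the computation, with hypothetical traverse readings rather than data from the study:

```python
def correction_factor(grid_velocities, point_velocity):
    """Ratio of the average cross-sectional air velocity (estimated from
    a traverse over a measurement grid) to a single-point velocity such
    as a centerline or sensor reading."""
    avg = sum(grid_velocities) / len(grid_velocities)
    return avg / point_velocity

# Hypothetical 3x3 fixed-point traverse (m/s); the centerline sensor
# reads 2.0 m/s. These numbers are illustrative only.
grid = [1.2, 1.6, 1.3,
        1.7, 2.0, 1.8,
        1.3, 1.6, 1.2]
cf = correction_factor(grid, 2.0)
```

Multiplying a monitored sensor velocity by such a factor yields the average velocity, and hence the volume flow rate once the entry's cross-sectional area is known.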

  4. An experimental comparison of ETM+ image geometric correction methods in the mountainous areas of Yunnan Province, China

    NASA Astrophysics Data System (ADS)

    Wang, Jinliang; Wu, Xuejiao

    2010-11-01

    Geometric correction of imagery is a basic application of remote sensing technology, and its precision directly affects the accuracy and reliability of subsequent applications. The accuracy of geometric correction depends on many factors, including the correction model used, the accuracy of the reference map, the number of ground control points (GCPs) and their spatial distribution, and the resampling method. An ETM+ image of the Kunming Dianchi Lake Basin and 1:50000 geographical maps were used to compare different correction methods. The results showed that: (1) Correction errors exceeded one pixel, sometimes by several pixels, when the polynomial model was used; correction accuracy was not stable when the Delaunay model was used; and correction errors were less than one pixel when the collinearity equation was used. (2) When 6, 9, 25 and 35 GCPs were selected randomly for geometric correction with the polynomial model, the best result was obtained with 25 GCPs. (3) Among the resampling methods, nearest-neighbor resampling gave the best contrast and the fastest resampling, but the continuity of pixel gray values was poor; cubic convolution gave the worst contrast and the longest computation time. Overall, bilinear resampling gave the best result.

  5. Air density correction in ionization dosimetry.

    PubMed

    Christ, G; Dohm, O S; Schüle, E; Gaupp, S; Martin, M

    2004-05-21

    Air density must be taken into account when ionization dosimetry is performed with unsealed ionization chambers. The German dosimetry protocol DIN 6800-2 states an air density correction factor for which current barometric pressure and temperature and their reference values must be known. It also states that differences between air density and the attendant reference value, as well as changes in ionization chamber sensitivity, can be determined using a radioactive check source. Both methods have advantages and drawbacks which the paper discusses in detail. Barometric pressure at a given height above sea level can be determined by using a suitable barometer, or data downloaded from airport or weather service internet sites. The main focus of the paper is to show how barometric data from measurement or from the internet are correctly processed. Therefore the paper also provides all the requisite equations and terminological explanations. Computed and measured barometric pressure readings are compared, and long-term experience with air density correction factors obtained using both methods is described.
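    The air density correction factor itself takes a simple form: the chamber reading is scaled by the ratio of reference to current air density. A sketch using the reference conditions commonly adopted in dosimetry protocols (20 °C, 1013.25 hPa); the applicable protocol (e.g., DIN 6800-2) should be consulted for the exact convention:

```python
def air_density_correction(pressure_hpa, temp_c,
                           p_ref=1013.25, t_ref_c=20.0):
    """Correction factor k_TP for an unsealed (vented) ionization
    chamber: scales the reading back to reference air density.
    Reference values here are the commonly used ones, not necessarily
    those of every protocol."""
    return (p_ref / pressure_hpa) * ((273.15 + temp_c) / (273.15 + t_ref_c))

# Example: at 950 hPa and 22 degrees C the air in the chamber is less
# dense, less charge is collected, and the factor exceeds 1.
k = air_density_correction(950.0, 22.0)
```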

  6. Data-driven sensitivity inference for Thomson scattering electron density measurement systems.

    PubMed

    Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro

    2017-01-01

    We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel-to-channel variations in sensitivity and noise amplitude, from experimental data. We regard such uncertainties in the calibration parameters as dependent noise. The statistical properties of the dependent noise and those of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both sets of parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on 210 sets of experimental data, we evaluated the correction factor of the sensitivity and the noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. An improvement in the certainty of spatial-derivative inference was also demonstrated.

  7. Correction of photoresponse nonuniformity for matrix detectors based on prior compensation for their nonlinear behavior.

    PubMed

    Ferrero, Alejandro; Campos, Joaquin; Pons, Alicia

    2006-04-10

    What we believe to be a novel procedure to correct the nonuniformity that is inherent in all matrix detectors has been developed and experimentally validated. This correction method, unlike other nonuniformity-correction algorithms, consists of two steps that separate two of the usual problems that affect characterization of matrix detectors, i.e., nonlinearity and the relative variation of the pixels' responsivity across the array. The correction of the nonlinear behavior remains valid for any illumination wavelength employed, as long as the nonlinearity is not due to power dependence of the internal quantum efficiency. This method of correction of nonuniformity permits the immediate calculation of the correction factor for any given power level and for any illuminant that has a known spectral content once the nonuniform behavior has been characterized for a sufficient number of wavelengths. This procedure has a significant advantage compared with other traditional calibration-based methods, which require that a full characterization be carried out for each spectral distribution pattern of the incident optical radiation. The experimental application of this novel method has achieved a 20-fold increase in the uniformity of a CCD array for response levels close to saturation.

  8. An accurate filter loading correction is essential for assessing personal exposure to black carbon using an Aethalometer.

    PubMed

    Good, Nicholas; Mölter, Anna; Peel, Jennifer L; Volckens, John

    2017-07-01

    The AE51 micro-Aethalometer (microAeth) is a popular and useful tool for assessing personal exposure to particulate black carbon (BC). However, few users of the AE51 are aware that its measurements are biased low (by up to 70%) due to the accumulation of BC on the filter substrate over time; previous studies of personal black carbon exposure are likely to have suffered from this bias. Although methods to correct for bias in micro-Aethalometer measurements of particulate black carbon have been proposed, these methods have not been verified in the context of personal exposure assessment. Here, five Aethalometer loading correction equations based on published methods were evaluated. Laboratory-generated aerosols of varying black carbon content (ammonium sulfate, Aquadag and NIST diesel particulate matter) were used to assess the performance of these methods. Filters from a personal exposure assessment study were also analyzed to determine how the correction methods performed for real-world samples. Standard correction equations produced correction factors with root mean square errors of 0.10 to 0.13 and mean bias within ±0.10. An optimized correction equation is also presented, along with sampling recommendations for minimizing bias when assessing personal exposure to BC using the AE51 micro-Aethalometer.
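    As an illustration of what a loading correction looks like, one widely cited published form divides the raw reading by a factor that shrinks as attenuation (ATN) grows. The functional form and coefficients below follow Kirchstetter and Novakov (2007); they are not the optimized equation developed in this study:

```python
import math

def loading_corrected_bc(bc_raw, atn, a=0.88, b=0.12):
    """Filter-loading correction of the Kirchstetter-Novakov form:
    divide the raw BC reading by (a * Tr + b), where the filter
    transmission Tr = exp(-ATN / 100) decreases as black carbon
    accumulates. Coefficients a and b are taken from that published
    study, not from the abstract above."""
    transmission = math.exp(-atn / 100.0)
    return bc_raw / (a * transmission + b)

# A clean filter (ATN = 0) needs no correction; a heavily loaded filter
# (ATN = 100) is corrected upward, consistent with the low bias the
# abstract describes.
```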

  9. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behavior of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during follow-up and on the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during follow-up. Thoracic curves in group B deteriorated after spontaneous correction, with a positive correction loss. Univariate and multivariate analyses were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately, a spontaneous correction of 48.5%. At final follow-up it was 14°, a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in the 23 cases in group A, while 22 cases in group B showed thoracic curve progression. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and a better immediate spontaneous correction rate of the thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during follow-up, and close attention must be paid to these patients in case of decompensation.
Level of Evidence: 4 PMID:27831989

  10. Automated general temperature correction method for dielectric soil moisture sensors

    NASA Astrophysics Data System (ADS)

    Kapilaratne, R. G. C. Jeewantinie; Lu, Minjiao

    2017-08-01

    An effective temperature correction method for dielectric sensors is important to ensure the accuracy of soil water content (SWC) measurements in local- to regional-scale soil moisture monitoring networks. These networks rely heavily on highly temperature-sensitive dielectric sensors because of their low cost, ease of use and low power consumption. Yet there is no general temperature correction method for dielectric sensors; instead, sensor- or site-dependent correction algorithms are employed. Such methods become ineffective for soil moisture monitoring networks with different sensor setups and those covering diverse climatic conditions and soil types. This study attempted to develop a general temperature correction method for dielectric sensors that can be used regardless of differences in sensor type, climatic conditions and soil type, and without rainfall data. An automated general temperature correction method was developed by adapting previously developed temperature correction algorithms, based on time domain reflectometry (TDR) measurements, to ThetaProbe ML2X, Stevens Hydra Probe II and Decagon Devices EC-TM sensor measurements. The removal of rainy-day effects from the SWC data was automated by incorporating a statistical inference technique into the temperature correction algorithms. The method was evaluated using 34 stations from the International Soil Moisture Network and another nine stations from a local soil moisture monitoring network in Mongolia. The networks used in this study cover four major climates and six major soil types. Results indicated that the automated temperature correction algorithms developed in this study can successfully eliminate temperature effects from dielectric sensor measurements even without on-site rainfall data. Furthermore, the actual daily average SWC was found to be altered by sensor temperature effects by an amount comparable to the manufacturer's stated accuracy of ±1%.

  11. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulated the correction factors due to the perturbation caused by TLD-100 chips (Harshaw/Thermo Scientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small-field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: −743 ± 11) sandwiched between 4 cm thick slabs of Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with field size 2 × 2 cm² was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system with Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. The correction factor obtained by EGSnrc is expected to be more accurate, as it is able to simulate the actual phantom material compositions. AXB has a limited material library and therefore only approximates the composition of the TLD, composite cork and Plastic Water, contributing to uncertainties in the TLD correction factors. Conclusion: The correction factors obtained by EGSnrc are expected to be more accurate. Studies will be done to investigate the correction factors for higher energies, where perturbation may be more pronounced.

  12. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    PubMed

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods based on the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. The limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider at the design phase the requirements of the measurement error correction method to be applied later, while methodological advances are needed for the multi-pollutant setting.
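    Of the methods listed, SIMEX is perhaps the easiest to sketch: extra noise is deliberately added to the error-prone exposure, the naive estimate is tracked as a function of the added-noise level, and the trend is extrapolated back to the no-error case. A self-contained sketch on simulated data (not data from any reviewed study), assuming a known error variance and a quadratic extrapolant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort: true exposure x, error-prone measurement w, outcome y.
# Classical measurement error attenuates the naive regression slope.
n, true_beta, sigma_u = 2000, 1.0, 0.8
x = rng.normal(size=n)                       # true exposure
w = x + rng.normal(scale=sigma_u, size=n)    # measured with error
y = true_beta * x + rng.normal(scale=0.5, size=n)

lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    # Average the naive slope over replicates with extra noise of
    # variance lam * sigma_u**2 added to the measurement.
    reps = [np.polyfit(w + rng.normal(scale=np.sqrt(lam) * sigma_u, size=n),
                       y, 1)[0]
            for _ in range(20)]
    slopes.append(np.mean(reps))

# Quadratic extrapolation of slope(lambda) back to lambda = -1,
# the hypothetical "no measurement error" point.
coeffs = np.polyfit(lambdas, slopes, 2)
beta_simex = np.polyval(coeffs, -1.0)
```

The quadratic extrapolant typically undercorrects somewhat, but the SIMEX estimate lands markedly closer to the true slope than the naive one.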

  13. Comparison of adjoint and analytical Bayesian inversion methods for constraining Asian sources of carbon monoxide using satellite (MOPITT) measurements of CO columns

    NASA Astrophysics Data System (ADS)

    Kopacz, Monika; Jacob, Daniel J.; Henze, Daven K.; Heald, Colette L.; Streets, David G.; Zhang, Qiang

    2009-02-01

    We apply the adjoint of an atmospheric chemical transport model (GEOS-Chem CTM) to constrain Asian sources of carbon monoxide (CO) with 2° × 2.5° spatial resolution using Measurement of Pollution in the Troposphere (MOPITT) satellite observations of CO columns in February-April 2001. Results are compared to the more common analytical method for solving the same Bayesian inverse problem and applied to the same data set. The analytical method is more exact but because of computational limitations it can only constrain emissions over coarse regions. We find that the correction factors to the a priori CO emission inventory from the adjoint inversion are generally consistent with those of the analytical inversion when averaged over the large regions of the latter. The adjoint solution reveals fine-scale variability (cities, political boundaries) that the analytical inversion cannot resolve, for example, in the Indian subcontinent or between Korea and Japan, and some of that variability is of opposite sign which points to large aggregation errors in the analytical solution. Upward correction factors to Chinese emissions from the prior inventory are largest in central and eastern China, consistent with a recent bottom-up revision of that inventory, although the revised inventory also sees the need for upward corrections in southern China where the adjoint and analytical inversions call for downward correction. Correction factors for biomass burning emissions derived from the adjoint and analytical inversions are consistent with a recent bottom-up inventory on the basis of MODIS satellite fire data.

  14. Calibration of 4π NaI(Tl) detectors with coincidence summing correction using new numerical procedure and ANGLE4 software

    NASA Astrophysics Data System (ADS)

    Badawi, Mohamed S.; Jovanovic, Slobodan I.; Thabet, Abouzeid A.; El-Khatib, Ahmed M.; Dlabac, Aleksandar D.; Salem, Bohaysa A.; Gouda, Mona M.; Mihaljevic, Nikola N.; Almugren, Kholud S.; Abbas, Mahmoud I.

    2017-03-01

    4π NaI(Tl) γ-ray detectors consist of a well cavity with a cylindrical cross section, and their enclosing measurement geometry provides a large detection angle. This leads to an exceptionally high efficiency and a significant coincidence summing effect, much greater than for a single cylindrical or coaxial detector, especially in very low activity measurements. In the present work, the detection effective solid angle, as well as the full-energy peak and total efficiencies of well-type detectors, were calculated by a new numerical simulation method (NSM) and by the ANGLE4 software. To obtain the coincidence summing correction factors with these methods, the coincident emission of photons was modeled mathematically, based on analytical equations and complex integrations over the radioactive volumetric sources, including the self-attenuation factor. The full-energy peak efficiencies and correction factors were measured using 152Eu; an exact adjustment of the detector efficiency curve is required, because neglecting the coincidence summing effect makes the results inconsistent. These phenomena arise jointly from the efficiency calibration process and the coincidence summing corrections. The full-energy peak and total efficiencies from the two methods typically agree within a discrepancy of 10%. The discrepancy between the simulation, ANGLE4 and the measured full-energy peak efficiencies after correction for the coincidence summing effect did not exceed 14% on average. Therefore, this technique can easily be applied in establishing the efficiency calibration curves of well-type detectors.

  15. Extended hybrid-space SENSE for EPI: Off-resonance and eddy current corrected joint interleaved blip-up/down reconstruction.

    PubMed

    Zahneisen, Benjamin; Aksoy, Murat; Maclaren, Julian; Wuerslin, Christian; Bammer, Roland

    2017-06-01

    Geometric distortions along the phase encode direction caused by off-resonant spins are still a major issue in EPI based functional and diffusion imaging. If the off-resonance map is known it is possible to correct for distortions. Most correction methods operate as a post-processing step on the reconstructed magnitude images. Here, we present an algebraic reconstruction method (hybrid-space SENSE) that incorporates a physics based model of off-resonances, phase inconsistencies between k-space segments, and T2*-decay during the acquisition. The method can be used to perform a joint reconstruction of interleaved acquisitions with normal (blip-up) and inverted (blip-down) phase encode direction which results in reduced g-factor penalty. A joint blip-up/down simultaneous multi slice (SMS) reconstruction for SMS-factor 4 in combination with twofold in-plane acceleration leads to a factor of two decrease in maximum g-factor penalty while providing off-resonance and eddy-current corrected images. We provide an algebraic framework for reconstructing diffusion weighted EPI data that in addition to the general applicability of hybrid-space SENSE to 2D-EPI, SMS-EPI and 3D-EPI with arbitrary k-space coverage along z, allows for a modeling of arbitrary spatio-temporal effects during the acquisition period like off-resonances, phase inconsistencies and T2*-decay. The most immediate benefit is a reduction in g-factor penalty if an interleaved blip-up/down acquisition strategy is chosen which facilitates eddy current estimation and ensures no loss in k-space encoding in regions with strong off-resonance gradients. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Light-Field Correction for Spatial Calibration of Optical See-Through Head-Mounted Displays.

    PubMed

    Itoh, Yuta; Klinker, Gudrun

    2015-04-01

    A critical requirement for AR applications with Optical See-Through Head-Mounted Displays (OST-HMD) is to project 3D information correctly into the current viewpoint of the user - more particularly, according to the user's eye position. Recently proposed interaction-free calibration methods [16], [17] automatically estimate this projection by tracking the user's eye position, thereby freeing users from tedious manual calibrations. However, these methods are still prone to systematic calibration errors. Such errors stem from eye- and HMD-related factors and are not represented in the conventional eye-HMD model used for HMD calibration. This paper investigates one of these factors - the fact that optical elements of OST-HMDs distort incoming world-light rays before they reach the eye, just as corrective glasses do. Any OST-HMD requires an optical element to display a virtual screen, and each such optical element has different distortions. Since users see a distorted world through the element, ignoring this distortion degrades the projection quality. We propose a light-field correction method, based on a machine learning technique, which compensates for the world-scene distortion caused by OST-HMD optics. We demonstrate that our method reduces the systematic error and significantly increases the calibration accuracy of the interaction-free calibration.

  17. CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.

    PubMed

    Saegusa, Jun

    2008-01-01

    The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.

  18. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries on the rheological properties of various complex fluids were investigated, covering single-phase systems (polymer melts and solutions) and multiphase systems (a polymer blend, a nanocomposite, and a suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but the results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by shear rate correction, which is applicable to highly shear-thinning fluids. Distinct corrections were also observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, whereas the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of the xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5.
These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.

  19. Procedure for Determining Speed and Climbing Performance of Airships

    NASA Technical Reports Server (NTRS)

    Thompson, F L

    1936-01-01

    The procedure for obtaining air-speed and rate-of-climb measurements in performance tests of airships is described. Two methods of obtaining speed measurements, one by means of instruments in the airship and the other by flight over a measured ground course, are explained. Instruments, their calibrations, necessary correction factors, observations, and calculations are detailed for each method, and also for the rate-of-climb tests. A method of correction for the effect on density of moist air and a description of other methods of speed course testing are appended.

  20. Numerical method for angle-of-incidence correction factors for diffuse radiation incident photovoltaic modules

    DOE PAGES

    Marion, Bill

    2017-03-27

    Here, a numerical method is provided for solving the integral equation for the angle-of-incidence (AOI) correction factor for diffuse radiation incident on photovoltaic (PV) modules. The types of diffuse radiation considered include sky, circumsolar, horizon, and ground-reflected. The method permits PV module AOI characteristics to be addressed when calculating AOI losses associated with diffuse radiation. Pseudo code is provided to aid users in the implementation, and results are shown for PV modules with tilt angles from 0° to 90°. Diffuse AOI losses are greatest for small PV module tilt angles. Including AOI losses associated with the diffuse irradiance will improve predictions of PV system performance.
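    The flavor of the computation can be conveyed for the simplest case, a horizontal module under an isotropic sky, where azimuthal symmetry reduces the sky-dome integral to one dimension. The incidence-angle modifier below is the ASHRAE form with an assumed coefficient, not a parameterization from the paper:

```python
import math

def iam_ashrae(theta, b0=0.05):
    """Hypothetical incidence-angle modifier (ASHRAE form); b0 is an
    assumed module parameter, not a value from the paper. Clipped at
    zero near grazing incidence."""
    if theta >= math.pi / 2:
        return 0.0
    return max(0.0, 1.0 - b0 * (1.0 / math.cos(theta) - 1.0))

def diffuse_aoi_factor(n_steps=2000):
    """Diffuse-sky AOI correction factor for a horizontal module under
    an isotropic sky: the IAM weighted by the cosine-projected radiance,
    integrated over the sky dome by the midpoint rule (azimuthal
    symmetry leaves a single integral over zenith angle theta)."""
    num = den = 0.0
    d_theta = (math.pi / 2) / n_steps
    for i in range(n_steps):
        theta = (i + 0.5) * d_theta
        w = math.cos(theta) * math.sin(theta) * d_theta
        num += iam_ashrae(theta) * w
        den += w
    return num / den

f_diffuse = diffuse_aoi_factor()  # slightly below 1: a diffuse AOI loss
```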

  1. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments.

    PubMed

    Li, Xinpeng; Li, Hong; Liu, Yun; Xiong, Wei; Fang, Sheng

    2018-03-05

    The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved for by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases at a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and the extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I've calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. The D_max dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second-check calculations. Patient curvature was estimated by the largest sphere that aligns with the patient contour, and the appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and by TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ∼(0.8–1.0) and ∼(0.9–1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ∼(3–14)% to ∼(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels (< 5%). While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
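    The way such factors enter a hand calculation is a simple division; since both factors are below unity, the corrected MU rises toward the Monte Carlo value. A sketch with hypothetical factor values, not values from the tables described:

```python
def corrected_mu(mu_tg71, cf_geometry, cf_heterogeneity):
    """Apply geometry and heterogeneity correction factors to a TG-71
    hand-calculated monitor unit value. Each factor is a dose ratio
    (perturbed geometry / flat water), so dividing by factors below 1
    raises the MU toward the Monte Carlo planning result."""
    return mu_tg71 / (cf_geometry * cf_heterogeneity)

# Hypothetical example: TG-71 gives 200 MU; curvature and lung-density
# factors of 0.92 and 0.97 raise the verification MU by about 12%.
mu = corrected_mu(200.0, 0.92, 0.97)
```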

  3. Revisions to some parameters used in stochastic-method simulations of ground motion

    USGS Publications Warehouse

    Boore, David; Thompson, Eric M.

    2015-01-01

    The stochastic method of ground‐motion simulation specifies the amplitude spectrum as a function of magnitude (M) and distance (R). The manner in which the amplitude spectrum varies with M and R depends on physical‐based parameters that are often constrained by recorded motions for a particular region (e.g., stress parameter, geometrical spreading, quality factor, and crustal amplifications), which we refer to as the seismological model. The remaining ingredient for the stochastic method is the ground‐motion duration. Although the duration obviously affects the character of the ground motion in the time domain, it also significantly affects the response of a single‐degree‐of‐freedom oscillator. Recently published updates to the stochastic method include a new generalized double‐corner‐frequency source model, a new finite‐fault correction, a new parameterization of duration, and a new duration model for active crustal regions. In this article, we augment these updates with a new crustal amplification model and a new duration model for stable continental regions. Random‐vibration theory (RVT) provides a computationally efficient method to compute the peak oscillator response directly from the ground‐motion amplitude spectrum and duration. Because the correction factor used to account for the nonstationarity of the ground motion depends on the ground‐motion amplitude spectrum and duration, we also present new RVT correction factors for both active and stable regions.

  4. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require the measurement of the air kerma strength of seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P{sub TP}, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically used seeds (IsoAid ADVANTAGE™ {sup 103}Pd and Nucletron selectSeed {sup 125}I) for which empirical altitude correction factors do not yet exist in the literature when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pressurized or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P{sub TP}, were acquired for each seed at these pressures and normalized to the reading at ‘standard’ pressure (1013.25 mbar). Results: Measurements in this study have shown that utilization of P{sub TP} can overcompensate in the corrected current reading by up to 20% and 17% for the IsoAid Pd-103 and the Nucletron I-125 seeds, respectively. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium, respectively, indicating the need for seed-specific factors. Conclusion: The use of seed-specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ {sup 103}Pd and Nucletron selectSeed {sup 125}I) with the HDR 1000 Plus well chamber.
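
    For context, a sketch of the standard air-density correction that the study finds inadequate for low-energy seeds. The reference values follow common well-chamber practice (22 °C, 1013.25 mbar); they are assumptions here, not quantities quoted in the abstract.

```python
P0_MBAR = 1013.25   # reference pressure
T0_K = 295.15       # reference temperature (22 degrees C)

def p_tp(pressure_mbar, temperature_k):
    """Standard temperature-pressure correction P_TP = (T/T0) * (P0/P).

    This corrects an air-communicating chamber reading for air density;
    the study shows it can overcompensate by up to ~20% for low-energy
    brachytherapy seeds, motivating empirical seed-specific factors.
    """
    return (temperature_k / T0_K) * (P0_MBAR / pressure_mbar)
```

    At reduced pressure (higher altitude) P_TP exceeds 1; the measured overcompensation means the product of reading and P_TP drifts above the sea-level value, which the empirical seed-specific factors then remove.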

  5. Improved estimates of environmental copper release rates from antifouling products.

    PubMed

    Finnie, Alistair A

    2006-01-01

    The US Navy Dome method for measuring copper release rates from antifouling paint in-service on ships' hulls can be considered to be the most reliable indicator of environmental release rates. In this paper, the relationship between the apparent copper release rate and the environmental release rate is established for a number of antifouling coating types using data from a variety of available laboratory, field and calculation methods. Apart from a modified Dome method using panels, all laboratory, field and calculation methods significantly overestimate the environmental release rate of copper from antifouling coatings. The difference is greatest for self-polishing copolymer antifoulings (SPCs) and smallest for certain erodible/ablative antifoulings, where the ASTM/ISO standard and the CEPE calculation method are seen to typically overestimate environmental release rates by factors of about 10 and 4, respectively. Where ASTM/ISO or CEPE copper release rate data are used for environmental risk assessment or regulatory purposes, it is proposed that the release rate values should be divided by a correction factor to enable more reliable generic environmental risk assessments to be made. Using a conservative approach based on a realistic worst case and accounting for experimental uncertainty in the data that are currently available, proposed default correction factors for use with all paint types are 5.4 for the ASTM/ISO method and 2.9 for the CEPE calculation method. Further work is required to expand this data-set and refine the correction factors through correlation of laboratory measured and calculated copper release rates with the direct in situ environmental release rate for different antifouling paints under a range of environmental conditions.
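
    The proposed generic adjustment reduces to dividing a laboratory or calculated release rate by the default correction factor before use in risk assessment. A sketch using the two default values given in the abstract (function and units are illustrative):

```python
# Default correction factors proposed in the abstract for all paint types.
DEFAULT_CORRECTION = {"ASTM/ISO": 5.4, "CEPE": 2.9}

def environmental_release_rate(measured_rate, method):
    """Estimate the environmental copper release rate (e.g. ug/cm2/day)
    from an ASTM/ISO-measured or CEPE-calculated rate by dividing by
    the proposed default correction factor."""
    return measured_rate / DEFAULT_CORRECTION[method]
```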

  6. Improved scatter correction with factor analysis for planar and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Knoll, Peter; Rahmim, Arman; Gültekin, Selma; Šámal, Martin; Ljungberg, Michael; Mirzaei, Siroos; Segars, Paul; Szczupak, Boguslaw

    2017-09-01

    Quantitative nuclear medicine imaging is an increasingly important frontier. In order to achieve quantitative imaging, various interactions of photons with matter have to be modeled and compensated. Although correction for photon attenuation has been addressed by including x-ray CT scans (accurate), correction for Compton scatter remains an open issue. The inclusion of scattered photons within the energy window used for planar or SPECT data acquisition decreases the contrast of the image. While a number of methods for scatter correction have been proposed in the past, in this work, we propose and assess a novel, user-independent framework applying factor analysis (FA). Extensive Monte Carlo simulations for planar and tomographic imaging were performed using the SIMIND software. Furthermore, planar acquisition of two Petri dishes filled with 99mTc solutions and a Jaszczak phantom study (Data Spectrum Corporation, Durham, NC, USA) using a dual head gamma camera were performed. In order to use FA for scatter correction, we subdivided the applied energy window into a number of sub-windows, serving as input data. FA results in two factor images (photo-peak, scatter) and two corresponding factor curves (energy spectra). Planar and tomographic Jaszczak phantom gamma camera measurements were recorded. The tomographic data (simulations and measurements) were processed for each angular position resulting in a photo-peak and a scatter data set. The reconstructed transaxial slices of the Jaszczak phantom were quantified using an ImageJ plugin. The data obtained by FA showed good agreement with the energy spectra, photo-peak, and scatter images obtained in all Monte Carlo simulated data sets. For comparison, the standard dual-energy window (DEW) approach was additionally applied for scatter correction. FA in comparison with the DEW method results in significant improvements in image accuracy for both planar and tomographic data sets. 
FA can be used as a user-independent approach for scatter correction in nuclear medicine.
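
    For reference, a sketch of the dual-energy window (DEW) estimate used above as the comparison method: counts in a lower scatter window are scaled and subtracted from the photopeak window. The scale factor k = 0.5 is the value commonly used with the original DEW window widths; it is an assumption here, not a parameter from this study.

```python
def dew_corrected(photopeak_counts, scatter_window_counts, k=0.5):
    """Dual-energy-window scatter correction: estimate primary counts
    as photopeak counts minus scaled scatter-window counts, clipped
    at zero to avoid negative count estimates."""
    return max(photopeak_counts - k * scatter_window_counts, 0.0)
```

    The factor-analysis approach instead splits the acquisition window into several sub-windows and lets FA separate photopeak and scatter components without a user-chosen k.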

  7. Correction to Account for the Isomer of 87Y in the 87Y Radiochemical Diagnostic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayes-Sterbenz, Anna Catherine; Jungman, Gerard

    Here we summarize the need to correct inventories of 87Y reported by the Los Alamos weapons radiochemistry team. The need for a correction arises from the fact that a 13.37-hour isomer of 87Y, which is strongly populated through (n, 2n) reactions on 88Y and isomers of 88Y, has not been included in the experimental analyses of NTS data. Inventories of 87Y reported by LANL’s weapons radiochemistry team should be multiplied by a correction factor that is numerically close to 0.9. Alternatively, the user could increase simulated values of 87Y by 1.1 for comparison with the original method for reporting NTS values. If the inventories in question were directly reported by LLNL’s radiochemistry team, care must be taken to determine whether or not the correction factor has already been applied.
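
    The two equivalent ways of applying the correction described above can be sketched directly (the factor values are those quoted in the abstract; function names are illustrative):

```python
def correct_reported_y87(reported_inventory, factor=0.9):
    """Apply the ~0.9 correction to a reported 87Y inventory."""
    return reported_inventory * factor

def scale_simulated_y87(simulated_inventory, factor=1.1):
    """Alternatively, scale simulated 87Y up by ~1.1 for comparison
    with values reported under the original method."""
    return simulated_inventory * factor
```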

  8. Consistency of Pilot Trainee Cognitive Ability, Personality, and Training Performance in Undergraduate Pilot Training

    DTIC Science & Technology

    2013-09-09

    multivariate correction method (Lawley, 1943) was used for all scores except the MAB FSIQ which used the univariate (Thorndike, 1949) method. FSIQ... Thorndike, R. L. (1949). Personnel selection. NY: Wiley. Tupes, E. C., & Christal, R. C. (1961). Recurrent personality factors based on trait ratings... Thorndike, 1949). aThe correlations for 1995 were not corrected due to the small sample size (N = 17). *p< .05 Consistency of Pilot Attributes

  9. A comparison of quality of present-day heat flow obtained from BHTs, Horner Plots of Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waples, D.W.; Mahadir, R.

    1994-07-01

    Reconciling temperature data obtained from measurement of single BHTs, multiple BHTs at a single depth, RFTs, and DSTs is very difficult. Quality of data varied widely; however, DST data were assumed to be the most reliable. Data from 87 wells were used in this study, but only 47 wells have DST data. The BASINMOD program was used to calculate the present-day heat flow, using measured thermal conductivity and calibrated against the DST data. The heat flows obtained from the DST data were assumed to be correct and representative throughout the basin. Then, heat flows using (1) uncorrected RFT data, (2) multiple BHT data corrected by the Horner plot method, and (3) single BHT values corrected upward by a standard 10% were calculated. All three of these heat-flow populations had standard deviations identical to that of the DST data, but with significantly lower mean values. Correction factors were calculated to give each of the three erroneous populations the same mean value as the DST population. Heat flows calculated from RFT data had to be corrected upward by a factor of 1.12 to be equivalent to DST data; Horner plot data by a factor of 1.18, and single BHT data by a factor of 1.2. These results suggest that present-day subsurface temperatures using RFT, Horner plot, and BHT data are considerably lower than they should be. The authors suspect qualitatively similar results would be found in other areas. Hence, they recommend significant corrections be routinely made until local calibration factors are established.
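
    The reported calibration reduces to scaling each heat-flow population by its data-type factor. A sketch using the factors quoted in the abstract (dictionary keys and units are illustrative):

```python
# Calibration factors from the Malay Basin study: each lower-quality
# data type is scaled up to match the DST-calibrated mean.
CALIBRATION_FACTOR = {"RFT": 1.12, "Horner": 1.18, "single_BHT": 1.20}

def calibrated_heat_flow(raw_heat_flow, data_type):
    """Scale a heat flow (e.g. mW/m2) derived from RFT, Horner-plot
    BHT, or single-BHT data up to the DST-equivalent value."""
    return raw_heat_flow * CALIBRATION_FACTOR[data_type]
```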

  10. Optimized distortion correction technique for echo planar imaging.

    PubMed

    Chen, N K; Wyrwicz, A M

    2001-03-01

    A new phase-shifted EPI pulse sequence is described that encodes EPI phase errors due to all off-resonance factors, including B(o) field inhomogeneity, eddy current effects, and gradient waveform imperfections. Combined with the previously proposed multichannel modulation postprocessing algorithm (Chen and Wyrwicz, MRM 1999;41:1206-1213), the encoded phase error information can be used to effectively remove geometric distortions in subsequent EPI scans. The proposed EPI distortion correction technique has been shown to be effective in removing distortions due to gradient waveform imperfections and phase gradient-induced eddy current effects. In addition, this new method retains advantages of the earlier method, such as simultaneous correction of different off-resonance factors without use of a complicated phase unwrapping procedure. The effectiveness of this technique is illustrated with EPI studies on phantoms and animal subjects. Implementation to different versions of EPI sequences is also described. Magn Reson Med 45:525-528, 2001. Copyright 2001 Wiley-Liss, Inc.

  11. Ozone Correction for AM0 Calibrated Solar Cells for the Aircraft Method

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Scheiman, David A.; Jenkins, Phillip P.; Rieke, William J.; Blankenship, Kurt S.

    2002-01-01

    The aircraft solar cell calibration method has provided cells calibrated to space conditions for 37 years. However, it is susceptible to systematic errors due to ozone concentrations in the stratosphere. The present correction procedure applies a 1 percent increase to the measured I(sub SC) values. High band-gap cells are more sensitive to the ozone-absorbed wavelengths (0.4 to 0.8 microns), so it becomes important to reassess the correction technique. This paper evaluates the ozone correction to be 1 + O3 x Fo, where O3 is the total ozone along the optical path and Fo is 29.8 x 10(exp -6)/DU for a silicon solar cell, 42.6 x 10(exp -6)/DU for a GaAs cell, and 57.2 x 10(exp -6)/DU for an InGaP cell. These correction factors work best to correct data points obtained during the flight rather than as a correction to the final result.
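
    The correction evaluated above is directly computable. A sketch using the Fo values from the abstract, with O3 in Dobson units along the optical path (function and key names are illustrative):

```python
# Cell-specific ozone sensitivity factors, per Dobson unit (DU).
FO_PER_DU = {"Si": 29.8e-6, "GaAs": 42.6e-6, "InGaP": 57.2e-6}

def ozone_corrected_isc(isc_measured, ozone_du, cell_type):
    """Apply the ozone correction 1 + O3 x Fo to a measured short-
    circuit current, where O3 is total ozone along the optical path."""
    return isc_measured * (1.0 + ozone_du * FO_PER_DU[cell_type])
```

    For ~300 DU of ozone a silicon cell gets roughly a 0.9% increase, close to the flat 1 percent of the older procedure, while the higher Fo of GaAs and InGaP cells yields proportionally larger corrections.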

  12. Crustal Thickness Mapping of the Rifted Margin Ocean-Continent Transition using Satellite Gravity Inversion Incorporating a Lithosphere Thermal Correction

    NASA Astrophysics Data System (ADS)

    Hurst, N. W.; Kusznir, N. J.

    2005-05-01

    A new method of inverting satellite gravity at rifted continental margins to give crustal thickness, incorporating a lithosphere thermal correction, has been developed which does not use a priori information about the location of the ocean-continent transition (OCT) and provides an independent prediction of OCT location. Satellite derived gravity anomaly data (Sandwell and Smith 1997) and bathymetry data (Gebco 2003) are used to derive the mantle residual gravity anomaly which is inverted in 3D in the spectral domain to give Moho depth. Oceanic lithosphere and stretched continental margin lithosphere produce a large negative residual thermal gravity anomaly (up to -380 mgal), which must be corrected for in order to determine Moho depth. This thermal gravity correction may be determined for oceanic lithosphere using oceanic isochron data, and for the thinned continental margin lithosphere using margin rift age and beta stretching estimates iteratively derived from crustal basement thickness determined from the gravity inversion. The gravity inversion using the thermal gravity correction predicts oceanic crustal thicknesses consistent with seismic observations, while that without the thermal correction predicts much too great oceanic crustal thicknesses. Predicted Moho depth and crustal thinning across the Hatton and Faroes rifted margins, using the gravity inversion with embedded thermal correction, compare well with those produced by wide-angle seismology. A new gravity inversion method has been developed in which no isochrons are used to define the thermal gravity correction. The new method assumes all lithosphere to be initially continental and a uniform lithosphere stretching age is used corresponding to the time of continental breakup. The thinning factor produced by the gravity inversion is used to predict the thickness of oceanic crust. 
This new modified form of gravity inversion with embedded thermal correction provides an improved estimate of rifted continental margin crustal thinning and an improved (and isochron independent) prediction of OCT location. The new method uses an empirical relationship to predict the thickness of oceanic crust as a function of lithosphere thinning factor controlled by two input parameters: a critical thinning factor for the start of ocean crust production and the maximum oceanic crustal thickness produced when the thinning factor = 1, corresponding to infinite lithosphere stretching. The disadvantage of using a uniform stretching age corresponding to the age of continental breakup is that the inversion fails to predict increasing thermal gravity correction towards the ocean ridge and incorrectly predicts thickening of oceanic crust with decreasing oceanic age. The new gravity inversion method has been applied to N. Atlantic rifted margins. This work forms part of the NERC Margins iSIMM project. iSIMM investigators are from Liverpool and Cambridge Universities, Badley Geoscience & Schlumberger Cambridge Research supported by the NERC, the DTI, Agip UK, BP, Amerada Hess Ltd, Anadarko, ConocoPhillips, Shell, Statoil and WesternGeco. The iSIMM team comprises NJ Kusznir, RS White, AM Roberts, PAF Christie, A Chappell, J Eccles, R Fletcher, D Healy, N Hurst, ZC Lunnon, CJ Parkin, AW Roberts, LK Smith, V Tymms & R Spitzer.
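
    The empirical relationship described above, with its two input parameters (a critical thinning factor at which ocean crust production starts, and the maximum oceanic crustal thickness at thinning factor = 1), can be sketched as follows. The linear ramp between the two parameters and the default values are assumptions for illustration, not the published calibration.

```python
def oceanic_crust_thickness(thinning_factor, critical_thinning=0.7,
                            max_thickness_km=7.0):
    """Predict oceanic crustal thickness from lithosphere thinning
    factor: zero below the critical thinning factor, ramping (linearly,
    as an assumed form) to the maximum thickness at thinning factor 1,
    which corresponds to infinite lithosphere stretching."""
    if thinning_factor <= critical_thinning:
        return 0.0
    frac = (thinning_factor - critical_thinning) / (1.0 - critical_thinning)
    return min(frac, 1.0) * max_thickness_km
```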

  13. Comparison of different methods to include recycling in LCAs of aluminium cans and disposable polystyrene cups.

    PubMed

    van der Harst, Eugenie; Potting, José; Kroeze, Carolien

    2016-02-01

    Many methods have been reported and used to include recycling in life cycle assessments (LCAs). This paper evaluates six widely used methods: three substitution methods (i.e. substitution based on equal quality, a correction factor, and alternative material), allocation based on the number of recycling loops, the recycled-content method, and the equal-share method. These six methods were first compared, with an assumed hypothetical 100% recycling rate, for an aluminium can and a disposable polystyrene (PS) cup. The substitution and recycled-content method were next applied with actual rates for recycling, incineration and landfilling for both product systems in selected countries. The six methods differ in their approaches to credit recycling. The three substitution methods stimulate the recyclability of the product and assign credits for the obtained recycled material. The choice to either apply a correction factor, or to account for alternative substituted material has a considerable influence on the LCA results, and is debatable. Nevertheless, we prefer incorporating quality reduction of the recycled material by either a correction factor or an alternative substituted material over simply ignoring quality loss. The allocation-on-number-of-recycling-loops method focusses on the life expectancy of material itself, rather than on a specific separate product. The recycled-content method stimulates the use of recycled material, i.e. credits the use of recycled material in products and ignores the recyclability of the products. The equal-share method is a compromise between the substitution methods and the recycled-content method. The results for the aluminium can follow the underlying philosophies of the methods. 
    The results for the PS cup are additionally influenced by the correction factor or credits for the alternative material accounting for the drop in PS quality, the waste treatment management (recycling rate, incineration rate, landfilling rate), and the source of avoided electricity in the case of waste incineration. The results for the PS cup, which are less dominated by production of virgin material than those for the aluminium can, furthermore depend on the environmental impact categories considered. This stresses the importance of considering other impact categories besides the most commonly used global warming impact. The multitude of available methods complicates the choice of an appropriate method for the LCA practitioner. New guidelines keep appearing, and industries also suggest their own preferred methods. Unambiguous ISO guidelines, particularly related to sensitivity analysis, would be a great step forward in making more robust LCAs. Copyright © 2015 Elsevier Ltd. All rights reserved.
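
    The two debated substitution variants can be sketched as follows: crediting avoided virgin material through a quality correction factor, or crediting the alternative (lower-impact) material actually displaced. Function names and impact values are hypothetical illustrations.

```python
def substitution_credit_quality(recycled_kg, virgin_impact, quality_factor):
    """Credit for recycled output = amount x virgin-material impact x
    a quality correction factor (<= 1) reflecting downcycling."""
    return recycled_kg * virgin_impact * quality_factor

def substitution_credit_alternative(recycled_kg, alternative_impact):
    """Credit based on the impact of the alternative material the
    recyclate is assumed to substitute."""
    return recycled_kg * alternative_impact
```

    With a quality factor of 1 and alternative impact equal to the virgin impact, both variants collapse to substitution based on equal quality; the choice between them shifts the credit and hence the LCA result, as the comparison above notes.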

  14. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  15. Statistical correction of lidar-derived digital elevation models with multispectral airborne imagery in tidal marshes

    USGS Publications Warehouse

    Buffington, Kevin J.; Dugger, Bruce D.; Thorne, Karen M.; Takekawa, John Y.

    2016-01-01

    Airborne light detection and ranging (lidar) is a valuable tool for collecting large amounts of elevation data across large areas; however, the limited ability to penetrate dense vegetation with lidar hinders its usefulness for measuring tidal marsh platforms. Methods to correct lidar elevation data are available, but a reliable method that requires limited field work and maintains spatial resolution is lacking. We present a novel method, the Lidar Elevation Adjustment with NDVI (LEAN), to correct lidar digital elevation models (DEMs) with vegetation indices from readily available multispectral airborne imagery (NAIP) and RTK-GPS surveys. Using 17 study sites along the Pacific coast of the U.S., we achieved an average root mean squared error (RMSE) of 0.072 m, with a 40–75% improvement in accuracy from the lidar bare earth DEM. Results from our method compared favorably with results from three other methods (minimum-bin gridding, mean error correction, and vegetation correction factors), and a power analysis applying our extensive RTK-GPS dataset showed that on average 118 points were necessary to calibrate a site-specific correction model for tidal marshes along the Pacific coast. By using available imagery and with minimal field surveys, we showed that lidar-derived DEMs can be adjusted for greater accuracy while maintaining high (1 m) resolution.
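
    The core of the LEAN idea can be sketched as a regression of lidar DEM error on a vegetation index at RTK-GPS calibration points, with the fitted vegetation bias then subtracted from the DEM. Plain least squares stands in for the published model; the data and function names are illustrative.

```python
def fit_linear(x, y):
    """Ordinary least squares fit y = a + b*x (pure Python)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def lean_correct(dem_elev, ndvi, a, b):
    """Subtract the NDVI-predicted positive vegetation bias
    (a + b*NDVI, fitted against RTK-GPS truth) from a lidar DEM cell."""
    return dem_elev - (a + b * ndvi)
```

    In practice the fit would use the lidar-minus-RTK error at each survey point; the power-analysis result above suggests on the order of 118 such points per site.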

  16. Topographic correction realization based on the CBERS-02B image

    NASA Astrophysics Data System (ADS)

    Qin, Hui-ping; Yi, Wei-ning; Fang, Yong-hua

    2011-08-01

    The special topography of mountain terrain distorts retrievals for identical land-cover types and their surface spectral signatures. In order to improve the accuracy of topographic surface characterization, many researchers have focused on topographic correction. Topographic correction methods can be statistical-empirical or physical models, among which methods based on digital elevation model data are the most popular. Restricted by spatial resolution, previous models mostly corrected topographic effects on Landsat TM images, whose 30-meter spatial resolution can easily be obtained from the internet or calculated from digital maps. Some researchers have also performed topographic correction on high-spatial-resolution images, such as QuickBird and Ikonos, but there is little comparable research on the topographic correction of CBERS-02B images. In this study, mountain terrain in Liaoning was taken as the test area. The original 15-meter digital elevation model data were interpolated step by step to 2.36-meter resolution. The C correction, SCS+C correction, Minnaert correction and Ekstrand-r correction were applied to correct the topographic effect, and the corrected results were compared. For the images corrected with each method, scatter diagrams between image digital number and the cosine of the solar incidence angle with respect to the surface normal were produced, and the mean value, standard deviation, slope of the scatter diagram, and separation factor were statistically calculated. The analysis shows that shadows are weaker in the corrected images than in the original images, the three-dimensional effect is removed, and the absolute slope of the fitted lines in the scatter diagrams is reduced. The Minnaert correction method gives the most effective result. These results demonstrate that the earlier correction methods can be successfully adapted to CBERS-02B images. 
    The DEM data can be interpolated step by step to approximate the required spatial resolution when high-spatial-resolution elevation data are hard to obtain.
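
    Two of the evaluated corrections can be sketched per pixel, with cos_i the cosine of the solar incidence angle on the local slope and cos_sz the cosine of the solar zenith angle. The Minnaert form shown is the common flat-terrain variant, and the parameter values (C, k) are illustrative; in practice both are fitted from the image-DEM scatter diagrams.

```python
def c_correction(dn, cos_i, cos_sz, c):
    """C correction: DN_corrected = DN * (cos_sz + C) / (cos_i + C),
    where C is derived from the regression of DN on cos_i."""
    return dn * (cos_sz + c) / (cos_i + c)

def minnaert_correction(dn, cos_i, cos_sz, k):
    """Minnaert correction (flat-terrain form):
    DN_corrected = DN * (cos_sz / cos_i) ** k, with k fitted per band."""
    return dn * (cos_sz / cos_i) ** k
```

    When the local incidence angle equals the solar zenith angle (flat, sun-facing geometry) both corrections leave the digital number unchanged, as expected.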

  17. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation together with a priori information about aerosols and about NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude over boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. 
Combining the aerosol and shape factor corrections together results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.
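
    The AMF structure underlying this discussion can be sketched as a layer sum of scattering weights against a normalized shape factor, with the vertical column obtained by dividing the slant column by the AMF. The profiles below are purely illustrative.

```python
def air_mass_factor(scattering_weights, shape_factors):
    """AMF = sum over layers of (scattering weight x shape factor),
    with the shape factor profile normalized to sum to 1. A fire-
    induced shift of NO2 toward shielded low altitudes lowers the AMF."""
    return sum(w * s for w, s in zip(scattering_weights, shape_factors))

def vertical_column(slant_column, amf):
    """Convert a retrieved slant column to a vertical column."""
    return slant_column / amf
```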

  18. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) Method 24 or formulation data for the determination of the VOC content of cements or green tire spray materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... corrected using an experimentally determined response factor comparing the alternative calibration gas to...

  19. 40 CFR 60.547 - Test methods and procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) Method 24 or formulation data for the determination of the VOC content of cements or green tire spray materials. In the event of dispute, Method 24 shall be the reference method. For Method 24, the cement or... corrected using an experimentally determined response factor comparing the alternative calibration gas to...

  20. Learners' Perception of Corrective Feedback in Pair Work

    ERIC Educational Resources Information Center

    Yoshida, Reiko

    2008-01-01

    The present study examines Japanese language learners' perception of corrective feedback (CF) in pair work in relation to their noticing and understanding of their partners' CF and the factors that influence it. This study focuses on three learners, who worked together in pair work. The data collection methods consist of classroom observation,…

  1. Improving estimates of wilderness use from mandatory travel permits.

    Treesearch

    David W. Lime; Grace A. Lorence

    1974-01-01

    Mandatory permits provide recreation managers with better use estimates. Because some visitors do not obtain permits, use estimates based on permit data need to be corrected. In the Boundary Waters Canoe Area, a method was devised for distinguishing noncomplying groups and finding correction factors that reflect the impact of these groups. Suggestions for improving...

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, K; Barbarits, J; Humenik, R

    Purpose: Chang’s mathematical formulation is a common method of attenuation correction applied on reconstructed Jaszczak phantom images. Though Chang’s attenuation correction method has been used for 360° angle acquisition, its applicability for 180° angle acquisition remains a question with one vendor’s camera software producing artifacts. The objective of this work is to ensure that Chang’s attenuation correction technique can be applied for reconstructed Jaszczak phantom images acquired in both 360° and 180° mode. Methods: The Jaszczak phantom filled with 20 mCi of diluted Tc-99m was placed on the patient table of Siemens e.cam™ (n = 2) and Siemens Symbia™ (nmore » = 1) dual head gamma cameras centered both in lateral and axial directions. A total of 3 scans were done at 180° and 2 scans at 360° orbit acquisition modes. Thirty two million counts were acquired for both modes. Reconstruction of the projection data was performed using filtered back projection smoothed with pre reconstruction Butterworth filter (order: 6, cutoff: 0.55). Reconstructed transaxial slices were attenuation corrected by Chang’s attenuation correction technique as implemented in the camera software. Corrections were also done using a modified technique where photon path lengths for all possible attenuation paths through a pixel in the image space were added to estimate the corresponding attenuation factor. The inverse of the attenuation factor was utilized to correct the attenuated pixel counts. Results: Comparable uniformity and noise were observed for 360° acquired phantom images attenuation corrected by the vendor technique (28.3% and 7.9%) and the proposed technique (26.8% and 8.4%). The difference in uniformity for 180° acquisition between the proposed technique (22.6% and 6.8%) and the vendor technique (57.6% and 30.1%) was more substantial. 
Conclusion: Assessment of attenuation correction performance by phantom uniformity analysis illustrated improved uniformity with the proposed algorithm compared to the camera software.
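    The modified technique described above, which sums photon path lengths through the attenuating medium to build a per-pixel attenuation factor and then divides the pixel counts by it, can be sketched for a uniform circular phantom. This is a minimal first-order Chang-style illustration, not the vendor's or the authors' implementation; the phantom radius and the attenuation coefficient for Tc-99m are assumed values.

```python
import numpy as np

def chang_correction_factor(x, y, radius, mu, n_angles=64):
    """First-order Chang attenuation factor for a pixel at (x, y)
    inside a uniform circular phantom of the given radius (cm) and
    attenuation coefficient mu (1/cm): the mean of exp(-mu * L) over
    the path lengths L from the pixel to the boundary at each angle."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    dx, dy = np.cos(angles), np.sin(angles)
    # distance from (x, y) to the circle boundary along each direction
    b = x * dx + y * dy
    t = -b + np.sqrt(radius**2 - (x**2 + y**2) + b**2)
    return np.mean(np.exp(-mu * t))

# correct an attenuated pixel count by dividing by the factor
mu_tc99m = 0.15                 # 1/cm, roughly water at 140 keV (assumed)
factor = chang_correction_factor(0.0, 0.0, 10.8, mu_tc99m)
corrected = 1000.0 / factor
```

    At the phantom centre every path length equals the radius, so the factor reduces to exp(-mu * R), which makes the sketch easy to sanity-check.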

  3. The evaluation of correction algorithms of intensity nonuniformity in breast MRI images: a phantom study

    NASA Astrophysics Data System (ADS)

    Borys, Damian; Serafin, Wojciech; Gorczewski, Kamil; Kijonka, Marek; Frackiewicz, Mariusz; Palus, Henryk

    2018-04-01

    The aim of this work was to test the most popular and essential algorithms for intensity nonuniformity correction in breast MRI imaging. In this type of MRI imaging, especially in the proximity of the coil, the signal is strong but can also exhibit inhomogeneities. The evaluated methods of signal correction were N3, N3FCM, N4, Nonparametric, and SPM. For testing purposes, a uniform phantom object was imaged with a breast MRI coil to obtain test images. To quantify the results, two measures were used: integral uniformity and standard deviation. For each algorithm, minimum, average, and maximum values of both evaluation measures were calculated using a binary mask created for the phantom. Two methods, N3FCM and N4, obtained the lowest values in these measures; visually, however, the phantom was most uniform after correction with N4.
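    The two evaluation measures used above are standard uniformity statistics over the masked phantom region. A minimal sketch, with synthetic image values rather than the study's phantom data:

```python
import numpy as np

def integral_uniformity(image, mask):
    """Integral uniformity (%) over the masked region:
    100 * (max - min) / (max + min)."""
    vals = image[mask]
    return 100.0 * (vals.max() - vals.min()) / (vals.max() + vals.min())

def relative_std(image, mask):
    """Standard deviation as a percentage of the mean (coefficient
    of variation) over the masked region."""
    vals = image[mask]
    return 100.0 * vals.std() / vals.mean()

# toy 2x2 "phantom" with a full mask
img = np.array([[100.0, 110.0], [90.0, 100.0]])
mask = np.ones_like(img, dtype=bool)
iu = integral_uniformity(img, mask)   # 100 * (110 - 90) / (110 + 90)
```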

  4. A 3D inversion for all-space magnetotelluric data with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. Static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is an automatic, computer-based processing technique at no additional cost, and it avoids extra field work and indoor processing while giving good results. The 3D inversion algorithm (Zhang et al., 2013) is improved based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface, seabed, and underground stations can be used in the inversion algorithm.

  5. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion (statistical) of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range based merit function ωm which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb, and dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not two linearly correlated quantities, which is the usual interpretation of these graphs. 
The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale dependent, parametric curve effect.
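    Of the correction-factor family compared above, the simplest internal-standardisation variant multiplies each measured analyte ratio by the instantaneous drift of the internal-standard ratio (reference over measured). A minimal sketch with synthetic numbers; the least-squares variant discussed in the abstract is not reproduced here:

```python
import numpy as np

def internal_standard_correct(analyte_ratios, is_ratios, is_reference):
    """Point-by-point internal-standardisation drift correction:
    each analyte isotope ratio is multiplied by the drift factor of
    the internal-standard ratio at the same measurement point."""
    analyte = np.asarray(analyte_ratios, dtype=float)
    internal = np.asarray(is_ratios, dtype=float)
    return analyte * (is_reference / internal)

measured = [2.000, 2.004, 2.008]   # analyte ratio drifting upward
standard = [1.000, 1.002, 1.004]   # internal standard showing the same drift
corrected = internal_standard_correct(measured, standard, 1.000)
```

    When the analyte and internal standard drift proportionally, as in this toy series, the corrected ratios collapse onto a constant value.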

  6. Redrawing the US Obesity Landscape: Bias-Corrected Estimates of State-Specific Adult Obesity Prevalence

    PubMed Central

    Ward, Zachary J.; Long, Michael W.; Resch, Stephen C.; Gortmaker, Steven L.; Cradock, Angie L.; Giles, Catherine; Hsiao, Amber; Wang, Y. Claire

    2016-01-01

    Background State-level estimates from the Centers for Disease Control and Prevention (CDC) underestimate the obesity epidemic because they use self-reported height and weight. We describe a novel bias-correction method and produce corrected state-level estimates of obesity and severe obesity. Methods Using non-parametric statistical matching, we adjusted self-reported data from the Behavioral Risk Factor Surveillance System (BRFSS) 2013 (n = 386,795) using measured data from the National Health and Nutrition Examination Survey (NHANES) (n = 16,924). We validated our national estimates against NHANES and estimated bias-corrected state-specific prevalence of obesity (BMI≥30) and severe obesity (BMI≥35). We compared these results with previous adjustment methods. Results Compared to NHANES, self-reported BRFSS data underestimated national prevalence of obesity by 16% (28.67% vs 34.01%), and severe obesity by 23% (11.03% vs 14.26%). Our method was not significantly different from NHANES for obesity or severe obesity, while previous methods underestimated both. Only four states had a corrected obesity prevalence below 30%, with four exceeding 40%; in contrast, most states were below 30% in CDC maps. Conclusions Twelve million adults with obesity (including 6.7 million with severe obesity) were misclassified by CDC state-level estimates. Previous bias-correction methods also resulted in underestimates. Accurate state-level estimates are necessary to plan for resources to address the obesity epidemic. PMID:26954566

  7. [Wound microbial sampling methods in surgical practice, imprint techniques].

    PubMed

    Chovanec, Z; Veverková, L; Votava, M; Svoboda, J; Peštál, A; Doležel, J; Jedlička, V; Veselý, M; Wechsler, J; Čapov, I

    2012-12-01

    A wound is damage to tissue. The process of healing is influenced by many systemic and local factors. The most crucial and most discussed local factor in wound healing is infection. Surgical site infection in the wound is caused by micro-organisms. This has been known for many years; however, the conditions leading to the occurrence of infection have not yet been sufficiently described. Correct sampling technique, correct storage, transportation, evaluation, and valid interpretation of these data are very important in clinical practice. There are many methods for microbiological sampling, but the best one has not yet been identified and validated. We aim to discuss the problem with a focus on the imprint technique.

  8. Communication: Finite size correction in periodic coupled cluster theory calculations of solids.

    PubMed

    Liao, Ke; Grüneis, Andreas

    2016-10-14

    We present a method to correct for finite size errors in coupled cluster theory calculations of solids. The outlined technique shares similarities with electronic structure factor interpolation methods used in quantum Monte Carlo calculations. However, our approach does not require the calculation of density matrices. Furthermore we show that the proposed finite size corrections achieve chemical accuracy in the convergence of second-order Møller-Plesset perturbation and coupled cluster singles and doubles correlation energies per atom for insulating solids with two atomic unit cells using 2 × 2 × 2 and 3 × 3 × 3 k-point meshes only.

  9. Time of death of victims found in cold water environment.

    PubMed

    Karhunen, Pekka J; Goebeler, Sirkka; Winberg, Olli; Tuominen, Markku

    2008-04-07

    Limited data are available on the application of post-mortem temperature methods to non-standard conditions, especially in problematic real-life cases in which the body of the victim is found in a cold water environment. Here we present our experience with two cases with known post-mortem times. A 14-year-old girl (rectal temperature 15.5 degrees C) was found assaulted and drowned after a rainy cold night (+5 degrees C) in wet clothing (four layers) at the bottom of a shallow ditch, lying in non-flowing water. The post-mortem time turned out to be 15-16 h. Four days later, at the same time in the morning, after a cold (+/- 0 degrees C) night, a young man (rectal temperature 10.8 degrees C) was found drowned in a shallow cold drain (+4 degrees C), wearing similar clothing (four layers) and exposed to almost identical environmental and weather conditions, except for the flow (7.7 l/s or 0.3 m/s) in the drain. The post-mortem time was deduced to be 10-12 h. We tested the applicability of five practical methods of estimating time of death. Henssge's temperature-time of death nomogram method with correction factors was the most versatile and also gave the most accurate results, although there are limited data on the choice of correction factors. In the first case, the right correction factor was close to 1.0 (recommended 1.1-1.2), suggesting that wet clothing acted like dry clothing in slowing down body cooling. In the second case, the right correction factor was between 0.3 and 0.5, similar to the recommended 0.35 for naked bodies in flowing water.
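    Henssge's nomogram is built on a double-exponential cooling model in which the clothing/environment correction factor scales the effective body mass. The following is a sketch of the commonly cited formulation for ambient temperatures below about 23 degrees C; the body mass is an assumed value (the cases above do not report one), and solving for the post-mortem interval is done by simple bisection:

```python
import math

def henssge_Q(t_hours, mass_kg, corr_factor):
    """Henssge double-exponential cooling model (ambient <= ~23 C):
    Q = 1.25*exp(B*t) - 0.25*exp(5*B*t), with B depending on the
    corrected body mass m = mass * correction factor."""
    m = mass_kg * corr_factor
    B = -1.2815 * m ** -0.625 + 0.0284
    return 1.25 * math.exp(B * t_hours) - 0.25 * math.exp(5.0 * B * t_hours)

def time_of_death(t_rectal, t_ambient, mass_kg, corr_factor):
    """Invert the model for the post-mortem interval (hours) by
    bisection; Q decreases monotonically with time."""
    q_target = (t_rectal - t_ambient) / (37.2 - t_ambient)
    lo, hi = 0.0, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if henssge_Q(mid, mass_kg, corr_factor) > q_target:
            lo = mid       # body still too warm: more time has passed
        else:
            hi = mid
    return 0.5 * (lo + hi)

# second case above: rectal 10.8 C, water 4 C, flowing-water factor ~0.4;
# the 60 kg body mass is a hypothetical value, not from the report
pmi = time_of_death(10.8, 4.0, 60.0, 0.4)
```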

  10. [Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].

    PubMed

    Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie

    2013-11-01

    In order to improve the accuracy of quantitative AES analysis, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. Pt-Co, Cu-Au and Cu-Ag binary alloy thin films were selected as samples, and XPS was used to correct the AES quantitative results by adjusting the Auger sensitivity factors until the two sets of results agreed. We then verified the accuracy of AES quantitative analysis using the revised sensitivity factors on other samples with different composition ratios; the results showed that the corrected relative sensitivity factors can reduce the error of AES quantitative analysis to less than 10%. Peak definition is difficult in integral-spectrum AES analysis, since choosing the starting and ending points for the characteristic Auger peak intensity area involves great uncertainty. To make the analysis easier, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and verified the accuracy of quantitative analysis on further samples with different composition ratios. The analytical error of AES quantitative analysis was reduced to less than 9%. These results show that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since matrix effects are taken into account. Good consistency was observed, proving the feasibility of this method.

  11. Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay

    DOEpatents

    Huang, Jian

    2013-03-12

    A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.

  12. [Nonpharmacological correction of low back pain by single or integrated means of medical rehabilitation and the evaluation of their effectiveness].

    PubMed

    Sakalauskiene, Giedre

    2009-01-01

    Low back pain is a global problem. Great attention is given to the correction of this condition by a wide range of rehabilitation specialists. Single or integrated physical factors, physiotherapy, specific and nonspecific physical exercises, alternative methods of treatment, and complex multidisciplinary rehabilitation are applied in the management of low back pain. Evidence-based data are analyzed in this article to identify which nonpharmacological means are effective in pain correction, and the effectiveness of various methods and models of low back pain management is compared. Research data evaluating the effectiveness of single or integrated means of rehabilitation are very controversial. There are no evidence-based specific recommendations for the correction of this condition that objectively assess the advantages of physiotherapy or physical factors and define definite indications for their prescription. Multidisciplinary rehabilitation is thought to be most effective in the management of chronic low back pain. Positive results depend on the experience of the physician and other rehabilitation specialists, and a patient's motivation to participate in the process of pain control is very important. It is recommended to inform the patient about the effectiveness of the administered methods. There is a lack of evidence-based trials evaluating the effectiveness of nonpharmacological methods of pain control in Lithuania. Therefore, greater attention from researchers and administrative structures of health care should be given to this problem in order to develop evidence-based guidelines for effective correction of low back pain.

  13. An entropy correction method for unsteady full potential flows with strong shocks

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Hafez, M. M.; Osher, S. J.

    1986-01-01

    An entropy correction method for the unsteady full potential equation is presented. The unsteady potential equation is modified to account for entropy jumps across shock waves. The conservative form of the modified equation is solved in generalized coordinates using an implicit, approximate factorization method. A flux-biasing differencing method, which generates the proper amounts of artificial viscosity in supersonic regions, is used to discretize the flow equations in space. Comparisons between the present method and solutions of the Euler equations and between the present method and experimental data are presented. The comparisons show that the present method more accurately models solutions of the Euler equations and experiment than does the isentropic potential formulation.

  14. [Determination of five naphthaquinones in Arnebia euchroma by quantitative analysis multi-components with single-marker].

    PubMed

    Zhao, Wen-Wen; Wu, Zhi-Min; Wu, Xia; Zhao, Hai-Yu; Chen, Xiao-Qing

    2016-10-01

    This study aims to determine five naphthaquinones (acetylshikonin, β-acetoxyisovalerylalkannin, isobutylshikonin, β,β'-dimethylacrylalkannin, α-methyl-n-butylshikonin) by quantitative analysis of multi-components with a single marker (QAMS). β,β'-Dimethylacrylalkannin was selected as the internal reference substance, and the relative correction factors (RCFs) of acetylshikonin, β-acetoxyisovalerylalkannin, isobutylshikonin and α-methyl-n-butylshikonin were calculated. The ruggedness of the relative correction factors was then tested on different instruments and columns. Meanwhile, 16 batches of Arnebia euchroma were analyzed by the external standard method (ESM) and QAMS, respectively. The peaks were identified by LC-MS. The ruggedness of the relative correction factors was good, and the analytical results calculated by ESM and QAMS showed no difference. The quantitative method established is feasible and suitable for the quality evaluation of A. euchroma. Copyright© by the Chinese Pharmaceutical Association.
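    The QAMS bookkeeping reduces to two ratios: a relative correction factor measured once on a mixed standard, then reused to convert a sample's peak areas into concentrations via the internal reference substance. A sketch with hypothetical peak areas and concentrations, not values from the study:

```python
def relative_correction_factor(area_ref, conc_ref, area_i, conc_i):
    """RCF of analyte i versus the internal reference substance,
    f_i = (A_ref / C_ref) / (A_i / C_i), measured on a mixed standard."""
    return (area_ref / conc_ref) / (area_i / conc_i)

def qams_concentration(area_i, f_i, area_ref_sample, conc_ref_sample):
    """Analyte concentration in a sample via QAMS:
    C_i = f_i * A_i * C_ref / A_ref, where C_ref is determined
    separately (e.g. by the external standard method)."""
    return f_i * area_i * conc_ref_sample / area_ref_sample

# mixed standard: reference at 10 units gives area 1000,
# analyte at 10 units gives area 800  (hypothetical numbers)
f = relative_correction_factor(1000.0, 10.0, 800.0, 10.0)   # 1.25

# sample: analyte area 400, reference area 900 at 9 units
c = qams_concentration(400.0, f, 900.0, 9.0)
```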

  15. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  16. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  17. Effective absorption correction for energy dispersive X-ray mapping in a scanning transmission electron microscope: analysing the local indium distribution in rough samples of InGaN alloy layers.

    PubMed

    Wang, X; Chauvat, M-P; Ruterana, P; Walther, T

    2017-12-01

    We have applied our previous method of self-consistent k*-factors for absorption correction in energy-dispersive X-ray spectroscopy to quantify the indium content in X-ray maps of thick compound InGaN layers. The method allows us to quantify the indium concentration without measuring the sample thickness, density or beam current, and works even if there is a drastic local thickness change due to sample roughness or preferential thinning. The method is shown to select, point-by-point in a two-dimensional spectrum image or map, the k*-factor from the local Ga K/L intensity ratio that is most appropriate for the corresponding sample geometry, demonstrating it is not the sample thickness measured along the electron beam direction but the optical path length the X-rays have to travel through the sample that is relevant for the absorption correction. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  18. Dimensions of vegetable parenting practices among preschoolers

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to determine the factor structure of 31 effective and ineffective vegetable parenting practices used by parents of preschool children based on three theoretically proposed factors: responsiveness, control, and structure. The methods employed included both corrected it...

  19. Calculated X-ray Intensities Using Monte Carlo Algorithms: A Comparison to Experimental EPMA Data

    NASA Technical Reports Server (NTRS)

    Carpenter, P. K.

    2005-01-01

    Monte Carlo (MC) modeling has been used extensively to simulate electron scattering and x-ray emission from complex geometries. Presented here are comparisons between MC results and experimental electron-probe microanalysis (EPMA) measurements as well as phi(rhoz) correction algorithms. Experimental EPMA measurements made on NIST SRM 481 (AgAu) and 482 (CuAu) alloys, at a range of accelerating potentials and instrument take-off angles, represent a formal microanalysis data set that has been widely used to develop phi(rhoz) correction algorithms. X-ray intensity data produced by MC simulations represent an independent test of both experimental data and phi(rhoz) correction algorithms. The alpha-factor method has previously been used to evaluate systematic errors in the analysis of semiconductor and silicate minerals, and is used here to compare the accuracy of experimental and MC-calculated x-ray data. X-ray intensities calculated by MC are used to generate alpha-factors using the certified compositions in the CuAu binary relative to pure Cu and Au standards. MC simulations are obtained using the NIST, WinCasino, and WinXray algorithms; derived x-ray intensities have a built-in atomic number correction, and are further corrected for absorption and characteristic fluorescence using the PAP phi(rhoz) correction algorithm. The Penelope code additionally simulates both characteristic and continuum x-ray fluorescence and thus requires no further correction for use in calculating alpha-factors.
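    The alpha-factor comparison above rests on a hyperbolic relation between concentration C and measured k-ratio in a binary system. A sketch of one commonly cited (Ziebold-Ogilvie style) form, alpha = C(1 - k) / (k(1 - C)), together with its inverse; treat this exact form as an assumption, since conventions vary between authors:

```python
def alpha_factor(conc, k_ratio):
    """Binary alpha factor from a known concentration and its
    measured k-ratio: alpha = C * (1 - k) / (k * (1 - C))."""
    return conc * (1.0 - k_ratio) / (k_ratio * (1.0 - conc))

def concentration(k_ratio, alpha):
    """Invert the hyperbolic relation for C given a measured k-ratio:
    C = alpha * k / (1 + (alpha - 1) * k)."""
    return alpha * k_ratio / (1.0 + (alpha - 1.0) * k_ratio)

# hypothetical Cu measurement in a CuAu binary: C = 0.4 yields k = 0.3
a_cu = alpha_factor(0.4, 0.3)
c_cu = concentration(0.3, a_cu)   # round-trips back to 0.4
```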

  20. An extended linear scaling method for downscaling temperature and its implication in the Jhelum River basin, Pakistan, and India, using CMIP5 GCMs

    NASA Astrophysics Data System (ADS)

    Mahmood, Rashid; JIA, Shaofeng

    2017-11-01

    In this study, the linear scaling method used for the downscaling of temperature was extended from monthly scaling factors to daily scaling factors (SFs) to improve the daily variations in the corrected temperature. In the original linear scaling (OLS), mean monthly SFs are used to correct the future data, but mean daily SFs are used to correct the future data in the extended linear scaling (ELS) method. The proposed method was evaluated in the Jhelum River basin for the period 1986-2000, using the observed maximum temperature (Tmax) and minimum temperature (Tmin) of 18 climate stations and the simulated Tmax and Tmin of five global climate models (GCMs) (GFDL-ESM2G, NorESM1-ME, HadGEM2-ES, MIROC5, and CanESM2), and the method was also compared with OLS to observe the improvement. Before the evaluation of ELS, these GCMs were also evaluated using their raw data against the observed data for the same period (1986-2000). Four statistical indicators, i.e., error in mean, error in standard deviation, root mean square error, and correlation coefficient, were used for the evaluation process. The evaluation results with GCMs' raw data showed that GFDL-ESM2G and MIROC5 performed better than other GCMs according to all the indicators but with unsatisfactory results that confine their direct application in the basin. Nevertheless, after the correction with ELS, a noticeable improvement was observed in all the indicators except correlation coefficient because this method only adjusts (corrects) the magnitude. It was also noticed that the daily variations of the observed data were better captured by the corrected data with ELS than OLS. Finally, the ELS method was applied for the downscaling of five GCMs' Tmax and Tmin for the period of 2041-2070 under RCP8.5 in the Jhelum basin. 
The results showed that the basin would face a hotter climate in the future relative to the present, which may result in increased water requirements in the public, industrial, and agricultural sectors; changes in the hydrological cycle and monsoon pattern; and loss of glaciers in the basin.
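    The difference between OLS and ELS is only the grouping used for the scaling factors: monthly means versus day-of-year means. A minimal additive (temperature-style) sketch with toy data; the study's actual factors were derived from 18 stations and five GCMs, which this does not reproduce:

```python
import numpy as np

def daily_scaling_factors(obs, model, doy):
    """Additive daily scaling factors for temperature:
    SF(d) = mean(obs on day-of-year d) - mean(model on day-of-year d)."""
    sf = {}
    for d in np.unique(doy):
        sel = doy == d
        sf[d] = obs[sel].mean() - model[sel].mean()
    return sf

def apply_els(future, future_doy, sf):
    """Correct future model output by adding the matching daily SF."""
    return np.array([t + sf[d] for t, d in zip(future, future_doy)])

# toy example: two "years" of a 3-day calendar (days-of-year 1..3)
doy = np.array([1, 2, 3, 1, 2, 3])
obs = np.array([20.0, 21.0, 22.0, 22.0, 23.0, 24.0])
mod = np.array([18.0, 18.0, 19.0, 20.0, 20.0, 21.0])
sf = daily_scaling_factors(obs, mod, doy)             # {1: 2.0, 2: 3.0, 3: 3.0}
corrected = apply_els(np.array([19.0, 19.0, 20.0]), np.array([1, 2, 3]), sf)
```

    The OLS variant would compute the same additive factors per calendar month instead of per day of year; everything else is identical.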

  1. The Effect Of Different Corrective Feedback Methods on the Outcome and Self Confidence of Young Athletes

    PubMed Central

    Tzetzis, George; Votsis, Evandros; Kourtessis, Thomas

    2008-01-01

    This experiment investigated the effects of three corrective feedback methods, using different combinations of correction or error cues and positive feedback, on learning two badminton skills of different difficulty (forehand clear - low difficulty, backhand clear - high difficulty). Outcome and self-confidence scores were used as dependent variables. The 48 participants were randomly assigned to four groups. Group A received correction cues and positive feedback. Group B received cues on errors of execution. Group C received positive feedback, correction cues and error cues. Group D was the control group. Pre-, post- and retention tests were conducted. A three-way analysis of variance (ANOVA) (4 groups X 2 task difficulty X 3 measures) with repeated measures on the last factor revealed significant interactions for each dependent variable. All the corrective feedback groups increased their outcome scores over time for the easy skill, but only groups A and C did so for the difficult skill. Groups A and B had significantly better outcome scores than group C and the control group for the easy skill on the retention test. However, for the difficult skill, group C was better than groups A, B and D. The self-confidence scores of groups A and C improved over time for the easy skill, but those of groups B and D did not. Again, for the difficult skill, only group C improved over time. Finally, a regression analysis showed that the improvement in performance predicted a proportion of the improvement in self-confidence for both the easy and the difficult skill. It was concluded that when young athletes are taught skills of different difficulty, different types of instruction might be more appropriate in order to improve outcome and self-confidence. A more integrated approach to teaching will assist coaches and physical education teachers to be more efficient and effective. 
Key points: The type of the skill is a critical factor in determining the effectiveness of the feedback types. Different instructional methods of corrective feedback could have beneficial effects on the outcome and self-confidence of young athletes. Instructions focusing on the correct cues or errors increase performance of easy skills. Positive feedback or correction cues increase self-confidence for easy skills, but only the combination of error and correction cues increases self-confidence and outcome scores for difficult skills. PMID:24149905

  2. Sparse PCA corrects for cell type heterogeneity in epigenome-wide association studies.

    PubMed

    Rahmani, Elior; Zaitlen, Noah; Baran, Yael; Eng, Celeste; Hu, Donglei; Galanter, Joshua; Oh, Sam; Burchard, Esteban G; Eskin, Eleazar; Zou, James; Halperin, Eran

    2016-05-01

    In epigenome-wide association studies (EWAS), different methylation profiles of distinct cell types may lead to false discoveries. We introduce ReFACTor, a method based on principal component analysis (PCA) and designed for the correction of cell type heterogeneity in EWAS. ReFACTor does not require knowledge of cell counts, and it provides improved estimates of cell type composition, resulting in improved power and control for false positives in EWAS. Corresponding software is available at http://www.cs.tau.ac.il/~heran/cozygene/software/refactor.html.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackenzie, Alistair, E-mail: alistairmackenzie@nhs.net; Dance, David R.; Young, Kenneth C.

    Purpose: The aim of this work is to create a model to predict the noise power spectra (NPS) for a range of mammographic radiographic factors. The noise model was necessary to degrade images acquired on one system to match the image quality of different systems for a range of beam qualities. Methods: Five detectors and x-ray systems [Hologic Selenia (ASEh), Carestream computed radiography CR900 (CRc), GE Essential (CSI), Carestream NIP (NIPc), and Siemens Inspiration (ASEs)] were characterized for this study. The signal transfer property was measured as the pixel value against absorbed energy per unit area (E) at a reference beam quality of 28 kV, Mo/Mo or 29 kV, W/Rh with 45 mm polymethyl methacrylate (PMMA) at the tube head. The contributions of the three noise sources (electronic, quantum, and structure) to the NPS were calculated by fitting a quadratic at each spatial frequency of the NPS against E. A quantum noise correction factor dependent on beam quality was quantified using a set of images acquired over a range of radiographic factors with different thicknesses of PMMA. The noise model was tested for images acquired at 26 kV, Mo/Mo with 20 mm PMMA and 34 kV, Mo/Rh with 70 mm PMMA for three detectors (ASEh, CRc, and CSI) over a range of exposures. The NPS were modeled with and without the noise correction factor and compared with the measured NPS. A previous method for adapting an image to appear as if acquired on a different system was modified to allow the reference beam quality to be different from the beam quality of the image. The method was validated by adapting the ASEh flat field images with two thicknesses of PMMA (20 and 70 mm) to appear with the imaging characteristics of the CSI and CRc systems. Results: The quantum noise correction factor rises with higher beam qualities, except for CR systems at high spatial frequencies, where a flat response was found against mean photon energy. 
This is due to the dominance of secondary quantum noise in CR. The use of the quantum noise correction factor reduced the difference between the model and the real NPS to generally within 4%. The use of the quantum noise correction improved the conversion of ASEh images to CRc images but made no difference for the conversion to CSI images. Conclusions: A practical method for estimating the NPS at any dose and over a range of beam qualities for mammography has been demonstrated. The noise model was incorporated into a methodology for converting an image to appear as if acquired on a different detector. The method can now be extended to work for a wide range of beam qualities and can be applied to the conversion of mammograms.
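    The decomposition into electronic, quantum, and structure noise follows from fitting a quadratic in exposure at each spatial frequency: the constant term is electronic noise, the linear term quantum noise, and the quadratic term structure noise. A sketch on synthetic data (the coefficient values are invented, not taken from the study):

```python
import numpy as np

def noise_components(exposures, nps_values):
    """Fit NPS(E) = e + q*E + s*E**2 at one spatial frequency and
    return (electronic, quantum, structure), i.e. the constant,
    linear, and quadratic coefficients of the fit."""
    s, q, e = np.polyfit(exposures, nps_values, 2)  # highest degree first
    return e, q, s

# synthetic data: electronic = 2.0, quantum slope = 0.5, structure = 0.01
E = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
nps = 2.0 + 0.5 * E + 0.01 * E**2
elec, quant, struct = noise_components(E, nps)
```

    In practice this fit is repeated independently at every spatial frequency of the measured NPS, giving three coefficient curves rather than three scalars.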

  4. Method for correction of measured polarization angles from motional Stark effect spectroscopy for the effects of electric fields

    DOE PAGES

    Luce, T. C.; Petty, C. C.; Meyer, W. H.; ...

    2016-11-02

    An approximate method to correct motional Stark effect (MSE) spectroscopy for the effects of intrinsic plasma electric fields has been developed. The motivation for using an approximate method is to incorporate electric field effects in between-pulse or real-time analysis of the current density or safety factor profile. The toroidal velocity term in the momentum balance equation is normally the dominant contribution to the electric field orthogonal to the flux surface over most of the plasma. When this approximation is valid, the correction to the MSE data can be included in a form like that used when electric field effects are neglected. This allows measurements of the toroidal velocity to be integrated into the interpretation of the MSE polarization angles without changing how the data are treated in existing codes. In some cases, such as the DIII-D system, the correction is especially simple, due to the details of the neutral beam and MSE viewing geometry. The correction method is compared, using DIII-D data in a variety of plasma conditions, to analysis that assumes no radial electric field is present and to analysis that uses the standard correction method, which involves significant human intervention for profile fitting. The comparison shows that the new correction method is close to the standard one, and in all cases appears to offer a better result than use of the uncorrected data. Lastly, the method has been integrated into the standard DIII-D equilibrium reconstruction code in use for analysis between plasma pulses and is sufficiently fast that it will be implemented in real-time equilibrium analysis for control applications.

  5. Wall interference correction improvements for the ONERA main wind tunnels

    NASA Technical Reports Server (NTRS)

    Vaucheret, X.

    1982-01-01

    This paper describes improved methods of calculating wall interference corrections for the ONERA large wind tunnels. The mathematical description of the model and its sting support has become more sophisticated. An increasing number of singularities is used until agreement between the theoretical and experimental signatures of the model and sting on the walls of the closed test section is obtained. The singularity decentering effects are calculated when the model reaches large angles of attack. The porosity factor mapping on the perforated walls, deduced from the measured signatures, now replaces the reference tests previously carried out in larger tunnels. The porosity factors obtained from the blockage terms (signatures at zero lift) and from the lift terms are in good agreement. In each case (model + sting + test section), wall corrections are now determined, before the tests, as a function of the fundamental parameters M, CS, CZ. During the wind tunnel tests, the corrections are quickly computed from these functions.

  6. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

    dB(A) octave band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, overall in-helmet dB(A) level, and …

  7. Calibration of entrance dose measurement for an in vivo dosimetry programme.

    PubMed

    Ding, W; Patterson, W; Tremethick, L; Joseph, D

    1995-11-01

    An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying dose at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLD) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD owing to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of error by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements using a Victoreen White Water-RW3 tissue-equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source-to-surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of diode response was also investigated. The SSD correction factor (CSSD) was found to increase by approximately 3% over the range of SSDs from 80 to 130 cm. The correction factor for collimator size (Cfield) also varied by approximately 3% between 5 × 5 and 40 × 40 cm². The wedge correction factor (Cwedge) and plate correction factor (Cplate) were found to be functions of collimator size. Over the range of measurement, these factors varied by a maximum of 1% and 1.5%, respectively. The Cplate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. Diode sensitivity increased with temperature. A maximum variation of 2.5% in the directional dependence of diode response was observed for angles of ±60°.
In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve a high accuracy of dose delivered to the patient.
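
    The correction-factor chain in this record can be sketched as a simple multiplicative model. The function below is a minimal illustration under the assumption that the factors combine by multiplication; the factor names and example values are hypothetical, not the paper's data.

```python
# Hypothetical sketch of an entrance-dose calculation from a diode reading,
# assuming the correction factors combine multiplicatively (names and
# values are illustrative, not taken from the paper).

def entrance_dose(reading, f_cal, c_ssd=1.0, c_field=1.0,
                  c_wedge=1.0, c_plate=1.0, c_temp=1.0):
    """Convert a raw diode reading to an entrance dose estimate (cGy)."""
    return reading * f_cal * c_ssd * c_field * c_wedge * c_plate * c_temp

# Example: reading of 100.0 with a 1.02 calibration factor and a 1.5%
# field-size correction.
dose = entrance_dose(100.0, f_cal=1.02, c_field=1.015)  # -> 103.53
```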

  8. Gravity gradient preprocessing at the GOCE HPF

    NASA Astrophysics Data System (ADS)

    Bouman, J.; Rispens, S.; Gruber, T.; Schrama, E.; Visser, P.; Tscherning, C. C.; Veicherts, M.

    2009-04-01

    Among the products derived from the GOCE observations are the gravity gradients. These gravity gradients are provided in the Gradiometer Reference Frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. In order to use these gravity gradients for applications in Earth sciences and gravity field analysis, additional pre-processing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and non-tidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour at low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation; the local median acts as a high-pass filter and, like the median absolute deviation, is robust. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.
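
    The robust outlier screening described above (subtract a local median to suppress the 1/f error, then threshold on the median absolute deviation) can be sketched as follows; the window size and the factor of 5 are illustrative assumptions, not the mission's actual parameters.

```python
# Sketch of median/MAD outlier screening: the local running median acts as a
# robust high-pass filter against 1/f error, and samples whose residual
# exceeds a multiple of the MAD are flagged. Window and threshold are
# illustrative assumptions.
import numpy as np

def flag_outliers(x, window=51, k=5.0):
    half = window // 2
    # local median (robust high-pass filter)
    med = np.array([np.median(x[max(0, i - half):i + half + 1])
                    for i in range(len(x))])
    resid = x - med
    mad = np.median(np.abs(resid - np.median(resid)))
    return np.abs(resid) > k * 1.4826 * mad  # 1.4826 scales MAD to sigma

x = np.zeros(200)
x[100] = 10.0                  # single spike
flags = flag_outliers(x)       # flags only the spike
```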

  9. Preprocessing of gravity gradients at the GOCE high-level processing facility

    NASA Astrophysics Data System (ADS)

    Bouman, Johannes; Rispens, Sietse; Gruber, Thomas; Koop, Radboud; Schrama, Ernst; Visser, Pieter; Tscherning, Carl Christian; Veicherts, Martin

    2009-07-01

    Among the products derived from the gravity field and steady-state ocean circulation explorer (GOCE) observations are the gravity gradients. These gravity gradients are provided in the gradiometer reference frame (GRF) and are calibrated in-flight using satellite shaking and star sensor data. To use these gravity gradients for applications in Earth sciences and gravity field analysis, additional preprocessing needs to be done, including corrections for temporal gravity field signals to isolate the static gravity field part, screening for outliers, calibration by comparison with existing external gravity field information, and error assessment. The temporal gravity gradient corrections consist of tidal and nontidal corrections. These are all generally below the gravity gradient error level, which is predicted to show a 1/f behaviour at low frequencies. In the outlier detection, the 1/f error is compensated for by subtracting a local median from the data, while the data error is assessed using the median absolute deviation; the local median acts as a high-pass filter and, like the median absolute deviation, is robust. Three different methods have been implemented for the calibration of the gravity gradients. All three methods use a high-pass filter to compensate for the 1/f gravity gradient error. The baseline method uses state-of-the-art global gravity field models, and the most accurate results are obtained if star sensor misalignments are estimated along with the calibration parameters. A second calibration method uses GOCE GPS data to estimate a low-degree gravity field model as well as gravity gradient scale factors. Both methods allow estimation of gravity gradient scale factors down to the 10⁻³ level. The third calibration method uses highly accurate terrestrial gravity data in selected regions to validate the gravity gradient scale factors, focussing on the measurement band. Gravity gradient scale factors may be estimated down to the 10⁻² level with this method.

  10. The combination of the error correction methods of GAFCHROMIC EBT3 film

    PubMed Central

    Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei

    2017-01-01

    Purpose: The aim of this study was to combine a set of methods for radiochromic film dosimetry, including calibration, correction for lateral effects and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods: A single-film exposure was used to achieve dose calibration, and the accuracy was verified by comparison with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors at different dose levels were obtained using a quadratic formula. After lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with the data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films, applying our methods, and compared to TPS dose maps to check the correct implementation of the film dosimetry proposed here. Results: The uncertainty due to lateral effects can be reduced to ±1 cGy. Compared with the results of Micke et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For the application of IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (including lateral effect correction and the present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criteria. PMID:28750023
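
    The lateral-effect correction described above (a parabolic position dependence whose curvature varies quadratically with dose) can be sketched as below; all coefficients are illustrative assumptions, not the paper's fitted values.

```python
# Sketch of a parabolic lateral-effect correction for a flatbed film scanner:
# the response is modeled as a parabola in the cross-scan position x, with a
# dose-dependent curvature a(D) given by a quadratic. Coefficients are
# hypothetical placeholders.

def lateral_corrected_pv(pv, x, dose, a0=1e-6, a1=2e-9, a2=1e-12):
    """Remove the parabolic lateral effect from a pixel value.

    pv   : measured pixel value at lateral position x (mm from scanner axis)
    dose : approximate dose level (cGy), selects the curvature
    """
    curvature = a0 + a1 * dose + a2 * dose ** 2  # quadratic in dose
    return pv / (1.0 + curvature * x ** 2)       # flatten the parabola

corrected = lateral_corrected_pv(30000, x=80.0, dose=200.0)
```

    On the scanner axis (x = 0) the correction leaves the pixel value unchanged, as it should.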

  11. School--Possibility or (New) Risk for Young Females in Correctional Institutions

    ERIC Educational Resources Information Center

    Boric, Ivana Jedud; Mirosavljevic, Anja

    2015-01-01

    In this paper, the authors deal with the education of girls in a Croatian correctional institution as a risk factor for social exclusion based on the data obtained via semi-structured interviews with experts and the girls and via the documentation analysis method. In this regard, the paper deals with two perspectives, i.e. the girls' and experts',…

  12. Ratios of total suspended solids to suspended sediment concentrations by particle size

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

    Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 µm to 0.85 for particles measuring from 32 to 63 µm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.
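
    Applying such size-dependent factors amounts to scaling each particle-size fraction's SSC and summing. The sketch below uses the two median factors quoted in the record (0.29 for >500 µm, 0.85 for 32-63 µm); splitting a sample into exactly these fractions and the concentrations shown are illustrative assumptions.

```python
# Sketch of estimating TSS from SSC via particle-size-dependent correction
# factors (TSS ~ factor x SSC per fraction). Only the two median factors
# quoted in the record are used; the sample data are hypothetical.

CORRECTION_FACTORS = {
    ">500um": 0.29,
    "32-63um": 0.85,
}

def estimate_tss(ssc_by_fraction):
    """Sum corrected per-fraction concentrations (mg/L)."""
    return sum(CORRECTION_FACTORS[frac] * ssc
               for frac, ssc in ssc_by_fraction.items())

tss = estimate_tss({">500um": 40.0, "32-63um": 100.0})  # -> 96.6 mg/L
```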

  13. Altitudinal patterns of plant diversity on the Jade Dragon Snow Mountain, southwestern China.

    PubMed

    Xu, Xiang; Zhang, Huayong; Tian, Wang; Zeng, Xiaoqiang; Huang, Hai

    2016-01-01

    Understanding altitudinal patterns of biological diversity and their underlying mechanisms is critically important for biodiversity conservation in mountainous regions. The contribution of area to plant diversity patterns is widely acknowledged and may mask the effects of other determinant factors. In this context, it is important to examine altitudinal patterns of corrected taxon richness by eliminating the area effect. Here we adopt two methods to correct observed taxon richness: a power-law relationship between richness and area, hereafter "method 1"; and richness counted in equal-area altitudinal bands, hereafter "method 2". We compare these two methods on the Jade Dragon Snow Mountain, which is the nearest large-scale altitudinal gradient to the Equator in the Northern Hemisphere. We find that seed plant species richness, genus richness, family richness, and species richness of trees, shrubs, herbs and Groups I-III (species with elevational range size <150, between 150 and 500, and >500 m, respectively) display distinct hump-shaped patterns along the equal-elevation altitudinal gradient. The corrected taxon richness based on method 2 (TRcor2) also shows hump-shaped patterns for all plant groups, while the one based on method 1 (TRcor1) does not. As for the abiotic factors influencing the patterns, mean annual temperature, mean annual precipitation, and mid-domain effect explain a larger part of the variation in TRcor2 than in TRcor1. In conclusion, for biodiversity patterns on the Jade Dragon Snow Mountain, method 2 preserves the significant influences of abiotic factors to the greatest degree while eliminating the area effect. Our results thus reveal that although the classical method 1 has earned more attention and approval in previous research, method 2 can perform better under certain circumstances. 
We not only confirm the essential contribution of method 1 in community ecology, but also highlight the significant role of method 2 in eliminating the area effect, and call for more application of method 2 in further macroecological studies.
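
    "Method 1" above, a power-law richness-area correction, can be sketched as rescaling observed richness to unit area under a species-area relationship S = c·A^z. The exponent z = 0.25 below is a conventional illustrative value, not the paper's fitted one.

```python
# Sketch of a power-law area correction for taxon richness ("method 1"):
# under S = c * A**z, dividing by A**z removes the area effect so bands of
# different area can be compared. z = 0.25 is an illustrative default.

def area_corrected_richness(richness, area, z=0.25):
    """Richness rescaled to unit area under S = c * A**z."""
    return richness / area ** z

# A band with 120 species over 4 km2 vs. one with 100 species over 1 km2:
r_big = area_corrected_richness(120, 4.0)    # corrected below 100
r_small = area_corrected_richness(100, 1.0)  # unchanged at unit area
```

    After correction, the larger band's apparent richness advantage disappears, which is the point of removing the area effect before comparing bands.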

  14. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction

    PubMed Central

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-01

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods. PMID:29342857

  15. An Adaptive Deghosting Method in Neural Network-Based Infrared Detectors Nonuniformity Correction.

    PubMed

    Li, Yiyang; Jin, Weiqi; Zhu, Jin; Zhang, Xu; Li, Shuo

    2018-01-13

    The problems of the neural network-based nonuniformity correction algorithm for infrared focal plane arrays mainly concern slow convergence speed and ghosting artifacts. In general, the more stringent the inhibition of ghosting, the slower the convergence speed. The factors that affect these two problems are the estimated desired image and the learning rate. In this paper, we propose a learning rate rule that combines adaptive threshold edge detection and a temporal gate. Through the noise estimation algorithm, the adaptive spatial threshold is related to the residual nonuniformity noise in the corrected image. The proposed learning rate is used to effectively and stably suppress ghosting artifacts without slowing down the convergence speed. The performance of the proposed technique was thoroughly studied with infrared image sequences with both simulated nonuniformity and real nonuniformity. The results show that the deghosting performance of the proposed method is superior to that of other neural network-based nonuniformity correction algorithms and that the convergence speed is equivalent to the tested deghosting methods.

  16. Correction of spin diffusion during iterative automated NOE assignment

    NASA Astrophysics Data System (ADS)

    Linge, Jens P.; Habeck, Michael; Rieping, Wolfgang; Nilges, Michael

    2004-04-01

    Indirect magnetization transfer increases the observed nuclear Overhauser enhancement (NOE) between two protons in many cases, leading to an underestimation of target distances. Wider distance bounds are necessary to account for this error. However, this leads to a loss of information and may reduce the quality of the structures generated from the inter-proton distances. Although several methods for spin diffusion correction have been published, they are often not employed to derive distance restraints. This prompted us to develop a user-friendly and CPU-efficient method to correct for spin diffusion that is fully integrated in our program ambiguous restraints for iterative assignment (ARIA). ARIA thus allows automated iterative NOE assignment and structure calculation with spin-diffusion-corrected distances. The method relies on numerical integration of the coupled differential equations that govern relaxation, using matrix squaring and sparse matrix techniques. We derive a correction factor for the distance restraints from calculated NOE volumes and inter-proton distances. To evaluate the impact of our spin diffusion correction, we tested the new calibration process extensively with data from the Pleckstrin homology (PH) domain of Mus musculus β-spectrin. By comparing structures refined with and without spin diffusion correction, we show that spin-diffusion-corrected distance restraints give rise to structures of higher quality (notably fewer NOE violations and a more regular Ramachandran map). Furthermore, spin diffusion correction permits the use of tighter error bounds, which improves the distinction between signal and noise in an automated NOE assignment scheme.
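
    The matrix-squaring integration mentioned above can be sketched as follows: build a short-time propagator for the relaxation equations and square it repeatedly to reach the mixing time. The 2×2 rate matrix below is a toy example, not real relaxation rates.

```python
# Sketch of relaxation-matrix propagation by matrix squaring: approximate
# the short-time propagator exp(-R*dt) ~ I - R*dt, then square it n times
# to obtain exp(-R * t_mix) with t_mix = 2**n * dt. Toy rate matrix only.
import numpy as np

def propagator(R, t_mix, n_square=20):
    """Approximate expm(-R * t_mix) by repeated squaring."""
    dt = t_mix / 2 ** n_square
    A = np.eye(len(R)) - R * dt       # first-order short-time propagator
    for _ in range(n_square):
        A = A @ A                     # doubles the time step each pass
    return A

R = np.array([[1.0, -0.2],
              [-0.2, 1.0]])          # toy cross-relaxation matrix (1/s)
V = propagator(R, t_mix=0.1)         # intensities after 100 ms mixing
```

    The off-diagonal elements of the result are the indirect (spin diffusion) transfer pathways that the correction factor accounts for.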

  17. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    PubMed

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUV max was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  18. SU-E-I-20: Dead Time Count Loss Compensation in SPECT/CT: Projection Versus Global Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    Purpose: To compare projection-based versus global corrections that compensate for deadtime count loss in SPECT/CT images. Methods: SPECT/CT images of an IEC phantom (2.3 GBq 99mTc) with ∼10% deadtime loss, containing the 37 mm (uptake 3), 28 and 22 mm (uptake 6) spheres, were acquired using a 2-detector SPECT/CT system with 64 projections/detector and 15 s/projection. The deadtime Ti and the true count rate Ni at each projection i were calculated using the monitor-source method. Deadtime-corrected SPECT images were reconstructed twice: (1) from projections that were individually corrected for deadtime losses; and (2) from the original projections with losses, correcting the reconstructed SPECT images with a scaling factor equal to the inverse of the average fractional loss over 5 projections/detector. For both cases, the SPECT images were reconstructed using OSEM with attenuation and scatter corrections. The two SPECT datasets were assessed by comparing line profiles in the xy-plane and along the z-axis, evaluating the count recoveries, and comparing ROI statistics. Higher deadtime losses (up to 50%) were also simulated in the individually corrected projections by multiplying each projection i by exp(-a*Ni*Ti), where a is a scalar. Additionally, deadtime corrections in phantoms with different geometries and deadtime losses were explored. The same two correction methods were carried out for all these datasets. Results: Averaging the deadtime losses over 5 projections/detector suffices to recover >99% of the lost counts in most clinical cases. The line profiles (xy-plane and z-axis) and the statistics in the ROIs drawn in the SPECT images corrected using both methods agreed within the statistical noise. The count-loss recoveries of the two methods also agree to better than 99%. Conclusion: The projection-based and global corrections yield visually indistinguishable SPECT images. The global correction, based on sparse sampling of projection losses, allows accurate SPECT deadtime loss correction while keeping the study duration reasonable.
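
    The global correction amounts to one scale factor: the inverse of the average fractional loss over a sparse sample of projections. The sketch below assumes the paralyzable loss model exp(-N·T) implied by the record's simulation formula; the rates and deadtimes are hypothetical.

```python
# Sketch of the global deadtime compensation: estimate the mean fractional
# count loss from a few sampled projections, then scale reconstructed voxel
# values by its inverse. Loss model and numbers are illustrative assumptions.
import math

def global_deadtime_scale(true_rates, dead_times):
    """Inverse of the mean fractional loss over sampled projections."""
    losses = [math.exp(-n * t) for n, t in zip(true_rates, dead_times)]
    return 1.0 / (sum(losses) / len(losses))

# Five sampled projections per detector, each with ~10% loss:
scale = global_deadtime_scale([1e5] * 5, [1.05e-6] * 5)
corrected_voxel = 900.0 * scale
```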

  19. Whole-heart coronary MRA with 3D affine motion correction using 3D image-based navigation.

    PubMed

    Henningsson, Markus; Prieto, Claudia; Chiribiri, Amedeo; Vaillant, Ghislain; Razavi, Reza; Botnar, René M

    2014-01-01

    Robust motion correction is necessary to minimize respiratory motion artefacts in coronary MR angiography (CMRA). The state-of-the-art method uses a 1D feet-head translational motion correction approach, and data acquisition is limited to a small window in the respiratory cycle, which prolongs the scan by a factor of 2-3. The purpose of this work was to implement 3D affine motion correction for Cartesian whole-heart CMRA using a 3D navigator (3D-NAV) to allow for data acquisition throughout the whole respiratory cycle. 3D affine transformations for different respiratory states (bins) were estimated using 3D-NAV image acquisitions acquired during the startup profiles of a steady-state free precession sequence. The calculated 3D affine transformations were applied to the corresponding high-resolution Cartesian image acquisition, which had been similarly binned, to correct for respiratory motion between bins. Quantitative and qualitative comparisons showed no statistical difference between images acquired with the proposed method and the reference method using a diaphragmatic navigator with a narrow gating window. We demonstrate that 3D-NAV and 3D affine correction can be used to acquire Cartesian whole-heart 3D coronary artery images with 100% scan efficiency and image quality similar to that of the state-of-the-art gated and corrected method, which achieves approximately 50% scan efficiency. Copyright © 2013 Wiley Periodicals, Inc.

  20. Coherent vector meson photoproduction from deuterium at intermediate energies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rogers, T.C.; Strikman, M.I.; Sargsian, M.M.

    2006-04-15

    We analyze the cross section for vector meson photoproduction off a deuteron for the intermediate range of photon energies, starting at a few giga-electron-volts above threshold and higher. We reproduce the steps in the derivation of the conventional nonrelativistic Glauber expression based on an effective diagrammatic method while making corrections for Fermi motion and intermediate-energy kinematic effects. We show that, for intermediate-energy vector meson production, the usual Glauber factorization breaks down, and we derive corrections to the usual Glauber method to linear order in the longitudinal nucleon momentum. The purpose of our analysis is to establish methods for probing interesting physics in the production mechanism for φ mesons and heavier vector mesons. We demonstrate how neglecting the breakdown of Glauber factorization can lead to errors in measurements of basic cross sections extracted from nuclear data.

  1. Next-to-leading-logarithmic power corrections for N-jettiness subtraction in color-singlet production

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Isgrò, Andrea; Petriello, Frank

    2018-04-01

    We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T. Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N-jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.

  2. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since such estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003–2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction, and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim of assessing the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by the Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to yield bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study shows the efficiency of our bias correction approach. PMID:27314363

  3. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    PubMed

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining attraction in the field of hydrology, particularly in rainfall-runoff modeling. Since such estimates are affected by errors, correction is required. In this study, we tested the high-resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Centre (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km-30 min resolution are aggregated to daily totals to match in-situ observations for the period 2003-2010. Study objectives are to assess the bias of the satellite estimates, to identify the optimum window size for application of bias correction, and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SW's) of 3, 5, 7, 9, …, 31 days with the aim of assessing the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by the Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of the interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to yield bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed the existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study shows the efficiency of our bias correction approach.
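
    A multiplicative window bias factor of the kind described in these two records can be sketched as the ratio of accumulated gauge rainfall to accumulated satellite rainfall over one window (here a 7-day sequential window, which the study found optimal). The epsilon guard against dry windows and the sample data are illustrative assumptions.

```python
# Sketch of a multiplicative bias factor over one 7-day sequential window:
# ratio of accumulated in-situ (gauge) rainfall to accumulated CMORPH
# rainfall, applied as a multiplier to the satellite estimates.
# Guard value and data are hypothetical.

def window_bias_factor(gauge, satellite, eps=1e-6):
    """Ratio of accumulated gauge to satellite rainfall over one window."""
    return sum(gauge) / max(sum(satellite), eps)

gauge_7d = [0.0, 5.2, 12.1, 0.0, 3.4, 8.0, 1.1]    # mm/day, in-situ
cmorph_7d = [0.5, 3.9, 9.8, 0.2, 2.7, 6.1, 0.8]    # mm/day, satellite
factor = window_bias_factor(gauge_7d, cmorph_7d)
corrected = [factor * r for r in cmorph_7d]         # bias-corrected series
```

    By construction, the corrected series accumulates to the gauge total over the window, which is the sense in which the bias is removed.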

  4. Development of an Analysis and Design Optimization Framework for Marine Propellers

    NASA Astrophysics Data System (ADS)

    Tamhane, Ashish C.

In this thesis, a framework for the analysis and design optimization of ship propellers is developed. This framework can be utilized as an efficient synthesis tool to determine the main geometric characteristics of the propeller, and it also provides the designer with the capability to optimize the shape of the blade sections based on specific criteria. A hybrid lifting-line method with lifting-surface corrections to account for three-dimensional flow effects has been developed. The prediction of the correction factors is achieved using Artificial Neural Networks and Support Vector Regression. This approach results in increased approximation accuracy compared to existing methods and allows for extrapolation of the correction factor values. The effect of viscosity is implemented in the framework by coupling the lifting-line method with the open-source RANSE solver OpenFOAM for the calculation of lift, drag and pressure distribution on the blade sections, using a transition k-ω SST turbulence model. Case studies of benchmark high-speed propulsors are utilized to validate the proposed framework for propeller operation in open-water conditions as well as in a ship's wake.

  5. Rectal temperature-based death time estimation in infants.

    PubMed

    Igari, Yui; Hosokai, Yoshiyuki; Funayama, Masato

    2016-03-01

In determining the time of death in infants based on rectal temperature, the same methods used for adults are generally applied. However, whether these methods are suitable for infants is unclear. In this study, we examined the following 3 methods in 20 infant death cases: computer simulation of rectal temperature based on the infinite cylinder model (Ohno's method), computer-based double exponential approximation based on Marshall and Hoare's double exponential model with Henssge's parameter determination (Henssge's method), and computer-based collinear approximation based on extrapolation of the rectal temperature curve (collinear approximation). The interval between the last time the infant was seen alive and the time that he/she was found dead was defined as the death time interval and compared with the estimated time of death. With Ohno's method, 7 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. The results of both Henssge's method and collinear approximation were clearly inferior to those of Ohno's method. The corrective factor was set within the range of 0.7-1.3 in Henssge's method, and a modified program was developed to make it possible to change the corrective factors. Modification A, in which the upper limit of the corrective factor range was set to the maximum value for each body weight, produced the best results: 8 cases were within the death time interval, and the average deviation in the other 12 cases was approximately 80 min. There is a possibility that the influence of thermal insulation on the actual infants was stronger than that previously shown by Henssge. We conclude that Ohno's method and Modification A are useful for death time estimation in infants. However, it is important to accept the estimated time of death with a certain latitude, considering other circumstances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Interference correction by extracting the information of interference dominant regions: Application to near-infrared spectra

    NASA Astrophysics Data System (ADS)

    Bi, Yiming; Tang, Liang; Shan, Peng; Xie, Qiong; Hu, Yong; Peng, Silong; Tan, Jie; Li, Changwen

    2014-08-01

Interference such as baseline drift and light scattering can degrade model predictability in multivariate analysis of near-infrared (NIR) spectra. Usually, interference can be represented by an additive and a multiplicative factor. To eliminate these interferences, correction parameters need to be estimated from the spectra. However, the spectra are often a mixture of physical light-scattering effects and chemical light-absorbance effects, making parameter estimation difficult. Herein, a novel algorithm was proposed to automatically find a spectral region in which the chemical absorbance of interest and the noise are both low, that is, an interference dominant region (IDR). Based on the definition of the IDR, a two-step method was proposed to find the optimal IDR and the corresponding correction parameters estimated from it. Finally, the correction was applied to the full spectral range using the previously obtained parameters, for the calibration set and test set, respectively. The method can be applied to multi-target systems with one IDR suitable for all targeted analytes. Tested on two benchmark data sets of near-infrared spectra, the proposed method provided considerable improvement over full-spectrum estimation methods and was comparable with other state-of-the-art methods.

  7. Borehole deviation and correction factor data for selected wells in the eastern Snake River Plain aquifer at and near the Idaho National Laboratory, Idaho

    USGS Publications Warehouse

    Twining, Brian V.

    2016-11-29

The U.S. Geological Survey (USGS), in cooperation with the U.S. Department of Energy, has maintained a water-level monitoring program at the Idaho National Laboratory (INL) since 1949. The purpose of the program is to systematically measure and report water-level data to assess the eastern Snake River Plain aquifer and long-term changes in groundwater recharge, discharge, movement, and storage. Water-level data are commonly used to generate potentiometric maps and to infer increases and (or) decreases in the regional groundwater system. Well deviation is one component of water-level data that is often overlooked; it results from well construction and the well not being plumb. Because well deviation generally increases linearly with increasing slant angle, it can suggest artificial anomalies in the water table. To remove the effects of well deviation, the USGS INL Project Office applies a correction factor to water-level data when a well deviation survey indicates a change in the reference elevation of greater than or equal to 0.2 ft. Borehole well deviation survey data were considered for 177 wells completed within the eastern Snake River Plain aquifer, but not all wells had deviation survey data available. As of 2016, the USGS INL Project Office database included: 57 wells with gyroscopic survey data; 100 wells with magnetic deviation survey data; 11 wells with erroneous gyroscopic data that were excluded; and 68 wells with no deviation survey data available. Of the 57 wells with gyroscopic deviation surveys, correction factors for 16 wells ranged from 0.20 to 6.07 ft and inclination angles (SANG) ranged from 1.6 to 16.0 degrees.
Of the 100 wells with magnetic deviation surveys, correction factors for 21 wells ranged from 0.20 to 5.78 ft and SANG ranged from 1.0 to 13.8 degrees, not including the wells that did not meet the correction factor criterion of greater than or equal to 0.20 ft. Forty-seven wells had both gyroscopic and magnetic deviation survey data. Datasets from both survey types were compared for the same well to determine whether magnetic survey data were consistent with gyroscopic survey data. Of those 47 wells, 96 percent showed similar correction factor estimates (difference ≤ 0.20 ft) for the magnetic and gyroscopic well deviation surveys. A linear comparison of correction factor estimates from the magnetic and gyroscopic deviation surveys for all 47 wells indicates good linear correlation, represented by an r-squared of 0.88. The correction factor difference between the gyroscopic and magnetic surveys for 45 of 47 wells ranged from 0.00 to 0.18 ft, not including USGS 57 and USGS 125. Wells USGS 57 and USGS 125 show correction factor differences of 2.16 and 0.36 ft, respectively; however, review of the data files suggests erroneous SANG data for both magnetic deviation well surveys. The difference in magnetic and gyroscopic well deviation SANG measurements, for all wells, ranged from 0.0 to 0.9 degrees. These data indicate good agreement between SANG data measured using magnetic deviation survey methods and those measured using gyroscopic deviation survey methods, even for surveys collected years apart.

  8. Lock-in amplifier error prediction and correction in frequency sweep measurements.

    PubMed

    Sonnaillon, Maximiliano Osvaldo; Bonetto, Fabian Jose

    2007-01-01

    This article proposes an analytical algorithm for predicting errors in lock-in amplifiers (LIAs) working with time-varying reference frequency. Furthermore, a simple method for correcting such errors is presented. The reference frequency can be swept in order to measure the frequency response of a system within a given spectrum. The continuous variation of the reference frequency produces a measurement error that depends on three factors: the sweep speed, the LIA low-pass filters, and the frequency response of the measured system. The proposed error prediction algorithm is based on the final value theorem of the Laplace transform. The correction method uses a double-sweep measurement. A mathematical analysis is presented and validated with computational simulations and experimental measurements.

  9. Asymmetric collimation: Dosimetric characteristics, treatment planning algorithm, and clinical applications

    NASA Astrophysics Data System (ADS)

    Kwa, William

    1998-11-01

In this thesis, the dosimetric characteristics of asymmetric fields are investigated, and a new computation method for the dosimetry of asymmetric fields is described and implemented into an existing treatment planning algorithm. Based on this asymmetric field treatment planning algorithm, the clinical use of asymmetric fields in cancer treatment is investigated, and new treatment techniques for conformal therapy are developed. Dose calculation is verified with thermoluminescent dosimeters in a body phantom. An analytical approach is proposed to account for the dose reduction when a corresponding symmetric field is collimated asymmetrically to a smaller asymmetric field. This is represented by a correction factor that uses the ratio of the equivalent field dose contributions between the asymmetric and symmetric fields. The same equation used in the expression of the correction factor can be used for a wide range of asymmetric field sizes, photon energies and linear accelerators. This correction factor accounts for the reduction in scatter contributions within an asymmetric field, with the result that the dose profile of an asymmetric field resembles that of a wedged field. The output factors of some linear accelerators depend on the collimator settings and on whether the upper or lower collimators are used to set the narrower dimension of a radiation field. In addition to this collimator exchange effect for symmetric fields, asymmetric fields are also found to exhibit an asymmetric collimator backscatter effect. The proposed correction factor is extended to account for these effects. A set of semi-empirically determined correction factors is established to account for the dose reduction in the penumbral region and outside the irradiated field. Since these correction factors rely only on the output factors and the tissue maximum ratios, they can easily be implemented into an existing treatment planning system.
There is no need to store additional sets of asymmetric field profiles or databases to implement these correction factors in an existing in-house treatment planning system. With this asymmetric field algorithm, computation is found to be 20 times faster than with a commercial system. The computation method can also be generalized to the dose representation of a two-fold asymmetric field, whereby both the field width and length are set asymmetrically, and the calculations are not limited to points lying on one of the principal planes. The dosimetric consequences of asymmetric fields for dose delivery in clinical situations are investigated. Examples of the clinical use of asymmetric fields are given, and the potential use of asymmetric fields in conformal therapy is demonstrated. An alternative head and neck conformal therapy is described, and the treatment plan is compared to the conventional technique. The dose distributions calculated for the standard and alternative techniques are confirmed with thermoluminescent dosimeters in a body phantom at selected dose points. (Abstract shortened by UMI.)

  10. The two sides of the C-factor.

    PubMed

    Fok, Alex S L; Aregawi, Wondwosen A

    2018-04-01

The aim of this paper is to investigate how the lateral constraints at the bonded surfaces of resin composite specimens used in laboratory measurement affect shrinkage strain/stress development. Using three-dimensional (3D) Hooke's law, a recently developed shrinkage stress theory is extended to 3D to include the additional out-of-plane strain/stress induced by the lateral constraints at the bonded surfaces through the Poisson's ratio effect. The model contains a parameter that defines the relative thickness of the boundary layers, adjacent to the bonded surfaces, that are under such multiaxial stresses. The resulting differential equation is solved for the shrinkage stress under different boundary conditions. The accuracy of the model is assessed by comparing the numerical solutions with a wide range of experimental data, which include both shrinkage strain and shrinkage stress measurements. There is good agreement between theory and experiments. The model correctly predicts the different instrument-dependent effects that a specimen's configuration factor (C-factor) has on shrinkage stress: for noncompliant stress-measuring instruments, shrinkage stress increases with the C-factor of the cylindrical specimen, while the opposite is true for compliant instruments. The model also provides a correction factor, which is a function of the C-factor, Poisson's ratio and boundary layer thickness of the specimen, for shrinkage strain measured using the bonded-disc method. For the resin composite examined, the boundary layers have a combined thickness that is ∼11.5% of the specimen's diameter. The theory provides a physical and mechanical basis for the C-factor using principles of engineering mechanics. The correction factor it provides allows the linear shrinkage strain of a resin composite to be obtained more accurately from the bonded-disc method. Published by Elsevier Ltd.

  11. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  12. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  13. [Baseline correction of spectrum for the inversion of chlorophyll-a concentration in turbid water].

    PubMed

    Wei, Yu-Chun; Wang, Guo-Xiang; Cheng, Chun-Mei; Zhang, Jing; Sun, Xiao-Peng

    2012-09-01

Suspended particulate material is the main factor affecting remote-sensing inversion of chlorophyll-a concentration (Chla) in turbid water. Based on the optical properties of suspended material in water, the present paper proposes a linear baseline correction method to weaken the suspended-particle contribution to the spectrum measured above the turbid water surface. The linear baseline is defined as the straight line connecting the reflectance values at 450 and 750 nm, and baseline correction consists of subtracting this baseline from the spectral reflectance. Analysis of in situ field data from Meiliangwan, Taihu Lake, collected in April 2011 and March 2010, shows that linear baseline correction of the spectrum can improve the inversion precision of Chla and produce better model diagnostics. For the March 2010 data, the RMSE of the band-ratio model built from the original spectrum is 4.11 mg m(-3), while that built from the baseline-corrected spectrum is 3.58 mg m(-3). Moreover, the residual distribution and homoscedasticity of the model built from the baseline-corrected spectrum are clearly improved. The model RMSE for April 2011 shows a similar result. The authors suggest using linear baseline correction as the spectrum-processing method to improve Chla inversion accuracy in turbid water without algal blooms.
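As a minimal sketch of that baseline correction (using a synthetic spectrum, not the Taihu Lake data), the line joining the reflectance values at 450 and 750 nm is subtracted from the spectrum:

```python
import numpy as np

def linear_baseline_correction(wavelengths, reflectance, lo=450.0, hi=750.0):
    """Subtract the straight line joining the reflectance at `lo` and `hi` nm
    (the linear baseline) to suppress the broad suspended-particle signal."""
    w = np.asarray(wavelengths, dtype=float)
    r = np.asarray(reflectance, dtype=float)
    r_lo, r_hi = np.interp(lo, w, r), np.interp(hi, w, r)
    baseline = r_lo + (r_hi - r_lo) / (hi - lo) * (w - lo)
    return r - baseline

wavelengths = np.arange(400.0, 801.0)
# synthetic spectrum: linear particle background plus a chlorophyll-like dip near 675 nm
spectrum = (0.02 + 1e-5 * (wavelengths - 400.0)
            - 0.004 * np.exp(-((wavelengths - 675.0) / 10.0) ** 2))
corrected = linear_baseline_correction(wavelengths, spectrum)
print(round(corrected[wavelengths == 450.0][0], 6))  # 0.0 at the baseline anchor
```

After subtraction, the broad background is removed while the absorption feature near 675 nm survives as a negative excursion.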

  14. Artificial Intelligence Techniques for Automatic Screening of Amblyogenic Factors

    PubMed Central

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

Purpose To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. Methods In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually in this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret the results. The artificial intelligence systems were evaluated by a tenfold testing method. Results The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the “gold standard” specialist examination with a “refer/do not refer” decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less accurate in identifying more moderate refractive errors, below +5 and less than −7. Conclusions Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years. PMID:19277222

  15. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.

Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles are difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data are not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors to the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and with analytical laboratory results from digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loadings over a range of about 2 mg cm(-2) to 1,000 mg cm(-2).

  16. Alpha Air Sample Counting Efficiency Versus Dust Loading: Evaluation of a Large Data Set

    DOE PAGES

    Hogue, M. G.; Gause-Lott, S. M.; Owensby, B. N.; ...

    2018-03-03

Dust loading on air sample filters is known to cause a loss of efficiency for direct counting of alpha activity on the filters, but the amount of dust loading and the correction factor needed to account for attenuated alpha particles are difficult to assess. In this paper, correction factors are developed by statistical analysis of a large database of air sample results for a uranium and plutonium processing facility at the Savannah River Site. As is typically the case, dust-loading data are not directly available, but sample volume is found to be a reasonable proxy measure; the amount of dust loading is inferred by a combination of the derived correction factors and a Monte Carlo model. The technique compares the distribution of activity ratios [beta/(beta + alpha)] by volume and applies a range of correction factors to the raw alpha count rate. The best-fit results with this method are compared with MCNP modeling of activity uniformly deposited in the dust and with analytical laboratory results from digested filters. Finally, a linear fit is proposed for evenly deposited alpha activity collected on filters with dust loadings over a range of about 2 mg cm(-2) to 1,000 mg cm(-2).
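A minimal sketch of applying such a correction; the volume-to-dust proxy constant and the linear-fit coefficients below are hypothetical placeholders, not the fitted values from the paper:

```python
def dust_loading_from_volume(volume_m3, k_mg_cm2_per_m3=0.5):
    """Proxy for dust loading: assume it grows linearly with sampled air
    volume (k is an illustrative constant, not a value from the paper)."""
    return k_mg_cm2_per_m3 * volume_m3

def corrected_alpha_rate(raw_alpha_cpm, dust_mg_cm2, slope=0.004, intercept=1.0):
    """Scale the raw alpha count rate up by a linear attenuation-loss
    correction factor (slope and intercept are hypothetical fit values)."""
    return raw_alpha_cpm * (intercept + slope * dust_mg_cm2)

# 100 cpm raw, 40 m3 sampled -> inferred 20 mg cm(-2) dust loading
print(round(corrected_alpha_rate(100.0, dust_loading_from_volume(40.0)), 1))  # 108.0
```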

  17. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude.
Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.
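The core of the revised regression can be written compactly; the notation below is our own shorthand for the quantities in this record (data vector y, full input vector β, smoothed approximation Yβ*, measurement-error covariance C_ε), showing how the second-moment matrix of the model errors enters the weight matrix:

```latex
\hat{\beta}^{*} \;=\; \arg\min_{\beta^{*}}\;
\bigl[\,y - f(Y\beta^{*})\,\bigr]^{\mathsf T}\,\omega\,
\bigl[\,y - f(Y\beta^{*})\,\bigr],
\qquad
\omega^{-1} \;\propto\; C_{\varepsilon} \;+\;
\mathrm{E}\!\left\{\bigl[f(\beta)-f(Y\beta^{*})\bigr]
\bigl[f(\beta)-f(Y\beta^{*})\bigr]^{\mathsf T}\right\}
```

The second term in ω⁻¹ is the spatially correlated model-error covariance that, when omitted, makes standard confidence and prediction intervals too small.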

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Altunbas, Cem, E-mail: caltunbas@gmail.com; Lai, Chao-Jen; Zhong, Yuncheng

Purpose: When using flat panel detectors (FPD) for cone beam computed tomography (CBCT), pixel gain variations may lead to structured nonuniformities in projections and ring artifacts in CBCT images. Such gain variations can be caused by changes in detector entrance exposure levels or beam hardening, and they are not accounted for by conventional flat field correction methods. In this work, the authors presented a method to identify isolated pixel clusters that exhibit gain variations and proposed a pixel gain correction (PGC) method to suppress both beam-hardening and exposure-level-dependent gain variations. Methods: To modulate both beam spectrum and entrance exposure, flood field FPD projections were acquired using beam filters with varying thicknesses. “Ideal” pixel values were estimated by performing polynomial fits in both raw and flat field corrected projections. Residuals were calculated by taking the difference between measured and ideal pixel values to identify clustered image and FPD artifacts in flat field corrected and raw images, respectively. To correct clustered image artifacts, the ratios of ideal to measured pixel values in filtered images were utilized as pixel-specific gain correction factors, referred to as the PGC method, and they were tabulated as a function of pixel value in a look-up table. Results: 0.035% of detector pixels led to clustered image artifacts in flat field corrected projections, and 80% of these pixels were traced back and linked to artifacts in the FPD. The performance of the PGC method was tested in a variety of imaging conditions and phantoms. The PGC method reduced clustered image artifacts and fixed pattern noise in projections, and ring artifacts in CBCT images. Conclusions: Clustered projection image artifacts that lead to ring artifacts in CBCT can be better identified with our artifact detection approach.
When compared to the conventional flat field correction method, the proposed PGC method enables characterization of nonlinear pixel gain variations as a function of changes in x-ray spectrum and intensity. Hence, it can better suppress image artifacts due to beam hardening as well as artifacts that arise from detector entrance exposure variation.

  19. Performance of a Line Loss Correction Method for Gas Turbine Emission Measurements

    NASA Astrophysics Data System (ADS)

    Hagen, D. E.; Whitefield, P. D.; Lobo, P.

    2015-12-01

    International concern for the environmental impact of jet engine exhaust emissions in the atmosphere has led to increased attention on gas turbine engine emission testing. The Society of Automotive Engineers Aircraft Exhaust Emissions Measurement Committee (E-31) has published an Aerospace Information Report (AIR) 6241 detailing the sampling system for the measurement of non-volatile particulate matter from aircraft engines, and is developing an Aerospace Recommended Practice (ARP) for methodology and system specification. The Missouri University of Science and Technology (MST) Center for Excellence for Aerospace Particulate Emissions Reduction Research has led numerous jet engine exhaust sampling campaigns to characterize emissions at different locations in the expanding exhaust plume. Particle loss, due to various mechanisms, occurs in the sampling train that transports the exhaust sample from the engine exit plane to the measurement instruments. To account for the losses, both the size dependent penetration functions and the size distribution of the emitted particles need to be known. However in the proposed ARP, particle number and mass are measured, but size is not. Here we present a methodology to generate number and mass correction factors for line loss, without using direct size measurement. A lognormal size distribution is used to represent the exhaust aerosol at the engine exit plane and is defined by the measured number and mass at the downstream end of the sample train. The performance of this line loss correction is compared to corrections based on direct size measurements using data taken by MST during numerous engine test campaigns. The experimental uncertainty in these correction factors is estimated. Average differences between the line loss correction method and size based corrections are found to be on the order of 10% for number and 2.5% for mass.
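A sketch of that methodology under explicit assumptions: fix the geometric standard deviation, assume an illustrative diffusion-like penetration curve (not a measured curve for any actual sampling train), find the exit-plane lognormal whose transported mass-to-number ratio matches the downstream measurements, then integrate to obtain number and mass correction factors:

```python
import numpy as np

def lognormal_pdf(d, gmd, gsd):
    """Lognormal number-size distribution (unit total number)."""
    s = np.log(gsd)
    return np.exp(-(np.log(d / gmd)) ** 2 / (2 * s ** 2)) / (d * s * np.sqrt(2 * np.pi))

def penetration(d, d_half=20e-9):
    """Illustrative size-dependent line penetration: small particles are
    lost preferentially, as in diffusion losses. Not a measured curve."""
    return 1.0 - np.exp(-d / d_half)

def trapz(y, x):
    """Trapezoidal integration (kept explicit for portability)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def line_loss_corrections(n_meas, m_meas, gsd=1.8, rho_kg_m3=1000.0):
    """Number and mass correction factors (exit plane / downstream)."""
    d = np.logspace(np.log10(5e-9), np.log10(1e-6), 2000)   # diameters, m
    mpp = rho_kg_m3 * np.pi / 6.0 * d ** 3                  # mass per particle
    target = m_meas / n_meas                                # downstream mean mass
    # scan GMD until the transported distribution reproduces that ratio
    gmds = np.logspace(np.log10(8e-9), np.log10(3e-7), 400)
    errs = []
    for gmd in gmds:
        nd = lognormal_pdf(d, gmd, gsd) * penetration(d)
        errs.append(abs(trapz(nd * mpp, d) / trapz(nd, d) - target))
    gmd = gmds[int(np.argmin(errs))]
    nd0 = lognormal_pdf(d, gmd, gsd)                        # exit-plane shape
    k_num = trapz(nd0, d) / trapz(nd0 * penetration(d), d)
    k_mass = trapz(nd0 * mpp, d) / trapz(nd0 * mpp * penetration(d), d)
    return k_num, k_mass

# downstream: 1e14 particles and 5e-6 kg per unit sample (illustrative figures)
k_num, k_mass = line_loss_corrections(1e14, 5e-6)
print(k_num > k_mass > 1.0)  # True: number losses exceed mass losses for small particles
```

The ordering of the two factors mirrors the record's finding that the number correction differs from the mass correction, since the mass integral is dominated by larger particles that penetrate the line more easily.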

  20. OBT analysis method using polyethylene beads for limited quantities of animal tissue.

    PubMed

    Kim, S B; Stuart, M

    2015-08-01

    This study presents a polyethylene beads method for OBT determination in animal tissues and animal products for cases where the amount of water recovered by combustion is limited by sample size or quantity. In the method, the amount of water recovered after combustion is enhanced by adding tritium-free polyethylene beads to the sample prior to combustion in an oxygen bomb. The method reduces process time by allowing the combustion water to be easily collected with a pipette. Sufficient water recovery was achieved using the polyethylene beads method when 2 g of dry animal tissue or animal product were combusted with 2 g of polyethylene beads. Correction factors, which account for the dilution due to the combustion water of the beads, are provided for beef, chicken, pork, fish and clams, as well as egg, milk and cheese. The method was tested by comparing its OBT results with those of the conventional method using animal samples collected on the Chalk River Laboratories (CRL) site. The results determined that the polyethylene beads method added no more than 25% uncertainty when appropriate correction factors are used. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
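The dilution arithmetic behind such a correction factor can be sketched as follows; the water yields and activity are illustrative figures, not the paper's tabulated values for any specific sample type:

```python
def bead_dilution_correction(water_sample_g, water_beads_g):
    """Tritium-free water from the combusted beads dilutes the sample-derived
    combustion water, so the measured OBT activity is scaled back up."""
    return (water_sample_g + water_beads_g) / water_sample_g

# e.g. 2 g dry tissue yielding 1.2 g water; 2 g polyethylene (CH2)n yields
# roughly 2 g * 18/14 ≈ 2.6 g water on complete combustion (illustrative)
measured = 10.0  # Bq/L in the combined combustion water (illustrative)
cf = bead_dilution_correction(1.2, 2.6)
print(round(measured * cf, 2))  # 31.67
```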

  1. Determination of efficiency of an aged HPGe detector for gaseous sources by self absorption correction and point source methods

    NASA Astrophysics Data System (ADS)

    Sarangapani, R.; Jose, M. T.; Srinivasan, T. K.; Venkatraman, B.

    2017-07-01

Methods for determining the efficiency of an aged high-purity germanium (HPGe) detector for gaseous sources are presented in this paper. X-ray radiography of the detector was performed to obtain detector dimensions for computational purposes. The dead-layer thickness of the HPGe detector was ascertained from experiments and Monte Carlo computations. Experimental work with standard point and liquid sources in several cylindrical geometries was undertaken to obtain energy-dependent efficiency. Monte Carlo simulations were performed to compute efficiencies for point, liquid and gaseous sources. Self-absorption correction factors were obtained using mathematical equations for volume sources and MCNP simulations. Self-absorption correction and point source methods were used to estimate the efficiency for gaseous sources. The efficiencies determined in the present work were used to estimate the activity of a cover gas sample from a fast reactor.
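A minimal one-dimensional sketch of the self-absorption-correction idea: transfer a reference-source efficiency to a gaseous source via the ratio of slab-averaged transmissions. The attenuation coefficients and source thickness are illustrative, not the MCNP-derived values from the paper:

```python
import math

def self_absorption_factor(mu_cm1, thickness_cm):
    """Mean transmission through a slab-like source of linear attenuation
    coefficient mu (simple 1-D approximation to a volume source)."""
    x = mu_cm1 * thickness_cm
    return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

def gas_efficiency(eff_ref, mu_ref, mu_gas, t_cm):
    """Transfer a measured reference-source (e.g. liquid) efficiency to a
    gaseous source by the ratio of self-absorption factors."""
    return eff_ref * self_absorption_factor(mu_gas, t_cm) / self_absorption_factor(mu_ref, t_cm)

# the gas attenuates far less than the liquid, so the efficiency rises
# toward the unattenuated value
print(round(gas_efficiency(0.05, mu_ref=0.2, mu_gas=0.001, t_cm=5.0), 4))  # 0.0789
```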

  2. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of our cultural record, and digitally repairing damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our model, which combines multiple frames into a simple correction model, includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. Because it combines multiple frames, our method takes the continuity of the input sequence into account, and experimental results show that it removes fading flicker efficiently.

  3. Evaluation of the Stress Level of Children with Idiopathic Scoliosis in relation to the Method of Treatment and Parameters of the Deformity

    PubMed Central

    Leszczewska, Justyna; Czaprowski, Dariusz; Pawłowska, Paulina; Kolwicz, Aleksandra; Kotwicki, Tomasz

    2012-01-01

    The stress level due to the existing body deformity, as well as to treatment with a corrective brace, is one of the factors influencing the quality of life of children with idiopathic scoliosis undergoing non-surgical management. The purpose of the study was to evaluate the stress level among children suffering from idiopathic scoliosis in relation to the method of treatment and the parameters of the deformity. Seventy-three patients with idiopathic scoliosis participated in the study. Fifty-two children were treated by means of physiotherapy, while 21 patients were treated with both a Cheneau corrective brace and physiotherapy. To assess the stress level related to the deformity itself and to the method of treatment with a corrective brace, the two Bad Sobernheim Stress Questionnaires (BSSQs) were applied, the BSSQ Deformity and the BSSQ Brace, respectively. PMID:22919333

  5. SU-F-T-367: Using PRIMO, a PENELOPE-Based Software, to Improve the Small Field Dosimetry of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benmakhlouf, H; Andreo, P; Brualla, L

    2016-06-15

    Purpose: To calculate output correction factors for Varian Clinac 2100iX beams for seven small-field detectors, and to use the values to determine the small-field output factors for the linacs at Karolinska University Hospital. Methods: Phase space files (psf) for square fields between 0.25 cm and 10 cm were calculated using the PENELOPE-based PRIMO software. The linac MC model was tuned by comparing PRIMO-estimated and experimentally determined depth doses and lateral dose profiles for 40 cm × 40 cm fields. The calculated psf were used as radiation sources to calculate the correction factors of IBA and PTW detectors with the code penEasy/PENELOPE. Results: The optimal tuning parameters of the MC linac model in PRIMO were 5.4 MeV incident electron energy and zero energy spread, focal spot size and beam divergence. Correction factors obtained for the liquid ion chamber (PTW-T31018) are within 1% down to 0.5 cm fields. For unshielded diodes (IBA-EFD, IBA-SFD, PTW-T60017 and PTW-T60018) the corrections are up to 2% at intermediate fields (>1 cm side), falling to −11% for fields smaller than 1 cm. The shielded diode (IBA-PFD and PTW-T60016) corrections vary with field size from 0 to −4%. Volume averaging effects are found for most detectors in 0.25 cm fields. Conclusion: Good agreement was found between correction factors based on PRIMO-generated psf and those from other publications. The calculated factors will be implemented in output factor measurements (using several detectors) in the clinic. PRIMO is a user-friendly general code capable of generating small-field psf without the user having to code the linac geometry. It can therefore be used to improve clinical dosimetry, especially in the commissioning of linear accelerators. Important dosimetry data, such as dose profiles and output factors, can be determined more accurately for a specific machine, geometry and setup by using PRIMO together with an MC model of the detector used.
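    The way such output correction factors enter clinical output-factor measurements can be sketched as follows (TRS-483-style formalism; the readings and correction value are invented):

```python
def small_field_output_factor(reading_clin, reading_msr, k_output_correction):
    """TRS-483-style field output factor: ratio of detector readings in the
    clinical small field and the machine-specific reference field, times
    the Monte Carlo output correction factor for that detector and field."""
    return (reading_clin / reading_msr) * k_output_correction

# an over-responding unshielded diode in a 0.5 cm field needs k < 1 (invented)
print(round(small_field_output_factor(0.312, 1.000, 0.95), 4))  # 0.2964
```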

  6. Topographic Correction Module at Storm (TC@Storm)

    NASA Astrophysics Data System (ADS)

    Zaksek, K.; Cotar, K.; Veljanovski, T.; Pehani, P.; Ostir, K.

    2015-04-01

    Different solar positions, in combination with terrain slope and aspect, result in different illumination of inclined surfaces. Retrieved satellite data therefore cannot be accurately transformed to spectral reflectance, which should depend only on the land cover. Topographic correction should remove this effect and enable further automatic processing of higher-level products. The topographic correction module TC@STORM was developed within the SPACE-SI automatic near-real-time image processing chain STORM. It combines a physical approach with the standard Minnaert method. The total irradiance is modelled as three components: direct (dependent on incidence angle, sun zenith angle and slope), diffuse from the sky (dependent mainly on the sky-view factor), and diffuse reflected from the terrain (dependent on the sky-view factor and albedo). For the computation of diffuse irradiance from the sky we assume anisotropic sky brightness, and we iteratively estimate a linear combination of 10 different models to provide the best results. Depending on the data resolution, we mask shadows based on radiometric (image) or geometric properties. The method was tested on RapidEye, Landsat 8, and PROBA-V data. Final results of the correction were evaluated and statistically validated across various topographic settings and land cover classes. The corrected images show great improvement in shaded areas.
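    The three-component irradiance model described above can be sketched roughly as follows (isotropic-sky simplification rather than the paper's 10-model combination; all inputs are illustrative):

```python
import math

def total_irradiance(e_direct, e_diffuse, incidence_deg, sun_zenith_deg,
                     sky_view_factor, albedo):
    """Irradiance on an inclined surface as the sum of three components:
    direct (incidence-angle dependent), diffuse from the sky (scaled by
    the sky-view factor) and diffuse reflected from surrounding terrain."""
    direct = e_direct * max(math.cos(math.radians(incidence_deg)), 0.0)
    diffuse_sky = e_diffuse * sky_view_factor
    # terrain-reflected part: global horizontal irradiance times albedo
    # times the fraction of the hemisphere occupied by terrain
    global_horizontal = e_direct * math.cos(math.radians(sun_zenith_deg)) + e_diffuse
    reflected = global_horizontal * albedo * (1.0 - sky_view_factor)
    return direct + diffuse_sky + reflected

def topographically_corrected(reflectance, e_flat, e_slope):
    # scale retrieved reflectance by the flat-to-inclined irradiance ratio
    return reflectance * e_flat / e_slope

# a fully sky-exposed horizontal pixel (incidence = sun zenith, SVF = 1)
print(round(total_irradiance(800, 100, 40, 40, 1.0, 0.2), 2))  # 712.84
```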

  7. In vivo proton dosimetry using a MOSFET detector in an anthropomorphic phantom with tissue inhomogeneity.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko

    2012-03-08

    When in vivo proton dosimetry is performed with a metal-oxide semiconductor field-effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth-output curve for a mono-energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMC(MOSFET)). The SMC(MOSFET) output value at an arbitrary point was compared with the value obtained by the conventional SMC(PPIC), which calculates proton dose distributions by using the depth-dose curve determined by a parallel-plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMC(PPIC) results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector.
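    The correction scheme, a ratio of the two SMC predictions applied to the raw MOSFET reading, can be sketched as follows (all numbers invented):

```python
def mosfet_correction_factor(smc_ppic_dose, smc_mosfet_output):
    """LET correction factor at a point: ratio of the conventional SMC dose
    (based on the PPIC depth-dose curve) to the SMC-predicted MOSFET
    output at the same point."""
    return smc_ppic_dose / smc_mosfet_output

def corrected_mosfet_dose(raw_mosfet_dose, smc_ppic_dose, smc_mosfet_output):
    # corrected dose = correction factor x raw MOSFET dose
    return raw_mosfet_dose * mosfet_correction_factor(smc_ppic_dose, smc_mosfet_output)

# near the Bragg peak the MOSFET under-responds, so the factor exceeds 1 (invented)
print(corrected_mosfet_dose(1.50, 2.00, 1.60))  # 1.875
```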

  8. Fringe-period selection for a multifrequency fringe-projection phase unwrapping method

    NASA Astrophysics Data System (ADS)

    Zhang, Chunwei; Zhao, Hong; Jiang, Kejian

    2016-08-01

    The multi-frequency fringe-projection phase unwrapping method (MFPPUM) is a typical phase unwrapping algorithm for fringe projection profilometry. It has the advantage of accomplishing phase unwrapping correctly even in the presence of surface discontinuities. However, if the fringe frequency ratio of the MFPPUM is too large, fringe order error (FOE) may be triggered, and FOE results in phase unwrapping error. It is preferable to keep the phase unwrapping correct while using the fewest sets of lower-frequency fringe patterns. To achieve this goal, this paper defines a parameter called fringe order inaccuracy (FOI), theoretically analyzes the dominant factors that may induce FOE, proposes a method to optimally select the fringe periods for the MFPPUM with the aid of FOI, and reports experiments that investigate the impact of the dominant factors on phase unwrapping and demonstrate the validity of the proposed method. Some novel phenomena are revealed by these experiments. The proposed method helps to optimally select the fringe periods and to detect phase unwrapping error for the MFPPUM.
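    For context, the core fringe-order computation of multi-frequency temporal phase unwrapping (the step where FOE arises when the frequency ratio is too large) can be sketched as:

```python
import math

def unwrap_high_freq(phi_wrapped_high, phi_unwrapped_low, freq_ratio):
    """Temporal unwrapping step: the fringe order k of the high-frequency
    wrapped phase is estimated from the scaled low-frequency unwrapped
    phase; a rounding error in k is exactly a fringe order error (FOE)."""
    k = round((freq_ratio * phi_unwrapped_low - phi_wrapped_high) / (2 * math.pi))
    return phi_wrapped_high + 2 * math.pi * k

# true high-frequency phase of 25 rad, frequency ratio 8 (illustrative)
true_phi = 25.0
wrapped = math.atan2(math.sin(true_phi), math.cos(true_phi))
print(round(unwrap_high_freq(wrapped, true_phi / 8, 8), 6))  # 25.0
```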

  9. Robust finger vein ROI localization based on flexible segmentation.

    PubMed

    Lu, Yu; Xie, Shan Juan; Yoon, Sook; Yang, Jucheng; Park, Dong Sun

    2013-10-24

    Finger veins have proven to be an effective biometric for personal identification in recent years. However, finger vein images are easily affected by factors such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All of these may lead to inaccurate region of interest (ROI) definition and so degrade the performance of a finger vein identification system. To address this problem, we propose a finger vein ROI localization method that is highly effective and robust against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correctly calculated orientation support each other to produce higher accuracy in localizing ROIs. Extensive experiments on the finger vein image database MMCBNU_6000 verify the robustness of the proposed method, which achieves a segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms per acquired image, which satisfies the requirement of a real-time finger vein identification system.

  11. Finite-nuclear-size contribution to the g factor of a bound electron: Higher-order effects

    NASA Astrophysics Data System (ADS)

    Karshenboim, Savely G.; Ivanov, Vladimir G.

    2018-02-01

    A precision comparison of theory and experiments on the g factor of an electron bound in a hydrogenlike ion with a spinless nucleus requires a detailed account of finite-nuclear-size contributions. While the relativistic corrections to the leading finite-size contribution are known, the higher-order effects need additional consideration. Two results are presented in the paper: one on the anomalous-magnetic-moment correction to the finite-size effects, and the other on higher-order effects in Zα mRN. We also present a method to relate the contributions to the g factor of a bound electron in a hydrogenlike atom to its energy within a nonrelativistic approach.

  12. Effectiveness and confounding factors of penetrating astigmatic keratotomy in clinical practice

    PubMed Central

    Yen, Chu-Yu; Tseng, Gow-Lieng

    2018-01-01

    Abstract Rationale: Penetrating astigmatic keratotomy (penetrating AK) is a well-known method to correct corneal astigmatism but is rarely performed nowadays. This article re-evaluates the clinical effectiveness and confounding factors of penetrating AK. Patient concerns: Penetrating AK has been introduced as an alternative operation for astigmatism correction, and is thought to have the potential advantage of being more affordable and easier to perform. The purpose of our study is to evaluate the effectiveness and confounding factors of penetrating AK. Diagnoses: The charts of 95 patients with corneal astigmatism (range: 0.75–3.25 diopters [D]) who received penetrating AK from January 2014 to December 2016 were collected. Corneal astigmatism was measured by an autokeratometer (Topcon KR8100PA topographer-autorefractor), and repeated with a manual keratometer in low-reproducibility cases. Interventions: All patients received penetrating AK by an experienced ophthalmologist (Dr. Gow-Lieng Tseng, MD, PhD) in the operating room. Of these, 66 patients received penetrating AK with simultaneous phacoemulsification (group A), whereas 29 patients received penetrating AK at least 3 months after phacoemulsification (group B). After excluding patients who underwent combined procedures or were lost to follow-up, 79 patients remained for analysis. The outcome was evaluated by net correction, the difference between preoperative corneal astigmatism (PCA) and residual corneal astigmatism (RCA). Two-sample t tests and the Pearson test were used for the effectiveness evaluation; multivariate linear regression was used to analyze confounding factors. Outcomes: The mean preoperative and postoperative refractive cylinders were 1.97 ± 0.77 and 1.08 ± 0.64 D, respectively, in group A, and 2.62 ± 1.05 and 1.51 ± 0.89 D in group B. There was no statistically significant difference in net correction between the two groups (0.9 ± 0.66 vs. 1.1 ± 0.69, P = .214). Higher PCA was associated with higher net correction in both group A (P = .002) and group B (P = .019). Compound myopic astigmatism yielded less net correction than other types only in group A (P = .031). Lessons: Penetrating AK is an accessible, affordable, and effective way to correct corneal astigmatism. The results of this procedure are comparable to modern methods in patients with low to moderate corneal astigmatism. PMID:29369200

  13. SU-E-T-408: Determination of KQ,Q0-Factors From Water and Graphite Calorimetry in a 60 MeV Proton Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossomme, S; Renaud, J; Sarfehnia, A

    2014-06-01

    Purpose: To reduce the uncertainty of the beam quality correction factor k Q,Q0 for scattered proton beams (SPB). This factor is used in dosimetry protocols to determine absorbed dose to water with ionization chambers. For the Roos plane-parallel ionization chambers (RPPICs), the IAEA TRS-398 protocol estimates the k Q,Q0 factor to be 1.004 (for a beam quality Rres = 2 g·cm⁻²), with an uncertainty of 2.1%. Methods: A graphite calorimeter (GCal), a water calorimeter (WCal) and RPPICs were exposed, in a single experiment, to a 60 MeV non-modulated SPB. The RPPICs were calibrated in terms of absorbed dose to water in a 20 MeV electron beam, with the calibration coefficient traceable to NPL's absorbed dose standards. Chamber measurements were corrected for environmental conditions, recombination and polarity. The WCal corrections include heat loss, heat defect and vessel perturbation; the GCal corrections include heat loss and absorbed-dose conversion. Except for the heat loss correction and its uncertainty in the WCal system, all major corrections were included in the analysis. Other minor corrections, such as beam profile non-uniformity, are still to be evaluated. Experimental k Q,Q0 factors were derived by comparing the results obtained with both calorimeters and ionometry. Results: The absorbed dose to water from the two calorimeters agreed within 1.3%, with an uncertainty of 1.2%. The k Q,Q0 factor for an RPPIC was found to be 0.998 and 1.011, with standard uncertainties of 1.4% and 0.9%, when the dose is based on the GCal and the WCal, respectively. Conclusion: The results suggest the possibility of determining k Q,Q0 values for PPICs in SPB with a lower uncertainty than specified in TRS-398, thereby helping to reduce the uncertainty of absorbed dose to water. The agreement between the calorimeters confirms that either the GCal or the WCal can be used as a primary standard in SPB. Because of the dose conversion, the use of the GCal may lead to a slightly higher uncertainty, but it is, at present, considerably easier to operate.
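    The ionometry-versus-calorimetry comparison behind an experimental k Q,Q0 can be sketched as follows (TRS-398 formalism; the reading and calibration coefficient below are invented):

```python
def k_q_factor(dose_to_water_cal, chamber_reading, n_dw_q0):
    """Experimental beam quality correction factor (TRS-398 formalism):
    k_Q,Q0 = D_w(calorimeter) / (M_Q * N_D,w,Q0), where M_Q is the fully
    corrected chamber reading and N_D,w,Q0 its calibration coefficient."""
    return dose_to_water_cal / (chamber_reading * n_dw_q0)

# invented values: 2.02 Gy from the calorimeter, 25 nC reading, 8e7 Gy/C
print(round(k_q_factor(2.02, 25.0e-9, 8.0e7), 4))  # 1.01
```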

  14. Summing coincidence correction for γ-ray measurements using the HPGe detector with a low background shielding system

    NASA Astrophysics Data System (ADS)

    He, L.-C.; Diao, L.-J.; Sun, B.-H.; Zhu, L.-H.; Zhao, J.-W.; Wang, M.; Wang, K.

    2018-02-01

    A Monte Carlo method based on the GEANT4 toolkit has been developed to correct the full-energy peak (FEP) efficiencies of a high purity germanium (HPGe) detector equipped with a low background shielding system, and it is evaluated numerically using summing peaks. It is found that the FEP efficiencies for 60Co, 133Ba and 152Eu can be improved by up to 18% by taking the calculated true summing coincidence factors (TSCFs) into account. Counts of summing coincidence γ peaks in the spectrum of 152Eu can be well reproduced using the corrected efficiency curve to within an accuracy of 3%.
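    Applying such a TSCF to an apparent efficiency is a simple multiplicative correction (the efficiency and factor below are illustrative):

```python
def corrected_fep_efficiency(apparent_efficiency, tscf):
    """Apply a Monte Carlo-calculated true summing coincidence factor
    (TSCF) to the apparent full-energy-peak efficiency measured with a
    cascading source in close geometry."""
    return apparent_efficiency * tscf

# 15% summing-out losses for a 60Co line in close geometry (invented)
print(round(corrected_fep_efficiency(0.020, 1.15), 6))  # 0.023
```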

  15. On the Performance of T2∗ Correction Methods for Quantification of Hepatic Fat Content

    PubMed Central

    Reeder, Scott B.; Bice, Emily K.; Yu, Huanzhou; Hernando, Diego; Pineda, Angel R.

    2014-01-01

    Nonalcoholic fatty liver disease is the most prevalent chronic liver disease in Western societies. MRI can quantify liver fat, the hallmark feature of nonalcoholic fatty liver disease, so long as multiple confounding factors including T2∗ decay are addressed. Recently developed MRI methods that correct for T2∗ to improve the accuracy of fat quantification either assume a common T2∗ (single-T2∗) for better stability and noise performance or independently estimate the T2∗ for water and fat (dual-T2∗) for reduced bias, but with a noise performance penalty. In this study, the tradeoff between bias and variance for different T2∗ correction methods is analyzed using the Cramér-Rao bound analysis for biased estimators and is validated using Monte Carlo experiments. A noise performance metric for estimation of fat fraction is proposed. Cramér-Rao bound analysis for biased estimators was used to compute the metric at different echo combinations. Optimization was performed for six echoes and typical T2∗ values. This analysis showed that all methods have better noise performance with very short first echo times and an echo spacing of ∼π/2 for single-T2∗ correction, and ∼2π/3 for dual-T2∗ correction. Interestingly, when an echo spacing and first echo shift of ∼π/2 are used, methods without T2∗ correction have less than 5% bias in the estimates of fat fraction. PMID:21661045

  16. Work characteristics as predictors of correctional supervisors’ health outcomes

    PubMed Central

    Buden, Jennifer C.; Dugan, Alicia G.; Namazi, Sara; Huedo-Medina, Tania B.; Cherniack, Martin G.; Faghri, Pouran D.

    2016-01-01

    Objective This study examined associations among health behaviors, psychosocial work factors, and health status. Methods Correctional supervisors (n=157) completed a survey that assessed interpersonal and organizational views on health. Chi-square and logistic regressions were used to examine relationships among variables. Results Respondents had a higher prevalence of obesity and comorbidities compared to the general U.S. adult population. Burnout was significantly associated with nutrition, physical activity, sleep duration, sleep quality, diabetes, and anxiety/depression. Job meaning, job satisfaction and workplace social support may predict health behaviors and outcomes. Conclusions Correctional supervisors are understudied and have poor overall health status. Improving health behaviors of middle-management employees may have a beneficial effect on the health of the entire workforce. This paper demonstrates the importance of psychosocial work factors that may contribute to health behaviors and outcomes. PMID:27483335

  17. 2001 Bhuj, India, earthquake engineering seismoscope recordings and Eastern North America ground-motion attenuation relations

    USGS Publications Warehouse

    Cramer, C.H.; Kumar, A.

    2003-01-01

    Engineering seismoscope data collected at distances less than 300 km for the M 7.7 Bhuj, India, mainshock are compatible with ground-motion attenuation in eastern North America (ENA). The mainshock ground-motion data have been corrected to a common geological site condition using the factors of Joyner and Boore (2000) and a classification scheme of Quaternary or Tertiary sediments or rock. We then compare these data to ENA ground-motion attenuation relations. Despite uncertainties in recording method, geological site corrections, common tectonic setting, and the amount of regional seismic attenuation, the corrected Bhuj dataset agrees with the collective predictions by ENA ground-motion attenuation relations within a factor of 2. This level of agreement is within the dataset uncertainties and the normal variance for recorded earthquake ground motions.

  18. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, correction factors such as the electron-loss correction factor (ke) and the photon scattering correction factor (ksc) are needed. The ke factor corrects for charge lost from the collecting volume, and the ksc factor corrects for photons scattered into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors are calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
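    A sketch of how ke and ksc enter a free-air-chamber air-kerma evaluation, with all other corrections omitted for brevity (the charge and air mass are invented; ke and ksc are the values quoted above; W/e = 33.97 J/C):

```python
def air_kerma(charge_c, air_mass_kg, k_e=1.0704, k_sc=0.9982, w_over_e=33.97):
    """Simplified free-air-chamber air kerma: K = (Q/m) * (W/e) * ke * ksc,
    using the FAC-IR-300 factors quoted in the abstract; other corrections
    (attenuation, humidity, ...) are omitted for brevity."""
    return (charge_c / air_mass_kg) * w_over_e * k_e * k_sc

# invented measurement: 1 nC collected in 0.1 g of air
print(round(air_kerma(1.0e-9, 1.0e-4), 6))  # 0.000363 (Gy)
```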

  19. A study of ionospheric grid modification technique for BDS/GPS receiver

    NASA Astrophysics Data System (ADS)

    Liu, Xuelin; Li, Meina; Zhang, Lei

    2017-07-01

    For a single-frequency GPS receiver, ionospheric delay is an important factor affecting positioning performance. There are many ionospheric correction methods; common models include the Bent model, the IRI model, the Klobuchar model and the NeQuick model. The US Global Positioning System (GPS) uses the Klobuchar coefficients transmitted in the satellite signal to correct the ionospheric delay error for a single-frequency GPS receiver, but this model can only remove about 50% of the ionospheric error in the mid-latitudes. In the BeiDou system, the accuracy of the delay correction is higher. Therefore, this paper proposes a method that uses BeiDou (BD) grid information to correct the GPS ionospheric delay in a BDS/GPS compatible positioning receiver. The principle of the ionospheric grid algorithm is introduced in detail, and the positioning accuracy of the GPS system and the BDS/GPS compatible positioning system is compared and analyzed using real measured data. The results show that the method can effectively improve the positioning accuracy of the receiver in a concise way.
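    The grid correction itself typically reduces to bilinear interpolation of the vertical delays broadcast at the surrounding grid points (SBAS/BDS-style; the corner delays below are invented):

```python
def bilinear_delay(x, y, d00, d10, d01, d11):
    """Vertical ionospheric delay at a pierce point located at fractional
    position (x, y) inside a grid cell, interpolated from the four corner
    delays dij (metres) broadcast in the grid message."""
    return (d00 * (1 - x) * (1 - y) + d10 * x * (1 - y)
            + d01 * (1 - x) * y + d11 * x * y)

# a pierce point at the cell centre averages the four corner delays
print(bilinear_delay(0.5, 0.5, 2.0, 4.0, 4.0, 6.0))  # 4.0
```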

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods, applied specifically to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014); here we extend that method for the specific purpose of determining aerosol sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Two-dimensional PMF can then effectively perform three-dimensional factorization on the three-dimensional TAG mass spectral data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing than conventional single-compound identification and integration. In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method can achieve molecular-level resolution on other bulk aerosol components commonly observed by the AMS.
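    The binning step that builds one row of the PMF input matrix can be sketched as follows (synthetic chromatogram; the bin count and m/z dimension are arbitrary):

```python
import numpy as np

def bin_chromatogram(retention_times, spectra, n_bins, t_min, t_max):
    """Sum the mass spectra falling in each evenly spaced retention-time
    bin, then flatten to a single row of the PMF input matrix
    (rows = samples, columns = bins x m/z channels)."""
    edges = np.linspace(t_min, t_max, n_bins + 1)
    idx = np.clip(np.digitize(retention_times, edges) - 1, 0, n_bins - 1)
    binned = np.zeros((n_bins, spectra.shape[1]))
    np.add.at(binned, idx, spectra)   # unbuffered accumulation per bin
    return binned.ravel()

rng = np.random.default_rng(0)
rt = np.sort(rng.uniform(0.0, 20.0, 500))   # scan retention times (min)
ms = rng.random((500, 100))                 # 100 m/z channels per scan
row = bin_chromatogram(rt, ms, n_bins=40, t_min=0.0, t_max=20.0)
print(row.shape)  # (4000,)
```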

  1. Shutterless non-uniformity correction for the long-term stability of an uncooled long-wave infrared camera

    NASA Astrophysics Data System (ADS)

    Liu, Chengwei; Sui, Xiubao; Gu, Guohua; Chen, Qian

    2018-02-01

    For an uncooled long-wave infrared (LWIR) camera, the infrared (IR) irradiation received by the focal plane array (FPA) is a crucial factor affecting image quality. Ambient temperature fluctuation, as well as system power consumption, can change the FPA temperature and the radiation characteristics inside the IR camera, further degrading imaging performance. In this paper, we present a novel shutterless non-uniformity correction method to compensate for non-uniformity caused by variation of the ambient temperature. Our method combines a calibration-based method with the properties of a scene-based method to obtain correction parameters at different ambient temperatures, so that camera performance is less influenced by ambient temperature fluctuation or system power consumption. The calibration is carried out in a temperature chamber with slowly changing ambient temperature, using a black body as a uniform radiation source. A sufficient number of uniform images are captured and the gain coefficients are calculated during this period. In practical application, the offset parameters are then calculated via the least squares method from the gain coefficients, the captured uniform images, and the actual scene, yielding a corrected output through the gain coefficients and offset parameters. The performance of our proposed method is evaluated on realistic IR images and compared with two existing methods. The images used in the experiments were obtained with a 384 × 288 pixel uncooled LWIR camera. Results show that our method adaptively updates the correction parameters as the target scene changes and is more stable under temperature fluctuation than the other two methods.
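    A minimal gain/offset sketch of the underlying correction model (synthetic per-pixel gains and offsets; the paper's scene-based offset update is more elaborate than this two-point calibration):

```python
import numpy as np

def gain_offset_from_calibration(frames_low, frames_high):
    """Per-pixel gain and offset from uniform black-body frames captured
    at two flux levels, mapping every pixel onto the frame-mean response."""
    low, high = frames_low.mean(axis=0), frames_high.mean(axis=0)
    gain = (high.mean() - low.mean()) / (high - low)
    offset = low.mean() - gain * low
    return gain, offset

def nuc_correct(raw, gain, offset):
    # corrected output = gain * raw + offset, as in two-point NUC
    return gain * raw + offset

# synthetic detector with per-pixel gain/offset non-uniformity
rng = np.random.default_rng(1)
g_true = rng.normal(1.0, 0.05, (4, 4))
o_true = rng.normal(0.0, 2.0, (4, 4))
scene = lambda flux: g_true * flux + o_true
gain, offset = gain_offset_from_calibration(
    np.stack([scene(100.0)] * 8), np.stack([scene(200.0)] * 8))
corrected = nuc_correct(scene(150.0), gain, offset)
print(float(corrected.std()) < 1e-9)  # True: fixed-pattern noise removed
```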

  2. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level, with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power by combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  3. Improving the accuracy of CT dimensional metrology by a novel beam hardening correction method

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Li, Lei; Zhang, Feng; Xi, Xiaoqi; Deng, Lin; Yan, Bin

    2015-01-01

    The powerful nondestructive characteristics of computed tomography (CT) are attracting growing research interest in its use for dimensional metrology, where it offers a practical alternative to common measurement methods. However, inaccuracy and uncertainty, arising from many factors among which the beam hardening (BH) effect plays a vital role, severely limit the further utilization of CT for dimensional metrology. This paper focuses on eliminating the influence of the BH effect on the accuracy of CT dimensional metrology. To correct the BH effect, a novel exponential correction model is proposed. The parameters of the model are determined by minimizing the gray entropy of the reconstructed volume. In order to maintain the consistency and contrast of the corrected volume, a penalty term is added to the cost function, enabling more accurate measurement results to be obtained by the simple global threshold method. The proposed method is efficient, and especially suited to the case where there is a large difference in gray value between material and background. Different spheres with known diameters are used to verify the accuracy of dimensional measurement. Both simulation and real experimental results demonstrate the improvement in measurement precision. Moreover, a more complex workpiece is also tested to show that the proposed method is generally applicable.
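The core idea of picking correction parameters by gray-entropy minimization can be illustrated in miniature. This sketch is a stand-in only: it works directly in the image domain with a known cupping field, and does not reproduce the paper's exponential model or penalty term.

```python
import numpy as np

def gray_entropy(img, bins=64):
    """Shannon entropy of the image gray-level histogram; a sharply
    bimodal (well-corrected) image has low entropy."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log(p))

def best_parameter(img, correct, candidates):
    """Grid-search the correction parameter minimizing gray entropy
    of the corrected image (stand-in for the paper's cost function)."""
    entropies = [gray_entropy(correct(img, c)) for c in candidates]
    return candidates[int(np.argmin(entropies))]
```

On a two-level disk phantom with a synthetic cupping distortion, the entropy minimum recovers the distortion strength.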

  4. Cause-specific mortality time series analysis: a general method to detect and correct for abrupt data production changes

    PubMed Central

    2011-01-01

    Background Monitoring the time course of mortality by cause is a key public health issue. However, several mortality data production changes may affect cause-specific time trends, thus altering the interpretation. This paper proposes a statistical method that detects abrupt changes ("jumps") and estimates correction factors that may be used for further analysis. Methods The method was applied to a subset of the AMIEHS (Avoidable Mortality in the European Union, toward better Indicators for the Effectiveness of Health Systems) project mortality database and considered for six European countries and 13 selected causes of death. For each country and cause of death, an automated jump detection method called Polydect was applied to the log mortality rate time series. The plausibility of a data production change associated with each detected jump was evaluated through literature search or feedback obtained from the national data producers. For each plausible jump position, the statistical significance of the between-age and between-gender jump amplitude heterogeneity was evaluated by means of a generalized additive regression model, and correction factors were deduced from the results. Results Forty-nine jumps were detected by the Polydect method from 1970 to 2005. Most of the detected jumps were found to be plausible. The age- and gender-specific amplitudes of the jumps were estimated when they were statistically heterogeneous, and they showed greater by-age heterogeneity than by-gender heterogeneity. Conclusion The method presented in this paper was successfully applied to a large set of causes of death and countries. The method appears to be an alternative to bridge coding methods when the latter are not systematically implemented because they are time- and resource-consuming. PMID:21929756
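The jump-plus-correction-factor idea can be sketched with a minimal stand-in for Polydect (which is not reproduced here): scan for the split point with the largest level shift in the log-rate series, then deduce a multiplicative correction for the post-jump rates.

```python
import numpy as np

def detect_jump(log_rates, min_seg=3):
    """Return (position, amplitude) of the single most abrupt level
    shift: the split point maximizing the between-segment mean
    difference of the log mortality rates (stand-in for Polydect)."""
    x = np.asarray(log_rates, float)
    n = len(x)
    best, pos = 0.0, None
    for k in range(min_seg, n - min_seg + 1):
        amp = x[k:].mean() - x[:k].mean()
        if abs(amp) > abs(best):
            best, pos = amp, k
    return pos, best

def correction_factor(amplitude):
    """Multiplicative factor bringing post-jump rates back onto the
    pre-jump coding: divide rates after the jump by exp(amplitude)."""
    return np.exp(-amplitude)
```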

  5. Consistency analysis and correction of ground-based radar observations using space-borne radar

    NASA Astrophysics Data System (ADS)

    Zhang, Shuai; Zhu, Yiqing; Wang, Zhenhui; Wang, Yadong

    2018-04-01

    The lack of an accurate determination of the radar constant can introduce biases in ground-based radar (GR) reflectivity factor data, and lead to poor consistency of radar observations. The geometry-matching method was applied to carry out spatial matching of radar data from the Precipitation Radar (PR) on board the Tropical Rainfall Measuring Mission (TRMM) satellite to observations from a GR deployed at Nanjing, China, in their effective sampling volume, with 250 match-up cases obtained from January 2008 to October 2013. The consistency of the GR was evaluated with reference to the TRMM PR, whose stability is established. The results show that the below-bright-band-height data of the Nanjing radar can be split into three periods: Period I from January 2008 to March 2010, Period II from March 2010 to May 2013, and Period III from May 2013 to October 2013. There are distinct differences in overall reflectivity factor between the three periods, and the overall reflectivity factor in Period II is more than 3 dB lower than in Periods I and III, although the overall reflectivity within each period remains relatively stable. Further investigation shows that in Period II the difference between the GR and PR observations changed with echo intensity. A best-fit relation between the two radar reflectivity factors provides a linear correction that is applied to the reflectivity of the Nanjing radar, and which is effective in improving its consistency. Rain-gauge data were used to verify the correction, and the estimated precipitation based on the corrected GR reflectivity data was closer to the rain-gauge observations than that without correction.
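The linear correction from matched GR–PR reflectivity pairs amounts to an ordinary least-squares fit. A minimal sketch, assuming matched samples on a common dBZ scale (the geometry matching itself is not shown):

```python
import numpy as np

def fit_gr_correction(z_gr, z_pr):
    """Least-squares line z_pr ~ a * z_gr + b from matched reflectivity
    pairs (dBZ); applying it aligns the GR with the stable PR reference."""
    a, b = np.polyfit(z_gr, z_pr, 1)
    return a, b

def apply_correction(z_gr, a, b):
    """Correct GR reflectivity using the fitted relation."""
    return a * np.asarray(z_gr) + b
```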

  6. A new phase correction method in NMR imaging based on autocorrelation and histogram analysis.

    PubMed

    Ahn, C B; Cho, Z H

    1987-01-01

    A new statistical approach to phase correction in NMR imaging is proposed. The proposed scheme consists of first- and zero-order phase corrections, each performed by inverse multiplication of the estimated phase error. The first-order error is estimated by the phase of the autocorrelation calculated from the complex-valued, phase-distorted image, while the zero-order correction factor is extracted from the histogram of the phase distribution of the first-order corrected image. Since all the correction procedures are performed in the spatial domain after completion of data acquisition, no prior adjustments or additional measurements are required. The algorithm is applicable to most phase-sensitive NMR imaging techniques, including inversion recovery imaging, quadrature-modulated imaging, spectroscopic imaging, and flow imaging. Some experimental results with inversion recovery imaging as well as quadrature spectroscopic imaging are shown to demonstrate the usefulness of the algorithm.
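A one-dimensional sketch of the scheme, under simplifying assumptions: the first-order slope comes from the lag-1 autocorrelation, and the zero-order term is taken from the mean phase rather than the paper's histogram peak.

```python
import numpy as np

def estimate_linear_phase(row):
    """First-order phase slope (radians per sample) from the phase of
    the lag-1 autocorrelation of a complex, phase-distorted profile."""
    ac = np.sum(row[1:] * np.conj(row[:-1]))
    return np.angle(ac)

def phase_correct(row):
    """First-order correction by inverse multiplication, then zero-order
    correction from the residual mean phase (a histogram peak in the
    paper; a circular mean here for brevity)."""
    n = np.arange(row.size)
    slope = estimate_linear_phase(row)
    first = row * np.exp(-1j * slope * n)
    zero = np.angle(np.sum(first))
    return first * np.exp(-1j * zero)
```

For a magnitude profile distorted by a linear-plus-constant phase, the corrected result is purely real.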

  7. The Kerala Decentration Meter. A new method and device for fitting the optical centres of spectacle lenses in the visual axis.

    PubMed

    Joseph, T K; Kartha, C P

    1982-01-01

    Centring of spectacle lenses is a much neglected field of ophthalmology. The prismatic effect caused by wrong centring imposes a phoria on the eye muscles, which in turn causes persistent eyestrain. The theory of the visual axis, optical axis and angle alpha is discussed. Using new methods, the visual axes and optical axes of 35 subjects were measured. The results were computed for facial asymmetry, parallax error, angle alpha and also decentration for near vision. The results show that decentration is required on account of each of these factors. Considerable correction is needed in the vertical direction, a fact much neglected nowadays; vertical decentration results in vertical phoria, which is more symptomatic than horizontal phorias. Angle alpha was computed for each of these patients. A new device called 'The Kerala Decentration Meter', using the pinhole method for measuring the degree of decentration from the datum centre of the frame, and capable of correcting all the factors described above, is shown with diagrams.

  8. On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation

    NASA Astrophysics Data System (ADS)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-02-01

    Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, where the T1* are the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange.
Even in systems where metabolite concentrations change, accurate saturation corrections are possible without much loss in SNR.
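The single-exponential saturation correction and the dual-angle T1 estimate it relies on can be sketched for the exchange-free case (the standard spoiled steady-state signal model; the multi-site exchange simulations of the paper are not reproduced):

```python
import numpy as np

def steady_state_signal(m0, alpha, tr, t1):
    """Spoiled steady-state signal under a single-exponential T1 model
    (no chemical exchange): m0*sin(a)*(1-E)/(1-E*cos(a)), E=exp(-TR/T1)."""
    e = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1 - e) / (1 - e * np.cos(alpha))

def t1_dual_angle(s1, s2, a1, a2, tr):
    """Dual-angle method: solve the signal-ratio equation for E and
    hence T1 from measurements at two flip angles."""
    r = s1 / s2
    e = (r * np.sin(a2) - np.sin(a1)) / (
        r * np.sin(a2) * np.cos(a1) - np.sin(a1) * np.cos(a2))
    return -tr / np.log(e)

def saturation_factor(alpha, tr, t1):
    """Divide the measured signal by this factor to undo partial
    saturation, recovering the fully relaxed signal m0*sin(alpha)."""
    e = np.exp(-tr / t1)
    return (1 - e) / (1 - e * np.cos(alpha))
```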

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lessard, Francois; Archambault, Louis; Plamondon, Mathieu

    Purpose: Photon dosimetry in the kilovolt (kV) energy range represents a major challenge for diagnostic and interventional radiology and superficial therapy. Plastic scintillation detectors (PSDs) are potentially good candidates for this task. This study proposes a simple way to obtain accurate correction factors to compensate for the response of PSDs to photon energies between 80 and 150 kVp. The performance of PSDs is also investigated to determine their potential usefulness in the diagnostic energy range. Methods: A 1-mm-diameter, 10-mm-long PSD was irradiated by a Therapax SXT 150 unit using five different beam qualities, with tube potentials ranging from 80 to 150 kVp and filtration ranging from 0.8 mmAl to 0.2 mmAl + 1.0 mmCu. The light emitted by the detector was collected using an 8-m-long optical fiber and a polychromatic photodiode, which converted the scintillation photons to an electrical current. The PSD response was compared with the reference free air dose rate measured with a calibrated Farmer NE2571 ionization chamber. PSD measurements were corrected using spectra-weighted corrections, accounting for mass energy-absorption coefficient differences between the sensitive volumes of the ionization chamber and the PSD, as suggested by large cavity theory (LCT). Beam spectra were obtained from x-ray simulation software and validated experimentally using a CdTe spectrometer. Correction factors were also obtained using Monte Carlo (MC) simulations. Percent depth dose (PDD) measurements were compensated for beam hardening using the LCT correction method. These PDD measurements were compared with uncorrected PSD data, PDD measurements obtained using Gafchromic films, Monte Carlo simulations, and previous data. Results: For each beam quality used, the authors observed an increase of the energy response with effective energy when no correction was applied to the PSD response.
Using the LCT correction, the PSD response was almost energy independent, with a residual 2.1% coefficient of variation (COV) over the 80-150-kVp energy range. Monte Carlo corrections reduced the COV to 1.4% over this energy range. All PDD measurements were in good agreement with one another except for the uncorrected PSD data, in which an over-response was observed with depth (13% at 10 cm with a 100 kVp beam), showing that beam hardening had a non-negligible effect on the PSD response. A correction based on LCT compensated very well for this effect, reducing the over-response to 3%. Conclusion: In the diagnostic energy range, PSDs show high-energy dependence, which can be corrected using spectra-weighted mass energy-absorption coefficients, showing no considerable sign of quenching between these energies. Correction factors obtained by Monte Carlo simulations confirm that the approximations made by LCT corrections are valid. Thus, PSDs could be useful for real-time dosimetry in radiology applications.
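The spectra-weighted LCT correction reduces to a ratio of spectrum-weighted mass energy-absorption coefficients. A minimal sketch, assuming the spectrum and both coefficient sets are sampled on a common energy grid (all values below are placeholders, not tabulated data):

```python
import numpy as np

def lct_correction_factor(spectrum, mu_en_ref, mu_en_scint):
    """Large-cavity-theory correction: ratio of the spectrum-weighted
    mass energy-absorption coefficient of the reference medium (here
    the chamber's air) to that of the scintillator, so that
    k * PSD reading tracks the reference free-air dose."""
    w = np.asarray(spectrum, float)
    ref = np.sum(w * mu_en_ref) / w.sum()
    scint = np.sum(w * mu_en_scint) / w.sum()
    return ref / scint
```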

  10. SU-F-T-70: A High Dose Rate Total Skin Electron Irradiation Technique with A Specific Inter-Film Variation Correction Method for Very Large Electron Beam Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rosenfield, J; Dong, X

    2016-06-15

    Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Due to inter-film uniformity variations, dosimetry of a very large, very low-energy electron beam is challenging. This work provides a method to improve the accuracy of flatness and symmetry measurements for the very large, low-energy treatment field used in dual-beam RTSEI. Methods: RTSEI is delivered using dual fields at gantry angles of 270 ± 20 degrees to cover the upper and lower halves of the patient body with acceptable beam uniformity. The field size is on the order of 230 cm in vertical height and 120 cm in horizontal width, and the beam energy is a degraded 6 MeV (6 mm of PMMA spoiler). We utilized parallel plate chambers, Gafchromic films and OSLDs as measuring devices for absolute dose, B-factor, stationary and rotational percent depth dose, and beam uniformity. To reduce inter-film dosimetric variation, we introduced a new correction method for analyzing beam uniformity. This correction method uses image processing techniques that combine film values before and after irradiation to compensate for dose-response differences among films. Results: Stationary and rotational depth-dose measurements demonstrated that Rp is 2 cm for rotational delivery and that the maximum dose is shifted toward the surface (3 mm). The phantom dosimetry showed that dose non-uniformity was reduced to 3.01% for vertical flatness and 2.35% for horizontal flatness after correction, thus achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched the readings from OSLDs. Conclusion: The proposed correction method for Gafchromic films will be a useful tool for correcting inter-film dosimetric variation in future clinical film dosimetry verification in very large fields, allowing the optimization of other parameters.
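One common way to combine pre- and post-irradiation film values so that film-to-film baseline differences cancel is the net optical density. The abstract does not specify its image-processing steps; this is a generic sketch of the pre/post idea, not the authors' method:

```python
import numpy as np

def net_optical_density(pre_scan, post_scan):
    """Per-pixel net optical density from scans taken before and after
    irradiation; referencing each film to its own pre-scan removes
    film-to-film baseline (inter-film) variation."""
    pre = np.asarray(pre_scan, float)
    post = np.asarray(post_scan, float)
    return np.log10(pre / post)
```

Two films with different baseline transmission but the same relative response yield identical net optical densities.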

  11. Energy response corrections for profile measurements using a combination of different detector types.

    PubMed

    Wegener, Sonja; Sauer, Otto A

    2018-02-01

    Different detector properties will heavily affect the results of off-axis measurements outside of radiation fields, where a different energy spectrum is encountered. While a diode detector would show a high spatial resolution, it contains high atomic number elements, which lead to perturbations and energy-dependent response. An ionization chamber, on the other hand, has a much smaller energy dependence, but shows dose averaging over its larger active volume. We suggest a way to obtain spatial energy response corrections of a detector independent of its volume effect for profiles of arbitrary fields by using a combination of two detectors. Measurements were performed at an Elekta Versa HD accelerator equipped with an Agility MLC. Dose profiles of fields between 10 × 4 cm² and 0.6 × 0.6 cm² were recorded several times, first with different small-field detectors (unshielded diode 60012 and stereotactic field detector SFD, microDiamond, EDGE, and PinPoint 31006) and then with a larger volume ionization chamber Semiflex 31010 for different photon beam qualities of 6, 10, and 18 MV. Correction factors for the small-field detectors were obtained from the readings of the respective detector and the ionization chamber using a convolution method. Selected profiles were also recorded on film to enable a comparison. After applying the correction factors to the profiles measured with different detectors, agreement between the detectors and with profiles measured on EBT3 film was improved considerably. Differences in the full width half maximum obtained with the detectors and the film typically decreased by a factor of two. Off-axis correction factors outside of a 10 × 1 cm² field ranged from about 1.3 for the EDGE diode about 10 mm from the field edge to 0.7 for the PinPoint 31006 25 mm from the field edge. The microDiamond required corrections comparable in size to the Si-diodes and even exceeded the values in the tail region of the field. 
The SFD was found to require the smallest correction. The corrections typically became larger for higher energies and for smaller field sizes. With a combination of two detectors, experimentally derived correction factors can be obtained. Application of those factors leads to improved agreement between the measured profiles and those recorded on EBT3 film. The results also complement so far only Monte Carlo-simulated values for the off-axis response of different detectors. © 2017 American Association of Physicists in Medicine.
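One plausible reading of the two-detector convolution method is: if the small detector reads D(x)·r(x) (with r a slowly varying energy response) and the chamber reads the true dose convolved with its volume kernel K, then (K ⊗ diode)/chamber ≈ r(x), whose inverse is the sought correction. This sketch is an interpretation under those assumptions, not the authors' published algorithm, and the top-hat kernel is a crude stand-in:

```python
import numpy as np

def volume_kernel(width_mm, step_mm):
    """Uniform (top-hat) kernel approximating the chamber's volume
    averaging over its active length."""
    n = max(1, int(round(width_mm / step_mm)))
    return np.full(n, 1.0 / n)

def off_axis_correction(diode, chamber, kernel):
    """Spatial correction factor for the small detector: smear the
    diode profile with the chamber kernel and divide the chamber
    profile by it, isolating 1/r(x) independent of the volume effect."""
    smeared = np.convolve(diode, kernel, mode="same")
    return chamber / smeared
```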

  12. A New, More Powerful Approach to Multitrait-Multimethod Analyses: An Application of Second-Order Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Hocevar, Dennis

    The advantages of applying confirmatory factor analysis (CFA) to multitrait-multimethod (MTMM) data are widely recognized. However, because CFA as traditionally applied to MTMM data incorporates single indicators of each scale (i.e., each trait/method combination), important weaknesses are the failure to: (1) correct appropriately for measurement…

  13. A general formalism for phase space calculations

    NASA Technical Reports Server (NTRS)

    Norbury, John W.; Deutchman, Philip A.; Townsend, Lawrence W.; Cucinotta, Francis A.

    1988-01-01

    General formulas for calculating the interactions of galactic cosmic rays with target nuclei are presented, along with methods for calculating the appropriate normalization volume elements and phase space factors. Particular emphasis is placed on obtaining correct phase space factors for 2- and 3-body final states. Calculations for both Lorentz-invariant and noninvariant phase space are presented.
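For the 2-body final state, the key kinematic ingredient of the phase space factor is the center-of-momentum momentum of either particle. This is the standard relativistic kinematics formula, not taken from the paper itself:

```python
import numpy as np

def two_body_momentum(sqrt_s, m1, m2):
    """CM-frame momentum magnitude of either particle in a 2-body
    final state: p* = sqrt((s-(m1+m2)^2)(s-(m1-m2)^2)) / (2*sqrt(s))."""
    s = sqrt_s ** 2
    return np.sqrt((s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)) / (2 * sqrt_s)
```

As a check, for equal masses each particle's energy p*^2 + m^2 sums back to s/4 per particle.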

  14. Correction factors for ionization chamber measurements with the ‘Valencia’ and ‘large field Valencia’ brachytherapy applicators

    NASA Astrophysics Data System (ADS)

    Gimenez-Alventosa, V.; Gimenez, V.; Ballester, F.; Vijande, J.; Andreo, P.

    2018-06-01

    Treatment of small skin lesions using HDR brachytherapy applicators is a widely used technique. The shielded applicators currently available in clinical practice are based on a tungsten-alloy cup that collimates the source-emitted radiation into a small region, hence protecting nearby tissues. The goal of this manuscript is to evaluate the correction factors required for dose measurements with a plane-parallel ionization chamber typically used in clinical brachytherapy for the ‘Valencia’ and ‘large field Valencia’ shielded applicators. Monte Carlo simulations have been performed using the PENELOPE-2014 system to determine the absorbed dose deposited in a water phantom and in the chamber active volume with a Type A uncertainty of the order of 0.1%. The average energies of the photon spectra arriving at the surface of the water phantom differ by approximately 10%, being 384 keV for the ‘Valencia’ and 343 keV for the ‘large field Valencia’. The ionization chamber correction factors have been obtained for both applicators using three methods, their values depending on the applicator being considered. Using a depth-independent global chamber perturbation correction factor and no shift of the effective point of measurement yields depth-dose differences of up to 1% for the ‘Valencia’ applicator. Calculations using a depth-dependent global perturbation factor, or a shift of the effective point of measurement combined with a constant partial perturbation factor, result in differences of about 0.1% for both applicators. The results emphasize the relevance of carrying out detailed Monte Carlo studies for each shielded brachytherapy applicator and ionization chamber.

  15. Simultaneous acquisition of perfusion and permeability from corrected relaxation rates with dynamic susceptibility contrast dual gradient echo.

    PubMed

    Kim, Eun-Ju; Kim, Dae-Hong; Lee, Sang Hoon; Huh, Yong-Min; Song, Ho-Taek; Suh, Jin-Suck

    2004-04-01

    This study compared two methods, corrected (separation of T(1) and T(2)* effects) and uncorrected, in order to determine the suitability of the perfusion and permeability measures through Delta R(2)* and Delta R(1) analyses. A dynamic susceptibility contrast dual gradient echo (DSC-DGE) was used to image the fixed phantoms and flow phantoms (Sephadex perfusion phantoms and dialyzer phantom for the permeability measurements). The results confirmed that the corrected relaxation rate was linearly proportional to gadolinium-diethyltriamine pentaacetic acid (Gd-DTPA) concentration, whereas the uncorrected relaxation rate was not, in the fixed-phantom and simulation experiments. For the perfusion measurements, it was found that the correction process was necessary not only for the Delta R(1) time curve but also for the Delta R(2)* time curve analyses. Perfusion could not be measured without correcting the Delta R(2)* time curve. The water volume, which was expressed as the perfusion amount, was found to be closer to the theoretical value when using the corrected Delta R(1) curve in the calculations. However, this may only hold at the low tissue concentrations of Gd-DTPA used in this study. For the permeability measurements based on the two-compartment model, the permeability factor (k(ev); e = extravascular, v = vascular) from the outside to the inside of the hollow fibers was greater in the corrected Delta R(1) method than in the uncorrected Delta R(1) method. The differences between the corrected and the uncorrected Delta R(1) values were confirmed by the simulation experiments. In conclusion, this study proposes that the correction for the relaxation rates, Delta R(2)* and Delta R(1), is indispensable in making accurate perfusion and permeability measurements, and that DSC-DGE is a useful method for obtaining information on perfusion and permeability simultaneously.

  16. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated based on continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematic analysis. After verification by comparing single-pass flow stress correction with traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and validated by multistage relative softening calculations. This research provides a broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and friction optimization in finite element analysis.
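The point-by-point friction correction of a measured flow stress can be illustrated with one common approximation for barreling friction in axisymmetric compression; the specific instantaneous friction factor equation fitted in the paper is not reproduced, and the formula below is a generic textbook form:

```python
import numpy as np

def friction_corrected_stress(sigma_meas, m, r_over_h):
    """Approximate friction correction for axisymmetric compression:
    sigma = sigma_meas / (1 + 2*m*(r/h)/(3*sqrt(3))). With a
    strain-dependent (instantaneous) m, apply it point-by-point
    along the stress-strain curve."""
    m = np.asarray(m, float)
    return np.asarray(sigma_meas, float) / (1 + 2 * m * r_over_h / (3 * np.sqrt(3)))
```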

  17. Slice profile and B1 corrections in 2D magnetic resonance fingerprinting.

    PubMed

    Ma, Dan; Coppo, Simone; Chen, Yong; McGivney, Debra F; Jiang, Yun; Pahwa, Shivani; Gulani, Vikas; Griswold, Mark A

    2017-11-01

    The goal of this study is to characterize and improve the accuracy of 2D magnetic resonance fingerprinting (MRF) scans in the presence of slice profile (SP) and B1 imperfections, which are two main factors that affect quantitative results in MRF. The SP and B1 imperfections are characterized and corrected separately. The SP effect is corrected by simulating the radiofrequency pulse in the dictionary, and the B1 is corrected by acquiring a B1 map using the Bloch-Siegert method before each scan. The accuracy, precision, and repeatability of the proposed method are evaluated in phantom studies. The effects of both SP and B1 imperfections are also illustrated and corrected in the in vivo studies. The SP and B1 corrections improve the accuracy of the T1 and T2 values, independent of the shape of the radiofrequency pulse. The T1 and T2 values obtained from different excitation patterns become more consistent after corrections, which leads to an improvement of the robustness of the MRF design. This study demonstrates that MRF is sensitive to both SP and B1 effects, and that corrections can be made to improve the accuracy of MRF with only a 2-s increase in acquisition time. Magn Reson Med 78:1781-1789, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
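The corrections described above enter MRF through the dictionary: simulating fingerprints with the actual RF pulse profile and a measured B1 map, then matching as usual. The matching step itself is the standard normalized inner-product search, sketched here with toy data:

```python
import numpy as np

def mrf_match(signal, dictionary, t1_t2_grid):
    """MRF dictionary matching: return the (T1, T2) pair whose simulated
    fingerprint has the largest normalized inner product with the
    measured signal. SP/B1 corrections amount to simulating the
    dictionary with the actual pulse profile and B1 map."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signal / np.linalg.norm(signal)
    return t1_t2_grid[int(np.argmax(np.abs(d @ np.conj(s))))]
```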

  18. SU-E-T-223: Computed Radiography Dose Measurements of External Radiotherapy Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aberle, C; Kapsch, R

    2015-06-15

    Purpose: To obtain quantitative, two-dimensional dose measurements of external radiotherapy beams with a computed radiography (CR) system and to derive volume correction factors for ionization chambers in small fields. Methods: A commercial Kodak ACR2000i CR system with Kodak Flexible Phosphor Screen HR storage foils was used. Suitable measurement conditions and procedures were established. Several corrections were derived, including image fading, length-scale corrections and long-term stability corrections. Dose calibration curves were obtained for cobalt, 4 MV, 8 MV and 25 MV photons, and for 10 MeV, 15 MeV and 18 MeV electrons in a water phantom. Inherent measurement inhomogeneities were studied, as well as the directional dependence of the response. Finally, 2D scans with ionization chambers were directly compared to CR measurements, and volume correction factors were derived. Results: Dose calibration curves (0.01 Gy to 7 Gy) were obtained for multiple photon and electron beam qualities. For each beam quality, the calibration curves can be described by a single fit equation over the whole dose range. The energy dependence of the dose response was determined. The length scale on the images was adjusted scan-by-scan, typically by 2 percent horizontally and by 3 percent vertically. The remaining inhomogeneities after the system’s standard calibration procedure were corrected for. After correction, the homogeneity is on the order of a few percent. The storage foils can be rotated by up to 30 degrees without a significant effect on the measured signal. First results on the determination of volume correction factors were obtained. Conclusion: With CR, quantitative, two-dimensional dose measurements with a high spatial resolution (sub-mm) can be obtained over a large dose range. In order to make use of these advantages, several calibrations, corrections and supporting measurements are needed.
This work was funded by the European Metrology Research Programme (EMRP) project HLT09 MetrExtRT Metrology for Radiotherapy using Complex Radiation Fields.

  19. Proton dose distribution measurements using a MOSFET detector with a simple dose-weighted correction method for LET effects.

    PubMed

    Kohno, Ryosuke; Hotta, Kenji; Matsuura, Taeko; Matsubara, Kana; Nishioka, Shie; Nishio, Teiji; Kawashima, Mitsuhiko; Ogino, Takashi

    2011-04-04

    We experimentally evaluated the proton beam dose reproducibility, sensitivity, angular dependence and depth-dose relationships for a new Metal Oxide Semiconductor Field Effect Transistor (MOSFET) detector. The detector was fabricated with a thinner oxide layer and was operated at high-bias voltages. In order to accurately measure dose distributions, we developed a practical method for correcting the MOSFET response to proton beams. The detector was tested by examining lateral dose profiles formed by protons passing through an L-shaped bolus. The dose reproducibility, angular dependence and depth-dose response were evaluated using a 190 MeV proton beam. Depth-output curves produced using the MOSFET detectors were compared with results obtained using an ionization chamber (IC). Since accurate measurements of proton dose distribution require correction for LET effects, we developed a simple dose-weighted correction method. The correction factors were determined as a function of proton penetration depth, or residual range. The residual proton range at each measurement point was calculated using the pencil beam algorithm. Lateral measurements in a phantom were obtained for pristine and SOBP beams. The reproducibility of the MOSFET detector was within 2%, and the angular dependence was less than 9%. The detector exhibited a good response at the Bragg peak (0.74 relative to the IC detector). For dose distributions resulting from protons passing through an L-shaped bolus, the corrected MOSFET dose agreed well with the IC results. Absolute proton dosimetry can be performed using MOSFET detectors to a precision of about 3% (1 sigma). The thinner oxide layer improved the LET response in proton dosimetry. By employing correction methods for LET dependence, it is possible to measure absolute proton dose using MOSFET detectors.
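Applying a correction factor parameterized by residual range reduces to a table lookup with interpolation. A minimal sketch with placeholder factor values (the actual calibration table would come from comparison against an ionization chamber):

```python
import numpy as np

def mosfet_corrected_dose(raw_dose, residual_range, range_table, factor_table):
    """Scale the raw MOSFET reading by a residual-range-dependent
    correction factor, linearly interpolated from a calibration table
    (values below are placeholders, not published data)."""
    k = np.interp(residual_range, range_table, factor_table)
    return raw_dose * k
```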

  1. Habitat complexity and fish size affect the detection of Indo-Pacific lionfish on invaded coral reefs

    NASA Astrophysics Data System (ADS)

    Green, S. J.; Tamburello, N.; Miller, S. E.; Akins, J. L.; Côté, I. M.

    2013-06-01

    A standard approach to improving the accuracy of reef fish population estimates derived from underwater visual censuses (UVCs) is the application of species-specific correction factors, which assumes that a species' detectability is constant under all conditions. To test this assumption, we quantified detection rates for invasive Indo-Pacific lionfish (Pterois volitans and P. miles), which are now a primary threat to coral reef conservation throughout the Caribbean. Estimates of lionfish population density and distribution, which are essential for managing the invasion, are currently obtained through standard UVCs. Using two conventional UVC methods, the belt transect and stationary visual census (SVC), we assessed how lionfish detection rates vary with lionfish body size and habitat complexity (measured as rugosity) on invaded continuous and patch reefs off Cape Eleuthera, the Bahamas. Belt transect and SVC surveys performed equally poorly, with both methods failing to detect the presence of lionfish in >50% of surveys where thorough, lionfish-focussed searches yielded one or more individuals. Conventional methods underestimated lionfish biomass by ~200%. Crucially, detection rate varied significantly with both lionfish size and reef rugosity, indicating that the application of a single correction factor across habitats and stages of invasion is unlikely to accurately characterize local populations. Applying variable correction factors that account for site-specific lionfish size and rugosity to conventional survey data increased estimates of lionfish biomass, but these remained significantly lower than actual biomass. To increase the accuracy and reliability of estimates of lionfish density and distribution, monitoring programs should use detailed area searches rather than standard visual survey methods.
Our study highlights the importance of accounting for sources of spatial and temporal variation in detection to increase the accuracy of survey data from coral reef systems.
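    The "variable correction factor" idea, scaling counts by a detection probability that depends on fish size and reef rugosity rather than a single species-wide constant, can be sketched as follows. The logistic model and its coefficients are hypothetical illustrations, not the study's fitted values:

```python
import math

def detection_probability(size_cm, rugosity, b0=-2.0, b1=0.08, b2=-1.5):
    """Hypothetical logistic detection model: larger fish are easier to
    detect, higher rugosity (structural complexity) hides fish. The
    coefficients are illustrative, not values fitted by the study."""
    z = b0 + b1 * size_cm + b2 * rugosity
    return 1.0 / (1.0 + math.exp(-z))

def corrected_density(observed_count, survey_area_m2, size_cm, rugosity):
    """Divide raw counts by site-specific detectability -- the variable
    correction factor -- instead of applying one species-wide constant."""
    p = detection_probability(size_cm, rugosity)
    return observed_count / (p * survey_area_m2)
```

    Because detectability is below 1, the corrected density always exceeds the naive count-per-area estimate, and the correction grows with rugosity.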

  2. Refractive Outcomes, Contrast Sensitivity, HOAs, and Patient Satisfaction in Moderate Myopia: Wavefront-Optimized Versus Tissue-Saving PRK.

    PubMed

    Nassiri, Nader; Sheibani, Kourosh; Azimi, Abbas; Khosravi, Farinaz Mahmoodi; Heravian, Javad; Yekta, Abasali; Moghaddam, Hadi Ostadi; Nassiri, Saman; Yasseri, Mehdi; Nassiri, Nariman

    2015-10-01

    To compare refractive outcomes, contrast sensitivity, higher-order aberrations (HOAs), and patient satisfaction after photorefractive keratectomy for correction of moderate myopia with two methods: tissue saving versus wavefront optimized. In this prospective, comparative study, 152 eyes (80 patients) with moderate myopia with and without astigmatism were randomly divided into two groups: the tissue-saving group (Technolas 217z Zyoptix laser; Bausch & Lomb, Rochester, NY) (76 eyes of 39 patients) or the wavefront-optimized group (WaveLight Allegretto Wave Eye-Q laser; Alcon Laboratories, Inc., Fort Worth, TX) (76 eyes of 41 patients). Preoperative and 3-month postoperative refractive outcomes, contrast sensitivity, HOAs, and patient satisfaction were compared between the two groups. The mean spherical equivalent was -4.50 ± 1.02 diopters. No statistically significant differences were detected between the groups in terms of uncorrected and corrected distance visual acuity and spherical equivalent preoperatively and 3 months postoperatively. No statistically significant differences were seen in the amount of preoperative to postoperative contrast sensitivity changes between the two groups in photopic and mesopic conditions. HOAs and Q factor increased in both groups postoperatively (P = .001), with the tissue-saving method causing more increases in HOAs (P = .007) and Q factor (P = .039). Patient satisfaction was comparable between both groups. Both platforms were effective in correcting moderate myopia with or without astigmatism. No difference in refractive outcome, contrast sensitivity changes, and patient satisfaction between the groups was observed. Postoperatively, the tissue-saving method caused a higher increase in HOAs and Q factor compared to the wavefront-optimized method, which could be due to larger optical zone sizes in the tissue-saving group. Copyright 2015, SLACK Incorporated.

  3. The self-absorption correction factors for 210Pb concentration in mining waste and influence on environmental radiation risk assessment.

    PubMed

    Bonczyk, Michal; Michalik, Boguslaw; Chmielewska, Izabela

    2017-03-01

    The radioactive lead isotope 210Pb occurs in waste originating from the metal smelting and refining industry, gas and oil extraction and, sometimes, underground coal mines; such waste is very often deposited in the natural environment. Radiation risk assessment requires accurate knowledge of the concentration of 210Pb in such materials. Laboratory measurement seems to be the only reliable method applicable in environmental 210Pb monitoring. One of the methods is gamma-ray spectrometry, which is a very fast and cost-effective way to determine 210Pb concentration. On the other hand, the self-attenuation of the gamma ray from 210Pb (46.5 keV) in a sample is significant, as it depends not only on sample density but also on sample chemical composition (sample matrix). This phenomenon often leads to underestimation of the 210Pb activity concentration when gamma spectrometry is applied without the relevant corrections. Consequently, the corresponding radiation risk can also be improperly evaluated. Sixty samples of coal mining solid tailings (sediments created from underground mining water) were analysed. A transmission method, slightly modified and adapted to the existing laboratory conditions, was applied for the accurate measurement of 210Pb concentration. The observed concentrations of 210Pb range from 42.2 to 11,700 Bq·kg⁻¹ of dry mass. Experimentally obtained correction factors, related to sample density and elemental composition, range between 1.11 and 6.97. Neglecting this factor can cause significant errors or underestimation in radiological risk assessment. The obtained results have been used for environmental radiation risk assessment performed with the ERICA tool, assuming exposure conditions typical for the final destination of this kind of waste.
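    The transmission approach lends itself to a compact implementation. Below is a sketch of the widely used Cutshall-style self-absorption factor, computed from the measured transmission of the 46.5 keV line through the sample; the paper's adapted procedure may differ in detail:

```python
import math

def self_absorption_factor(transmission):
    """Transmission-based self-attenuation correction (Cutshall method).
    `transmission` is T = I_sample / I_reference for the 46.5 keV gamma
    line measured through the sample. The factor -ln(T) / (1 - T) tends
    to 1 as T -> 1 (no attenuation) and grows for dense matrices."""
    if transmission >= 1.0:
        return 1.0
    return -math.log(transmission) / (1.0 - transmission)

def corrected_activity(measured_bq_per_kg, transmission):
    """Apply the self-absorption factor to a measured 210Pb activity."""
    return measured_bq_per_kg * self_absorption_factor(transmission)
```

    For strongly absorbing matrices (T near 0.01) the factor reaches several units, consistent with the 1.11 to 6.97 range reported above.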

  4. A new bias field correction method combining N3 and FCM for improved segmentation of breast density on MRI.

    PubMed

    Lin, Muqing; Chan, Siwa; Chen, Jeon-Hor; Chang, Daniel; Nie, Ke; Chen, Shih-Ting; Lin, Cheng-Ju; Shih, Tzu-Ching; Nalcioglu, Orhan; Su, Min-Ying

    2011-01-01

    Quantitative breast density is known as a strong risk factor associated with the development of breast cancer. Measurement of breast density based on three-dimensional breast MRI may provide very useful information. One important step for quantitative analysis of breast density on MRI is the correction of field inhomogeneity to allow an accurate segmentation of the fibroglandular tissue (dense tissue). A new bias field correction method combining the nonparametric nonuniformity normalization (N3) algorithm and a fuzzy-C-means (FCM)-based inhomogeneity correction algorithm is developed in this work. The analysis is performed on non-fat-sat T1-weighted images acquired using a 1.5 T MRI scanner. A total of 60 breasts from 30 healthy volunteers were analyzed. N3 is known as a robust correction method, but it cannot correct a strong bias field over a large area. The FCM-based algorithm can correct the bias field over a large area, but it may change the tissue contrast and affect the segmentation quality. The proposed algorithm applies N3 first, followed by FCM, and then the generated bias field is smoothed using a Gaussian kernel and B-spline surface fitting to minimize the problem of mistakenly changed tissue contrast. The segmentation results based on the N3+FCM corrected images were compared to the N3 and FCM alone corrected images and to images corrected with another method, coherent local intensity clustering (CLIC). The segmentation quality based on the different correction methods was evaluated by a radiologist and ranked. The authors demonstrated that the iterative N3+FCM correction method brightens the signal intensity of fatty tissues and separates the histogram peaks between the fibroglandular and fatty tissues to allow an accurate segmentation between them.
In the first reading session, the radiologist found the (N3+FCM > N3 > FCM) ranking in 17 breasts, (N3+FCM > N3 = FCM) in 7 breasts, (N3+FCM = N3 > FCM) in 32 breasts, (N3+FCM = N3 = FCM) in 2 breasts, and (N3 > N3+FCM > FCM) in 2 breasts. The results of the second reading session were similar. Each pairwise Wilcoxon signed-rank test was significant, showing N3+FCM superior to both N3 and FCM, and N3 superior to FCM. The performance of the new N3+FCM algorithm was comparable to that of CLIC, showing equivalent quality in 57/60 breasts. Choosing an appropriate bias field correction method is a very important preprocessing step to allow an accurate segmentation of fibroglandular tissues on breast MRI for quantitative measurement of breast density. Both the proposed N3+FCM algorithm and CLIC yield satisfactory results.

  5. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations with substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  6. On Choosing a Rational Flight Trajectory to the Moon

    NASA Astrophysics Data System (ADS)

    Gordienko, E. S.; Khudorozhkov, P. A.

    2017-12-01

    The algorithm for choosing a trajectory of spacecraft flight to the Moon is discussed. The characteristic velocity values needed for correcting the flight trajectory and a braking maneuver are estimated using the Monte Carlo method. The profile of insertion and flight to a near-circular polar orbit with an altitude of 100 km of an artificial lunar satellite (ALS) is given. The case of two corrections applied during the flight and braking phases is considered. The flight to an ALS orbit is modeled in the geocentric geoequatorial nonrotating coordinate system with the influence of perturbations from the Earth, the Sun, and the Moon factored in. The characteristic correction costs corresponding to corrections performed at different time points are examined. Insertion phase errors, the errors of performing the needed corrections, and the errors of determining the flight trajectory parameters are taken into account.

  7. Using the Time-Correlated Induced Fission Method to Simultaneously Measure the 235U Content and the Burnable Poison Content in LWR Fuel Assemblies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Root, M. A.; Menlove, H. O.; Lanza, R. C.

    The uranium neutron coincidence collar uses thermal neutron interrogation to verify the 235U mass in low-enriched uranium (LEU) fuel assemblies in fuel fabrication facilities. Burnable poisons are commonly added to nuclear fuel to increase the lifetime of the fuel. The high thermal neutron absorption by these poisons reduces the active neutron signal produced by the fuel. Burnable poison correction factors or fast-mode runs with Cd liners can help compensate for this effect, but the correction factors rely on operator declarations of burnable poison content, and fast-mode runs are time-consuming. This paper describes a new analysis method to measure the 235U mass and burnable poison content in LEU nuclear fuel simultaneously in a timely manner, without requiring additional hardware.

  8. Using the Time-Correlated Induced Fission Method to Simultaneously Measure the 235U Content and the Burnable Poison Content in LWR Fuel Assemblies

    DOE PAGES

    Root, M. A.; Menlove, H. O.; Lanza, R. C.; ...

    2018-03-21

    The uranium neutron coincidence collar uses thermal neutron interrogation to verify the 235U mass in low-enriched uranium (LEU) fuel assemblies in fuel fabrication facilities. Burnable poisons are commonly added to nuclear fuel to increase the lifetime of the fuel. The high thermal neutron absorption by these poisons reduces the active neutron signal produced by the fuel. Burnable poison correction factors or fast-mode runs with Cd liners can help compensate for this effect, but the correction factors rely on operator declarations of burnable poison content, and fast-mode runs are time-consuming. This paper describes a new analysis method to measure the 235U mass and burnable poison content in LEU nuclear fuel simultaneously in a timely manner, without requiring additional hardware.

  9. Bias Correction Methods Explain Much of the Variation Seen in Breast Cancer Risks of BRCA1/2 Mutation Carriers.

    PubMed

    Vos, Janet R; Hsu, Li; Brohet, Richard M; Mourits, Marian J E; de Vries, Jakob; Malone, Kathleen E; Oosterwijk, Jan C; de Bock, Geertruida H

    2015-08-10

    Recommendations for treating patients who carry a BRCA1/2 mutation are mainly based on cumulative lifetime risks (CLTRs) of breast cancer determined from retrospective cohorts. These risks vary widely (27% to 88%), and it is important to understand why. We analyzed the effects of methods of risk estimation and bias correction and of population factors on CLTRs in this retrospective clinical cohort of BRCA1/2 carriers. The following methods to estimate the breast cancer risk of BRCA1/2 carriers were identified from the literature: Kaplan-Meier, frailty, and modified segregation analyses with bias correction consisting of including or excluding index patients combined with including or excluding first-degree relatives (FDRs) or different conditional likelihoods. These were applied to clinical data of BRCA1/2 families derived from our family cancer clinic, for whom a simulation was also performed to evaluate the methods. CLTRs and 95% CIs were estimated and compared with the reference CLTRs. CLTRs ranged from 35% to 83% for BRCA1 and 41% to 86% for BRCA2 carriers at age 70 years (width of 95% CIs: 10% to 35% and 13% to 46%, respectively). Relative bias varied from -38% to +16%. Bias correction with inclusion of index patients and untested FDRs gave the smallest bias: +2% (SD, 2%) in BRCA1 and +0.9% (SD, 3.6%) in BRCA2. Much of the variation in breast cancer CLTRs in retrospective clinical BRCA1/2 cohorts is due to the bias-correction method, whereas a smaller part is due to population differences. Kaplan-Meier analyses with bias correction that includes index patients and a proportion of untested FDRs provide suitable CLTRs for carriers counseled in the clinic. © 2015 by American Society of Clinical Oncology.

  10. Refraction error correction for deformation measurement by digital image correlation at elevated temperature

    NASA Astrophysics Data System (ADS)

    Su, Yunquan; Yao, Xuefeng; Wang, Shen; Ma, Yinji

    2017-03-01

    An effective correction model is proposed to eliminate the refraction error caused by the optical window of a furnace in digital image correlation (DIC) deformation measurement in a high-temperature environment. First, a theoretical correction model with the corresponding error correction factor is established to eliminate the refraction error induced by double-deck optical glass in DIC deformation measurement. Second, a high-temperature DIC experiment using a chromium-nickel austenite stainless steel specimen is performed to verify the effectiveness of the correction model through correlation calculations under two different conditions (with and without the optical glass). Finally, both the full-field and the divisional displacement results with refraction influence are corrected by the theoretical model and then compared to the displacement results extracted from the images without refraction influence. The experimental results demonstrate that the proposed theoretical correction model can effectively improve the measurement accuracy of the DIC method by reducing the refraction errors in measured full-field displacements in a high-temperature environment.
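    For a single plane-parallel glass plate, the lateral ray displacement that such a correction must account for follows from Snell's law; the paper's model extends this to a double-deck window. A minimal sketch of the standard plate formula:

```python
import math

def lateral_shift(thickness_mm, n_glass, incidence_deg):
    """Lateral displacement of a ray passing through a single flat glass
    plate (standard plane-parallel-plate formula):
        d = t * sin(theta) * (1 - cos(theta) / sqrt(n^2 - sin^2(theta)))
    The paper's correction model generalizes this to double-deck glass."""
    th = math.radians(incidence_deg)
    s = math.sin(th)
    return thickness_mm * s * (1.0 - math.cos(th) / math.sqrt(n_glass ** 2 - s * s))
```

    The shift vanishes at normal incidence and grows with viewing angle, which is why a stereo-DIC setup imaging through a thick window needs an angle-dependent correction.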

  11. Determination of the Kwall correction factor for a cylindrical ionization chamber to measure air-kerma in 60Co gamma beams.

    PubMed

    Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M

    2002-07-21

    The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.

  12. Thin wing corrections for phase-change heat-transfer data.

    NASA Technical Reports Server (NTRS)

    Hunt, J. L.; Pitts, J. I.

    1971-01-01

    Since no methods are available for determining the magnitude of the errors incurred when the semi-infinite slab assumption is violated, a computer program was developed to calculate the heat-transfer coefficients to both sides of a finite, one-dimensional slab subject to the boundary conditions ascribed to the phase-change coating technique. The results have been correlated in the form of correction factors to the semi-infinite slab solutions in terms of parameters normally used with the technique.
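    The semi-infinite solution that these correction factors adjust is the classical one-dimensional relation (T_pc − T_i)/(T_aw − T_i) = 1 − exp(β²)·erfc(β), with β = h·√t / √(ρck). A sketch of inverting it for the heat-transfer coefficient h, as the baseline before any finite-slab correction factor is applied:

```python
import math

def theta(beta):
    """Semi-infinite slab surface temperature response: 1 - exp(b^2)*erfc(b)."""
    return 1.0 - math.exp(beta * beta) * math.erfc(beta)

def heat_transfer_coeff(t_pc, t_i, t_aw, time_s, rho_c_k):
    """Invert the one-dimensional semi-infinite slab solution for h by
    bisection on beta. `rho_c_k` is the product rho*c*k of the slab
    material (SI units); t_pc is the phase-change temperature, t_i the
    initial temperature, t_aw the adiabatic wall temperature."""
    target = (t_pc - t_i) / (t_aw - t_i)
    lo, hi = 1e-9, 20.0
    for _ in range(200):  # theta() is monotonic in beta, so bisection works
        mid = 0.5 * (lo + hi)
        if theta(mid) < target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    return beta * math.sqrt(rho_c_k) / math.sqrt(time_s)
```

    The material value below is illustrative; the paper's contribution is the correction applied to this result when the slab is too thin for the semi-infinite assumption.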

  13. Enhancement of breast periphery region in digital mammography

    NASA Astrophysics Data System (ADS)

    Menegatti Pavan, Ana Luiza; Vacavant, Antoine; Petean Trindade, Andre; Quini, Caio Cesar; Rodrigues de Pina, Diana

    2018-03-01

    Volumetric breast density has been shown to be one of the strongest risk factors for breast cancer diagnosis. This metric can be estimated using digital mammograms. During mammography acquisition, the breast is compressed and part of it loses contact with the paddle, resulting in an uncompressed periphery region with thickness variation. Therefore, reliable density estimation in the breast periphery region is a problem, which affects the accuracy of volumetric breast density measurement. The aim of this study was to enhance the breast periphery to solve the problem of thickness variation. Herein, we present an automatic algorithm to correct breast periphery thickness without changing pixel values in the internal breast region. The correction of pixel values in the periphery was based on mean values over iso-distance lines from the breast skin line, using only adipose tissue information. The algorithm automatically detects the periphery region where thickness should be corrected, and a correction factor is applied to enhance the region. We also compare our contribution with two other algorithms from the state of the art, and we show its accuracy by means of different quality measures. Experienced radiologists subjectively evaluated the resulting images from the three methods in relation to the original mammogram. The mean pixel value, skewness, and kurtosis from the histograms of the three methods were used as comparison metrics. As a result, the methodology presented herein proved to be a good approach to perform before calculating volumetric breast density.

  14. In vivo proton dosimetry using a MOSFET detector in an anthropomorphic phantom with tissue inhomogeneity

    PubMed Central

    Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko

    2012-01-01

    When in vivo proton dosimetry is performed with a metal-oxide semiconductor field-effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth-output curve for a mono-energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMC-MOSFET). The SMC-MOSFET output value at an arbitrary point was compared with the value obtained by the conventional SMC-PPIC, which calculates proton dose distributions by using the depth-dose curve determined by a parallel-plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMC-PPIC results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector. PACS number: 87.56.-v PMID:22402385

  15. Diagnosing and Correcting Mass Accuracy and Signal Intensity Error Due to Initial Ion Position Variations in a MALDI TOFMS

    NASA Astrophysics Data System (ADS)

    Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.

    2018-02-01

    Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
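    The per-position correction described above can be sketched as a simple lookup: a calibrant measured at each sample position yields a multiplicative TOF factor, and since m/z scales as t² in a TOF analyzer, the mass correction is that factor squared. The position names and values below are hypothetical:

```python
def tof_correction_factors(calib_tofs, ref_tof):
    """Per-position multiplicative TOF correction built from spectra of a
    calibrant collected at each sample position (positions and times here
    are illustrative, not the paper's data)."""
    return {pos: ref_tof / t for pos, t in calib_tofs.items()}

def corrected_mass(measured_mass, factor):
    # In a TOF analyzer m/z is proportional to t^2, so a TOF correction
    # factor k scales the mass axis by k^2.
    return measured_mass * factor ** 2
```

    A reading from a position whose calibrant TOF runs long is scaled down accordingly, which is the mechanism behind the roughly fourfold error reduction reported above.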

  16. Diagnosing and Correcting Mass Accuracy and Signal Intensity Error Due to Initial Ion Position Variations in a MALDI TOFMS

    NASA Astrophysics Data System (ADS)

    Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.

    2017-12-01

    Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.

  17. Frequency correction method for improved spatial correlation of hyperpolarized 13C metabolites and anatomy.

    PubMed

    Cunningham, Charles H; Dominguez Viqueira, William; Hurd, Ralph E; Chen, Albert P

    2014-02-01

    Blip-reversed echo-planar imaging (EPI) is investigated as a method for measuring and correcting the spatial shifts that occur due to bulk frequency offsets in (13)C metabolic imaging in vivo. By reversing the k-space trajectory for every other time point, the direction of the spatial shift for a given frequency is reversed. Here, mutual information is used to find the 'best' alignment between images and thereby measure the frequency offset. Time-resolved 3D images of pyruvate/lactate/urea were acquired with 5 s temporal resolution over a 1 min duration in rats (N = 6). For each rat, a second injection was performed with the demodulation frequency purposely mis-set by +35 Hz, to test the correction for erroneous shifts in the images. Overall, the shift induced by the 35 Hz frequency offset was 5.9 ± 0.6 mm (mean ± standard deviation). This agrees well with the expected 5.7 mm shift based on the 2.02 ms delay between k-space lines (giving 30.9 Hz per pixel). The 0.6 mm standard deviation in the correction corresponds to a frequency-detection accuracy of 4 Hz. A method was presented for ensuring the spatial registration between (13)C metabolic images and conventional anatomical images when long echo-planar readouts are used. The frequency correction method was shown to have an accuracy of 4 Hz. Summing the spatially corrected frames gave a signal-to-noise ratio (SNR) improvement factor of 2 or greater, compared with the highest single frame. Copyright © 2013 John Wiley & Sons, Ltd.
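    The alignment step, shifting one image of the blip-reversed pair and scoring mutual information, can be sketched with a histogram-based MI and a brute-force integer-pixel shift search. The 30.9 Hz-per-pixel figure is taken from the abstract; square images and opposite, equal shifts of the pair are assumed:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two images (in nats)."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def estimate_frequency_offset(img_fwd, img_rev, hz_per_pixel=30.9, max_shift=8):
    """Find the per-image shift s (pixels, phase-encode axis) that best
    aligns a blip-reversed pair by maximizing mutual information. With
    reversed trajectories the two images shift in opposite directions,
    so rolling the reversed image by 2*s tests a per-image shift of s;
    the frequency offset is then s * hz_per_pixel."""
    best = max(range(-max_shift, max_shift + 1),
               key=lambda s: mutual_information(
                   img_fwd, np.roll(img_rev, 2 * s, axis=0)))
    return best * hz_per_pixel
```

    On a synthetic pair shifted by ±1 pixel this recovers a 30.9 Hz offset; subpixel accuracy (the paper reports 4 Hz) would require interpolating between shifts.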

  18. A comparison of methods for adjusting biomarkers of iron, zinc, and selenium status for the effect of inflammation in an older population: a case for interleukin 6.

    PubMed

    MacDonell, Sue O; Miller, Jody C; Harper, Michelle J; Reid, Malcolm R; Haszard, Jillian J; Gibson, Rosalind S; Houghton, Lisa A

    2018-05-14

    Older people are at risk of micronutrient deficiencies, which can be under- or overestimated in the presence of inflammation. Several methods have been proposed to adjust for the effect of inflammation; however, to our knowledge, none have been investigated in older adults in whom chronic inflammation is common. We investigated the influence of various inflammation-adjustment methods on micronutrient biomarkers associated with anemia in older people living in aged-care facilities in New Zealand. Blood samples were collected from 289 New Zealand aged-care residents aged >65 y. Serum ferritin, soluble transferrin receptor (sTfR), total body iron (TBI), plasma zinc, and selenium as well as the inflammatory markers high-sensitivity C-reactive protein (CRP), α1-acid glycoprotein (AGP), and interleukin 6 (IL-6) were measured. Four adjustment methods were applied to micronutrient concentrations: 1) internal correction factors based on stages of inflammation defined by CRP and AGP, 2) external correction factors derived from the literature, 3) a regression correction model in which reference CRP and AGP were set to the maximum of the lowest decile, and 4) a regression correction model in which reference IL-6 was set to the maximum of the lowest decile. Forty percent of participants had elevated concentrations of CRP, AGP, or both, and 37% of participants had higher than normal concentrations of IL-6. Adjusted geometric mean values for serum ferritin, sTfR, and TBI were significantly lower (P < 0.001), and plasma zinc and selenium were significantly higher (P < 0.001), than the unadjusted values regardless of the method applied. The greatest inflammation adjustment was observed with the regression correction that used IL-6. Subsequently, the prevalence of zinc and selenium deficiency decreased (-13% and -14%, respectively; P < 0.001), whereas iron deficiency remained unaffected. 
Adjustment for inflammation should be considered when evaluating micronutrient status in this aging population group; however, the approaches used require further investigation, particularly the influence of adjustment for IL-6.
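    The regression-correction approach (methods 3 and 4 above) can be sketched as: regress log biomarker on log inflammation marker, then shrink each observation by the fitted slope times its log excess over the reference value. This is an illustrative BRINDA-style sketch, not the paper's exact model:

```python
import numpy as np

def regression_correct(biomarker, inflammation, ref):
    """Regression-correct biomarker concentrations for inflammation.
    Fits ln(biomarker) ~ ln(inflammation marker), then subtracts the
    slope times each subject's ln-excess over the reference value
    (e.g. the maximum of the lowest IL-6 decile). Subjects at or below
    the reference are left unchanged. Illustrative sketch only."""
    x = np.log(inflammation)
    y = np.log(biomarker)
    slope, intercept = np.polyfit(x, y, 1)
    excess = np.maximum(x - np.log(ref), 0.0)
    return np.exp(y - slope * excess)
```

    Note the sign handling: a positive slope (e.g. ferritin rising with inflammation) adjusts values downward, while a negative slope (e.g. zinc falling with inflammation) adjusts them upward, matching the directions reported above.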

  19. Apparatus and method for quantitative assay of samples of transuranic waste contained in barrels in the presence of matrix material

    DOEpatents

    Caldwell, J.T.; Herrera, G.C.; Hastings, R.D.; Shunk, E.R.; Kunz, W.E.

    1987-08-28

    Apparatus and method for performing corrections for matrix material effects on the neutron measurements generated from analysis of transuranic waste drums using the differential-dieaway technique. By measuring the absorption index and the moderator index for a particular drum, correction factors can be determined for the effects of matrix materials on the "observed" quantity of fissile and fertile material present therein in order to determine the actual assays thereof. A barrel flux monitor is introduced into the measurement chamber to accomplish these measurements as a new contribution to the differential-dieaway technology. 9 figs.

  20. Propulsion of a fin whale (Balaenoptera physalus): why the fin whale is a fast swimmer.

    PubMed

    Bose, N; Lien, J

    1989-07-22

    Measurements of an immature fin whale (Balaenoptera physalus), which died as a result of entrapment in fishing gear near Frenchmans Cove, Newfoundland (47°9'N, 55°25'W), were made to obtain estimates of the volume and surface area of the animal. Detailed measurements of the flukes, both planform and sections, were also obtained. A strip theory was developed to calculate the hydrodynamic performance of the whale's flukes as an oscillating propeller. This method is based on linear, two-dimensional, small-amplitude, unsteady hydrofoil theory with correction factors used to account for the effects of finite span and finite amplitude motion. These correction factors were developed from theoretical results of large-amplitude heaving motion and unsteady lifting-surface theory. A model that estimates the effects of viscous flow on propeller performance was superimposed on the potential-flow results. This model estimates the drag of the hydrofoil sections by assuming that the drag is similar to that of a hydrofoil section in steady flow. The performance characteristics of the flukes of the fin whale were estimated by using this method. The effects of the different correction factors, and of the frictional drag of the fluke sections, are emphasized. Frictional effects in particular were found to reduce the hydrodynamic efficiency of the flukes significantly. The results are discussed and compared with the known characteristics of fin-whale swimming.

  1. [Determination of ventricular volumes by a non-geometric method using gamma-cineangiography].

    PubMed

    Faivre, R; Cardot, J C; Baud, M; Verdenet, J; Berthout, P; Bidet, A C; Bassand, J P; Maurat, J P

    1985-08-01

    The authors suggest a new way of determining ventricular volume by a non-geometric method using gamma-cineangiography. The results obtained by this method were compared with those obtained by a geometric method and by contrast ventriculography in 94 patients. The new non-geometric method assumes that the radioactive tracer is evenly distributed in the cardiovascular system so that blood radioactivity levels can be measured. The ventricular volume is then equal to the ratio of the radioactivity in the LV zone to that of 1 ml of blood. Comparison of the radionuclide and angiographic data in the first 60 patients showed systematically underestimated values, despite a satisfactory statistical correlation (r = 0.87, y = 0.30 X + 6.3). This underestimation is due to attenuation related to the depth of the heart in the thoracic cage and to autoabsorption at the source, the degree of which depends on the ventricular volume. An empirical method of calculation allows correction for these factors by accounting for absorption in the tissues, related to body surface area, and for autoabsorption at the source, corrected for the surface of the isotopic ventricular projection expressed in pixels. Using the data of this empirical method, the correction formula for radionuclide ventricular volume is obtained by a multiple linear regression: corrected radionuclide volume = K X measured radionuclide volume (Formula: see text). This formula was applied in the following 34 patients. The correlation between the radionuclide volumes and the angiographic volumes improved after correction (r = 0.94 vs r = 0.65 uncorrected) and the values were more accurate (y = 0.96 X + 1.5 vs y = 0.18 X + 26).(ABSTRACT TRUNCATED AT 250 WORDS)
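    The count-based principle behind the non-geometric method, and the empirical linear rescaling, reduce to simple arithmetic. The sketch below is illustrative only: the multiplier k is hypothetical, standing in for the paper's regression-derived formula based on body surface area and projected ventricular area:

```python
def radionuclide_volume(lv_counts, counts_per_ml):
    """Non-geometric ventricular volume: total counts in the LV region
    divided by the counts measured from 1 ml of blood."""
    return lv_counts / counts_per_ml

def corrected_volume(measured_ml, k):
    """Empirical linear rescaling of the measured volume; k is a
    hypothetical multiplier standing in for the paper's regression."""
    return k * measured_ml
```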

  2. Measurement correction method for force sensor used in dynamic pressure calibration based on artificial neural network optimized by genetic algorithm

    NASA Astrophysics Data System (ADS)

    Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing

    2017-12-01

    We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extends the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces fastened with bolts, which introduces adverse effects such as additional pretightening and inertia forces. To address these effects, the influence mechanisms of the pretightening force, the inertia force and other factors on the force measurement are theoretically analysed, and a measurement correction method is proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.

  3. [Evaluation of cross calibration of (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    The (123)I-MIBG Heart-to-Mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore useful for assessing regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether H/M ratios measured on two different gamma camera systems correlate, and we sought a cross-calibration formula for the H/M ratio. Moreover, we assessed the feasibility of the (123)I Dual Window (IDW) scatter correction method and compared H/M ratios obtained with and without it. The H/M ratio displayed a good correlation between the two gamma camera systems, and we were able to derive a new H/M calculation formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.
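    A dual-energy-window scatter correction of the kind described above subtracts a weighted estimate of the scatter counts, recorded in a second energy window, from the main-window counts before the H/M ratio is formed. The weight k below is a hypothetical illustration value, not the published IDW coefficient:

```python
def scatter_corrected(main_counts, scatter_counts, k=0.5):
    """Subtract a weighted scatter estimate (weight k, hypothetical)
    measured in a second energy window from the main-window counts."""
    return max(main_counts - k * scatter_counts, 0.0)

def h_to_m(heart_main, heart_scatter, med_main, med_scatter, k=0.5):
    """Heart-to-mediastinum ratio from mean ROI counts after the
    dual-window scatter subtraction has been applied to both ROIs."""
    return (scatter_corrected(heart_main, heart_scatter, k)
            / scatter_corrected(med_main, med_scatter, k))
```

    When the mediastinal ROI carries a proportionally larger scatter contribution than the cardiac ROI, the corrected H/M ratio comes out higher than the uncorrected one.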

  4. Orbit-product representation and correction of Gaussian belief propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jason K; Chertkov, Michael; Chernyak, Vladimir

    We present a new interpretation of Gaussian belief propagation (GaBP) based on the 'zeta function' representation of the determinant as a product over orbits of a graph. We show that GaBP captures back-tracking orbits of the graph and consider how to correct this estimate by accounting for non-backtracking orbits. We show that the product over non-backtracking orbits may be interpreted as the determinant of the non-backtracking adjacency matrix of the graph with edge weights based on the solution of GaBP. An efficient method is proposed to compute a truncated correction factor including all non-backtracking orbits up to a specified length.
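    The non-backtracking adjacency matrix referred to above can be constructed explicitly for a small graph. This unweighted sketch shows only the combinatorial structure; in the paper, the edges carry weights derived from the GaBP solution:

```python
import numpy as np

def nonbacktracking_matrix(edges):
    """Build the non-backtracking (Hashimoto) matrix of an undirected
    graph: rows and columns index directed edges (u, v); the entry is 1
    exactly when edge (u, v) can be followed by edge (v, w) with w != u,
    i.e. when the walk does not immediately backtrack."""
    directed = [(u, v) for u, v in edges] + [(v, u) for u, v in edges]
    m = len(directed)
    B = np.zeros((m, m))
    for i, (u, v) in enumerate(directed):
        for j, (a, b) in enumerate(directed):
            if a == v and b != u:
                B[i, j] = 1.0
    return B, directed
```

    On a triangle, every directed edge has exactly one non-backtracking continuation, so every row of B sums to 1.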

  5. Emerging technology for transonic wind-tunnel-wall interference assessment and corrections

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.

    1988-01-01

    Several nonlinear transonic codes and a panel method code for wind tunnel/wall interference assessment and correction (WIAC) studies are reviewed. Contrasts between two- and three-dimensional transonic testing factors which affect WIAC procedures are illustrated with airfoil data from the NASA/Langley 0.3-meter transonic cryogenic tunnel and Pathfinder I data. Also, three-dimensional transonic WIAC results for Mach number and angle-of-attack corrections to data from a relatively large 20 deg swept semispan wing in the solid-wall NASA/Ames high Reynolds number Channel I are verified by three-dimensional thin-layer Navier-Stokes free-air solutions.

  6. Relativistic corrections to heavy quark fragmentation to S-wave heavy mesons

    NASA Astrophysics Data System (ADS)

    Sang, Wen-Long; Yang, Lan-Fei; Chen, Yu-Qi

    2009-07-01

    The relativistic corrections of order v² to the fragmentation functions for the heavy quark to S-wave heavy quarkonia are calculated in the framework of the nonrelativistic quantum chromodynamics factorization formula. We derive the fragmentation functions by using the Collins-Soper definition in both the Feynman gauge and the axial gauge. We also extract them through the process Z0 → Hq q̄ in the limit MZ/m → ∞. We find that all results obtained by these two different methods and in different gauges are the same. We estimate the relative size of the relativistic corrections to the fragmentation functions.

  7. Relativistic corrections to heavy quark fragmentation to S-wave heavy mesons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sang Wenlong; Yang Lanfei; Chen Yuqi

    The relativistic corrections of order v² to the fragmentation functions for the heavy quark to S-wave heavy quarkonia are calculated in the framework of the nonrelativistic quantum chromodynamics factorization formula. We derive the fragmentation functions by using the Collins-Soper definition in both the Feynman gauge and the axial gauge. We also extract them through the process Z0 → Hq q̄ in the limit MZ/m → ∞. We find that all results obtained by these two different methods and in different gauges are the same. We estimate the relative size of the relativistic corrections to the fragmentation functions.

  8. Imprinting high-gradient topographical structures onto optical surfaces using magnetorheological finishing: manufacturing corrective optical elements for high-power laser applications.

    PubMed

    Menapace, Joseph A; Ehrmann, Paul E; Bayramian, Andrew J; Bullington, Amber; Di Nicola, Jean-Michel G; Haefner, Constantin; Jarboe, Jeffrey; Marshall, Christopher; Schaffers, Kathleen I; Smith, Cal

    2016-07-01

    Corrective optical elements form an important part of high-precision optical systems. We have developed a method to manufacture high-gradient corrective optical elements for high-power laser systems using deterministic magnetorheological finishing (MRF) imprinting technology. Several process factors need to be considered for polishing ultraprecise topographical structures onto optical surfaces using MRF. They include proper selection of MRF removal function and wheel sizes, detailed MRF tool and interferometry alignment, and optimized MRF polishing schedules. Dependable interferometry also is a key factor in high-gradient component manufacture. A wavefront attenuating cell, which enables reliable measurement of gradients beyond what is attainable using conventional interferometry, is discussed. The results of MRF imprinting a 23 μm deep structure containing gradients over 1.6 μm / mm onto a fused-silica window are presented as an example of the technique's capabilities. This high-gradient element serves as a thermal correction plate in the high-repetition-rate advanced petawatt laser system currently being built at Lawrence Livermore National Laboratory.

  9. Imprinting high-gradient topographical structures onto optical surfaces using magnetorheological finishing: Manufacturing corrective optical elements for high-power laser applications

    DOE PAGES

    Menapace, Joseph A.; Ehrmann, Paul E.; Bayramian, Andrew J.; ...

    2016-03-15

    Corrective optical elements form an important part of high-precision optical systems. We have developed a method to manufacture high-gradient corrective optical elements for high-power laser systems using deterministic magnetorheological finishing (MRF) imprinting technology. Several process factors need to be considered for polishing ultraprecise topographical structures onto optical surfaces using MRF. They include proper selection of MRF removal function and wheel sizes, detailed MRF tool and interferometry alignment, and optimized MRF polishing schedules. Dependable interferometry also is a key factor in high-gradient component manufacture. A wavefront attenuating cell, which enables reliable measurement of gradients beyond what is attainable using conventional interferometry, is discussed. The results of MRF imprinting a 23 μm deep structure containing gradients over 1.6 μm / mm onto a fused-silica window are presented as an example of the technique's capabilities. As a result, this high-gradient element serves as a thermal correction plate in the high-repetition-rate advanced petawatt laser system currently being built at Lawrence Livermore National Laboratory.

  10. The Impact of Individual and Institutional Factors on Turnover Intent Among Taiwanese Correctional Staff.

    PubMed

    Lai, Yung-Lien

    2017-01-01

    The existing literature on turnover intent among correctional staff conducted in Western societies focuses on the impact of individual-level factors; the possible effects of institutional contexts have been largely overlooked. Moreover, the relationships of various multidimensional conceptualizations of both job satisfaction and organizational commitment to turnover intent are still largely unknown. Using data collected by a self-reported survey of 676 custody staff employed in 22 Taiwanese correctional facilities during April of 2011, the present study expands upon theoretical models developed in Western societies and examines the effects of both individual and institutional factors on turnover intent simultaneously. Results from the use of the hierarchical linear modeling (HLM) statistical method indicate that, at the individual-level, supervisory versus non-supervisory status, job stress, job dangerousness, job satisfaction, and organizational commitment consistently produce a significant association with turnover intent after controlling for personal characteristics. Specifically, three distinct forms of organizational commitment demonstrated an inverse impact on turnover intent. Among institutional-level variables, custody staff who came from a larger facility reported higher likelihood of thinking about quitting their job. © The Author(s) 2015.

  11. Microcomputer Calculated Diagnostic X-Ray Exposure Factors: Clinical Evaluation

    PubMed Central

    Markivee, C. R.; Edwards, F. Marc; Leonard, Patricia

    1981-01-01

    Calculation of correct settings for the controls of a diagnostic x-ray machine was established as feasible in a microcomputer with 4K memory. The cost effectiveness and other findings in the application of this method are discussed.

  12. Infant mortality by color or race from Rondônia, Brazilian Amazon

    PubMed Central

    Gava, Caroline; Cardoso, Andrey Moreira; Basta, Paulo Cesar

    2017-01-01

    ABSTRACT OBJECTIVE To analyze the quality of records for live births and infant deaths and to estimate the infant mortality rate by skin color or race, in order to explore possible racial inequalities in health. METHODS Descriptive study that analyzed the quality of records of the Live Births Information System and the Mortality Information System in Rondônia, Brazilian Amazon, between 2006 and 2009. The infant mortality rates were estimated by skin color or race with the direct method and corrected by: (1) proportional distribution of deaths with missing data on skin color or race; and (2) application of correction factors. We also calculated proportional mortality by cause and age group. RESULTS The capture of live births and deaths improved relative to 2006-2007, requiring lower correction factors to estimate the infant mortality rate. The risk of death of indigenous infants (31.3/1,000 live births) was higher than that noted for the other skin color or race groups, exceeding by 60% the infant mortality rate in Rondônia (19.9/1,000 live births). Black children had the highest neonatal infant mortality rate, while indigenous children had the highest post-neonatal infant mortality rate. Among the indigenous deaths, 15.2% were due to ill-defined causes, while the other groups did not exceed 5.4%. The proportional infant mortality due to infectious and parasitic diseases was highest among indigenous children (12.1%), while among black children external causes predominated (8.7%). CONCLUSIONS Expressive inequalities in infant mortality were noted between skin color or race categories, most unfavorable for indigenous infants. Correction factors proposed in the literature fail to consider differences in the underreporting of deaths by skin color or race. Category-specific correction would likely exacerbate the observed inequalities. PMID:28423134
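    The two-step estimation described in the methods (proportional redistribution of deaths with missing color or race, then an under-registration correction factor) can be sketched as follows, with illustrative numbers; the single correction factor here is a placeholder rather than the published group-specific factors:

```python
def infant_mortality_rates(deaths_by_group, live_births_by_group,
                           deaths_missing_group, correction_factor=1.0):
    """Direct-method infant mortality rate per 1,000 live births by group:
    (1) redistribute deaths with missing group proportionally to the
    known-group death counts, then (2) apply an overall correction factor
    for under-registration (illustrative single factor)."""
    total_known = sum(deaths_by_group.values())
    rates = {}
    for g, d in deaths_by_group.items():
        d_adj = d + deaths_missing_group * d / total_known
        rates[g] = correction_factor * 1000.0 * d_adj / live_births_by_group[g]
    return rates
```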

  13. A 3D correction method for predicting the readings of a PinPoint chamber on the CyberKnife® M6™ machine

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqian; Brandner, Edward; Ozhasoglu, Cihat; Lalonde, Ron; Heron, Dwight E.; Saiful Huq, M.

    2018-02-01

    The use of small fields in radiation therapy techniques has increased substantially in particular in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). However, as field size reduces further still, the response of the detector changes more rapidly with field size, and the effects of measurement uncertainties become increasingly significant due to the lack of lateral charged particle equilibrium, spectral changes as a function of field size, detector choice, and subsequent perturbations of the charged particle fluence. This work presents a novel 3D dose volume-to-point correction method to predict the readings of a 0.015 cc PinPoint chamber (PTW 31014) for both small static-fields and composite-field dosimetry formed by fixed cones on the CyberKnife® M6™ machine. A 3D correction matrix is introduced to link the 3D dose distribution to the response of the PinPoint chamber in water. The parameters of the correction matrix are determined by modeling its 3D dose response in circular fields created using the 12 fixed cones (5 mm-60 mm) on a CyberKnife® M6™ machine. A penalized least-square optimization problem is defined by fitting the calculated detector reading to the experimental measurement data to generate the optimal correction matrix; the simulated annealing algorithm is used to solve the inverse optimization problem. All the experimental measurements are acquired for every 2 mm chamber shift in the horizontal planes for each field size. The 3D dose distributions for the measurements are calculated using the Monte Carlo calculation with the MultiPlan® treatment planning system (Accuray Inc., Sunnyvale, CA, USA). The performance evaluation of the 3D conversion matrix is carried out by comparing the predictions of the output factors (OFs), off-axis ratios (OARs) and percentage depth dose (PDD) data to the experimental measurement data. 
The discrepancy between the measured and predicted data for composite fields was also assessed for clinical SRS plans. The optimization algorithm used for generating the optimal correction factors is stable, and the resulting correction factors are smooth in the spatial domain. The measured and predicted OFs agree closely, with percentage differences of less than 1.9% for all 12 cones. The discrepancies between the predicted and measured PDD readings at 50 mm and 80 mm depth are 1.7% and 1.9%, respectively. The percentage differences in OARs between measured and predicted data are less than 2% in the low dose gradient region, and 2%/1 mm discrepancies are observed within the high dose gradient regions. The differences between the measured and predicted data for all the CyberKnife-based SRS plans are less than 1%. These results demonstrate the existence and efficiency of the novel 3D correction method for small field dosimetry. The 3D correction matrix links the 3D dose distribution to the reading of the PinPoint chamber. The comparison between the predicted readings and the measured data for static small fields (OFs, OARs and PDDs) yields discrepancies within 2% for low dose gradient regions and 2%/1 mm for high dose gradient regions; the discrepancies between the predicted and measured data are less than 1% for all the SRS plans. The 3D correction method provides a means to evaluate clinical measurement data and can be applied to point-dose verification of non-standard composite fields in intensity-modulated radiation therapy.

  14. Bias Correction of MODIS AOD using DragonNET to obtain improved estimation of PM2.5

    NASA Astrophysics Data System (ADS)

    Gross, B.; Malakar, N. K.; Atia, A.; Moshary, F.; Ahmed, S. A.; Oo, M. M.

    2014-12-01

    MODIS AOD retrievals using the Dark Target algorithm are strongly affected by the underlying surface reflection properties. In particular, the operational algorithms make use of surface parameterizations trained on global datasets and therefore do not properly account for urban surface differences. This parameterization continues to underestimate the surface reflection, which results in a general over-biasing in AOD retrievals. Recent results using the Dragon-Network datasets as well as high resolution retrievals in the NYC area illustrate that this is even more significant in the newest C006 3 km retrievals. In the past, we used AERONET observations at the City College site to obtain bias-corrected AOD, but the homogeneity assumption inherent in using only one site for the region is clearly an issue. On the other hand, DragonNET observations provide ample opportunities to better tune the surface corrections while also providing better statistical validation. In this study we present a neural network method for bias correction of the MODIS AOD using multiple factors including surface reflectivity at 2130 nm, sun-view geometrical factors and land-class information. These corrected AODs are then used together with additional WRF meteorological factors to improve estimates of PM2.5. Efforts to explore the portability to other urban areas will be discussed. In addition, annual surface ratio maps will be developed, illustrating that among the land classes, the urban pixels constitute the largest deviations from the operational model.

  15. Implementation of in vivo Dosimetry with Isorad™ Semiconductor Diodes in Radiotherapy Treatments of the Pelvis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rodriguez, Miguel L.; Abrego, Eladio; Pineda, Amalia

    2008-04-01

    This report describes the results obtained with the Isorad™ (Red) semiconductor detectors for implementing an in vivo dosimetry program in patients undergoing radiotherapy treatment of the pelvis. Four n-type semiconductor diodes were studied to characterize them for the application. The diode calibration consisted of establishing reading-to-dose conversion factors in reference conditions and a set of correction factors accounting for deviations of the diode response from that of an ion chamber. Treatments of the pelvis were performed using an isocentric 'box' technique employing an 18 MV beam with the field shapes defined by a multileaf collimator. The method of Rizzotti-Leunen was used to assess the dose at the isocenter based on measurements of the in vivo dose at the entrance and at the exit of each radiation field. The in vivo dose was evaluated for a population of 80 patients. The diodes exhibit good characteristics for use in in vivo dosimetry; however, the high attenuation of the beam they produce (~12% at 5.0-cm depth), and some important correction factors, must be taken into account. The correction factors determined, including the source-to-surface factor, were within a range of ±4%. The frequency histograms of the relative difference between the expected and measured doses at the entrance, the exit, and the isocenter have mean values and standard deviations of -0.09% (2.18%), 0.77% (2.73%), and -0.11% (1.76%), respectively. The method implemented has proven to be very useful in the assessment of the in vivo dose in this kind of treatment.

  16. Improved determination of particulate absorption from combined filter pad and PSICAM measurements.

    PubMed

    Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David

    2016-10-31

    Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
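    The core of the correction described above is an ordinary linear regression of raw filter-pad absorption against PSICAM absorption: the slope plays the role of the pathlength amplification factor β and the intercept that of the scattering offset. A sketch on synthetic spectra, with the numbers purely illustrative:

```python
import numpy as np

def beta_and_offset(a_filterpad_raw, a_psicam):
    """Fit a_filterpad_raw = beta * a_psicam + offset by least squares;
    the slope is the pathlength amplification factor beta and the
    intercept the scattering offset."""
    beta, offset = np.polyfit(a_psicam, a_filterpad_raw, 1)
    return beta, offset

def correct_filterpad(a_filterpad_raw, beta, offset):
    """Recover particulate absorption from a raw filter-pad spectrum
    using the regression-derived beta and offset."""
    return (a_filterpad_raw - offset) / beta
```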

  17. Analysis of self-reported versus biomarker based smoking prevalence: methodology to compute corrected smoking prevalence rates.

    PubMed

    Jain, Ram B

    2017-07-01

    Prevalence of smoking is needed to estimate the need for future public health resources. The objective was to compute and compare smoking prevalence rates using self-reported smoking status, two serum cotinine (SCOT) based biomarker methods, and one urinary 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) based biomarker method, and then to develop correction factors applicable to self-reported prevalences to arrive at corrected smoking prevalence rates. Data from the National Health and Nutrition Examination Survey (NHANES) for 2007-2012 for those aged ≥20 years (N = 16826) were used. The self-reported prevalence rate for the total population was 21.6% when computed as the weighted number of self-reported smokers divided by the weighted number of all participants, and 24% when computed as the weighted number of self-reported smokers divided by the weighted number of self-reported smokers and nonsmokers. The corrected prevalence rate was found to be 25.8%. A 1% underestimate in smoking prevalence is equivalent to failing to identify 2.2 million smokers in the US in a given year. This underestimation, if not corrected, could lead to a serious gap in the public health services available and needed to provide adequate preventive and corrective treatment to smokers.
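    The prevalence arithmetic above is straightforward; the sketch below shows how a correction factor derived from a biomarker-based rate would be applied to a self-reported rate. The numbers in the usage are the abstract's headline figures used purely for illustration:

```python
def prevalence_pct(n_smokers_weighted, n_total_weighted):
    """Weighted prevalence (%): weighted smokers over weighted total."""
    return 100.0 * n_smokers_weighted / n_total_weighted

def correction_factor(biomarker_prev_pct, self_report_prev_pct):
    """Ratio applied to a self-reported rate so it approximates the
    biomarker-based ('corrected') rate."""
    return biomarker_prev_pct / self_report_prev_pct

def corrected_prevalence(self_report_prev_pct, factor):
    """Apply the correction factor to a self-reported prevalence."""
    return self_report_prev_pct * factor
```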

  18. A Maximum-Likelihood Method to Correct for Allelic Dropout in Microsatellite Data with No Replicate Genotypes

    PubMed Central

    Wang, Chaolong; Schroeder, Kari B.; Rosenberg, Noah A.

    2012-01-01

    Allelic dropout is a commonly observed source of missing data in microsatellite genotypes, in which one or both allelic copies at a locus fail to be amplified by the polymerase chain reaction. Especially for samples with poor DNA quality, this problem causes a downward bias in estimates of observed heterozygosity and an upward bias in estimates of inbreeding, owing to mistaken classifications of heterozygotes as homozygotes when one of the two copies drops out. One general approach for avoiding allelic dropout involves repeated genotyping of homozygous loci to minimize the effects of experimental error. Existing computational alternatives often require replicate genotyping as well. These approaches, however, are costly and are suitable only when enough DNA is available for repeated genotyping. In this study, we propose a maximum-likelihood approach together with an expectation-maximization algorithm to jointly estimate allelic dropout rates and allele frequencies when only one set of nonreplicated genotypes is available. Our method considers estimates of allelic dropout caused by both sample-specific factors and locus-specific factors, and it allows for deviation from Hardy–Weinberg equilibrium owing to inbreeding. Using the estimated parameters, we correct the bias in the estimation of observed heterozygosity through the use of multiple imputations of alleles in cases where dropout might have occurred. With simulated data, we show that our method can (1) effectively reproduce patterns of missing data and heterozygosity observed in real data; (2) correctly estimate model parameters, including sample-specific dropout rates, locus-specific dropout rates, and the inbreeding coefficient; and (3) successfully correct the downward bias in estimating the observed heterozygosity. We find that our method is fairly robust to violations of model assumptions caused by population structure and by genotyping errors from sources other than allelic dropout. 
Because the data sets imputed under our model can be investigated in additional subsequent analyses, our method will be useful for preparing data for applications in diverse contexts in population genetics and molecular ecology. PMID:22851645

  19. Absorbed dose-to-water protocol applied to synchrotron-generated x-rays at very high dose rates

    NASA Astrophysics Data System (ADS)

    Fournier, P.; Crosbie, J. C.; Cornelius, I.; Berkvens, P.; Donzelli, M.; Clavel, A. H.; Rosenfeld, A. B.; Petasecca, M.; Lerch, M. L. F.; Bräuer-Krisch, E.

    2016-07-01

    Microbeam radiation therapy (MRT) is a new radiation treatment modality in the pre-clinical stage of development at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. MRT exploits the dose volume effect that is made possible through the spatial fractionation of the high dose rate synchrotron-generated x-ray beam into an array of microbeams. As an important step towards the development of a dosimetry protocol for MRT, we have applied the International Atomic Energy Agency’s TRS 398 absorbed dose-to-water protocol to the synchrotron x-ray beam in the case of the broad beam irradiation geometry (i.e. prior to spatial fractionation into microbeams). The very high dose rates observed here mean the ion recombination correction factor, k_s, is the most challenging to quantify of all the necessary corrections to apply for ionization chamber based absolute dosimetry. In the course of this study, we have developed a new method, the so-called ‘current ramping’ method, to determine k_s for the specific irradiation and filtering conditions typically utilized throughout the development of MRT. Using the new approach, we deduced an ion recombination correction factor of 1.047 for the maximum ESRF storage ring current (200 mA) under typical beam spectral filtering conditions in MRT. MRT trials are currently underway with veterinary patients at the ESRF that require additional filtering, and we have estimated a correction factor of 1.025 for these filtration conditions for the same ESRF storage ring current. The protocol described herein provides reference dosimetry data for the associated Treatment Planning System utilized in the current veterinary trials and anticipated future human clinical trials.
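
    The abstract does not give the details of the new 'current ramping' method, but the standard TRS-398 two-voltage estimate of k_s for continuous beams, which that method replaces in this high dose rate setting, can be sketched as follows (the readings and voltages below are illustrative, not measured values).

```python
def ks_two_voltage_continuous(m1, m2, v1, v2):
    """TRS-398 two-voltage estimate of the ion recombination correction
    factor for continuous beams: chamber readings m1, m2 taken at
    polarizing voltages v1 > v2.

    k_s = ((V1/V2)^2 - 1) / ((V1/V2)^2 - M1/M2)"""
    r = (v1 / v2) ** 2
    return (r - 1.0) / (r - m1 / m2)
```

    For a well-behaved chamber the result is slightly above 1; at the very high dose rates of a synchrotron broad beam, factors as large as the 1.047 reported here make this correction the dominant one.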

  20. Accelerated acquisition of tagged MRI for cardiac motion correction in simultaneous PET-MR: Phantom and patient studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Chuan, E-mail: chuan.huang@stonybrookmedicine.edu; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115; Departments of Radiology, Psychiatry, Stony Brook Medicine, Stony Brook, New York 11794

    2015-02-15

    Purpose: Degradation of image quality caused by cardiac and respiratory motions hampers the diagnostic quality of cardiac PET. It has been shown that improved diagnostic accuracy of myocardial defect can be achieved by tagged MR (tMR) based PET motion correction using simultaneous PET-MR. However, one major hurdle for the adoption of tMR-based PET motion correction in the PET-MR routine is the long acquisition time needed for the collection of fully sampled tMR data. In this work, the authors propose an accelerated tMR acquisition strategy using parallel imaging and/or compressed sensing and assess the impact on the tMR-based motion corrected PET using phantom and patient data. Methods: Fully sampled tMR data were acquired simultaneously with PET list-mode data on two simultaneous PET-MR scanners for a cardiac phantom and a patient. Parallel imaging and compressed sensing were retrospectively performed by GRAPPA and kt-FOCUSS algorithms with various acceleration factors. Motion fields were estimated using nonrigid B-spline image registration from both the accelerated and fully sampled tMR images. The motion fields were incorporated into a motion corrected ordered subset expectation maximization reconstruction algorithm with motion-dependent attenuation correction. Results: Although tMR acceleration introduced image artifacts into the tMR images for both phantom and patient data, motion corrected PET images yielded image quality similar to that obtained using the fully sampled tMR images for low to moderate acceleration factors (<4). Quantitative analysis of myocardial defect contrast over ten independent noise realizations showed similar results. It was further observed that although the image quality of the motion corrected PET images deteriorates for high acceleration factors, the images were still superior to the images reconstructed without motion correction.
Conclusions: Accelerated tMR images obtained with more than 4 times acceleration can still provide relatively accurate motion fields and yield tMR-based motion corrected PET images with image quality similar to that of images reconstructed using fully sampled tMR data. The reduction of tMR acquisition time makes it more compatible with routine clinical cardiac PET-MR studies.
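
    The retrospective acceleration applied here starts from a sampling mask over k-space phase-encode lines. A minimal Cartesian sketch (the fully sampled center band and all parameter values are illustrative; GRAPPA and kt-FOCUSS reconstruction itself is far more involved):

```python
def undersampling_mask(n_lines, accel, n_center=8):
    """Boolean mask over phase-encode lines: keep every `accel`-th line
    plus a fully sampled central band, as is typical for retrospective
    parallel-imaging experiments."""
    keep = [i % accel == 0 for i in range(n_lines)]
    lo = n_lines // 2 - n_center // 2
    for i in range(lo, lo + n_center):
        keep[i] = True          # auto-calibration region stays fully sampled
    return keep

def effective_acceleration(mask):
    """Nominal acceleration is reduced slightly by the calibration band."""
    return len(mask) / sum(mask)
```

    The effective acceleration is what determines the acquisition-time saving the conclusion refers to.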

  1. Wall interference assessment and corrections

    NASA Technical Reports Server (NTRS)

    Newman, P. A.; Kemp, W. B., Jr.; Garriz, J. A.

    1989-01-01

    Wind tunnel wall interference assessment and correction (WIAC) concepts, applications, and typical results are discussed in terms of several nonlinear transonic codes and one panel method code developed for and being implemented at NASA-Langley. Contrasts between 2-D and 3-D transonic testing factors which affect WIAC procedures are illustrated using airfoil data from the 0.3 m Transonic Cryogenic Tunnel and Pathfinder 1 data from the National Transonic Facility. Initial results from the 3-D WIAC codes are encouraging; research on and implementation of WIAC concepts continue.

  2. “What's the right thing to do?” Correctional healthcare providers' knowledge, attitudes and experiences caring for transgender inmates

    PubMed Central

    Clark, Kirsty A.; White Hughto, Jaclyn M.; Pachankis, John E.

    2017-01-01

    Rationale: Incarcerated transgender individuals may need to access physical and mental health services to meet their general and gender-affirming (e.g., hormones, surgery) medical needs while incarcerated. Objective: This study sought to examine correctional healthcare providers’ knowledge of, attitudes toward, and experiences providing care to transgender inmates. Method: In 2016, 20 correctional healthcare providers (e.g., physicians, social workers, psychologists, mental health counselors) from New England participated in in-depth, semi-structured interviews examining their experiences caring for transgender inmates. The interview guide drew on healthcare-related interviews with recently incarcerated transgender women and key informant interviews with correctional healthcare providers and administrators. Data were analyzed using a modified grounded theory framework and thematic analysis. Results: Findings revealed that transgender inmates do not consistently receive adequate or gender-affirming care while incarcerated. Factors at the structural level (i.e., lack of training, restrictive healthcare policies, limited budget, and an unsupportive prison culture); interpersonal level (i.e., custody staff bias); and individual level (i.e., lack of transgender cultural and clinical competence) impede correctional healthcare providers’ ability to provide gender-affirming care to transgender patients. These factors result in negative health consequences for incarcerated transgender patients. Conclusions: Results call for transgender-specific healthcare policy changes and the implementation of transgender competency trainings for both correctional healthcare providers and custody staff (e.g., officers, lieutenants, wardens). PMID:29028559

  3. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization.

    PubMed

    Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf

    2018-06-01

    The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets, acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR), T1, and a T2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor CΔϕ(α, TR, T1) = α'/α were determined. CΔϕ was found to be independent of T1 and fitted as a polynomial CΔϕ(α, TR), allowing α' to be calculated for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
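
    The correction reduces to substituting α' = CΔϕ·α into the standard linearized VFA fit. A two-point sketch follows; the `corr` argument is a placeholder for the paper's fitted polynomial values, which are not reproduced here.

```python
import math

def ernst_signal(alpha_deg, tr_ms, t1_ms, m0=1.0):
    """Ideally spoiled gradient-echo signal (Ernst equation)."""
    a = math.radians(alpha_deg)
    e1 = math.exp(-tr_ms / t1_ms)
    return m0 * math.sin(a) * (1 - e1) / (1 - e1 * math.cos(a))

def vfa_t1(signals, alphas_deg, tr_ms, corr=None):
    """Two-point VFA T1 estimate via the linearized Ernst equation
    S/sin(a) = E1 * S/tan(a) + M0 * (1 - E1). If `corr` is given, each
    flip angle is replaced by its corrected value alpha' = C * alpha."""
    if corr is not None:
        alphas_deg = [c * a for c, a in zip(corr, alphas_deg)]
    x = [s / math.tan(math.radians(a)) for s, a in zip(signals, alphas_deg)]
    y = [s / math.sin(math.radians(a)) for s, a in zip(signals, alphas_deg)]
    e1 = (y[1] - y[0]) / (x[1] - x[0])   # slope of the two-point line = E1
    return -tr_ms / math.log(e1)
```

    With ideal spoiling (corr omitted) the fit recovers T1 exactly; under insufficient spoiling, the same fit applied to corrected angles restores accuracy.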

  4. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:
    88 dB(A) Uncorrected average of readings
    −3 dB(A) Distance correction factor
    +2 dB(A) Ground surface correction factor
    = 87 dB(A) Corrected reading
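
    The correction factors in the regulation are purely additive, so the worked example reduces to a sum:

```python
def corrected_reading(uncorrected_avg_dba, *corrections_dba):
    """Apply additive correction factors (e.g. distance and ground
    surface, in dB(A)) to an uncorrected average sound-level reading."""
    return uncorrected_avg_dba + sum(corrections_dba)
```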

  5. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:
    88 dB(A) Uncorrected average of readings
    −3 dB(A) Distance correction factor
    +2 dB(A) Ground surface correction factor
    = 87 dB(A) Corrected reading

  6. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:
    88 dB(A) Uncorrected average of readings
    −3 dB(A) Distance correction factor
    +2 dB(A) Ground surface correction factor
    = 87 dB(A) Corrected reading

  7. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method

    NASA Astrophysics Data System (ADS)

    Nguyen, Huong Giang T.; Horn, Jarod C.; Thommes, Matthias; van Zee, Roger D.; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.
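
    The core of any buoyancy correction is adding back the weight of gas displaced by the sample, holder, and balance components. An ideal-gas sketch, adequate far from the critical point (the regime the recommendations above target), with illustrative values:

```python
R_GAS = 8.314462618  # J/(mol K)

def gas_density(p_pa, t_k, molar_mass_kg):
    """Ideal-gas density; a real-gas equation of state is needed near
    critical conditions."""
    return p_pa * molar_mass_kg / (R_GAS * t_k)

def excess_uptake(delta_m_apparent_kg, p_pa, t_k, molar_mass_kg, v_disp_m3):
    """Buoyancy-corrected surface excess mass: apparent balance reading
    plus the mass of gas displaced by the total volume v_disp_m3
    (determined, e.g., from helium or blank measurements)."""
    return delta_m_apparent_kg + gas_density(p_pa, t_k, molar_mass_kg) * v_disp_m3
```

    Because the buoyancy term scales with gas density, errors in the assigned volumes or in the temperature partitioning grow with pressure, which is why the blank-subtraction approach above matters most for high-pressure isotherms.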

  8. Experimental aspects of buoyancy correction in measuring reliable high-pressure excess adsorption isotherms using the gravimetric method.

    PubMed

    Nguyen, Huong Giang T; Horn, Jarod C; Thommes, Matthias; van Zee, Roger D; Espinal, Laura

    2017-12-01

    Addressing reproducibility issues in adsorption measurements is critical to accelerating the path to discovery of new industrial adsorbents and to understanding adsorption processes. A National Institute of Standards and Technology Reference Material, RM 8852 (ammonium ZSM-5 zeolite), and two gravimetric instruments with asymmetric two-beam balances were used to measure high-pressure adsorption isotherms. This work demonstrates how common approaches to buoyancy correction, a key factor in obtaining the mass change due to surface excess gas uptake from the apparent mass change, can impact the adsorption isotherm data. Three different approaches to buoyancy correction were investigated and applied to the subcritical CO2 and supercritical N2 adsorption isotherms at 293 K. It was observed that measuring a collective volume for all balance components for the buoyancy correction (helium method) introduces an inherent bias in temperature partition when there is a temperature gradient (i.e. analysis temperature is not equal to instrument air bath temperature). We demonstrate that a blank subtraction is effective in mitigating the biases associated with temperature partitioning, instrument calibration, and the determined volumes of the balance components. In general, the manual and subtraction methods allow for better treatment of the temperature gradient during buoyancy correction. From the study, best practices specific to asymmetric two-beam balances and more general recommendations for measuring isotherms far from critical temperatures using gravimetric instruments are offered.

  9. Comparison of haemoglobin estimates using direct & indirect cyanmethaemoglobin methods.

    PubMed

    Bansal, Priyanka Gupta; Toteja, Gurudayal Singh; Bhatia, Neena; Gupta, Sanjeev; Kaur, Manpreet; Adhikari, Tulsi; Garg, Ashok Kumar

    2016-10-01

    Estimation of haemoglobin is the most widely used method to assess anaemia. Although the direct cyanmethaemoglobin method is the recommended method for estimation of haemoglobin, it may not be feasible under field conditions. Hence, the present study was undertaken to compare the indirect cyanmethaemoglobin method against the conventional direct method for haemoglobin estimation. Haemoglobin levels were estimated for 888 adolescent girls aged 11-18 yr residing in an urban slum in Delhi by both direct and indirect cyanmethaemoglobin methods, and the results were compared. The mean haemoglobin levels for 888 whole blood samples estimated by the direct and indirect cyanmethaemoglobin methods were 116.1 ± 12.7 and 110.5 ± 12.5 g/l, respectively, with a mean difference of 5.67 g/l (95% confidence interval: 5.45 to 5.90, P<0.001), which is equivalent to 0.567 g%. The prevalence of anaemia was reported as 59.6 and 78.2 per cent by the direct and indirect methods, respectively. Sensitivity and specificity of the indirect cyanmethaemoglobin method were 99.2 and 56.4 per cent, respectively. Using regression analysis, a prediction equation was developed for indirect haemoglobin values. The present findings revealed that the indirect cyanmethaemoglobin method overestimated the prevalence of anaemia as compared to the direct method. However, if a correction factor is applied, the indirect method could be successfully used for estimating the true haemoglobin level. More studies should be undertaken to establish the agreement and correction factor between direct and indirect cyanmethaemoglobin methods.
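
    The correction described amounts to regressing direct readings on indirect ones and using the fitted line as the prediction equation. A least-squares sketch; the study's actual equation is not given in the abstract, so the numbers in the test data are illustrative (built from the reported 5.67 g/l mean difference):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y against x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_direct_hb(indirect_gl, slope, intercept):
    """Map an indirect cyanmethaemoglobin reading (g/l) onto the
    direct-method scale using the fitted prediction equation."""
    return slope * indirect_gl + intercept
```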

  10. LA-ICP-MS depth profile analysis of apatite: Protocol and implications for (U-Th)/He thermochronometry

    NASA Astrophysics Data System (ADS)

    Johnstone, Samuel; Hourigan, Jeremy; Gallagher, Christopher

    2013-05-01

    Heterogeneous concentrations of α-producing nuclides in apatite have been recognized through a variety of methods. The presence of zonation in apatite complicates both traditional α-ejection corrections and diffusive models, both of which operate under the assumption of homogeneous concentrations. In this work we develop a method for measuring radial concentration profiles of 238U and 232Th in apatite by laser ablation ICP-MS depth profiling. We then focus on one application of this method, removing bias introduced by applying inappropriate α-ejection corrections. Formal treatment of laser ablation ICP-MS depth profile calibration for apatite includes construction and calibration of matrix-matched standards and quantification of rates of elemental fractionation. From this we conclude that matrix-matched standards provide more robust monitors of fractionation rate and concentrations than doped silicate glass standards. We apply laser ablation ICP-MS depth profiling to apatites from three unknown populations and small, intact crystals of Durango fluorapatite. Accurate and reproducible Durango apatite dates suggest that prolonged exposure to laser drilling does not impact cooling ages. Intracrystalline concentrations vary by at least a factor of 2 in the majority of the samples analyzed, but concentration variation only exceeds 5x in 5 grains and 10x in 1 out of the 63 grains analyzed. Modeling of synthetic concentration profiles suggests that for concentration variations of 2x and 10x individual homogeneous versus zonation dependent α-ejection corrections could lead to age bias of >5% and >20%, respectively. However, models based on measured concentration profiles only generated biases exceeding 5% in 13 of the 63 cases modeled. Application of zonation dependent α-ejection corrections did not significantly reduce the age dispersion present in any of the populations studied. 
This suggests that factors beyond homogeneous α-ejection corrections are the dominant source of overdispersion in apatite (U-Th)/He cooling ages.
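
    The age bias from applying a homogeneous rather than a zonation-dependent α-ejection correction follows directly from the F_T formalism; the retention-factor values below are illustrative, not the paper's.

```python
def ft_corrected_age(raw_age_ma, ft):
    """Alpha-ejection correction: measured (U-Th)/He age divided by the
    alpha-retention factor F_T (0 < F_T <= 1)."""
    return raw_age_ma / ft

def correction_bias(ft_zoned, ft_homogeneous):
    """Fractional age bias incurred by using the homogeneous-concentration
    F_T when the zonation-dependent value is the appropriate one."""
    return ft_zoned / ft_homogeneous - 1.0
```

    A 10% mismatch between the two retention factors maps directly into a 10% age bias, which is the scale of effect the modeling above explores for 2x and 10x concentration variations.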

  11. Selection of neural network structure for system error correction of electro-optical tracker system with horizontal gimbal

    NASA Astrophysics Data System (ADS)

    Liu, Xing-fa; Cen, Ming

    2007-12-01

    The neural network system error correction method is more precise than the least-squares and spherical harmonic function correction methods. Its accuracy depends mainly on the architecture of the neural network. Analysis and simulation show that both the BP and the RBF neural network correction methods achieve high correction accuracy; for small training sample sets, the RBF network is preferable to the BP network when training speed and network scale are taken into account.

  12. Improving salt marsh digital elevation model accuracy with full-waveform lidar and nonparametric predictive modeling

    NASA Astrophysics Data System (ADS)

    Rogers, Jeffrey N.; Parrish, Christopher E.; Ward, Larry G.; Burdick, David M.

    2018-03-01

    Salt marsh vegetation tends to increase vertical uncertainty in light detection and ranging (lidar) derived elevation data, often causing the data to become ineffective for analysis of topographic features governing tidal inundation or vegetation zonation. Previous attempts at improving lidar data collected in salt marsh environments range from simply computing and subtracting the global elevation bias to more complex methods such as computing vegetation-specific, constant correction factors. The vegetation specific corrections can be used along with an existing habitat map to apply separate corrections to different areas within a study site. It is hypothesized here that correcting salt marsh lidar data by applying location-specific, point-by-point corrections, which are computed from lidar waveform-derived features, tidal-datum based elevation, distance from shoreline and other lidar digital elevation model based variables, using nonparametric regression will produce better results. The methods were developed and tested using full-waveform lidar and ground truth for three marshes in Cape Cod, Massachusetts, U.S.A. Five different model algorithms for nonparametric regression were evaluated, with TreeNet's stochastic gradient boosting algorithm consistently producing better regression and classification results. Additionally, models were constructed to predict the vegetative zone (high marsh and low marsh). The predictive modeling methods used in this study estimated ground elevation with a mean bias of 0.00 m and a standard deviation of 0.07 m (0.07 m root mean square error). These methods appear very promising for correction of salt marsh lidar data and, importantly, do not require an existing habitat map, biomass measurements, or image based remote sensing data such as multi/hyperspectral imagery.
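
    TreeNet's implementation is proprietary, but the core of gradient boosting for squared loss, fitting each new depth-1 tree to the current residuals, can be sketched in a few lines. This is purely illustrative: one feature, no stochastic subsampling, and none of the waveform-derived predictors used in the study.

```python
def best_stump(xs, ys):
    """Depth-1 regression tree on a single feature: the threshold
    minimizing squared error, with left/right leaf means."""
    best = None
    for t in sorted(set(xs))[1:]:
        left = [y for x, y in zip(xs, ys) if x < t]
        right = [y for x, y in zip(xs, ys) if x >= t]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, lm, rm)
    return best[1:]

def boost(xs, ys, rounds=200, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the residuals
    of the running prediction, shrunk by the learning rate."""
    pred = [0.0] * len(ys)
    stumps = []
    for _ in range(rounds):
        resid = [y - p for y, p in zip(ys, pred)]
        t, lm, rm = best_stump(xs, resid)
        stumps.append((t, lr * lm, lr * rm))
        pred = [p + (lr * lm if x < t else lr * rm) for p, x in zip(pred, xs)]
    return stumps

def boost_predict(stumps, x):
    return sum(lm if x < t else rm for t, lm, rm in stumps)
```

    In the study's setting the prediction target would be the lidar elevation error at each point, estimated from waveform features and subtracted as a point-by-point correction.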

  13. Effect of sample stratification on dairy GWAS results

    PubMed Central

    2012-01-01

    Background Artificial insemination and genetic selection are major factors contributing to population stratification in dairy cattle. In this study, we analyzed the effect of sample stratification and the effect of stratification correction on results of a dairy genome-wide association study (GWAS). Three methods for stratification correction were used: the efficient mixed-model association expedited (EMMAX) method accounting for correlation among all individuals, a generalized least squares (GLS) method based on half-sib intraclass correlation, and a principal component analysis (PCA) approach. Results Historical pedigree data revealed that the 1,654 contemporary cows in the GWAS were all related when traced through approximately 10–15 generations of ancestors. Genome and phenotype stratifications had a striking overlap with the half-sib structure. A large elite half-sib family of cows contributed to the detection of favorable alleles that had low frequencies in the general population and high frequencies in the elite cows and contributed to the detection of X chromosome effects. All three methods for stratification correction reduced the number of significant effects. The EMMAX method had the most severe reduction in the number of significant effects, and the PCA method using 20 principal components and GLS had similar significance levels. Removal of the elite cows from the analysis without using stratification correction removed many effects that were also removed by the three methods for stratification correction, indicating that stratification correction could have removed some true effects due to the elite cows. SNP effects with good consensus between different methods and effect size distributions from USDA’s Holstein genomic evaluation included the DGAT1-NIBP region of BTA14 for production traits, a SNP 45 kb upstream from PIGY on BTA6 and two SNPs in NIBP on BTA14 for protein percentage.
However, most of these consensus effects had similar frequencies in the elite and average cows. Conclusions Genetic selection and extensive use of artificial insemination contributed to overlapped genome, pedigree and phenotype stratifications. The presence of an elite cluster of cows was related to the detection of rare favorable alleles that had high frequencies in the elite cluster and low frequencies in the remaining cows. Methods for stratification correction could have removed some true effects associated with genetic selection. PMID:23039970

  14. Adjustment of spatio-temporal precipitation patterns in a high Alpine environment

    NASA Astrophysics Data System (ADS)

    Herrnegger, Mathew; Senoner, Tobias; Nachtnebel, Hans-Peter

    2018-01-01

    This contribution presents a method for correcting the spatial and temporal distribution of precipitation fields in a mountainous environment. The approach is applied within a flood forecasting model in the Upper Enns catchment in the Central Austrian Alps. Precipitation exhibits a large spatio-temporal variability in Alpine areas. Additionally the density of the monitoring network is low and measurements are subjected to major errors. This can lead to significant deficits in water balance estimation and stream flow simulations, e.g. for flood forecasting models. Therefore precipitation correction factors are frequently applied. For the presented study a multiplicative, stepwise linear correction model is implemented in the rainfall-runoff model COSERO to adjust the precipitation pattern as a function of elevation. To account for the local meteorological conditions, the correction model is derived for two elevation zones: (1) Valley floors to 2000 m a.s.l. and (2) above 2000 m a.s.l. to mountain peaks. Measurement errors also depend on the precipitation type, with higher magnitudes in winter months during snow fall. Therefore, additionally, separate correction factors for winter and summer months are estimated. Significant improvements in the runoff simulations could be achieved, not only in the long-term water balance simulation and the overall model performance, but also in the simulation of flood peaks.
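
    A stepwise multiplicative correction keyed to elevation zone and season, as described, might look like the following sketch. The factor values are hypothetical; the calibrated values used in COSERO are not given in the abstract.

```python
# Hypothetical stepwise multiplicative correction factors, keyed by
# (elevation zone, season); winter factors are larger because gauge
# undercatch is worst during snowfall.
FACTORS = {
    ("low", "summer"): 1.05, ("low", "winter"): 1.15,
    ("high", "summer"): 1.15, ("high", "winter"): 1.35,
}

def zone(elev_m):
    """Two elevation zones split at 2000 m a.s.l., as in the study."""
    return "low" if elev_m <= 2000.0 else "high"

def season(month):
    """Illustrative May-October summer half-year."""
    return "summer" if 5 <= month <= 10 else "winter"

def correct_precip(p_mm, elev_m, month):
    """Multiplicative precipitation correction for one gauge value."""
    return p_mm * FACTORS[(zone(elev_m), season(month))]
```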

  15. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    DOE PAGES

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; ...

    2016-11-25

    Here, we present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography–mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration.
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.
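
    The binning step itself is straightforward: sum the mass spectra within evenly spaced retention-time windows, producing one row per sample for the PMF input matrix. A minimal sketch; the data layout is assumed, not the TAG software's actual format.

```python
def bin_chromatogram(scans, rt_max, bin_width):
    """Sum mass spectra within evenly spaced retention-time bins.

    scans: list of (retention_time, {mz: intensity}) pairs for one sample.
    Returns a list of per-bin summed spectra; flattening these over
    (bin, m/z) gives one row of the PMF input matrix."""
    n_bins = int(rt_max / bin_width)
    bins = [dict() for _ in range(n_bins)]
    for rt, spec in scans:
        b = min(int(rt / bin_width), n_bins - 1)   # clamp edge scans
        for mz, inten in spec.items():
            bins[b][mz] = bins[b].get(mz, 0.0) + inten
    return bins
```

    Stacking the flattened rows across samples yields the two-dimensional matrix on which ordinary PMF effectively performs the three-dimensional factorization described above.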

  16. A technique for rapid source apportionment applied to ambient organic aerosol measurements from a thermal desorption aerosol gas chromatograph (TAG)

    NASA Astrophysics Data System (ADS)

    Zhang, Yaping; Williams, Brent J.; Goldstein, Allen H.; Docherty, Kenneth S.; Jimenez, Jose L.

    2016-11-01

    We present a rapid method for apportioning the sources of atmospheric organic aerosol composition measured by gas chromatography-mass spectrometry methods. Here, we specifically apply this new analysis method to data acquired on a thermal desorption aerosol gas chromatograph (TAG) system. Gas chromatograms are divided by retention time into evenly spaced bins, within which the mass spectra are summed. A previous chromatogram binning method was introduced for the purpose of chromatogram structure deconvolution (e.g., major compound classes) (Zhang et al., 2014). Here we extend the method development for the specific purpose of determining aerosol samples' sources. Chromatogram bins are arranged into an input data matrix for positive matrix factorization (PMF), where the sample number is the row dimension and the mass-spectra-resolved eluting time intervals (bins) are the column dimension. Then two-dimensional PMF can effectively do three-dimensional factorization on the three-dimensional TAG mass spectra data. The retention time shift of the chromatogram is corrected by applying the median values of the different peaks' shifts. Bin width affects chemical resolution but does not affect PMF retrieval of the sources' time variations for low-factor solutions. A bin width smaller than the maximum retention shift among all samples requires retention time shift correction. A six-factor PMF comparison among aerosol mass spectrometry (AMS), TAG binning, and conventional TAG compound integration methods shows that the TAG binning method performs similarly to the integration method. However, the new binning method incorporates the entirety of the data set and requires significantly less pre-processing of the data than conventional single compound identification and integration. 
In addition, while a fraction of the most oxygenated aerosol does not elute through an underivatized TAG analysis, the TAG binning method does have the ability to achieve molecular level resolution on other bulk aerosol components commonly observed by the AMS.

  17. TH-CD-BRA-05: First Water Calorimetric Dw Measurement and Direct Measurement of Magnetic Field Correction Factors, KQ,B, in a 1.5 T B-Field of An MRI Linac

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prez, L de; Pooter, J de; Jansen, B

    2016-06-15

    Purpose: Reference dosimetry in MR-guided radiotherapy is performed in the presence of a B-field. As a consequence, the response of ionization chambers changes considerably and depends on parameters not considered in traditional reference dosimetry. Therefore, future Codes of Practice need ionization chamber correction factors to correct for both the change in beam quality and the presence of a B-field. The objective was to study the feasibility of water calorimetric absorbed-dose measurements in the 1.5 T B-field of an MRLinac and of the direct measurement of kQ,B in ionization chamber calibration. Methods: Calorimetric absorbed dose to water Dw was measured with a new water calorimeter in the bore of an MRLinac (TPR20,10 of 0.702). Two waterproof ionization chambers (PTW 30013, IBA FC-65G) were calibrated inside the calorimeter phantom (ND,w,Q,B). Both measurements were normalized to a monitor ionization chamber. Ionization chamber measurements were corrected for conventional influence parameters. The chambers’ Co-60 calibration coefficients (ND,w,Q0) were measured directly against the calorimeter. In this study the correction factor kQ,B was determined as the ratio of the calibration coefficients in the MRLinac and in Co-60. Additionally, kB was determined based on kQ values obtained with the IAEA TRS-398 Code of Practice. Results: The kQ,B factors of the ionization chambers mentioned above were respectively 0.9488(8) and 0.9445(8), with resulting kB factors of 0.961(13) and 0.952(13), with standard uncertainties on the least significant digit(s) between brackets. Conclusion: Calorimetric Dw measurements and calibration of waterproof ionization chambers were successfully carried out in the 1.5 T B-field of an MRLinac with a standard uncertainty of 0.7%. Preliminary kQ,B and kB factors were determined with standard uncertainties of respectively 0.8% and 1.3%. The kQ,B agrees with an alternative method within 0.4%.
The feasibility of water calorimetry in the presence of B-fields was demonstrated by the direct determination of Dw and kQ,B. This work was supported by EMRP grant HLT06. The EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union.
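
    The abstract defines kQ,B as the ratio of the two calibration coefficients, and kB as the residual after dividing out a TRS-398-style kQ. A minimal sketch of that ratio construction with quadrature uncertainty propagation follows; all numeric values are illustrative stand-ins, not the paper's data:

```python
import math

def ratio_with_uncertainty(a, ua, b, ub):
    """Quotient r = a/b with standard uncertainty propagated in quadrature
    (relative uncertainties of a quotient add in quadrature)."""
    r = a / b
    ur = r * math.sqrt((ua / a) ** 2 + (ub / b) ** 2)
    return r, ur

# Hypothetical calibration coefficients (Gy/C); illustrative values only.
N_B, u_N_B = 2.736e7, 2.736e7 * 0.007   # N_D,w,Q,B measured in the MRLinac B-field
N_0, u_N_0 = 2.884e7, 2.884e7 * 0.003   # N_D,w,Q0 measured in Co-60

# k_Q,B: ratio of the calibration coefficients in the MRLinac and in Co-60.
kQB, u_kQB = ratio_with_uncertainty(N_B, u_N_B, N_0, u_N_0)

# k_B: divide out a beam-quality-only k_Q (illustrative TRS-398-style value).
kQ, u_kQ = 0.987, 0.987 * 0.010
kB, u_kB = ratio_with_uncertainty(kQB, u_kQB, kQ, u_kQ)
```

    The propagated uncertainty of kB is dominated by the kQ term, mirroring the abstract's larger (1.3%) uncertainty for kB than for kQ,B (0.8%).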

  18. SU-E-T-169: Initial Investigation into the Use of Optically Stimulated Luminescent Dosimeters (OSLDs) for In-Vivo Dosimetry of TBI Patients.

    PubMed

    Paloor, S; Aland, T; Mathew, J; Al-Hammadi, N; Hammoud, R

    2012-06-01

    To report on an initial investigation into the use of optically stimulated luminescent dosimeters (OSLDs) for in-vivo dosimetry of total body irradiation (TBI) treatments. Specifically, we report on the determination of angular dependence, sensitivity correction factors and dose calibration factors. The OSLDs investigated in our work were InLight/OSL nanoDot dosimeters (Landauer, Inc.). nanoDots are 5 mm diameter, 0.2 mm thick disks of carbon-doped Al2O3, and were read using a Landauer InLight microStar reader and associated software. OSLDs were irradiated under two setup conditions: (a) typical clinical reference conditions (95 cm SSD, 5 cm depth in solid water, 10×10 cm field size), and (b) TBI conditions (520 cm SSD, 5 cm depth in solid water, 40×40 cm field size). The angular dependence was checked for angles ranging ±60 degrees from normal incidence. In order to directly compare the sensitivity correction factors, a common dose was delivered to the OSLDs for the two setups. Pre- and post-irradiation readings were acquired. OSLDs were optically annealed using various techniques: (1) placing them over a film view box, (2) multiple scans on a flatbed optical scanner, and (3) exposure to natural room light. Under reference conditions, the calculated sensitivity correction factors of the OSLDs had an SD of 2.2% and a range of 5%. Under TBI conditions, the SD increased to 3.4% and the range to 6.0%. The variation in sensitivity correction factors between individual OSLDs across the two measurement conditions was up to 10.3%. An angular dependence of less than 1% was observed. The best bleaching method found was to keep the OSLDs on a film viewer for more than 3 hours, which reduced the normalized response to less than 1%. To obtain the most accurate results when using OSLDs for in-vivo dosimetry of TBI treatments, sensitivity correction factors and dose calibration factors should all be determined under clinical TBI conditions. 
© 2012 American Association of Physicists in Medicine.
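
    A sensitivity correction factor of the kind determined above is commonly computed by normalizing each dosimeter's reading to the batch mean after a common delivered dose. A minimal sketch with made-up readings; the SD and range statistics are the same kind quoted in the abstract, not its values:

```python
# Illustrative OSLD counts after a common delivered dose (hypothetical numbers).
readings = [51230, 49875, 50540, 48990, 50110, 51780, 49420, 50075]

mean_reading = sum(readings) / len(readings)

# Sensitivity correction factor: scales each dot's reading to the batch mean.
factors = [mean_reading / r for r in readings]

# Spread statistics of the factors, analogous to the abstract's SD and range.
mu = sum(factors) / len(factors)
sd = (sum((f - mu) ** 2 for f in factors) / (len(factors) - 1)) ** 0.5
rng = max(factors) - min(factors)
```

    By construction, multiplying each raw reading by its factor reproduces the batch mean, which is what makes the factors comparable across setup conditions.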

  19. Radiative nonrecoil nuclear finite size corrections of order α(Zα)5 to the Lamb shift in light muonic atoms

    NASA Astrophysics Data System (ADS)

    Faustov, R. N.; Martynenko, A. P.; Martynenko, F. A.; Sorokin, V. V.

    2017-12-01

    On the basis of the quasipotential method in quantum electrodynamics, we calculate nuclear finite-size radiative corrections of order α(Zα)5 to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of the particles, which gives the necessary contributions to the energy spectrum, we use the method of projection operators onto states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator and the amplitude with a spanning photon are obtained. We also present numerical results for these contributions using modern experimental data on the electromagnetic form factors of light nuclei.

  20. The Effects of Methods of Imputation for Missing Values on the Validity and Reliability of Scales

    ERIC Educational Resources Information Center

    Cokluk, Omay; Kayri, Murat

    2011-01-01

    The main aim of this study is the comparative examination of the factor structures, corrected item-total correlations, and Cronbach-alpha internal consistency coefficients obtained by different methods used in imputation for missing values in conditions of not having missing values, and having missing values of different rates in terms of testing…

  1. Measurement accuracy of FBG used as a surface-bonded strain sensor installed by adhesive.

    PubMed

    Xue, Guangzhe; Fang, Xinqiu; Hu, Xiukun; Gong, Libin

    2018-04-10

    Material and dimensional properties of surface-bonded fiber Bragg gratings (FBGs) can distort strain measurements, lowering measurement accuracy. To assess measurement precision and correct the obtained strain, this study proposes a new model that accounts for the reinforcement effects of the adhesive and the measured object; the model is verified to be sufficiently accurate by numerical methods. A theoretical strain correction factor is derived and shown, by numerical and experimental results, to be significantly sensitive to the recoating material and bonding length. It is also concluded that a short grating length, as well as a thin but large-area adhesive layer (preferably covering the whole FBG), can enhance the correction precision.

  2. Breakthrough in current-in-plane tunneling measurement precision by application of multi-variable fitting algorithm.

    PubMed

    Cagliani, Alberto; Østerberg, Frederik W; Hansen, Ole; Shiv, Lior; Nielsen, Peter F; Petersen, Dirch H

    2017-09-01

    We present a breakthrough in micro-four-point probe (M4PP) metrology to substantially improve precision of transmission line (transfer length) type measurements by application of advanced electrode position correction. In particular, we demonstrate this methodology for the M4PP current-in-plane tunneling (CIPT) technique. The CIPT method has been a crucial tool in the development of magnetic tunnel junction (MTJ) stacks suitable for magnetic random-access memories for more than a decade. On two MTJ stacks, the measurement precision of resistance-area product and tunneling magnetoresistance was improved by up to a factor of 3.5 and the measurement reproducibility by up to a factor of 17, thanks to our improved position correction technique.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schüller, Andreas, E-mail: andreas.schueller@ptb.de; Meier, Markus; Selbach, Hans-Joachim

    Purpose: The aim of this study was to investigate whether a chamber-type-specific radiation quality correction factor kQ can be determined in order to measure the reference air kerma rate of 60Co high-dose-rate (HDR) brachytherapy sources with acceptable uncertainty by means of a well-type ionization chamber calibrated for 192Ir HDR sources. Methods: The calibration coefficients of 35 well-type ionization chambers of two different chamber types for radiation fields of 60Co and 192Ir HDR brachytherapy sources were determined experimentally. A radiation quality correction factor kQ was determined as the ratio of the calibration coefficients for 60Co and 192Ir. The dependence on chamber-to-chamber variations, source-to-source variations, and source strength was investigated. Results: For the PTW Tx33004 (Nucletron source dosimetry system (SDS)) well-type chamber, the type-specific radiation quality correction factor kQ is 1.19. Note that this value is valid only for chambers with serial number SN ≥ 315 (Nucletron SDS SN ≥ 548). For the Standard Imaging HDR 1000 Plus well-type chambers, the type-specific correction factor kQ is 1.05. Both kQ values are independent of the source strength over the complete clinically relevant range. The relative expanded uncertainty (k = 2) of kQ is 2.1% for both chamber types. Conclusions: The calibration coefficient of a well-type chamber for radiation fields of 60Co HDR brachytherapy sources can be calculated from a given calibration coefficient for 192Ir radiation by using a chamber-type-specific radiation quality correction factor kQ. However, the uncertainty of a 60Co calibration coefficient calculated via kQ is at least twice as large as that for a direct calibration with a 60Co source.
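
    Converting an Ir-192 calibration coefficient to Co-60 with a type-specific kQ can be sketched as below. The kQ values and the 2.1% expanded (k = 2) uncertainty come from the abstract; the Ir-192 coefficient and its uncertainty are hypothetical:

```python
import math

# Type-specific radiation-quality correction factors reported in the abstract.
K_Q = {"PTW Tx33004": 1.19, "Standard Imaging HDR 1000 Plus": 1.05}

def co60_coefficient(n_ir192, chamber, u_rel_ir=0.005, U_rel_kq_k2=0.021):
    """Derive a Co-60 calibration coefficient from an Ir-192 one via k_Q.

    u_rel_ir is a hypothetical relative standard uncertainty of the Ir-192
    coefficient; the 2.1% expanded (k=2) uncertainty of k_Q is halved to a
    standard uncertainty before combining in quadrature."""
    kq = K_Q[chamber]
    n_co = n_ir192 * kq
    u_rel = math.sqrt(u_rel_ir ** 2 + (U_rel_kq_k2 / 2) ** 2)
    return n_co, u_rel

n_co, u_rel = co60_coefficient(4.90e5, "PTW Tx33004")
```

    The combined relative uncertainty is necessarily larger than either input alone, consistent with the abstract's conclusion that a kQ-derived Co-60 coefficient is less certain than a direct Co-60 calibration.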

  4. Separation, identification, quantification, and method validation of anthocyanins in botanical supplement raw materials by HPLC and HPLC-MS.

    PubMed

    Chandra, A; Rana, J; Li, Y

    2001-08-01

    A method has been established and validated for identification and quantification of individual, as well as total, anthocyanins by HPLC and LC/ES-MS in botanical raw materials used in the herbal supplement industry. The anthocyanins were separated and identified on the basis of their respective M(+) (cation) using LC/ES-MS. Separated anthocyanins were individually calculated against one commercially available anthocyanin external standard (cyanidin-3-glucoside chloride) and expressed as its equivalents. The amount of each anthocyanin calculated as external-standard equivalent was then multiplied by a molecular-weight correction factor to afford its specific quantity. The experimental procedures and the use of molecular-weight correction factors are substantiated and validated using Balaton tart cherry and elderberry as templates. Cyanidin-3-glucoside chloride has been widely used in the botanical industry to calculate total anthocyanins. In our studies on tart cherry and elderberry, its use as external standard, followed by use of molecular-weight correction factors, should provide relatively accurate results for total anthocyanins, because cyanidin is their major anthocyanidin backbone. The method proposed here is simple and has a direct sample preparation procedure without any solid-phase extraction. It enables the selection and use of commercially available anthocyanins as external standards for quantification of specific anthocyanins in the sample matrix, irrespective of their commercial availability as analytical standards. It can be used as a template for similar quantification in several anthocyanin-containing raw materials for routine quality control procedures, thus providing consistency in analytical testing of botanical raw materials used for manufacturing efficacious and true-to-the-label nutritional supplements.
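
    The quantification scheme described above (external-standard equivalents multiplied by a molecular-weight correction factor) can be sketched as follows; the peak areas, standard concentration, and analyte molecular weight are illustrative assumptions, and the standard's molecular weight is a nominal value:

```python
# Quantify an anthocyanin against a cyanidin-3-glucoside chloride external
# standard, then apply a molecular-weight correction factor.
MW_STANDARD = 484.8   # cyanidin-3-glucoside chloride, g/mol (nominal)
MW_ANALYTE = 449.4    # hypothetical analyte molecular weight, g/mol

def quantify(peak_area, std_area, std_conc_mg_l):
    """Concentration expressed as standard equivalents, then MW-corrected."""
    as_standard_equiv = peak_area / std_area * std_conc_mg_l
    mw_correction = MW_ANALYTE / MW_STANDARD
    return as_standard_equiv * mw_correction

conc = quantify(peak_area=1.85e6, std_area=2.10e6, std_conc_mg_l=25.0)
```

    The correction rescales the standard-equivalent concentration by the ratio of molecular weights, so analytes lighter than the standard report lower corrected amounts than their raw equivalents.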

  5. Effects of Gel Thickness on Microscopic Indentation Measurements of Gel Modulus

    PubMed Central

    Long, Rong; Hall, Matthew S.; Wu, Mingming; Hui, Chung-Yuen

    2011-01-01

    In vitro, animal cells are mostly cultured on a gel substrate. It was recently shown that substrate stiffness significantly affects cellular behaviors, including adhesion, differentiation, and migration. Therefore, an accurate method is needed to characterize the modulus of the substrate. In situ microscopic measurements of the gel substrate modulus are based on Hertz contact mechanics, where Young's modulus is derived from indentation force and displacement measurements. In Hertz theory, the substrate is modeled as a linear elastic half-space with infinite depth, whereas in practice the thickness of the substrate, h, can be comparable to the contact radius and other relevant dimensions, such as the radius R of the indenter or steel ball. As a result, measurements based on Hertz theory overestimate the Young's modulus. In this work, we discuss the limitations of Hertz theory and then modify it, taking into consideration the nonlinearity of the material and large deformation, using a finite-element method. We present our results as a simple correction factor, ψ, the ratio of the corrected Young's modulus to the Hertz modulus, in the parameter regime δ/h ≤ min(0.6, R/h) and 0.3 ≤ R/h ≤ 12.7. The ψ factor depends on two dimensionless parameters, R/h and δ/h (where δ is the indentation depth), both of which are easily accessible to experiments. This correction factor agrees with experimental observations obtained using polyacrylamide gel and a microsphere indentation method in the parameter range 0.1 ≤ δ/h ≤ 0.4 and 0.3 ≤ R/h ≤ 6.2. The effect of adhesion on the use of Hertz theory at small indentation depths is also discussed. PMID:21806932
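
    A minimal sketch of the two-step procedure implied above: compute a Hertz modulus from a sphere-on-gel indentation, then multiply by the correction factor ψ(R/h, δ/h). The psi() function below is a hypothetical stand-in for the paper's FEM-derived factor (it only reproduces the qualitative behavior ψ ≤ 1), and all inputs are illustrative:

```python
def hertz_modulus(force_n, delta_m, radius_m, poisson=0.5):
    """Young's modulus from Hertz theory, F = (4/3) E* sqrt(R) d^(3/2),
    with E* = E / (1 - nu^2) for a rigid sphere on an elastic half-space."""
    e_star = 3.0 * force_n / (4.0 * radius_m ** 0.5 * delta_m ** 1.5)
    return e_star * (1.0 - poisson ** 2)

def psi(r_over_h, delta_over_h):
    """Hypothetical finite-thickness correction: psi -> 1 for thick
    substrates and < 1 otherwise, since Hertz overestimates the modulus."""
    return 1.0 / (1.0 + 0.75 * r_over_h * delta_over_h)

# Illustrative geometry: 50 um bead on a 100 um thick gel, 20 um indentation.
R, h, delta, F = 50e-6, 100e-6, 20e-6, 2.0e-6
e_hertz = hertz_modulus(F, delta, R)          # Pa, uncorrected
e_corrected = psi(R / h, delta / h) * e_hertz  # Pa, thickness-corrected
```

    The chosen geometry satisfies the paper's stated regime (δ/h = 0.2 ≤ min(0.6, R/h) and 0.3 ≤ R/h = 0.5 ≤ 12.7), so a tabulated ψ would apply here.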

  6. Process Evaluation of Two Participatory Approaches: Implementing Total Worker Health® Interventions in a Correctional Workforce

    PubMed Central

    Dugan, Alicia G.; Farr, Dana A.; Namazi, Sara; Henning, Robert A.; Wallace, Kelly N.; El Ghaziri, Mazen; Punnett, Laura; Dussetschleger, Jeffrey L.; Cherniack, Martin G.

    2018-01-01

    Background Correctional Officers (COs) have among the highest injury rates and poorest health of all the public safety occupations. The HITEC-2 (Health Improvement Through Employee Control-2) study uses Participatory Action Research (PAR) to design and implement interventions to improve the health and safety of COs. Method HITEC-2 compared two different types of participatory program, a CO-only “Design Team” (DT) and “Kaizen Event Teams” (KET) of COs and supervisors, to determine differences in implementation process and outcomes. The Program Evaluation Rating Sheet (PERS) was developed to document and evaluate program implementation. Results Both programs yielded successful and unsuccessful interventions, dependent upon team-, facility-, organization-, state-, facilitator-, and intervention-level factors. Conclusions PAR in corrections, and possibly other sectors, depends upon factors including participation, leadership, continuity and timing, resilience, and financial circumstances. The new PERS instrument may be useful in other sectors to assist in assessing intervention success. PMID:27378470

  7. Anatomical-based partial volume correction for low-dose dedicated cardiac SPECT/CT

    NASA Astrophysics Data System (ADS)

    Liu, Hui; Chan, Chung; Grobshtein, Yariv; Ma, Tianyu; Liu, Yaqiang; Wang, Shi; Stacy, Mitchel R.; Sinusas, Albert J.; Liu, Chi

    2015-09-01

    Due to their limited spatial resolution, emission tomography systems suffer from the partial volume effect, a major factor degrading quantitative accuracy. This study investigates the performance of several anatomical-based partial volume correction (PVC) methods for a dedicated cardiac SPECT/CT system (GE Discovery NM/CT 570c) with a focused field-of-view, over a clinically relevant range of high and low count levels and for two different radiotracer distributions. These PVC methods include perturbation geometry transfer matrix (pGTM); pGTM followed by multi-target correction (MTC); pGTM with known concentration in the blood pool; the former followed by MTC; and our newly proposed methods, which apply MTC iteratively, re-estimating the mean values in all regions from the MTC-corrected images at each iteration. The NCAT phantom was simulated for cardiovascular imaging with 99mTc-tetrofosmin, a myocardial perfusion agent, and 99mTc-red blood cells (RBC), a pure intravascular imaging agent. Images were acquired at six different count levels to investigate the performance of the PVC methods at both high and low count levels for low-dose applications. We performed two large-animal in vivo cardiac imaging experiments following injection of 99mTc-RBC for evaluation of intramyocardial blood volume (IMBV). The simulation results showed that our proposed iterative methods provide performance superior to the other existing PVC methods in terms of image quality, quantitative accuracy, and reproducibility (standard deviation), particularly for low-count data. The iterative approaches are robust for both 99mTc-tetrofosmin perfusion imaging and 99mTc-RBC imaging of IMBV and blood-pool activity, even at low count levels. The animal study results indicated the effectiveness of PVC in correcting the overestimation of IMBV due to blood-pool contamination. 
In conclusion, the iterative PVC methods can achieve more accurate quantification, particularly for low count cardiac SPECT studies, typically obtained from low-dose protocols, gated studies, and dynamic applications.
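
    The geometry-transfer-matrix (GTM) step underlying the pGTM-based methods above can be sketched as a small linear inversion: observed regional means are modeled as a mixing of the true means, obs = G @ true, where G[i, j] is the fraction of region j's activity recovered in region i. The 3-region matrix and values below are illustrative only, not the paper's data:

```python
import numpy as np

# Illustrative mixing matrix for three regions
# (myocardium, blood pool, background); rows describe spill-in.
G = np.array([
    [0.70, 0.20, 0.10],
    [0.15, 0.75, 0.10],
    [0.05, 0.05, 0.90],
])

observed = np.array([82.0, 118.0, 21.0])   # measured regional means (a.u.)

# GTM partial volume correction: invert the mixing to recover true means.
true_means = np.linalg.solve(G, observed)
```

    The iterative MTC variants described in the abstract go further by re-estimating these regional means from corrected images at each pass, but the core correction is this inversion of the spill-over model.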

  8. Theory study on the bandgap of antimonide-based multi-element alloys

    NASA Astrophysics Data System (ADS)

    An, Ning; Liu, Cheng-Zhi; Fan, Cun-Bo; Dong, Xue; Song, Qing-Li

    2017-05-01

    In order to meet the design requirements of high-performance antimonide-based optoelectronic devices, a spin-orbit splitting correction method for the bandgaps of Sb-based multi-element alloys is proposed. Based on an analysis of the band structure, a correction factor is introduced into the InxGa1-xAsySb1-y bandgap calculation to take spin-orbit coupling fully into account. In addition, InxGa1-xAsySb1-y films with different compositions were grown on GaSb substrates by molecular beam epitaxy (MBE), and the corresponding bandgaps were obtained by photoluminescence (PL) to test the accuracy and reliability of the new method. The results show that the calculated values agree fairly well with the experimental results. To further verify the method, the bandgaps of a series of previously reported experimental samples are calculated. The error-rate analysis reveals that the error rate α of the spin-orbit splitting correction method decreases to 2%, almost one order of magnitude smaller than that of the common method. This means the new method can calculate antimonide multi-element alloy bandgaps more accurately and has the merit of wide applicability. This work gives a reasonable interpretation of the reported results and is beneficial for tailoring the properties of antimonides and optoelectronic devices.

  9. Ionization correction factors for H II regions in blue compact dwarf galaxies

    NASA Astrophysics Data System (ADS)

    Holovatyi, V. V.; Melekh, B. Ya.

    2002-08-01

    Energy distributions in the spectra of the ionizing nuclei of H II regions at λ ≤ 91.2 nm were calculated. A grid of 270 photoionization models of H II regions was constructed. The free parameters of the model grid are the hydrogen density nH in the nebular gas, the filling factor, the energy Lc-spectrum of the ionizing nuclei, and the metallicity. The chemical compositions from the studies of Izotov et al. were used for model grid initialization. The integrated line spectra calculated for the photoionization models were used to determine the electron density ne, electron temperature Te, and ionic concentrations n(A+i)/n(H+) by the nebular-gas diagnostic method. The averaged relative ionic abundances n(A+i)/n(H+) thus calculated were used to derive new expressions for ionization correction factors, which we recommend for the determination of abundances in the H II regions of blue compact dwarf galaxies.
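
    Applying an ionization correction factor of the kind derived above is a one-line operation: the total elemental abundance is the ICF times the sum of the observed ionic abundances, N(A)/N(H) = ICF × Σ n(A+i)/n(H+). The ionic abundances and ICF value below are illustrative, not the paper's fitted expressions:

```python
import math

# Observed ionic abundances relative to H+ (illustrative values).
ionic = {"O+": 1.2e-5, "O++": 6.8e-5}

# Hypothetical ICF accounting for unobserved ionization stages of oxygen.
icf_o = 1.08

# Total elemental abundance and the conventional 12 + log10(O/H) form.
o_over_h = icf_o * sum(ionic.values())
abundance_12log = 12 + math.log10(o_over_h)
```

    An ICF of exactly 1 would mean all ionization stages were observed; values above 1 correct for the stages missing from the optical spectra.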

  10. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors have been recommended not only to correct the total scatter factors but also the tissue-maximum and off-axis ratios. However, the application of these correction factors at in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data at in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated for simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. This work leads to important conclusions on the use of detector-specific beam correction factors in a treatment planning system. The use of correction factors for total scatter factors has an important impact on monitor unit calculation. In contrast, the use of correction factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, the technique can be applied to other treatment planning systems, detectors, and correction factors.

  11. Classification and correction of the radar bright band with polarimetric radar

    NASA Astrophysics Data System (ADS)

    Hall, Will; Rico-Ramirez, Miguel; Kramer, Stefan

    2015-04-01

    The annular region of enhanced radar reflectivity, known as the Bright Band (BB), occurs when the radar beam intersects a layer of melting hydrometeors. Radar reflectivity is related to rainfall through a power-law equation, so this enhanced region can lead to overestimation of rainfall by a factor of up to 5, and it is important to correct for it. The BB region can be identified using several techniques, including hydrometeor classification and freezing-level forecasts from mesoscale meteorological models. Advances in dual-polarisation radar measurements and continued research in the field have led to increased accuracy in identifying the melting snow region. A method proposed by Kitchen et al. (1994), a form of which is currently used operationally in the UK, utilises idealised Vertical Profiles of Reflectivity (VPR) to correct for the BB enhancement. A simpler and more computationally efficient method forms an average VPR from multiple elevations for correction, and can still yield a significant decrease in error (Vignal et al. 2000). The purpose of this research is to evaluate a method that relies only on analysis of measurements from an operational C-band polarimetric radar, without the need for computationally expensive models. Initial results show that LDR is a strong classifier of melting snow, with a high Critical Success Index of 97% compared to the other variables. An algorithm based on idealised VPRs resulted in the largest decrease in error when BB-corrected scans were compared to rain gauges and to lower-level scans, with a reduction in RMSE of 61% for rain-rate measurements. References: Kitchen, M., R. Brown, and A. G. Davies, 1994: Real-time correction of weather radar data for the effects of bright band, range and orographic growth in widespread precipitation. Q.J.R. Meteorol. Soc., 120, 1231-1254. Vignal, B., et al., 2000: Three methods to determine profiles of reflectivity from volumetric radar data to correct precipitation estimates. J. Appl. Meteor., 39(10), 1715-1726.
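
    A VPR-based correction of the kind evaluated above amounts to dividing the measured linear reflectivity by a normalized profile value before applying a Z-R power law. The VPR shape below is a hypothetical bright-band bump, and Z = 200 R^1.6 is a common Marshall-Palmer-type choice rather than this study's fitted relation:

```python
import math

def vpr_factor(height_km):
    """Hypothetical normalized VPR: unity below the melting layer, enhanced
    by up to 5 dB inside a bright band between 2.0 and 2.6 km."""
    if 2.0 <= height_km <= 2.6:
        return 10 ** (5.0 * math.sin(math.pi * (height_km - 2.0) / 0.6) / 10)
    return 1.0

def rain_rate(z_linear):
    """Invert a Z = a R^b power law with a = 200, b = 1.6 (mm/h)."""
    return (z_linear / 200.0) ** (1.0 / 1.6)

z_measured = 3162.0                          # linear Z (~35 dBZ) at 2.3 km
z_corrected = z_measured / vpr_factor(2.3)   # remove the BB enhancement
```

    Without the correction, the bright-band enhancement inflates the inferred rain rate, which is exactly the overestimation the abstract describes.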

  12. Eye lens monitoring for interventional radiology personnel: dosemeters, calibration and practical aspects of Hp(3) monitoring. A 2015 review.

    PubMed

    Carinou, Eleftheria; Ferrari, Paolo; Bjelac, Olivera Ciraj; Gingaume, Merce; Merce, Marta Sans; O'Connor, Una

    2015-09-01

    A thorough literature review of the current situation regarding the implementation of eye lens monitoring has been performed in order to provide recommendations regarding dosemeter types, calibration procedures and practical aspects of eye lens monitoring for interventional radiology personnel. The most relevant data and recommendations from about 100 papers have been analysed and classified under the following topics: challenges of today in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. However, in practice, very often, alternative methods are used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimate of the eye lens dose when alternative methods, such as the use of a whole-body dosemeter, are used. A wide range of values is found, indicating the large uncertainty associated with these simplified methods. Reduction factors for the most common protective devices, obtained experimentally and using Monte Carlo calculations, are presented. The paper concludes that the use of a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces with an estimated annual equivalent dose to the eye lens close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation to the eye lens and on the correct use of protective systems is strongly recommended.

  13. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

    Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of special interest for the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and the flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; and (4) flow velocity relative to the magnitude of the flow-encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experiment and simulation results for laminar flow show that, to obtain volume flow rate measurements accurate to within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of the signal from stationary tissues. A correction method is proposed to compensate for the partial volume effect. The correction method is based on a small phase shift approximation. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experiment results. 
The results show that the correction significantly reduces the errors due to the partial volume effect. We apply the correction method to data from in vivo studies. Because the blood flow is not known, the results of the correction are tested against common knowledge (such as cardiac output) and conservation of flow. For example, the volume of blood flowing to the brain should equal the volume of blood flowing from it. Our measurement results are very convincing.
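
    The core phase-contrast computation described above can be sketched directly: each voxel's phase maps linearly to velocity (v = φ/π × VENC), and the volume flow rate is the velocity integrated over the vessel lumen. The phases, VENC, and voxel geometry below are illustrative assumptions:

```python
import math

VENC = 100.0        # cm/s, velocity-encoding value (assumed)
voxel_area = 0.01   # cm^2 per voxel (1 mm x 1 mm in-plane, assumed)

# Illustrative measured phases (rad) for a four-voxel vessel lumen.
phases = [0.3 * math.pi, 0.5 * math.pi, 0.4 * math.pi, 0.2 * math.pi]

# Linear phase-to-velocity mapping, then integrate over the lumen.
velocities = [p / math.pi * VENC for p in phases]        # cm/s per voxel
flow_ml_per_s = sum(v * voxel_area for v in velocities)  # cm^3/s
```

    The abstract's partial-volume warning applies here: with only four voxels covering the lumen, edge voxels mixing vessel and stationary tissue would bias this sum, which is why at least 16 voxels are recommended.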

  14. Correlation of full-scale drag predictions with flight measurements on the C-141A aircraft. Phase 2: Wind tunnel test, analysis, and prediction techniques. Volume 1: Drag predictions, wind tunnel data analysis and correlation

    NASA Technical Reports Server (NTRS)

    Macwilkinson, D. G.; Blackerby, W. T.; Paterson, J. H.

    1974-01-01

    The degree of cruise drag correlation on the C-141A aircraft is determined between predictions based on wind tunnel test data and flight test results. An analysis of wind tunnel tests on a 0.0275-scale model at Reynolds numbers up to 3.05 × 10^6 based on the mean aerodynamic chord (MAC) is reported. Model support interference corrections are evaluated through a series of tests, and fully corrected model data are analyzed to provide details on model component interference factors. It is shown that predicted minimum profile drag for the complete configuration agrees within 0.75% of flight test data, using a wind tunnel extrapolation method based on flat-plate skin friction and component shape factors. An alternative method of extrapolation, based on profile drag computed from a subsonic viscous theory, results in a prediction four percent lower than flight test data.
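
    The extrapolation method named above (flat-plate skin friction scaled by component shape factors) is a standard component build-up. A sketch with illustrative, non-C-141A numbers, using the Prandtl-Schlichting turbulent skin-friction formula:

```python
import math

def cf_turbulent(re):
    """Turbulent flat-plate skin-friction coefficient (Prandtl-Schlichting),
    Cf = 0.455 / (log10 Re)^2.58."""
    return 0.455 / (math.log10(re) ** 2.58)

S_REF = 300.0  # m^2, reference wing area (illustrative)

# (name, Reynolds number, shape factor, wetted area m^2) - all illustrative.
components = [
    ("wing",     3.0e7, 1.25, 520.0),
    ("fuselage", 2.0e8, 1.10, 610.0),
    ("tail",     2.0e7, 1.20, 150.0),
]

# Minimum profile drag: skin friction x shape factor x wetted-area ratio,
# summed over components.
cd0 = sum(cf_turbulent(re) * ff * swet / S_REF
          for _, re, ff, swet in components)
```

    Extrapolation to full scale amounts to re-evaluating Cf at flight Reynolds numbers while holding the shape factors fixed, which is the essence of the method the abstract credits with 0.75% agreement.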

  15. Comparison of extended field-of-view reconstructions in C-arm flat-detector CT using patient size, shape or attenuation information.

    PubMed

    Kolditz, Daniel; Meyer, Michael; Kyriakou, Yiannis; Kalender, Willi A

    2011-01-07

    In C-arm-based flat-detector computed tomography (FDCT) it frequently happens that the patient exceeds the scan field of view (SFOV) in the transaxial direction because of the limited detector size. This results in data truncation and CT image artefacts. In this work three truncation correction approaches for extended field-of-view (EFOV) reconstructions have been implemented and evaluated. An FDCT-based method estimates the patient size and shape from the truncated projections by fitting an elliptical model to the raw data in order to apply an extrapolation. In a camera-based approach the patient is sampled with an optical tracking system and this information is used to apply an extrapolation. In a CT-based method the projections are completed by artificial projection data obtained from the CT data acquired in an earlier exam. For all methods the extended projections are filtered and backprojected with a standard Feldkamp-type algorithm. Quantitative evaluations have been performed by simulations of voxelized phantoms on the basis of the root mean square deviation and a quality factor Q (Q = 1 represents the ideal correction). Measurements with a C-arm FDCT system have been used to validate the simulations and to investigate the practical applicability using anthropomorphic phantoms which caused truncation in all projections. The proposed approaches enlarged the FOV to cover wider patient cross-sections. Thus, image quality inside and outside the SFOV has been improved. Best results have been obtained using the CT-based method, followed by the camera-based and the FDCT-based truncation correction. For simulations, quality factors up to 0.98 have been achieved. Truncation-induced cupping artefacts have been reduced, e.g., from 218% to less than 1% for the measurements. The proposed truncation correction approaches for EFOV reconstructions are an effective way to ensure accurate CT values inside the SFOV and to recover peripheral information outside the SFOV.

  16. SU-E-T-477: An Efficient Dose Correction Algorithm Accounting for Tissue Heterogeneities in LDR Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mashouf, S; Lai, P; Karotki, A

    2014-06-01

    Purpose: Seed brachytherapy is currently used for adjuvant radiotherapy of early-stage prostate and breast cancer patients. The current standard for calculating dose around brachytherapy seeds is the American Association of Physicists in Medicine Task Group No. 43 (TG-43) formalism, which generates the dose in a homogeneous water medium. Recently, AAPM Task Group No. 186 emphasized the importance of accounting for tissue heterogeneities. This can be done using Monte Carlo (MC) methods, but it requires knowing the source structure and tissue atomic composition accurately. In this work we describe an efficient analytical dose inhomogeneity correction algorithm, implemented on the MIM Symphony treatment planning platform, to calculate dose distributions in heterogeneous media. Methods: An Inhomogeneity Correction Factor (ICF) is introduced as the ratio of absorbed dose in tissue to that in water medium. The ICF is a function of tissue properties and independent of source structure. The ICF is extracted using CT images, and the absorbed dose in tissue can then be calculated by multiplying the dose as calculated by the TG-43 formalism by the ICF. To evaluate the methodology, we compared our results with Monte Carlo simulations as well as experiments in phantoms with known density and atomic composition. Results: The dose distributions obtained by applying the ICF to the TG-43 protocol agreed very well with those of Monte Carlo simulations as well as experiments in all phantoms. In all cases, the mean relative error was reduced by at least 50% when the ICF correction factor was applied to the TG-43 protocol. Conclusion: We have developed a new analytical dose calculation method which enables personalized dose calculations in heterogeneous media. The advantages over stochastic methods are computational efficiency and ease of integration into the clinical setting, as detailed source structure and tissue segmentation are not needed. 
    University of Toronto, Natural Sciences and Engineering Research Council of Canada.
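    The core of the ICF approach is a voxelwise multiplication of the TG-43 water dose by the correction factor. A minimal sketch, assuming a precomputed ICF map (the abstract does not give the CT-to-ICF mapping, so the ICF values below are illustrative placeholders):

```python
import numpy as np

def apply_icf(dose_tg43, icf):
    """Correct a TG-43 water-dose grid for tissue heterogeneity.

    dose_tg43 : dose computed in homogeneous water (TG-43 formalism)
    icf       : inhomogeneity correction factor per voxel
                (ratio of absorbed dose in tissue to dose in water)
    """
    dose_tg43 = np.asarray(dose_tg43, dtype=float)
    icf = np.asarray(icf, dtype=float)
    if dose_tg43.shape != icf.shape:
        raise ValueError("dose and ICF grids must have the same shape")
    return icf * dose_tg43

# Toy 1-D example: three voxels (water-like, lung-like, bone-like)
dose_water = [10.0, 8.0, 6.0]      # Gy, from TG-43
icf_map    = [1.00, 0.92, 1.05]    # hypothetical ICF values
dose_tissue = apply_icf(dose_water, icf_map)
```

    In a real implementation the ICF map would be derived from the CT images; here only the multiplication step is shown.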

  17. Evidence for using Monte Carlo calculated wall attenuation and scatter correction factors for three styles of graphite-walled ion chamber.

    PubMed

    McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O

    2004-06-21

    The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K(w)K(cep)) has been strongly challenged by Monte Carlo (MC) calculation methods (K(wall)). Using the linear extrapolation method with experimental data, K(w)K(cep) was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K(w)K(cep) values, differed by up to 2%. The MC code 'EGSnrc' was used to calculate the values of K(wall) for these three chambers. Use of the calculated K(wall) values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air-kerma.
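    The extrapolation technique the abstract challenges can be illustrated numerically: the chamber signal is measured with caps of increasing wall thickness, fitted linearly, and extrapolated to zero wall. A sketch with synthetic, exactly linear data (the thicknesses and attenuation coefficient are made up for illustration):

```python
import numpy as np

# Synthetic signal vs total wall thickness t (g/cm^2): I(t) = I0 * (1 - a*t)
i0, a = 100.0, 0.030                    # hypothetical values
t = np.array([0.3, 0.5, 0.7, 0.9])      # wall-plus-cap thicknesses
signal = i0 * (1.0 - a * t)

# Linear fit and extrapolation to zero wall thickness
slope, intercept = np.polyfit(t, signal, 1)
k_wall_extrap = intercept / signal[1]   # correction at the nominal 0.5 g/cm^2 wall
```

    With perfectly linear data the fit recovers I0 exactly; the paper's point is that real attenuation-and-scatter behaviour is not linear in wall thickness, which is why extrapolated K(w)K(cep) values disagreed with Monte Carlo K(wall) by up to ~2%.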

  18. Feasibility assessment of yttrium-90 liver radioembolization imaging using amplitude-based gated PET/CT

    PubMed Central

    Acuff, Shelley N.; Neveu, Melissa L.; Syed, Mumtaz; Kaman, Austin D.; Fu, Yitong

    2018-01-01

    Purpose The usage of PET/computed tomography (CT) to monitor hepatocellular carcinoma patients following yttrium-90 (90Y) radioembolization has increased. Respiratory motion causes liver movement, which can be corrected using gating techniques at the expense of added noise. This work examines the use of amplitude-based gating on 90Y-PET/CT and its potential impact on diagnostic integrity. Patients and methods Patients were imaged using PET/CT following 90Y radioembolization. A respiratory band was used to collect respiratory cycle data. Patient data were processed as both standard and motion-corrected images. Regions of interest were drawn and compared using three methods. Activity concentrations were calculated and converted into dose estimates using previously determined and published scaling factors. Diagnostic assessments were performed using a binary scale created from published 90Y-PET/CT image interpretation guidelines. Results Estimates of radiation dose were increased (P<0.05) when using amplitude-gating methods with 90Y PET/CT imaging. Motion-corrected images show increased noise, but the diagnostic determination of success, using the Kao criteria, did not change between static and motion-corrected data. Conclusion Amplitude-gated PET/CT following 90Y radioembolization is feasible and may improve 90Y dose estimates while maintaining diagnostic assessment integrity. PMID:29351124
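    Amplitude-based gating keeps only the events acquired while the respiratory waveform lies inside a chosen amplitude band, trading counts for reduced blur. A minimal sketch (the 35% duty cycle and the end-expiration band are illustrative choices, not the study's protocol):

```python
import numpy as np

def amplitude_gate(amplitudes, duty_cycle=0.35):
    # Keep the fraction of acquisition time spent nearest end-expiration
    # (lowest amplitudes); returns a boolean accept mask per sample.
    thr = np.quantile(amplitudes, duty_cycle)
    return amplitudes <= thr

t = np.linspace(0.0, 60.0, 6000)              # 60 s of breathing at 0.25 Hz
amp = 1.0 - np.cos(2 * np.pi * 0.25 * t)      # synthetic respiratory trace
mask = amplitude_gate(amp)                    # events to keep for reconstruction
```

    Only the gated events are reconstructed, which is why the motion-corrected images in the study show increased noise.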

  19. Laser Vision Correction with Q Factor Modification for Keratoconus Management.

    PubMed

    Pahuja, Natasha Kishore; Shetty, Rohit; Sinha Roy, Abhijit; Thakkar, Maithil Mukesh; Jayadev, Chaitra; Nuijts, Rudy Mma; Nagaraja, Harsha

    2017-04-01

    To evaluate the outcomes of corneal laser ablation with Q factor modification for vision correction in patients with progressive keratoconus. In this prospective study, 50 eyes of 50 patients were divided into two groups based on Q factor (>-1 in Group I and ≤-1 in Group II). All patients underwent a detailed ophthalmic examination including uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), subjective acceptance and corneal topography using the Pentacam. The topolyzer was used to measure the corneal asphericity (Q). Ablation was performed based on the preoperative Q values and thinnest pachymetry to obtain a target of near normal Q. This was followed by corneal collagen crosslinking to stabilize the progression. Statistically significant improvement (p ≤ 0.05) was noticed in refractive, topographic, and Q values posttreatment in both groups. The improvement in higher-order aberrations and total aberrations were statistically significant in both groups; however, the spherical aberration showed statistically significant improvement only in Group II. Ablation based on the preoperative Q and pachymetry for a near normal postoperative Q value appears to be an effective method to improve the visual acuity and quality in patients with keratoconus.

  20. Using Eye-tracking to Examine How Embedding Risk Corrective Statements Improves Cigarette Risk Beliefs: Implications for Tobacco Regulatory Policy

    PubMed Central

    Lochbuehler, Kirsten; Tang, Kathy Z.; Souprountchouk, Valentina; Campetti, Dana; Cappella, Joseph N.; Kozlowski, Lynn T.; Strasser, Andrew A.

    2016-01-01

    Background Tobacco companies have deliberately used explicit and implicit misleading information in marketing campaigns. The aim of the current study was to experimentally investigate whether editing the explicit and implicit content of a print advertisement improves smokers’ risk beliefs and smokers’ knowledge of explicit and implicit information. Methods Using a 2 (explicit/implicit) × 2 (accurate/misleading) between-subject design, 203 smokers were randomly assigned to one of four advertisement conditions. The manipulation of graphic content was examined as an implicit factor to convey product harm. The inclusion of a text corrective in the body of the ad was defined as the manipulated explicit factor. Participants’ eye movements and risk beliefs/recall were measured during and after ad exposure, respectively. Results Results indicate that exposure to a text corrective decreases false beliefs about the product (p < .01) and improves correct recall of information provided by the corrective (p < .05). Accurate graphic content did not alter perceived harmfulness of the product. Independent of condition, smokers who focused longer on the warning label made fewer false inferences about the product (p = .01) and were more likely to correctly recall the warning information (p < .01). Nonetheless, most smokers largely ignored the text warning. Conclusions Embedding a corrective statement in the body of the ad is an effective strategy to convey health information to consumers, which can be mandated under the Tobacco Control Act (2009). Eye-tracking results objectively demonstrate that text-only warnings are not viewed by smokers, thus minimizing their effectiveness for conveying risk information. PMID:27160034

  1. Systematic uncertainties in the Monte Carlo calculation of ion chamber replacement correction factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, L. L. W.; La Russa, D. J.; Rogers, D. W. O.

    In a previous study [Med. Phys. 35, 1747-1755 (2008)], the authors proposed two direct methods of calculating the replacement correction factors (P_repl, or p_cav p_dis) for ion chambers by Monte Carlo calculation. By "direct" we meant that a stopping-power ratio evaluation is not necessary. The two methods were named the high-density air (HDA) and low-density water (LDW) methods. Although the accuracy of these methods was briefly discussed, it turns out that the assumption made regarding the dose in an HDA slab as a function of slab thickness is not correct. This issue is reinvestigated in the current study, and the accuracy of the LDW method applied to ion chambers in a 60Co photon beam is also studied. It is found that the two direct methods are in fact not completely independent of the stopping-power ratio of the two materials involved. There is an implicit dependence of the calculated P_repl values on the stopping-power ratio evaluation through the choice of an appropriate energy cutoff Δ, which characterizes the cavity size in Spencer-Attix cavity theory. Since the Δ value is not accurately defined in the theory, this dependence on the stopping-power ratio results in a systematic uncertainty in the calculated P_repl values. For phantom materials with an effective atomic number similar to that of air, such as water and graphite, this systematic uncertainty is at most 0.2% for the most commonly used chambers in either electron or photon beams. This uncertainty level is good enough for current ion chamber dosimetry, and the merits of the two direct methods of calculating P_repl values are maintained, i.e., there is no need for a separate stopping-power ratio calculation. For high-Z materials, the inherent uncertainty would make it practically impossible to calculate reliable P_repl values using the two direct methods.

  2. Near-infrared spectroscopy determined cerebral oxygenation with eliminated skin blood flow in young males.

    PubMed

    Hirasawa, Ai; Kaneko, Takahito; Tanaka, Naoki; Funane, Tsukasa; Kiguchi, Masashi; Sørensen, Henrik; Secher, Niels H; Ogoh, Shigehiko

    2016-04-01

    We estimated cerebral oxygenation during handgrip exercise and a cognitive task using an algorithm that eliminates the influence of skin blood flow (SkBF) on the near-infrared spectroscopy (NIRS) signal. The algorithm involves a subtraction method to develop a correction factor for each subject. For twelve male volunteers (age 21 ± 1 yrs), +80 mmHg pressure was applied over the left temporal artery for 30 s by a custom-made headband cuff to calculate an individual correction factor. From the NIRS-determined ipsilateral cerebral oxyhemoglobin concentration (O2Hb) at two source-detector distances (15 and 30 mm), with the algorithm using the individual correction factor, we expressed cerebral oxygenation without influence from scalp and skull blood flow. Validity of the estimated cerebral oxygenation was verified during cerebral neural activation (handgrip exercise and cognitive task). With the use of both source-detector distances, handgrip exercise and a cognitive task increased O2Hb (P < 0.01), but O2Hb was reduced when SkBF was eliminated by pressure on the temporal artery for 5 s. However, when the estimation of cerebral oxygenation was based on the algorithm developed when pressure was applied to the temporal artery, estimated O2Hb was not affected by elimination of SkBF during handgrip exercise (P = 0.666) or the cognitive task (P = 0.105). These findings suggest that the algorithm with the individual correction factor allows for accurate evaluation of changes in cerebral oxygenation, without influence of extracranial blood flow, by NIRS applied to the forehead.
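    The subtraction algorithm can be sketched as follows: during cuff pressure the cerebral signal is assumed constant, so the ratio of the long- to short-channel changes gives the individual correction factor k, which then scales the short channel out of the long one. The numbers below are synthetic, not the study's data:

```python
import numpy as np

# Synthetic signals (arbitrary units); the cuff is applied at indices 1-3,
# which suppresses the scalp signal while the cerebral signal is assumed
# unchanged -- that assumption is what makes k identifiable.
brain = np.array([1.0, 1.0, 1.0, 1.2, 1.2])    # deep (cerebral) contribution
skin  = np.array([0.5, 0.1, 0.1, 0.1, 0.5])    # scalp contribution
long_ch  = brain + 0.6 * skin                  # 30 mm source-detector channel
short_ch = skin                                # 15 mm channel, mostly scalp

# Individual correction factor from the occlusion step, then the correction
k = (long_ch[1] - long_ch[0]) / (short_ch[1] - short_ch[0])
corrected = long_ch - k * short_ch
```

    With the assumed signal model the estimated k is exactly the scalp coupling (0.6) and the corrected trace reproduces the cerebral signal.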

  3. WE-G-207-07: Iterative CT Shading Correction Method with No Prior Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, P; Mao, T; Niu, T

    2015-06-15

    Purpose: Shading artifacts are caused by scatter contamination, beam hardening effects and other non-ideal imaging conditions. Our purpose is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT imaging (e.g., cone-beam CT, low-kVp CT) without relying on prior information. Methods: Our method applies the general knowledge that the CT number distribution within one tissue component is relatively uniform. Image segmentation is applied to construct a template image in which each structure is filled with the CT number of that specific tissue. By subtracting the ideal template from the CT image, residuals from various error sources are generated. Since forward projection is an integration process, the non-continuous low-frequency shading artifacts in the image become continuous, low-frequency signals in the line integral. The residual image is thus forward projected and its line integral is filtered using a Savitzky-Golay filter to estimate the error. A compensation map is reconstructed from the error estimate using the standard FDK algorithm and added to the original image to obtain the shading-corrected one. Since segmentation is not accurate on a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. Results: The proposed method is evaluated on a Catphan600 phantom, a pelvic patient and a CT angiography scan for carotid artery assessment. Compared to the uncorrected images, our method reduces the overall CT number error from >200 HU to <35 HU and increases the spatial uniformity by a factor of 1.4. Conclusion: We propose an effective iterative algorithm for shading correction in CT imaging. Unlike existing algorithms, our method is assisted only by general anatomical and physical information in CT imaging, without relying on prior knowledge. Our method is thus practical and attractive as a general solution to CT shading correction.
    This work is supported by the National Science Foundation of China (NSFC Grant No. 81201091), National High Technology Research and Development Program of China (863 program, Grant No. 2015AA020917), and Fund Project for Excellent Abroad Scholar Personnel in Science and Technology.
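    The segment-subtract-smooth-compensate loop can be imitated in one dimension. In this sketch the paper's forward projection, Savitzky-Golay filtering of the line integral, and FDK reconstruction of the compensation map are replaced by a simple low-pass filter on the residual, which plays the same role of retaining only the low-frequency shading component:

```python
import numpy as np

def shading_correct(img, threshold=200.0, levels=(0.0, 400.0), win=31, n_iter=3):
    # Toy 1-D stand-in for the iterative template-based shading correction.
    out = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        # 1. segment and fill each tissue class with a single CT number
        template = np.where(out > threshold, levels[1], levels[0])
        # 2. residual = image - template (shading plus segmentation error)
        residual = out - template
        # 3. low-pass the residual to estimate the smooth shading field
        pad = np.pad(residual, win // 2, mode='edge')
        est = np.convolve(pad, np.ones(win) / win, mode='valid')
        # 4. compensate; segmentation improves as the shading shrinks
        out = out - est
    return out

# Synthetic 1-D phantom: two tissues (0 and 400 HU) plus +/-30 HU shading
x = np.linspace(0.0, 1.0, 200)
truth = np.where(x > 0.5, 400.0, 0.0)
shaded = truth + 30.0 * np.cos(np.pi * x)
restored = shading_correct(shaded)
```

    The threshold, class levels, and filter width are illustrative; the point is the structure of the iteration, not the specific numbers.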

  4. A prospective randomized trial of 1 versus 2 injections during EUS-guided celiac plexus block for chronic pancreatitis pain.

    PubMed

    LeBlanc, Julia K; DeWitt, Jon; Johnson, Cynthia; Okumu, Wycliffe; McGreevy, Kathleen; Symms, Michelle; McHenry, Lee; Sherman, Stuart; Imperiale, Thomas

    2009-04-01

    The efficacy of a 1-injection versus a 2-injection method of EUS-guided celiac plexus block (EUS-CPB) in patients with chronic pancreatitis is not known. To compare the clinical effectiveness and safety of EUS-CPB using 1 versus 2 injections in patients with chronic pancreatitis and pain. The secondary aim is to identify factors that predict responsiveness. A prospective randomized study. EUS-CPB was performed using bupivacaine and triamcinolone injected into 1 or 2 sites at the level of the celiac trunk during a single EUS-CPB procedure. Duration of pain relief, onset of pain relief, and complications. Fifty subjects were enrolled (23 received 1 injection, 27 received 2 injections). The median duration of pain relief in the 31 responders was 28 days (range 1-673 days). Fifteen of 23 (65%) subjects who received 1 injection had relief from pain compared with 16 of 27 (59%) subjects who received 2 injections (P = .67). The median times to onset in the 1-injection and 2-injection groups were 21 and 14 days, respectively (P = .99). No correlation existed between duration of pain relief and time to onset of pain relief or onset within 24 hours. Age, sex, race, prior EUS-CPB, and smoking or alcohol history did not predict duration of pain relief. Telephone interviewers were not blinded. There was no difference in duration of pain relief or onset of pain relief in subjects with chronic pancreatitis and pain when the same total amount of medication was delivered in 1 or 2 injections during a single EUS-CPB procedure. Both methods were safe.

  5. An energy-based equilibrium contact angle boundary condition on jagged surfaces for phase-field methods.

    PubMed

    Frank, Florian; Liu, Chen; Scanziani, Alessio; Alpak, Faruk O; Riviere, Beatrice

    2018-08-01

    We consider an energy-based boundary condition to impose an equilibrium wetting angle for the Cahn-Hilliard-Navier-Stokes phase-field model on voxel-set-type computational domains. These domains typically stem from μCT (micro computed tomography) imaging of porous rock and approximate a domain that is smooth on the μm scale with a certain resolution. Planar surfaces that are perpendicular to the main axes are naturally approximated by a layer of voxels. However, planar surfaces in any other direction, as well as curved surfaces, yield a jagged/topologically rough surface approximation by voxels. For the standard Cahn-Hilliard formulation, where the contact angle between the diffuse interface and the domain boundary (fluid-solid interface/wall) is 90°, jagged surfaces have no impact on the contact angle. However, a prescribed contact angle smaller or larger than 90° on jagged voxel surfaces is amplified. As a remedy, we propose the introduction of surface energy correction factors for each fluid-solid voxel face that counterbalance the difference of the voxel-set surface area with the underlying smooth one. The discretization of the model equations is performed with the discontinuous Galerkin method. However, the presented semi-analytical approach of correcting the surface energy is equally applicable to other direct numerical methods such as finite elements, finite volumes, or finite differences, since the correction factors appear in the strong formulation of the model. Copyright © 2018 Elsevier Inc. All rights reserved.
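    A surface-energy correction factor of this kind can be motivated geometrically: a staircase of voxel faces approximating a plane with unit normal n exposes |n|₁/|n|₂ times the smooth surface area, so each face's wetting energy is scaled by the inverse of that ratio. This closed form is our illustration of the idea, not necessarily the paper's exact per-face factor:

```python
import numpy as np

def face_correction_factor(normal):
    # Ratio of the smooth surface area to the voxel (staircase) surface
    # area for a plane with the given normal; multiplying the per-face
    # wetting energy by this factor matches the smooth-geometry energy.
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    return 1.0 / np.abs(n).sum()

axis_aligned = face_correction_factor([0.0, 0.0, 1.0])  # no staircase
tilted_45    = face_correction_factor([1.0, 1.0, 0.0])  # 45-degree plane
```

    An axis-aligned plane needs no correction (factor 1), while a 45° plane exposes √2 times the smooth area and its face energies are scaled down by 1/√2.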

  6. Comparison of quantitative Y-90 SPECT and non-time-of-flight PET imaging in post-therapy radioembolization of liver cancer

    PubMed Central

    Yue, Jianting; Mauxion, Thibault; Reyes, Diane K.; Lodge, Martin A.; Hobbs, Robert F.; Rong, Xing; Dong, Yinfeng; Herman, Joseph M.; Wahl, Richard L.; Geschwind, Jean-François H.; Frey, Eric C.

    2016-01-01

    Purpose: Radioembolization with yttrium-90 microspheres may be optimized with patient-specific pretherapy treatment planning. Dose verification and validation of treatment planning methods require quantitative imaging of the post-therapy distribution of yttrium-90 (Y-90). Methods for quantitative imaging of Y-90 using both bremsstrahlung SPECT and PET have previously been described. The purpose of this study was to compare the two modalities quantitatively in humans. Methods: Calibration correction factors for both quantitative Y-90 bremsstrahlung SPECT and a non-time-of-flight PET system without compensation for prompt coincidences were developed by imaging three phantoms. The consistency of these calibration correction factors for the different phantoms was evaluated. Post-therapy images from both modalities were obtained from 15 patients with hepatocellular carcinoma who underwent hepatic radioembolization using Y-90 glass microspheres. Quantitative SPECT and PET images were rigidly registered and the total liver activities and activity distributions estimated for each modality were compared. The activity distributions were compared using profiles, voxel-by-voxel correlation and Bland–Altman analyses, and activity-volume histograms. Results: The mean ± standard deviation of difference in the total activity in the liver between the two modalities was 0% ± 9% (range −21%–18%). Voxel-by-voxel comparisons showed a good agreement in regions corresponding roughly to treated tumor and treated normal liver; the agreement was poorer in regions with low or no expected activity, where PET appeared to overestimate the activity. The correlation coefficients between intrahepatic voxel pairs for the two modalities ranged from 0.86 to 0.94. Cumulative activity volume histograms were in good agreement. 
Conclusions: These data indicate that, with appropriate reconstruction methods and measured calibration correction factors, either Y-90 SPECT/CT or Y-90 PET/CT can be used for quantitative post-therapy monitoring of Y-90 activity distribution following hepatic radioembolization. PMID:27782730
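    The voxel-by-voxel agreement analysis above rests on standard Bland-Altman statistics: the mean and spread of the percent difference between paired estimates. A minimal sketch (the activity arrays are placeholders, not the paper's patient data):

```python
import numpy as np

def bland_altman_percent(a, b):
    # Percent difference of each pair relative to the pair mean, plus the
    # bias (mean difference) and the 1.96-sigma limits of agreement.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = 100.0 * (a - b) / ((a + b) / 2.0)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative total-liver activities (GBq) from the two modalities
spect = [1.20, 0.85, 2.10, 1.60]
pet   = [1.25, 0.80, 2.00, 1.70]
bias, limits = bland_altman_percent(spect, pet)
```

    The paper's reported 0% ± 9% figure is exactly this kind of bias-and-spread summary applied to the 15 patients.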

  7. Estimation of Knudsen diffusion coefficients from tracer experiments conducted with a binary gas system and a porous medium.

    PubMed

    Hibi, Yoshihiko; Kashihara, Ayumi

    2018-03-01

    A previous study has reported that Knudsen diffusion coefficients obtained by tracer experiments conducted with a binary gas system and a porous medium are consistently smaller than those obtained by permeability experiments conducted with a single-gas system and a porous medium. To date, however, that study is the only one in which tracer experiments have been conducted with a binary gas system. Therefore, to confirm this difference in Knudsen diffusion coefficients, we used a method we had developed previously to conduct tracer experiments with a binary carbon dioxide-nitrogen gas system and five porous media with permeability coefficients ranging from 10^-13 to 10^-11 m^2. The results showed that the Knudsen diffusion coefficient of N2 (D_N2) (cm^2/s) was related to the effective permeability coefficient k_e (m^2) as D_N2 = 7.39 × 10^7 k_e^0.767. Thus, the Knudsen diffusion coefficients of N2 obtained by our tracer experiments were consistently 1/27 of those obtained by permeability experiments conducted with many porous media and air by other researchers. By using an inversion simulation to fit the advection-diffusion equation to the distribution of concentrations at observation points calculated by mathematically solving the equation, we confirmed that the method used to obtain the Knudsen diffusion coefficient in this study yielded accurate values. Moreover, because the Knudsen diffusion coefficient did not differ when columns with two different lengths, 900 and 1500 mm, were used, this column property did not influence the flow of gas in the column. The equation of the dusty gas model already includes obstruction factors for Knudsen diffusion and molecular diffusion, which relate to medium heterogeneity and tortuosity and depend only on the structure of the porous medium. 
    Furthermore, no additional correction factor is needed for molecular diffusion beyond the obstruction factor, because molecular diffusion arises only in a multicomponent gas system; molecular diffusion is therefore subject only to the obstruction factor related to tortuosity. We therefore introduced a correction factor for a multicomponent gas system into the DGM equation, multiplying the Knudsen diffusion coefficient, which includes the obstruction factor related to tortuosity, by this correction factor. From the present experimental results, the value of this correction factor was 1/27, and it depended only on the structure of the gas system in the porous medium. Copyright © 2018 Elsevier B.V. All rights reserved.
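    A power law of the reported form D_N2 = 7.39 × 10^7 k_e^0.767 is typically recovered by linear regression in log-log space. A sketch on noise-free synthetic points in the paper's permeability range:

```python
import numpy as np

# Synthetic (k_e, D_N2) pairs generated from the reported relation
ke = np.logspace(-13, -11, 6)          # effective permeability, m^2
d_n2 = 7.39e7 * ke ** 0.767            # Knudsen diffusion coefficient, cm^2/s

# log D = log a + b * log k_e, so an ordinary linear fit recovers (a, b)
b, log_a = np.polyfit(np.log(ke), np.log(d_n2), 1)
a = np.exp(log_a)
```

    With real tracer data the fit would of course carry scatter; here the exponent and prefactor are recovered exactly because the points are noise-free.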

  9. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectrum peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. The present paper proposes an automatic peak detection method for LIBS spectra that improves the detection of overlapping peaks and the method's adaptivity. We introduce the ridge peak detection method, based on the continuous wavelet transform, to LIBS, discuss the choice of the mother wavelet, and optimize the scale factor and the shift factor. The method also improves ridge peak detection with a ridge-correction step. The experimental results show that, compared with other peak detection methods (the direct comparison method, the derivative method and the ridge peak search method), our method has a significant advantage in the ability to distinguish overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
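    The ridge idea can be sketched compactly: transform the spectrum at several scales with a Ricker (Mexican-hat) wavelet and keep only those maxima that persist across scales. This is a simplified stand-in for the paper's method (the scale set, tolerance, and threshold below are illustrative choices):

```python
import numpy as np

def ricker(points, a):
    # Ricker ("Mexican hat") mother wavelet, a common choice for
    # peak-shaped spectral lines; odd length keeps it sample-centred.
    x = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1.0 - (x / a) ** 2) * np.exp(-x ** 2 / (2.0 * a ** 2))

def cwt_ridge_peaks(y, widths=(2, 3, 4, 5), tol=2, min_scales=3):
    # A candidate (local maximum of the smallest-scale transform, above a
    # small amplitude threshold) is accepted only if a maximum appears
    # near the same position at min_scales scales -- the "ridge"
    # criterion that rejects isolated noise spikes.
    rows, base = [], None
    for w in widths:
        kernel = ricker(min(10 * w + 1, len(y)), w)
        c = np.convolve(y, kernel, mode='same')
        if base is None:
            base = c
        rows.append(np.where((c[1:-1] > c[:-2]) & (c[1:-1] > c[2:]))[0] + 1)
    thr = 0.05 * base.max()
    peaks = []
    for p in rows[0]:
        if base[p] < thr:
            continue
        support = sum(bool(np.any(np.abs(row - p) <= tol)) for row in rows)
        if support >= min_scales and (not peaks or p - peaks[-1] > tol):
            peaks.append(int(p))
    return peaks

# Synthetic spectrum: two Gaussian lines of different widths, flat baseline
x = np.arange(100)
spectrum = np.exp(-(x - 30.0) ** 2 / (2 * 3.0 ** 2)) \
    + 0.7 * np.exp(-(x - 70.0) ** 2 / (2 * 4.0 ** 2))
found = cwt_ridge_peaks(spectrum)
```

    The cross-scale persistence requirement is what gives CWT-based detection its robustness for overlapping peaks compared with simple derivative tests.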

  10. Method and Apparatus for Measuring Thermal Conductivity of Small, Highly Insulating Specimens

    NASA Technical Reports Server (NTRS)

    Miller, Robert A (Inventor); Kuczmarski, Maria A (Inventor)

    2013-01-01

    A method and apparatus for the measurement of thermal conductivity combines the following capabilities: 1) measurements of very small specimens; 2) measurements of specimens with thermal conductivity on the same order of that as air; and, 3) the ability to use air as a reference material. Care is taken to ensure that the heat flow through the test specimen is essentially one-dimensional. No attempt is made to use heated guards to minimize the flow of heat from the hot plate to the surroundings. Results indicate that since large correction factors must be applied to account for guard imperfections when specimen dimensions are small, simply measuring and correcting for heat from the heater disc that does not flow into the specimen is preferable.
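    The correction described above reduces to a one-dimensional Fourier-law evaluation in which the measured parasitic heat flow is subtracted from the heater power. A sketch with made-up numbers (all values are illustrative, not the apparatus's specifications):

```python
def thermal_conductivity(q_heater, q_loss, thickness, area, delta_t):
    # 1-D Fourier law: k = Q_net * L / (A * dT), where Q_net is the heater
    # power minus the measured heat that bypasses the specimen.
    q_net = q_heater - q_loss
    return q_net * thickness / (area * delta_t)

# Hypothetical small, highly insulating specimen
k = thermal_conductivity(q_heater=0.010,   # W, heater disc power
                         q_loss=0.004,     # W, measured bypass heat
                         thickness=0.003,  # m
                         area=5.0e-4,      # m^2
                         delta_t=1.2)      # K, across the specimen
```

    The resulting k of about 0.03 W/(m·K) is of the same order as air (~0.026 W/(m·K)), the regime the apparatus is designed for.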

  11. Calibration and temperature correction of a V-block refractometer

    NASA Astrophysics Data System (ADS)

    Le Menn, Marc

    2018-03-01

    V-block refractometers have been used since the 1940s to retrieve the refractive index values of substances or optical glasses. When used outside laboratories, they are subjected to temperature variations, which degrade their accuracy by changing the refractive index of the glasses and the length of the prisms. This paper proposes a method to calibrate a double-prism V-block refractometer by retrieving the values of two coefficients at a constant temperature and by applying corrections to these coefficients when the instrument is used at different temperatures. The method is applied to calibrate a NOSS instrument in salinity, which can be used at sea on drifting floats, and the results show that measurement errors can be reduced by a factor of 5.8.
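    The abstract does not state the calibration model, so the following sketch assumes a hypothetical two-coefficient form n = A + B·raw, with each coefficient corrected linearly in temperature; only the structure of the correction, not the functional form, is taken from the paper:

```python
def corrected_index(raw, a0, b0, da_dt, db_dt, temp, t_ref=20.0):
    # Hypothetical two-coefficient calibration n = A + B * raw, with both
    # coefficients drifting linearly in temperature. The actual model and
    # coefficient values of the instrument are not given in the abstract.
    a = a0 + da_dt * (temp - t_ref)
    b = b0 + db_dt * (temp - t_ref)
    return a + b * raw

# Example: evaluate the calibration 10 K below the reference temperature
n_cold = corrected_index(0.50, a0=1.3300, b0=0.0200,
                         da_dt=-1.0e-5, db_dt=2.0e-6, temp=10.0)
```

    Whatever the instrument's true model, the pattern is the same: calibrate the coefficients at one temperature, then shift them by measured temperature sensitivities in the field.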

  12. Figure correction of multilayer coated optics

    DOEpatents

    Chapman, Henry N.; Taylor, John S.

    2010-02-16

    A process is provided for producing near-perfect optical surfaces, for EUV and soft-x-ray optics. The method involves polishing or otherwise figuring the multilayer coating that has been deposited on an optical substrate, in order to correct for errors in the figure of the substrate and coating. A method such as ion-beam milling is used to remove material from the multilayer coating by an amount that varies in a specified way across the substrate. The phase of the EUV light that is reflected from the multilayer will be affected by the amount of multilayer material removed, but this effect will be reduced by a factor of 1-n as compared with height variations of the substrate, where n is the average refractive index of the multilayer.
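    The factor of 1-n quoted above has a practical consequence: to correct a given reflected-wavefront height error, roughly 1/(1-n) times as much multilayer material must be milled away. A sketch (the n value is assumed for illustration, not taken from the patent):

```python
def milling_depth_for_error(height_error, n_avg=0.95):
    # Removing multilayer material of thickness d changes the reflected
    # wavefront height by only (1 - n_avg) * d, so correcting a figure
    # error of height_error requires milling d = height_error / (1 - n_avg).
    return height_error / (1.0 - n_avg)

depth = milling_depth_for_error(2.0e-9)  # correct a 2 nm figure error
```

    With the assumed n of 0.95, a 2 nm figure error calls for about 40 nm of removal; this desensitization is what makes the process controllable.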

  13. [Toric add-on intraocular lenses for correction of high astigmatism after pseudophakic keratoplasty].

    PubMed

    Hassenstein, A; Niemeck, F; Giannakakis, K; Klemm, M

    2017-06-01

    Perforating keratoplasty shows good morphological results with a clear cornea; however, a limiting factor is often the resulting astigmatism, which cannot be corrected with either glasses or contact lenses (CL) in up to 20% of patients. We retrospectively investigated 15 patients who, after pseudophakic perforating keratoplasty, received implantation of toric add-on intraocular lenses (IOL) to correct astigmatism. The mean preoperative astigmatism of 6.5 diopters (dpt) was reduced to a mean postoperative value of 1.0 dpt. Mean visual acuity improved from a preoperative value of sc <0.05 (cc 0.6) to a postoperative value of sc 0.4 (cc 0.63). There were no complications except for one case of a lens extension tear. Based on this good experience, we now offer toric add-on IOLs to all patients with pseudophakic perforating keratoplasty whose astigmatism cannot be corrected, or is only insufficiently corrected, by conservative methods.

  14. [A practical procedure to improve the accuracy of radiochromic film dosimetry: integration of a uniformity correction method with a red/blue correction method].

    PubMed

    Uehara, Ryuzo; Tachibana, Hidenobu; Ito, Yasushi; Yoshino, Shinichi; Matsubayashi, Fumiyasu; Sato, Tomoharu

    2013-06-01

    It has been reported that light scattering can worsen the accuracy of dose distribution measurements made with radiochromic film. The purpose of this study was to investigate the accuracy of two different films, EDR2 and EBT2, as film dosimetry tools, and to evaluate the effectiveness of a correction method for the non-uniformity caused by the EBT2 film and light scattering. The efficacy of this correction method integrated with the red/blue correction method was also assessed. EDR2 and EBT2 films were read using a flatbed charge-coupled device scanner (EPSON 10000G). Dose differences along the axis perpendicular to the scanner lamp movement axis were within 1% with EDR2 but exceeded 3% (maximum: +8%) with EBT2. The non-uniformity correction method, after a single film exposure, was applied to the readout of the films, and corrected dose distribution data were subsequently created. The correction method improved pass ratios in the dose difference evaluation by more than 10% compared with no correction. The red/blue correction method gave a further 5% improvement over the standard procedure that employs the red channel only. The correction method with EBT2 proved able to rapidly correct non-uniformity, and has potential for routine clinical IMRT dose verification when EBT2 is required to match the accuracy of EDR2. The red/blue correction method may improve the accuracy further, but we recommend applying it carefully, with an understanding of how EBT2 behaves under both the red-only and the red/blue procedures.

  15. A new method for weakening the combined effect of residual errors on multibeam bathymetric data

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue

    2014-12-01

    Multibeam bathymetric systems (MBS) have been widely applied in marine surveying to provide high-resolution seabed topography. However, several factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude and the misalignment angle of the transducer. Although these factors are corrected rigorously in bathymetric data processing, the final bathymetric result is still affected by their residual errors, and in deep water the result often cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken it using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. The method involves four steps: separation of the low-frequency and high-frequency parts of the bathymetric data, reconstruction of the trend of the actual seabed topography, merging of the actual trend with the extracted microtopography, and accuracy evaluation. Experimental results show that the proposed method weakens the combined effect of residual errors on multibeam bathymetric data and efficiently improves the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
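
    The first step, separating a low-frequency trend from high-frequency microtopography, can be sketched with a simple moving-average low-pass filter standing in for the paper's spectral separation (the window length and the profile below are illustrative assumptions):

```python
import numpy as np

def separate_bands(depths, window=15):
    """Split an along-track depth profile into a low-frequency part
    (topographic trend plus the systematic residual-error signal) and a
    high-frequency part (microtopography). A boxcar moving average
    stands in for the paper's frequency-spectrum separation; the window
    length is an illustrative assumption."""
    kernel = np.ones(window) / window
    # Edge padding keeps the filtered profile the same length as the input
    padded = np.pad(depths, window // 2, mode="edge")
    low = np.convolve(padded, kernel, mode="valid")[: len(depths)]
    return low, depths - low

# Hypothetical profile: linear trend plus fine-scale relief
x = np.linspace(0.0, 10.0, 500)
profile = -1000.0 + 5.0 * x + 0.5 * np.sin(40.0 * x)
low, high = separate_bands(profile)
```

    The two parts sum back to the original profile exactly, so the later merging step loses no information.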

  16. Higgs boson gluon-fusion production in QCD at three loops.

    PubMed

    Anastasiou, Charalampos; Duhr, Claude; Dulat, Falko; Herzog, Franz; Mistlberger, Bernhard

    2015-05-29

    We present the cross section for the production of a Higgs boson at hadron colliders at next-to-next-to-next-to-leading order (N^{3}LO) in perturbative QCD. The calculation is based on a method to perform a series expansion of the partonic cross section around the threshold limit to an arbitrary order. We perform this expansion to sufficiently high order to obtain the value of the hadronic cross section at N^{3}LO in the large top-mass limit. For renormalization and factorization scales equal to half the Higgs boson mass, the N^{3}LO corrections are of the order of +2.2%. The total scale variation at N^{3}LO is 3%, reducing the uncertainty due to missing higher order QCD corrections by a factor of 3.

  17. Testing the performance of dosimetry measurement standards for calibrating area and personnel dosimeters

    NASA Astrophysics Data System (ADS)

    Walwyn-Salas, G.; Czap, L.; Gomola, I.; Tamayo-García, J. A.

    2016-07-01

    The cylindrical NE2575 and spherical PTW32002 chamber types were tested in this paper to determine their performance at different source-chamber distances, field sizes and two radiation qualities. To ensure accurate measurement, a correction factor needs to be applied to NE2575 measurements at different distances because of the difference found between the reference point defined by the manufacturer and the effective point of measurement. This correction factor for the NE2575 secondary standard of the Center for Radiation Protection and Hygiene of Cuba was assessed with a 0.3% uncertainty using the results of three methods. Laboratories that use NE2575 chambers should take into consideration the performance characteristics tested in this paper to obtain accurate measurements.

  18. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy

    PubMed Central

    GURKA, MATTHEW J; KUPERMINC, MICHELLE N; BUSBY, MARJORIE G; BENNIS, JACEY A; GROSSBERG, RICHARD I; HOULIHAN, CHRISTINE M; STEVENSON, RICHARD D; HENDERSON, RICHARD C

    2010-01-01

    AIM To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). METHOD Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I–V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. RESULTS Slaughter’s equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat −9.6/100 [SD 6.2]; 95% confidence interval [CI] −11.0 to −8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI −1.0 to 1.3) than existing equations. INTERPRETATION A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP. PMID:19811518
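
    A sketch of the approach, using the commonly cited Slaughter coefficients for prepubescent white children and the abstract's mean underestimation of 9.6 as a single additive offset; the published model varies the correction by sex, race, GMFCS level, size and pubertal status, so both the coefficients and the default offset here are illustrative and should be checked against the original tables:

```python
def slaughter_percent_fat(triceps_mm, subscapular_mm, female=False):
    """Slaughter-type quadratic skinfold equation (valid for skinfold
    sums <= 35 mm). Coefficients are the commonly cited ones for
    prepubescent white children; verify before any real use."""
    s = triceps_mm + subscapular_mm
    if female:
        return 1.33 * s - 0.013 * s**2 - 2.5
    return 1.21 * s - 0.008 * s**2 - 1.7

def corrected_percent_fat(triceps_mm, subscapular_mm, female=False, offset=9.6):
    """Add a simple correction offset in the spirit of the paper's
    corrected equations; 9.6 is the mean underestimation reported in
    the abstract, used here as a single illustrative constant."""
    return slaughter_percent_fat(triceps_mm, subscapular_mm, female) + offset
```
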

  19. The effects of corrective surgery on endothelial biomarkers and anthropometric data in children with congenital heart disease.

    PubMed

    Chung, Hung-Tao; Chang, Yu-Sheng; Liao, Sui-Ling; Lai, Shen-Hao

    2017-04-01

    Objective To investigate the influence of surgical correction on biomarkers of endothelial dysfunction in children with congenital heart disease and to evaluate anthropometric data. Methods Children with pulmonary hypertension (PH) or Tetralogy of Fallot (TOF) who were scheduled for corrective surgery were enrolled in this prospective study. Age-matched healthy children were included as controls. Demographic, haemodynamic and cardiac ultrasonography data were collected. Blood samples were taken pre-surgery, 24-48 hours post-surgery and again 3-6 months later. Several biomarkers (protein C, soluble platelet selectin [CD62P], soluble endothelium selectin [CD62E], soluble leukocyte selectin [CD62L], plasma von Willebrand factor [vWF], atrial natriuretic peptide [ANP], brain natriuretic peptide [BNP] and insulin-like growth factor-1 [IGF-1]) were measured. Results Sixty-three children (32 with PH, 15 with TOF, and 16 controls) were enrolled. No significant differences between the PH and TOF groups were observed in the expression of biomarkers pre- and post-surgery. IGF-1 levels were closely related to anthropometric data, particularly in children with PH. Expression of IGF-1 and weight/height normalized after corrective surgery. Conclusions No significant endothelial dysfunction was observed in children with PH or TOF before or after corrective surgery. Significant retardation of growth, particularly weight, was found before surgery and may be related to IGF-1 suppression.

  20. Fully Differential Vector-Boson-Fusion Higgs Production at Next-to-Next-to-Leading Order.

    PubMed

    Cacciari, Matteo; Dreyer, Frédéric A; Karlberg, Alexander; Salam, Gavin P; Zanderighi, Giulia

    2015-08-21

    We calculate the fully differential next-to-next-to-leading-order (NNLO) corrections to vector-boson fusion (VBF) Higgs boson production at proton colliders, in the limit in which there is no cross talk between the hadronic systems associated with the two protons. We achieve this using a new "projection-to-Born" method that combines an inclusive NNLO calculation in the structure-function approach and a suitably factorized next-to-leading-order VBF Higgs plus three-jet calculation, using appropriate Higgs plus two-parton counterevents. An earlier calculation of the fully inclusive cross section had found small NNLO corrections, at the 1% level. In contrast, the cross section after typical experimental VBF cuts receives NNLO contributions of about (5-6)%, while differential distributions show corrections of up to (10-12)% for some standard observables. The corrections are often outside the next-to-leading-order scale-uncertainty band.

  1. Scatter correction method for x-ray CT using primary modulation: Phantom studies

    PubMed Central

    Gao, Hewei; Fahrig, Rebecca; Bennett, N. Robert; Sun, Mingshan; Star-Lack, Josh; Zhu, Lei

    2010-01-01

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: A Catphan©600 phantom, an anthropomorphic chest phantom, and the Catphan©600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan©600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast to noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study is to investigate the impact of object size on the efficiency of our method. 
The scatter-to-primary ratio estimation error on the Catphan©600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm in the minor axis and 38 cm in the major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In the three phantom studies, good scatter correction performance of the proposed method has been demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation that possesses a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy. PMID:20229902

  2. Trust in Leadership DEOCS 4.1 Construct Validity Summary

    DTIC Science & Technology

    2017-08-01

    Item Corrected Item-Total Correlation Cronbach’s Alpha if Item Deleted Four-point Scale Items I can depend on my immediate supervisor to meet...1974) were used to assess the fit between the data and the factor. The BTS hypothesizes that the correlation matrix is an identity matrix. The...to reject the null hypothesis that the correlation matrix is an identity, and to conclude that the factor analysis is an appropriate method to

  3. Sweep Width Estimation for Ground Search and Rescue

    DTIC Science & Technology

    2004-12-30

    Develop data compatible with search planning and POD estimation methods that are designed to use sweep width data. An experimental...important for Park Rangers and man-trackers. Search experience was expected to be a significant correction factor. However, the results indicate...

  4. Analysis of Developmental Data: Comparison Among Alternative Methods

    ERIC Educational Resources Information Center

    Wilson, Ronald S.

    1975-01-01

    To examine the ability of the correction factor epsilon to counteract statistical bias in univariate analysis, an analysis of variance (adjusted by epsilon) and a multivariate analysis of variance were performed on the same data. The results indicated that univariate analysis is a fully protected design when used with epsilon. (JMB)
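
    The correction factor epsilon referenced here is commonly computed in the Greenhouse-Geisser form; the following is a sketch of that standard formulation (not code from the paper), built on an orthonormal Helmert contrast basis:

```python
import numpy as np

def helmert(k):
    """Orthonormal Helmert contrast matrix (k x k-1): columns are unit
    vectors orthogonal to the constant vector and to each other."""
    c = np.zeros((k, k - 1))
    for j in range(1, k):
        c[:j, j - 1] = 1.0 / np.sqrt(j * (j + 1))
        c[j, j - 1] = -j / np.sqrt(j * (j + 1))
    return c

def gg_epsilon(data):
    """Greenhouse-Geisser epsilon for a subjects x conditions matrix:
    eps = tr(S)^2 / ((k-1) * tr(S @ S)) where S is the covariance of
    the orthonormal contrasts. Bounded by 1/(k-1) <= eps <= 1."""
    k = data.shape[1]
    sigma = np.cov(data, rowvar=False)
    c = helmert(k)
    s = c.T @ sigma @ c
    return np.trace(s) ** 2 / ((k - 1) * np.trace(s @ s))
```

    In the adjusted univariate F test, both degrees of freedom are multiplied by this epsilon to counteract the bias from violated sphericity.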

  5. Pavement crack detection combining non-negative feature with fast LoG in complex scene

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu

    2015-12-01

    Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Due to these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection that combines a non-negative feature with a fast LoG filter is proposed. The two key novelties and benefits of this approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) the non-negative feature is combined with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach detects crack regions more correctly than traditional methods.

  6. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress in parallel/vector computers has driven us to develop numerical integrators that exploit their computational power to the full while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be not very efficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while the vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) leads to an acceleration factor on the order of 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) also draws attention to Milankar's so-called "pipelined predictor corrector method", which is expected to give an acceleration factor of 3-4. We review these directions and discuss future prospects.

  7. Measurement of dissolved organic matter fluorescence in aquatic environments: An interlaboratory comparison

    USGS Publications Warehouse

    Murphy, Kathleen R.; Butler, Kenna D.; Spencer, Robert G. M.; Stedmon, Colin A.; Boehme, Jennifer R.; Aiken, George R.

    2010-01-01

    The fluorescent properties of dissolved organic matter (DOM) are often studied in order to infer DOM characteristics in aquatic environments, including source, quantity, composition, and behavior. While a potentially powerful technique, a single widely implemented standard method for correcting and presenting fluorescence measurements is lacking, leading to difficulties when comparing data collected by different research groups. This paper reports on a large-scale interlaboratory comparison in which natural samples and well-characterized fluorophores were analyzed in 20 laboratories in the U.S., Europe, and Australia. Shortcomings were evident in several areas, including data quality-assurance, the accuracy of spectral correction factors used to correct EEMs, and the treatment of optically dense samples. Data corrected by participants according to individual laboratory procedures were more variable than when corrected under a standard protocol. Wavelength dependency in measurement precision and accuracy were observed within and between instruments, even in corrected data. In an effort to reduce future occurrences of similar problems, algorithms for correcting and calibrating EEMs are described in detail, and MATLAB scripts for implementing the study's protocol are provided. Combined with the recent expansion of spectral fluorescence standards, this approach will serve to increase the intercomparability of DOM fluorescence studies.
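
    The core correction step, multiplying an excitation-emission matrix (EEM) by instrument-specific spectral correction factors after optional blank subtraction, can be sketched as follows; the factor vectors are placeholders for the calibrated lamp/detector curves the study standardizes:

```python
import numpy as np

def correct_eem(raw_eem, ex_corr, em_corr, blank_eem=None):
    """Apply excitation and emission spectral correction factors to an
    EEM (rows = emission wavelengths, columns = excitation wavelengths),
    with optional blank subtraction. The factor vectors here stand in
    for instrument calibration curves and are illustrative only."""
    eem = np.asarray(raw_eem, dtype=float)
    if blank_eem is not None:
        eem = eem - np.asarray(blank_eem, dtype=float)
    return eem * np.asarray(em_corr)[:, None] * np.asarray(ex_corr)[None, :]
```
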

  8. Single Phase Passive Rectification Versus Active Rectification Applied to High Power Stirling Engines

    NASA Technical Reports Server (NTRS)

    Santiago, Walter; Birchenough, Arthur G.

    2006-01-01

    Stirling engine converters are being considered as potential candidates for the high power energy conversion systems required by future NASA exploration missions. These engines typically contain two major moving parts, the displacer and the piston, with a linear alternator attached to the piston to produce a single phase sinusoidal waveform at a specific electric frequency. Since all Stirling engines perform at low electrical frequencies (less than or equal to 100 Hz), space exploration missions that employ these engines will be required to use a DC power management and distribution (PMAD) system instead of an AC PMAD system to save on space and weight. Therefore, to supply such DC power, an AC to DC converter is connected to the Stirling engine. There are two types of AC to DC converters that can be employed: a passive full bridge diode rectifier and an active switching full bridge rectifier. Due to the inherent line inductance of the Stirling Engine-Linear Alternator (SE-LA), the sinusoidal voltage and current will be phase shifted, producing a power factor below 1. In order to keep the power factor close to unity, both AC to DC converter topologies implement power factor correction. This paper discusses these power factor correction methods as well as their impact on overall mass for exploration applications. Simulation results for both AC to DC converter topologies with power factor correction as a function of output power and SE-LA line inductance impedance are presented and compared.
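
    The displacement power factor caused by the alternator line inductance follows directly from the series R-L phase shift; a minimal sketch, assuming an idealized sinusoidal source and a purely resistive load:

```python
import math

def power_factor(freq_hz, line_inductance_h, load_resistance_ohm):
    """Displacement power factor of a sinusoidal source feeding a
    series R-L circuit (a stand-in for the SE-LA line inductance):
    pf = cos(atan(w * L / R)) with w = 2 * pi * f."""
    omega = 2.0 * math.pi * freq_hz
    return math.cos(math.atan(omega * line_inductance_h / load_resistance_ohm))
```

    With zero line inductance the current stays in phase and the power factor is unity; any inductance pulls it below 1, which is what the correction circuitry compensates.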

  9. Continuous correction of differential path length factor in near-infrared spectroscopy

    PubMed Central

    Moore, Jason H.; Diamond, Solomon G.

    2013-01-01

    Abstract. In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method. PMID:23640027

  10. Continuous correction of differential path length factor in near-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Talukdar, Tanveer; Moore, Jason H.; Diamond, Solomon G.

    2013-05-01

    In continuous-wave near-infrared spectroscopy (CW-NIRS), changes in the concentration of oxyhemoglobin and deoxyhemoglobin can be calculated by solving a set of linear equations from the modified Beer-Lambert Law. Cross-talk error in the calculated hemodynamics can arise from inaccurate knowledge of the wavelength-dependent differential path length factor (DPF). We apply the extended Kalman filter (EKF) with a dynamical systems model to calculate relative concentration changes in oxy- and deoxyhemoglobin while simultaneously estimating relative changes in DPF. Results from simulated and experimental CW-NIRS data are compared with results from a weighted least squares (WLSQ) method. The EKF method was found to effectively correct for artificially introduced errors in DPF and to reduce the cross-talk error in simulation. With experimental CW-NIRS data, the hemodynamic estimates from EKF differ significantly from the WLSQ (p<0.001). The cross-correlations among residuals at different wavelengths were found to be significantly reduced by the EKF method compared to WLSQ in three physiologically relevant spectral bands 0.04 to 0.15 Hz, 0.15 to 0.4 Hz and 0.4 to 2.0 Hz (p<0.001). This observed reduction in residual cross-correlation is consistent with reduced cross-talk error in the hemodynamic estimates from the proposed EKF method.
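
    The baseline described here amounts to solving the modified Beer-Lambert system for the two concentration changes; a minimal unweighted least-squares sketch (the WLSQ method additionally weights residuals, and the extinction coefficients and DPF values below are placeholders, not tabulated values):

```python
import numpy as np

def mbll_concentrations(delta_od, ext, dpf, distance_cm):
    """Solve the modified Beer-Lambert system
        delta_od[i] = sum_j ext[i, j] * dpf[i] * distance * dc[j]
    for the chromophore concentration changes dc (HbO, HbR) by least
    squares. Rows of ext are wavelengths, columns are chromophores;
    values used here are illustrative, not tabulated coefficients."""
    a = ext * (np.asarray(dpf, float) * distance_cm)[:, None]
    dc, *_ = np.linalg.lstsq(a, delta_od, rcond=None)
    return dc
```

    Because DPF multiplies each row of the system matrix, any error in the assumed DPF leaks between the two recovered concentrations, which is the cross-talk the EKF approach is designed to reduce.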

  11. Analytic image reconstruction from partial data for a single-scan cone-beam CT with scatter correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Min, Jonghwan; Pua, Rizza; Cho, Seungryong, E-mail: scho@kaist.ac.kr

    Purpose: A beam-blocker composed of multiple strips is a useful gadget for scatter correction and/or for dose reduction in cone-beam CT (CBCT). However, the use of such a beam-blocker yields cone-beam data that can be challenging for accurate image reconstruction from a single scan in the filtered-backprojection framework. The focus of this work was to develop an analytic image reconstruction method for CBCT that can be directly applied to partially blocked cone-beam data in conjunction with scatter correction. Methods: The authors developed a rebinned backprojection-filtration (BPF) algorithm for reconstructing images from partially blocked cone-beam data in a circular scan. The authors also proposed a beam-blocking geometry that considers data redundancy, such that an efficient scatter estimate can be acquired and sufficient data for BPF image reconstruction can be secured at the same time from a single scan without any blocker motion. Additionally, a scatter correction method and a noise reduction scheme have been developed. The authors performed both simulation and experimental studies to validate the rebinned BPF algorithm for image reconstruction from partially blocked cone-beam data. Quantitative evaluations of the reconstructed image quality were performed in the experimental studies. Results: The simulation study revealed that the developed reconstruction algorithm successfully reconstructs images from the partial cone-beam data. In the experimental study, the proposed method effectively corrected for the scatter in each projection and reconstructed scatter-corrected images from a single scan. Reduction of cupping artifacts and an enhancement of the image contrast have been demonstrated. The image contrast increased by a factor of about 2, and the image accuracy in terms of root-mean-square error with respect to the fan-beam CT image improved by more than 30%.
Conclusions: The authors have successfully demonstrated that the proposed scanning method and image reconstruction algorithm can effectively estimate the scatter in cone-beam projections and produce tomographic images of nearly scatter-free quality. The authors believe that the proposed method would provide a fast and efficient CBCT scanning option for various applications, particularly head-and-neck scans.

  12. Artificial intelligence techniques for automatic screening of amblyogenic factors.

    PubMed

    Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard

    2008-01-01

    To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the "gold standard" specialist examination with a "refer/do not refer" decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as well as significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than -7. Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years.
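
    The tenfold testing method used to evaluate the classifiers can be sketched as a deterministic k-fold index generator; the interleaved fold assignment is one simple choice, not necessarily the authors':

```python
def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold evaluation: each
    sample lands in exactly one test fold, and the remaining folds form
    the training set for that round."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test
```
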

  13. Partial volume correction using cortical surfaces

    NASA Astrophysics Data System (ADS)

    Blaasvær, Kamille R.; Haubro, Camilla D.; Eskildsen, Simon F.; Borghammer, Per; Otzen, Daniel; Ostergaard, Lasse R.

    2010-03-01

    Partial volume effect (PVE) in positron emission tomography (PET) leads to inaccurate estimation of regional metabolic activities among neighbouring tissues with different tracer concentrations. This may be one of the main limiting factors in the utilization of PET in clinical practice. Partial volume correction (PVC) methods have been widely studied to address this issue. MRI-based PVC methods are well established.1 Their performance depends on the quality of the co-registration of the MR and PET datasets, on the correctness of the estimated point-spread function (PSF) of the PET scanner, and largely on the performance of the segmentation method that divides the brain into tissue compartments.1, 2 In the present study a method for PVC is suggested that utilizes cortical surfaces to obtain detailed anatomical information. The objectives are to improve the performance of PVC, to facilitate a study of the relationship between metabolic activity in the cerebral cortex and cortical thickness, and to obtain an improved visualization of PET data. The gray matter metabolic activity after PVC was recovered to 99.7-99.8% of the true activity when testing on simple simulated data with different PSFs, and to 97.9-100% when testing on simulated brain PET data at different cortical thicknesses. When studying the relationship between metabolic activities and anatomical structures, it was shown on simulated brain PET data that it is important to correct for PVE in order to obtain the true relationship.

  14. Stationary table CT dosimetry and anomalous scanner-reported values of CTDI{sub vol}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dixon, Robert L., E-mail: rdixon@wfubmc.edu; Boone, John M.

    2014-01-15

    Purpose: Anomalous, scanner-reported values of CTDI{sub vol} for stationary phantom/table protocols (having elevated values of CTDI{sub vol} over 300% higher than the actual dose to the phantom) have been observed, which are well beyond the typical accuracy expected of CTDI{sub vol} as a phantom dose. Recognition of these outliers as “bad data” is important to users of CT dose index tracking systems (e.g., ACR DIR), and a method for recognition and correction is provided. Methods: Rigorous methods and equations are presented which describe the dose distributions for stationary-table CT. A comparison with formulae for scanner-reported values of CTDI{sub vol} clearly identifies the source of these anomalies. Results: For the stationary table, use of the CTDI{sub 100} formula (applicable to a moving phantom only) overestimates the dose due to extra scatter and also includes an overbeaming correction, both of which are nonexistent when the phantom (or patient) is held stationary. The reported DLP remains robust for the stationary phantom. Conclusions: The CTDI paradigm does not apply in the case of a stationary phantom, and simpler nonintegral equations suffice. A method of correcting the currently reported CTDI{sub vol} using the approach-to-equilibrium formula H(a) and an overbeaming correction factor serves to scale the reported CTDI{sub vol} values to more accurate levels for stationary-table CT, as well as serving as an indicator in the detection of “bad data.”

  15. Correction on the distortion of Scheimpflug imaging for dynamic central corneal thickness

    NASA Astrophysics Data System (ADS)

    Li, Tianjie; Tian, Lei; Wang, Like; Hon, Ying; Lam, Andrew K. C.; Huang, Yifei; Wang, Yuanyuan; Zheng, Yongping

    2015-05-01

    The measurement of central corneal thickness (CCT) is important in ophthalmology. Most studies have concerned its value at a static state, while few have focused on its dynamic changes. The Corvis ST is the only commercial device currently available to visualize two-dimensional images of dynamic corneal profiles during an air puff indentation. However, the directly observed CCT is subject to Scheimpflug distortion, which can mislead clinical diagnosis. This study aimed to correct the distortion for better measurement of dynamic CCT. The optical path was first derived to account for the influence of relevant factors in the use of the Corvis ST. A correction method was then proposed to estimate the CCT at any time during air puff indentation. Simulation results demonstrated the feasibility of the intuitive calibration for measuring the stationary CCT and indicated the necessity of correction under the air puff. Experiments on three contact lenses and four human corneas verified the prediction that the CCT would be underestimated when the improper calibration for air was applied and overestimated when the calibration was conducted on contact lenses made of polymethylmethacrylate. Using the proposed method, the CCT was observed to increase by 66±34 μm at highest concavity in 48 normal human corneas.

  16. A new unequal-weighted triple-frequency first order ionosphere correction algorithm and its application in COMPASS

    NASA Astrophysics Data System (ADS)

    Liu, WenXiang; Mou, WeiHua; Wang, FeiXue

    2012-03-01

    With the introduction of triple-frequency signals in GNSS, multi-frequency ionosphere correction technology has been developing rapidly. References indicate that the triple-frequency second order ionosphere correction is worse than the dual-frequency first order ionosphere correction because of its larger noise amplification factor. On the assumption that the variances of the three frequency pseudoranges are equal, other references presented a triple-frequency first order ionosphere correction, which proved worse or better than the dual-frequency first order correction in different situations. In practice, the PN code rate, carrier-to-noise ratio, DLL parameters and multipath effect of each frequency are not the same, so the three pseudorange variances are unequal. Under this consideration, a new unequal-weighted triple-frequency first order ionosphere correction algorithm, which minimizes the variance of the ionosphere-free pseudorange combination, is proposed in this paper. It is found that conventional dual-frequency first order correction algorithms and the equal-weighted triple-frequency first order correction algorithm are special cases of the new algorithm. A new pseudorange variance estimation method based on the three-carrier combination is also introduced. Theoretical analysis shows that the new algorithm is optimal. An experiment with COMPASS G3 satellite observations demonstrates that the ionosphere-free pseudorange combination variance of the new algorithm is smaller than that of traditional multi-frequency correction algorithms.
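
    The constrained minimization described here, choosing combination weights that cancel the first-order ionosphere term and preserve the geometric range while minimizing the combined variance, can be sketched with the standard equality-constrained least-squares solution; the frequencies and sigma values in the test are illustrative:

```python
import numpy as np

def iono_free_weights(freqs_hz, sigmas_m):
    """Weights a for the combination sum(a_i * P_i) of three pseudoranges
    that minimize sum(a_i^2 * sigma_i^2) subject to
        sum(a_i) = 1                  (geometry preserved)
        sum(a_i * (f1/f_i)^2) = 0     (first-order ionosphere cancelled),
    solved via a = W^-1 C^T (C W^-1 C^T)^-1 b with W = diag(sigma^2)."""
    f = np.asarray(freqs_hz, dtype=float)
    mu = (f[0] / f) ** 2
    winv = np.diag(1.0 / np.asarray(sigmas_m, dtype=float) ** 2)
    c = np.vstack([np.ones_like(f), mu])
    b = np.array([1.0, 0.0])
    return winv @ c.T @ np.linalg.solve(c @ winv @ c.T, b)
```

    With equal sigmas this reduces to the equal-weighted triple-frequency combination, consistent with the abstract's remark that the earlier algorithms are special cases.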

  17. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on parameters such as the linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in it. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined as the ratio of the dose-averaged LET (LET_d) distribution of all protons and deuterons to that of the primary protons only. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these parameters was integrated into a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. Good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis of less than 0.05 keV μm-1. These results demonstrate the efficacy of our correction of the existing LET model to take into account secondary protons and deuterons along the pencil beam axis.
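    The definition of the correction factor in the abstract above, a ratio of dose-averaged LET values, can be illustrated with a toy calculation; all dose and LET numbers below are made up for illustration, not taken from the GATE/GEANT4 simulations:

```python
import numpy as np

def dose_averaged_let(dose, let):
    """Dose-averaged LET: sum(d_i * L_i) / sum(d_i)."""
    dose, let = np.asarray(dose, float), np.asarray(let, float)
    return float(np.sum(dose * let) / np.sum(dose))

# Made-up per-contribution doses (arbitrary units) and track LETs (keV/um)
# at one depth; secondaries (secondary protons + deuterons) deposit little
# dose but at much higher LET, which is what drives L_sec above 1.
primary_dose, primary_let = [0.8, 0.7, 0.9], [1.0, 1.1, 0.9]
secondary_dose, secondary_let = [0.10, 0.05], [4.0, 6.0]

let_primary = dose_averaged_let(primary_dose, primary_let)
let_all = dose_averaged_let(primary_dose + secondary_dose,
                            primary_let + secondary_let)

# L_sec: ratio of the all-particle LET_d to the primary-only LET_d,
# applied multiplicatively to an analytical (Wilkens-type) LET value.
L_sec = let_all / let_primary
corrected = L_sec * let_primary
print(f"L_sec = {L_sec:.3f}")
```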

  18. Alignment algorithms and per-particle CTF correction for single particle cryo-electron tomography.

    PubMed

    Galaz-Montoya, Jesús G; Hecksel, Corey W; Baldwin, Philip R; Wang, Eryu; Weaver, Scott C; Schmid, Michael F; Ludtke, Steven J; Chiu, Wah

    2016-06-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tiltseries, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen, the cryo-electron microscopy (cryoEM) grid and/or the carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Alignment Algorithms and Per-Particle CTF Correction for Single Particle Cryo-Electron Tomography

    PubMed Central

    Galaz-Montoya, Jesús G.; Hecksel, Corey W.; Baldwin, Philip R.; Wang, Eryu; Weaver, Scott C.; Schmid, Michael F.; Ludtke, Steven J.; Chiu, Wah

    2016-01-01

    Single particle cryo-electron tomography (cryoSPT) extracts features from cryo-electron tomograms, followed by 3D classification, alignment and averaging to generate improved 3D density maps of such features. Robust methods to correct for the contrast transfer function (CTF) of the electron microscope are necessary for cryoSPT to reach its resolution potential. Many factors can make CTF correction for cryoSPT challenging, such as lack of eucentricity of the specimen stage, inherent low dose per image, specimen charging, beam-induced specimen motions, and defocus gradients resulting both from specimen tilting and from unpredictable ice thickness variations. Current CTF correction methods for cryoET make at least one of the following assumptions: that the defocus at the center of the image is the same across the images of a tiltseries, that the particles all lie at the same Z-height in the embedding ice, and/or that the specimen grid and carbon support are flat. These experimental conditions are not always met. We have developed a CTF correction algorithm for cryoSPT without making any of the aforementioned assumptions. We also introduce speed and accuracy improvements and a higher degree of automation to the subtomogram averaging algorithms available in EMAN2. Using motion-corrected images of isolated virus particles as a benchmark specimen, recorded with a DE20 direct detection camera, we show that our CTF correction and subtomogram alignment routines can yield subtomogram averages close to 4/5 Nyquist frequency of the detector under our experimental conditions. PMID:27016284

  20. Pipeline for illumination correction of images for high-throughput microscopy.

    PubMed

    Singh, S; Bray, M-A; Jones, T R; Carpenter, A E

    2014-12-01

    The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
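    A minimal sketch of retrospective illumination correction in the spirit of the abstract above, using the common mean-then-smooth estimate of the illumination function; the actual published pipeline may differ in detail, and the vignetting model here is synthetic:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def illumination_function(images, sigma=10):
    """Estimate the illumination pattern as the smoothed per-pixel mean of
    many images; averaging suppresses per-image content, and smoothing
    removes residual structure. Normalized so correction preserves scale."""
    illum = gaussian_filter(np.mean(np.stack(images), axis=0), sigma=sigma)
    return illum / illum.mean()

def correct(image, illum):
    """Retrospective (post-hoc) correction: divide out the illumination."""
    return image / illum

# Synthetic demo: a radial vignetting pattern applied to 20 noisy images
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
vignette = 1.0 - 0.4 * ((x - 64)**2 + (y - 64)**2) / 64**2
images = [vignette * (100 + rng.normal(0, 5, (128, 128))) for _ in range(20)]
illum = illumination_function(images)
flattened = correct(images[0], illum)
```

    Dividing rather than subtracting is the usual choice for fluorescence data, where non-uniform illumination scales intensities multiplicatively.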

  1. In situ genetic correction of the sickle cell anemia mutation in human induced pluripotent stem cells using engineered zinc finger nucleases.

    PubMed

    Sebastiano, Vittorio; Maeder, Morgan L; Angstman, James F; Haddad, Bahareh; Khayter, Cyd; Yeo, Dana T; Goodwin, Mathew J; Hawkins, John S; Ramirez, Cherie L; Batista, Luis F Z; Artandi, Steven E; Wernig, Marius; Joung, J Keith

    2011-11-01

    The combination of induced pluripotent stem cell (iPSC) technology and targeted gene modification by homologous recombination (HR) represents a promising new approach to generate genetically corrected, patient-derived cells that could be used for autologous transplantation therapies. This strategy has several potential advantages over conventional gene therapy including eliminating the need for immunosuppression, avoiding the risk of insertional mutagenesis by therapeutic vectors, and maintaining expression of the corrected gene by endogenous control elements rather than a constitutive promoter. However, gene targeting in human pluripotent cells has remained challenging and inefficient. Recently, engineered zinc finger nucleases (ZFNs) have been shown to substantially increase HR frequencies in human iPSCs, raising the prospect of using this technology to correct disease-causing mutations. Here, we describe the generation of iPSC lines from sickle cell anemia patients and in situ correction of the disease-causing mutation using three ZFN pairs made by the publicly available oligomerized pool engineering (OPEN) method. Gene-corrected cells retained full pluripotency and a normal karyotype following removal of the reprogramming factor and drug-resistance genes. By testing various conditions, we also demonstrated that HR events in human iPSCs can occur as far as 82 bp from a ZFN-induced break. Our approach delineates a roadmap for using ZFNs made by an open-source method to achieve efficient, transgene-free correction of monogenic disease mutations in patient-derived iPSCs. Our results provide an important proof of principle that ZFNs can be used to produce gene-corrected human iPSCs that could be used for therapeutic applications. Copyright © 2011 AlphaMed Press.

  2. Influence of Clinical Factors and Magnification Correction on Normal Thickness Profiles of Macular Retinal Layers Using Optical Coherence Tomography

    PubMed Central

    Higashide, Tomomi; Ohkubo, Shinji; Hangai, Masanori; Ito, Yasuki; Shimada, Noriaki; Ohno-Matsui, Kyoko; Terasaki, Hiroko; Sugiyama, Kazuhisa; Chew, Paul; Li, Kenneth K. W.; Yoshimura, Nagahisa

    2016-01-01

    Purpose To identify the factors which significantly contribute to the thickness variabilities in macular retinal layers measured by optical coherence tomography with or without magnification correction of analytical areas in normal subjects. Methods The thickness of retinal layers {retinal nerve fiber layer (RNFL), ganglion cell layer plus inner plexiform layer (GCLIPL), RNFL plus GCLIPL (ganglion cell complex, GCC), total retina, total retina minus GCC (outer retina)} were measured by macular scans (RS-3000, NIDEK) in 202 eyes of 202 normal Asian subjects aged 20 to 60 years. The analytical areas were defined by three concentric circles (1-, 3- and 6-mm nominal diameters) with or without magnification correction. For each layer thickness, a semipartial correlation (sr) was calculated for explanatory variables including age, gender, axial length, corneal curvature, and signal strength index. Results Outer retinal thickness was significantly thinner in females than in males (sr2, 0.07 to 0.13) regardless of analytical areas or magnification correction. Without magnification correction, axial length had a significant positive sr with RNFL (sr2, 0.12 to 0.33) and a negative sr with GCLIPL (sr2, 0.22 to 0.31), GCC (sr2, 0.03 to 0.17), total retina (sr2, 0.07 to 0.17) and outer retina (sr2, 0.16 to 0.29) in multiple analytical areas. The significant sr in RNFL, GCLIPL and GCC became mostly insignificant following magnification correction. Conclusions The strong correlation between the thickness of inner retinal layers and axial length appeared to result from magnification effects. Outer retinal thickness may differ by gender and axial length independently of magnification correction. PMID:26814541

  3. Resolution of the COBE Earth sensor anomaly

    NASA Technical Reports Server (NTRS)

    Sedler, J.

    1993-01-01

    Since its launch on November 18, 1989, the Earth sensors on the Cosmic Background Explorer (COBE) have shown much greater noise than expected. The problem was traced to an error in Earth horizon acquisition-of-signal (AOS) times. Due to this error, the AOS timing correction was ignored, causing Earth sensor split-to-index (SI) angles to be incorrectly time-tagged to minor frame synchronization times. Resulting Earth sensor residuals, based on gyro-propagated fine attitude solutions, were as large as plus or minus 0.45 deg (much greater than the plus or minus 0.10 deg scanner specification (Reference 1)). Also, discontinuities in single-frame coarse attitude pitch and roll angles (as large as 0.80 and 0.30 deg, respectively) were noted several times during each orbit. However, over the course of the mission, each Earth sensor was observed to independently and unexpectedly reset and then reactivate into a new configuration. Although the telemetered AOS timing corrections are still in error, a procedure has been developed to approximate and apply these corrections. This paper describes the approach, analysis, and results of approximating and applying AOS timing adjustments to correct Earth scanner data. Furthermore, due to the continuing degradation of COBE's gyroscopes, gyro-propagated fine attitude solutions may soon become unavailable, requiring an alternative method for attitude determination. By correcting Earth scanner AOS telemetry, as described in this paper, more accurate single-frame attitude solutions are obtained. All aforementioned pitch and roll discontinuities are removed. When proper AOS corrections are applied, the standard deviation of pitch residuals between coarse attitude and gyro-propagated fine attitude solutions decreases by a factor of 3. Also, the overall standard deviation of SI residuals from fine attitude solutions decreases by a factor of 4 (meeting sensor specifications) when AOS corrections are applied.

  4. SU-F-BRD-15: Quality Correction Factors in Scanned Or Broad Proton Therapy Beams Are Indistinguishable

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorriaux, J; Lee, J; ICTEAM Institute, Universite catholique de Louvain, Louvain-la-Neuve

    2015-06-15

    Purpose: The IAEA TRS-398 code of practice details the reference conditions for reference dosimetry of proton beams using ionization chambers and the required beam quality correction factors (kQ). Pencil beam scanning (PBS) requires multiple spots to reproduce the reference conditions. The objective is to demonstrate, using Monte Carlo (MC) calculations, that kQ factors for broad beams can be used for scanned beams under the same reference conditions with no significant additional uncertainty. We consider hereafter the general Alfonso formalism (Alfonso et al, 2008) for non-standard beams. Methods: To approach the reference conditions and the associated dose distributions, PBS must combine many pencil beams with range modulation and shaping techniques different from those used in passive systems (broad beams). This might lead to a different energy spectrum at the measurement point. In order to evaluate the impact of these differences on kQ factors, ion chamber responses are computed with MC (Geant4 9.6) in a dedicated scanned pencil beam (Q-pcsr) producing a 10×10 cm2 composite field with a flat dose distribution from 10 to 16 cm depth. Ion chamber responses are also computed by MC in a broad beam with quality Q-ds (double scattering). The dose distribution of Q-pcsr matches the dose distribution of Q-ds. k(Q-pcsr,Q-ds) is computed for a 2×2×0.2 cm3 idealized air cavity and a realistic plane-parallel ion chamber (IC). Results: Under reference conditions, quality correction factors for a scanned composite field versus a broad beam are the same for the air cavity dose response, k(Q-pcsr,Q-ds) = 1.001±0.001, and for a Roos IC, k(Q-pcsr,Q-ds) = 0.999±0.005. Conclusion: Quality correction factors for ion chamber response in scanned and broad proton therapy beams are identical under reference conditions within the calculation uncertainties.
The results indicate that the quality correction factors published in IAEA TRS-398 can be used for scanned beams in the SOBP of a high-energy proton beam. Jefferson Sorriaux is financed by the Walloon Region under the convention 1217662. Jefferson Sorriaux is sponsored by a public-private partnership IBA - Walloon Region.

  5. Improved Kalman Filter Method for Measurement Noise Reduction in Multi Sensor RFID Systems

    PubMed Central

    Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon

    2011-01-01

    Recently, the range of available Radio Frequency Identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from the various sensors. In a multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q values is one of the most important design factors for good Kalman filter performance. For this reason, we propose an improved Kalman filter to enhance its noise-reduction capability. Only the measurement noise covariance was considered, because the system architecture is simple, and it was adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less Mean Squared Error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments. PMID:22346641

  6. Improved Kalman filter method for measurement noise reduction in multi sensor RFID systems.

    PubMed

    Eom, Ki Hwan; Lee, Seung Joon; Kyung, Yeo Sun; Lee, Chang Won; Kim, Min Chul; Jung, Kyung Kwon

    2011-01-01

    Recently, the range of available radio frequency identification (RFID) tags has been widened to include smart RFID tags which can monitor their varying surroundings. One of the most important factors for better performance of a smart RFID system is accurate measurement from the various sensors. In a multi-sensing environment, noisy signals are obtained because of the changing surroundings. We propose in this paper an improved Kalman filter method to reduce noise and obtain correct data. The performance of a Kalman filter is determined by the measurement and system noise covariances, usually called the R and Q variables in the Kalman filter algorithm. Choosing correct R and Q values is one of the most important design factors for good Kalman filter performance. For this reason, we propose an improved Kalman filter to enhance its noise-reduction capability. Only the measurement noise covariance was considered, because the system architecture is simple, and it was adjusted by a neural network. With this method, more accurate data can be obtained with smart RFID tags. In a simulation the proposed improved Kalman filter has 40.1%, 60.4% and 87.5% less mean squared error (MSE) than the conventional Kalman filter method for a temperature sensor, humidity sensor and oxygen sensor, respectively. The performance of the proposed method was also verified with some experiments.
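    A scalar Kalman filter makes the role of the R (measurement) and Q (process) covariances discussed in the two records above concrete. This sketch uses a fixed, hand-chosen R rather than the papers' neural-network adjustment; the sensor model and all numbers are illustrative:

```python
import numpy as np

def kalman_1d(measurements, q, r, x0, p0=1.0):
    """Scalar Kalman filter with a random-walk state model.
    q is the process noise covariance (Q) and r the measurement noise
    covariance (R); picking R well is the design factor the abstracts
    highlight (there, R is tuned by a neural network)."""
    x, p, out = x0, p0, []
    for z in measurements:
        p = p + q                  # predict: state variance grows by Q
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update with the measurement residual
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# Synthetic sensor: slowly drifting temperature plus white measurement noise
rng = np.random.default_rng(1)
truth = 25.0 + 0.002 * np.arange(500)
noisy = truth + rng.normal(0.0, 0.5, 500)
filtered = kalman_1d(noisy, q=1e-4, r=0.25, x0=noisy[0])

mse_raw = float(np.mean((noisy - truth)**2))
mse_kf = float(np.mean((filtered - truth)**2))
print(f"raw MSE {mse_raw:.4f} -> filtered MSE {mse_kf:.4f}")
```

    A smaller R makes the filter trust measurements more (faster but noisier); a larger R smooths more at the cost of lag, which is why tuning R to the actual sensor noise matters.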

  7. An in vitro verification of strength estimation for moving an 125I source during implantation in brachytherapy.

    PubMed

    Tanaka, Kenichi; Kajimoto, Tsuyoshi; Hayashi, Takahiro; Asanuma, Osamu; Hori, Masakazu; Kamo, Ken-Ichi; Sumida, Iori; Takahashi, Yutaka; Tateoka, Kunihiko; Bengua, Gerard; Sakata, Koh-Ichi; Endo, Satoru

    2018-04-11

    This study aims to demonstrate the feasibility of a method for estimating the strength of a moving brachytherapy source during implantation in a patient. Experiments were performed under the same conditions as in the actual treatment, except that the source was not implanted into a patient. The brachytherapy source selected for this study was 125I with an air kerma strength of 0.332 U (μGym2h-1), and the detector used was a plastic scintillator with dimensions of 10 cm × 5 cm × 5 cm. A calibration factor to convert the counting rate of the detector to the source strength was measured, and the accuracy of the proposed method was then investigated for a manually driven source. The accuracy was found to be under 10% when the shielding effect of additional needles implanted at other positions was corrected for, and about 30% when it was not. Even without the shielding correction, the proposed method can detect a dead or dropped source, implantation of a source with the wrong strength, and a mistake in the number of sources implanted. Furthermore, when the correction was applied, the achieved accuracy came within the 7% required to identify an Oncoseed 6711 (125I) seed of unintended strength among the commercially supplied values of 0.392, 0.462 and 0.533 U.
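    The estimation scheme in the abstract above reduces to multiplying the detector counting rate by a calibration factor, with an optional division by the transmission of any additional shielding needles. All numeric values here are hypothetical placeholders, not the paper's measured ones:

```python
# All numeric values here are hypothetical placeholders chosen to
# illustrate the scheme, not measurements from the paper.
calibration = 2.1e-5     # air-kerma strength (U) per count/s, from a reference source
count_rate = 16000.0     # observed counting rate (counts/s) for the moving source
needle_shielding = 0.95  # transmission through additional implant needles

def source_strength(count_rate, calibration, shielding=1.0):
    """Estimate source strength as calibration x counting rate; dividing by
    the shielding transmission undoes attenuation by the other needles."""
    return calibration * count_rate / shielding

uncorrected = source_strength(count_rate, calibration)
corrected = source_strength(count_rate, calibration, needle_shielding)

# Compare against an intended strength to flag a wrong-strength seed
# (the supplied strengths quoted in the abstract: 0.392, 0.462, 0.533 U).
intended = 0.392
deviation = abs(corrected - intended) / intended
print(f"estimate {corrected:.3f} U, deviation {100 * deviation:.1f}%")
```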

  8. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
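    Goal (1) of the record above, adjusting projection formulas for latitude so raster cells keep roughly equal ground size, can be sketched as follows; the equatorial row length is an arbitrary example value, and real implementations adjust the projection formulas rather than just the row lengths:

```python
import math

def cells_per_row(lat_deg, equator_cells=4320):
    """Cells in a raster row at a given latitude such that each cell keeps
    roughly the ground width of an equatorial cell: the longitudinal span
    of one degree shrinks as cos(latitude), so the row needs fewer cells."""
    return max(1, round(equator_cells * math.cos(math.radians(lat_deg))))

for lat in (0, 30, 60, 80):
    print(lat, cells_per_row(lat))
```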

  9. A Method for Calculating Viscosity and Thermal Conductivity of a Helium-Xenon Gas Mixture

    NASA Technical Reports Server (NTRS)

    Johnson, Paul K.

    2006-01-01

    A method for calculating viscosity and thermal conductivity of a helium-xenon (He-Xe) gas mixture was employed, and results were compared to AiResearch (part of Honeywell) analytical data. The method of choice was that presented by Hirschfelder with Singh's third-order correction factor applied to thermal conductivity. Values for viscosity and thermal conductivity were calculated over a temperature range of 400 to 1200 K for He-Xe gas mixture molecular weights of 20.183, 39.94, and 83.8 kg/kmol. First-order values for both transport properties were in good agreement with AiResearch analytical data. Third-order-corrected thermal conductivity values were all greater than AiResearch data, but were considered to be a better approximation of thermal conductivity because higher-order effects of mass and temperature were taken into consideration. Viscosity, conductivity, and Prandtl number were then compared to experimental data presented by Taylor.
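    For a flavor of the kinetic-theory mixture calculation in the record above, here is Wilke's semi-empirical mixing rule, a common simplification of the first-order Chapman-Enskog (Hirschfelder) result; it is not the paper's exact method, and the pure-gas viscosities below are rough illustrative values:

```python
import math

def wilke_viscosity(mu, M, x):
    """Mixture viscosity via Wilke's semi-empirical rule. mu: pure-species
    viscosities (Pa s), M: molar masses (kg/kmol), x: mole fractions."""
    mix = 0.0
    n = len(mu)
    for i in range(n):
        denom = 0.0
        for j in range(n):
            # Wilke interaction parameter phi_ij
            phi = (1.0 + math.sqrt(mu[i] / mu[j]) * (M[j] / M[i])**0.25)**2 \
                  / math.sqrt(8.0 * (1.0 + M[i] / M[j]))
            denom += x[j] * phi
        mix += x[i] * mu[i] / denom
    return mix

# He-Xe mixture with molecular weight 39.94 kg/kmol (one of the paper's
# cases); the pure-gas viscosities are rough ~400 K placeholder values.
M_he, M_xe = 4.0026, 131.29
mu_he, mu_xe = 2.4e-5, 2.9e-5                 # Pa s, illustrative
x_he = (M_xe - 39.94) / (M_xe - M_he)         # mole fraction of helium
mu_mix = wilke_viscosity([mu_he, mu_xe], [M_he, M_xe], [x_he, 1.0 - x_he])
print(f"x_He = {x_he:.3f}, mu_mix = {mu_mix:.2e} Pa s")
```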

  10. Air-kerma strength determination of a new directional 103Pd source

    PubMed Central

    Reed, Joshua L.; DeWerd, Larry A.; Culberson, Wesley S.

    2015-01-01

    Purpose: A new directional 103Pd planar source array called a CivaSheet™ has been developed by CivaTech Oncology, Inc., for potential use in low-dose-rate (LDR) brachytherapy treatments. The array consists of multiple individual polymer capsules called CivaDots, containing 103Pd and a gold shield that attenuates the radiation on one side, thus defining a hot and cold side. This novel source requires new methods to establish a source strength metric. The presence of gold material in such close proximity to the active 103Pd region causes the source spectrum to be significantly different than the energy spectra of seeds normally used in LDR brachytherapy treatments. In this investigation, the authors perform air-kerma strength (SK) measurements, develop new correction factors for these measurements based on an experimentally verified energy spectrum, and test the robustness of transferring SK to a well-type ionization chamber. Methods: SK measurements were performed with the variable-aperture free-air chamber (VAFAC) at the University of Wisconsin Medical Radiation Research Center. Subsequent measurements were then performed in a well-type ionization chamber. To realize the quantity SK from a directional source with gold material present, new methods and correction factors were considered. Updated correction factors were calculated using the mcnp 6 Monte Carlo code in order to determine SK with the presence of gold fluorescent energy lines. In addition to SK measurements, a low-energy high-purity germanium (HPGe) detector was used to experimentally verify the calculated spectrum, a sodium iodide (NaI) scintillating counter was used to verify the azimuthal and polar anisotropy, and a well-type ionization chamber was used to test the feasibility of disseminating SK values for a directional source within a cylindrically symmetric measurement volume. Results: The UW VAFAC was successfully used to measure the SK of four CivaDots with reproducibilities within 0.3%. 
Monte Carlo methods were used to calculate the UW VAFAC correction factors and the calculated spectrum emitted from a CivaDot was experimentally verified with HPGe detector measurements. The well-type ionization chamber showed minimal variation in response (<1.5%) as a function of source positioning angle, indicating that an American Association of Physicists in Medicine (AAPM) Accredited Dosimetry Calibration Laboratory calibrated well chamber would be a suitable device to transfer an SK-based calibration to a clinical user. SK per well-chamber ionization current ratios were consistent among the four dots measured. Additionally, the measurements and predictions of anisotropy show uniform emission within the solid angle of the VAFAC, which demonstrates the robustness of the SK measurement approach. Conclusions: This characterization of a new 103Pd directional brachytherapy source helps to establish calibration methods that could ultimately be used in the well-established AAPM Task Group 43 formalism. Monte Carlo methods accurately predict the changes in the energy spectrum caused by the fluorescent x-rays produced in the gold shield. PMID:26632069

  11. VizieR Online Data Catalog: PACS photometry of FIR faint stars (Klaas+, 2018)

    NASA Astrophysics Data System (ADS)

    Klaas, U.; Balog, Z.; Nielbock, M.; Mueller, T. G.; Linz, H.; Kiss, Cs.

    2018-01-01

    70, 100 and 160um photometry of FIR faint stars from PACS scan map and chop/nod measurements. For scan maps also the photometry of the combined scan and cross-scan maps (at 160um there are usually two scan and cross-scan maps each as complements to the 70 and 100um maps) is given. Note: Not all stars have measured fluxes in all three filters. Scan maps: The main observing mode was the point-source mini-scan-map mode; selected scan map parameters are given in column mparam. An outline of the data processing using the high-pass filter (HPF) method is presented in Balog et al. (2014ExA....37..129B). Processing proceeded from Herschel Science Archive SPG v13.1.0 level 1 products with HIPE version 15 build 165 for 70 and 100um maps and from Herschel Science Archive SPG v14.2.0 level 1 products with HIPE version 15 build 1480 for 160um maps. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 3.13, 2.76, and 4.12, respectively. The noise for the photometric aperture is calculated as sigaper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. The final stellar flux is derived as fstar=faper*caper/cc.
Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. Chop/nod observations: The chop/nod point-source mode is described in this paper. An outline of the data processing is presented in Nielbock et al. (2013ExA....36..631N). Processing proceeded from Herschel Science Archive SPG v11.1.0 level 1 products with HIPE version 13 build 2768. Gyro correction was applied for most of the cases to improve the pointing reconstruction performance. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 6.33, 4.22, and 7.81, respectively. The noise for the photometric aperture is calculated as sigaper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. (7 data files).
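    The noise and flux formulas quoted in the record above are simple enough to encode directly; the measurement values below are hypothetical, but the per-band constants are the ones stated for the scan maps:

```python
import math

# Per-band constants quoted in the record for the scan maps
# (70, 100, 160 um filters, in that order):
N_aper = [81.42, 74.12, 81.56]   # map pixels inside the photometric aperture
f_corr = [3.13, 2.76, 4.12]      # correlated-noise correction factors
c_aper = [1.61, 1.56, 1.56]      # aperture-correction factors
cc     = [1.016, 1.033, 1.074]   # colour corrections (5000 K blackbody SED)

def aperture_noise(sig_pix, band):
    """sigaper = sqrt(Naper) * fcorr * sigpix, as defined in the record."""
    return math.sqrt(N_aper[band]) * f_corr[band] * sig_pix

def stellar_flux(f_aper, band):
    """fstar = faper * caper / cc, as defined in the record."""
    return f_aper * c_aper[band] / cc[band]

# Hypothetical 70 um (band index 0) measurement in arbitrary flux units:
sig_pix, f_aper = 0.02, 5.0
snr = f_aper / aperture_noise(sig_pix, 0)
fstar = stellar_flux(f_aper, 0)
print(f"S/N = {snr:.1f}, fstar = {fstar:.3f}")
```

    For chop/nod data the same formulas apply with the larger correlated-noise factors (6.33, 4.22, 7.81) substituted for f_corr.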

  12. Dose-to-water conversion for the backscatter-shielded EPID: A frame-based method to correct for EPID energy response to MLC transmitted radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.

    2014-08-15

    Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has transmitted through the multileaf collimator (MLC) and to integrate this correction into the backscatter shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC-transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all Head and Neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, to give a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
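    The frame-by-frame correction described above can be sketched in a few lines of Python. Only the 1.3 scaling factor comes from the abstract; the function name, array shapes, and the boolean-mask input are illustrative assumptions.

```python
import numpy as np

# Under-response factor for MLC-transmitted radiation, as quoted in the abstract.
MLC_RESPONSE_FACTOR = 1.3

def integrate_corrected_frames(frames, mlc_masks):
    """Scale the under-MLC region of each EPID frame, then sum the
    corrected frames into a single integrated image.

    frames: sequence of 2D pixel arrays, one per acquired EPID frame
    mlc_masks: boolean arrays, True where a pixel lies below an MLC leaf
    """
    integrated = np.zeros_like(frames[0], dtype=float)
    for frame, mask in zip(frames, mlc_masks):
        f = frame.astype(float).copy()
        f[mask] *= MLC_RESPONSE_FACTOR  # correct under-response to MLC transmission
        integrated += f                 # accumulate into the integrated image
    return integrated
```

In the paper this corrected integrated image then feeds the BSS-EPID dose-to-water conversion; that downstream model is not sketched here.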

  13. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  14. An optically stimulated luminescence system to measure dose profiles in x-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Yukihara, E. G.; Ruan, C.; Gasparian, P. B. R.; Clouse, W. J.; Kalavagunta, C.; Ahmad, S.

    2009-10-01

    This paper describes an LED-based optically stimulated luminescence (OSL) system for dose profile measurements using OSL detector strips and investigates its performance in x-ray computed tomography (CT) dosimetry. To compensate for the energy response of the Al2O3:C OSL detectors, which have an effective atomic number of 11.28, field-specific energy correction factors were determined using two methods: (a) comparing the OSL profiles with ionization chamber point measurements (0.3 cm3 ionization chamber) and (b) comparing the OSL profiles integrated over a 100 mm length with 100 mm long pencil ionization chamber measurements. These correction factors were obtained for the CT body and head phantoms, central and peripheral positions and three x-ray tube potential differences (100 kVp, 120 kVp and 140 kVp). The OSL dose profiles corrected by the energy dependence agreed with the ionization chamber point measurements over the entire length of the phantom (300 mm). For 120 kVp x-ray tube potential difference, the CTDI100 values calculated using the OSL dose profiles corrected for the energy dependence and those obtained from an independent measurement with a 100 mm long pencil ionization chamber also agreed within ±5%.
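    The CTDI100 comparison above rests on integrating the energy-corrected dose profile over a 100 mm window. A minimal sketch of that standard definition follows; the beam-collimation parameters are placeholders, not values from the paper.

```python
import numpy as np

def ctdi100(z_mm, dose_profile, n_slices=1, nominal_width_mm=10.0):
    """CTDI100: integral of the dose profile D(z) over -50 mm..+50 mm,
    divided by the total nominal beam width n*T.

    z_mm: axial positions of the profile samples (mm)
    dose_profile: corrected dose values at those positions
    n_slices, nominal_width_mm: illustrative acquisition parameters
    """
    z = np.asarray(z_mm, dtype=float)
    d = np.asarray(dose_profile, dtype=float)
    w = (z >= -50.0) & (z <= 50.0)            # the 100 mm integration window
    zi, di = z[w], d[w]
    # trapezoidal integration over the window
    integral = float(np.sum((di[1:] + di[:-1]) / 2.0 * np.diff(zi)))
    return integral / (n_slices * nominal_width_mm)
```

Comparing this value against a 100 mm pencil ionization chamber reading is exactly the ±5% check reported in the abstract.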

  15. Research on the Application of Fast-steering Mirror in Stellar Interferometer

    NASA Astrophysics Data System (ADS)

    Mei, R.; Hu, Z. W.; Xu, T.; Sun, C. S.

    2017-07-01

    For a stellar interferometer, the fast-steering mirror (FSM) is widely used to correct wavefront tilt caused by atmospheric turbulence and internal instrument vibration, owing to its high resolution and fast response. In this study, the non-coplanar error between the FSM and the actuator deflection axis introduced by manufacture, assembly, and adjustment is analyzed. Using a numerical method, the additional optical path difference (OPD) caused by these factors is studied, and its effect on the tracking accuracy of the stellar interferometer is discussed. In addition, the starlight parallelism between the beams of the two arms is one of the main factors in the loss of fringe visibility. By analyzing the influence of wavefront tilt caused by atmospheric turbulence on fringe visibility, a simple and efficient real-time correction scheme for starlight parallelism is proposed based on a single array detector. The feasibility of this scheme is demonstrated by laboratory experiment. The results show that, after correction by the fast-steering mirror, the starlight parallelism preliminarily meets the wavefront-tilt requirement of the stellar interferometer.

  16. Factors affecting volume calculation with single photon emission tomography (SPECT) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, T.H.; Lee, K.H.; Chen, D.C.P.

    1985-05-01

    Several factors may influence the calculation of absolute volumes (VL) from SPECT images. The effect of these factors must be established to optimize the technique. The authors investigated the following on the VL calculations: % of background (BG) subtraction, reconstruction filters, sample activity, angular sampling, and edge detection methods. Transaxial images of a liver-trunk phantom filled with Tc-99m from 1 to 3 μCi/cc were obtained in a 64x64 matrix with a Siemens Rota Camera and MDS computer. Different reconstruction filters including Hanning 20, 32, 64 and Butterworth 20, 32 were used. Angular samplings were performed in 3 and 6 degree increments. ROI's were drawn manually and with an automatic edge detection program around the image after BG subtraction. VL's were calculated by multiplying the number of pixels within the ROI by the slice thickness and the x- and y-calibrations of each pixel. One or two pixels per slice thickness were applied in the calculation. An inverse correlation was found between the calculated VL and the % of BG subtraction (r=0.99 for 1, 2, 3 μCi/cc activity). Based on the authors' linear regression analysis, the correct liver VL was measured with about 53% BG subtraction. The reconstruction filters, slice thickness, and angular sampling had only minor effects on the calculated phantom volumes. Detection of the ROI automatically by the computer was not as accurate as the manual method. The authors conclude that the % of BG subtraction appears to be the most important factor affecting the VL calculation. With good quality control and appropriate reconstruction factors, correct VL calculations can be achieved with SPECT.
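    The volume formula in the abstract (ROI pixel count x pixel calibrations x slice thickness) can be sketched directly. Background subtraction is modeled here as a simple threshold at a fraction of the slice maximum; in the study the ROIs were drawn manually after subtraction, so the thresholding step is an assumption for illustration.

```python
import numpy as np

def spect_volume_cc(slices, bg_fraction, pixel_x_cm, pixel_y_cm, slice_cm):
    """VL = (pixels surviving background subtraction) x pixel area x slice
    thickness, summed over all transaxial slices.

    slices: sequence of 2D count arrays
    bg_fraction: % of BG subtraction expressed as a fraction of slice maximum
    """
    n_pixels = 0
    for s in slices:
        s = np.asarray(s, dtype=float)
        n_pixels += int(np.count_nonzero(s > bg_fraction * s.max()))
    return n_pixels * pixel_x_cm * pixel_y_cm * slice_cm
```

The abstract's key finding maps to this sketch directly: the result is highly sensitive to `bg_fraction` (about 0.53 recovered the true liver volume) and only weakly sensitive to the other parameters.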

  17. Human Factors Research Under Ground-Based and Space Conditions. Part 1

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Session TP2 includes short reports concerning: (1) Human Factors Engineering of the International space Station Human Research Facility; (2) Structured Methods for Identifying and Correcting Potential Human Errors in Space operation; (3) An Improved Procedure for Selecting Astronauts for Extended Space Missions; (4) The NASA Performance Assessment Workstation: Cognitive Performance During Head-Down Bedrest; (5) Cognitive Performance Aboard the Life and Microgravity Spacelab; and (6) Psychophysiological Reactivity Under MIR-Simulation and Real Micro-G.

  18. Absolute measurements and certified reference material for iron isotopes using multiple-collector inductively coupled plasma mass spectrometry.

    PubMed

    Zhou, Tao; Zhao, Motian; Wang, Jun; Lu, Hai

    2008-01-01

    Two enriched isotopes, 99.94 at.% 56Fe and 99.90 at.% 54Fe, were blended under gravimetric control to prepare ten synthetic isotope samples whose 56Fe isotope abundances ranged from 95% to 20%. For multiple-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) measurements typical polyatomic interferences were removed by using Ar and H2 as collision gas and operating the MC-ICP-MS system in soft mode. Thus high-precision measurements of the Fe isotope abundance ratios were accomplished. Based on the measurement of the synthetic isotope abundance ratios by MC-ICP-MS, the correction factor for mass discrimination was calculated and the results were in agreement with results from IRMM014. The precision of all ten correction factors was 0.044%, indicating a good linearity of the MC-ICP-MS method for different isotope abundance ratio values. An isotopic reference material was certified under the same conditions as the instrument was calibrated. The uncertainties of ten correction factors K were calculated and the final extended uncertainties of the isotopic certified Fe reference material were 5.8363(37) at.% 54Fe, 91.7621(51) at.% 56Fe, 2.1219(23) at.% 57Fe, and 0.2797(32) at.% 58Fe.
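    The calibration logic above can be sketched as follows: the gravimetrically prepared mixtures give known "true" isotope ratios, and comparing them with the measured ratios yields the mass-discrimination correction factor K, which is then applied to unknowns measured under the same conditions. Function names are illustrative.

```python
def mass_bias_k(gravimetric_ratio, measured_ratio):
    """Correction factor K for mass discrimination: the known
    (gravimetrically prepared) isotope abundance ratio divided by the
    ratio measured by MC-ICP-MS for the same sample."""
    return gravimetric_ratio / measured_ratio

def correct_measured_ratio(measured_ratio, k):
    """Apply K to a ratio measured under identical instrument conditions."""
    return k * measured_ratio
```

The abstract's 0.044% precision across the ten K values is what justifies treating K as a single linear calibration over the full range of abundance ratios.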

  19. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues and focuses on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in the fast zone: high (90%) and medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.

  20. Correction of complex foot deformities using the Ilizarov external fixator.

    PubMed

    Kocaoğlu, Mehmet; Eralp, Levent; Atalar, Ata Can; Bilen, F Erkal

    2002-01-01

    There are many drawbacks to using conventional approaches to the treatment of complex foot deformities, such as the increased risk of neurovascular injury, soft-tissue injury, and shortening of the foot. An alternative approach that can eliminate these problems is the Ilizarov method. In the current study, a total of 23 deformed feet in 22 patients were treated using the Ilizarov method. The etiologic factors were burn contracture, poliomyelitis, neglected and relapsed clubfoot, trauma, gunshot injury, meningitis, and leg-length discrepancy (LLD). The average age of the patients was 18.2 (5-50) years. The mean duration of fixator application was 5.1 (2-14) months. We performed corrections without an osteotomy in nine feet and with an osteotomy in 14 feet. Additional bony corrective procedures included three tibial and one femoral osteotomies for lengthening and deformity correction, and one tibiotalar arthrodesis in five separate extremities. At the time of fixator removal, a plantigrade foot was achieved in 21 of the 23 feet by pressure mat analysis. Compared to preoperative status, gait was subjectively improved in all patients. Follow-up time from surgery averaged 25 months (13-38). Pin-tract problems were observed in all cases. Other complications were toe contractures in two feet, metatarsophalangeal subluxation from flexor tendon contractures in one foot, incomplete osteotomy in one foot, residual deformity in two feet, and recurrence of deformity in one foot. Our results indicate that the Ilizarov method is an effective alternative means of correcting complex foot deformities, especially in feet that previously have undergone surgery.

  1. Process influences and correction possibilities for high precision injection molded freeform optics

    NASA Astrophysics Data System (ADS)

    Dick, Lars; Risse, Stefan; Tünnermann, Andreas

    2016-08-01

    Modern injection molding processes offer a cost-efficient method for manufacturing high precision plastic optics for high volume applications. Besides form deviation of molded freeform optics, internal material stress is a relevant influencing factor for the functionality of freeform optics in an optical system. This paper illustrates the dominant influence parameters of an injection molding process relating to form deviation and internal material stress, based on a freeform demonstrator geometry. Furthermore, a deterministic and efficient way of 3D mold correction for systematic, asymmetrical shrinkage errors is shown to reach micrometer-range shape accuracy at diameters up to 40 mm. In a second case, a stress-optimized parameter combination using unconventional molding conditions was 3D corrected to achieve high-precision, low-stress freeform polymer optics.

  2. Spectrophotometric determination of H2O2-generating oxidases using oxyhemoglobin as oxygen donor and indicator.

    PubMed

    Bârzu, O; Dânşoreanu, M

    1980-01-01

    1. Spectrophotometric determination of oxygen uptake using oxyhemoglobin as oxygen donor and indicator was used for assay of H2O2-generating oxidases like monoamine oxidase and glucose oxidase. 2. In order to decompose H2O2 formed during the oxygen uptake, catalase and methanol (or ethanol) were added to the respiratory system. At pH values higher than 7.5 the oxidation of deoxygenated hemoglobin to methemoglobin was less than 3%. 3. Oxidases with low Km for oxygen can be assayed using the spectrophotometric method if suitable correction factors are introduced into the calculation of oxygen uptake. The correction factor represents the ratio of the rate of formation (or disappearance) of one of the reactants to the rate of oxyhemoglobin deoxygenation, measured under identical experimental conditions.
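    The correction factor defined at the end of the abstract is a simple rate ratio; a minimal sketch follows. The abstract does not spell out how the factor enters the uptake calculation, so the second function's multiplicative use is an assumption for illustration.

```python
def oxidase_correction_factor(rate_reactant, rate_hbo2_deoxygenation):
    """Ratio of the rate of formation (or disappearance) of one reactant
    to the rate of oxyhemoglobin deoxygenation, both measured under
    identical experimental conditions (the abstract's definition)."""
    return rate_reactant / rate_hbo2_deoxygenation

def corrected_oxygen_uptake(observed_uptake, factor):
    """Illustrative application of the factor to the observed
    (HbO2-based) oxygen uptake; the exact arithmetic is assumed."""
    return observed_uptake * factor
```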

  3. Human factors process failure modes and effects analysis (HF PFMEA) software tool

    NASA Technical Reports Server (NTRS)

    Chandler, Faith T. (Inventor); Relvini, Kristine M. (Inventor); Shedd, Nathaneal P. (Inventor); Valentino, William D. (Inventor); Philippart, Monica F. (Inventor); Bessette, Colette I. (Inventor)

    2011-01-01

    Methods, computer-readable media, and systems for automatically performing Human Factors Process Failure Modes and Effects Analysis for a process are provided. At least one task involved in a process is identified, where the task includes at least one human activity. The human activity is described using at least one verb. A human error potentially resulting from the human activity is automatically identified; the error is related to the verb used in describing the task. The likelihoods of occurrence, detection, and correction of the human error are identified, along with the severity of its effect. From the likelihood of occurrence and the severity, the risk of potential harm is identified and compared with a risk threshold to determine whether corrective measures are appropriate.
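    The abstract names the inputs of the risk comparison (likelihoods of occurrence, detection, and correction; severity; a risk threshold) but not the rule for combining them. The sketch below uses a common FMEA-style scoring purely as an illustration; it is not the patented tool's actual formula.

```python
def corrective_measures_needed(p_occur, p_caught, severity, risk_threshold):
    """Hypothetical scoring: risk = probability the error occurs and is
    NOT detected/corrected, weighted by the severity of its effect.
    Returns True when the risk meets or exceeds the threshold."""
    risk = p_occur * (1.0 - p_caught) * severity
    return risk >= risk_threshold
```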

  4. Quantification of γ-aminobutyric acid (GABA) in 1H MRS volumes composed heterogeneously of grey and white matter.

    PubMed

    Mikkelsen, Mark; Singh, Krish D; Brealy, Jennifer A; Linden, David E J; Evans, C John

    2016-11-01

    The quantification of γ-aminobutyric acid (GABA) concentration using localised MRS suffers from partial volume effects related to differences in the intrinsic concentration of GABA in grey (GM) and white (WM) matter. These differences can be represented as a ratio between intrinsic GABA in GM and WM: r_M. Individual differences in GM tissue volume can therefore potentially drive apparent concentration differences. Here, a quantification method that corrects for these effects is formulated and empirically validated. Quantification using tissue water as an internal concentration reference has been described previously. Partial volume effects attributed to r_M can be accounted for by incorporating into this established method an additional multiplicative correction factor based on measured or literature values of r_M weighted by the proportion of GM and WM within tissue-segmented MRS volumes. Simulations were performed to test the sensitivity of this correction using different assumptions of r_M taken from previous studies. The tissue correction method was then validated by applying it to an independent dataset of in vivo GABA measurements using an empirically measured value of r_M. It was shown that incorrect assumptions of r_M can lead to overcorrection and inflation of GABA concentration measurements quantified in volumes composed predominantly of WM. For the independent dataset, GABA concentration was linearly related to GM tissue volume when only the water signal was corrected for partial volume effects. Performing a full correction that additionally accounts for partial volume effects ascribed to r_M successfully removed this dependence. With an appropriate assumption of the ratio of intrinsic GABA concentration in GM and WM, GABA measurements can be corrected for partial volume effects, potentially leading to a reduction in between-participant variance, increased power in statistical tests and better discriminability of true effects. Copyright © 2016 John Wiley & Sons, Ltd.
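    One plausible form of the multiplicative correction can be sketched as follows, assuming the measured signal is the tissue-volume-weighted sum of the two intrinsic concentrations. This derivation and the default value r_m = 2.0 are illustrative assumptions, not the paper's published formula or its measured ratio.

```python
def gaba_full_correction(conc_water_ref, f_gm, f_wm, r_m=2.0):
    """Tissue-correct a water-referenced GABA concentration.

    f_gm, f_wm: GM and WM fractions of the segmented MRS voxel
    r_m: assumed intrinsic GM:WM GABA concentration ratio (placeholder)

    If signal ~ f_gm*c_gm + f_wm*c_wm with c_gm = r_m*c_wm, dividing by
    (f_gm + f_wm/r_m) recovers the GM-referenced concentration, and the
    (f_gm + f_wm) numerator re-expresses it per unit tissue volume.
    """
    factor = (f_gm + f_wm) / (f_gm + f_wm / r_m)
    return conc_water_ref * factor
```

In a pure-GM voxel the factor is 1 (no change), while WM-dominated voxels are scaled up, which is exactly the GM-fraction dependence the paper's full correction removes.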

  5. Dating Middle Pleistocene loess using IRSL luminescence

    NASA Astrophysics Data System (ADS)

    Michel, L.

    2008-12-01

    Loess is a unique palaeoclimate proxy that has a relatively global distribution. A major issue in loess studies is their age, as most terrestrial sediments are outside the realm of isotopic dating methods. Luminescence dating of loess has been attempted with limited success as Optically Stimulated Luminescence (OSL) from the two common dosimeters used in luminescence, quartz and feldspar minerals, both yielded age underestimates. Quartz is limited by dose saturation and feldspar suffers from anomalous fading. Over the last decade, we have developed methods to deal with anomalous fading and hence correct Infrared Stimulated Luminescence (IRSL) ages from feldspar dominated samples. A method known as Dose Rate Correction (DRC) has been successfully applied to loess from the Western European Belt, for ages as old as the Middle Pleistocene. Ages using the same method have been obtained for loess in Alaska and the technique is now being extended to loess from Illinois and China. IRSL can also be used as a reliable telecorrelation tool as luminescence properties of loess are broadly similar, whatever the geological provenance. DRC corrected IRSL extends the applicability of luminescence to dating loess up to at least 500 ka. The limiting factor in the specific case of loess is dose saturation due to relatively high dose rate compared to the average terrestrial sediment radioactivity.

  6. Energy level alignment and quantum conductance of functionalized metal-molecule junctions: Density functional theory versus GW calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Chengjun; Markussen, Troels; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk

    We study the effect of functional groups (CH3×4, OCH3, CH3, Cl, CN, F×4) on the electronic transport properties of 1,4-benzenediamine molecular junctions using the non-equilibrium Green function method. Exchange and correlation effects are included at various levels of theory, namely density functional theory (DFT), energy level-corrected DFT (DFT+Σ), Hartree-Fock and the many-body GW approximation. All methods reproduce the expected trends for the energy of the frontier orbitals according to the electron donating or withdrawing character of the substituent group. However, only the GW method predicts the correct ordering of the conductance amongst the molecules. The absolute GW (DFT) conductance is within a factor of two (three) of the experimental values. Correcting the DFT orbital energies by a simple physically motivated scissors operator, Σ, can bring the DFT conductances close to experiments, but does not improve on the relative ordering. We ascribe this to a too strong pinning of the molecular energy levels to the metal Fermi level by DFT, which suppresses the variation in orbital energy with functional group.

  7. A simple and accurate method for calculation of the structure factor of interacting charged spheres.

    PubMed

    Wu, Chu; Chan, Derek Y C; Tabor, Rico F

    2014-07-15

    Calculation of the structure factor of a system of interacting charged spheres based on the Ginoza solution of the Ornstein-Zernike equation has been developed and implemented on a stand-alone spreadsheet. This facilitates direct interactive numerical and graphical comparison of experimental structure factors with the pioneering theoretical model of Hayter-Penfold, which uses the Hansen-Hayter renormalisation correction. The method is used to fit example experimental structure factors obtained from the small-angle neutron scattering of a well-characterised charged micelle system, demonstrating that this implementation, available in the supplementary information, gives identical results to the Hayter-Penfold-Hansen approach for the structure factor, S(q), and provides direct access to the pair correlation function, g(r). Additionally, the intermediate calculations and outputs can be readily accessed and modified within the familiar spreadsheet environment, along with information on the normalisation procedure. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Relating marten scat contents to prey consumed

    Treesearch

    William J. Zielinski

    1986-01-01

    A European ferret, Mustela putorius furo, was fed typical marten food items to discover the relationship between prey weight and number of scats produced per unit weight of prey. A correction factor was derived that was used in the analysis of pine marten, Martes americana, scats to produce a method capable of comparing foods on a...

  9. 78 FR 68161 - Greenhouse Gas Reporting Program: Final Amendments and Confidentiality Determinations for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-13

    ... measurements corrected for temperature and non-ideal gas behavior). For gases with low volume consumption for... effect of that abatement system when using either the emission factors and calculation methods in 40 CFR...) basis. To develop the preliminary estimate, the reporter must use the gas consumption in the tools...

  10. TH-A-18C-09: Ultra-Fast Monte Carlo Simulation for Cone Beam CT Imaging of Brain Trauma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sisniega, A; Zbijewski, W; Stayman, J

    Purpose: Application of cone-beam CT (CBCT) to low-contrast soft tissue imaging, such as in detection of traumatic brain injury, is challenged by high levels of scatter. A fast, accurate scatter correction method based on Monte Carlo (MC) estimation is developed for application in high-quality CBCT imaging of acute brain injury. Methods: The correction involves MC scatter estimation executed on an NVIDIA GTX 780 GPU (MC-GPU), with baseline simulation speed of ~1e7 photons/sec. MC-GPU is accelerated by a novel, GPU-optimized implementation of variance reduction (VR) techniques (forced detection and photon splitting). The number of simulated tracks and projections is reduced for additional speed-up. Residual noise is removed and the missing scatter projections are estimated via kernel smoothing (KS) in the projection plane and across gantry angles. The method is assessed using CBCT images of a head phantom presenting a realistic simulation of fresh intracranial hemorrhage (100 kVp, 180 mAs, 720 projections, source-detector distance 700 mm, source-axis distance 480 mm). Results: For a fixed run-time of ~1 sec/projection, GPU-optimized VR reduces the noise in MC-GPU scatter estimates by a factor of 4. For scatter correction, MC-GPU with VR is executed with 4-fold angular downsampling and 1e5 photons/projection, yielding 3.5 minute run-time per scan, and de-noised with optimized KS. Corrected CBCT images demonstrate uniformity improvement of 18 HU and contrast improvement of 26 HU compared to no correction, and a 52% increase in contrast-to-noise ratio in simulated hemorrhage compared to “oracle” constant fraction correction. Conclusion: Acceleration of MC-GPU achieved through GPU-optimized variance reduction and kernel smoothing yields an efficient (<5 min/scan) and accurate scatter correction that does not rely on additional hardware or simplifying assumptions about the scatter distribution. The method is undergoing implementation in a novel CBCT system dedicated to brain trauma imaging at the point of care in sports and military applications. Research grant from Carestream Health. JY is an employee of Carestream Health.
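    The kernel-smoothing step described above (de-noising sparse MC scatter estimates in-plane, then filling the angularly downsampled projections) can be sketched as follows. The Gaussian kernel width, the linear angular interpolation, and all names are illustrative assumptions; the paper's optimized KS is not specified in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_and_fill_scatter(scatter, sim_angles, n_angles, sigma_px=8.0):
    """Kernel-smooth sparse MC scatter estimates.

    scatter: (n_simulated, H, W) noisy scatter projections
    sim_angles: indices of the simulated projections in the full scan
    n_angles: total number of projections in the scan
    """
    # in-plane smoothing removes residual MC noise in each projection
    sm = np.stack([gaussian_filter(np.asarray(s, float), sigma_px)
                   for s in scatter])
    # pixel-wise interpolation across gantry angle fills missing projections
    full = np.empty((n_angles,) + sm.shape[1:])
    grid = np.arange(n_angles)
    for i in range(sm.shape[1]):
        for j in range(sm.shape[2]):
            full[:, i, j] = np.interp(grid, sim_angles, sm[:, i, j])
    return full
```

This exploits the fact that scatter is dominated by low spatial and angular frequencies, which is what makes the 4-fold angular downsampling in the paper viable.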

  11. Iterative CT shading correction with no prior information

    NASA Astrophysics Data System (ADS)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, beam-hardening effects and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution within one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information, without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
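    The iterative loop (segment to a piecewise-constant template, take the residual, keep only its low-frequency part, subtract, repeat) can be sketched schematically. In this sketch a heavy Gaussian blur in the image domain stands in for the paper's forward-project / low-pass / FDK-reconstruct chain, so it illustrates the iteration structure only, not the actual projection-domain filtering.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def iterative_shading_correction(img, tissue_hu, n_iter=5, sigma_px=30.0):
    """Schematic iterative shading correction.

    img: 2D CT image (HU); tissue_hu: list of nominal tissue CT numbers
    """
    corrected = np.asarray(img, dtype=float).copy()
    hu = np.asarray(tissue_hu, dtype=float)
    for _ in range(n_iter):
        # segment: assign each pixel the nearest nominal tissue value
        idx = np.abs(corrected[..., None] - hu).argmin(axis=-1)
        template = hu[idx]
        residual = corrected - template
        # low-frequency part of the residual ~ shading estimate
        shading = gaussian_filter(residual, sigma_px)
        corrected -= shading
    return corrected
```

Because the shading estimate is restricted to low frequencies, genuine low-contrast anatomy (high-frequency relative to the blur scale) survives the subtraction, mirroring the retention result reported above.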

  12. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

    In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams by using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, P_cav, the displacement correction factor, P_dis, the wall correction factor, P_wall, the stem correction factor, P_stem, the central electrode correction factor, P_cel, and the overall perturbation correction factor, P_Q. The calculated P_dis values for PTW30010/30013 chambers were 0.9967 +/- 0.0017, 0.9983 +/- 0.0019, and 0.9980 +/- 0.0019 for (60)Co, 4 MV, and 10 MV photon beams, respectively. The value for a (60)Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The P_dis values showed a substantial discrepancy compared to those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P_wall values ranged from 0.9994 +/- 0.0020 to 1.0031 +/- 0.0020 for PTW30010 and from 0.9961 +/- 0.0018 to 0.9991 +/- 0.0017 for PTW30011/30012 over the range (60)Co-10 MV. The P_wall values for PTW30011/30012 were around 0.3% lower than those of IAEA TRS-398. Also, the chamber responses with and without a 1 mm PMMA water-proofing sleeve agreed within their combined uncertainty. The calculated P_stem values ranged from 0.9945 +/- 0.0014 to 0.9965 +/- 0.0014, but they are not considered in current dosimetry protocols; the values showed no significant dependence on beam quality. P_cel for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those of IAEA TRS-398.

  13. A method to correct coordinate distortion in EBSD maps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y.B., E-mail: yubz@dtu.dk; Elbrønd, A.; Lin, F.X.

    2014-10-15

    Drift during electron backscatter diffraction mapping leads to coordinate distortions in the resulting orientation maps, which affects, in some cases significantly, the accuracy of analysis. A method, thin plate spline, is introduced and tested to correct such coordinate distortions in the maps after the electron backscatter diffraction measurements. The accuracy of the correction as well as theoretical and practical aspects of using the thin plate spline method are discussed in detail. By comparing with other correction methods, it is shown that the thin plate spline method is most efficient at correcting different local distortions in the electron backscatter diffraction maps. Highlights: • A new method is suggested to correct nonlinear spatial distortion in EBSD maps. • The method corrects EBSD maps more precisely than presently available methods. • Errors less than 1–2 pixels are typically obtained. • Direct quantitative analysis of dynamic data is available after this correction.
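    A thin plate spline coordinate correction of this kind can be sketched with SciPy's radial-basis interpolator, which offers the thin-plate-spline kernel. This is a generic TPS warp fitted from control-point pairs, not the authors' implementation; the control points would typically come from features with known true positions.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def correct_drift(distorted_pts, control_src, control_dst):
    """Map distorted EBSD map coordinates to corrected coordinates.

    control_src: (n, 2) control-point positions in the distorted map
    control_dst: (n, 2) their known true positions
    distorted_pts: (m, 2) coordinates to correct
    """
    # smoothing=0 (default) makes the spline pass exactly through the
    # control points, as in classical thin plate spline warping
    tps = RBFInterpolator(control_src, control_dst,
                          kernel='thin_plate_spline')
    return tps(distorted_pts)
```

Because the TPS minimizes bending energy between control points, it can absorb the smoothly varying local drift distortions that a single global affine fit cannot, which matches the comparison reported in the highlights.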

  14. Aberration corrections for free-space optical communications in atmosphere turbulence using orbital angular momentum states.

    PubMed

    Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y

    2012-01-02

    The effect of atmospheric turbulence on light's spatial structure compromises the information capacity of photons carrying orbital angular momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacity of the FSO communication link. At the same time, our experimental results show that the values of the participation functions decrease under the phase correction method for OAM states, i.e., the correction method effectively mitigates the adverse effect of atmospheric turbulence.

  15. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…

  16. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_Qclin^fclin/M_Qmsr^fmsr with Monte Carlo (MC) based field output factors Ω_Qclin,Qmsr^fclin,fmsr determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for the Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond correction factors for CyberKnife field sizes 25-5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% needed to be applied to PTW microDiamond measurements for the 8 mm and 4 mm collimators of LGK Perfexion, respectively. Finally, the PTW microDiamond M_Qclin^fclin/M_Qmsr^fmsr for the linear accelerator varied from MC corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with 1.3% deviation). Given the small resulting correction factors, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
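    In the Alfonso small-field formalism referenced above, the field output factor is Ω = (M_clin/M_msr) · k, so the detector-specific correction factor follows by division. A minimal sketch (function and argument names are illustrative):

```python
def output_correction_factor(omega_ref, m_clin, m_msr):
    """Alfonso formalism: Omega = (M_clin / M_msr) * k, hence
    k = Omega / (M_clin / M_msr).

    omega_ref: reference field output factor (e.g. MC-based)
    m_clin, m_msr: detector readings in the clinical and machine-specific
    reference fields
    """
    return omega_ref / (m_clin / m_msr)
```

A k close to 1, as reported for most fields above, means the detector's reading ratio already tracks the true output factor.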

  17. Application of round grating angle measurement composite error amendment in the online measurement accuracy improvement of large diameter

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu

    2008-10-01

    Aiming at the influence of round grating dividing error, rolling-wheel eccentricity, and surface shape errors, this paper provides a rolling-wheel-based amendment method: a composite error model including all of the above factors is derived and then used to correct the non-circular angle measurement error of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error amendment method can improve the diameter measurement accuracy of rolling-wheel measurement. It has wide application prospects for applications requiring measurement accuracy better than 5 μm/m.

  18. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  19. Inverse probability weighting and doubly robust methods in correcting the effects of non-response in the reimbursed medication and self-reported turnout estimates in the ATH survey.

    PubMed

    Härkänen, Tommi; Kaikkonen, Risto; Virtala, Esa; Koskinen, Seppo

    2014-11-06

    To assess the nonresponse rates in a questionnaire survey with respect to administrative register data, and to correct the bias statistically. The Finnish Regional Health and Well-being Study (ATH) in 2010 was based on a national sample and several regional samples. Missing-data analysis was based on socio-demographic register data covering the whole sample. The inverse probability weighting (IPW) and doubly robust (DR) methods were estimated using logistic regression models selected with the Bayesian information criterion. The crude, weighted and true self-reported turnout in the 2008 municipal election, the prevalences of entitlements to specially reimbursed medication, and the crude and weighted body mass index (BMI) means were compared. The IPW method appeared to remove a relatively large proportion of the bias compared to the crude prevalence estimates of the turnout and the entitlements to specially reimbursed medication. Several demographic factors were shown to be associated with missing data, but few interactions were found. Our results suggest that the IPW method can improve the accuracy of results of a population survey, and that the model selection provides insight into the structure of the missing data. However, health-related missing-data mechanisms are beyond the scope of statistical methods that rely mainly on socio-demographic information to correct the results.
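The IPW idea described above can be sketched on synthetic data (not the ATH survey): when response probability and outcome both depend on a covariate, the crude respondent mean is biased, and weighting each respondent by the inverse of its response probability removes most of that bias. In practice the probabilities are estimated by logistic regression; here they are taken as known for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population: age drives both the outcome and the probability
# of responding, so the crude respondent mean over-represents the old.
n = 100_000
age = rng.uniform(20, 80, n)
y = (rng.uniform(0, 1, n) < 0.2 + 0.006 * (age - 20)).astype(float)  # prevalence rises with age
p_respond = 1 / (1 + np.exp(-(-1.0 + 0.03 * age)))                   # older people respond more often
responded = rng.uniform(0, 1, n) < p_respond

# IPW: weight each respondent by the inverse of its response probability
# (known here; estimated via logistic regression in a real survey).
w = 1.0 / p_respond[responded]
crude = y[responded].mean()
ipw = np.average(y[responded], weights=w)
true = y.mean()
# |ipw - true| is much smaller than |crude - true|
```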

  20. Shading correction assisted iterative cone-beam CT reconstruction

    NASA Astrophysics Data System (ADS)

    Yang, Chunlin; Wu, Pengwei; Gong, Shutao; Wang, Jing; Lyu, Qihui; Tang, Xiangyang; Niu, Tianye

    2017-11-01

    Recent advances in total variation (TV) technology enable accurate CT image reconstruction from highly under-sampled and noisy projection data. The standard iterative reconstruction algorithms, which work well in conventional CT imaging, fail to perform as expected in cone beam CT (CBCT) applications, wherein the non-ideal physics issues, including scatter and beam hardening, are more severe. These physics issues result in large areas of shading artifacts and degrade the piecewise-constant property assumed of reconstructed images. To overcome this obstacle, we incorporate a shading correction scheme into low-dose CBCT reconstruction and propose a clinically acceptable and stable three-dimensional iterative reconstruction method, referred to as shading correction assisted iterative reconstruction. In the proposed method, we modify the TV regularization term by adding a shading compensation image to the reconstructed image to compensate for the shading artifacts, while leaving the data fidelity term intact. This compensation image is generated empirically, using image segmentation and low-pass filtering, and is updated in the iterative process whenever necessary. Once the compensation image is determined, the objective function is minimized using the fast iterative shrinkage-thresholding algorithm, accelerated on a graphics processing unit. The proposed method is evaluated using CBCT projection data of the Catphan© 600 phantom and two pelvis patients. Compared with iterative reconstruction without shading correction, the proposed method reduces the overall CT number error from around 200 HU to around 25 HU and improves the spatial uniformity by 20 percent, given the same number of sparsely sampled projections. A clinically acceptable and stable iterative reconstruction algorithm for CBCT is proposed in this paper.
Differing from existing algorithms, this algorithm incorporates a shading correction scheme into low-dose CBCT reconstruction and achieves a more stable optimization path and a more clinically acceptable reconstructed image. The proposed method does not rely on prior information and is thus practically attractive for low-dose CBCT imaging applications in the clinic.
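The key modification described in this record touches only the regularizer: the TV penalty is evaluated on the reconstructed image plus a shading compensation image, while the data fidelity term is untouched. A toy 1D illustration with an identity forward operator (an assumption for brevity; the paper uses CBCT projections and FISTA):

```python
import numpy as np

def tv(u):
    """Total variation of a 1D signal."""
    return np.abs(np.diff(u)).sum()

def objective(x, b, s, lam):
    # data fidelity on x alone; TV smoothness imposed on x + s, so the
    # shading absorbed by the compensation image s is not penalized
    return 0.5 * np.sum((x - b) ** 2) + lam * tv(x + s)

# piecewise-constant truth corrupted by a smooth shading artifact
truth = np.repeat([1.0, 3.0], 50)
shading = np.linspace(0.0, 0.8, 100)
b = truth + shading
s = -shading   # compensation image (estimated by segmentation + low-pass filtering in the paper)
lam = 0.5
# with the compensation image, the shaded signal already has low TV:
print(tv(b + s) <= tv(b))  # True
```

The shading term thus no longer fights the piecewise-constant prior during minimization.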

  1. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct-forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object using an artificial force that mimics the presence of the physical boundaries; therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method can be considered a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly, without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. China Scholarship Council.

  2. Comparison of quantitative Y-90 SPECT and non-time-of-flight PET imaging in post-therapy radioembolization of liver cancer.

    PubMed

    Yue, Jianting; Mauxion, Thibault; Reyes, Diane K; Lodge, Martin A; Hobbs, Robert F; Rong, Xing; Dong, Yinfeng; Herman, Joseph M; Wahl, Richard L; Geschwind, Jean-François H; Frey, Eric C

    2016-10-01

    Radioembolization with yttrium-90 microspheres may be optimized with patient-specific pretherapy treatment planning. Dose verification and validation of treatment planning methods require quantitative imaging of the post-therapy distribution of yttrium-90 (Y-90). Methods for quantitative imaging of Y-90 using both bremsstrahlung SPECT and PET have previously been described. The purpose of this study was to compare the two modalities quantitatively in humans. Calibration correction factors for both quantitative Y-90 bremsstrahlung SPECT and a non-time-of-flight PET system without compensation for prompt coincidences were developed by imaging three phantoms. The consistency of these calibration correction factors for the different phantoms was evaluated. Post-therapy images from both modalities were obtained from 15 patients with hepatocellular carcinoma who underwent hepatic radioembolization using Y-90 glass microspheres. Quantitative SPECT and PET images were rigidly registered and the total liver activities and activity distributions estimated for each modality were compared. The activity distributions were compared using profiles, voxel-by-voxel correlation and Bland-Altman analyses, and activity-volume histograms. The mean ± standard deviation of the difference in total liver activity between the two modalities was 0% ± 9% (range -21% to 18%). Voxel-by-voxel comparisons showed a good agreement in regions corresponding roughly to treated tumor and treated normal liver; the agreement was poorer in regions with low or no expected activity, where PET appeared to overestimate the activity. The correlation coefficients between intrahepatic voxel pairs for the two modalities ranged from 0.86 to 0.94. Cumulative activity volume histograms were in good agreement.
These data indicate that, with appropriate reconstruction methods and measured calibration correction factors, either Y-90 SPECT/CT or Y-90 PET/CT can be used for quantitative post-therapy monitoring of Y-90 activity distribution following hepatic radioembolization.
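The voxel-wise agreement analysis used above (correlation plus Bland-Altman bias and limits of agreement) can be sketched on synthetic paired voxel activities (assumed values, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired voxel activities: two modalities measuring the same truth
truth = rng.gamma(2.0, 1.0, 10_000)
spect = truth + rng.normal(0, 0.1, truth.size)
pet = truth + rng.normal(0, 0.1, truth.size)

r = np.corrcoef(spect, pet)[0, 1]   # voxel-by-voxel correlation coefficient

# Bland-Altman statistics: per-voxel difference vs mean, bias, 95% limits of agreement
diff = spect - pet
mean_pair = (spect + pet) / 2
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
# in a Bland-Altman plot one would scatter diff against mean_pair and draw the loa lines
```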

  3. Poster — Thur Eve — 72: Clinical Subtleties of Flattening-Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, Robert; Thomas, Steven; Huang, Vicky

    2014-08-15

    Flattening-filter-free (fff) beams offer superior dose rates, reducing treatment times for important techniques that utilize small field sizes, such as stereotactic ablative radiotherapy (SABR). The impact of ion collection efficiency (P_ion) on the percent depth dose (PDD) has been discussed at length in the literature; relative corrections on the order of 1%-2% are possible. In the process of commissioning 6fff and 10fff beams, we identified a number of other important details that influence commissioning. We looked at the absolute dose difference between corrected and uncorrected PDD and discovered a curve with a broad maximum between 10 and 20 cm. We wondered about the consequences of this PDD correction on the absolute dose calibration of the linac, because the TG-51 protocol does not correct the PDD curve. The quality factor k_Q depends on the PDD, so in principle a correction to the PDD will alter the absolute calibration of the linac. Finally, there are other clinical tables, such as TMR, which are derived from PDD. Attention to how this computation is performed is important because different corrections are possible depending on the method of calculation.
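The P_ion correction discussed above amounts to rescaling each depth reading by its depth-dependent recombination correction and then renormalizing to the new maximum. A minimal sketch with hypothetical P_ion values (not commissioning data):

```python
import numpy as np

def correct_pdd(pdd_raw, p_ion):
    """Apply a depth-dependent ion recombination correction P_ion (>= 1)
    to an uncorrected PDD curve, then renormalize to the new maximum."""
    corrected = pdd_raw * p_ion
    return 100.0 * corrected / corrected.max()

# Hypothetical fff-beam values: dose per pulse (and hence recombination)
# is largest near d_max, so P_ion is largest there and decays with depth.
pdd_raw = np.array([50.0, 100.0, 86.0, 68.0, 53.0, 41.0, 32.0])
p_ion   = np.array([1.010, 1.008, 1.006, 1.004, 1.003, 1.002, 1.001])
pdd_corr = correct_pdd(pdd_raw, p_ion)
# the renormalized curve shifts the non-maximum points by a few tenths of
# a percent here; the same algebra carries the ~1-2% commissioning corrections
```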

  4. Determinants of contraceptive use among women of reproductive age in Great Britain and Germany. II: Psychological factors.

    PubMed

    Oddens, B J

    1997-10-01

    Psychological determinants of contraceptive use were investigated in Great Britain and Germany, using national data obtained in 1992. It was hypothesised that current contraceptive use among sexually active, fertile women aged 15-45 was related to their attitude towards the various contraceptive methods, social influences, perceptions of being able to use a method correctly and consistently, a correct estimation of fertility, and communication with their partner. Effects of age and country were also taken into account. Respondents' attitudes towards the various contraceptive methods were ambivalent, and no method was seen as ideal. Regarding medical methods (OCs, IUDs and sterilisation), many respondents expressed doubts about their safety for health. Social influences most frequently concerned the use of OCs. Respondents considered themselves able to use oral contraceptives correctly, but expressed general fear about intrauterine devices and sterilisation, and many women believed they were not able to use condoms and periodic abstinence consistently. Multifactorial analyses revealed that current contraceptive use was principally determined by social influences, attitude and self-efficacy with respect to medical methods. Age and country, and, for use of unreliable methods, fertility awareness also played a role. Communication with the partner was less relevant. Contraceptive choice (and the use of non-medical methods) depended greatly on encouragement to use, and being in favour of, medical methods. A lack of social support for the use of medical methods and a negative attitude towards them were related to higher use rates of condoms, periodic abstinence, withdrawal and reliance on 'luck'. In the case of withdrawal and/or no method, underestimation of fertility played an additional role. Contraceptive choice appears to be determined more by a general like or dislike of medical methods than by a weighing of the merits of the individual available methods.

  5. A retrospective study to reveal factors associated with postoperative shoulder imbalance in patients with adolescent idiopathic scoliosis with double thoracic curve.

    PubMed

    Lee, Choon Sung; Hwang, Chang Ju; Lim, Eic Ju; Lee, Dong-Ho; Cho, Jae Hwan

    2016-12-01

    OBJECTIVE Postoperative shoulder imbalance (PSI) is a critical consideration after corrective surgery for a double thoracic curve (Lenke Type 2); however, the radiographic factors related to PSI remain unclear. The purpose of this study was to identify the radiographic factors related to PSI after corrective surgery for adolescent idiopathic scoliosis (AIS) in patients with a double thoracic curve. METHODS This study included 80 patients with Lenke Type 2 AIS who underwent corrective surgery. Patients were grouped according to the presence [PSI(+)] or absence [PSI(-)] of shoulder imbalance at the final follow-up examination (differences of 20, 15, and 10 mm were used). Various radiographic parameters, including the Cobb angle of the proximal and middle thoracic curves (PTC and MTC), radiographic shoulder height (RSH), clavicle angle, T-1 tilt, trunk shift, and proximal and distal wedge angles (PWA and DWA), were assessed before and after surgery and compared between groups. RESULTS Overall, postoperative RSH decreased with time in the PSI(-) group but not in the PSI(+) group. Statistical analyses revealed that the preoperative Risser grade (p = 0.048), postoperative PWA (p = 0.028), and postoperative PTC/MTC ratio (p = 0.011) correlated with PSI. Presence of the adding-on phenomenon was also correlated with PSI, although this result was not statistically significant (p = 0.089). CONCLUSIONS Postoperative shoulder imbalance is common after corrective surgery for Lenke Type 2 AIS and correlates with a higher Risser grade, a larger postoperative PWA, and a higher postoperative PTC/MTC ratio. Presence of the distal adding-on phenomenon is associated with an increased PSI trend, although this result was not statistically significant. However, preoperative factors other than the Risser grade that affect the development of PSI were not identified by the study. Additional studies are required to reveal the risk factors for the development of PSI.

  6. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  7. On the output factor measurements of the CyberKnife iris collimator small fields: Experimental determination of the k_Qclin,Qmsr^fclin,fmsr correction factors for microchamber and diode detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantelis, E.; Moutsatsos, A.; Zourari, K.

    Purpose: To measure the output factors (OFs) of the small fields formed by the variable aperture collimator system (iris) of a CyberKnife (CK) robotic radiosurgery system, and determine the k_Qclin,Qmsr^fclin,fmsr correction factors for a microchamber and four diode detectors. Methods: OF measurements were performed using a PTW PinPoint 31014 microchamber, four diode detectors (PTW-60017, -60012, -60008, and the SunNuclear EDGE detector), TLD-100 microcubes, alanine dosimeters, EBT films, and polymer gels for the 5 mm, 7.5 mm, 10 mm, 12.5 mm, and 15 mm iris collimators at 650 mm, 800 mm, and 1000 mm source to detector distance (SDD). The alanine OF measurements were corrected for volume averaging effects using the 3D dose distributions registered in polymer gel dosimeters. k_Qclin,Qmsr^fclin,fmsr correction factors for the PinPoint microchamber and the diode dosimeters were calculated through comparison against corresponding polymer gel, EBT, alanine, and TLD results. Results: Experimental OF results are presented for the array of dosimetric systems used. The PinPoint microchamber was found to underestimate small field OFs, and a k_Qclin,Qmsr^fclin,fmsr correction factor ranging from 1.127 ± 0.022 (for the 5 mm iris collimator) to 1.004 ± 0.010 (for the 15 mm iris collimator) was determined at the reference SDD of 800 mm.
The PinPoint k_Qclin,Qmsr^fclin,fmsr correction factor was also found to increase with decreasing SDD; values equal to 1.220 ± 0.028 and 1.077 ± 0.016 were obtained for the 5 mm iris collimator at 650 mm and 1000 mm SDD, respectively. On the contrary, the diode detectors were found to overestimate small field OFs, and correction factors equal to 0.973 ± 0.006, 0.954 ± 0.006, 0.937 ± 0.007, and 0.964 ± 0.006 were measured for the PTW-60017, -60012, -60008, and EDGE diode detectors, respectively, for the 5 mm iris collimator at 800 mm SDD. The corresponding correction factors for the 15 mm iris collimator were found equal to 0.997 ± 0.010, 0.994 ± 0.009, 0.988 ± 0.010, and 0.986 ± 0.010, respectively. No correlation of the diode k_Qclin,Qmsr^fclin,fmsr correction factors with SDD was observed. Conclusions: This work demonstrates an experimental procedure for the determination of the k_Qclin,Qmsr^fclin,fmsr correction factors required to obtain small field OF results of increased accuracy.

  8. Approximations for column effect in airplane wing spars

    NASA Technical Reports Server (NTRS)

    Warner, Edward P; Short, Mac

    1927-01-01

    The significance attaching to "column effect" in airplane wing spars has been increasingly realized with the passage of time, but exact computations of the corrections to bending moment curves resulting from the existence of end loads are frequently omitted because of the additional labor involved in an analysis by rigorously correct methods. The present report represents an attempt to provide for approximate column effect corrections that can be graphically or otherwise expressed so as to be applied with a minimum of labor. Curves are plotted giving approximate values of the correction factors for single and two bay trusses of varying proportions and with various relationships between axial and lateral loads. It is further shown from an analysis of those curves that rough but useful approximations can be obtained from Perry's formula for corrected bending moment, with the assumed distance between points of inflection arbitrarily modified in accordance with rules given in the report. The discussion of general rules of variation of bending stress with axial load is accompanied by a study of the best distribution of the points of support along a spar for various conditions of loading.
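Perry's corrected-bending-moment idea mentioned above is essentially a magnification of the primary moment by the axial-load ratio. A hedged sketch of the common amplification form (illustrative numbers only; the NACA report refines this by adjusting the assumed distance between inflection points):

```python
import math

def euler_load(E, I, L):
    """Euler critical load for a pin-ended strut."""
    return math.pi ** 2 * E * I / L ** 2

def magnified_moment(M0, P, E, I, L):
    """Approximate maximum bending moment including column effect,
    using the standard amplification M ~= M0 / (1 - P/Pe)."""
    Pe = euler_load(E, I, L)
    if P >= Pe:
        raise ValueError("axial load at or above the Euler load")
    return M0 / (1.0 - P / Pe)

# Illustrative spar-bay numbers (assumed, not from the report)
E = 1.3e6    # psi
I = 12.0     # in^4
L = 60.0     # in
M0 = 20000.0 # lb-in primary (lateral-load) bending moment
P = 0.5 * euler_load(E, I, L)
print(magnified_moment(M0, P, E, I, L))  # the moment doubles when P is half the Euler load
```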

  9. Automatic Speech Recognition in Air Traffic Control: a Human Factors Perspective

    NASA Technical Reports Server (NTRS)

    Karlsson, Joakim

    1990-01-01

    The introduction of Automatic Speech Recognition (ASR) technology into the Air Traffic Control (ATC) system has the potential to improve overall safety and efficiency. However, because ASR technology is inherently a part of the man-machine interface between the user and the system, the human factors issues involved must be addressed. Here, some of the human factors problems are identified and related methods of investigation are presented. Research at M.I.T.'s Flight Transportation Laboratory is being conducted from a human factors perspective, focusing on intelligent parser design, presentation of feedback, error correction strategy design, and optimal choice of input modalities.

  10. Pose determination of a blade implant in three dimensions from a single two-dimensional radiograph.

    PubMed

    Toti, Paolo; Barone, Antonio; Marconcini, Simone; Menchini-Fabris, Giovanni Battista; Martuscelli, Ranieri; Covani, Ugo

    2018-05-01

    The aim of the study was to introduce a mathematical method to estimate the correct pose of a blade implant by evaluating the radiographic features obtained from a single two-dimensional image. Blade-form implant bed preparation was performed using a piezosurgery device, and placement was attained with the use of a magnetic mallet. The pose of the blade was described by means of three consecutive rotations defined by three orientation angles (the triplet φ, θ and ψ). A retrospective analysis of periapical radiographs was performed. The method was used to compare the implant correction factor (CF°, based on the axial length along the marker, i.e. the implant structure) with the angular correction factor (CF^, a trigonometric function of the triplet). The accuracy of the method was tested by generating two-dimensional radiographic simulations of the blades, which were then compared with the images of the implants as they appeared on the real radiographs. Two patients had to be excluded from further evaluation because the values of the estimated pose angles spanned too wide a range to allow good standardization of serial radiographs: the intrapatient range from baseline to the 1-year survey exceeded a threshold determined by the clinicians (30°). The linear dependence between the implant (CF°) and angular (CF^) correction factors was estimated by robust linear regression, yielding the following coefficients: slope, 0.908; intercept, -0.092; and coefficient of determination, 0.924. The absolute error in accuracy was -0.29 ± 4.35, 0.23 ± 3.81 and 0.64 ± 1.18°, respectively, for the angles φ, θ and ψ. The present theoretical and experimental study established the possibility of determining, a posteriori, a unique triplet of angles (φ, θ and ψ) describing the pose of a blade from a single two-dimensional radiograph, and suggested a method to detect cases in which the standardized geometric projection failed.
The angular correction of the bone level yielded results very close to those obtained with an internal marker related to the implant length.
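A pose triplet of this kind defines a rotation matrix, and the foreshortening of a marker of known length gives a trigonometric correction factor. A sketch composing three elementary rotations (the axis order and the projection direction are assumptions here, not the paper's stated convention):

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def pose(phi, theta, psi):
    """Three consecutive rotations; this z-y-x composition order is an assumption."""
    return rot_z(psi) @ rot_y(theta) @ rot_x(phi)

# Apparent (projected) length of an in-plane marker shrinks once the pose
# tilts it out of the radiographic plane (projection along z assumed):
L_true = 10.0
R = pose(np.radians(10), np.radians(20), np.radians(5))
marker = R @ np.array([1.0, 0.0, 0.0])
L_apparent = L_true * np.hypot(marker[0], marker[1])
angular_cf = L_true / L_apparent  # >= 1: the trigonometric correction factor
```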

  11. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

    peak shaving, conducting power factor correction, matching critical load to most efficient distributed resource, and islanding a system during...photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...

  12. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2016-12-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data: the static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed within the inversion. The method is an automatic computational procedure at no extra cost, avoiding additional field work and indoor processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and added topographic and marine factors, so the 3D inversion can run on a general PC with high efficiency and accuracy. All MT data from surface, seabed and underground stations can be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. From the comparison in Figure 1, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries can be improved by using tipper data. The algorithm is very effective for terrain inversion, and is therefore useful for studying the continental shelf with continuous exploration across land, marine and underground settings. The three-dimensional electrical model of the ore zone reflects the basic information of strata, rocks and structure. Although it cannot indicate the ore body position directly, important clues are provided for prospecting work by the delineation of the diorite pluton uplift range.
The test results show that high-quality data processing and an efficient inversion method are important guarantees for electromagnetic porphyry-ore exploration.

  13. Investigation of the ionospheric Faraday rotation for use in orbit corrections

    NASA Technical Reports Server (NTRS)

    Llewellyn, S. K.; Bent, R. B.; Nesterczuk, G.

    1974-01-01

    The possibility of mapping the Faraday factors on a worldwide basis was examined as a simple method of representing the conversion factors for any possible user. However, this does not seem feasible. The complex relationship between the true magnetic coordinates and the geographic latitude, longitude, and azimuth angles eliminates the possibility of setting up some simple tables that would yield worldwide results of sufficient accuracy. Tabular results for specific stations can easily be produced or could be represented in graphic form.

  14. Method of wavefront tilt correction for optical heterodyne detection systems under strong turbulence

    NASA Astrophysics Data System (ADS)

    Xiang, Jing-song; Tian, Xin; Pan, Le-chun

    2014-07-01

    Atmospheric turbulence decreases the heterodyne mixing efficiency of optical heterodyne detection systems. Wavefront tilt correction is often used to improve the heterodyne mixing efficiency, but the performance of traditional centroid-tracking tilt correction is poor under strong turbulence. In this paper, a tilt correction method that tracks the peak value of the laser spot on the focal plane is proposed. Simulation results show that, under strong turbulence conditions, peak-value tracking performs distinctly better than traditional centroid tracking, and the phenomenon in which a large antenna performs worse than a small one, which may occur with centroid tracking, is also avoided with peak-value tracking.
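The difference between the two trackers above is simply which statistic of the focal-plane spot drives the tilt estimate. A minimal sketch on a synthetic two-lobe spot (an assumed intensity pattern, not a turbulence simulation): a strong stray speckle drags the centroid estimate but not the peak estimate.

```python
import numpy as np

def centroid_tilt(img):
    """Tilt estimate from the intensity centroid (traditional tracker)."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def peak_tilt(img):
    """Tilt estimate from the brightest pixel (peak-value tracker)."""
    y, x = np.unravel_index(np.argmax(img), img.shape)
    return float(x), float(y)

# main lobe at (row 20, col 44) plus a strong speckle that pulls the centroid
img = np.zeros((64, 64))
img[20, 44] = 1.0
img[50, 5] = 0.6
print(peak_tilt(img))      # (44.0, 20.0): locked onto the main lobe
print(centroid_tilt(img))  # pulled toward the speckle
```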

  15. A positivity-preserving, implicit defect-correction multigrid method for turbulent combustion

    NASA Astrophysics Data System (ADS)

    Wasserman, M.; Mor-Yossef, Y.; Greenberg, J. B.

    2016-07-01

    A novel, robust multigrid method for the simulation of turbulent and chemically reacting flows is developed. A survey of previous attempts at implementing multigrid for the problems at hand indicated extensive use of artificial stabilization to overcome numerical instability arising from non-linearity of turbulence and chemistry model source-terms, small-scale physics of combustion, and loss of positivity. These issues are addressed in the current work. The highly stiff Reynolds-averaged Navier-Stokes (RANS) equations, coupled with turbulence and finite-rate chemical kinetics models, are integrated in time using the unconditionally positive-convergent (UPC) implicit method. The scheme is successfully extended in this work for use with chemical kinetics models, in a fully-coupled multigrid (FC-MG) framework. To tackle the degraded performance of multigrid methods for chemically reacting flows, two major modifications are introduced with respect to the basic, Full Approximation Storage (FAS) approach. First, a novel prolongation operator that is based on logarithmic variables is proposed to prevent loss of positivity due to coarse-grid corrections. Together with the extended UPC implicit scheme, the positivity-preserving prolongation operator guarantees unconditional positivity of turbulence quantities and species mass fractions throughout the multigrid cycle. Second, to improve the coarse-grid-correction obtained in localized regions of high chemical activity, a modified defect correction procedure is devised, and successfully applied for the first time to simulate turbulent, combusting flows. The proposed modifications to the standard multigrid algorithm create a well-rounded and robust numerical method that provides accelerated convergence, while unconditionally preserving the positivity of model equation variables. 
Numerical simulations of various flows involving premixed combustion demonstrate that the proposed MG method increases the efficiency by a factor of up to eight times with respect to an equivalent single-grid method, and by two times with respect to an artificially-stabilized MG method.
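The positivity-preserving prolongation idea above can be sketched as interpolation in logarithmic variables: a linearly interpolated log value is a geometric mean, so the prolonged field stays positive even for quantities spanning orders of magnitude. A 1D sketch (assumed operators, not the paper's FAS implementation):

```python
import numpy as np

def prolong_linear(coarse):
    """Standard 1D linear prolongation: inject coarse nodes, average neighbors."""
    fine = np.empty(2 * coarse.size - 1)
    fine[0::2] = coarse
    fine[1::2] = 0.5 * (coarse[:-1] + coarse[1:])
    return fine

def prolong_log(coarse):
    """Positivity-preserving prolongation: interpolate in log variables."""
    return np.exp(prolong_linear(np.log(coarse)))

# a turbulence-like quantity spanning six orders of magnitude, strictly positive
coarse = np.array([1e-6, 1e-2, 1.0])
fine = prolong_log(coarse)
# every fine value is positive; midpoints are geometric means of their neighbors
```

The same trick applied to coarse-grid corrections is what guarantees positivity of turbulence quantities and mass fractions throughout the cycle.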

  16. Improving the quantitative accuracy of optical-emission computed tomography by incorporating an attenuation correction: application to HIF1 imaging

    NASA Astrophysics Data System (ADS)

    Kim, E.; Bowsher, J.; Thomas, A. S.; Sakhalkar, H.; Dewhirst, M.; Oldham, M.

    2008-10-01

    Optical computed tomography (optical-CT) and optical-emission computed tomography (optical-ECT) are new techniques for imaging the 3D structure and function (including gene expression) of whole unsectioned tissue samples. This work presents a method of improving the quantitative accuracy of optical-ECT by correcting for the 'self'-attenuation of photons emitted within the sample. The correction is analogous to a method commonly applied in single-photon-emission computed tomography reconstruction. The performance of the correction method was investigated by application to a transparent cylindrical gelatin phantom, containing a known distribution of attenuation (a central ink-doped gelatin core) and a known distribution of fluorescing fibres. Attenuation-corrected and uncorrected optical-ECT images were reconstructed on the phantom to enable an evaluation of the effectiveness of the correction. Significant attenuation artefacts were observed in the uncorrected images, where the central fibre appeared ~24% less intense due to greater attenuation from the surrounding ink-doped gelatin. This artefact was almost completely removed in the attenuation-corrected image, where the central fibre was within ~4% of the others. The successful phantom test enabled application of attenuation correction to optical-ECT images of an unsectioned human breast xenograft tumour grown subcutaneously on the hind leg of a nude mouse. This tumour cell line had been genetically labelled (pre-implantation) with fluorescent reporter genes such that all viable tumour cells constitutively expressed red fluorescent protein and expressed green fluorescent protein under hypoxia-inducible factor 1 transcriptional control. In addition to the fluorescent reporter labelling of gene expression, the tumour microvasculature was labelled by a light-absorbing vasculature contrast agent delivered in vivo by tail-vein injection. 
Optical-CT transmission images yielded high-resolution 3D images of the absorbing contrast agent, and revealed highly inhomogeneous vasculature perfusion within the tumour. Optical-ECT emission images yielded high-resolution 3D images of the fluorescent protein distribution in the tumour. Attenuation-uncorrected optical-ECT images showed clear loss of signal in regions of high attenuation, including regions of high perfusion, where attenuation is increased by the increased vascular ink stain. Application of attenuation correction showed significant changes in the apparent expression of fluorescent proteins, confirming the importance of the correction. In conclusion, this work presents the first development and application of an attenuation correction for optical-ECT imaging. The results suggest that successful attenuation correction for optical-ECT is feasible and is essential for quantitatively accurate optical-ECT imaging.
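    The 'self'-attenuation correction described in this record is analogous to first-order (Chang-type) correction in SPECT. A minimal sketch of that general idea, not the authors' implementation: the correction factor for a voxel is the reciprocal of its mean photon survival probability over all projection angles. The grid, attenuation values, and function name below are illustrative assumptions.

```python
import numpy as np

def chang_correction_factor(point, mu_map, n_angles=64):
    """First-order attenuation correction at `point`: reciprocal of the
    average survival probability exp(-integral of mu) over all angles."""
    ny, nx = mu_map.shape
    y0, x0 = point
    survivals = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        dy, dx = np.sin(theta), np.cos(theta)
        # March from the voxel to the edge of the map in 1-pixel steps,
        # accumulating the attenuation coefficient along the ray.
        path_mu, y, x = 0.0, float(y0), float(x0)
        while 0 <= int(round(y)) < ny and 0 <= int(round(x)) < nx:
            path_mu += mu_map[int(round(y)), int(round(x))]
            y, x = y + dy, x + dx
        survivals.append(np.exp(-path_mu))
    return 1.0 / np.mean(survivals)

# Uniform attenuator: the central voxel is shadowed most from every
# direction, so its correction factor exceeds that of an edge voxel.
mu = np.full((41, 41), 0.02)            # attenuation per pixel (illustrative)
center = chang_correction_factor((20, 20), mu)
edge = chang_correction_factor((20, 2), mu)
assert center > edge > 1.0
```

    This mirrors the phantom result above: the more strongly attenuated central fibre needs the larger multiplicative boost.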

  17. Changes in higher order aberrations after wavefront-guided PRK for correction of low to moderate myopia and myopic astigmatism: two-year follow-up.

    PubMed

    Wigledowska-Promienska, D; Zawojska, I

    2007-01-01

    To assess efficacy, safety, and changes in higher order aberrations after wavefront-guided photorefractive keratectomy (PRK) in comparison with conventional PRK for low to moderate myopia with myopic astigmatism, using a WASCA Workstation with the MEL 70 G-Scan excimer laser. A total of 126 myopic or myopic-astigmatic eyes of 112 patients were included in this retrospective study. Patients were divided into two groups: Group 1, the study group, consisted of 78 eyes treated with wavefront-guided PRK; Group 2, the control group, consisted of 48 eyes treated with spherocylindrical conventional PRK. Two years postoperatively, in Group 1, 5% of eyes achieved an uncorrected visual acuity (UCVA) of 0.05, 69% achieved a UCVA of 0.00, 18% achieved a UCVA of -0.18, and 8% a UCVA of -0.30 (logMAR). In Group 2, 8% of eyes achieved a UCVA of 0.1, 25% achieved a UCVA of 0.05, and 67% achieved a UCVA of 0.00 according to the logMAR calculation method. Total higher-order root-mean-square increased by a factor of 1.18 in Group 1 and 1.6 in Group 2. There was a significant increase in coma, by a factor of 1.74 in Group 2, and in spherical aberration, by a factor of 2.09 in Group 1 and 3.56 in Group 2. The data support the safety and effectiveness of wavefront-guided PRK using a WASCA Workstation for correction of low to moderate refractive errors. This method reduced the higher order aberrations induced by excimer laser surgery and improved uncorrected and spectacle-corrected visual acuity compared with conventional PRK.

  18. SU-E-I-38: Improved Metal Artifact Correction Using Adaptive Dual Energy Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Elder, E; Roper, J

    2015-06-15

    Purpose: The empirical dual energy calibration (EDEC) method corrects for beam-hardening artifacts, but shows limited performance on metal artifact correction. In this work, we propose an adaptive dual energy calibration (ADEC) method to correct for metal artifacts. Results: Highly attenuating copper rods cause severe streaking artifacts on standard CT images. EDEC improves the image quality, but cannot eliminate the streaking artifacts. Compared to EDEC, the proposed ADEC method further reduces the streaking resulting from metallic inserts and beam-hardening effects and obtains material decomposition images with significantly improved accuracy. Conclusion: We propose an adaptive dual energy calibration method to correct for metal artifacts. ADEC is evaluated with the Shepp-Logan phantom, and shows superior metal artifact correction performance. In the future, we will further evaluate the performance of the proposed method with phantom and patient data.

  19. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in a flat plate subjected to a surface temperature discontinuity, with variable properties taken into account. A two-equation k-omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratios, and wall temperature discontinuities. Comparisons are made between correction factors with constant properties and with variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant

  20. Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.

    PubMed

    Mutimbu, Lawrence; Robles-Kelly, Antonio

    2016-08-31

    This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistically data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without requiring libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered through a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.

  1. Expectation values of twist fields and universal entanglement saturation of the free massive boson

    NASA Astrophysics Data System (ADS)

    Blondeau-Fournier, Olivier; Doyon, Benjamin

    2017-07-01

    The evaluation of vacuum expectation values (VEVs) in massive integrable quantum field theory (QFT) is a nontrivial renormalization-group ‘connection problem’—relating large and short distance asymptotics—and is in general unsolved. This is particularly relevant in the context of entanglement entropy, where VEVs of branch-point twist fields give universal saturation predictions. We propose a new method to compute VEVs of twist fields associated to continuous symmetries in QFT. The method is based on a differential equation in the continuous symmetry parameter, and gives VEVs as infinite form-factor series which truncate at two-particle level in free QFT. We verify the method by studying U(1) twist fields in free models, which are simply related to the branch-point twist fields. We provide the first exact formulae for the VEVs of such fields in the massive uncompactified free boson model, checking against an independent calculation based on angular quantization. We show that logarithmic terms, overlooked in the original work of Callan and Wilczek (1994 Phys. Lett. B 333 55-61), appear both in the massless and in the massive situations. This implies that, in agreement with numerical form-factor observations by Bianchini and Castro-Alvaredo (2016 Nucl. Phys. B 913 879-911), the standard power-law short-distance behavior is corrected by a logarithmic factor. We discuss how this gives universal formulae for the saturation of entanglement entropy of a single interval in near-critical harmonic chains, including loglog corrections.

  2. Taming cut-off induced artifacts in molecular dynamics studies of solvated polypeptides. The reaction field method.

    PubMed

    Schreiber, H; Steinhauser, O

    1992-12-05

    In this paper we present a model system of a solvated polypeptide, which is a suitable reference platform for the systematic exploration of methods for taming artifacts introduced by an incorrect treatment of long-range Coulomb forces. The essential feature of the system composed of an alpha-helical peptide and 1021 water molecules is the strict neutrality of all charge groups. The dynamical properties of the peptide, i.e. unfolding or maintenance of the helix, already give first hints on the influence of boundary effects. A rigorous and deeper insight is gained, however, if analyzing the system by means of the generalized Kirkwood g-factor, which projects the net dipole moment of concentric spheres onto the respective dipole moment of the reference charge group. The g-factor is a global measure for, and a sensitive probe of, the orientational structure, which in its turn reflects even the smallest inconsistencies in the treatment of long-range forces. While the cut-off scheme failed the g-factor test, the "reaction field" method, the simplest cut-off correction scheme, enables a consistent description. In other words, with the aid of the reaction field, the correct orientational structure is restored. As a consequence, the helix stability is regained and we were able to calculate the dielectric constant epsilon approximately 55 to 60 for our system, which is slightly below the corresponding value epsilon SPC = 66 of the pure solvent.
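    The reaction-field correction referred to above can be sketched in a few lines: inside the cutoff r_c, the bare 1/r Coulomb term is augmented by a polarization term from a dielectric continuum of permittivity eps_rf, shifted so the pair energy vanishes smoothly at the cutoff. Reduced units are used, and all numerical values (including eps_rf = 66, the pure-solvent value quoted in the abstract) are illustrative.

```python
def reaction_field_energy(qi, qj, r, r_c=1.2, eps_rf=66.0):
    """Pair energy with reaction-field correction, in reduced units
    (q_i*q_j/(4*pi*eps0) = 1); zero beyond the cutoff r_c."""
    if r >= r_c:
        return 0.0
    k_rf = (eps_rf - 1.0) / ((2.0 * eps_rf + 1.0) * r_c**3)
    c_rf = 1.0 / r_c + k_rf * r_c**2      # shift: energy -> 0 at r = r_c
    return qi * qj * (1.0 / r + k_rf * r**2 - c_rf)

# Unlike plain truncation, the energy goes continuously to zero at the
# cutoff, removing the discontinuity that distorts the orientational
# structure probed by the Kirkwood g-factor.
assert abs(reaction_field_energy(1.0, -1.0, 1.2 - 1e-9)) < 1e-6
```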

  3. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper describes the development and implementation of a real-time correction system for the additional secondary phase factor (ASF) of the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system, and the propagation characteristics of the AIS signal transmitted at sea, together with the real-time ASF correction algorithm, were analyzed and verified. Accounting for differences in receiver hardware in the land-based positioning system and for variation of the actual environmental factors, the ASF correction system corrects the original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  4. Air-kerma strength determination of a new directional (103)Pd source.

    PubMed

    Aima, Manik; Reed, Joshua L; DeWerd, Larry A; Culberson, Wesley S

    2015-12-01

    A new directional (103)Pd planar source array called a CivaSheet™ has been developed by CivaTech Oncology, Inc., for potential use in low-dose-rate (LDR) brachytherapy treatments. The array consists of multiple individual polymer capsules called CivaDots, containing (103)Pd and a gold shield that attenuates the radiation on one side, thus defining a hot and cold side. This novel source requires new methods to establish a source strength metric. The presence of gold material in such close proximity to the active (103)Pd region causes the source spectrum to be significantly different than the energy spectra of seeds normally used in LDR brachytherapy treatments. In this investigation, the authors perform air-kerma strength (S(K)) measurements, develop new correction factors for these measurements based on an experimentally verified energy spectrum, and test the robustness of transferring S(K) to a well-type ionization chamber. S(K) measurements were performed with the variable-aperture free-air chamber (VAFAC) at the University of Wisconsin Medical Radiation Research Center. Subsequent measurements were then performed in a well-type ionization chamber. To realize the quantity S(K) from a directional source with gold material present, new methods and correction factors were considered. Updated correction factors were calculated using the MCNP 6 Monte Carlo code in order to determine S(K) with the presence of gold fluorescent energy lines. In addition to S(K) measurements, a low-energy high-purity germanium (HPGe) detector was used to experimentally verify the calculated spectrum, a sodium iodide (NaI) scintillating counter was used to verify the azimuthal and polar anisotropy, and a well-type ionization chamber was used to test the feasibility of disseminating S(K) values for a directional source within a cylindrically symmetric measurement volume. The UW VAFAC was successfully used to measure the S(K) of four CivaDots with reproducibilities within 0.3%. 
Monte Carlo methods were used to calculate the UW VAFAC correction factors and the calculated spectrum emitted from a CivaDot was experimentally verified with HPGe detector measurements. The well-type ionization chamber showed minimal variation in response (<1.5%) as a function of source positioning angle, indicating that an American Association of Physicists in Medicine (AAPM) Accredited Dosimetry Calibration Laboratory calibrated well chamber would be a suitable device to transfer an S(K)-based calibration to a clinical user. S(K) per well-chamber ionization current ratios were consistent among the four dots measured. Additionally, the measurements and predictions of anisotropy show uniform emission within the solid angle of the VAFAC, which demonstrates the robustness of the S(K) measurement approach. This characterization of a new (103)Pd directional brachytherapy source helps to establish calibration methods that could ultimately be used in the well-established AAPM Task Group 43 formalism. Monte Carlo methods accurately predict the changes in the energy spectrum caused by the fluorescent x-rays produced in the gold shield.
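    The air-kerma strength metric measured throughout this record follows the inverse-square convention of the TG-43 formalism: the corrected air-kerma rate at the measurement distance d, multiplied by d squared. A minimal sketch; the reading, distance, and correction values below are illustrative, not the VAFAC's actual factors.

```python
def air_kerma_strength(kerma_rate, d_m, corrections=(1.0,)):
    """S_K: corrected air-kerma rate times the squared measurement
    distance, so the result is independent of where it was measured."""
    k = kerma_rate
    for c in corrections:       # e.g. spectrum-dependent Monte Carlo factors
        k *= c
    return k * d_m**2

# A 10 uGy/h reading at 1 m with two small multiplicative corrections.
s_k = air_kerma_strength(10.0, 1.0, corrections=(1.02, 0.99))
assert abs(s_k - 10.098) < 1e-9
```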

  5. Quantitative data standardization of X-ray based densitometry methods

    NASA Astrophysics Data System (ADS)

    Sergunova, K. A.; Petraikin, A. V.; Petrjajkin, F. A.; Akhmad, K. S.; Semenov, D. S.; Potrakhov, N. N.

    2018-02-01

    In the present work, we propose the design of a special liquid phantom for assessing the accuracy of quantitative densitometric data. We also present the dependencies between the measured bone mineral density (BMD) values and the given values for different X-ray based densitometry techniques. The linear graphs shown make it possible to introduce correction factors that increase the accuracy of BMD measurement by the QCT, DXA and DECT methods, and to use them for standardization and comparison of measurements.
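    The linear dependencies described above translate directly into correction factors: fit the measured BMD against the known phantom values and invert the fit. A minimal sketch under assumed (illustrative) phantom values and scanner bias:

```python
import numpy as np

given = np.array([50.0, 100.0, 200.0, 400.0, 800.0])   # known phantom BMD, mg/cm^3
measured = 0.93 * given + 12.0                         # hypothetical scanner bias

slope, intercept = np.polyfit(given, measured, 1)      # measured = a*given + b

def correct(bmd):
    """Apply the linear correction: invert the fitted calibration line."""
    return (bmd - intercept) / slope

assert np.allclose(correct(measured), given)
```

    In practice each modality (QCT, DXA, DECT) would get its own fitted pair (a, b).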

  6. Experimental investigation of the response of an amorphous silicon EPID to intensity modulated radiotherapy beams.

    PubMed

    Greer, Peter B; Vial, Philip; Oliver, Lyn; Baldock, Clive

    2007-11-01

    The aim of this work was to experimentally determine the difference in response of an amorphous silicon (a-Si) electronic portal imaging device (EPID) to the open and multileaf collimator (MLC) transmitted beam components of intensity modulated radiation therapy (IMRT) beams. EPID dose response curves were measured for open and MLC transmitted (MLCtr) 10 x 10 cm2 beams at central axis and with off axis distance using a shifting field technique. The EPID signal was obtained by replacing the flood-field correction with a pixel sensitivity variation matrix correction. This signal, which includes energy-dependent response, was then compared to ion-chamber measurements. An EPID calibration method to remove the effect of beam energy variations on EPID response was developed for IMRT beams. This method uses the component of open and MLCtr fluence to an EPID pixel calculated from the MLC delivery file and applies separate radially dependent calibration factors for each component. The calibration procedure does not correct for scatter differences between ion chamber in water measurements and EPID response; these must be accounted for separately with a kernel-based approach or similar method. The EPID response at central axis for the open beam was found to be 1.28 +/- 0.03 of the response for the MLCtr beam, with the ratio increasing to 1.39 at 12.5 cm off axis. The EPID response to MLCtr radiation did not change with off-axis distance. Filtering the beam with copper plates to reduce the beam energy difference between open and MLCtr beams was investigated; however, these were not effective at reducing EPID response differences. The change in EPID response for uniform sliding window IMRT beams with MLCtr dose components from 0.3% to 69% was predicted to within 2.3% using the separate EPID response calibration factors for each dose component. 
A clinical IMRT image calibrated with this method differed by nearly 30% in high MLCtr regions from an image calibrated with an open beam calibration factor only. Accounting for the difference in EPID response to open and MLCtr radiation should improve IMRT dosimetry with a-Si EPIDs.
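    The two-component calibration above can be sketched as a fluence-weighted mix of per-component factors. The numbers below reuse the 1.28 open-to-MLCtr response ratio quoted in the abstract, but the function and the simple linear mix are illustrative assumptions, not the authors' radially dependent procedure:

```python
def calibrated_signal(raw_signal, open_fraction, cf_open=1.0, cf_mlctr=1.28):
    """Scale a pixel's raw EPID signal by a calibration factor blended
    from its open and MLC-transmitted (MLCtr) fluence fractions; the
    larger MLCtr factor compensates the EPID's lower MLCtr response."""
    mlctr_fraction = 1.0 - open_fraction
    cf = open_fraction * cf_open + mlctr_fraction * cf_mlctr
    return raw_signal * cf

# A fully open pixel is unchanged; a pure-MLCtr pixel is boosted by 1.28.
assert calibrated_signal(100.0, 1.0) == 100.0
assert abs(calibrated_signal(100.0, 0.0) - 128.0) < 1e-9
```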

  7. Color correction optimization with hue regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Heng; Liu, Huaping; Quan, Shuxue

    2011-01-01

    Previous work has suggested that observers are capable of judging the quality of an image without any knowledge of the original scene. When no reference is available, observers can extract the apparent objects in an image and compare them with the typical colors of similar objects recalled from their memories. Some generally agreed upon research results indicate that although perfect colorimetric rendering is not conspicuous and color errors can be well tolerated, the appropriate rendition of certain memory colors such as skin, grass, and sky is an important factor in the overall perceived image quality. These colors are appreciated in a fairly consistent manner and are memorized with slightly different hues and higher color saturation. The aim of color correction for a digital color pipeline is to transform the image data from a device dependent color space to a target color space, usually through a color correction matrix which in its most basic form is optimized through linear regressions between the two sets of data in two color spaces in the sense of minimized Euclidean color error. Unfortunately, this method could result in objectionable distortions if the color error biased certain colors undesirably. In this paper, we propose a color correction optimization method with preferred color reproduction in mind through hue regularization and present some experimental results.
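    The baseline that the proposed hue-regularized optimization builds on, a colour correction matrix fitted by least squares between device and target colours, can be sketched as follows; the patch data and the "true" matrix are simulated, illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((24, 3))                    # e.g. 24 reference patch colours
M_true = np.array([[1.6, -0.4, -0.2],
                   [-0.3, 1.5, -0.2],
                   [-0.1, -0.5, 1.6]])
device = target @ np.linalg.inv(M_true).T       # simulated device responses

# Fit M minimizing the Euclidean colour error ||device @ M - target||.
M, *_ = np.linalg.lstsq(device, target, rcond=None)
corrected = device @ M

assert np.allclose(corrected, target, atol=1e-8)
```

    Hue regularization would add a penalty to this objective so that memory colours (skin, grass, sky) are not pulled off-hue by the global fit.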

  8. Compensation of X-ray mirror shape-errors using refractive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawhney, Kawal, E-mail: Kawal.sawhney@diamond.ac.uk; Laundy, David; Pape, Ian

    2016-08-01

    Focusing of X-rays to nanometre scale focal spots requires high precision X-ray optics. For nano-focusing mirrors, height errors in the mirror surface retard or advance the X-ray wavefront and after propagation to the focal plane, this distortion of the wavefront causes blurring of the focus resulting in a limit on the spatial resolution. We describe here the implementation of a method for correcting the wavefront that is applied before a focusing mirror using custom-designed refracting structures which locally cancel out the wavefront distortion from the mirror. We demonstrate in measurements on a synchrotron radiation beamline a reduction in the size of the focal spot of a characterized test mirror by a factor of greater than 10 times. This technique could be used to correct existing synchrotron beamline focusing and nanofocusing optics providing a highly stable wavefront with low distortion for obtaining smaller focus sizes. This method could also correct multilayer or focusing crystal optics allowing larger numerical apertures to be used in order to reduce the diffraction limited focal spot size.

  9. Towards the clinical implementation of iterative low-dose cone-beam CT reconstruction in image-guided radiation therapy: Cone/ring artifact correction and multiple GPU implementation

    PubMed Central

    Yan, Hao; Wang, Xiaoyu; Shi, Feng; Bai, Ti; Folkerts, Michael; Cervino, Laura; Jiang, Steve B.; Jia, Xun

    2014-01-01

    Purpose: Compressed sensing (CS)-based iterative reconstruction (IR) techniques are able to reconstruct cone-beam CT (CBCT) images from undersampled noisy data, allowing for imaging dose reduction. However, there are a few practical concerns preventing the clinical implementation of these techniques. On the image quality side, data truncation along the superior–inferior direction under the cone-beam geometry produces severe cone artifacts in the reconstructed images. Ring artifacts are also seen in the half-fan scan mode. On the reconstruction efficiency side, the long computation time hinders clinical use in image-guided radiation therapy (IGRT). Methods: Image quality improvement methods are proposed to mitigate the cone and ring image artifacts in IR. The basic idea is to use weighting factors in the IR data fidelity term to improve projection data consistency with the reconstructed volume. In order to improve the computational efficiency, a multiple graphics processing units (GPUs)-based CS-IR system was developed. The parallelization scheme, detailed analyses of computation time at each step, their relationship with image resolution, and the acceleration factors were studied. The whole system was evaluated in various phantom and patient cases. Results: Ring artifacts can be mitigated by properly designing a weighting factor as a function of the spatial location on the detector. As for the cone artifact, without applying a correction method, it contaminated 13 out of 80 slices in a head-neck case (full-fan). Contamination was even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices were affected, leading to poorer soft tissue delineation and reduced superior–inferior coverage. The proposed method effectively corrects those contaminated slices with mean intensity differences compared to FDK results decreasing from ∼497 and ∼293 HU to ∼39 and ∼27 HU for the full-fan and half-fan cases, respectively. 
In terms of efficiency boost, an overall 3.1 × speedup factor has been achieved with four GPU cards compared to a single GPU-based reconstruction. The total computation time is ∼30 s for typical clinical cases. Conclusions: The authors have developed a low-dose CBCT IR system for IGRT. By incorporating data consistency-based weighting factors in the IR model, cone/ring artifacts can be mitigated. A boost in computational efficiency is achieved by multi-GPU implementation. PMID:25370645
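    The weighting-factor idea in the data fidelity term can be illustrated with a toy weighted least-squares problem: rays known to be inconsistent with the reconstructed volume (e.g. ring-artifact detector locations) are down-weighted in ||W(Ax - b)||^2. All shapes, weights, and the corruption model below are illustrative assumptions, not the authors' CBCT system:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((40, 10))          # toy projection operator
x_true = rng.random(10)           # toy volume
b = A @ x_true
b[::8] += 5.0                     # corrupt every 8th ray measurement

w = np.ones(40)
w[::8] = 1e-3                     # data-consistency weights: distrust bad rays

# Solve the weighted problem min ||diag(w) (A x - b)||^2.
x_hat = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)[0]

assert np.linalg.norm(x_hat - x_true) < 1e-2
```

    Without the weights, the corrupted rays would bias the reconstruction, the algebraic analogue of the ring and cone artifacts described above.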

  10. Towards the clinical implementation of iterative low-dose cone-beam CT reconstruction in image-guided radiation therapy: Cone/ring artifact correction and multiple GPU implementation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Hao, E-mail: steve.jiang@utsouthwestern.edu, E-mail: xun.jia@utsouthwestern.edu; Shi, Feng; Jiang, Steve B.

    Purpose: Compressed sensing (CS)-based iterative reconstruction (IR) techniques are able to reconstruct cone-beam CT (CBCT) images from undersampled noisy data, allowing for imaging dose reduction. However, there are a few practical concerns preventing the clinical implementation of these techniques. On the image quality side, data truncation along the superior–inferior direction under the cone-beam geometry produces severe cone artifacts in the reconstructed images. Ring artifacts are also seen in the half-fan scan mode. On the reconstruction efficiency side, the long computation time hinders clinical use in image-guided radiation therapy (IGRT). Methods: Image quality improvement methods are proposed to mitigate the cone and ring image artifacts in IR. The basic idea is to use weighting factors in the IR data fidelity term to improve projection data consistency with the reconstructed volume. In order to improve the computational efficiency, a multiple graphics processing units (GPUs)-based CS-IR system was developed. The parallelization scheme, detailed analyses of computation time at each step, their relationship with image resolution, and the acceleration factors were studied. The whole system was evaluated in various phantom and patient cases. Results: Ring artifacts can be mitigated by properly designing a weighting factor as a function of the spatial location on the detector. As for the cone artifact, without applying a correction method, it contaminated 13 out of 80 slices in a head-neck case (full-fan). Contamination was even more severe in a pelvis case under half-fan mode, where 36 out of 80 slices were affected, leading to poorer soft tissue delineation and reduced superior–inferior coverage. The proposed method effectively corrects those contaminated slices with mean intensity differences compared to FDK results decreasing from ∼497 and ∼293 HU to ∼39 and ∼27 HU for the full-fan and half-fan cases, respectively. 
In terms of efficiency boost, an overall 3.1 × speedup factor has been achieved with four GPU cards compared to a single GPU-based reconstruction. The total computation time is ∼30 s for typical clinical cases. Conclusions: The authors have developed a low-dose CBCT IR system for IGRT. By incorporating data consistency-based weighting factors in the IR model, cone/ring artifacts can be mitigated. A boost in computational efficiency is achieved by multi-GPU implementation.

  11. Adjusting for partial verification or workup bias in meta-analyses of diagnostic accuracy studies.

    PubMed

    de Groot, Joris A H; Dendukuri, Nandini; Janssen, Kristel J M; Reitsma, Johannes B; Brophy, James; Joseph, Lawrence; Bossuyt, Patrick M M; Moons, Karel G M

    2012-04-15

    A key requirement in the design of diagnostic accuracy studies is that all study participants receive both the test under evaluation and the reference standard test. For a variety of practical and ethical reasons, sometimes only a proportion of patients receive the reference standard, which can bias the accuracy estimates. Numerous methods have been described for correcting this partial verification bias or workup bias in individual studies. In this article, the authors describe a Bayesian method for obtaining adjusted results from a diagnostic meta-analysis when partial verification or workup bias is present in a subset of the primary studies. The method corrects for verification bias without having to exclude primary studies with verification bias, thus preserving the main advantages of a meta-analysis: increased precision and better generalizability. The results of this method are compared with the existing methods for dealing with verification bias in diagnostic meta-analyses. For illustration, the authors use empirical data from a systematic review of studies of the accuracy of the immunohistochemistry test for diagnosis of human epidermal growth factor receptor 2 status in breast cancer patients.

  12. Correction factors to convert microdosimetry measurements in silicon to tissue in 12C ion therapy

    NASA Astrophysics Data System (ADS)

    Bolst, David; Guatelli, Susanna; Tran, Linh T.; Chartier, Lachlan; Lerch, Michael L. F.; Matsufuji, Naruhiro; Rosenfeld, Anatoly B.

    2017-03-01

    Silicon microdosimetry is a promising technology for heavy ion therapy (HIT) quality assurance, because of its sub-mm spatial resolution and capability to determine radiation effects at a cellular level in a mixed radiation field. A drawback of silicon is not being tissue-equivalent, thus the need to convert the detector response obtained in silicon to tissue. This paper presents a method for converting silicon microdosimetric spectra to tissue for a therapeutic 12C beam, based on Monte Carlo simulations. The energy deposition spectra in a 10 μm sized silicon cylindrical sensitive volume (SV) were found to be equivalent to those measured in a tissue SV with the same shape, but with dimensions scaled by a factor κ equal to 0.57 and 0.54 for muscle and water, respectively. A low energy correction factor was determined to account for the enhanced response in silicon at low energy depositions, produced by electrons. The concept of the mean path length ⟨l_Path⟩ for calculating the lineal energy was introduced as an alternative to the mean chord length ⟨l⟩, because adopting Cauchy's formula for ⟨l⟩ was found to be inappropriate for the radiation field typical of HIT, which is very directional. ⟨l_Path⟩ can be determined from the peak of the lineal energy distribution produced by the incident carbon beam. Furthermore, it was demonstrated that the thickness of the SV along the direction of the incident 12C ion beam can be adopted as ⟨l_Path⟩. The tissue equivalence conversion method and ⟨l_Path⟩ were adopted to determine the RBE10, calculated using a modified microdosimetric kinetic model applied to the microdosimetric spectra resulting from the simulation study. Comparison of the RBE10 along the Bragg peak to experimental TEPC measurements at HIMAC, NIRS, showed good agreement. 
Such agreement demonstrates the validity of the developed tissue equivalence correction factors and of the determination of ⟨l_Path⟩.
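    The conversion and the path-length-based lineal energy can be sketched in a few lines. The κ value is taken from the abstract; the deposition events, the 10 μm path length, and the assumed direction of the scaling (tissue lineal energy = κ times silicon lineal energy) are illustrative:

```python
import numpy as np

kappa = 0.57                                  # muscle, from the abstract
l_path_um = 10.0                              # SV thickness along the beam = <l_Path>
edep_keV = np.array([120.0, 250.0, 480.0])    # toy deposition events in silicon

y_silicon = edep_keV / l_path_um              # lineal energy in silicon, keV/um
y_tissue = y_silicon * kappa                  # tissue-equivalent spectrum (assumed direction)

assert np.allclose(y_tissue, [6.84, 14.25, 27.36])
```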

  13. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through analytic computation, the next-to-leading order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections to the form factors are large, which makes the branching ratios B(Bc± → χcJ(hc) + π±(K±)) larger. These results are useful for the phenomenological analysis of Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  14. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  15. Power corrections to TMD factorization for Z-boson production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, I.; Tarasov, A.

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with the cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to the TMD factorization formula for Z-boson production and the Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  16. 30 CFR 870.18 - General rules for calculating excess moisture.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Collection of Coal Samples from Core; and, D1412-93, Standard Test Method for Equilibrium Moisture of Coal at... shipment or use. (5) Core sample means a cylindrical sample of coal that represents the thickness of a coal seam penetrated by drilling according to ASTM standard D5192-91. (6) Correction factor means the...

  17. 30 CFR 870.18 - General rules for calculating excess moisture.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Collection of Coal Samples from Core; and, D1412-93, Standard Test Method for Equilibrium Moisture of Coal at... shipment or use. (5) Core sample means a cylindrical sample of coal that represents the thickness of a coal seam penetrated by drilling according to ASTM standard D5192-91. (6) Correction factor means the...

  18. 30 CFR 870.18 - General rules for calculating excess moisture.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Collection of Coal Samples from Core; and, D1412-93, Standard Test Method for Equilibrium Moisture of Coal at... shipment or use. (5) Core sample means a cylindrical sample of coal that represents the thickness of a coal seam penetrated by drilling according to ASTM standard D5192-91. (6) Correction factor means the...

  19. 30 CFR 870.18 - General rules for calculating excess moisture.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Collection of Coal Samples from Core; and, D1412-93, Standard Test Method for Equilibrium Moisture of Coal at... shipment or use. (5) Core sample means a cylindrical sample of coal that represents the thickness of a coal seam penetrated by drilling according to ASTM standard D5192-91. (6) Correction factor means the...

  20. An Inferential Confidence Interval Method of Establishing Statistical Equivalence that Corrects Tryon's (2001) Reduction Factor

    ERIC Educational Resources Information Center

    Tryon, Warren W.; Lewis, Charles

    2008-01-01

    Evidence of group matching frequently takes the form of a nonsignificant test of statistical difference. Theoretical hypotheses of no difference are also tested in this way. These practices are flawed in that null hypothesis statistical testing provides evidence against the null hypothesis, and failing to reject H0 is not evidence…

  1. Motivation for Instrument Education: A Study from the Perspective of Expectancy-Value and Flow Theories

    ERIC Educational Resources Information Center

    Burak, Sabahat

    2014-01-01

    Problem Statement: In the process of instrument education, students being unwilling (lacking motivation) to play an instrument or to practise is a problem that educators frequently face. Recognizing the factors motivating the students will yield useful results for instrument educators in terms of developing correct teaching methods and approaches.…

  2. Infant Maltreatment-Related Mortality in Alaska: Correcting the Count and Using Birth Certificates to Predict Mortality

    ERIC Educational Resources Information Center

    Parrish, Jared W.; Gessner, Bradford D.

    2010-01-01

    Objectives: To accurately count the number of infant maltreatment-related fatalities and to use information from the birth certificates to predict infant maltreatment-related deaths. Methods: A population-based retrospective cohort study of infants born in Alaska for the years 1992 through 2005 was conducted. Risk factor variables were ascertained…

  3. Iterative Magnetometer Calibration

    NASA Technical Reports Server (NTRS)

    Sedlak, Joseph

    2006-01-01

    This paper presents an iterative method for three-axis magnetometer (TAM) calibration that makes use of three existing utilities recently incorporated into the attitude ground support system used at NASA's Goddard Space Flight Center. The method combines attitude-independent and attitude-dependent calibration algorithms with a new spinning spacecraft Kalman filter to solve for biases, scale factors, nonorthogonal corrections to the alignment, and the orthogonal sensor alignment. The method is particularly well-suited to spin-stabilized spacecraft, but may also be useful for three-axis stabilized missions given sufficient data to provide observability.
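The calibration described above solves for biases, scale factors, and alignment terms. A minimal sketch of such a TAM measurement model follows; the matrix and bias values are purely illustrative, not mission data:

```python
import numpy as np

# Illustrative TAM measurement model: raw = S @ field + bias, where S
# collects scale factors and (non)orthogonality/alignment corrections.
S = np.array([[1.02, 0.01, 0.00],
              [0.00, 0.98, 0.02],
              [0.01, 0.00, 1.01]])
bias = np.array([12.0, -5.0, 3.0])   # nT, illustrative values

def calibrate(raw):
    """Recover the ambient field from a raw TAM reading under the model above."""
    return np.linalg.solve(S, raw - bias)

true_field = np.array([20000.0, -1500.0, 43000.0])   # nT
raw = S @ true_field + bias
recovered = calibrate(raw)
```

Here `np.linalg.solve` simply inverts the assumed linear model; the actual ground-system algorithms estimate S and the bias vector from flight data.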

  4. Remote sensing of chlorophyll concentrations in the northern Gulf of Mexico

    NASA Technical Reports Server (NTRS)

    Trees, Charles C.; El-Sayed, Sayed Z.

    1986-01-01

    During a 17 month period (November 1978 - March 1980), phytoplankton pigment concentrations were remotely sensed in the northern Gulf of Mexico using the Coastal Zone Color Scanner (CZCS). A total of 29 CZCS orbits were processed into pigment (chlorophyll a + phaeopigments) images and then geometrically warped to a Mercator projection. A correction factor of 1.67 was applied to the pigment concentrations to correct for the tendency of the standard fluorometric method to underestimate chlorophyll a concentrations. The spatial and temporal distributions of pigment fronts were quite variable during this time series. A constant feature observed throughout the pigment imagery was the entrainment of coastal waters offshore; the most extensive entrainments occurred during intrusions of the Loop Current. For the 17 month survey, the mean HPLC-corrected pigment concentration was 3.30 ± 1.45 mg/m³.
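The 1.67 adjustment quoted above is a simple multiplicative correction; a one-line sketch (the function name is illustrative):

```python
# Factor quoted in the abstract for the fluorometric method's tendency
# to underestimate chlorophyll a concentrations.
FLUOROMETRIC_CORRECTION = 1.67

def corrected_pigment(fluorometric_value_mg_m3):
    """Scale a fluorometric pigment estimate toward the HPLC-consistent value."""
    return FLUOROMETRIC_CORRECTION * fluorometric_value_mg_m3

# A fluorometric reading of 2.0 mg/m^3 becomes 3.34 mg/m^3
value = corrected_pigment(2.0)
```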

  5. Local Setup Reproducibility of the Spinal Column When Using Intensity-Modulated Radiation Therapy for Craniospinal Irradiation With Patient in Supine Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoiber, Eva Maria, E-mail: eva.stoiber@med.uni-heidelberg.de; Department of Medical Physics, German Cancer Research Center, Heidelberg; Giske, Kristina

    Purpose: To evaluate local positioning errors of the lumbar spine during fractionated intensity-modulated radiotherapy of patients treated with craniospinal irradiation and to assess the impact of rotational error correction on these uncertainties for one patient setup correction strategy. Methods and Materials: Eight patients (6 adults, 2 children) treated with helical tomotherapy for craniospinal irradiation were retrospectively chosen for this analysis. Patients were immobilized with a deep-drawn Aquaplast head mask. In addition to daily megavoltage control computed tomography scans of the skull, the positioning of the lumbar spine was assessed once a week. For this purpose, the patient setup was corrected by a target point correction derived from a registration of the patient's skull. The residual positioning variations of the lumbar spine were evaluated by applying a rigid-registration algorithm, and the impact of different rotational error corrections was simulated. Results: After target point correction, residual local positioning errors of the lumbar spine varied considerably. Rotational error correction about the craniocaudal axis neither improved nor worsened these translational errors, whereas simulated rotational error correction about the right-left and anterior-posterior axes increased them by a factor of 2 to 3. Conclusion: The patient fixation used allows for deformations between the patient's skull and spine. Therefore, for the setup correction strategy evaluated in this study, generous margins for the lumbar spinal target volume are needed to prevent a local geographic miss. With any applied correction strategy, it needs to be evaluated whether or not a rotational error correction is beneficial.

  6. Radiation analysis devices, radiation analysis methods, and articles of manufacture

    DOEpatents

    Roybal, Lyle Gene

    2010-06-08

    Radiation analysis devices include circuitry configured to determine respective radiation count data for a plurality of sections of an area of interest and combine the radiation count data of individual sections to determine whether a selected radioactive material is present in the area of interest. The amount of radiation count data for an individual section is insufficient to determine whether the selected radioactive material is present in that section. An article of manufacture includes media comprising programming configured to cause processing circuitry to perform processing comprising: determining one or more correction factors based on a calibration of a radiation analysis device; measuring radiation received by the radiation analysis device using the one or more correction factors; and presenting information relating to an amount of radiation measured by the radiation analysis device having one of a plurality of specified radiation energy levels within a range of interest.

  7. A Summary of The 2000-2001 NASA Glenn Lear Jet AM0 Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Scheiman, David; Brinker, David; Snyder, David; Baraona, Cosmo; Jenkins, Phillip; Rieke, William J.; Blankenship, Kurt S.; Tom, Ellen M.

    2002-01-01

    Calibration of solar cells for space is extremely important for satellite power system design. Accurate prediction of solar cell performance is critical to solar array sizing, and is often required to be within 1%. The NASA Glenn Research Center solar cell calibration airplane facility has been in operation since 1963, with 531 flights to date. The calibration includes real data down to Air Mass (AM) 0.2 and uses the Langley plot method plus an ozone correction factor to extrapolate to AM0. Comparison of the AM0 calibration data indicates good correlation with balloon- and Shuttle-flown solar cells. This paper presents a history of the airplane calibration procedure, flying considerations, and a brief summary of the previous flying season, which had a record 35 flights, with some measurement results. It also discusses efforts to more clearly define the ozone correction factor.

  8. Comorbidities impacting on prognosis after lung transplant.

    PubMed

    Vaquero Barrios, José Manuel; Redel Montero, Javier; Santos Luna, Francisco

    2014-01-01

    The aim of this review is to give an overview of the clinical circumstances presenting before lung transplant that may have negative repercussions on the short- and long-term prognosis of the transplant. Methods for the screening and diagnosis of common comorbidities with a negative impact on transplant prognosis are proposed, both for pulmonary and extrapulmonary diseases, and measures aimed at correcting these factors are discussed. Coordination and information exchange between referral centers and transplant centers would allow these comorbidities to be detected and corrected, with the aim of minimizing risks and improving the life expectancy of transplant recipients. Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.

  9. [Therapeutic algorithm of idiopathic scoliosis in children].

    PubMed

    Ciortan, Ionica; Goţia, D G

    2008-01-01

    Acquired deformities of the spine (scoliosis, kyphosis, lordosis) represent a frequent pathology in children; their treatment is complex, with variable results that depend on various parameters. Mild scoliosis, with an angle of less than 30 degrees, is treated with physiotherapy and regular follow-up. If the angle is greater than 30 degrees, an orthopedic corset is required; an angle over 45 degrees calls for surgical correction. The indications for each therapeutic method depend on many factors; the main aim of treatment is to prevent aggravation of the curvature, and the goal of surgery is to restore the spinal axis as closely as possible to normal.

  10. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but they require the fill factor to be known; most image sensor manufacturers, however, keep it an industrial secret because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. The global intensity values of the virtual images are then obtained by fusing the virtual images into a single, high-dynamic-range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the estimated fill factors across the images of each camera. PMID:28335459

  11. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  12. Cell-Penetrating Peptide as a Means of Directing the Differentiation of Induced-Pluripotent Stem Cells.

    PubMed

    Kaitsuka, Taku; Tomizawa, Kazuhito

    2015-11-06

    Protein transduction using cell-penetrating peptides (CPPs) is useful for the delivery of large protein molecules, including some transcription factors. This method is safer than gene transfection methods with a viral vector because there is no risk of genomic integration of the exogenous DNA. Recently, this method was reported as a means for the induction of induced pluripotent stem (iPS) cells, directing the differentiation into specific cell types and supporting gene editing/correction. Furthermore, we developed a direct differentiation method to obtain a pancreatic lineage from mouse and human pluripotent stem cells via the protein transduction of three transcription factors, Pdx1, NeuroD, and MafA. Here, we discuss the possibility of using CPPs as a means of directing the differentiation of iPS cells and other stem cell technologies.

  13. Adapting Surface Ground Motion Relations to Underground conditions: A case study for the Sudbury Neutrino Observatory in Sudbury, Ontario, Canada

    NASA Astrophysics Data System (ADS)

    Babaie Mahani, A.; Eaton, D. W.

    2013-12-01

    Ground Motion Prediction Equations (GMPEs) are widely used in Probabilistic Seismic Hazard Assessment (PSHA) to estimate ground-motion amplitudes at Earth's surface as a function of magnitude and distance. Certain applications, such as hazard assessment for caprock integrity in the case of underground storage of CO2, waste disposal sites, and underground pipelines, require subsurface estimates of ground motion; at present, such estimates depend upon theoretical modeling and simulations. The objective of this study is to derive correction factors for GMPEs that enable estimation of amplitudes in the subsurface, using spectral ratios of underground to surface motions to calculate the correction factors. Two predictive methods are used: the first is a semi-analytic approach based on a quarter-wavelength method that is widely used for earthquake site-response investigations; the second is a numerical approach based on elastic finite-difference simulations of wave propagation. Both methods are evaluated using recordings of regional earthquakes by broadband seismometers installed at the surface and at depths of 1400 m and 2100 m in the Sudbury Neutrino Observatory, Canada. Overall, both methods provide a reasonable fit to the peaks and troughs observed in the ratios of real data; the finite-difference method, however, can simulate ground-motion ratios more accurately than the semi-analytic approach.
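The underground-to-surface spectral ratio used above to derive correction factors can be sketched with synthetic traces; the signals and the uniform 0.4 amplitude reduction here are illustrative, not Sudbury data:

```python
import numpy as np

def spectral_ratio(underground, surface, dt):
    """Ratio of underground to surface amplitude spectra.

    Illustrative sketch: the frequency-dependent correction factor is
    the ratio of the two amplitude spectra.
    """
    freqs = np.fft.rfftfreq(len(surface), d=dt)
    amp_under = np.abs(np.fft.rfft(underground))
    amp_surf = np.abs(np.fft.rfft(surface))
    # Guard against division by zero in spectral holes
    ratio = amp_under / np.maximum(amp_surf, 1e-12)
    return freqs, ratio

# Synthetic example: underground motion is a uniformly damped copy
# of the surface motion, so the ratio should be flat at 0.4.
dt = 0.01
t = np.arange(0, 10, dt)
surface = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 5.0 * t)
underground = 0.4 * surface
freqs, ratio = spectral_ratio(underground, surface, dt)
```

In practice the ratio would vary with frequency, showing the peaks and troughs the abstract describes.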

  14. Determination of the mechanical parameters of rock mass based on a GSI system and displacement back analysis

    NASA Astrophysics Data System (ADS)

    Kang, Kwang-Song; Hu, Nai-Lian; Sin, Chung-Sik; Rim, Song-Ho; Han, Eun-Cheol; Kim, Chol-Nam

    2017-08-01

    It is very important to obtain the mechanical parameters of rock mass for excavation design, support design, slope design and stability analysis of underground structures. In order to estimate the mechanical parameters of rock mass accurately, a new method combining a geological strength index (GSI) system with intelligent displacement back analysis is proposed in this paper. Firstly, the average spacing of joints (d), the rock mass block rating (RBR, a new quantitative factor), the surface condition rating (SCR) and the joint condition factor (Jc) are obtained for in situ rock masses using the scanline method, and the GSI values of the rock masses are obtained from a new quantitative GSI chart. A correction method for the GSI value is newly introduced by considering the influence of joint orientation and groundwater on rock mass mechanical properties, and value ranges of the rock mass mechanical parameters are then chosen using the Hoek-Brown failure criterion. Secondly, on the basis of the measured vault settlements and horizontal convergence displacements of an in situ tunnel, optimal parameters are estimated by combining a genetic algorithm (GA) with numerical simulation analysis using FLAC3D. This method has been applied in a lead-zinc mine. By utilizing the improved GSI quantization, the correction method and displacement back analysis, the mechanical parameters of the ore body, hanging wall and footwall rock mass were determined, so that reliable foundations were provided for mining design and stability analysis.

  15. About one counterexample of applying method of splitting in modeling of plating processes

    NASA Astrophysics Data System (ADS)

    Solovjev, D. S.; Solovjeva, I. A.; Litovka, Yu V.; Korobova, I. L.

    2018-05-01

    The paper presents the main factors that affect the uniformity of the thickness distribution of plating on the surface of a product. The experimental search for the optimal values of these factors is expensive and time-consuming, so the problem of adequately simulating coating processes is very relevant. Finite-difference approximations using seven-point and five-point templates, in combination with the splitting method, are considered as solution methods for the equations of the model. To study the correctness of the solutions obtained by these methods, experiments were conducted on plating with a flat anode and cathode whose relative position in the bath was not changed. The studies showed that the solution using the splitting method was up to 1.5 times faster, but it did not give adequate results due to the geometric features of the task under the given boundary conditions.

  16. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  17. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  18. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  19. Sufficient Forecasting Using Factor Models

    PubMed Central

    Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei

    2017-01-01

    We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called sufficient forecasting, which provides a set of sufficient predictive indices, inferred from the high-dimensional predictors, to deliver additional predictive power. Projected principal component analysis is employed to enhance the accuracy of the inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between sufficient forecasting and the deep learning architecture is explicitly stated. Sufficient forecasting correctly estimates the projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running a multiple regression of the target on the estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
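A minimal sketch of the factor-extraction-plus-regression idea behind such methods: principal components of a simulated high-dimensional panel, then a linear forecast on the extracted factors. This corresponds to the "natural method of running multiple regression" mentioned in the abstract, not to the full sufficient-forecasting estimator; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, k = 400, 60, 3                         # sample size, predictors, true factors
F = rng.normal(size=(T, k))                  # latent factors
L = rng.normal(size=(p, k))                  # factor loadings
X = F @ L.T + 0.5 * rng.normal(size=(T, p))  # high-dimensional predictor panel
y = F[:, 0] - 2.0 * F[:, 1] + 0.1 * rng.normal(size=T)  # target driven by factors

# Step 1: extract factors by principal components (SVD of the centered panel)
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :k] * s[:k]                     # estimated factor scores

# Step 2: regress the target on the extracted factors (linear forecast)
design = np.column_stack([np.ones(T), F_hat])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
y_hat = design @ beta
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the target is driven by the latent factors, the regression on the estimated factor scores recovers most of the target's variance.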

  20. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed using water-to-graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, kfl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general, the fluence correction factors are found to be close to unity, and the analytical and Monte Carlo codes give consistent values when the differences in secondary particle transport are taken into account. When considering only protons, the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths, predominantly due to the contributions from alpha particles, and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by kfl = 0.9964 + 0.0024·zw-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by kfl = 0.9947 + 0.0024·zw-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons, but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions, the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms, as well as for soft tissues.
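The two linear fits quoted in the abstract can be wrapped as small helpers; the coefficients are taken directly from the abstract, with zw-eq the water-equivalent depth in the paper's units:

```python
def k_fl_fluence(z_w_eq):
    """Fluence correction factor from the fluence-derived fit in the abstract."""
    return 0.9964 + 0.0024 * z_w_eq

def k_fl_dose(z_w_eq):
    """Fluence correction factor from the dose-ratio-derived fit in the abstract."""
    return 0.9947 + 0.0024 * z_w_eq

surface = k_fl_fluence(0.0)   # just below unity at the phantom surface
deeper = k_fl_fluence(3.0)    # rises above unity with depth
```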

  1. Improved correction for the tissue fraction effect in lung PET/CT imaging

    NASA Astrophysics Data System (ADS)

    Holman, Beverley F.; Cuplov, Vesna; Millner, Lynn; Hutton, Brian F.; Maher, Toby M.; Groves, Ashley M.; Thielemans, Kris

    2015-09-01

    Recently, there has been increased interest in imaging different pulmonary disorders using PET techniques. Previous work has shown, for static PET/CT, that air content in the lung influences reconstructed image values and that it is vital to correct for this ‘tissue fraction effect’ (TFE). In this paper, we extend this work to include the blood component and also investigate the TFE in dynamic imaging. CT imaging and PET kinetic modelling are used to determine fractional air and blood voxel volumes in six patients with idiopathic pulmonary fibrosis. These values are used to illustrate best- and worst-case scenarios when interpreting images without correcting for the TFE. In addition, the fractional volumes were used to determine correction factors for the SUV and the kinetic parameters, which were then applied to the patient images. The kinetic parameters K1 and Ki, along with the static parameter SUV, were all found to be affected by the TFE, with both air and blood providing a significant contribution to the errors. Without corrections, errors range from 34-80% in the best case and 29-96% in the worst case. In the patient data, without correcting for the TFE, regions of high density (fibrosis) appeared to have higher uptake than regions of lower density (normal-appearing tissue); however, this was reversed after air and blood correction. The proposed correction methods are vital for quantitative and relative accuracy; without them, images may be misinterpreted.
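A commonly used form of tissue-fraction correction divides the measured voxel value by the fraction of the voxel that is actually tissue. The sketch below assumes that form; in the study itself the per-voxel air and blood fractions come from CT and kinetic modelling:

```python
def tissue_fraction_corrected(value, f_air, f_blood):
    """Divide a voxel value (e.g. SUV or K1) by the tissue fraction.

    Illustrative sketch: the measured value is diluted by the air and
    blood volume fractions, so dividing by the remaining tissue
    fraction recovers a per-unit-tissue value.
    """
    tissue = 1.0 - f_air - f_blood
    if tissue <= 0.0:
        raise ValueError("voxel contains no tissue")
    return value / tissue

# A voxel that is 60% air and 15% blood: only 25% of it is tissue,
# so the measured value is scaled up by a factor of 4.
suv_corr = tissue_fraction_corrected(0.5, f_air=0.60, f_blood=0.15)
```

This illustrates why apparent uptake can reverse after correction: dense fibrotic regions contain little air, so their values are scaled up far less than normal-appearing, air-rich lung.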

  2. On the impact of power corrections in the prediction of B → K *μ+μ- observables

    NASA Astrophysics Data System (ADS)

    Descotes-Genon, Sébastien; Hofer, Lars; Matias, Joaquim; Virto, Javier

    2014-12-01

    The recent LHCb angular analysis of the exclusive decay B → K * μ + μ - has indicated significant deviations from the Standard Model expectations. Accurate predictions can be achieved at large K *-meson recoil for an optimised set of observables designed to have no sensitivity to hadronic input in the heavy-quark limit at leading order in α s . However, hadronic uncertainties reappear through non-perturbative ΛQCD /m b power corrections, which must be assessed precisely. In the framework of QCD factorisation we present a systematic method to include factorisable power corrections and point out that their impact on angular observables depends on the scheme chosen to define the soft form factors. Associated uncertainties are found to be under control, contrary to earlier claims in the literature. We also discuss the impact of possible non-factorisable power corrections, including an estimate of charm-loop effects. We provide results for angular observables at large recoil for two different sets of inputs for the form factors, spelling out the different sources of theoretical uncertainties. Finally, we comment on a recent proposal to explain the anomaly in B → K * μ + μ - observables through charm-resonance effects, and we propose strategies to test this proposal identifying observables and kinematic regions where either the charm-loop model can be disentangled from New Physics effects or the two options leave different imprints.

  3. Scatter and cross-talk corrections in simultaneous Tc-99m/I-123 brain SPECT using constrained factor analysis and artificial neural networks

    NASA Astrophysics Data System (ADS)

    Fakhri, G. El; Maksud, P.; Kijewski, M. F.; Haberi, M. O.; Todd-Pokropek, A.; Aurengo, A.; Moore, S. C.

    2000-08-01

    Simultaneous imaging of Tc-99m and I-123 would have a high clinical potential in the assessment of brain perfusion (Tc-99m) and neurotransmission (I-123) but is hindered by cross-talk between the two radionuclides. Monte Carlo simulations of 15 different dual-isotope studies were performed using a digital brain phantom. Several physiologic Tc-99m and I-123 uptake patterns were modeled in the brain structures. Two methods were considered to correct for cross-talk from both scattered and unscattered photons: constrained spectral factor analysis (SFA) and artificial neural networks (ANN). The accuracy and precision of reconstructed pixel values within several brain structures were compared to those obtained with an energy windowing method (WSA). In I-123 images, mean bias was close to 10% in all structures for SFA and ANN and between 14% (in the caudate nucleus) and 25% (in the cerebellum) for WSA. Tc-99m activity was overestimated by 35% in the cortex and 53% in the caudate nucleus with WSA, but by less than 9% in all structures with SFA and ANN. SFA and ANN performed well even in the presence of high-energy I-123 photons. The accuracy was greatly improved by incorporating the contamination into the SFA model or in the learning phase for ANN. SFA and ANN are promising approaches to correct for cross-talk in simultaneous Tc-99m/I-123 SPECT.

  4. Spatial homogenization methods for pin-by-pin neutron transport calculations

    NASA Astrophysics Data System (ADS)

    Kozlowski, Tomasz

    For practical reactor core applications, low-order transport approximations such as SP3 have been shown to provide sufficient accuracy for both static and transient calculations with considerably less computational expense than the discrete ordinates or full spherical harmonics methods. These methods have been applied in several core simulators where homogenization was performed at the level of the pin cell. One of the principal problems has been to recover the error introduced by pin-cell homogenization. Two basic approaches to treat pin-cell homogenization error have been proposed: Superhomogenization (SPH) factors and Pin-Cell Discontinuity Factors (PDF). These methods are based on the well-established Equivalence Theory and Generalized Equivalence Theory to generate appropriate group constants. They are able to treat all sources of error together, allowing even few-group diffusion with one mesh per cell to reproduce the reference solution. A detailed investigation and consistent comparison of both homogenization techniques showed the potential of the PDF approach to improve the accuracy of core calculations, but also revealed its limitations. In principle, the method is applicable only for the boundary conditions at which it was created, i.e. for the boundary conditions considered during the homogenization process, normally zero current. Therefore, there is a need to improve this method, making it more general and environment-independent. The goal of the proposed general homogenization technique is to create a function that correctly predicts the appropriate correction factor with only homogeneous information available, i.e. a function based on the heterogeneous solution that approximates PDFs using the homogeneous solution. It has been shown that the PDF can be well approximated by a least-squares polynomial fit of the non-dimensional heterogeneous solution and later used for PDF prediction from the homogeneous solution. This shows promise for PDF prediction at off-reference conditions, such as during reactor transients, which produce conditions that cannot typically be anticipated a priori.
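
    The least-squares polynomial fit mentioned above can be sketched as follows. This is a hedged illustration only: the mapping variable (a non-dimensional flux ratio), the polynomial degree, and the data are invented for the sketch, not taken from the thesis.

```python
import numpy as np

# Hypothetical illustration: fit a least-squares polynomial that maps a
# non-dimensionalised heterogeneous-solution quantity to pin-cell
# discontinuity factors (PDFs), then use the fit to predict PDFs from a
# homogeneous solution. All names and numbers are invented placeholders.

flux_ratio = np.linspace(0.8, 1.2, 21)          # non-dimensional flux ratio
pdf_ref = 1.0 + 0.15 * (flux_ratio - 1.0) ** 2  # reference PDFs (synthetic)

coeffs = np.polyfit(flux_ratio, pdf_ref, deg=2)  # least-squares fit

def predict_pdf(ratio):
    """Predict a PDF given only homogeneous-solution information."""
    return np.polyval(coeffs, ratio)

print(predict_pdf(1.05))
```

    In this spirit, the fit is built once from reference (heterogeneous) calculations and then evaluated cheaply at off-reference conditions.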

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao Hewei; Fahrig, Rebecca; Bennett, N. Robert

    Purpose: Scatter correction is a major challenge in x-ray imaging using large area detectors. Recently, the authors proposed a promising scatter correction method for x-ray computed tomography (CT) using primary modulation. Proof of concept was previously illustrated by Monte Carlo simulations and physical experiments on a small phantom with a simple geometry. In this work, the authors provide a quantitative evaluation of the primary modulation technique and demonstrate its performance in applications where scatter correction is more challenging. Methods: The authors first analyze the potential errors of the estimated scatter in the primary modulation method. On two tabletop CT systems, the method is investigated using three phantoms: a Catphan 600 phantom, an anthropomorphic chest phantom, and the Catphan 600 phantom with two annuli. Two different primary modulators are also designed to show the impact of the modulator parameters on the scatter correction efficiency. The first is an aluminum modulator with a weak modulation and a low modulation frequency, and the second is a copper modulator with a strong modulation and a high modulation frequency. Results: On the Catphan 600 phantom in the first study, the method reduces the error of the CT number in the selected regions of interest (ROIs) from 371.4 to 21.9 Hounsfield units (HU); the contrast-to-noise ratio also increases from 10.9 to 19.2. On the anthropomorphic chest phantom in the second study, which represents a more difficult case due to the high scatter signals and object heterogeneity, the method reduces the error of the CT number from 327 to 19 HU in the selected ROIs and from 31.4% to 5.7% on the overall average. The third study investigates the impact of object size on the efficiency of the method. The scatter-to-primary ratio estimation error on the Catphan 600 phantom without any annulus (20 cm in diameter) is at the level of 0.04; it rises to 0.07 and 0.1 on the phantom with an elliptical annulus (30 cm minor axis, 38 cm major axis) and with a circular annulus (38 cm in diameter), respectively. Conclusions: In all three phantom studies, good scatter correction performance of the proposed method was demonstrated using both image comparisons and quantitative analysis. The theory and experiments demonstrate that a strong primary modulation with a low transmission factor and a high modulation frequency is preferred for high scatter correction accuracy.

  6. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(')→ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(')→γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η-η' mixing.

  7. Characterization of the nanoDot OSLD dosimeter in CT

    PubMed Central

    Scarboro, Sarah B.; Cody, Dianna; Alvarez, Paola; Followill, David; Court, Laurence; Stingo, Francesco C.; Zhang, Di; Kry, Stephen F.

    2015-01-01

    Purpose: The extensive use of computed tomography (CT) in diagnostic procedures is accompanied by a growing need for more accurate and patient-specific dosimetry techniques. Optically stimulated luminescent dosimeters (OSLDs) offer a potential solution for patient-specific CT point-based surface dosimetry by measuring air kerma. The purpose of this work was to characterize the OSLD nanoDot for CT dosimetry, quantifying necessary correction factors, and evaluating the uncertainty of these factors. Methods: A characterization of the Landauer OSL nanoDot (Landauer, Inc., Greenwood, IL) was conducted using both measurements and theoretical approaches in a CT environment. The effects of signal depletion, signal fading, dose linearity, and angular dependence were characterized through direct measurement for CT energies (80–140 kV) and delivered doses ranging from ∼5 to >1000 mGy. Energy dependence as a function of scan parameters was evaluated using two independent approaches: direct measurement and a theoretical approach based on Burlin cavity theory and Monte Carlo simulated spectra. This beam-quality dependence was evaluated for a range of CT scanning parameters. Results: Correction factors for the dosimeter response in terms of signal fading, dose linearity, and angular dependence were found to be small for most measurement conditions (<3%). The relative uncertainty was determined for each factor and reported at the two-sigma level. Differences in irradiation geometry (rotational versus static) resulted in a difference in dosimeter signal of 3% on average. Beam quality varied with scan parameters and necessitated the largest correction factor, ranging from 0.80 to 1.15 relative to a calibration performed in air using a 120 kV beam. Good agreement was found between the theoretical and measurement approaches. 
Conclusions: Correction factors for the measurement of air kerma were generally small for CT dosimetry, although angular effects, and particularly effects due to changes in beam quality, could be more substantial. In particular, it would likely be necessary to account for variations in CT scan parameters and measurement location when performing CT dosimetry using OSLD. PMID:25832070
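
    The conversion described above, from raw nanoDot reading to air kerma via a chain of multiplicative correction factors, can be sketched as below. This is a hedged illustration: the function name, sensitivity value and all factor values are placeholders, not the paper's measured data.

```python
# Sketch: air kerma = reading x sensitivity x product of correction factors
# (fading, linearity, angle, beam quality). Values below are invented.

def air_kerma_mGy(raw_counts, sensitivity_mGy_per_count,
                  k_fading=1.0, k_linearity=1.0, k_angle=1.0,
                  k_beam_quality=1.0):
    """Apply each correction factor multiplicatively to the raw reading."""
    return (raw_counts * sensitivity_mGy_per_count
            * k_fading * k_linearity * k_angle * k_beam_quality)

# Example: a beam-quality factor of 0.95 relative to the 120 kV calibration
print(air_kerma_mGy(10000, 0.001, k_beam_quality=0.95))
```

    The beam-quality factor dominates in the paper's results (0.80 to 1.15), which is why it is the one varied in this example.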

  8. Characterization of the nanoDot OSLD dosimeter in CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scarboro, Sarah B.; Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030; The Methodist Hospital, Houston, Texas 77030

    Purpose: The extensive use of computed tomography (CT) in diagnostic procedures is accompanied by a growing need for more accurate and patient-specific dosimetry techniques. Optically stimulated luminescent dosimeters (OSLDs) offer a potential solution for patient-specific CT point-based surface dosimetry by measuring air kerma. The purpose of this work was to characterize the OSLD nanoDot for CT dosimetry, quantifying necessary correction factors, and evaluating the uncertainty of these factors. Methods: A characterization of the Landauer OSL nanoDot (Landauer, Inc., Greenwood, IL) was conducted using both measurements and theoretical approaches in a CT environment. The effects of signal depletion, signal fading, dose linearity, and angular dependence were characterized through direct measurement for CT energies (80–140 kV) and delivered doses ranging from ∼5 to >1000 mGy. Energy dependence as a function of scan parameters was evaluated using two independent approaches: direct measurement and a theoretical approach based on Burlin cavity theory and Monte Carlo simulated spectra. This beam-quality dependence was evaluated for a range of CT scanning parameters. Results: Correction factors for the dosimeter response in terms of signal fading, dose linearity, and angular dependence were found to be small for most measurement conditions (<3%). The relative uncertainty was determined for each factor and reported at the two-sigma level. Differences in irradiation geometry (rotational versus static) resulted in a difference in dosimeter signal of 3% on average. Beam quality varied with scan parameters and necessitated the largest correction factor, ranging from 0.80 to 1.15 relative to a calibration performed in air using a 120 kV beam. Good agreement was found between the theoretical and measurement approaches. Conclusions: Correction factors for the measurement of air kerma were generally small for CT dosimetry, although angular effects, and particularly effects due to changes in beam quality, could be more substantial. In particular, it would likely be necessary to account for variations in CT scan parameters and measurement location when performing CT dosimetry using OSLD.

  9. Air-kerma strength determination of a new directional ¹⁰³Pd source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aima, Manik, E-mail: aima@wisc.edu; Reed, Joshua L.; DeWerd, Larry A.

    2015-12-15

    Purpose: A new directional ¹⁰³Pd planar source array called a CivaSheet™ has been developed by CivaTech Oncology, Inc., for potential use in low-dose-rate (LDR) brachytherapy treatments. The array consists of multiple individual polymer capsules called CivaDots, containing ¹⁰³Pd and a gold shield that attenuates the radiation on one side, thus defining a hot and a cold side. This novel source requires new methods to establish a source strength metric. The presence of gold material in such close proximity to the active ¹⁰³Pd region causes the source spectrum to be significantly different from the energy spectra of seeds normally used in LDR brachytherapy treatments. In this investigation, the authors perform air-kerma strength (S_K) measurements, develop new correction factors for these measurements based on an experimentally verified energy spectrum, and test the robustness of transferring S_K to a well-type ionization chamber. Methods: S_K measurements were performed with the variable-aperture free-air chamber (VAFAC) at the University of Wisconsin Medical Radiation Research Center. Subsequent measurements were then performed in a well-type ionization chamber. To realize the quantity S_K from a directional source with gold material present, new methods and correction factors were considered. Updated correction factors were calculated using the MCNP 6 Monte Carlo code in order to determine S_K with the presence of gold fluorescent energy lines. In addition to S_K measurements, a low-energy high-purity germanium (HPGe) detector was used to experimentally verify the calculated spectrum, a sodium iodide (NaI) scintillating counter was used to verify the azimuthal and polar anisotropy, and a well-type ionization chamber was used to test the feasibility of disseminating S_K values for a directional source within a cylindrically symmetric measurement volume. Results: The UW VAFAC was successfully used to measure the S_K of four CivaDots with reproducibility within 0.3%. Monte Carlo methods were used to calculate the UW VAFAC correction factors, and the calculated spectrum emitted from a CivaDot was experimentally verified with HPGe detector measurements. The well-type ionization chamber showed minimal variation in response (<1.5%) as a function of source positioning angle, indicating that an American Association of Physicists in Medicine (AAPM) Accredited Dosimetry Calibration Laboratory calibrated well chamber would be a suitable device to transfer an S_K-based calibration to a clinical user. S_K per well-chamber ionization current ratios were consistent among the four dots measured. Additionally, the measurements and predictions of anisotropy show uniform emission within the solid angle of the VAFAC, which demonstrates the robustness of the S_K measurement approach. Conclusions: This characterization of a new ¹⁰³Pd directional brachytherapy source helps to establish calibration methods that could ultimately be used in the well-established AAPM Task Group 43 formalism. Monte Carlo methods accurately predict the changes in the energy spectrum caused by the fluorescent x-rays produced in the gold shield.

  10. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein S-100B, based on concomitant creatine kinase (CK) levels. The CK-S-100B relationship in non-head-injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post-concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of three-month headache (area under the curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (> or = 90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.
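
    The idea of the CK-based correction can be sketched as below: the S-100B-vs-CK relation from a non-head-injured cohort predicts the extracranial contribution, which is then subtracted from the measured value. The regression coefficients here are invented placeholders, not the study's fitted values.

```python
# Hedged sketch of a CK-based extracranial correction for S-100B.
# intercept/slope stand in for a regression fitted on non-head-injured
# controls (e.g. marathon runners); they are NOT the paper's numbers.

def corrected_s100b(measured_s100b, ck, intercept=0.05, slope=0.0002):
    """Subtract the CK-predicted extracranial S-100B, floored at zero."""
    extracranial = intercept + slope * ck
    return max(measured_s100b - extracranial, 0.0)

print(corrected_s100b(0.40, ck=500))
```

    A floor at zero keeps the corrected concentration physically meaningful when the predicted extracranial fraction exceeds the measurement.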

  11. Efficient dynamical correction of the transition state theory rate estimate for a flat energy barrier.

    PubMed

    Mökkönen, Harri; Ala-Nissila, Tapio; Jónsson, Hannes

    2016-09-07

    The recrossing correction to the transition state theory estimate of a thermal rate can be difficult to calculate when the energy barrier is flat. This problem arises, for example, in polymer escape if the polymer is long enough to stretch between the initial and final state energy wells while the polymer beads undergo diffusive motion back and forth over the barrier. We present an efficient method for evaluating the correction factor by constructing a sequence of hyperplanes starting at the transition state and calculating the probability that the system advances from one hyperplane to another towards the product. This is analogous to what is done in forward flux sampling except that there the hyperplane sequence starts at the initial state. The method is applied to the escape of polymers with up to 64 beads from a potential well. For high temperature, the results are compared with direct Langevin dynamics simulations as well as forward flux sampling and excellent agreement between the three rate estimates is found. The use of a sequence of hyperplanes in the evaluation of the recrossing correction speeds up the calculation by an order of magnitude as compared with the traditional approach. As the temperature is lowered, the direct Langevin dynamics simulations as well as the forward flux simulations become computationally too demanding, while the harmonic transition state theory estimate corrected for recrossings can be calculated without significant increase in the computational effort.
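
    The rate assembly described above can be sketched as follows: the transition state theory estimate is multiplied by the product of hyperplane-to-hyperplane advance probabilities, analogous to forward flux sampling started at the transition state. The numbers below are synthetic placeholders, not simulation output.

```python
# Sketch: corrected rate k = k_TST * prod(p_i), where each p_i is the
# probability of advancing from hyperplane i to hyperplane i+1 toward
# the product state. Values are invented for illustration.

from math import prod

def corrected_rate(k_tst, advance_probabilities):
    """Multiply the TST rate by the recrossing correction factor."""
    return k_tst * prod(advance_probabilities)

print(corrected_rate(1e6, [0.5, 0.6, 0.8]))
```

    Because each p_i is estimated from short trajectories between adjacent hyperplanes, the product form is what makes the correction cheap compared with simulating full recrossing dynamics.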

  12. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that region-specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
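
    A common form of the grid-scale correction factor discussed above is the ratio of the observed to the simulated mean over a historical period, applied multiplicatively to future simulations. The sketch below uses synthetic season lengths; the abstract does not specify the factor's functional form, so the ratio form is an assumption.

```python
import numpy as np

# Minimal sketch of a regional correction factor for season length:
# factor = mean(observed) / mean(simulated) over a historical period,
# then applied to future model output. All data are synthetic.

observed_hist = np.array([180.0, 175.0, 185.0])   # days, observations
simulated_hist = np.array([160.0, 158.0, 162.0])  # days, AOGCM, same period

factor = observed_hist.mean() / simulated_hist.mean()

simulated_future = np.array([170.0, 175.0])
corrected_future = simulated_future * factor
print(corrected_future)
```

    The study's caveat applies directly: whether such a factor actually reduces the discrepancy must be checked per region and per model rather than assumed.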

  13. Modeling bias and variation in the stochastic processes of small RNA sequencing

    PubMed Central

    Etheridge, Alton; Sakhanenko, Nikita; Galas, David

    2017-01-01

    Abstract The use of RNA-seq as the preferred method for the discovery and validation of small RNA biomarkers has been hindered by high quantitative variability and biased sequence counts. In this paper we develop a statistical model for sequence counts that accounts for ligase bias and stochastic variation in sequence counts. This model implies a linear quadratic relation between the mean and variance of sequence counts. Using a large number of sequencing datasets, we demonstrate how one can use the generalized additive models for location, scale and shape (GAMLSS) distributional regression framework to calculate and apply empirical correction factors for ligase bias. Bias correction could remove more than 40% of the bias for miRNAs. Empirical bias correction factors appear to be nearly constant over at least one and up to four orders of magnitude of total RNA input and independent of sample composition. Using synthetic mixes of known composition, we show that the GAMLSS approach can analyze differential expression with greater accuracy, higher sensitivity and specificity than six existing algorithms (DESeq2, edgeR, EBSeq, limma, DSS, voom) for the analysis of small RNA-seq data. PMID:28369495
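
    As a minimal sketch of applying empirical bias correction factors to counts (not of the paper's GAMLSS fitting itself, which estimates those factors), dividing raw counts by per-sequence bias factors looks like this; the counts and factors are invented:

```python
import numpy as np

# Hedged sketch: correct small RNA-seq counts with per-sequence empirical
# bias factors (assumed already estimated, e.g. via GAMLSS regression).
# A factor of 2.0 means the sequence is over-counted twofold by ligase bias.

counts = np.array([1000.0, 50.0, 400.0])   # raw sequence counts (synthetic)
bias_factor = np.array([2.0, 0.5, 1.0])    # empirical per-sequence bias

corrected = counts / bias_factor
print(corrected)
```

    The abstract's observation that the factors are nearly constant across orders of magnitude of RNA input is what makes such a simple division defensible in practice.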

  14. Post-discharge body weight and neurodevelopmental outcomes among very low birth weight infants in Taiwan: A nationwide cohort study

    PubMed Central

    Hsu, Chung-Ting; Chen, Chao-Huei; Wang, Teh-Ming; Hsu, Ya-Chi

    2018-01-01

    Background: Premature infants are at high risk for developmental delay and cognitive dysfunction. Besides medical conditions, growth restriction is regarded as an important risk factor for cognitive and neurodevelopmental dysfunction throughout childhood and adolescence and even into adulthood. In this study, we analyzed the relationship between post-discharge body weight and psychomotor development using a nationwide dataset. Materials and methods: This was a nationwide cohort study conducted in Taiwan. A total of 1791 premature infants born between 2007 and 2011 with a birth weight of less than 1500 g were enrolled into this multi-center study. The data were obtained from the Taiwan Premature Infant Developmental Collaborative Study Group. The growth and neurodevelopmental evaluations were performed at corrected ages of 6, 12 and 24 months. Post-discharge failure to thrive was defined as a body weight below the 3rd percentile of the standard growth curve for Taiwanese children at the corrected age. Results: The prevalence of failure to thrive was 15.8%, 16.9%, and 12.0% at corrected ages of 6, 12, and 24 months, respectively. At a corrected age of 24 months, 12.9% had low Mental Developmental Index (MDI) scores (MDI<70), 17.8% had low Psychomotor Developmental Index (PDI) scores (PDI<70), 12.7% had cerebral palsy, and 29.5% had neurodevelopmental impairment. Post-discharge failure to thrive was significantly associated with poor neurodevelopmental outcomes. After controlling for potential confounding factors (small for gestational age, extra-uterine growth retardation at discharge, cerebral palsy, gender, mild intraventricular hemorrhage, persistent pulmonary hypertension of the newborn, respiratory distress syndrome, chronic lung disease, hemodynamically significant patent ductus arteriosus, necrotizing enterocolitis, surfactant use and indomethacin use), post-discharge failure to thrive remained a risk factor. Conclusion: This observational study found an association between lower body weight at corrected ages of 6, 12, and 24 months and poor neurodevelopmental outcomes among VLBW premature infants. Many adverse factors can influence neurodevelopment in NICU care. More studies are needed to elucidate the causal relationship. PMID:29444139

  15. High-fidelity artifact correction for cone-beam CT imaging of the brain

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-02-01

    CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. 
Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.
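
    The contrast-to-noise figure quoted above can be made concrete with the standard definition CNR = |mean(lesion) − mean(background)| / sd(background). The sketch below uses synthetic pixel values, not the study's image data.

```python
import numpy as np

# Sketch of the CNR metric used to quantify the benefit of artifact
# correction for simulated bleeds. Pixel values below are invented.

def cnr(lesion_pixels, background_pixels):
    """Contrast-to-noise ratio of a lesion ROI against a background ROI."""
    lesion = np.asarray(lesion_pixels, dtype=float)
    bg = np.asarray(background_pixels, dtype=float)
    return abs(lesion.mean() - bg.mean()) / bg.std()

print(cnr([50.0, 52.0, 48.0], [0.0, 10.0, -10.0, 5.0, -5.0]))
```

    Note the trade-off the abstract reports: the corrections raise noise (the denominator) but recover far more contrast (the numerator), so CNR still improves.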

  16. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper identifies the correction factors in determining the speed of sound in experiments performed by freshmen in an undergraduate physics laboratory, and compares their results with the speed of sound determined by a senior student. Both used the same instrument, a resonance tube with apparatus. The speed of sound obtained by the senior student was 333.38 m s⁻¹, deviating from the theoretical value by about 3.98%. The freshmen's results were categorised into three groups: accurate values (52.63%), middle values (31.58%) and lower values (15.79%). Based on the analysis, several correction factors were suggested: human error in determining the first and second harmonics, the end correction related to the tube diameter, and other environmental factors such as temperature, humidity, density, and pressure.
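
    The end correction mentioned above has a standard remedy worth spelling out: for a resonance tube, two successive resonance lengths satisfy L2 − L1 = λ/2, so the end correction (e ≈ 0.3 × diameter) cancels; a single resonance length requires it explicitly. The numbers below are illustrative, not the students' data.

```python
# Sketch of the tube end correction for a closed resonance tube.
# Single resonance: lambda/4 = L1 + 0.3*d (end correction needed).
# Two resonances:   lambda   = 2*(L2 - L1) (end correction cancels).

def speed_from_two_resonances(frequency_hz, l1_m, l2_m):
    """v = f * lambda, with lambda = 2 * (L2 - L1)."""
    return frequency_hz * 2.0 * (l2_m - l1_m)

def speed_from_one_resonance(frequency_hz, l1_m, diameter_m):
    """v = f * lambda, with lambda/4 = L1 + 0.3 * d."""
    return frequency_hz * 4.0 * (l1_m + 0.3 * diameter_m)

print(speed_from_two_resonances(512.0, 0.16, 0.49))
```

    Comparing the two estimators on the same data is a quick way for students to see how large the diameter-dependent correction actually is.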

  17. Multi-method Assessment of Psychopathy in Relation to Factors of Internalizing and Externalizing from the Personality Assessment Inventory: The Impact of Method Variance and Suppressor Effects

    PubMed Central

    Blonigen, Daniel M.; Patrick, Christopher J.; Douglas, Kevin S.; Poythress, Norman G.; Skeem, Jennifer L.; Lilienfeld, Scott O.; Edens, John F.; Krueger, Robert F.

    2010-01-01

    Research to date has revealed divergent relations across factors of psychopathy measures with criteria of internalizing (INT; anxiety, depression) and externalizing (EXT; antisocial behavior, substance use). However, failure to account for method variance and suppressor effects has obscured the consistency of these findings across distinct measures of psychopathy. Using a large correctional sample, the current study employed a multi-method approach to psychopathy assessment (self-report, interview/file review) to explore convergent and discriminant relations between factors of psychopathy measures and latent criteria of INT and EXT derived from the Personality Assessment Inventory (PAI; L. Morey, 2007). Consistent with prediction, scores on the affective-interpersonal factor of psychopathy were negatively associated with INT and negligibly related to EXT, whereas scores on the social deviance factor exhibited positive associations (moderate and large, respectively) with both INT and EXT. Notably, associations were highly comparable across the psychopathy measures when accounting for method variance (in the case of EXT) and when assessing for suppressor effects (in the case of INT). Findings are discussed in terms of implications for clinical assessment and evaluation of the validity of interpretations drawn from scores on psychopathy measures. PMID:20230156

  18. Hydraulic correction method (HCM) to enhance the efficiency of SRTM DEM in flood modeling

    NASA Astrophysics Data System (ADS)

    Chen, Huili; Liang, Qiuhua; Liu, Yong; Xie, Shuguang

    2018-04-01

    Digital Elevation Models (DEMs) are one of the most important controlling factors determining the simulation accuracy of hydraulic models. However, currently available global topographic data are of limited use for 2-D hydraulic modeling, mainly due to vegetation bias, random errors and insufficient spatial resolution. A hydraulic correction method (HCM) for the SRTM DEM is proposed in this study to improve modeling accuracy. First, we employ the global vegetation-corrected DEM (i.e. Bare-Earth DEM), developed from the SRTM DEM by accounting for both vegetation height and the SRTM vegetation signal. Then, a newly released DEM with both vegetation bias and random errors removed (i.e. Multi-Error Removed DEM) is employed to overcome the limitation of height errors. Finally, an approach to correct the Multi-Error Removed DEM is presented to account for the insufficient spatial resolution, ensuring flow connectivity of the river networks. The approach involves: (a) extracting river networks from the Multi-Error Removed DEM using an automated algorithm in ArcGIS; (b) correcting the location and layout of extracted streams with the aid of the Google Earth platform and remote sensing imagery; and (c) removing the positive biases of raised segments in the river networks based on bed slope to generate the hydraulically corrected DEM. The proposed HCM uses easily available data and tools to improve the flow connectivity of river networks without manual adjustment. To demonstrate the advantages of HCM, an extreme flood event in the Huifa River Basin (China) is simulated on the original DEM, the Bare-Earth DEM, the Multi-Error Removed DEM, and the hydraulically corrected DEM using an integrated hydrologic-hydraulic model. A comparative analysis is subsequently performed to assess the simulation accuracy and performance of the four DEMs, and favorable results have been obtained on the corrected DEM.
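
    Step (c) above can be sketched as removing raised segments along a river long-profile so elevation never increases downstream. The running-minimum used here is a deliberate simplification of the paper's bed-slope-based procedure, with invented elevations:

```python
# Simplified stand-in for HCM step (c): clamp each downstream elevation
# to the minimum elevation seen so far along the extracted river profile,
# so flow connectivity is preserved. (The paper uses bed slope; this
# running minimum is an illustrative approximation.)

def enforce_downstream_descent(profile_elevations):
    """Return a monotonically non-increasing copy of the profile."""
    corrected, current_min = [], float("inf")
    for z in profile_elevations:           # ordered upstream -> downstream
        current_min = min(current_min, z)
        corrected.append(current_min)
    return corrected

print(enforce_downstream_descent([105.0, 104.0, 106.5, 103.0, 103.5]))
```

    The raised value 106.5 (e.g. a vegetation or noise artifact blocking the channel) is lowered to the upstream minimum, restoring a connected flow path.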

  19. Intra-individual variation in urinary iodine concentration: effect of statistical correction on population distribution using seasonal three-consecutive-day spot urine in children

    PubMed Central

    Ji, Xiaohong; Liu, Peng; Sun, Zhenqi; Su, Xiaohui; Wang, Wei; Gao, Yanhui; Sun, Dianjun

    2016-01-01

    Objective: To determine the effect of statistical correction for intra-individual variation on estimated urinary iodine concentration (UIC) by sampling on 3 consecutive days in four seasons in children. Setting: School-aged children from urban and rural primary schools in Harbin, Heilongjiang, China. Participants: 748 and 640 children aged 8–11 years were recruited from urban and rural schools, respectively, in Harbin. Primary and secondary outcome measures: The spot urine samples were collected once a day for 3 consecutive days in each season over 1 year. The UIC of the first day was corrected by two statistical correction methods: the average correction method (average of days 1, 2; average of days 1, 2 and 3) and the variance correction method (UIC of day 1 corrected by two replicates and by three replicates). The variance correction method determined the SD between subjects (Sb) and within subjects (Sw), and calculated the correction coefficient Fi = Sb/(Sb + Sw/di), where di was the number of observations. The UIC of day 1 was then corrected as: corrected UIC = mean UIC + Fi × (UIC of day 1 − mean UIC). Results: The variance correction methods showed the overall Fi was 0.742 for 2 days' correction and 0.829 for 3 days' correction; the values for spring, summer, autumn and winter were 0.730, 0.684, 0.706 and 0.703 for 2 days' correction and 0.809, 0.742, 0.796 and 0.804 for 3 days' correction, respectively. After removal of the individual effect, the correlation coefficient between consecutive days was 0.224, and between non-consecutive days 0.050. Conclusions: The variance correction method is effective for correcting intra-individual variation in estimated UIC following sampling on 3 consecutive days in four seasons in children. The method varies little between ages, sexes and urban or rural settings, but does vary between seasons. PMID:26920442
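
    The variance correction shrinks the day-1 value toward the group mean by the coefficient Fi. The sketch below follows the abstract's definition Fi = Sb/(Sb + Sw/di) verbatim (the abstract states Fi in terms of SDs); the numerical inputs are synthetic, not the study's estimates.

```python
# Sketch of the variance correction method for intra-individual variation:
# Fi = Sb / (Sb + Sw / di), corrected = mean + Fi * (x_day1 - mean),
# where Sb/Sw are between-/within-subject SDs and di the number of
# observations, as defined in the abstract. Inputs below are synthetic.

def variance_corrected(x_day1, group_mean, sb, sw, di):
    """Shrink a single-day measurement toward the group mean by Fi."""
    fi = sb / (sb + sw / di)
    return group_mean + fi * (x_day1 - group_mean)

print(variance_corrected(300.0, 200.0, sb=50.0, sw=60.0, di=3))
```

    With more replicates (larger di), Fi approaches 1 and less shrinkage is applied, matching the reported increase of Fi from 0.742 (2 days) to 0.829 (3 days).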

  20. Optimisation of reconstruction-reprojection-based motion correction for cardiac SPECT.

    PubMed

    Kangasmaa, Tuija S; Sohlberg, Antti O

    2014-07-01

    Cardiac motion is a challenging cause of image artefacts in myocardial perfusion SPECT. A wide range of motion correction methods have been developed over the years, and so far automatic algorithms based on the reconstruction-reprojection principle have proved to be the most effective. However, these methods have not been fully optimised in terms of their free parameters and implementational details. Two slightly different implementations of reconstruction-reprojection-based motion correction techniques were optimised for effective, good-quality motion correction and then compared with each other. The first of these methods (Method 1) was the traditional reconstruction-reprojection motion correction algorithm, where the motion correction is done in projection space, whereas the second algorithm (Method 2) performed motion correction in reconstruction space. The parameters that were optimised include the type of cost function (squared difference, normalised cross-correlation and mutual information) that was used to compare measured and reprojected projections, and the number of iterations needed. The methods were tested with motion-corrupt projection datasets, which were generated by adding three different types of motion (lateral shift, vertical shift and vertical creep) to motion-free cardiac perfusion SPECT studies. Method 2 performed slightly better overall than Method 1, but the difference between the two implementations was small. The execution time for Method 2 was much longer than for Method 1, which limits its clinical usefulness. The mutual information cost function gave clearly the best results for all three motion sets for both correction methods. Three iterations were sufficient for a good-quality correction using Method 1. The traditional reconstruction-reprojection-based method with three update iterations and mutual information cost function is a good option for motion correction in clinical myocardial perfusion SPECT.
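The winning cost function, mutual information between a measured and a reprojected projection, can be estimated from a joint intensity histogram. A minimal sketch (the bin count and implementation details are assumptions, not the paper's):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images (e.g. a measured and a
    reprojected SPECT projection); higher values mean better alignment."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of img_b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A registration loop would shift or rotate the measured projection and keep the transform that maximises this value against the reprojection.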

  1. SU-E-T-46: Application of a Twin-Detector Method for the Determination of the Mean Photon Energy Em at Points of Measurement in a Water Phantom Surrounding a GammaMed HDR 192Ir Brachytherapy Source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chofor, N; Poppe, B; Nebah, F

    Purpose: In a brachytherapy photon field in water, the fluence-averaged mean photon energy Em at the point of measurement correlates with the radiation quality correction factor kQ of a non-water-equivalent detector. To support the experimental assessment of Em, we show that the normalized signal ratio NSR of a pair of radiation detectors, an unshielded silicon diode and a diamond detector, can serve to measure the quantity Em in a water phantom at an Ir-192 unit. Methods: Photon fluence spectra were computed in EGSnrc based on a detailed model of the GammaMed source. The factor kQ was calculated as the ratio of the detector's spectrum-weighted responses under calibration conditions at a 60Co unit and under brachytherapy conditions at various radial distances from the source. The NSR was investigated for a pair of a p-type unshielded silicon diode 60012 and a synthetic single crystal diamond detector 60019 (both PTW Freiburg). Each detector was positioned according to its effective point of measurement, with its axis facing the source. Lateral signal profiles were scanned under complete scatter conditions, and the NSR was determined as the quotient of the signal ratio at the measurement position x and that at the reference position r_ref = 1 cm. Results: The radiation quality correction factor kQ shows a close correlation with the mean photon energy Em. The NSR of the diode/diamond pair changes by a factor of two from 0-18 cm from the source, while Em drops from 350 to 150 keV. Theoretical and measured NSR profiles agree to within ±2% for points within 5 cm from the source. Conclusion: In the presence of the close correlation between the radiation quality correction factor kQ and the photon mean energy Em, the NSR provides a practical means of assessing Em under clinical conditions. Precise detector positioning is the major challenge.

  2. Deformation field correction for spatial normalization of PET images

    PubMed Central

    Bilgel, Murat; Carass, Aaron; Resnick, Susan M.; Wong, Dean F.; Prince, Jerry L.

    2015-01-01

    Spatial normalization of positron emission tomography (PET) images is essential for population studies, yet the current state of the art in PET-to-PET registration is limited to the application of conventional deformable registration methods that were developed for structural images. A method is presented for the spatial normalization of PET images that improves their anatomical alignment over the state of the art. The approach works by correcting the deformable registration result using a model that is learned from training data having both PET and structural images. In particular, viewing the structural registration of training data as ground truth, correction factors are learned by using a generalized ridge regression at each voxel given the PET intensities and voxel locations in a population-based PET template. The trained model can then be used to obtain more accurate registration of PET images to the PET template without the use of a structural image. A cross validation evaluation on 79 subjects shows that the proposed method yields more accurate alignment of the PET images compared to deformable PET-to-PET registration as revealed by 1) a visual examination of the deformed images, 2) a smaller error in the deformation fields, and 3) a greater overlap of the deformed anatomical labels with ground truth segmentations. PMID:26142272

  3. Position Accuracy Improvement by Implementing the DGNSS-CP Algorithm in Smartphones

    PubMed Central

    Yoon, Donghwan; Kee, Changdon; Seo, Jiwon; Park, Byungwoon

    2016-01-01

    The position accuracy of Global Navigation Satellite System (GNSS) modules is one of the most significant factors in determining the feasibility of new location-based services for smartphones. Considering the structure of current smartphones, it is impossible to apply the ordinary range-domain Differential GNSS (DGNSS) method. Therefore, this paper describes and applies a DGNSS-correction projection method to a commercial smartphone. First, the local line-of-sight unit vector is calculated using the elevation and azimuth angle provided in the position-related output of Android’s LocationManager, and this is transformed to Earth-centered, Earth-fixed coordinates for use. To achieve position-domain correction for satellite systems other than GPS, such as GLONASS and BeiDou, the relevant line-of-sight unit vectors are used to construct an observation matrix suitable for multiple constellations. The results of static and dynamic tests show that the standalone GNSS accuracy is improved by about 30%–60%, thereby reducing the existing error of 3–4 m to just 1 m. The proposed algorithm enables the position error to be directly corrected via software, without the need to alter the hardware and infrastructure of the smartphone. This method of implementation and the subsequent improvement in performance are expected to be highly effective in terms of portability and cost savings. PMID:27322284
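The position-domain projection step can be sketched as follows: build a line-of-sight unit vector from each satellite's elevation and azimuth, then map the per-satellite pseudorange corrections into a single position correction by least squares. This is an illustrative sketch in local ENU coordinates only (the paper transforms to Earth-centered, Earth-fixed coordinates and handles multi-constellation details not shown here); the sign convention is an assumption.

```python
import numpy as np

def los_unit_enu(elev_deg, azim_deg):
    """Line-of-sight unit vector (east, north, up) from elevation/azimuth."""
    el, az = np.radians(elev_deg), np.radians(azim_deg)
    return np.array([np.cos(el) * np.sin(az),
                     np.cos(el) * np.cos(az),
                     np.sin(el)])

def position_domain_correction(elevs, azims, prc):
    """Project per-satellite pseudorange corrections (prc, metres) into a
    single ENU position correction via least squares: G @ dx = prc."""
    G = np.vstack([los_unit_enu(e, a) for e, a in zip(elevs, azims)])
    dx, *_ = np.linalg.lstsq(G, np.asarray(prc), rcond=None)
    return dx
```

With four or more satellites of varied geometry the least-squares problem is overdetermined, which is what lets the correction be applied purely in the position domain, without touching the receiver's range processing.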

  4. New decoding methods of interleaved burst error-correcting codes

    NASA Astrophysics Data System (ADS)

    Nakano, Y.; Kasahara, M.; Namekawa, T.

    1983-04-01

    A probabilistic method of single burst error correction, using the syndrome correlation of subcodes which constitute the interleaved code, is presented. This method makes it possible to realize a high capability of burst error correction with less decoding delay. By generalizing this method it is possible to obtain probabilistic method of multiple (m-fold) burst error correction. After estimating the burst error positions using syndrome correlation of subcodes which are interleaved m-fold burst error detecting codes, this second method corrects erasure errors in each subcode and m-fold burst errors. The performance of these two methods is analyzed via computer simulation, and their effectiveness is demonstrated.
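The property these decoding methods exploit, that a channel burst lands on distinct subcodes after interleaving, can be illustrated with a toy block interleaver (this sketch is not the paper's probabilistic syndrome-correlation decoder):

```python
def interleave(codewords):
    """Transmit m codewords column-wise so a channel burst of length <= m
    hits each codeword (subcode) at most once."""
    n = len(codewords[0])
    return [cw[j] for j in range(n) for cw in codewords]

def deinterleave(stream, m):
    """Undo interleave: recover the m codewords from the received stream."""
    return [stream[i::m] for i in range(m)]
```

After deinterleaving, a burst of length up to m appears as at most one error per subcode, which a single-error-correcting subcode can repair; the paper's method goes further by estimating the burst position from the correlation of subcode syndromes.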

  5. Variance analysis of forecasted streamflow maxima in a wet temperate climate

    NASA Astrophysics Data System (ADS)

    Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.

    2018-05-01

    Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation of streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to rules from extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for the 2, 20 and 100 year streamflow events in the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.

  6. Method and apparatus for measuring thermal conductivity of small, highly insulating specimens

    NASA Technical Reports Server (NTRS)

    Miller, Robert A. (Inventor); Kuczmarski, Maria A. (Inventor)

    2012-01-01

    A hot plate method and apparatus for the measurement of thermal conductivity combines the following capabilities: 1) measurements of very small specimens; 2) measurements of specimens with thermal conductivity on the same order as that of air; and 3) the ability to use air as a reference material. Care is taken to ensure that the heat flow through the test specimen is essentially one-dimensional. No attempt is made to use heated guards to minimize the flow of heat from the hot plate to the surroundings. Results indicate that since large correction factors must be applied to account for guard imperfections when specimen dimensions are small, simply measuring and correcting for heat from the heater disc that does not flow into the specimen is preferable. The invention is a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of that of air.

  7. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors Scp or field factors Ω(f(clin), f(msr))(Q(clin), Q(msr)) with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k(f(clin), f(msr))(Q(clin), Q(msr)). The aim of this study is the determination of the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water to air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k(f(clin), f(msr))(Q(clin), Q(msr)); this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k(f(clin), f(msr))(Q(clin), Q(msr)) is of the order of 1.2 at a field size of 1 × 1 cm² for the large-volume ion chamber PTW31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW31014 and PTW31016. For the diode detectors included in this study (PTW60016, PTW60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k(f(clin), f(msr))(Q(clin), Q(msr)) below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW60017 is highly recommended for small field dosimetry, since its correction factor k(f(clin), f(msr))(Q(clin), Q(msr)) is closest to unity in small fields and mainly independent of the electron beam spot size.
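As commonly defined in this small-field formalism, the correction factor is a double ratio of dose to water and dose to detector between the clinical field and the machine-specific reference field, and it converts a measured detector reading ratio into the field factor. A minimal numerical sketch (function names and the example values are illustrative, not the study's data):

```python
def k_correction(dw_clin, ddet_clin, dw_msr, ddet_msr):
    """k = (Dw/Ddet)_clin / (Dw/Ddet)_msr, from Monte Carlo doses to
    water (dw) and to the detector (ddet) in each field."""
    return (dw_clin / ddet_clin) / (dw_msr / ddet_msr)

def field_factor(m_clin, m_msr, k):
    """Omega = (M_clin / M_msr) * k: the detector reading ratio times the
    detector- and field-size-dependent correction factor."""
    return (m_clin / m_msr) * k
```

A k of 1.2, as reported for the PTW31010 at 1 × 1 cm², means the raw reading ratio underestimates the field factor by 20% because of volume averaging.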

  8. Likelihood-based modification of experimental crystal structure electron density maps

    DOEpatents

    Terwilliger, Thomas C [Santa Fe, NM]

    2005-04-16

    A maximum-likelihood method improves an electron density map of an experimental crystal structure. A likelihood of a set of structure factors {F_h} is formed for the experimental crystal structure as (1) the likelihood of having obtained an observed set of structure factors {F_h^OBS} if structure factor set {F_h} was correct, and (2) the likelihood that an electron density map resulting from {F_h} is consistent with selected prior knowledge about the experimental crystal structure. The set of structure factors {F_h} is then adjusted to maximize the likelihood of {F_h} for the experimental crystal structure. An improved electron density map is constructed with the maximized structure factors.

  9. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which, for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN ≈ 10⁻¹⁰ m/cycle).

  10. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter- and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual-energy-window technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate the true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as T = √(C1·C2)·e^(μt/2)·F, where C1 and C2 are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique F definitions (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. The error in T was calculated as the percentage difference with respect to in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations.
    The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to <10% compared to in-air conditions after scatter correction. After GM attenuation correction, the mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the dual-energy-window (DEW) scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptakes, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
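The geometric-mean attenuation correction can be sketched directly, assuming the standard conjugate-view form T = √(C1·C2)·e^(μt/2)·F; the depth-insensitivity the study reports falls out algebraically, since the two opposing attenuation paths always sum to the detector separation t.

```python
import numpy as np

def gm_true_counts(c1, c2, mu, t, f=1.0):
    """Conjugate-view geometric-mean estimate of true tumor counts.

    c1, c2 : ROI counts on detectors 1 and 2
    mu     : linear attenuation coefficient of water (1/cm)
    t      : detector separation (cm)
    f      : background/shape factor (1.0 reduces to the standard GM)
    """
    return np.sqrt(c1 * c2) * np.exp(mu * t / 2.0) * f
```

For a source at depth d, c1 ∝ e^(−μd) and c2 ∝ e^(−μ(t−d)), so √(c1·c2) carries a factor e^(−μt/2) regardless of d, which the e^(μt/2) term cancels exactly.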

  11. Comparing bias correction methods in downscaling meteorological variables for a hydrologic impact study in an arid area in China

    NASA Astrophysics Data System (ADS)

    Fang, G. H.; Yang, J.; Chen, Y. N.; Zammit, C.

    2015-06-01

    Water resources are essential to the ecosystem and social economy in the desert and oasis of the arid Tarim River basin, northwestern China, and expected to be vulnerable to climate change. It has been demonstrated that regional climate models (RCMs) provide more reliable results for a regional impact study of climate change (e.g., on water resources) than general circulation models (GCMs). However, due to their considerable bias it is still necessary to apply bias correction before they are used for water resources research. In this paper, after a sensitivity analysis on input meteorological variables based on the Sobol' method, we compared five precipitation correction methods and three temperature correction methods in downscaling RCM simulations applied over the Kaidu River basin, one of the headwaters of the Tarim River basin. Precipitation correction methods applied include linear scaling (LS), local intensity scaling (LOCI), power transformation (PT), distribution mapping (DM) and quantile mapping (QM), while temperature correction methods are LS, variance scaling (VARI) and DM. The corrected precipitation and temperature were compared to the observed meteorological data, prior to being used as meteorological inputs of a distributed hydrologic model to study their impacts on streamflow. 
The results show (1) streamflows are sensitive to precipitation, temperature and solar radiation but not to relative humidity and wind speed; (2) raw RCM simulations are heavily biased from observed meteorological data, and its use for streamflow simulations results in large biases from observed streamflow, and all bias correction methods effectively improved these simulations; (3) for precipitation, PT and QM methods performed equally best in correcting the frequency-based indices (e.g., standard deviation, percentile values) while the LOCI method performed best in terms of the time-series-based indices (e.g., Nash-Sutcliffe coefficient, R2); (4) for temperature, all correction methods performed equally well in correcting raw temperature; and (5) for simulated streamflow, precipitation correction methods have more significant influence than temperature correction methods and the performances of streamflow simulations are consistent with those of corrected precipitation; i.e., the PT and QM methods performed equally best in correcting flow duration curve and peak flow while the LOCI method performed best in terms of the time-series-based indices. The case study is for an arid area in China based on a specific RCM and hydrologic model, but the methodology and some results can be applied to other areas and models.
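Of the precipitation methods compared, quantile mapping (QM) is the easiest to sketch: each raw model value is replaced by the observed value at the same quantile of the reference-period distribution. An illustrative empirical implementation (not the study's code; wet-day frequency handling is omitted):

```python
import numpy as np

def quantile_map(raw, obs_ref, mod_ref):
    """Empirical quantile mapping: map each raw model value to the
    observed value at the same quantile of the reference period."""
    mod_sorted = np.sort(mod_ref)
    obs_sorted = np.sort(obs_ref)
    # quantile of each raw value within the model reference distribution
    q = np.searchsorted(mod_sorted, raw, side="right") / len(mod_sorted)
    q = np.clip(q, 0.0, 1.0)
    return np.quantile(obs_sorted, q)
```

Because the whole distribution is mapped, QM corrects standard deviations and percentile values well, consistent with the frequency-based results reported above.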

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Xi; Mou, Xuanqin; Nishikawa, Robert M.

    Purpose: Small calcifications are often the earliest and the main indicator of breast cancer. Dual-energy digital mammography (DEDM) has been considered a promising technique to improve the detectability of calcifications since it can be used to suppress the contrast between adipose and glandular tissues of the breast. X-ray scatter leads to erroneous calculations of the DEDM image. Although the pinhole-array interpolation method can estimate scattered radiation, it requires extra exposures to measure the scatter and apply the correction. The purpose of this work is to design an algorithmic method for scatter correction in DEDM without extra exposures. Methods: In this paper, a scatter correction method for DEDM was developed based on the knowledge that scattered radiation has small spatial variation and that the majority of pixels in a mammogram are noncalcification pixels. The scatter fraction was estimated in the DEDM calculation and the measured scatter fraction was used to remove scatter from the image. The scatter correction method was implemented on a commercial full-field digital mammography system with a breast-tissue-equivalent phantom and a calcification phantom. The authors also implemented the pinhole-array interpolation scatter correction method on the system. Phantom results for both methods are presented and discussed. The authors compared the background DE calcification signals and the contrast-to-noise ratio (CNR) of calcifications in the three DE calcification images: the image without scatter correction, the image with scatter correction using the pinhole-array interpolation method, and the image with scatter correction using the authors' algorithmic method. Results: The authors' results show that the resultant background DE calcification signal can be reduced. The root-mean-square background DE calcification signal of 1962 μm with scatter-uncorrected data was reduced to 194 μm after scatter correction using the authors' algorithmic method.
    The range of background DE calcification signals was reduced by 58% after scatter correction with the algorithmic method relative to scatter-uncorrected data. With the scatter-correction algorithm and denoising, the minimum visible calcification size can be reduced from 380 to 280 μm. Conclusions: When applying the proposed algorithmic scatter correction to images, the resultant background DE calcification signals can be reduced and the CNR of calcifications can be improved. This method has similar or even better performance than the pinhole-array interpolation method in scatter correction for DEDM; moreover, it is convenient and requires no extra exposure to the patient. Although the proposed scatter correction method is effective, it was validated with a 5-cm-thick phantom with calcifications and a homogeneous background. The method should be tested on structured backgrounds to more accurately gauge effectiveness.

  13. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added in ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes operated on with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when the correction factor is used. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  14. Extracting the pair distribution function of liquids and liquid-vapor surfaces by grazing incidence x-ray diffraction mode.

    PubMed

    Vaknin, David; Bu, Wei; Travesset, Alex

    2008-07-28

    We show that the structure factor S(q) of water can be obtained from x-ray synchrotron experiments at grazing angle of incidence (in reflection mode) by using a liquid surface diffractometer. The corrections used to obtain S(q) self-consistently are described. Applying these corrections to scans at different incident beam angles (above the critical angle) collapses the measured intensities into a single master curve, without fitting parameters, which within a scale factor yields S(q). Performing the measurements below the critical angle for total reflectivity yields the structure factor of the topmost layers of the water/vapor interface. Our results indicate water restructuring at the vapor/water interface. We also introduce a new approach to extract g(r), the pair distribution function (PDF), by expressing the PDF as a linear sum of error functions whose parameters are refined by applying a nonlinear least squares fit method. This approach enables a straightforward determination of the inherent uncertainties in the PDF. Implications of our results for previously measured and theoretical predictions of the PDF are also discussed.
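The erf-sum representation of g(r) can be illustrated with a reduced version of the fit: if the step centers and a common width are held fixed, the amplitudes follow from plain linear least squares. The paper refines all parameters with a nonlinear fit; the fixed-center, shared-width simplification and all names below are assumptions made to keep the sketch linear.

```python
import numpy as np
from math import erf

def erf_basis(r, centers, width):
    """Basis matrix of error-function steps erf((r - c)/width), one column
    per center c."""
    verf = np.vectorize(erf)
    return np.column_stack([verf((r - c) / width) for c in centers])

def fit_pdf(r, g, centers, width):
    """Least-squares amplitudes a_i for g(r) ~ sum_i a_i * erf((r - c_i)/w)."""
    B = erf_basis(r, centers, width)
    a, *_ = np.linalg.lstsq(B, g, rcond=None)
    return a
```

In the full nonlinear version, the covariance of the fitted parameters is what gives the "straightforward determination of the inherent uncertainties" mentioned above.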

  15. Calibration of volume and component biomass equations for Douglas-fir and lodgepole pine in Western Oregon forests

    Treesearch

    Krishna P. Poudel; Temesgen Hailemariam

    2016-01-01

    Using data from destructively sampled Douglas-fir and lodgepole pine trees, we evaluated the performance of regional volume and component biomass equations in terms of bias and RMSE. The volume and component biomass equations were calibrated using three different adjustment methods that used: (a) a correction factor based on ordinary least squares regression through...

  16. Eye-motion-corrected optical coherence tomography angiography using Lissajous scanning.

    PubMed

    Chen, Yiwei; Hong, Young-Joo; Makita, Shuichi; Yasuno, Yoshiaki

    2018-03-01

    To correct eye motion artifacts in en face optical coherence tomography angiography (OCT-A) images, a Lissajous scanning method with subsequent software-based motion correction is proposed. The standard Lissajous scanning pattern is modified to be compatible with OCT-A and a corresponding motion correction algorithm is designed. The effectiveness of our method was demonstrated by comparing en face OCT-A images with and without motion correction. The method was further validated by comparing motion-corrected images with scanning laser ophthalmoscopy images, and the repeatability of the method was evaluated using a checkerboard image. A motion-corrected en face OCT-A image from a blinking case is presented to demonstrate the ability of the method to deal with eye blinking. Results show that the method can produce accurate motion-free en face OCT-A images of the posterior segment of the eye in vivo.
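A basic Lissajous trajectory, two sinusoids of different frequencies driving the fast and slow scan axes, can be generated as follows. The frequencies and phase are illustrative, not the modified OCT-A-compatible pattern of the paper:

```python
import numpy as np

def lissajous_scan(n_points, fx, fy, phase=np.pi / 2):
    """Sample x = sin(2*pi*fx*t), y = sin(2*pi*fy*t + phase) over one unit
    of time. With nearby but unequal fx and fy the beam revisits regions
    at different times, which is what lets software cross-check and undo
    eye motion."""
    t = np.linspace(0.0, 1.0, n_points, endpoint=False)
    return np.sin(2 * np.pi * fx * t), np.sin(2 * np.pi * fy * t + phase)
```

The redundancy of the crossing pattern, rather than any hardware tracker, is what the subsequent software-based motion correction exploits.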

  17. Comparison of Various Equations for Estimating GFR in Malawi: How to Determine Renal Function in Resource Limited Settings?

    PubMed Central

    Phiri, Sam; Rothenbacher, Dietrich; Neuhann, Florian

    2015-01-01

    Background: Chronic kidney disease (CKD) is probably an underrated public health problem in Sub-Saharan Africa, in particular in combination with HIV infection. Knowledge about CKD prevalence is scarce, and in the available literature different methods to classify CKD are used, impeding comparison and general prevalence estimates. Methods: This study assessed different serum-creatinine-based equations for estimated glomerular filtration rate (eGFR) and compared them to a cystatin C based equation. The study was conducted in Lilongwe, Malawi, enrolling a population of 363 adults of which 32% were HIV-positive. Results: Comparison of formulae based on Bland-Altman plots and accuracy revealed best performance for the CKD-EPI equation without the correction factor for black Americans. Analyzing the differences between HIV-positive and HIV-negative individuals, CKD-EPI systematically overestimated eGFR in comparison to cystatin C and therefore led to underestimation of CKD in HIV-positive individuals. Conclusions: Our findings underline the importance of standardizing eGFR calculation in a Sub-Saharan African setting, of further investigating the differences with regard to HIV status, and of developing potential correction factors as established for age and sex. PMID:26083345
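For reference, the comparison hinges on the 2009 CKD-EPI creatinine equation; a sketch using its published constants, where black=False reproduces the "without the correction factor for black Americans" variant discussed above (the function signature is illustrative):

```python
def ckd_epi_egfr(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine eGFR in mL/min/1.73 m^2.

    scr_mg_dl : serum creatinine in mg/dL
    black=False omits the 1.159 race factor, as in the abstract.
    """
    kappa = 0.7 if female else 0.9        # sex-specific creatinine knot
    alpha = -0.329 if female else -0.411  # exponent below the knot
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr
```

The race multiplier rescales eGFR by a constant 15.9%, which is exactly the kind of fixed correction factor the conclusion argues needs local validation.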

  18. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network

    PubMed Central

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-01-01

    Air temperature (AT) is an extremely vital factor in meteorology, agriculture, the military, etc., being used for the prediction of weather disasters such as drought, flood and frost. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed at a high spatial density. A meteorological wireless sensor network relying on low-cost sensing nodes has been proposed to reduce the cost of AT monitoring. However, the temperature sensor on the sensing node can easily be influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the collected AT and SR data from May 2014 and found a numerical correspondence between the AT error (ATE) and SR. This relation was used to calculate the real-time ATE from real-time SR and to correct the error of AT in other months. PMID:26213941
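The described correction reduces to a two-step procedure: fit the SR-to-ATE correspondence on the May calibration data, then subtract the SR-predicted error from later readings. A simple polynomial fit stands in for the paper's empirical relation (an assumption; the paper does not specify the functional form used here):

```python
import numpy as np

def fit_ate_model(sr_cal, ate_cal, deg=1):
    """Fit the solar-radiation -> air-temperature-error relation on
    calibration data (e.g. the May 2014 record)."""
    return np.polyfit(sr_cal, ate_cal, deg)

def correct_at(at_sensed, sr_now, coeffs):
    """Subtract the SR-predicted error from the sensed air temperature."""
    return at_sensed - np.polyval(coeffs, sr_now)
```

Because SR is measured anyway at each node, the correction needs no extra hardware, only the one-off calibration pass.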

  19. Air Temperature Error Correction Based on Solar Radiation in an Economical Meteorological Wireless Sensor Network.

    PubMed

    Sun, Xingming; Yan, Shuangshuang; Wang, Baowei; Xia, Li; Liu, Qi; Zhang, Hui

    2015-07-24

    Air temperature (AT) is a vital factor in meteorology, agriculture, the military, and other fields, being used for the prediction of weather disasters such as drought, flood, and frost. Many efforts have been made to monitor the temperature of the atmosphere, such as automatic weather stations (AWS). Nevertheless, due to the high cost of specialized AT sensors, they cannot be deployed at high spatial density. A meteorological wireless sensor network relying on low-cost sensing nodes has been proposed to reduce the cost of AT monitoring. However, the temperature sensor on the sensing node can be easily influenced by environmental factors. Previous research has confirmed that there is a close relation between AT and solar radiation (SR). Therefore, this paper presents a method to decrease the error of sensed AT by taking SR into consideration. In this work, we analyzed all of the AT and SR data collected in May 2014 and found a numerical correspondence between AT error (ATE) and SR. This corresponding relation was used to calculate real-time ATE from real-time SR and to correct the error of AT in other months.
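    The correction idea in this record can be sketched numerically. A minimal illustration, assuming (as the abstract does not specify the model form) a simple linear relation between ATE and SR fitted on one month of co-located data and then subtracted from later raw readings:

```python
# Sketch only: the linear ATE = a*SR + b form and all names are assumptions,
# not the paper's actual model.
import numpy as np

def fit_ate_model(sr_may, ate_may):
    """Least-squares fit of air-temperature error (ATE) against solar radiation (SR)."""
    a, b = np.polyfit(sr_may, ate_may, 1)
    return a, b

def correct_at(at_raw, sr_now, a, b):
    """Remove the SR-predicted error from the raw sensed temperature."""
    return at_raw - (a * sr_now + b)

# Toy calibration data where the error grows linearly with SR.
sr = np.array([0.0, 200.0, 400.0, 600.0, 800.0])   # W/m^2
ate = 0.005 * sr + 0.1                              # synthetic error relation
a, b = fit_ate_model(sr, ate)

# A later raw reading contaminated by the same error at SR = 500 W/m^2.
corrected = correct_at(25.0 + (0.005 * 500.0 + 0.1), 500.0, a, b)
```

Here the fit recovers the synthetic coefficients exactly, so the corrected value returns to the true 25.0 °C; with real data the residual error depends on how stable the ATE-SR relation is across months.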

  20. Development and Evaluation of A Novel and Cost-Effective Approach for Low-Cost NO₂ Sensor Drift Correction.

    PubMed

    Sun, Li; Westerdahl, Dane; Ning, Zhi

    2017-08-19

    Emerging low-cost gas sensor technologies have received increasing attention in recent years for air quality measurements due to their small size and convenient deployment. However, in diverse applications these sensors face many technological challenges, including sensor drift over long-term deployment that cannot be easily addressed using mathematical correction algorithms or machine learning methods. This study aims to develop a novel approach to auto-correct the drift of a commonly used electrochemical nitrogen dioxide (NO₂) sensor, with a comprehensive evaluation of its application. The impact of environmental factors on the NO₂ electrochemical sensor in low-ppb-level measurements was evaluated in the laboratory, and a temperature and relative humidity correction algorithm was evaluated. An automated zeroing protocol was developed and assessed, using a chemical absorbent to remove NO₂ as a means to perform zero correction in varying ambient conditions. The sensor system was operated in three different environments, in which data were compared to a reference NO₂ analyzer. The results showed that the zero-calibration protocol effectively corrected the observed drift of the sensor output. This technique offers the ability to enhance the performance of low-cost sensor-based systems, and these findings suggest extension of the approach to improve data quality from sensors measuring other gaseous pollutants in urban air.
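    The zeroing protocol amounts to periodically sampling NO₂-free air and treating those readings as the drifting baseline. A hedged sketch, assuming (the paper does not state this) that the baseline between zero checks is recovered by linear interpolation and subtracted from the raw signal:

```python
# Sketch under assumptions: the zero-air readings are taken as given here; the
# paper obtains them with a chemical absorbent that removes NO2.
import numpy as np

def zero_correct(t, raw, t_zero, v_zero):
    """Subtract a drifting baseline interpolated from periodic zero-air checks."""
    baseline = np.interp(t, t_zero, v_zero)  # piecewise-linear drift estimate
    return raw - baseline

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])          # hours since deployment
raw = np.array([10.0, 11.0, 12.0, 13.0, 14.0])   # true signal 10 plus drift t
corrected = zero_correct(t, raw, t_zero=np.array([0.0, 4.0]),
                         v_zero=np.array([0.0, 4.0]))
```

With zero checks bracketing the interval, the linear drift is removed entirely; real drift that is nonlinear between checks would leave a residual, which motivates frequent zeroing.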

  1. Image-Based 2D Re-Projection for Attenuation Substitution in PET Neuroimaging.

    PubMed

    Laymon, Charles M; Minhas, Davneet S; Becker, Carl R; Matan, Cristy; Oborski, Matthew J; Price, Julie C; Mountz, James M

    2018-02-27

    In dual modality positron emission tomography (PET)/magnetic resonance imaging (MRI), attenuation correction (AC) methods are continually improving. Although a new AC can sometimes be generated from existing MR data, its application requires a new reconstruction. We evaluate an approximate 2D projection method that allows offline image-based reprocessing. 2-Deoxy-2-[18F]fluoro-D-glucose ([18F]FDG) brain scans were acquired (Siemens HR+) for six subjects. Attenuation data were obtained using the scanner's transmission source (SAC). Additional scanning was performed on a Siemens mMR, including production of a Dixon-based MR AC (MRAC). The MRAC was imported to the HR+ and the PET data were reconstructed twice: once using native SAC (ground truth); once using the imported MRAC (imperfect AC). The re-projection method was implemented as follows. The MRAC PET was forward projected to approximately reproduce attenuation-corrected sinograms. The SAC and MRAC images were forward projected and converted to attenuation-correction factors (ACFs). The MRAC ACFs were removed from the MRAC PET sinograms by division; the SAC ACFs were applied by multiplication. The regenerated sinograms were reconstructed by filtered back projection to produce images (SUBAC PET) in which SAC has been substituted for MRAC. Ideally, SUBAC PET should match SAC PET. Via coregistered T1 images, FreeSurfer (FS; MGH, Boston) was used to define a set of cortical gray matter regions of interest. Regional activity concentrations were extracted for SAC PET, MRAC PET, and SUBAC PET. SUBAC PET showed substantially smaller root mean square error than MRAC PET, with averaged values of 1.5% versus 8.1%. Re-projection is a viable image-based method for the application of an alternate attenuation correction in neuroimaging.
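    The divide-out/multiply-in step can be made concrete with a toy one-view example. A much-simplified sketch (real scanners use full system models, scatter terms, and filtered back projection; here a single 0-degree parallel view and exponential line integrals stand in for the forward projector):

```python
# Sketch under assumptions: uniform attenuation maps and one parallel view.
# ACF = exp(line integral of mu along each ray).
import numpy as np

def acf(mu_map, pixel_size=1.0):
    """Attenuation-correction factors for a single 0-degree parallel view."""
    return np.exp(mu_map.sum(axis=0) * pixel_size)

mu_sac = np.full((4, 4), 0.096)   # "ground truth" transmission-based map (1/cm)
mu_mrac = np.full((4, 4), 0.090)  # imperfect MR-derived map
true_proj = np.ones(4)            # unattenuated emission projection

# Sinogram as reconstructed with the imperfect MRAC: the measured (attenuated)
# data true_proj / acf(mu_sac) were multiplied by the wrong factors acf(mu_mrac).
mrac_pet_sino = true_proj / acf(mu_sac) * acf(mu_mrac)

# Substitution: divide out the MRAC ACFs, multiply in the SAC ACFs.
subac_sino = mrac_pet_sino / acf(mu_mrac) * acf(mu_sac)
```

In this idealized setting the substitution recovers the true projection exactly; the paper's 1.5% residual error reflects the approximations the toy model omits.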

  2. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with a weighted least-squares regression algorithm.
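    A weighted least-squares fit with the recommended 1/x² weighting can be written directly from the weighted normal equations. A minimal sketch (the solver and toy data are illustrative, not the paper's procedure):

```python
# Fit y = a + b*x minimizing sum(w_i * (y_i - a - b*x_i)^2) with w_i = 1/x_i^2.
import numpy as np

def wls_line(x, y, w):
    """Weighted least-squares line fit via the weighted normal equations."""
    X = np.column_stack([np.ones_like(x), x])   # design matrix [1, x]
    W = np.diag(w)
    a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return a, b

x = np.array([1.0, 10.0, 100.0, 1000.0])   # wide concentration range
y = 2.0 * x + 0.5                          # noise-free toy calibration data
a, b = wls_line(x, y, w=1.0 / x**2)        # 1/x^2 weighting
```

The 1/x² weights keep the low-concentration points from being swamped by the large-concentration residuals, which is exactly the stability argument the abstract makes.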

  3. FY 2016 Status Report: CIRFT Testing Data Analyses and Updated Curvature Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Jy-An John; Wang, Hong

    This report provides a detailed description of FY15 test result corrections/analysis based on the FY16 Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) test program methodology update used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal transportation conditions. The CIRFT consists of a U-frame testing setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages to a universal testing machine. The curvature of rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are used and clamped to the side connecting plates of the U-frame to capture the deformation of the rod. The contact-based measurement, or three-LVDT-based curvature measurement system, on SNF rods has been proven to be quite reliable in CIRFT testing. However, how the LVDT head contacts the SNF rod may have a significant effect on the curvature measurement, depending on the magnitude and direction of rod curvature. It has been demonstrated that the contact/curvature issues can be corrected by using a correction on the sensor spacing. The sensor spacing defines the separation of the three LVDT probes and is a critical quantity in calculating the rod curvature once the deflections are obtained. The sensor spacing correction can be determined by using chisel-type probes. The method has been critically examined this year and has been shown to be difficult to implement in a hot cell environment, and thus cannot be implemented effectively. A correction based on the proposed equivalent gauge length has the required flexibility and accuracy and can be appropriately used as a correction factor.
The correction method based on the equivalent gauge length has been successfully demonstrated in CIRFT data analysis for the dynamic tests conducted on Limerick (LMK) (17 tests), North Anna (NA) (6 tests), and Catawba mixed oxide (MOX) (10 tests) SNF samples. These CIRFT tests were completed in FY14 and FY15. Specifically, the data sets obtained from measurement and monitoring were processed and analyzed. The fatigue life of rods has been characterized in terms of moment, curvature, and equivalent stress and strain.
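    The three-LVDT geometry admits a compact curvature estimate. A hedged sketch, assuming a small-slope parabolic fit through three equally spaced deflection readings, with the report's gauge-length correction entering through the effective probe spacing (the formula and spacing value are illustrative, not taken from the report):

```python
# Second-difference curvature from three deflections d1, d2, d3 at spacing h.
# For small slopes, curvature ~ second derivative of the deflection profile.
def curvature(d1, d2, d3, h):
    """Parabolic (second-difference) curvature estimate; h is the effective
    probe spacing, i.e. where the equivalent gauge-length correction applies."""
    return (d1 - 2.0 * d2 + d3) / h**2

# Toy check: deflections sampled from y = 0.5*k*x^2 should recover k exactly.
k, h = 0.002, 12.0                               # 1/mm curvature, mm spacing
d = [0.5 * k * x**2 for x in (-h, 0.0, h)]
est = curvature(d[0], d[1], d[2], h)
```

Because the estimate scales as 1/h², an error in the assumed sensor spacing propagates quadratically into the curvature, which is why the report treats the spacing correction as critical.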

  4. Calibration of Passive Samplers for the Monitoring of Pharmaceuticals in Water-Sampling Rate Variation.

    PubMed

    Męczykowska, Hanna; Kobylis, Paulina; Stepnowski, Piotr; Caban, Magda

    2017-05-04

    Passive sampling is one of the most efficient methods of monitoring pharmaceuticals in environmental water. The reliability of the process relies on a correctly performed calibration experiment and a well-defined sampling rate (Rs) for target analytes. Therefore, in this review the state-of-the-art methods of passive sampler calibration for the most popular pharmaceuticals: antibiotics, hormones, β-blockers and non-steroidal anti-inflammatory drugs (NSAIDs), along with the sampling rate variation, were presented. The advantages and difficulties in laboratory and field calibration were pointed out, according to the need to control the exact conditions. Sampling rate calculating equations and all the factors affecting the Rs value - temperature, flow, pH, salinity of the donor phase and biofouling - were discussed. Moreover, various calibration parameters gathered from the literature published in the last 16 years, including the device types, were tabled and compared. What is evident is that the sampling rate values for pharmaceuticals are impacted by several factors, whose influence is still unclear and unpredictable, while there is a big gap in experimental data. It appears that the calibration procedure needs to be improved; for example, there is a significant deficiency of PRCs (Performance Reference Compounds) for pharmaceuticals. One of the suggestions is to introduce correction factors for Rs values estimated in laboratory conditions.

  5. [The influence of the climatic and weather conditions on the mechanisms underlying the formation of enhanced meteosensitivity (a literature review)].

    PubMed

    Uyanaeva, A I; Tupitsyna, Yu Yu; Rassulova, M A; Turova, E A; Lvova, N V; Ajrapetova, N S

    The present review concerns the problem of the influence of the climatic conditions on the human body, the creation of the medical weather forecast service, the development of non-pharmacological methods for the correction of meteopathic disorders, and the reduction of the risk of the complications provoked by the unfavourable weather conditions. The literature data are used to analyse the influence of climatic and weather factors on the formation of enhanced meteosensitivity and the development of exacerbations of chronic non-communicable diseases under the influence of weather conditions. It is concluded that marked changes of the weather may lead to an increased frequency of exacerbations of the chronic non-communicable diseases. The influence of weather and climate on human health is becoming an increasingly important factor under the current conditions bearing in mind the modern tendency toward variations of the global climatic conditions and their specific regional manifestations. The authors emphasize the necessity of the identification and evaluation of the predictors of the development of high meteosensitivity for the prognostication of the risks of the meteopathic reactions and the complications associated with the changes in weather conditions as well as the importance of the improvement of the existing and the development of new methods for the non-pharmacological prevention and correction of enhanced meteosensitivity with the application of the natural and preformed physical factors.

  6. Preferred color correction for digital LCD TVs

    NASA Astrophysics Data System (ADS)

    Kim, Kyoung Tae; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho

    2009-01-01

    Instead of colorimetric color reproduction, preferred color correction is applied to digital TVs to improve subjective image quality. The first step of preferred color correction is to survey the preferred color coordinates of memory colors. This can be achieved by off-line human visual tests. The next step is to extract pixels of memory colors representing skin, grass and sky. For the detected pixels, colors are shifted towards the desired coordinates identified in advance. This correction process may result in undesirable contours on the boundaries between the corrected and un-corrected areas. For digital TV applications, the process of extraction and correction should be applied in every frame of the moving images. This paper presents a preferred color correction method in LCH color space. Values of chroma and hue are corrected independently. Undesirable contours on the boundaries of correction are minimized. The proposed method changes the coordinates of memory color pixels towards the target color coordinates. The amount of correction is determined based on the averaged coordinate of the extracted pixels. The proposed method maintains the relative color difference within memory color areas. Performance of the proposed method is evaluated using paired comparison. Results of the experiments indicate that the proposed method can reproduce perceptually pleasing images for viewers.
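    The key property described above, sizing the shift from the region average so that relative differences within the memory-color area are preserved, can be sketched in a few lines. The blend weight, target coordinates and pixel values below are all illustrative assumptions:

```python
# Hedged sketch: shift every detected pixel by the same offset, computed from
# the distance between the region's mean LCH coordinate and a preferred target.
import numpy as np

def correct_lch(c, h, c_target, h_target, alpha=0.5):
    """Shift detected pixels by alpha * (target - region mean), independently
    in chroma and hue, so within-region differences are unchanged."""
    dc = alpha * (c_target - c.mean())
    dh = alpha * (h_target - h.mean())
    return c + dc, h + dh

c = np.array([30.0, 32.0, 34.0])   # chroma of detected "skin" pixels
h = np.array([40.0, 41.0, 42.0])   # hue angles (degrees)
c2, h2 = correct_lch(c, h, c_target=36.0, h_target=45.0)
```

Because every pixel receives the same offset, pixel-to-pixel differences inside the region are exactly preserved, which is what suppresses contouring at the boundary between corrected and uncorrected areas.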

  7. An improved level set method for brain MR images segmentation and bias correction.

    PubMed

    Chen, Yunjie; Zhang, Jianwei; Macione, Jim

    2009-10-01

    Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function have a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term in the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization, thereby allowing fully automated applications. The proposed method has been used for images of various modalities with promising results.

  8. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

    The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k = 2).

  9. InSAR Unwrapping Error Correction Based on Quasi-Accurate Detection of Gross Errors (QUAD)

    NASA Astrophysics Data System (ADS)

    Kang, Y.; Zhao, C. Y.; Zhang, Q.; Yang, C. S.

    2018-04-01

    Unwrapping errors are common in InSAR processing and can seriously degrade the accuracy of monitoring results. Based on quasi-accurate detection of gross errors (QUAD), a gross-error correction method, a method for the automatic correction of unwrapping errors is established in this paper. This method identifies and corrects the unwrapping errors by establishing a functional model between the true errors and the interferograms. The basic principle and processing steps are presented. The method is then compared with the L1-norm method using simulated data. Results show that both methods can effectively suppress the unwrapping error when the ratio of unwrapping errors is low, and that the two methods can complement each other when the ratio of unwrapping errors is relatively high. Finally, real SAR data are used to test the phase unwrapping error correction. Results show that the new method can correct phase unwrapping errors successfully in practical applications.

  10. Revised Radiometric Calibration Technique for LANDSAT-4 Thematic Mapper Data by the Canada Centre for Remote Sensing

    NASA Technical Reports Server (NTRS)

    Murphy, J.; Butlin, T.; Duff, P.; Fitzgerald, A.

    1984-01-01

    A technique for the radiometric correction of LANDSAT-4 Thematic Mapper data was proposed by the Canada Centre for Remote Sensing. Subsequent detailed observations of raw image data, raw radiometric calibration data and background measurements extracted from the raw data stream on High Density Tape highlighted major shortcomings in the proposed method which, if left uncorrected, can cause severe radiometric striping in the output product. Results are presented which correlate measurements of the DC background with variations in both image data background and calibration samples. The effect on both raw data and on data corrected using the earlier proposed technique is explained, and the correction required for these factors as a function of individual scan line number for each detector is described. It is shown how the revised technique can be incorporated into an operational environment.

  11. Talking about health: correction employees' assessments of obstacles to healthy living.

    PubMed

    Morse, Tim; Dussetschleger, Jeffrey; Warren, Nicholas; Cherniack, Martin

    2011-09-01

    Describe health risks/obstacles to health among correctional employees. A mixed-methods approach combined results from four focus groups, 10 interviews, 335 surveys, and 197 physical assessments. Obesity levels were higher than national averages (40.7% overweight and 43.3% obese), with higher levels associated with job tenure, male gender, and working off-shift. Despite widespread concern about the lack of fitness, leisure exercise was higher than national norms. Respondents had higher levels of hypertension than national norms, with 31% of men and 25.8% of women hypertensive compared with 17.1% and 15.1% for national norms. Stress levels were elevated. Officers related their stress to concerns about security, administrative requirements, and work/family imbalance. High stress levels are reflected in elevated levels of hypertension. Correctional employees are at high risk for chronic disease, and environmental changes are needed to reduce risk factors. ©2011 The American College of Occupational and Environmental Medicine

  12. Higgs boson decay into b-quarks at NNLO accuracy

    NASA Astrophysics Data System (ADS)

    Del Duca, Vittorio; Duhr, Claude; Somogyi, Gábor; Tramontano, Francesco; Trócsányi, Zoltán

    2015-04-01

    We compute the fully differential decay rate of the Standard Model Higgs boson into b-quarks at next-to-next-to-leading order (NNLO) accuracy in αs. We employ a general subtraction scheme developed for the calculation of higher order perturbative corrections to QCD jet cross sections, which is based on the universal infrared factorization properties of QCD squared matrix elements. We show that the subtractions render the various contributions to the NNLO correction finite. In particular, we demonstrate analytically that the sum of integrated subtraction terms correctly reproduces the infrared poles of the two-loop double virtual contribution to this process. We present illustrative differential distributions obtained by implementing the method in a parton level Monte Carlo program. The basic ingredients of our subtraction scheme, used here for the first time to compute a physical observable, are universal and can be employed for the computation of more involved processes.

  13. Maximizing the spatial representativeness of NO2 monitoring data using a combination of local wind-based sectoral division and seasonal and diurnal correction factors.

    PubMed

    Donnelly, Aoife; Naughton, Owen; Misstear, Bruce; Broderick, Brian

    2016-10-14

    This article describes a new methodology for increasing the spatial representativeness of individual monitoring sites. Air pollution levels at a given point are influenced by emission sources in the immediate vicinity. Since emission sources are rarely uniformly distributed around a site, concentration levels will inevitably be most affected by the sources in the prevailing upwind direction. The methodology provides a means of capturing this effect and providing additional information regarding source/pollution relationships. The methodology allows for the division of the air quality data from a given monitoring site into a number of sectors or wedges based on wind direction and estimation of annual mean values for each sector, thus optimising the information that can be obtained from a single monitoring station. The method corrects short-term data for diurnal and seasonal variations in concentrations (which can produce uneven weighting of data within each sector) and for uneven frequency of wind directions. Significant improvements in correlations between the air quality data and the spatial air quality indicators were obtained after application of the correction factors. This suggests the application of these techniques would be of significant benefit in land-use regression modelling studies. Furthermore, the method was found to be very useful for estimating long-term mean values and wind direction sector values using only short-term monitoring data. The methods presented in this article can result in cost savings through minimising the number of monitoring sites required for air quality studies while also capturing a greater degree of variability in spatial characteristics. In this way, more reliable, but also more expensive, monitoring techniques can be used in preference to a higher number of low-cost but less reliable techniques.
The methods described in this article have applications in local air quality management, source receptor analysis, land-use regression mapping and modelling and population exposure studies.
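    The sectoral division at the heart of this method is straightforward to sketch: bin each observation by wind direction into equal-width sectors and average per sector. A minimal illustration (sector count, data and the handling of empty sectors are assumptions, and the seasonal/diurnal correction factors are omitted):

```python
# Hedged sketch of wind-based sectoral division of monitoring data.
import numpy as np

def sector_means(conc, wind_dir, n_sectors=8):
    """Mean concentration per wind-direction sector (NaN for empty sectors)."""
    edges = np.linspace(0.0, 360.0, n_sectors + 1)
    idx = np.digitize(wind_dir % 360.0, edges) - 1   # 0-based sector index
    return np.array([conc[idx == s].mean() if (idx == s).any() else np.nan
                     for s in range(n_sectors)])

conc = np.array([10.0, 12.0, 30.0, 32.0])   # NO2 observations
wd = np.array([10.0, 20.0, 190.0, 200.0])   # wind directions (degrees)
means = sector_means(conc, wd)              # only sectors 0 and 4 occupied
```

Each occupied sector yields its own annual-mean estimate, effectively turning one monitoring station into several directional "virtual sites" for land-use regression.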

  14. Assessment of bias in US waterfowl harvest estimates

    USGS Publications Warehouse

    Padding, Paul I.; Royle, J. Andrew

    2012-01-01

    Context. North American waterfowl managers have long suspected that waterfowl harvest estimates derived from national harvest surveys in the USA are biased high. Survey bias can be evaluated by comparing survey results with like estimates from independent sources. Aims. We used band-recovery data to assess the magnitude of apparent bias in duck and goose harvest estimates, using mallards (Anas platyrhynchos) and Canada geese (Branta canadensis) as representatives of ducks and geese, respectively. Methods. We compared the number of reported mallard and Canada goose band recoveries, adjusted for band reporting rates, with the estimated harvests of banded mallards and Canada geese from the national harvest surveys. We used the results of those comparisons to develop correction factors that can be applied to annual duck and goose harvest estimates of the national harvest survey. Key results. National harvest survey estimates of banded mallards harvested annually averaged 1.37 times greater than those calculated from band-recovery data, whereas Canada goose harvest estimates averaged 1.50 or 1.63 times greater than comparable band-recovery estimates, depending on the harvest survey methodology used. Conclusions. Duck harvest estimates produced by the national harvest survey from 1971 to 2010 should be reduced by a factor of 0.73 (95% CI = 0.71–0.75) to correct for apparent bias. Survey-specific correction factors of 0.67 (95% CI = 0.65–0.69) and 0.61 (95% CI = 0.59–0.64) should be applied to the goose harvest estimates for 1971–2001 (duck stamp-based survey) and 1999–2010 (HIP-based survey), respectively. Implications. Although this apparent bias likely has not influenced waterfowl harvest management policy in the USA, it does have negative impacts on some applications of harvest estimates, such as indirect estimation of population size. For those types of analyses, we recommend applying the appropriate correction factor to harvest estimates.
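    The correction factors here are simply the reciprocals of the survey-to-band-recovery ratios. A one-line numeric check of the mallard figure, using the 1.37 ratio from the abstract (the example harvest total is made up):

```python
# Correction factor = 1 / (survey estimate / band-recovery estimate).
survey_over_band = 1.37                       # mallard ratio from the abstract
correction = 1.0 / survey_over_band           # ~0.73, matching the paper
corrected_harvest = 1_000_000 * correction    # hypothetical survey estimate
```

Applying the factor deflates a raw survey total toward the band-recovery-consistent value, which matters for downstream uses such as indirect population-size estimation.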

  15. A multi-institutional study of independent calculation verification in inhomogeneous media using a simple and effective method of heterogeneity correction integrated with the Clarkson method.

    PubMed

    Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori

    2018-05-21

    In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
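    The L-Clarkson idea rests on computing a radiological path length (RPL) along each pie-sector ray on a 2D CT-derived image. A much-simplified sketch (the fixed-step ray marching, nearest-neighbor sampling and phantom are assumptions; the real method traces to the MLC/jaw edge per sector):

```python
# Hedged sketch: accumulate relative density * step length along one ray.
import numpy as np

def rpl(density, start, direction, step=1.0, n_steps=50):
    """Radiological path length along a ray; out-of-grid samples contribute 0."""
    pos = np.asarray(start, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    total = 0.0
    for _ in range(n_steps):
        i, j = int(round(pos[0])), int(round(pos[1]))
        if 0 <= i < density.shape[0] and 0 <= j < density.shape[1]:
            total += density[i, j] * step   # water-equivalent thickness
        pos = pos + d * step
    return total

water = np.ones((10, 10))   # uniform unit-density phantom
depth = rpl(water, start=(0, 5), direction=(1, 0), step=1.0, n_steps=10)
```

The resulting RPL replaces the geometric depth when looking up the tissue maximum ratio, which is how the lateral heterogeneity enters the Clarkson sector sum.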

  16. Implementation of a MFAC based position sensorless drive for high speed BLDC motors with nonideal back EMF.

    PubMed

    Li, Haitao; Ning, Xin; Li, Wenzhuo

    2017-03-01

    In order to improve the reliability and reduce the power consumption of a high speed BLDC motor system, this paper presents a model free adaptive control (MFAC) based position sensorless drive with only a dc-link current sensor. The initial commutation points are obtained by detecting the zero-crossing points of the phase back EMF and then delaying 30 electrical degrees. According to the commutation error caused by the low pass filter (LPF) and other factors, the relationship between the commutation error angle and the dc-link current is analyzed, a corresponding MFAC based control method is proposed, and the commutation error can be corrected by the controller in real time. Both the simulation and experimental results show that the proposed correction method can achieve the ideal commutation effect within the entire operating speed range. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents a numerical investigation performed to calculate the correction factor for Pitot tubes. Purely viscous non-Newtonian fluids described by the power-law constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and energy friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds numbers.
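    The role of the correction factor is easiest to see against the plain Bernoulli estimate. A hedged sketch (the paper correlates the factor with power-law index, Reynolds number and tube spacing via a neural network; the constant value used below is purely illustrative):

```python
# v = C * sqrt(2 * delta_p / rho); C = 1 recovers the uncorrected Bernoulli
# velocity, and C != 1 absorbs viscous/low-Reynolds effects.
import math

def pitot_velocity(delta_p, rho, c=1.0):
    """Pitot velocity estimate with an empirical correction factor C."""
    return c * math.sqrt(2.0 * delta_p / rho)

v_bernoulli = pitot_velocity(50.0, 1000.0)            # Pa, kg/m^3
v_corrected = pitot_velocity(50.0, 1000.0, c=0.95)    # hypothetical low-Re C
```

At low Reynolds numbers the friction-loss term makes the uncorrected estimate err, which is precisely what the fitted C compensates for.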

  18. Research on evaluation methods for water regulation ability of dams in the Huai River Basin

    NASA Astrophysics Data System (ADS)

    Shan, G. H.; Lv, S. F.; Ma, K.

    2016-08-01

    Water environment protection is a global and urgent problem that requires correct and precise evaluation. Evaluation methods have been studied for many years; however, there is a lack of research on methods of assessing the water regulation ability of dams. Evaluating this ability has become a practical and significant research direction because of the global water crisis, compounded by the lack of effective ways to manage a dam's regulation ability. This paper first constructs seven evaluation factors and then develops two evaluation approaches to implement the factors according to the features of the problem. Dams of the Yin Shang ecological control section in the Huai He River basin are selected as an example to demonstrate the method. The results show that the evaluation approaches can produce better and more practical suggestions for dam managers.

  19. Implementation of the SPH Procedure Within the MOOSE Finite Element Framework

    NASA Astrophysics Data System (ADS)

    Laurier, Alexandre

    The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON for its SPH homogenization, which was not flexible enough for its needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut and true Selengut normalizations. Another aspect of this research was to derive the SPH corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantages over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton Krylov (PJFNK) method to allow for a direct solution of the non-linear problem. This novel implementation showed a decrease in calculation time by a factor reaching 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion: ε < 10⁻⁸. The use of the PJFNK SPH procedure also allows convergence to be reached in problems containing important reflector regions and void boundary conditions, something that the traditional SPH method has never been able to achieve.
At times when the PJFNK method cannot reach convergence to the SPH problem, a hybrid method is used where by the traditional SPH iteration forces the initial condition to be within the radius of convergence of the Newton method. This new method was tested on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions with great success. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark that included 15 assemblies and the water reflector to obtain very good results. This opens up the possibility to apply the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
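    The contrast the abstract draws between Picard fixed-point iteration and a Newton-type direct solve can be sketched on a toy problem. This is only an illustrative analogue: the function G below is an arbitrary scalar stand-in, not the SPH equations, and the finite-difference derivative is a crude one-dimensional analogue of a Jacobian-free Newton-Krylov solve.

```python
import math

def picard(G, x0, tol=1e-10, max_iter=1000):
    """Classic fixed-point (Picard) iteration: x_{k+1} = G(x_k)."""
    x = x0
    for k in range(max_iter):
        x_new = G(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

def newton(G, x0, tol=1e-10, max_iter=100, h=1e-7):
    """Newton's method on the residual F(x) = G(x) - x, using a
    finite-difference derivative instead of an analytic Jacobian."""
    x = x0
    for k in range(max_iter):
        F = G(x) - x
        if abs(F) < tol:
            return x, k + 1
        dF = (G(x + h) - (x + h) - F) / h  # finite-difference F'(x)
        x = x - F / dF
    raise RuntimeError("Newton iteration did not converge")

# A slowly converging fixed-point problem: x = cos(x).
G = math.cos
x_p, n_p = picard(G, 1.0)
x_n, n_n = newton(G, 1.0)
```

On this example the Picard iteration converges linearly (dozens of iterations), while Newton reaches the same fixed point in a handful, which mirrors the speed-up the thesis reports for the PJFNK solve of the non-linear SPH problem.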

  20. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in‐vivo dosimetry. Despite their popularity, an appropriate implementation of an in‐vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, the dose at the reference depth of maximum dose (dmax) for the 10×10 cm2 field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field sizes, target‐to‐surface distances (TSD), and accessories (such as wedges and compensator filters). In some clinical practices, however, no correction factor is used. In this case, a diode‐dosimeter‐based in‐vivo dosimetry program may not serve its purpose effectively, namely, to provide an overall check of the dosimetry procedure. In this paper, we provide a formula to relate the diode readout to the entrance dose. Correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n‐type) and the newly available QED (p‐type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage of silicon diodes. Radiation damage has been shown to be more serious for n‐type than for p‐type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of corrections for QED (p‐type) diodes are smaller than those for ISORAD detectors. PACS number(s): 87.66.–a, 87.52.–g PMID:11674824
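    The correction-factor approach the abstract describes can be sketched as a simple multiplicative model. The function and factor names below, and the numeric values in the example, are illustrative assumptions, not the paper's formula or calibration data.

```python
def entrance_dose(diode_reading, f_cal, c_tsd=1.0, c_field=1.0, c_wedge=1.0):
    """Infer entrance dose (dose at the reference depth of maximum
    dose, dmax, for a 10x10 cm^2 field) from a diode readout.

    f_cal   -- calibration factor measured under reference conditions
    c_tsd   -- correction for non-reference target-to-surface distance
    c_field -- correction for non-reference field size
    c_wedge -- correction when a wedge or compensator is in the beam
    """
    return diode_reading * f_cal * c_tsd * c_field * c_wedge

# Under reference conditions every correction factor is unity.
ref_dose = entrance_dose(100.0, f_cal=1.0)

# Off-reference example with illustrative (made-up) correction factors.
dose = entrance_dose(98.2, f_cal=1.02, c_tsd=1.008, c_field=0.995, c_wedge=1.012)
```

Skipping the correction factors amounts to setting them all to 1.0, which is exactly the clinical shortcut the abstract cautions against.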
