Sample records for size correction factor

  1. Size Distribution of Sea-Salt Emissions as a Function of Relative Humidity

    NASA Astrophysics Data System (ADS)

    Zhang, K. M.; Knipping, E. M.; Wexler, A. S.; Bhave, P. V.; Tonnesen, G. S.

    2004-12-01

    Here we introduce a simple method for correcting sea-salt particle-size distributions as a function of relative humidity. Distinct from previous approaches, our derivation uses particle size at formation as the reference state rather than dry particle size. The correction factors, corresponding to the size at formation and the size at 80% RH, are given as polynomial functions of local relative humidity, which are straightforward to implement. Without major compromises, the correction factors are thermodynamically accurate and can be applied between 0.45 and 0.99 RH. Since the thermodynamic properties of sea-salt electrolytes are weakly dependent on ambient temperature, these factors can be regarded as temperature independent. The correction factor with respect to the size at 80% RH is in excellent agreement with those from Fitzgerald's and Gerber's growth equations, while the correction factor with respect to the size at formation has the advantage of being independent of dry size and relative humidity at formation. The resultant sea-salt emissions can be used directly in atmospheric model simulations at urban, regional and global scales without further correction. Application of this method to several common open-ocean and surf-zone sea-salt-particle source functions is described.
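
    A minimal Python sketch of how such an RH-dependent size correction could be applied. The polynomial coefficients below are placeholders, not the published values; only the 0.45-0.99 RH validity range and the normalization to the 80% RH size are taken from the abstract.

    ```python
    import numpy as np

    # Hypothetical polynomial coefficients (highest power first) -- placeholders,
    # NOT the coefficients published by Zhang et al. (2004).
    COEFFS = [2.0, -3.0, 2.5, 0.1]

    def size_correction_factor(rh, coeffs=COEFFS):
        """Multiplicative size-correction factor f(RH), valid for 0.45 <= RH <= 0.99."""
        rh = np.asarray(rh, dtype=float)
        if np.any((rh < 0.45) | (rh > 0.99)):
            raise ValueError("correction is only defined for 0.45 <= RH <= 0.99")
        return np.polyval(coeffs, rh)

    # Example: rescale a particle radius referenced to 80% RH to the local RH.
    r80 = 1.0        # radius at 80% RH, micrometres
    rh_local = 0.65  # local relative humidity
    # Dividing by f(0.80) normalizes the placeholder polynomial so f(0.80) = 1.
    r_local = r80 * size_correction_factor(rh_local) / size_correction_factor(0.80)
    print(r_local)
    ```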

  2. SU-E-T-469: A Practical Approach for the Determination of Small Field Output Factors Using Published Monte Carlo Derived Correction Factors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calderon, E; Siergiej, D

    2014-06-01

    Purpose: Output factor determination for small fields (less than 20 mm) presents significant challenges due to ion chamber volume averaging and diode over-response. Measured output factor values between detectors are known to have large deviations as field sizes are decreased. No set standard to resolve this difference in measurement exists. We observed differences between measured output factors of up to 14% using two different detectors. Published Monte Carlo derived correction factors were used to address this challenge and decrease the output factor deviation between detectors. Methods: Output factors for Elekta's linac-based stereotactic cone system were measured using the EDGE detector (Sun Nuclear) and the A16 ion chamber (Standard Imaging). Measurement conditions were 100 cm SSD (source-to-surface distance) and 1.5 cm depth. Output factors were first normalized to a 10.4 cm × 10.4 cm field size using a daisy-chaining technique to minimize the dependence of detector response on field size. An equation expressing the published Monte Carlo correction factors as a function of field size for each detector was derived. The measured output factors were then multiplied by the calculated correction factors. EBT3 Gafchromic film dosimetry was used to independently validate the corrected output factors. Results: Without correction, the deviation in output factors between the EDGE and A16 detectors ranged from 1.3 to 14.8%, depending on cone size. After applying the calculated correction factors, this deviation fell to 0 to 3.4%. Output factors determined with film agree within 3.5% of the corrected output factors. Conclusion: We present a practical approach to applying published Monte Carlo derived correction factors to measured small field output factors for the EDGE and A16 detectors. Using this method, we were able to decrease the maximum deviation between the two detectors from 14.8% to 3.4%.
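
    A short Python sketch of the daisy-chain normalization and correction-factor multiplication described above. The detector readings and the quadratic correction-factor fit are illustrative placeholders, not values from this record.

    ```python
    import numpy as np

    def daisy_chain_output_factor(m_cone, m_intermediate_small_det,
                                  m_intermediate_ref_det, m_ref):
        """Chain a small-field reading to the 10.4 x 10.4 cm reference field
        through an intermediate field measured with both detectors."""
        return (m_cone / m_intermediate_small_det) * (m_intermediate_ref_det / m_ref)

    def mc_correction(field_mm, coeffs=(1.0e-4, -6.0e-3, 1.08)):
        """Placeholder quadratic fit of a published correction factor vs cone size (mm)."""
        return np.polyval(coeffs, field_mm)

    # Hypothetical readings for a 10 mm cone (arbitrary units).
    of_raw = daisy_chain_output_factor(m_cone=0.62, m_intermediate_small_det=0.95,
                                       m_intermediate_ref_det=0.97, m_ref=1.00)
    of_corrected = of_raw * mc_correction(10.0)
    print(of_raw, of_corrected)
    ```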

  3. Monte Carlo calculated correction factors for diodes and ion chambers in small photon fields.

    PubMed

    Czarnecki, D; Zink, K

    2013-04-21

    The application of small photon fields in modern radiotherapy requires the determination of total scatter factors S_cp or field factors Ω_{Qclin,Qmsr}^{fclin,fmsr} with high precision. Both quantities require the knowledge of the field-size-dependent and detector-dependent correction factor k_{Qclin,Qmsr}^{fclin,fmsr}. The aim of this study is the determination of the correction factor k_{Qclin,Qmsr}^{fclin,fmsr} for different types of detectors in a clinical 6 MV photon beam of a Siemens KD linear accelerator. The EGSnrc Monte Carlo code was used to calculate the dose to water and the dose to different detectors to determine the field factor as well as the mentioned correction factor for different small square field sizes. Besides this, the mean water to air stopping power ratio as well as the ratio of the mean energy absorption coefficients for the relevant materials was calculated for different small field sizes. As the beam source, a Monte Carlo based model of a Siemens KD linear accelerator was used. The results show that in the case of ionization chambers the detector volume has the largest impact on the correction factor k_{Qclin,Qmsr}^{fclin,fmsr}; this perturbation may contribute up to 50% to the correction factor. Field-dependent changes in stopping-power ratios are negligible. The magnitude of k_{Qclin,Qmsr}^{fclin,fmsr} is of the order of 1.2 at a field size of 1 × 1 cm² for the large volume ion chamber PTW 31010 and is still in the range of 1.05-1.07 for the PinPoint chambers PTW 31014 and PTW 31016. For the diode detectors included in this study (PTW 60016, PTW 60017), the correction factor deviates no more than 2% from unity in field sizes between 10 × 10 and 1 × 1 cm², but below this field size there is a steep decrease of k_{Qclin,Qmsr}^{fclin,fmsr} below unity, i.e. a strong overestimation of dose. Besides the field size and detector dependence, the results reveal a clear dependence of the correction factor on the accelerator geometry for field sizes below 1 × 1 cm², i.e. on the beam spot size of the primary electrons hitting the target. This effect is especially pronounced for the ionization chambers. In conclusion, comparing all detectors, the unshielded diode PTW 60017 is highly recommended for small field dosimetry, since its correction factor k_{Qclin,Qmsr}^{fclin,fmsr} is closest to unity in small fields and mainly independent of the electron beam spot size.
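
    For reference, the correction factor discussed above converts the ratio of detector doses (or readings) into the field factor: it is the water-dose ratio divided by the detector-dose ratio between the clinical and machine-specific reference fields. A minimal Python sketch with hypothetical Monte Carlo doses:

    ```python
    def field_factor(d_w_clin, d_w_msr):
        """Field factor: dose to water in the clinical field over dose to water
        in the machine-specific reference field."""
        return d_w_clin / d_w_msr

    def correction_factor(d_w_clin, d_w_msr, d_det_clin, d_det_msr):
        """k_{Qclin,Qmsr}^{fclin,fmsr}: converts the detector dose (or reading)
        ratio into the field factor."""
        return (d_w_clin / d_w_msr) / (d_det_clin / d_det_msr)

    # Hypothetical Monte Carlo doses (Gy per source particle) for a 1 x 1 cm^2 field.
    k = correction_factor(d_w_clin=2.1e-16, d_w_msr=3.0e-16,
                          d_det_clin=1.8e-16, d_det_msr=3.1e-16)
    print(k)  # ~1.2, the order of magnitude quoted for a large-volume chamber
    ```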

  4. 75 FR 48815 - Medicaid Program and Children's Health Insurance Program (CHIP); Revisions to the Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...
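
    The notice only alludes to the statistical formula; one common form of the finite population correction applied to a sample size is sketched below in Python, as an assumption rather than a quotation of the rule text.

    ```python
    import math

    def fpc_adjusted_sample_size(n0, population):
        """Reduce an infinite-population sample size n0 for a finite population N
        using a standard finite population correction: n = n0 / (1 + (n0 - 1) / N)."""
        return math.ceil(n0 / (1 + (n0 - 1) / population))

    # Example: a 504-case sample drawn from a small program universe of 2,000 cases.
    print(fpc_adjusted_sample_size(504, 2000))  # -> 403
    ```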

  5. SEMICONDUCTOR TECHNOLOGY: An efficient dose-compensation method for proximity effect correction

    NASA Astrophysics Data System (ADS)

    Ying, Wang; Weihua, Han; Xiang, Yang; Renping, Zhang; Yang, Zhang; Fuhua, Yang

    2010-08-01

    A novel simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that the compensated dose factor is only affected by the nearest neighbors, for simplicity. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in photonic crystal structures was clearly improved.
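
    A toy Python sketch of the two stated assumptions (a linear dose-diameter relation, and compensation from nearest neighbours only). The slope, intercept and neighbour contribution are invented for illustration and are not values from this record.

    ```python
    def dose_factor_for_diameter(target_nm, slope=250.0, intercept=100.0):
        """Invert the assumed linear relation d = intercept + slope * dose_factor."""
        return (target_nm - intercept) / slope

    def compensated_dose_factor(target_nm, n_neighbours, neighbour_fraction=0.04):
        """Reduce the nominal dose factor to offset the (assumed equal, additive)
        proximity dose received from each nearest-neighbour hole."""
        nominal = dose_factor_for_diameter(target_nm)
        return nominal / (1.0 + n_neighbours * neighbour_fraction)

    # Interior hole of a hexagonal lattice (6 nearest neighbours) vs an edge hole.
    print(compensated_dose_factor(200.0, n_neighbours=6))
    print(compensated_dose_factor(200.0, n_neighbours=4))
    ```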

  6. Entrance dose measurements for in‐vivo diode dosimetry: Comparison of correction factors for two types of commercial silicon diode detectors

    PubMed Central

    Zhu, X. R.

    2000-01-01

    Silicon diode dosimeters have been used routinely for in‐vivo dosimetry. Despite their popularity, an appropriate implementation of an in‐vivo dosimetry program using diode detectors remains a challenge for clinical physicists. One common approach is to relate the diode readout to the entrance dose, that is, the dose at the reference depth of maximum dose such as dmax for the 10×10 cm² field. Various correction factors are needed in order to properly infer the entrance dose from the diode readout, depending on field sizes, target‐to‐surface distances (TSD), and accessories (such as wedges and compensating filters). In some clinical practices, however, no correction factor is used. In this case, a diode‐dosimeter‐based in‐vivo dosimetry program may not serve its purpose effectively, that is, to provide an overall check of the dosimetry procedure. In this paper, we provide a formula to relate the diode readout to the entrance dose. Correction factors for TSD, field size, and wedges used in this formula are also clearly defined. Two types of commercial diode detectors, ISORAD (n‐type) and the newly available QED (p‐type) (Sun Nuclear Corporation), are studied. We compared correction factors for TSDs, field sizes, and wedges. Our results are consistent with the theory of radiation damage of silicon diodes. Radiation damage has been shown to be more serious for n‐type than for p‐type detectors. In general, both types of diode dosimeters require correction factors depending on beam energy, TSD, field size, and wedge. The magnitudes of corrections for QED (p‐type) diodes are smaller than those for ISORAD detectors. PACS number(s): 87.66.–a, 87.52.–g PMID:11674824
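
    A generic multiplicative form of the entrance-dose relation discussed above, sketched in Python with hypothetical calibration and correction-factor values; the paper's own formula and measured tables should be used in practice.

    ```python
    # Hypothetical correction-factor tables (illustrative values only).
    CORRECTIONS = {
        "tsd":   {100.0: 1.000, 110.0: 1.012, 120.0: 1.025},   # by TSD in cm
        "field": {5.0: 0.985, 10.0: 1.000, 20.0: 1.015},       # by field side in cm
        "wedge": {"open": 1.000, "w30": 1.030},                # by accessory
    }

    def entrance_dose(diode_reading, f_cal, tsd_cm, field_cm, wedge="open"):
        """D_entrance = M_diode * F_cal * C_TSD * C_field * C_wedge (generic form)."""
        return (diode_reading * f_cal
                * CORRECTIONS["tsd"][tsd_cm]
                * CORRECTIONS["field"][field_cm]
                * CORRECTIONS["wedge"][wedge])

    print(entrance_dose(diode_reading=152.3, f_cal=0.0131,
                        tsd_cm=110.0, field_cm=10.0, wedge="w30"))
    ```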

  7. SU-C-304-07: Are Small Field Detector Correction Factors Strongly Dependent On Machine-Specific Characteristics?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mathew, D; Tanny, S; Parsai, E

    2015-06-15

    Purpose: The current small field dosimetry formalism utilizes quality correction factors to compensate for the difference in detector response relative to dose deposited in water. The correction factors are defined on a machine-specific basis for each beam quality and detector combination. Some research has suggested that the correction factors may only be weakly dependent on machine-to-machine variations, allowing for determinations of class-specific correction factors for various accelerator models. This research examines the differences in small field correction factors for three detectors across two Varian TrueBeam accelerators to determine the correction factor dependence on machine-specific characteristics. Methods: Output factors were measured on two Varian TrueBeam accelerators for equivalently tuned 6 MV and 6 FFF beams. Measurements were obtained using a commercial plastic scintillation detector (PSD), two ion chambers, and a diode detector. Measurements were made at a depth of 10 cm with an SSD of 100 cm for jaw-defined field sizes ranging from 3×3 cm² to 0.6×0.6 cm², normalized to values at 5×5 cm². Correction factors for each field on each machine were calculated as the ratio of the detector response to the PSD response. Percent changes of the correction factors for the chambers are presented relative to the primary machine. Results: The Exradin A26 demonstrates a difference of 9% for 6×6 mm² fields in both the 6 FFF and 6 MV beams. The A16 chamber demonstrates 5% and 3% differences at the same field size in the 6 FFF and 6 MV fields, respectively. The Edge diode exhibits less than 1.5% difference across both evaluated energies. Field sizes larger than 1.4×1.4 cm² demonstrated less than 1% difference for all detectors. Conclusion: Preliminary results suggest that class-specific corrections may not be appropriate for micro-ionization chambers. For diode systems, the correction factors were substantially similar and may be useful for class-specific reference conditions.

  8. Can small field diode correction factors be applied universally?

    PubMed

    Liu, Paul Z Y; Suchowerska, Natalka; McKenzie, David R

    2014-09-01

    Diode detectors are commonly used in dosimetry, but have been reported to over-respond in small fields. Diode correction factors have been reported in the literature. The purpose of this study is to determine whether correction factors for a given diode type can be universally applied over a range of irradiation conditions including beams of different qualities. A mathematical relation for diode over-response as a function of field size was developed using previously published experimental data in which diodes were compared to an air core scintillation dosimeter. Correction factors calculated from the mathematical relation were then compared to those available in the literature. The mathematical relation established between diode over-response and field size was found to predict the measured diode correction factors for fields between 5 and 30 mm in width. The average deviation between measured and predicted over-response was 0.32% for IBA SFD and PTW Type E diodes. Diode over-response was found not to be strongly dependent on the type of linac, the method of collimation or the measurement depth. The mathematical relation was found to agree with published diode correction factors derived from Monte Carlo simulations and measurements, indicating that correction factors are robust in their transportability between different radiation beams. Copyright © 2014. Published by Elsevier Ireland Ltd.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, J C; Karmanos Cancer Institute McLaren-Macomb, Clinton Township, MI; Knill, C

    Purpose: To determine small field correction factors for PTW’s microDiamond detector in Elekta’s Gamma Knife Model-C unit. These factors allow the microDiamond to be used in QA measurements of output factors in the Gamma Knife Model-C; additionally, the results also contribute to the discussion on the water equivalence of the relatively-new microDiamond detector and its overall effectiveness in small field applications. Methods: The small field correction factors were calculated as k correction factors according to the Alfonso formalism. An MC model of the Gamma Knife and microDiamond was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. Validation of the model was accomplished by simulating field output factors and measurement ratios for an available ABS plastic phantom and then comparing simulated results to film measurements, detector measurements, and treatment planning system (TPS) data. Once validated, the final k factors were determined by applying the model to a more waterlike solid water phantom. Results: During validation, all MC methods agreed with experiment within the stated uncertainties: MC determined field output factors agreed within 0.6% of the TPS and 1.4% of film; and MC simulated measurement ratios matched physically measured ratios within 1%. The final k correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4%±1.7% for all the helmet sizes except the 4 mm; the 4 mm helmet size over-responded by 3.2%±1.7%, resulting in a k factor of 0.969. Conclusion: Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond requires little to no correction except for the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes. The authors would like to thank PTW (Freiburg, Germany) for providing the PTW microDiamond detector for this research.

  10. Finite-nuclear-size contribution to the g factor of a bound electron: Higher-order effects

    NASA Astrophysics Data System (ADS)

    Karshenboim, Savely G.; Ivanov, Vladimir G.

    2018-02-01

    A precision comparison of theory and experiments on the g factor of an electron bound in a hydrogenlike ion with a spinless nucleus requires a detailed account of finite-nuclear-size contributions. While the relativistic corrections to the leading finite-size contribution are known, the higher-order effects need an additional consideration. Two results are presented in the paper. One is on the anomalous-magnetic-moment correction to the finite-size effects and the other is due to higher-order effects in Zα m R_N. We also present here a method to relate the contributions to the g factor of a bound electron in a hydrogenlike atom to its energy within a nonrelativistic approach.

  11. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist

    NASA Astrophysics Data System (ADS)

    Reveil, Mardochee; Sorg, Victoria C.; Cheng, Emily R.; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O.

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.
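
    As background to the geometric factors discussed above, the classic thin-sheet four-point probe expression with a lumped correction factor F can be sketched in Python; the narrow-sample value of F used below is invented for illustration and is not one of the tabulated factors from this record.

    ```python
    import math

    def sheet_resistance(voltage, current, correction=1.0):
        """Four-point probe sheet resistance of a thin sample:
        R_s = (pi / ln 2) * (V / I) * F, where F lumps together the geometric
        correction factors (finite lateral size, probe placement, thickness)."""
        return (math.pi / math.log(2.0)) * (voltage / current) * correction

    # Ideal infinite thin sheet (F = 1) versus a narrow sample whose finite-size
    # correction factor (placeholder value) changes the apparent resistance.
    print(sheet_resistance(1.2e-3, 1.0e-3))                     # ohms per square
    print(sheet_resistance(1.2e-3, 1.0e-3, correction=0.82))
    ```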

  12. Finite element and analytical solutions for van der Pauw and four-point probe correction factors when multiple non-ideal measurement conditions coexist.

    PubMed

    Reveil, Mardochee; Sorg, Victoria C; Cheng, Emily R; Ezzyat, Taha; Clancy, Paulette; Thompson, Michael O

    2017-09-01

    This paper presents an extensive collection of calculated correction factors that account for the combined effects of a wide range of non-ideal conditions often encountered in realistic four-point probe and van der Pauw experiments. In this context, "non-ideal conditions" refer to conditions that deviate from the assumptions on sample and probe characteristics made in the development of these two techniques. We examine the combined effects of contact size and sample thickness on van der Pauw measurements. In the four-point probe configuration, we examine the combined effects of varying the sample's lateral dimensions, probe placement, and sample thickness. We derive an analytical expression to calculate correction factors that account, simultaneously, for finite sample size and asymmetric probe placement in four-point probe experiments. We provide experimental validation of the analytical solution via four-point probe measurements on a thin film rectangular sample with arbitrary probe placement. The finite sample size effect is very significant in four-point probe measurements (especially for a narrow sample) and asymmetric probe placement only worsens such effects. The contribution of conduction in multilayer samples is also studied and found to be substantial; hence, we provide a map of the necessary correction factors. This library of correction factors will enable the design of resistivity measurements with improved accuracy and reproducibility over a wide range of experimental conditions.

  13. Small field detector correction factors k_{Qclin,Qmsr}^{fclin,fmsr} for silicon-diode and diamond detectors with circular 6 MV fields derived using both empirical and numerical methods.

    PubMed

    O'Brien, D J; León-Vintró, L; McClean, B

    2016-01-01

    The use of radiotherapy fields smaller than 3 cm in diameter has resulted in the need for accurate detector correction factors for small field dosimetry. However, published factors do not always agree, and errors introduced by biased reference detectors, inaccurate Monte Carlo models, or experimental errors can be difficult to distinguish. The aim of this study was to provide a robust set of detector correction factors for a range of detectors using numerical, empirical, and semiempirical techniques under the same conditions and to examine the consistency of these factors between techniques. Empirical detector correction factors were derived based on small field output factor measurements for circular field sizes from 3.1 to 0.3 cm in diameter performed with a 6 MV beam. A PTW 60019 microDiamond detector was used as the reference dosimeter. Numerical detector correction factors for the same fields were derived based on calculations from a Geant4 Monte Carlo model of the detectors and the linac treatment head. Semiempirical detector correction factors were derived from the empirical output factors and the numerical dose-to-water calculations. The PTW 60019 microDiamond was found to over-respond at small field sizes, resulting in a bias in the empirical detector correction factors. The over-response was similar in magnitude to that of the unshielded diode. Good agreement was generally found between semiempirical and numerical detector correction factors except for the PTW 60016 Diode P, where the numerical values showed a greater over-response than the semiempirical values by 3.7% for a 1.1 cm diameter field and higher for smaller fields. Detector correction factors based solely on empirical measurement or numerical calculation are subject to potential bias. A semiempirical approach, combining both empirical and numerical data, provided the most reliable results.

  14. Performance of a Line Loss Correction Method for Gas Turbine Emission Measurements

    NASA Astrophysics Data System (ADS)

    Hagen, D. E.; Whitefield, P. D.; Lobo, P.

    2015-12-01

    International concern for the environmental impact of jet engine exhaust emissions in the atmosphere has led to increased attention on gas turbine engine emission testing. The Society of Automotive Engineers Aircraft Exhaust Emissions Measurement Committee (E-31) has published an Aerospace Information Report (AIR) 6241 detailing the sampling system for the measurement of non-volatile particulate matter from aircraft engines, and is developing an Aerospace Recommended Practice (ARP) for methodology and system specification. The Missouri University of Science and Technology (MST) Center of Excellence for Aerospace Particulate Emissions Reduction Research has led numerous jet engine exhaust sampling campaigns to characterize emissions at different locations in the expanding exhaust plume. Particle loss, due to various mechanisms, occurs in the sampling train that transports the exhaust sample from the engine exit plane to the measurement instruments. To account for the losses, both the size-dependent penetration functions and the size distribution of the emitted particles need to be known. However, in the proposed ARP, particle number and mass are measured, but size is not. Here we present a methodology to generate number and mass correction factors for line loss, without using direct size measurement. A lognormal size distribution is used to represent the exhaust aerosol at the engine exit plane and is defined by the measured number and mass at the downstream end of the sample train. The performance of this line loss correction is compared to corrections based on direct size measurements using data taken by MST during numerous engine test campaigns. The experimental uncertainty in these correction factors is estimated. Average differences between the line loss correction method and size-based corrections are found to be on the order of 10% for number and 2.5% for mass.
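
    A simplified Python sketch of the idea: assume a lognormal distribution, constrain its geometric mean diameter from the downstream number and mass, and form number and mass correction factors from a size-dependent penetration curve. The density, geometric standard deviation and penetration function are placeholders, and the iteration needed to refer the distribution back to the exit plane is omitted.

    ```python
    import numpy as np

    RHO = 1000.0      # assumed effective particle density, kg/m^3 (placeholder)
    SIGMA_G = 1.8     # assumed geometric standard deviation (placeholder)

    def penetration(d_nm):
        """Placeholder size-dependent line penetration (diffusion-loss-like)."""
        return 1.0 - np.exp(-d_nm / 15.0)

    def geometric_mean_diameter_nm(n_per_cm3, m_ug_per_m3):
        """Geometric mean diameter from the mass-to-number ratio via the
        Hatch-Choate relation for a lognormal distribution."""
        mass_per_particle = (m_ug_per_m3 * 1e-9) / (n_per_cm3 * 1e6)        # kg
        d_vol_equiv = (6.0 * mass_per_particle / (np.pi * RHO)) ** (1.0 / 3.0)
        return d_vol_equiv * np.exp(-1.5 * np.log(SIGMA_G) ** 2) * 1e9

    def correction_factors(n_per_cm3, m_ug_per_m3):
        """Return (number, mass) correction factors = emitted / downstream."""
        dgn = geometric_mean_diameter_nm(n_per_cm3, m_ug_per_m3)
        d = np.logspace(0.0, 3.0, 2000)                                      # nm
        dd = np.gradient(d)
        pdf = np.exp(-0.5 * (np.log(d / dgn) / np.log(SIGMA_G)) ** 2) / d
        eta = penetration(d)
        cf_number = np.sum(pdf * dd) / np.sum(pdf * eta * dd)
        cf_mass = np.sum(pdf * d**3 * dd) / np.sum(pdf * d**3 * eta * dd)
        return cf_number, cf_mass

    print(correction_factors(n_per_cm3=1.0e6, m_ug_per_m3=20.0))
    ```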

  15. Measuring and modeling the interaction among reward size, delay to reward, and satiation level on motivation in monkeys.

    PubMed

    Minamimoto, Takafumi; La Camera, Giancarlo; Richmond, Barry J

    2009-01-01

    Motivation is usually inferred from the likelihood or the intensity with which behavior is carried out. It is sensitive to external factors (e.g., the identity, amount, and timing of a rewarding outcome) and internal factors (e.g., hunger or thirst). We trained macaque monkeys to perform a nonchoice instrumental task (a sequential red-green color discrimination) while manipulating two external factors: reward size and delay-to-reward. We also inferred the state of one internal factor, level of satiation, by monitoring the accumulated reward. A visual cue indicated the forthcoming reward size and delay-to-reward in each trial. The fraction of trials completed correctly by the monkeys increased linearly with reward size and was hyperbolically discounted by delay-to-reward duration, relations that are similar to those found in free operant and choice tasks. The fraction of correct trials also decreased progressively as a function of the satiation level. Similar (albeit noisier) relations were obtained for reaction times. The combined effect of reward size, delay-to-reward, and satiation level on the proportion of correct trials is well described as a multiplication of the effects of the single factors when each factor is examined alone. These results provide a quantitative account of the interaction of external and internal factors on instrumental behavior, and allow us to extend the concept of subjective value of a rewarding outcome, usually confined to external factors, to account also for slow changes in the internal drive of the subject.
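
    A generic Python sketch of the multiplicative combination described above (a linear reward-size term, hyperbolic delay discounting, and a declining satiation term). The functional form of the satiation term and all parameter values are illustrative, not the fitted model from the paper.

    ```python
    def p_correct(reward_ml, delay_s, satiation, a=0.3, k=0.25, b=0.8):
        """Illustrative multiplicative model of the proportion of correct trials."""
        size_term = min(1.0, a * reward_ml)              # linear in reward size
        delay_term = 1.0 / (1.0 + k * delay_s)           # hyperbolic discounting
        satiation_term = max(0.0, 1.0 - b * satiation)   # declines as satiation rises
        return size_term * delay_term * satiation_term

    print(p_correct(reward_ml=2.0, delay_s=4.0, satiation=0.25))
    ```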

  16. Measuring and Modeling the Interaction Among Reward Size, Delay to Reward, and Satiation Level on Motivation in Monkeys

    PubMed Central

    Minamimoto, Takafumi; La Camera, Giancarlo; Richmond, Barry J.

    2009-01-01

    Motivation is usually inferred from the likelihood or the intensity with which behavior is carried out. It is sensitive to external factors (e.g., the identity, amount, and timing of a rewarding outcome) and internal factors (e.g., hunger or thirst). We trained macaque monkeys to perform a nonchoice instrumental task (a sequential red-green color discrimination) while manipulating two external factors: reward size and delay-to-reward. We also inferred the state of one internal factor, level of satiation, by monitoring the accumulated reward. A visual cue indicated the forthcoming reward size and delay-to-reward in each trial. The fraction of trials completed correctly by the monkeys increased linearly with reward size and was hyperbolically discounted by delay-to-reward duration, relations that are similar to those found in free operant and choice tasks. The fraction of correct trials also decreased progressively as a function of the satiation level. Similar (albeit noisier) relations were obtained for reaction times. The combined effect of reward size, delay-to-reward, and satiation level on the proportion of correct trials is well described as a multiplication of the effects of the single factors when each factor is examined alone. These results provide a quantitative account of the interaction of external and internal factors on instrumental behavior, and allow us to extend the concept of subjective value of a rewarding outcome, usually confined to external factors, to account also for slow changes in the internal drive of the subject. PMID:18987119

  17. Radiative-Transfer Modeling of Spectra of Densely Packed Particulate Media

    NASA Astrophysics Data System (ADS)

    Ito, G.; Mishchenko, M. I.; Glotch, T. D.

    2017-12-01

    Remote sensing measurements over a wide range of wavelengths from both ground- and space-based platforms have provided a wealth of data regarding the surfaces and atmospheres of various solar system bodies. With proper interpretations, important properties, such as composition and particle size, can be inferred. However, proper interpretation of such datasets can often be difficult, especially for densely packed particulate media with particle sizes on the order of the wavelength of light being used for remote sensing. Radiative transfer theory has often been applied to the study of densely packed particulate media like planetary regoliths and snow, but with difficulty, and here we continue to investigate radiative transfer modeling of spectra of densely packed media. We use the superposition T-matrix method to compute scattering properties of clusters of particles and capture the near-field effects important for dense packing. Then, the scattering parameters from the T-matrix computations are modified with the static structure factor correction, accounting for the dense packing of the clusters themselves. Using these corrected scattering parameters, reflectance (or emissivity via Kirchhoff's Law) is computed with the invariant imbedding solution of the radiative transfer equation. For this work we modeled the emissivity spectrum of the 3.3 µm particle size fraction of enstatite, representing some common mineralogical and particle size components of regoliths, in the mid-infrared wavelengths (5-50 µm). The modeled spectrum from the T-matrix method with static structure factor correction using moderate packing densities (filling factors of 0.1-0.2) produced better fits to the laboratory measurement of the corresponding spectrum than the spectrum modeled by the equivalent method without static structure factor correction. Future work will test the combination of the superposition T-matrix method and static structure factor correction for larger particle sizes and polydispersed clusters in search of the most effective modeling of spectra of densely packed particulate media.

  18. Determination of small field synthetic single-crystal diamond detector correction factors for CyberKnife, Leksell Gamma Knife Perfexion and linear accelerator.

    PubMed

    Veselsky, T; Novotny, J; Pastykova, V; Koniarova, I

    2017-12-01

    The aim of this study was to determine small field correction factors for a synthetic single-crystal diamond detector (PTW microDiamond) for routine use in clinical dosimetric measurements. Correction factors following the small field Alfonso formalism were calculated by comparing the PTW microDiamond measured ratio M_{Qclin}^{fclin}/M_{Qmsr}^{fmsr} with Monte Carlo (MC) based field output factors Ω_{Qclin,Qmsr}^{fclin,fmsr} determined using a Dosimetry Diode E or with MC simulation itself. Diode measurements were used for the CyberKnife and the Varian Clinac 2100C/D linear accelerator. PTW microDiamond correction factors for Leksell Gamma Knife (LGK) were derived using MC simulated reference values from the manufacturer. PTW microDiamond corrections for CyberKnife field sizes from 25 to 5 mm were mostly smaller than 1% (except for 2.9% for the 5 mm Iris field and 1.4% for the 7.5 mm fixed cone field). Corrections of 0.1% and 2.0% for the 8 mm and 4 mm collimators, respectively, needed to be applied to PTW microDiamond measurements for LGK Perfexion. Finally, the PTW microDiamond M_{Qclin}^{fclin}/M_{Qmsr}^{fmsr} for the linear accelerator differed from MC-corrected Dosimetry Diode data by less than 0.5% (except for the 1 × 1 cm² field size, with a 1.3% deviation). Given the low resulting correction factor values, the PTW microDiamond detector may be considered an almost ideal tool for relative small field dosimetry in a large variety of stereotactic and radiosurgery treatment devices. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  19. Calibration of entrance dose measurement for an in vivo dosimetry programme.

    PubMed

    Ding, W; Patterson, W; Tremethick, L; Joseph, D

    1995-11-01

    An increasing number of cancer treatment centres are using in vivo dosimetry as a quality assurance tool for verifying dosimetry at either the entrance or exit surface of the patient undergoing external beam radiotherapy. Equipment is usually limited to either thermoluminescent dosimeters (TLD) or semiconductor detectors such as p-type diodes. The semiconductor detector is more popular than the TLD due to the major advantage of real-time analysis of the actual dose delivered. If a discrepancy is observed between the calculated and the measured entrance dose, it is possible to eliminate several likely sources of errors by immediately verifying all treatment parameters. Five Scanditronix EDP-10 p-type diodes were investigated to determine their calibration and relevant correction factors for entrance dose measurements using a Victoreen White Water-RW3 tissue equivalent phantom and a 6 MV photon beam from a Varian Clinac 2100C linear accelerator. Correction factors were determined for individual diodes for the following parameters: source to surface distance (SSD), collimator size, wedge, plate (tray) and temperature. The directional dependence of diode response was also investigated. The SSD correction factor (C_SSD) was found to increase by approximately 3% over the range of SSD from 80 to 130 cm. The correction factor for collimator size (C_field) also varied by approximately 3% between 5 × 5 and 40 × 40 cm². The wedge correction factor (C_wedge) and plate correction factor (C_plate) were found to be a function of collimator size. Over the range of measurement, these factors varied by a maximum of 1 and 1.5%, respectively. The C_plate variation between the solid and the drilled plates under the same irradiation conditions was a maximum of 2.4%. The diode sensitivity demonstrated an increase with temperature. A maximum of 2.5% variation in the directional dependence of diode response was observed for angles of ±60 degrees. In conclusion, in vivo dosimetry is an important and reliable method for checking the dose delivered to the patient. Preclinical calibration and determination of the relevant correction factors for each diode are essential in order to achieve a high accuracy of dose delivered to the patient.

  20. Experimental determination of field factors Ω_{Qclin,Qmsr}^{fclin,fmsr} for small radiotherapy beams using the daisy chain correction method

    NASA Astrophysics Data System (ADS)

    Lárraga-Gutiérrez, José Manuel

    2015-08-01

    Recently, Alfonso et al proposed a new formalism for the dosimetry of small and non-standard fields. The proposed new formalism is strongly based on the calculation of detector-specific beam correction factors by Monte Carlo simulation methods, which account for the difference in detector response between the small field and the machine-specific reference field. The correct calculation of the detector-specific beam correction factors demands accurate knowledge of the linear accelerator and of the detector geometry and composition materials. The present work shows that the field factors in water may be determined experimentally using the daisy chain correction method down to a field size of 1 cm × 1 cm for a specific set of detectors. The detectors studied were: three mini-ionization chambers (PTW-31014, PTW-31006, IBA-CC01), three silicon-based diodes (PTW-60018, IBA-SFD and IBA-PFD) and one synthetic diamond detector (PTW-60019). Monte Carlo simulations and experimental measurements were performed for a 6 MV photon beam at 10 cm depth in water with a source-to-axis distance of 100 cm. The results show that the differences between the experimental and Monte Carlo calculated field factors are less than 0.5% (with the exception of the IBA-PFD) for field sizes between 1.5 cm × 1.5 cm and 5 cm × 5 cm. For the 1 cm × 1 cm field size, the differences are within 2%. By using the daisy chain correction method, it is possible to determine measured field factors in water. The results suggest that the daisy chain correction method is not suitable for measurements performed with the IBA-PFD detector. The latter is due to the presence of tungsten powder in the detector encapsulation material. The use of Monte Carlo calculated k_{Qclin,Qmsr}^{fclin,fmsr} is encouraged for field sizes less than or equal to 1 cm × 1 cm for the dosimeters used in this work.

  1. Monte Carlo modeling of fluorescence in semi-infinite turbid media

    NASA Astrophysics Data System (ADS)

    Ong, Yi Hong; Finlay, Jarod C.; Zhu, Timothy C.

    2018-02-01

    The incident field size and the interplay of absorption and scattering can influence the in-vivo light fluence rate distribution and complicate the absolute quantification of fluorophore concentration in-vivo. In this study, we use Monte Carlo simulations to evaluate the effect of incident beam radius and optical properties on the fluorescence signal collected by an isotropic detector placed on the tissue surface. The optical properties at the excitation and emission wavelengths are assumed to be identical. We compute correction factors to correct the fluorescence intensity for variations due to incident field size and optical properties. The correction factors are fitted to a 4-parameter empirical correction function and the changes in each parameter are compared for various beam radii over a range of physiologically relevant tissue optical properties (μa = 0.1-1 cm⁻¹, μs′ = 5-40 cm⁻¹).

  2. Ratios of total suspended solids to suspended sediment concentrations by particle size

    USGS Publications Warehouse

    Selbig, W.R.; Bannerman, R.T.

    2011-01-01

    Wet-sieving sand-sized particles from a whole storm-water sample before splitting the sample into laboratory-prepared containers can reduce bias and improve the precision of suspended-sediment concentrations (SSC). Wet-sieving, however, may alter concentrations of total suspended solids (TSS) because the analytical method used to determine TSS may not have included the sediment retained on the sieves. Measuring TSS is still commonly used by environmental managers as a regulatory metric for solids in storm water. For this reason, a new method of correlating concentrations of TSS and SSC by particle size was used to develop a series of correction factors for SSC as a means to estimate TSS. In general, differences between TSS and SSC increased with greater particle size and higher sand content. Median correction factors to SSC ranged from 0.29 for particles larger than 500 µm to 0.85 for particles measuring from 32 to 63 µm. Great variability was observed in each fraction, a result of varying amounts of organic matter in the samples. Wide variability in organic content could reduce the transferability of the correction factors. © 2011 American Society of Civil Engineers.
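
    A small Python sketch of applying per-fraction correction factors to SSC to estimate TSS. Only the two quoted medians (0.29 for >500 µm, 0.85 for 32-63 µm) come from the abstract; the other factors and the size-class boundaries used below are placeholders.

    ```python
    # Per-fraction correction factors (dimensionless multipliers on SSC).
    CORRECTION_BY_FRACTION = {
        ">500um": 0.29,     # reported median
        "250-500um": 0.45,  # placeholder
        "125-250um": 0.60,  # placeholder
        "63-125um": 0.75,   # placeholder
        "32-63um": 0.85,    # reported median
        "<32um": 0.95,      # placeholder
    }

    def tss_estimate(ssc_by_fraction_mg_l):
        """Estimate TSS (mg/L) as the sum of per-fraction SSC times its factor."""
        return sum(CORRECTION_BY_FRACTION[frac] * conc
                   for frac, conc in ssc_by_fraction_mg_l.items())

    sample = {">500um": 40.0, "32-63um": 55.0, "<32um": 120.0}  # hypothetical SSC split
    print(tss_estimate(sample))
    ```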

  3. Communication: Finite size correction in periodic coupled cluster theory calculations of solids.

    PubMed

    Liao, Ke; Grüneis, Andreas

    2016-10-14

    We present a method to correct for finite size errors in coupled cluster theory calculations of solids. The outlined technique shares similarities with electronic structure factor interpolation methods used in quantum Monte Carlo calculations. However, our approach does not require the calculation of density matrices. Furthermore, we show that the proposed finite size corrections achieve chemical accuracy in the convergence of second-order Møller-Plesset perturbation and coupled cluster singles and doubles correlation energies per atom for insulating solids with two-atom unit cells using 2 × 2 × 2 and 3 × 3 × 3 k-point meshes only.

  4. TU-F-CAMPUS-T-04: Variations in Nominally Identical Small Fields From Photon Jaw Reproducibility and Associated Effects On Small Field Dosimetric Parameters

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B R; McEwen, M R

    2015-06-15

    Purpose: To investigate uncertainties in small field output factors and detector specific correction factors from variations in field size for nominally identical fields using measurements and Monte Carlo simulations. Methods: Repeated measurements of small field output factors are made with the Exradin W1 (plastic scintillation detector) and the PTW microDiamond (synthetic diamond detector) in beams from the Elekta Precise linear accelerator. We investigate corrections for a 0.6×0.6 cm² nominal field size shaped with secondary photon jaws at 100 cm source to surface distance (SSD). Measurements of small field profiles are made in a water phantom at 10 cm depth using both detectors and are subsequently used for accurate detector positioning. Supplementary Monte Carlo simulations with EGSnrc are used to calculate the absorbed dose to the detector and absorbed dose to water under the same conditions when varying field size. The jaws in the BEAMnrc model of the accelerator are varied by a reasonable amount to investigate the same situation without the influence of measurement uncertainties (such as detector positioning or variation in beam output). Results: For both detectors, small field output factor measurements differ by up to 11% when repeated measurements are made in nominally identical 0.6×0.6 cm² fields. Variations in the FWHM of measured profiles are consistent with field size variations reported by the accelerator. Monte Carlo simulations of the dose to the detector vary by up to 16% under worst case variations in field size. These variations are also present in calculations of absorbed dose to water. However, calculated detector specific correction factors are within 1% when varying field size because of cancellation of effects. Conclusion: Clinical physicists should be aware of potentially significant uncertainties in measured output factors required for dosimetry of small fields due to field size variations for nominally identical fields.

  5. SU-E-T-101: Determination and Comparison of Correction Factors Obtained for TLDs in Small Field Lung Heterogenous Phantom Using Acuros XB and EGSnrc

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soh, R; Lee, J; Harianto, F

    Purpose: To determine and compare the correction factors obtained for TLDs in a 2 × 2 cm² small field in a lung heterogeneous phantom using Acuros XB (AXB) and EGSnrc. Methods: This study simulates the correction factors due to the perturbation of TLD-100 chips (Harshaw/Thermoscientific, 3 × 3 × 0.9 mm³, 2.64 g/cm³) in a small field lung medium for Stereotactic Body Radiation Therapy (SBRT). A physical lung phantom was simulated by a 14 cm thick composite cork phantom (0.27 g/cm³, HU: -743 ± 11) sandwiched between 4 cm thick Plastic Water (CIRS, Norfolk). Composite cork has been shown to be a good lung substitute material for dosimetric studies. A 6 MV photon beam from a Varian Clinac iX (Varian Medical Systems, Palo Alto, CA) with a 2 × 2 cm² field size was simulated. Depth dose profiles were obtained from the Eclipse treatment planning system Acuros XB (AXB) and independently from DOSxyznrc, EGSnrc. Correction factors were calculated as the ratio of unperturbed to perturbed dose. Since AXB has limitations in simulating actual material compositions, EGSnrc also simulated the AXB-based material composition for comparison to the actual lung phantom. Results: TLD-100, with its finite size and relatively high density, causes significant perturbation in a 2 × 2 cm² small field in a low-density lung phantom. Correction factors calculated by both EGSnrc and AXB were found to be as low as 0.9. It is expected that the correction factor obtained by EGSnrc will be more accurate as it is able to simulate the actual phantom material compositions. AXB has a limited material library; therefore, it only approximates the composition of TLD, composite cork and Plastic Water, contributing to uncertainties in TLD correction factors. Conclusion: It is expected that the correction factors obtained by EGSnrc will be more accurate. Studies will be done to investigate the correction factors for higher energies where perturbation may be more pronounced.

  6. Multicenter evaluation of a synthetic single-crystal diamond detector for CyberKnife small field size output factors.

    PubMed

    Russo, Serenella; Masi, Laura; Francescon, Paolo; Frassanito, Maria Cristina; Fumagalli, Maria Luisa; Marinelli, Marco; Falco, Maria Daniela; Martinotti, Anna Stefania; Pimpinella, Maria; Reggiori, Giacomo; Verona Rinati, Gianluca; Vigorito, Sabrina; Mancosu, Pietro

    2016-04-01

    The aim of the present work was to evaluate small field size output factors (OFs) using the latest diamond detector commercially available, the PTW-60019 microDiamond, over different CyberKnife systems. OFs were also measured with the silicon detectors routinely used by each center, taken as reference. Five Italian CyberKnife centers performed OF measurements for field sizes ranging from 5 to 60 mm, defined by fixed circular collimators (5 centers) and by the Iris(™) variable aperture collimator (4 centers). Setup conditions were 80 cm source-to-detector distance and 1.5 cm depth in water. To speed up measurements, two diamond detectors were used and their equivalence was evaluated. Monte Carlo (MC) correction factors for silicon detectors were used for comparing the OF measurements. Considering OF values averaged over all centers, the diamond data were lower than the uncorrected silicon diode data. The agreement between diamond and MC corrected silicon values was within 0.6% for all fixed circular collimators. Relative differences between microDiamond and MC corrected silicon diode data for the Iris(™) collimator were lower than 1.0% for all apertures in all centers. The two microDiamond detectors showed similar characteristics, in agreement with the technical specifications. Excellent agreement between microDiamond and MC corrected silicon diode OFs was obtained for both collimation systems, fixed cones and Iris(™), demonstrating that the microDiamond could be a suitable detector for CyberKnife commissioning and routine checks. These results obtained in five centers suggest that for CyberKnife systems the microDiamond can be used without corrections even at the smallest field size. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  7. Correction for spatial averaging in laser speckle contrast analysis

    PubMed Central

    Thompson, Oliver; Andrews, Michael; Hirst, Evan

    2011-01-01

    Practical laser speckle contrast analysis systems face a problem of spatial averaging of speckles, due to the pixel size in the cameras used. Existing practice is to use a system factor in speckle contrast analysis to account for spatial averaging. The linearity of the system factor correction has not previously been confirmed. The problem of spatial averaging is illustrated using computer simulation of time-integrated dynamic speckle, and the linearity of the correction confirmed using both computer simulation and experimental results. The valid linear correction allows various useful compromises in the system design. PMID:21483623
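
    A minimal Python sketch of the linear system-factor correction, assuming the squared contrast is scaled by a constant beta determined from a static-scatterer measurement; the beta value and the synthetic intensity frame below are illustrative only.

    ```python
    import numpy as np

    BETA = 0.6  # assumed system factor from a fully developed static speckle measurement

    def corrected_contrast(frame, beta=BETA):
        """Spatial speckle contrast K = sigma/mean, corrected for pixel averaging
        via K_true ~ K_measured / sqrt(beta)."""
        k_meas = frame.std() / frame.mean()
        return k_meas / np.sqrt(beta)

    rng = np.random.default_rng(0)
    frame = rng.gamma(shape=4.0, scale=25.0, size=(64, 64))  # synthetic intensity image
    print(corrected_contrast(frame))
    ```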

  8. Correction factor for ablation algorithms used in corneal refractive surgery with gaussian-profile beams

    NASA Astrophysics Data System (ADS)

    Jimenez, Jose Ramón; González Anera, Rosario; Jiménez del Barco, Luis; Hita, Enrique; Pérez-Ocón, Francisco

    2005-01-01

    We provide a correction factor to be added to ablation algorithms when a Gaussian beam is used in photorefractive laser surgery. This factor, which quantifies the effect of pulse overlapping, depends on beam radius and spot size. We also deduce the expected post-surgical corneal radius and asphericity when considering this factor. Data on 141 eyes treated with LASIK (laser in situ keratomileusis) using a Gaussian profile show that the discrepancy between experimental and expected data on corneal power is significantly lower when using the correction factor. For an effective improvement of post-surgical visual quality, this factor should be applied in ablation algorithms that do not consider the effects of pulse overlapping with a Gaussian beam.

  9. Sunlight Transmission through Desert Dust and Marine Aerosols: Diffuse Light Corrections to Sun Photometry and Pyrheliometry

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Livingston, J. M.; Dubovik, O.; Ramirez, S. A.; Wang, J.; Redemann, J.; Schmid, B.; Box, M.; Holben, B. N.

    2003-01-01

    Desert dust and marine aerosols are receiving increased scientific attention because of their prevalence on intercontinental scales and their potentially large effects on Earth radiation and climate, as well as on other aerosols, clouds, and precipitation. The relatively large size of desert dust and marine aerosols produces scattering phase functions that are strongly forward-peaked. Hence, Sun photometry and pyrheliometry of these aerosols are more subject to diffuse-light errors than is the case for smaller aerosols. Here we quantify these diffuse-light effects for common Sun photometer and pyrheliometer fields of view (FOV), using a database of dust and marine aerosols derived from (1) AERONET measurements of sky radiance and solar beam transmission and (2) in situ measurements of aerosol layer size distribution and chemical composition. Accounting for particle non-sphericity is important when deriving dust size distribution from both AERONET and in situ aerodynamic measurements. We express our results in terms of correction factors that can be applied to Sun photometer and pyrheliometer measurements of aerosol optical depth (AOD). We find that the corrections are negligible (less than approximately 1% of AOD) for Sun photometers with narrow FOV (half-angle η of less than a degree), but that they can be as large as 10% of AOD at 354 nm wavelength for Sun photometers with η = 1.85 degrees. For pyrheliometers (which can have η up to approximately 2.8 degrees), corrections can be as large as 16% at 354 nm. We find that AOD correction factors are well correlated with AOD wavelength dependence (hence Angstrom exponent). We provide best-fit equations for determining correction factors from Angstrom exponents of uncorrected AOD spectra, and we demonstrate their application to vertical profiles of multiwavelength AOD.

  10. Finite-Size Effects of Binary Mutual Diffusion Coefficients from Molecular Dynamics

    PubMed Central

    2018-01-01

    Molecular dynamics simulations were performed for the prediction of the finite-size effects of Maxwell-Stefan diffusion coefficients of molecular mixtures and a wide variety of binary Lennard–Jones systems. A strong dependency of computed diffusivities on the system size was observed. Computed diffusivities were found to increase with the number of molecules. We propose a correction for the extrapolation of Maxwell–Stefan diffusion coefficients to the thermodynamic limit, based on the study by Yeh and Hummer (J. Phys. Chem. B, 2004, 108, 15873−15879). The proposed correction is a function of the viscosity of the system, the size of the simulation box, and the thermodynamic factor, which is a measure for the nonideality of the mixture. Verification is carried out for more than 200 distinct binary Lennard–Jones systems, as well as 9 binary systems of methanol, water, ethanol, acetone, methylamine, and carbon tetrachloride. Significant deviations between finite-size Maxwell–Stefan diffusivities and the corresponding diffusivities at the thermodynamic limit were found for mixtures close to demixing. In these cases, the finite-size correction can be even larger than the simulated (finite-size) Maxwell–Stefan diffusivity. Our results show that considering these finite-size effects is crucial and that the suggested correction allows for reliable computations. PMID:29664633
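
    A short Python sketch of the proposed correction as described: a Yeh-Hummer-type hydrodynamic finite-size term, divided by the thermodynamic factor, is added to the simulated Maxwell-Stefan diffusivity. The numerical inputs below are illustrative, not data from the paper.

    ```python
    import math

    XI = 2.837297        # cubic-box self-interaction constant (Yeh & Hummer)
    KB = 1.380649e-23    # Boltzmann constant, J/K

    def ms_diffusivity_infinite(d_ms_md, temperature, viscosity, box_length, gamma):
        """D_MS(inf) ~= D_MS(L) + [xi * kB * T / (6 * pi * eta * L)] / Gamma,
        where Gamma is the thermodynamic factor of the mixture."""
        d_yh = XI * KB * temperature / (6.0 * math.pi * viscosity * box_length)
        return d_ms_md + d_yh / gamma

    # Illustrative mixture: 4 nm box, water-like viscosity, Gamma = 0.8.
    print(ms_diffusivity_infinite(d_ms_md=1.8e-9, temperature=298.15,
                                  viscosity=0.85e-3, box_length=4.0e-9, gamma=0.8))
    ```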

  11. Impact of the neutron detector choice on Bell and Glasstone spatial correction factor for subcriticality measurement

    NASA Astrophysics Data System (ADS)

    Talamo, Alberto; Gohar, Y.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.

    2012-03-01

    In subcritical assemblies, the Bell and Glasstone spatial correction factor is used to correct the measured reactivity from different detector positions. In addition to the measuring position, several other parameters affect the correction factor: the detector material, the detector size, and the energy-angle distribution of source neutrons. The effective multiplication factor calculated by computer codes in criticality mode slightly differs from the average value obtained from the measurements in the different experimental channels of the subcritical assembly, which are corrected by the Bell and Glasstone spatial correction factor. Generally, this difference is due to (1) neutron counting errors; (2) geometrical imperfections, which are not simulated in the calculational model; and (3) quantities and distributions of material impurities, which are missing from the material definitions. This work examines these issues, focusing on the detector choice and the calculation methodologies. The work investigated the YALINA Booster subcritical assembly of Belarus, which has been operated with three different fuel configurations in the fast zone: high (90%) and medium (36%), medium (36%) only, or low (21%) enriched uranium fuel.

  12. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data

    PubMed Central

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-01-01

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Center (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km, 30 min resolution are aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach. PMID:27314363
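
    A compact Python sketch of the sequential-window bias factor described above, applied over non-overlapping 7-day blocks at a single station; spatial interpolation of the station factors to a bias map is not shown, and the synthetic data are illustrative.

    ```python
    import numpy as np

    def sequential_bias_correction(cmorph_daily, gauge_daily, window=7):
        """Multiply each non-overlapping window of CMORPH estimates by the ratio of
        accumulated gauge rainfall to accumulated CMORPH rainfall in that window."""
        cmorph = np.asarray(cmorph_daily, dtype=float)
        gauge = np.asarray(gauge_daily, dtype=float)
        corrected = cmorph.copy()
        for start in range(0, len(cmorph), window):
            block = slice(start, start + window)
            sat_sum = cmorph[block].sum()
            factor = gauge[block].sum() / sat_sum if sat_sum > 0 else 1.0
            corrected[block] *= factor
        return corrected

    rng = np.random.default_rng(1)
    gauge = rng.gamma(2.0, 3.0, size=28)                         # synthetic daily gauge rainfall, mm
    cmorph = 0.7 * gauge + rng.normal(0.0, 1.0, 28).clip(0.0)    # synthetic, biased satellite estimate
    print(sequential_bias_correction(cmorph, gauge).round(1))
    ```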

  13. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data.

    PubMed

    Bhatti, Haris Akram; Rientjes, Tom; Haile, Alemseged Tamiru; Habib, Emad; Verhoef, Wouter

    2016-06-15

    With the advances in remote sensing technology, satellite-based rainfall estimates are gaining traction in the field of hydrology, particularly in rainfall-runoff modeling. Since the estimates are affected by errors, correction is required. In this study, we tested the high resolution National Oceanic and Atmospheric Administration's (NOAA) Climate Prediction Center (CPC) morphing technique (CMORPH) satellite rainfall product in the Gilgel Abbey catchment, Ethiopia. CMORPH data at 8 km, 30 min resolution are aggregated to daily to match in-situ observations for the period 2003-2010. Study objectives are to assess bias of the satellite estimates, to identify the optimum window size for application of bias correction and to test the effectiveness of bias correction. Bias correction factors are calculated for moving window (MW) sizes and for sequential windows (SWs) of 3, 5, 7, 9, …, 31 days with the aim to assess the error distribution between the in-situ observations and CMORPH estimates. We tested forward, central and backward window (FW, CW and BW) schemes to assess the effect of time integration on accumulated rainfall. Accuracy of cumulative rainfall depth is assessed by Root Mean Squared Error (RMSE). To systematically correct all CMORPH estimates, station-based bias factors are spatially interpolated to yield a bias factor map. Reliability of interpolation is assessed by cross validation. The uncorrected CMORPH rainfall images are multiplied by the interpolated bias map to result in bias-corrected CMORPH estimates. Findings are evaluated by RMSE, correlation coefficient (r) and standard deviation (SD). Results showed existence of bias in the CMORPH rainfall. It is found that the 7-day SW approach performs best for bias correction of CMORPH rainfall. The outcome of this study showed the efficiency of our bias correction approach.

  14. Is the PTW 60019 microDiamond a suitable candidate for small field reference dosimetry?

    NASA Astrophysics Data System (ADS)

    De Coste, Vanessa; Francescon, Paolo; Marinelli, Marco; Masi, Laura; Paganini, Lucia; Pimpinella, Maria; Prestopino, Giuseppe; Russo, Serenella; Stravato, Antonella; Verona, Claudio; Verona-Rinati, Gianluca

    2017-09-01

    A systematic study of the PTW microDiamond (MD) output factors (OF) is reported, aimed at clarifying its response in small fields and investigating its suitability for small field reference dosimetry. Ten MDs were calibrated under 60Co irradiation. OF measurements were performed in 6 MV photon beams from CyberKnife M6, Varian DHX and Elekta Synergy linacs. Two PTW silicon diode E detectors (Si-D) were used for comparison. The results obtained by the MDs were evaluated in terms of absorbed dose to water determination in reference conditions and OF measurements, and compared to the results reported in the recent literature. For this purpose, the Monte Carlo (MC) beam-quality correction factor $k_Q^{\mathrm{MD}}$ was calculated for the MD, and the small field output correction factors $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ were calculated for both the MD and the Si-D by two different research groups. An empirical function was also derived, providing output correction factors within 0.5% of the MC values calculated for all three linacs. A high reproducibility of the dosimetric properties was observed among the ten MDs. The experimental $k_Q^{\mathrm{MD}}$ values are in agreement within 1% with the MC-calculated ones. Output correction factors between +0.7% and -1.4% were obtained down to field sizes as narrow as 5 mm. The resulting MD and Si-D field factors are in agreement within 0.2% in the case of CyberKnife measurements and within 1.6% in the other cases. This latter, higher spread of the data was demonstrated to be due to a lower reproducibility of small beam sizes defined by jaws or multi-leaf collimators. The results of the present study demonstrate the reproducibility of the MD response and provide a validation of the MC modelling of this device. In principle, accurate reference dosimetry is thus feasible using the microDiamond dosimeter for field sizes down to 5 mm.
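
    For context, an output correction factor of this kind enters the field output factor as a single multiplication of the measured reading ratio. A minimal sketch is given below; the k values in the table are placeholders for illustration, not the paper's results.

        # Hypothetical output correction factors per field size (mm); placeholder values only.
        K_TABLE = {5: 0.990, 7.5: 0.994, 10: 0.997, 25: 1.000}

        def field_output_factor(m_clin, m_msr, field_mm, k_table=K_TABLE):
            """Omega = (M_clin / M_msr) * k_{Qclin,Qmsr}^{fclin,fmsr}."""
            return (m_clin / m_msr) * k_table[field_mm]

        print(field_output_factor(m_clin=0.652, m_msr=1.000, field_mm=5))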

  15. Application of the Exradin W1 scintillator to determine Ediode 60017 and microDiamond 60019 correction factors for relative dosimetry within small MV and FFF fields.

    PubMed

    Underwood, T S A; Rowland, B C; Ferrand, R; Vieillevigne, L

    2015-09-07

    In this work we use EBT3 film measurements at 10 MV to demonstrate the suitability of the Exradin W1 (plastic scintillator) for relative dosimetry within small photon fields. We then use the Exradin W1 to measure the small field correction factors required by two other detectors: the PTW unshielded Ediode 60017 and the PTW microDiamond 60019. We consider on-axis correction-factors for small fields collimated using MLCs for four different TrueBeam energies: 6 FFF, 6 MV, 10 FFF and 10 MV. We also investigate percentage depth dose and lateral profile perturbations. In addition to high-density effects from its silicon sensitive region, the Ediode exhibited a dose-rate dependence and its known over-response to low energy scatter was found to be greater for 6 FFF than 6 MV. For clinical centres without access to a W1 scintillator, we recommend the microDiamond over the Ediode and suggest that 'limits of usability', field sizes below which a detector introduces unacceptable errors, can form a practical alternative to small-field correction factors. For a dosimetric tolerance of 2% on-axis, the microDiamond might be utilised down to 10 mm and 15 mm field sizes for 6 MV and 10 MV, respectively.

  16. Correlating hydrodynamic radii with that of two-dimensional nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, Yuan; Kan, Yuwei; Clearfield, Abraham

    2015-12-21

    Dynamic light scattering (DLS) is one of the most widely adopted methods for measuring the size of nanoparticles, reported as the hydrodynamic radius (R_h). However, R_h represents only the size of three-dimensional spherical nanoparticles. In the present research, the size of two-dimensional (2D) nanoparticles of yttrium oxide (Y2O3) and zirconium phosphate (ZrP) was evaluated by comparing their hydrodynamic diameters from DLS with lateral sizes obtained using scanning and transmission electron microscopy. We demonstrate that the hydrodynamic radii are correlated with the lateral sizes of both square- and circle-shaped 2D nanoparticles. Two proportional coefficients, i.e., correction factors, are proposed for the Brownian motion of 2D nanoparticles. The correction is made possible by simplifying the calculation of the integrals under a small-thickness approximation. The correction factor has great significance for investigating the translational diffusion behavior of 2D nanoparticles in a liquid and for effective, low-cost measurement of the size and morphology of shape-specific nanoparticles.

  17. Spectral distribution of particle fluence in small field detectors and its implication on small field dosimetry.

    PubMed

    Benmakhlouf, Hamza; Andreo, Pedro

    2017-02-01

    Correction factors for the relative dosimetry of narrow megavoltage photon beams have recently been determined in several publications. These corrections are required because of several small-field effects generally thought to be caused by the lack of lateral charged particle equilibrium (LCPE) in narrow beams. Correction factors for relative dosimetry are ultimately necessary to account for the fluence perturbation caused by the detector. For most small field detectors the perturbation depends on field size, resulting in large correction factors when the field size is decreased. In this work, electron and photon fluences differential in energy were calculated within the radiation-sensitive volume of a number of small field detectors for 6 MV linear accelerator beams. The calculated electron spectra were used to determine the electron fluence perturbation as a function of field size, and its implications for small field dosimetry were analyzed. Fluence spectra were calculated with the user code PenEasy, based on the PENELOPE Monte Carlo system. The detectors simulated were one liquid ionization chamber, two air ionization chambers, one diamond detector, and six silicon diodes, all manufactured either by PTW or IBA. The spectra were calculated for broad (10 cm × 10 cm) and narrow (0.5 cm × 0.5 cm) photon beams in order to investigate the field size influence on the fluence spectra and its resulting perturbation. The photon fluence spectra were used to analyze the impact of absorption and generation of photons. These have a direct influence on the electrons generated in the detector radiation-sensitive volume. The electron fluence spectra were used to quantify the perturbation effects and their relation to output correction factors. The photon fluence spectra obtained for all detectors were similar to the spectrum in water except for the shielded silicon diodes. The photon fluence in the latter group was strongly influenced, mostly in the low-energy region, by photoabsorption in the high-Z shielding material. For the ionization chambers and the diamond detector, the electron fluence spectra were found to be similar to those in water, for both field sizes. In contrast, electron spectra in the silicon diodes were much higher than those in water for both field sizes. The estimated perturbations of the fluence spectra for the silicon diodes were 11-21% for the large fields and 14-27% for the small fields. These perturbations are related to the atomic number, density and mean excitation energy (I-value) of silicon, as well as to the influence of the "extracameral" components surrounding the detector sensitive volume. For most detectors the fluence perturbation was also found to increase when the field size was decreased, consistent with the increased small-field effects observed for the smallest field sizes. The present work improves the understanding of small-field effects by relating output correction factors to spectral fluence perturbations in small field detectors. It is shown that the main reasons for the well-known small-field effects in silicon diodes are the high Z and density of the "extracameral" detector components and the high I-value of silicon relative to that of water and diamond. Compared to these parameters, the density and atomic number of the radiation-sensitive volume material play a less significant role. © 2016 American Association of Physicists in Medicine.

  18. SU-F-T-367: Using PRIMO, a PENELOPE-Based Software, to Improve the Small Field Dosimetry of Linear Accelerators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benmakhlouf, H; Andreo, P; Brualla, L

    2016-06-15

    Purpose: To calculate output correction factors for Varian Clinac 2100iX beams for seven small field detectors and use the values to determine the small field output factors for the linacs at Karolinska University Hospital. Methods: Phase space files (psf) for square fields between 0.25 cm and 10 cm were calculated using the PENELOPE-based PRIMO software. The linac MC model was tuned by comparing PRIMO-estimated and experimentally determined depth doses and lateral dose profiles for 40 cm × 40 cm fields. The calculated psf were used as radiation sources to calculate the correction factors of IBA and PTW detectors with the code penEasy/PENELOPE. Results: The optimal tuning parameters of the MC linac model in PRIMO were 5.4 MeV incident electron energy and zero energy spread, focal spot size and beam divergence. Correction factors obtained for the liquid ion chamber (PTW-T31018) are within 1% down to 0.5 cm fields. For unshielded diodes (IBA-EFD, IBA-SFD, PTW-T60017 and PTW-T60018) the corrections are up to 2% at intermediate fields (>1 cm side), and reach down to −11% for fields smaller than 1 cm. For the shielded diodes (IBA-PFD and PTW-T60016) the corrections vary with field size from 0 to −4%. Volume averaging effects are found for most detectors in the presence of 0.25 cm fields. Conclusion: Good agreement was found between correction factors based on PRIMO-generated psf and those from other publications. The calculated factors will be implemented in output factor measurements (using several detectors) in the clinic. PRIMO is a user-friendly general code capable of generating small field psf and can be used without having to code one's own linac geometries. It can therefore be used to improve clinical dosimetry, especially in the commissioning of linear accelerators. Important dosimetry data, such as dose profiles and output factors, can be determined more accurately for a specific machine, geometry and setup by using PRIMO together with an MC model of the detector used.

  19. Evaluation of Potential Energy Loss Reduction and Savings for U. S. Army Electrical Distribution Systems

    DTIC Science & Technology

    1993-09-01

    [Only fragments of this report are indexed: list-of-tables entries ("Different Size Transformers (Per Transformer)", "Additional Energy Losses for Mis-Sized Transformers (Per Transformer)", "Power System ..."), a text fragment noting that load balance directly affects the amount of neutral-line power loss in the system and that most Army three-phase loads are distribution transformers spread over a wide area, and section topics including Balancing Three-Phase Loads, Balancing Feeder Circuit Loads, Power Factor Correction, Optimal Transformer Sizing, and Conductor Sizing.]

  20. Application of a radiophotoluminescent glass dosimeter to nonreference condition dosimetry in the postal dose audit system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mizuno, Hideyuki, E-mail: h-mizuno@nirs.go.jp; Fukumura, Akifumi; Fukahori, Mai

    Purpose: The purpose of this study was to obtain a set of correction factors of the radiophotoluminescent glass dosimeter (RGD) output for field size changes and wedge insertions. Methods: Several linear accelerators were used for irradiation of the RGDs. The field sizes were changed from 5 × 5 cm to 25 × 25 cm for 4, 6, 10, and 15 MV x-ray beams. The wedge angles were 15°, 30°, 45°, and 60°. In addition to physical wedge irradiation, nonphysical (dynamic/virtual) wedge irradiations were performed. Results: The obtained data were fitted with a single line for each energy, and correction factors were determined. Compared with ionization chamber outputs, the RGD outputs gradually increased with increasing field size, because of the higher RGD response to scattered low-energy photons. The output increase was about 1% per 10 cm increase in field size, with a slight dependence on the beam energy. For both physical and nonphysical wedged beam irradiation, there were no systematic trends in the RGD outputs, such as a monotonic increase or decrease with wedge angle, within the measurement uncertainty of approximately 0.6% for each set of measured points. Therefore, no correction factor was needed for any of the inserted wedges. Based on this work, postal dose audits using RGDs for the nonreference condition were initiated in 2010. The postal dose audit results between 2010 and 2012 were analyzed. The mean difference between the measured and stated doses was within 0.5% for all fields with field sizes between 5 × 5 cm and 25 × 25 cm and with wedge angles from 15° to 60°. The standard deviations (SDs) of the difference distribution were within the estimated uncertainty (1 SD) except for the 25 × 25 cm field size data, which were not reliable because of poor statistics (n = 16). Conclusions: A set of RGD output correction factors was determined for field size changes and wedge insertions. The results obtained from recent postal dose audits were analyzed, and the mean differences between the measured and stated doses were within 0.5% for every field size and wedge angle. The SDs of the distribution were within the estimated uncertainty, except for one condition that was not reliable because of poor statistics.
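
    The reported field-size dependence (roughly a 1% output increase per 10 cm of field side, fitted with a single straight line per energy) translates into a simple multiplicative correction. The sketch below uses made-up data points purely to illustrate the fit and its use; none of the numbers come from the study.

        import numpy as np

        # Illustrative data: square field side (cm) vs RGD/ion-chamber response ratio.
        side = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
        ratio = np.array([0.9975, 1.0025, 1.0075, 1.0125, 1.0175])   # ~1% per 10 cm (made up)

        slope, intercept = np.polyfit(side, ratio, 1)   # single straight-line fit per energy

        def rgd_field_size_correction(field_side_cm, ref_side_cm=10.0):
            """Correction factor that removes the field-size dependence of the RGD output."""
            return (slope * ref_side_cm + intercept) / (slope * field_side_cm + intercept)

        print(rgd_field_size_correction(25.0))   # <1, since the RGD over-responds in large fields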

  1. Impact of Target Distance, Target Size, and Visual Acuity on the Video Head Impulse Test.

    PubMed

    Judge, Paul D; Rodriguez, Amanda I; Barin, Kamran; Janky, Kristen L

    2018-05-01

    The video head impulse test (vHIT) assesses the vestibulo-ocular reflex. Few have evaluated whether environmental factors or visual acuity influence the vHIT. The purpose of this study was to evaluate the influence of target distance, target size, and visual acuity on vHIT outcomes. Thirty-eight normal controls and 8 subjects with vestibular loss (VL) participated. vHIT was completed at 3 distances and with 3 target sizes. Normal controls were subdivided on the basis of visual acuity. Corrective saccade frequency, corrective saccade amplitude, and gain were tabulated. In the normal control group, there were no significant effects of target size or visual acuity for any vHIT outcome parameters; however, gain increased as target distance decreased. The VL group demonstrated higher corrective saccade frequency and amplitude and lower gain as compared with controls. In conclusion, decreasing target distance increases gain for normal controls but not subjects with VL. Preliminarily, visual acuity does not affect vHIT outcomes.

  2. Comment on “Breakdown of the expansion of finite-size corrections to the hydrogen Lamb shift in moments of charge distribution”

    DOE PAGES

    Arrington, J.

    2016-02-23

    In a recent study, Hagelstein and Pascalutsa [F. Hagelstein and V. Pascalutsa, Phys. Rev. A 91, 040502 (2015)] examine the error associated with an expansion of proton structure corrections to the Lamb shift in terms of moments of the charge distribution. They propose a small modification to a conventional parametrization of the proton's charge form factor and show that this can resolve the proton radius puzzle. However, while the size of the bump they add to the form factor is small, it is large compared to the total proton structure effects in the initial parametrization, yielding a final form factor that is unphysical. Reducing their modification to the point where the resulting form factor is physical does not allow for a resolution of the radius puzzle.

  3. Estimating Occupancy of Gopher Tortoise (Gorpherus polyphemus) Burrows in Coastal Scrub and Slash Pine Flatwoods

    NASA Technical Reports Server (NTRS)

    Breininger, David R.; Schmalzer, Paul A.; Hinkle, C. Ross

    1991-01-01

    One hundred twelve plots were established in coastal scrub and slash pine flatwoods habitats on the John F. Kennedy Space Center (KSC) to evaluate relationships between the number of burrows and gopher tortoise (Gopherus polyphemus) density. All burrows were located within these plots and were classified according to tortoise activity. Depending on season, bucket trapping, a stick method, a gopher tortoise pulling device, and a camera system were used to estimate tortoise occupancy. Correction factors (% of burrows occupied) were calculated by season and habitat type. Our data suggest that less than 20% of the active and inactive burrows combined were occupied during seasons when gopher tortoises were active. Correction factors were higher in poorly-drained areas and lower in well-drained areas during the winter, when gopher tortoise activity was low. Correction factors differed from studies elsewhere, indicating that population estimates require correction factors specific to the site and season to accurately estimate population size.
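
    The resulting occupancy correction factor is applied as a simple multiplier on burrow counts. For example (numbers hypothetical, not the study's data):

        # Population estimate = burrow count x site- and season-specific occupancy fraction.
        burrow_count = 140              # active + inactive burrows found in a survey (hypothetical)
        occupancy_correction = 0.18     # fraction of burrows occupied for that habitat and season
        tortoise_estimate = burrow_count * occupancy_correction
        print(tortoise_estimate)        # ~25 tortoises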

  4. 1/f noise from the laws of thermodynamics for finite-size fluctuations.

    PubMed

    Chamberlin, Ralph V; Nasir, Derek M

    2014-07-01

    Computer simulations of the Ising model exhibit white noise if thermal fluctuations are governed by Boltzmann's factor alone; whereas we find that the same model exhibits 1/f noise if Boltzmann's factor is extended to include local alignment entropy to all orders. We show that this nonlinear correction maintains maximum entropy during equilibrium fluctuations. Indeed, as with the usual way to resolve Gibbs' paradox that avoids entropy reduction during reversible processes, the correction yields the statistics of indistinguishable particles. The correction also ensures conservation of energy if an instantaneous contribution from local entropy is included. Thus, a common mechanism for 1/f noise comes from assuming that finite-size fluctuations strictly obey the laws of thermodynamics, even in small parts of a large system. Empirical evidence for the model comes from its ability to match the measured temperature dependence of the spectral-density exponents in several metals and to show non-Gaussian fluctuations characteristic of nanoscale systems.

  5. SPECTRAL CORRECTION FACTORS FOR CONVENTIONAL NEUTRON DOSE METERS USED IN HIGH-ENERGY NEUTRON ENVIRONMENTS-IMPROVED AND EXTENDED RESULTS BASED ON A COMPLETE SURVEY OF ALL NEUTRON SPECTRA IN IAEA-TRS-403.

    PubMed

    Oparaji, U; Tsai, Y H; Liu, Y C; Lee, K W; Patelli, E; Sheu, R J

    2017-06-01

    This paper presents improved and extended results of our previous study on corrections for conventional neutron dose meters used in environments with high-energy neutrons (En > 10 MeV). Conventional moderated-type neutron dose meters tend to underestimate the dose contribution of high-energy neutrons because of the opposite trends of dose conversion coefficients and detection efficiencies as the neutron energy increases. A practical correction scheme was proposed based on analysis of hundreds of neutron spectra in the IAEA-TRS-403 report. By comparing 252Cf-calibrated dose responses with reference values derived from fluence-to-dose conversion coefficients, this study provides recommendations for neutron field characterization and the corresponding dose correction factors. Further sensitivity studies confirm the appropriateness of the proposed scheme and indicate that (1) the spectral correction factors are nearly independent of the selection of three commonly used calibration sources: 252Cf, 241Am-Be and 239Pu-Be; (2) the derived correction factors for Bonner spheres of various sizes (6"-9") are similar in trend and (3) practical high-energy neutron indexes based on measurements can be established to facilitate the application of these correction factors in workplaces. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Monte Carlo and experimental determination of correction factors for gamma knife perfexion small field dosimetry measurements

    NASA Astrophysics Data System (ADS)

    Zoros, E.; Moutsatsos, A.; Pappas, E. P.; Georgiou, E.; Kollias, G.; Karaiskos, P.; Pantelis, E.

    2017-09-01

    Detector-, field size- and machine-specific correction factors are required for precise dosimetry measurements in small and non-standard photon fields. In this work, Monte Carlo (MC) simulation techniques were used to calculate the $k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}}$ and $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ correction factors for a series of ionization chambers, a synthetic microDiamond, and diode dosimeters, used for reference and/or output factor (OF) measurements in the Gamma Knife Perfexion photon fields. Calculations were performed for the solid water (SW) and ABS plastic phantoms, as well as for a water phantom of the same geometry. MC calculations for the $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ correction factors in SW were compared against corresponding experimental results for a subset of ionization chambers and diode detectors. Reference experimental OF data were obtained through the weighted average of corresponding measurements using TLDs, EBT-2 films and alanine pellets. $k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}}$ values close to unity (within 1%) were calculated for most of the ionization chambers in water. Greater corrections of up to 6.0% were observed for chambers with relatively large air-cavity dimensions and a steel central electrode. Phantom corrections of 1.006 and 1.024 (the latter breaking down to 1.014 from the ABS sphere and 1.010 from the accompanying ABS phantom adapter) were calculated for the SW and ABS phantoms, respectively, adding to the $k_{Q_\mathrm{msr},Q_0}^{f_\mathrm{msr},f_\mathrm{ref}}$ corrections in water. Both measurements and MC calculations for the diode and microDiamond detectors resulted in lower-than-unity $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ correction factors, due to their denser sensitive volume and encapsulation materials. In comparison, higher-than-unity $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ results for the ionization chambers suggested field-size-dependent dose underestimations (significant for the 4 mm field), with a magnitude depending on the combination of competing phenomena associated with volume averaging and electron fluence perturbations. Finally, the presence of a 0.5 mm air gap between the diodes' frontal surface and their phantom inserts may considerably influence OF measurements, reaching 4.6% for the Razor diode.

  7. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry.

    PubMed

    Charles, P H; Cranmer-Sargison, G; Thwaites, D I; Kairn, T; Crowe, S B; Pedrazzini, G; Aland, T; Kenny, J; Langton, C M; Trapp, J V

    2014-10-01

    Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable "air cap". A set of output ratios ($OR_\mathrm{det}^{f_\mathrm{clin}}$) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to $OR_\mathrm{det}^{f_\mathrm{clin}}$ measured using an IBA stereotactic field diode (SFD). $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is "correction-free" in small field relative dosimetry. In addition, the feasibility of experimentally transferring $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ values from the SFD to unknown diodes was tested by comparing the experimentally transferred $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWe_air) produced output factors equivalent to those in water at all field sizes (5-50 mm). The optimal air thickness required for the EDGEe diode was found to be 0.6 mm. The modified diode (EDGEe_air) produced output factors equivalent to those in water, except at field sizes of 8 and 10 mm where it measured approximately 2% greater than the relative dose to water. The experimentally calculated $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ for both the PTWe and the EDGEe diodes (without air) matched Monte Carlo simulated results, thus proving that it is feasible to transfer $k_{Q_\mathrm{clin},Q_\mathrm{msr}}^{f_\mathrm{clin},f_\mathrm{msr}}$ from one commercially available detector to another using experimental methods and the recommended experimental setup. It is possible to create a diode which does not require corrections for small field output factor measurements. This has been performed and verified experimentally. The ability of a detector to be "correction-free" depends strongly on its design and composition. A nonwater-equivalent detector can only be "correction-free" if competing perturbations of the beam cancel out at all field sizes. This should not be confused with true water equivalency of a detector.
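
    The experimental transfer of the correction factor rests on the water-referenced output factor being detector independent, i.e. OF_water = OR_det x k_det for every detector. A minimal sketch of that step follows; the numerical readings are placeholders, not the paper's data.

        def transfer_k(or_sfd, k_sfd, or_new):
            """Transfer the small-field correction factor from the SFD to another diode:
            OF_water = OR_SFD * k_SFD = OR_new * k_new  =>  k_new = OR_SFD * k_SFD / OR_new."""
            return or_sfd * k_sfd / or_new

        # Placeholder output ratios for a 5 mm field (not the paper's data):
        print(transfer_k(or_sfd=0.660, k_sfd=0.962, or_new=0.672))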

  8. Stress-Strain Behavior of Cementitious Materials with Different Sizes

    PubMed Central

    Zhou, Jikai; Qian, Pingping; Chen, Xudong

    2014-01-01

    The size dependence of the flexural properties of cement mortar and concrete beams is investigated. Bazant's size effect law and the modified size effect law of Kim and Eo give a very good fit to the flexural strength of both cement mortar and concrete. As observed in the test results, a stronger size effect in flexural strength is found in cement mortar than in concrete. A modification to Li's equation for describing the stress-strain curve of cement mortar and concrete is suggested by incorporating two different correction factors; the factors contained in the modified equation are established empirically as a function of specimen size. A comparison of the predictions of this equation with test data generated in this study shows good agreement. PMID:24744688
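
    For reference, Bazant's size effect law referred to above is usually written in the following form (quoted from the general size-effect literature, not from the paper's own notation):

        \sigma_N = \frac{B f_t}{\sqrt{1 + D/D_0}}

    where \sigma_N is the nominal flexural strength, f_t a measure of tensile strength, D the characteristic specimen size, and B and D_0 empirical constants obtained by fitting test data.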

  9. Further Improvement of the RITS Code for Pulsed Neutron Bragg-edge Transmission Imaging

    NASA Astrophysics Data System (ADS)

    Sato, H.; Watanabe, K.; Kiyokawa, K.; Kiyanagi, R.; Hara, K. Y.; Kamiyama, T.; Furusaka, M.; Shinohara, T.; Kiyanagi, Y.

    The RITS code is a unique and powerful tool for fitting analysis of whole Bragg-edge transmission spectra. However, it has had two major problems, and we have proposed methods to overcome them. The first issue is the difference in the crystallite size values between the diffraction and the Bragg-edge analyses. We found the reason was a different definition of the crystal structure factor. It affects the crystallite size because the crystallite size is deduced from the primary extinction effect, which depends on the crystal structure factor. After the algorithm change, crystallite sizes obtained by RITS agreed much more closely with those obtained by Rietveld analyses of diffraction data, improving from 155% to 110% of the Rietveld values. The second issue is correction for the effect of background neutrons scattered from a specimen. Through neutron transport simulation studies, we found that the background components consist of forward Bragg scattering, double backward Bragg scattering, and thermal diffuse scattering. RITS with the background correction function developed through these simulation studies could reconstruct various simulated and experimental transmission spectra well, but the refined crystalline microstructural parameters were often distorted. Finally, it was recommended to reduce the background by improving the experimental conditions.

  10. Martian particle size based on thermal inertia corrected for elevation-dependent atmospheric properties

    NASA Technical Reports Server (NTRS)

    Bridges, N. T.

    1993-01-01

    Thermal inertia is commonly used to derive physical properties of the Martian surface. If the surface is composed of loosely consolidated grains, then the thermal conductivity derived from the inertia can theoretically be used to compute the particle size. However, one persistent difficulty associated with the interpretation of thermal inertia and the derivation of particle size from it has been the degree to which atmospheric properties affect both the radiation balance at the surface and the gas conductivity. These factors vary with atmospheric pressure so that derived thermal inertias and particle sizes are a function of elevation. By utilizing currently available thermal models and laboratory information, a fine component thermal inertia map was convolved with digital topography to produce particle size maps of the Martian surface corrected for these elevation-dependent effects. Such an approach is especially applicable for the highest elevations on Mars, where atmospheric back radiation and gas conductivity are low.

  11. Energy response corrections for profile measurements using a combination of different detector types.

    PubMed

    Wegener, Sonja; Sauer, Otto A

    2018-02-01

    Different detector properties heavily affect the results of off-axis measurements outside of radiation fields, where a different energy spectrum is encountered. While a diode detector offers high spatial resolution, it contains high-atomic-number elements, which lead to perturbations and an energy-dependent response. An ionization chamber, on the other hand, has a much smaller energy dependence, but shows dose averaging over its larger active volume. We suggest a way to obtain spatial energy response corrections of a detector independent of its volume effect for profiles of arbitrary fields by using a combination of two detectors. Measurements were performed at an Elekta Versa HD accelerator equipped with an Agility MLC. Dose profiles of fields between 10 × 4 cm² and 0.6 × 0.6 cm² were recorded several times, first with different small-field detectors (unshielded diode 60012 and stereotactic field detector SFD, microDiamond, EDGE, and PinPoint 31006) and then with a larger-volume ionization chamber, the Semiflex 31010, for photon beam qualities of 6, 10, and 18 MV. Correction factors for the small-field detectors were obtained from the readings of the respective detector and the ionization chamber using a convolution method. Selected profiles were also recorded on film to enable a comparison. After applying the correction factors to the profiles measured with different detectors, agreement between the detectors and with profiles measured on EBT3 film was improved considerably. Differences in the full width at half maximum obtained with the detectors and the film typically decreased by a factor of two. Off-axis correction factors outside of a 10 × 1 cm² field ranged from about 1.3 for the EDGE diode about 10 mm from the field edge to 0.7 for the PinPoint 31006 25 mm from the field edge. The microDiamond required corrections comparable in size to those of the Si diodes, and even larger ones in the tail region of the field. The SFD was found to require the smallest correction. The corrections typically became larger for higher energies and for smaller field sizes. With a combination of two detectors, experimentally derived correction factors can be obtained. Application of these factors leads to improved agreement between the measured profiles and those recorded on EBT3 film. The results also complement off-axis response values for different detectors that were previously available only from Monte Carlo simulations. © 2017 American Association of Physicists in Medicine.
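
    One way to realize such a two-detector scheme is to model the chamber reading as the diode profile, rescaled by the sought correction k(x), convolved with the chamber's volume-averaging kernel, and then to solve for k(x) by fixed-point iteration. The sketch below illustrates that idea under simple assumptions (a top-hat kernel of assumed width and a multiplicative update); it is not the algorithm published by the authors.

        import numpy as np

        def offaxis_energy_correction(x, diode, chamber, cavity_width_mm=5.5, n_iter=100):
            """Solve (diode * k) convolved with a top-hat cavity kernel = chamber profile for k(x)."""
            dx = x[1] - x[0]
            n_k = int(round(cavity_width_mm / dx)) | 1      # odd number of samples in the kernel
            kernel = np.ones(n_k) / n_k                     # top-hat volume-averaging model

            def smear(profile):
                return np.convolve(profile, kernel, mode="same")

            k = np.ones_like(diode, dtype=float)
            for _ in range(n_iter):
                k = k * chamber / np.maximum(smear(diode * k), 1e-9)
            return k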

  12. Hadron mass corrections in semi-inclusive deep-inelastic scattering

    DOE PAGES

    Guerrero Teran, Juan Vicente; Ethier, James J.; Accardi, Alberto; ...

    2015-09-24

    The spin-dependent cross sections for semi-inclusive lepton-nucleon scattering are derived in the framework of collinear factorization, including the effects of the masses of the target and the produced hadron at finite Q². At leading order the cross sections factorize into products of parton distribution and fragmentation functions evaluated in terms of new, mass-dependent scaling variables. Furthermore, the size of the hadron mass corrections is estimated at kinematics relevant for current and future experiments, and the implications for the extraction of parton distributions from semi-inclusive measurements are discussed.

  13. The more the merrier? Increasing group size may be detrimental to decision-making performance in nominal groups.

    PubMed

    Amir, Ofra; Amir, Dor; Shahar, Yuval; Hart, Yuval; Gal, Kobi

    2018-01-01

    Demonstrability, the extent to which group members can recognize a correct solution to a problem, has a significant effect on group performance. However, the interplay between group size, demonstrability and performance is not well understood. This paper addresses these gaps by studying the joint effect of two factors, the difficulty of solving a problem and the difficulty of verifying the correctness of a solution, on the ability of groups of varying sizes to converge to correct solutions. Our empirical investigations use problem instances from different computational complexity classes, NP-Complete (NPC) and PSPACE-complete (PSC), that exhibit similar solution difficulty but differ in verification difficulty. Our study focuses on nominal groups to isolate the effect of problem complexity on performance. We show that NPC problems have higher demonstrability than PSC problems: participants were significantly more likely to recognize correct and incorrect solutions for NPC problems than for PSC problems. We further show that increasing the group size can actually decrease group performance for some problems of low demonstrability. We analytically derive the boundary that distinguishes these problems from others for which group performance monotonically improves with group size. These findings increase our understanding of the mechanisms that underlie group problem-solving processes, and can inform the design of systems and processes that would better facilitate collective decision-making.

  14. Slip Correction Measurements of Certified PSL Nanoparticles Using a Nanometer Differential Mobility Analyzer (Nano-DMA) for Knudsen Number From 0.5 to 83

    PubMed Central

    Kim, Jung Hyeun; Mulholland, George W.; Kukuck, Scott R.; Pui, David Y. H.

    2005-01-01

    The slip correction factor has been investigated at reduced pressures and high Knudsen number using polystyrene latex (PSL) particles. Nano-differential mobility analyzers (NDMA) were used in determining the slip correction factor by measuring the electrical mobility of 100.7 nm, 269 nm, and 19.90 nm particles as a function of pressure. The aerosol was generated via electrospray to avoid multiplets for the 19.90 nm particles and to reduce the contaminant residue on the particle surface. System pressure was varied down to 8.27 kPa, enabling slip correction measurements for Knudsen numbers as large as 83. A condensation particle counter was modified for low pressure application. The slip correction factor obtained for the three particle sizes is fitted well by the equation: C = 1 + Kn (α + β exp(−γ/Kn)), with α = 1.165, β = 0.483, and γ = 0.997. The first quantitative uncertainty analysis for slip correction measurements was carried out. The expanded relative uncertainty (95 % confidence interval) in measuring slip correction factor was about 2 % for the 100.7 nm SRM particles, about 3 % for the 19.90 nm PSL particles, and about 2.5 % for the 269 nm SRM particles. The major sources of uncertainty are the diameter of particles, the geometric constant associated with NDMA, and the voltage. PMID:27308102
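
    The fitted slip correction is straightforward to evaluate directly; a minimal sketch using the constants quoted above:

        import numpy as np

        ALPHA, BETA, GAMMA = 1.165, 0.483, 0.997   # fitted constants quoted in the abstract

        def slip_correction(kn):
            """Slip correction factor C(Kn) = 1 + Kn*(alpha + beta*exp(-gamma/Kn))."""
            kn = np.asarray(kn, dtype=float)
            return 1.0 + kn * (ALPHA + BETA * np.exp(-GAMMA / kn))

        print(slip_correction([0.5, 10.0, 83.0]))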

  15. Safe and sensible preprocessing and baseline correction of pupil-size data.

    PubMed

    Mathôt, Sebastiaan; Fabius, Jasper; Van Heusden, Elle; Van der Stigchel, Stefan

    2018-02-01

    Measurement of pupil size (pupillometry) has recently gained renewed interest from psychologists, but there is little agreement on how pupil-size data is best analyzed. Here we focus on one aspect of pupillometric analyses: baseline correction, i.e., analyzing changes in pupil size relative to a baseline period. Baseline correction is useful in experiments that investigate the effect of some experimental manipulation on pupil size. In such experiments, baseline correction improves statistical power by taking into account random fluctuations in pupil size over time. However, we show that baseline correction can also distort data if unrealistically small pupil sizes are recorded during the baseline period, which can easily occur due to eye blinks, data loss, or other distortions. Divisive baseline correction (corrected pupil size = pupil size/baseline) is affected more strongly by such distortions than subtractive baseline correction (corrected pupil size = pupil size - baseline). We discuss the role of baseline correction as a part of preprocessing of pupillometric data, and make five recommendations: (1) before baseline correction, perform data preprocessing to mark missing and invalid data, but assume that some distortions will remain in the data; (2) use subtractive baseline correction; (3) visually compare your corrected and uncorrected data; (4) be wary of pupil-size effects that emerge faster than the latency of the pupillary response allows (within ±220 ms after the manipulation that induces the effect); and (5) remove trials on which baseline pupil size is unrealistically small (indicative of blinks and other distortions).
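
    The two baseline-correction variants compared above amount to one line each. The sketch below also uses a median baseline as a simple guard against residual blink samples; that particular choice is our assumption, not a recommendation from the paper.

        import numpy as np

        def baseline_correct(pupil, baseline_slice, mode="subtractive"):
            """Baseline-correct one trial of pupil-size samples.

            pupil          : 1-D array of pupil sizes over time.
            baseline_slice : slice covering the pre-stimulus baseline period.
            mode           : 'subtractive' (recommended above) or 'divisive'.
            """
            baseline = np.median(pupil[baseline_slice])
            if mode == "subtractive":
                return pupil - baseline        # corrected = pupil - baseline
            return pupil / baseline            # corrected = pupil / baseline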

  16. Stiffness of frictional contact of dissimilar elastic solids

    DOE PAGES

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; ...

    2017-12-22

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations – adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  17. Stiffness of frictional contact of dissimilar elastic solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This study gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations – adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. Finally, the correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  18. Stiffness of frictional contact of dissimilar elastic solids

    NASA Astrophysics Data System (ADS)

    Lee, Jin Haeng; Gao, Yanfei; Bower, Allan F.; Xu, Haitao; Pharr, George M.

    2018-03-01

    The classic Sneddon relationship between the normal contact stiffness and the contact size is valid for axisymmetric, frictionless contact, in which the two contacting solids are approximated by elastic half-spaces. Deviation from this result critically affects the accuracy of the load and displacement sensing nanoindentation techniques. This paper gives a thorough numerical and analytical investigation of corrections needed to the Sneddon solution when finite Coulomb friction exists between an elastic half-space and a flat-ended rigid punch with circular or noncircular shape. Because of linearity of the Coulomb friction, the correction factor is found to be a function of the friction coefficient, Poisson's ratio, and the contact shape, but independent of the contact size. Two issues are of primary concern in the finite element simulations - adequacy of the mesh near the contact edge and the friction implementation methodology. Although the stick or slip zone sizes are quite different from the penalty or Lagrangian methods, the calculated contact stiffnesses are almost the same and may be considerably larger than those in Sneddon's solution. For circular punch contact, the numerical solutions agree remarkably well with a previous analytical solution. For non-circular punch contact, the results can be represented using the equivalence between the contact problem and bi-material fracture mechanics. The correction factor is found to be a product of that for the circular contact and a multiplicative factor that depends only on the shape of the punch but not on the friction coefficient or Poisson's ratio.

  19. Habitat complexity and fish size affect the detection of Indo-Pacific lionfish on invaded coral reefs

    NASA Astrophysics Data System (ADS)

    Green, S. J.; Tamburello, N.; Miller, S. E.; Akins, J. L.; Côté, I. M.

    2013-06-01

    A standard approach to improving the accuracy of reef fish population estimates derived from underwater visual censuses (UVCs) is the application of species-specific correction factors, which assumes that a species' detectability is constant under all conditions. To test this assumption, we quantified detection rates for invasive Indo-Pacific lionfish ( Pterois volitans and P. miles), which are now a primary threat to coral reef conservation throughout the Caribbean. Estimates of lionfish population density and distribution, which are essential for managing the invasion, are currently obtained through standard UVCs. Using two conventional UVC methods, the belt transect and stationary visual census (SVC), we assessed how lionfish detection rates vary with lionfish body size and habitat complexity (measured as rugosity) on invaded continuous and patch reefs off Cape Eleuthera, the Bahamas. Belt transect and SVC surveys performed equally poorly, with both methods failing to detect the presence of lionfish in >50 % of surveys where thorough, lionfish-focussed searches yielded one or more individuals. Conventional methods underestimated lionfish biomass by ~200 %. Crucially, detection rate varied significantly with both lionfish size and reef rugosity, indicating that the application of a single correction factor across habitats and stages of invasion is unlikely to accurately characterize local populations. Applying variable correction factors that account for site-specific lionfish size and rugosity to conventional survey data increased estimates of lionfish biomass, but these remained significantly lower than actual biomass. To increase the accuracy and reliability of estimates of lionfish density and distribution, monitoring programs should use detailed area searches rather than standard visual survey methods. Our study highlights the importance of accounting for sources of spatial and temporal variation in detection to increase the accuracy of survey data from coral reef systems.

  20. Stress Intensity Factor Plasticity Correction for Flaws in Stress Concentration Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, E.; Wilson, W.K.

    2000-02-01

    Plasticity corrections to elastically computed stress intensity factors are often included in brittle fracture evaluation procedures. These corrections are based on the existence of a plastic zone in the vicinity of the crack tip. Such a plastic zone correction is included in the flaw evaluation procedure of Appendix A to Section XI of the ASME Boiler and Pressure Vessel Code. Plasticity effects from the results of elastic and elastic-plastic explicit flaw finite element analyses are examined for various size cracks emanating from the root of a notch in a panel and for cracks located at fillet radii. The results of these calculations provide conditions under which the crack-tip plastic zone correction based on the Irwin plastic zone size overestimates the plasticity effect for crack-like flaws embedded in stress concentration regions in which the elastically computed stress exceeds the yield strength of the material. A failure assessment diagram (FAD) curve is employed to graphically characterize the effect of plasticity on the crack driving force. The Option 1 FAD curve of the Level 3 advanced fracture assessment procedure of British Standard PD 6493:1991, adjusted for stress concentration effects by a term that is a function of the applied load and the ratio of the local radius of curvature at the flaw location to the flaw depth, provides a satisfactory bound to all the FAD curves derived from the explicit flaw finite element calculations. The adjusted FAD curve is a less restrictive plasticity correction than the plastic zone correction of Section XI for flaws embedded in plastic zones at geometric stress concentrators. This enables unnecessary conservatism to be removed from flaw evaluation procedures that utilize plasticity corrections.
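
    For reference, the Irwin plastic zone correction mentioned above is usually written as follows (standard plane-stress form from the general fracture mechanics literature, not the report's own notation):

        r_y = \frac{1}{2\pi}\left(\frac{K_I}{\sigma_{ys}}\right)^{2},
        \qquad a_{\mathrm{eff}} = a + r_y

    with the plasticity-corrected stress intensity factor re-evaluated at the effective crack length a_eff.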

  1. Predictive Value of Early Tumor Shrinkage and Density Reduction of Lung Metastases in Patients With Metastatic Colorectal Cancer Treated With Regorafenib.

    PubMed

    Vanwynsberghe, Hannes; Verbeke, Xander; Coolen, Johan; Van Cutsem, Eric

    2017-12-01

    The benefit of regorafenib in colorectal cancer is not very pronounced. At present, there is a lack of predictive biological or radiological markers. We studied whether density reduction or small changes in the size of lung metastases could be a predictive marker. We retrospectively measured the density and size of lung metastases of all patients included in the CORRECT and CONSIGN trials at our center. Contrast-enhanced CT scans at baseline and at week 8 were compared. Data on progression-free survival and overall survival were collected from the CORRECT and CONSIGN trials. A significant difference in progression-free survival was seen in 3 groups: response or stable disease in size (5.36 vs. 3.96 months), response in density (6.03 vs. 2.72 months), and response in corrected density (6.14 vs. 3.08 months). No difference was seen for response in size versus stable disease or progressive disease in size. For overall survival, a difference was observed in the same 3 groups: response or stable disease in size (9.89 vs. 6.44 months), response in density (9.59 vs. 7.04 months), and response in corrected density (9.09 vs. 7.16 months). No difference was seen for response in size versus stable disease or progressive disease in size. Density reduction in lung metastases might be a good parameter for predicting outcome with regorafenib. Early tumor progression might be a negative predictive factor. If further validated, density reduction and early tumor progression might be useful to improve the cost-benefit profile of regorafenib. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. SU-E-T-169: Initial Investigation into the Use of Optically Stimulated Luminescent Dosimeters (OSLDs) for In-Vivo Dosimetry of TBI Patients.

    PubMed

    Paloor, S; Aland, T; Mathew, J; Al-Hammadi, N; Hammoud, R

    2012-06-01

    To report on an initial investigation into the use of optically stimulated luminescent dosimeters (OSLDs) for in-vivo dosimetry for total body irradiation (TBI) treatments. Specifically, we report on the determination of angular dependence, sensitivity correction factors and the dose calibration factors. The OSLDs investigated in our work were InLight/OSL nanoDot dosimeters (Landauer Inc.). nanoDots are 5 mm diameter, 0.2 mm thick disks of carbon-doped Al2O3, and were read using a Landauer InLight microStar reader and associated software. OSLDs were irradiated under two setup conditions: a) typical clinical reference conditions (95 cm SSD, 5 cm depth in solid water, 10 × 10 cm field size), and b) TBI conditions (520 cm SSD, 5 cm depth in solid water, 40 × 40 cm field size). The angular dependence was checked over ±60° from normal incidence. In order to directly compare the sensitivity correction factors, a common dose was delivered to the OSLDs for the two setups. Pre- and post-irradiation readings were acquired. OSLDs were optically annealed using various techniques: (1) placement on a film view box, (2) multiple scans on a flatbed optical scanner, and (3) exposure to natural room light. Under reference conditions, the calculated sensitivity correction factors of the OSLDs had an SD of 2.2% and a range of 5%. Under TBI conditions, the SD increased to 3.4% and the range to 6.0%. The variation in sensitivity correction factors between individual OSLDs across the two measurement conditions was up to 10.3%. An angular dependence of less than 1% was observed. The best bleaching method we found was to keep the OSLDs on a film viewer for more than 3 hours, which reduces the normalized response to less than 1%. In order to obtain the most accurate results when using OSLDs for in-vivo dosimetry for TBI treatments, sensitivity correction factors and dose calibration factors should all be determined under clinical TBI conditions. © 2012 American Association of Physicists in Medicine.
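
    In use, the two factors determined under TBI conditions are applied as simple multipliers to each nanoDot reading. The sketch below shows that step with placeholder numbers; the factor values and the multiplicative convention are assumptions for illustration, not results from this work.

        def osld_dose_cgy(raw_counts, sensitivity_factor, calibration_cgy_per_count):
            """In-vivo dose from one nanoDot reading, using TBI-condition factors."""
            return raw_counts * sensitivity_factor * calibration_cgy_per_count

        print(osld_dose_cgy(raw_counts=152000,
                            sensitivity_factor=0.97,             # element-specific correction
                            calibration_cgy_per_count=1.3e-3))   # reader dose calibration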

  3. Determination of small-field correction factors for cylindrical ionization chambers using a semiempirical method

    NASA Astrophysics Data System (ADS)

    Park, Kwangwoo; Bak, Jino; Park, Sungho; Choi, Wonhoon; Park, Suk Won

    2016-02-01

    A semiempirical method based on the averaging effect of the sensitive volumes of different air-filled ionization chambers (ICs) was employed to approximate the beam-quality correction factors arising from the difference in size between the reference field and small fields. We measured the output factors using several cylindrical ICs and calculated the correction factors using a mathematical method similar to deconvolution; in the method, we modeled the variable and inhomogeneous energy fluence function within the chamber cavity. The parameters of the modeled function and the correction factors were determined by solving a developed system of equations as well as on the basis of the measurement data and the geometry of the chambers. Further, Monte Carlo (MC) computations were performed using the Monaco® treatment planning system to validate the proposed method. The determined correction factors ($k_{Q_\mathrm{msr},Q}^{f_\mathrm{smf},f_\mathrm{ref}}$) were comparable to the values derived from the MC computations performed using Monaco®. For example, for a 6 MV photon beam and a field size of 1 × 1 cm², $k_{Q_\mathrm{msr},Q}^{f_\mathrm{smf},f_\mathrm{ref}}$ was calculated to be 1.125 for a PTW 31010 chamber and 1.022 for a PTW 31016 chamber. On the other hand, the $k_{Q_\mathrm{msr},Q}^{f_\mathrm{smf},f_\mathrm{ref}}$ values determined from the MC computations were 1.121 and 1.031, respectively; the difference between the proposed method and the MC computation is less than 2%. In addition, we determined the $k_{Q_\mathrm{msr},Q}^{f_\mathrm{smf},f_\mathrm{ref}}$ values for PTW 30013, PTW 31010, PTW 31016, IBA FC23-C, and IBA CC13 chambers. We devised a method for determining $k_{Q_\mathrm{msr},Q}^{f_\mathrm{smf},f_\mathrm{ref}}$ from both the measurement of the output factors and model-based mathematical computation. The proposed method can be useful in cases where MC simulation is not applicable in the clinical setting.

  4. Quantitation of tumor uptake with molecular breast imaging.

    PubMed

    Bache, Steven T; Kappadath, S Cheenu

    2017-09-01

    We developed scatter- and attenuation-correction techniques for quantifying images obtained with Molecular Breast Imaging (MBI) systems. To investigate scatter correction, energy spectra of a 99mTc point source were acquired with 0-7-cm-thick acrylic to simulate scatter between the detector heads. The system-specific scatter correction factor, k, was calculated as a function of thickness using a dual energy window (DEW) technique. To investigate attenuation correction, a 7-cm-thick rectangular phantom containing 99mTc-water simulating breast tissue and fillable spheres simulating tumors was imaged. Six spheres 10-27 mm in diameter were imaged with sphere-to-background ratios (SBRs) of 3.5, 2.6, and 1.7 and located at depths of 0.5, 1.5, and 2.5 cm from the center of the water bath for 54 unique tumor scenarios (3 SBRs × 6 sphere sizes × 3 depths). Phantom images were also acquired in-air under scatter- and attenuation-free conditions, which provided ground truth counts. To estimate true counts, T, from each tumor, the geometric mean (GM) of the counts within a prescribed region of interest (ROI) from the two projection images was calculated as $T=\sqrt{C_1 C_2\,e^{\mu t}}\,F$, where $C_1$ and $C_2$ are the counts within the square ROI circumscribing each sphere on detectors 1 and 2, μ is the linear attenuation coefficient of water, t is the detector separation, and the factor F accounts for background activity. Four unique F definitions (standard GM, background-subtraction GM, MIRD Primer 16 GM, and a novel "volumetric GM") were investigated. Error in T was calculated as the percentage difference with respect to the in-air counts. Quantitative accuracy using the different GM definitions was calculated as a function of SBR, depth, and sphere size. Sensitivity of quantitative accuracy to ROI size was investigated. We developed an MBI simulation to investigate the robustness of our corrections for various ellipsoidal tumor shapes and detector separations. The scatter correction factor k varied slightly (0.80-0.95) over a compressed breast thickness range of 6-9 cm. Corrected energy spectra recovered the general characteristics of scatter-free spectra. Quantitatively, photopeak counts were recovered to within 10% of in-air conditions after scatter correction. After GM attenuation correction, mean errors (95% confidence interval, CI) for all 54 imaging scenarios were 149% (-154% to +455%), -14.0% (-38.4% to +10.4%), 16.8% (-14.7% to +48.2%), and 2.0% (-14.3% to +18.3%) for the standard GM, background-subtraction GM, MIRD 16 GM, and volumetric GM, respectively. The volumetric GM was less sensitive to SBR and sphere size, while all GM methods were insensitive to sphere depth. Simulation results showed that the volumetric GM method produced a mean error within 5% over all compressed breast thicknesses (3-14 cm), and that the use of an estimated radius for nonspherical tumors increases the 95% CI to at most ±23%, compared with ±16% for spherical tumors. Using the DEW scatter-correction and our volumetric GM attenuation-correction methodology yielded accurate estimates of tumor counts in MBI over various tumor sizes, shapes, depths, background uptake, and compressed breast thicknesses. Accurate tumor uptake can be converted to radiotracer uptake concentration, allowing three patient-specific metrics to be calculated for quantifying absolute uptake and relative uptake change for assessment of treatment response. © 2017 American Association of Physicists in Medicine.
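
    The geometric-mean correction reconstructed above can be evaluated directly from the two opposed-projection ROI counts. The sketch below uses placeholder counts and a nominal linear attenuation coefficient of water near 140 keV (about 0.15 cm^-1); the exact background factor F depends on which of the definitions above is used.

        import numpy as np

        def gm_corrected_counts(c1, c2, mu_per_cm, separation_cm, background_factor=1.0):
            """T = sqrt(C1 * C2 * exp(mu * t)) * F for opposed MBI projections."""
            return np.sqrt(c1 * c2 * np.exp(mu_per_cm * separation_cm)) * background_factor

        # Placeholder ROI counts for a 7 cm detector separation:
        print(gm_corrected_counts(c1=4200.0, c2=3900.0, mu_per_cm=0.15, separation_cm=7.0))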

  5. Simple, Fast and Effective Correction for Irradiance Spatial Nonuniformity in Measurement of IVs of Large Area Cells at NREL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moriarty, Tom

    The NREL cell measurement lab measures the IV parameters of cells of multiple sizes and configurations. A large contributing factor to errors and uncertainty in Jsc, Imax, Pmax and efficiency can be the irradiance spatial nonuniformity. Correcting for this nonuniformity through its precise and frequent measurement can be very time consuming. This paper explains a simple, fast and effective method based on bicubic interpolation for determining and correcting for spatial nonuniformity and verification of the method's efficacy.
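
    The record above does not spell out the correction method, so the sketch below is only a generic illustration of one way to apply a measured irradiance-nonuniformity map to a cell measurement: a coarse map is resampled over the cell area with bicubic spline interpolation and the measured short-circuit current is divided by the mean relative irradiance. The grid sizes, values, and SciPy-based implementation are assumptions for illustration, not NREL's method.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Coarse relative-irradiance map measured on a sparse grid (hypothetical 5 x 5 sampling).
x_c = np.linspace(0.0, 15.0, 5)   # cm
y_c = np.linspace(0.0, 15.0, 5)   # cm
rel_irr_coarse = np.random.default_rng(0).normal(1.0, 0.01, (5, 5))

# kx = ky = 3 gives bicubic interpolation onto a fine grid covering the cell area.
spline = RectBivariateSpline(x_c, y_c, rel_irr_coarse, kx=3, ky=3)
x_f = np.linspace(0.0, 15.0, 151)
y_f = np.linspace(0.0, 15.0, 151)
rel_irr_fine = spline(x_f, y_f)

# Nonuniformity correction: divide the measured Isc by the mean relative irradiance
# over the cell aperture (here taken as the whole fine grid).
isc_measured = 0.450  # A, hypothetical
isc_corrected = isc_measured / rel_irr_fine.mean()
print(isc_corrected)
```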

  6. Percolation in three-dimensional fracture networks for arbitrary size and shape distributions

    NASA Astrophysics Data System (ADS)

    Thovert, J.-F.; Mourzenko, V. V.; Adler, P. M.

    2017-04-01

    The percolation threshold of fracture networks is investigated by extensive direct numerical simulations. The fractures are randomly located and oriented in three-dimensional space. A very wide range of regular, irregular, and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. The results are rationalized in terms of a dimensionless density. A simple model involving a new shape factor is proposed, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy in monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions.

  7. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.

  8. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
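
    As a rough numerical illustration of the geometric quantity described above (the alignment between a motion field and the image gradients over a subregion), the sketch below computes a mean absolute cosine between the two. It is not the authors' estimator or their correction factor; the test image, motion field, and subset are invented.

```python
import numpy as np

def cosine_alignment(image, u, v, sl):
    """Mean |cos(angle)| between a motion field (u, v) and the image gradients
    over a subregion `sl` (a tuple of slices)."""
    gy, gx = np.gradient(image.astype(float))
    dot = gx * u + gy * v
    norm = np.hypot(gx, gy) * np.hypot(u, v) + 1e-12
    return np.abs(dot / norm)[sl].mean()

# Example: horizontal motion across a pattern whose gradients point along y,
# a poorly constrained case (alignment near zero).
yy, xx = np.mgrid[0:64, 0:64]
img = np.sin(yy / 4.0)
u, v = np.ones_like(img), np.zeros_like(img)
print(cosine_alignment(img, u, v, (slice(16, 48), slice(16, 48))))
```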

  9. Regression dilution bias: tools for correction methods and sample size calculation.

    PubMed

    Berglund, Lars

    2012-08-01

    Random errors in measurement of a risk factor will introduce downward bias of an estimated association to a disease or a disease marker. This phenomenon is called regression dilution bias. A bias correction may be made with data from a validity study or a reliability study. In this article we give a non-technical description of designs of reliability studies with emphasis on selection of individuals for a repeated measurement, assumptions of measurement error models, and correction methods for the slope in a simple linear regression model where the dependent variable is a continuous variable. Also, we describe situations where correction for regression dilution bias is not appropriate. The methods are illustrated with the association between insulin sensitivity measured with the euglycaemic insulin clamp technique and fasting insulin, where measurement of the latter variable carries noticeable random error. We provide software tools for estimation of a corrected slope in a simple linear regression model assuming data for a continuous dependent variable and a continuous risk factor from a main study and an additional measurement of the risk factor in a reliability study. Also, we supply programs for estimation of the number of individuals needed in the reliability study and for choice of its design. Our conclusion is that correction for regression dilution bias is seldom applied in epidemiological studies. This may cause important effects of risk factors with large measurement errors to be neglected.
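
    A minimal sketch of the classical correction described above, assuming a simple linear regression in the main study and paired repeated exposure measurements in a reliability study: the naive slope is divided by an estimated reliability ratio. This illustrates the general idea rather than the specific software tools supplied by the author.

```python
import numpy as np

def corrected_slope(x_main, y_main, x_rep1, x_rep2):
    """Correct a simple linear regression slope for regression dilution bias.

    x_main, y_main : exposure and outcome from the main study
    x_rep1, x_rep2 : paired repeated exposure measurements (reliability study)
    Returns (naive slope, reliability ratio, corrected slope)."""
    beta_naive = np.polyfit(x_main, y_main, 1)[0]
    var_within = np.mean((np.asarray(x_rep1) - np.asarray(x_rep2)) ** 2) / 2.0
    var_obs = np.var(x_main, ddof=1)
    reliability = max(1e-6, (var_obs - var_within) / var_obs)
    return beta_naive, reliability, beta_naive / reliability
```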

  10. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    PubMed

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  11. Bicycle helmet size, adjustment, and stability.

    PubMed

    Thai, Kim T; McIntosh, Andrew S; Pang, Toh Yen

    2015-01-01

    One of the main requirements of a protective bicycle helmet is to provide and maintain adequate coverage to the head. A poorly fitting or fastened helmet may be displaced during normal use or even ejected during a crash. The aims of the current study were to identify factors that influence the size of helmet worn, identify factors that influence helmet position and adjustment, and examine the effects of helmet size worn and adjustment on helmet stability. Recreational and commuter cyclists in Sydney were surveyed to determine how helmet size and/or adjustment affected helmet stability in the real world. Anthropometric characteristics of the head were measured and, to assess helmet stability, a test analogous to the requirements of the Australian bicycle helmet standard was undertaken. Two hundred sixty-seven cyclists were recruited across all age groups and 91% wore an AS/NZS 2063-compliant helmet. The main ethnic group was Europeans (71%) followed by Asians (18%). The circumferences of the cyclists' heads matched well the circumference of the relevant ISO headform for the chosen helmet size, but the head shapes differed with respect to ISO headforms. Age and gender were associated with wearing an incorrectly sized helmet and helmet adjustment. Older males (>55 years) were most likely to wear an incorrectly sized helmet. Adult males in the 35-54 year age group were most likely to wear a correctly adjusted helmet. Using quasistatic helmet stability tests, it was found that the correctness of adjustment, rather than size, head dimensions, or shape, significantly affected helmet stability in all test directions. Bicycle helmets worn by recreational and commuter cyclists are often the wrong size and are often worn and adjusted incorrectly, especially in children and young people. Cyclists need to be encouraged to adjust their helmets correctly. Current headforms used in standards testing may not be representative of cyclists' head shapes. This may create challenges to helmet suppliers if on one hand they optimize the helmet to meet tests on ISO-related headforms while on the other seeking to offer greater range of sizes.

  12. Sediment redistribution and grainsize effects on 230Th-normalized mass accumulation rates and focusing factors in the Panama Basin

    NASA Astrophysics Data System (ADS)

    Loveley, Matthew R.; Marcantonio, Franco; Lyle, Mitchell; Ibrahim, Rami; Hertzberg, Jennifer E.; Schmidt, Matthew W.

    2017-12-01

    Here, we examine how redistribution of differing grain sizes by sediment focusing processes in Panama Basin sediments affects the use of 230Th as a constant-flux proxy. We study representative sediments of Holocene and Last Glacial Maximum (LGM) time slices from four sediment cores from two different localities close to the ridges that bound the Panama Basin. Each locality contains paired sites that are seismically interpreted to have undergone extremes in sediment redistribution, i.e., focused versus winnowed sites. Both Holocene and LGM samples from sites where winnowing has occurred contain significant amounts (up to 50%) of the 230Th within the >63 μm grain size fraction, which makes up 40-70% of the bulk sediment analyzed. For sites where focusing has occurred, Holocene and LGM samples contain the greatest amounts of 230Th (up to 49%) in the finest grain-sized fraction (<4 μm), which makes up 26-40% of the bulk sediment analyzed. There are slight underestimations of 230Th-derived mass accumulation rates (MARs) and overestimations of 230Th-derived focusing factors at focused sites, while the opposite is true for winnowed sites. Corrections made using a model by Kretschmer et al. (2010) suggest a maximum change of about 30% in 230Th-derived MARs and focusing factors at focused sites, except for our most focused site which requires an approximate 70% correction in one sample. Our 230Th-corrected 232Th flux results suggest that the boundary between hemipelagically- and pelagically-derived sediments falls between 350 and 600 km from the continental margin.
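
    For readers unfamiliar with the constant-flux-proxy bookkeeping referred to above, the sketch below writes out the usual definitions of a 230Th-normalized mass accumulation rate and a focusing factor. The production-rate constant and the example numbers are commonly used or invented values supplied here for illustration, not values from this study, and the grain-size correction of Kretschmer et al. (2010) is not reproduced.

```python
BETA = 0.0267  # dpm m^-3 yr^-1, commonly used 230Th production rate in the water column

def th230_normalized_mar(xs_th230, water_depth_m):
    """Preserved vertical mass accumulation rate (g m^-2 yr^-1) from
    excess-230Th activity xs_th230 (dpm g^-1) at a site of given water depth."""
    return BETA * water_depth_m / xs_th230

def focusing_factor(xs_th230_inventory, water_depth_m, delta_t_yr):
    """Focusing factor: measured excess-230Th inventory (dpm m^-2) in a core section
    divided by the amount produced in the overlying water column over delta_t_yr."""
    return xs_th230_inventory / (BETA * water_depth_m * delta_t_yr)

print(th230_normalized_mar(xs_th230=5.0, water_depth_m=3000.0))  # ~16 g m^-2 yr^-1
```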

  13. Monte Carlo study of Si diode response in electron beams.

    PubMed

    Wang, Lilie L W; Rogers, David W O

    2007-05-01

    Silicon semiconductor diodes measure almost the same depth-dose distributions in both photon and electron beams as those measured by ion chambers. A recent study in ion chamber dosimetry has suggested that the wall correction factor for a parallel-plate ion chamber in electron beams changes with depth by as much as 6%. To investigate diode detector response with respect to depth, a silicon diode model is constructed and the water/silicon dose ratio at various depths in electron beams is calculated using EGSnrc. The results indicate that, for this particular diode model, the diode response per unit water dose (or water/diode dose ratio) in both 6 and 18 MeV electron beams is flat within 2% versus depth, from near the phantom surface to the depth of R50 (with calculation uncertainty <0.3%). This suggests that there must be some other correction factors for ion chambers that counter-balance the large wall correction factor at depth in electron beams. In addition, the beam quality and field-size dependence of the diode model are also calculated. The results show that the water/diode dose ratio remains constant within 2% over the electron energy range from 6 to 18 MeV. The water/diode dose ratio does not depend on field size as long as the incident electron beam is broad and the electron energy is high. However, for a very small beam size (1 X 1 cm(2)) and low electron energy (6 MeV), the water/diode dose ratio may decrease by more than 2% compared to that of a broad beam.

  14. SU-E-T-414: Experimental Correction of High-Z Electrode Effect in Mini-Ionization Chambers for Small Beam Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larraga-Gutierrez, J

    Purpose: To correct for the over-response of mini-ionization chambers with high-Z central electrodes. The hypothesis is that by applying a negative/reverse voltage, it is possible to suppress the signal generated in the high-Z central electrode by low-energy photons. Methods: The mini-ionization chambers used in the experiments were a PTW-31014, PTW-31006 and IBA-CC01. The PTW-31014 has an aluminum central electrode while the PTW-31006 and IBA-CC01 have a steel one. Total scatter factors (Scp) were measured for a 6 MV photon beam down to a square field size of 0.5 cm. The measurements were performed in water at 10 cm depth with SAD of 100 cm. The Scp were measured with the dosimeters at +400 V bias voltage. In the case of the PTW-31006 and IBA-CC01, the measurements were repeated with −400 V bias voltage. Also, the field factors in water were calculated with Monte Carlo simulations for comparison. Results: The measured Scp at +400 V with the PTW-31006 and IBA-CC01 detectors were in agreement within 0.2% down to a field size of 1.5 cm. Both dosimeters showed a systematic difference of about 2.5% from the Scp measured with the PTW-31014 and the Monte Carlo calculated field factors. The measured Scp at −400 V with the PTW-31006 and IBA-CC01 detectors were in close agreement with the PTW-31014 measured Scp and the field factors, within 0.3% and 1.0%, respectively. In the case of the IBA-CC01, good agreement (1%) was found down to a field size of 1.0 cm. All the dosimeters showed differences of up to 17% between the measured Scp and the field factor for the 0.5 cm field size. Conclusion: By applying a negative/reverse voltage to the mini-ionization chambers with a high-Z central electrode it was possible to correct for their over-response to low-energy photons.

  15. Small field detector correction factors: effects of the flattening filter for Elekta and Varian linear accelerators

    PubMed Central

    Liu, Paul Z.Y.; Lee, Christopher; McKenzie, David R.; Suchowerska, Natalka

    2016-01-01

    Flattening filter‐free (FFF) beams are becoming the preferred beam type for stereotactic radiosurgery (SRS) and stereotactic ablative radiation therapy (SABR), as they enable an increase in dose rate and a decrease in treatment time. This work assesses the effects of the flattening filter on small field output factors for 6 MV beams generated by both Elekta and Varian linear accelerators, and determines differences between detector response in flattened (FF) and FFF beams. Relative output factors were measured with a range of detectors (diodes, ionization chambers, radiochromic film, and microDiamond) and referenced to the relative output factors measured with an air core fiber optic dosimeter (FOD), a scintillation dosimeter developed at Chris O'Brien Lifehouse, Sydney. Small field correction factors were generated for both FF and FFF beams. Diode measured detector response was compared with a recently published mathematical relation to predict diode response corrections in small fields. The effect of flattening filter removal on detector response was quantified using a ratio of relative detector responses in FFF and FF fields for the same field size. The removal of the flattening filter was found to have a small but measurable effect on ionization chamber response with maximum deviations of less than ±0.9% across all field sizes measured. Solid‐state detectors showed an increased dependence on the flattening filter of up to ±1.6%. Measured diode response was within ±1.1% of the published mathematical relation for all fields up to 30 mm, independent of linac type and presence or absence of a flattening filter. For 6 MV beams, detector correction factors between FFF and FF beams are interchangeable for a linac between FF and FFF modes, providing that an additional uncertainty of up to ±1.6% is accepted. PACS number(s): 87.55.km, 87.56.bd, 87.56.Da PMID:27167280
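
    The correction factors discussed above are, in essence, ratios of relative output factors with the scintillator (FOD) readings taken as the reference, and the flattening-filter effect is gauged by the FFF/FF response ratio for the same field size. The sketch below writes those definitions out with invented readings; it illustrates the definitions, not the published data.

```python
def detector_correction_factor(rof_reference, rof_detector):
    """Field-size-specific correction that maps a detector's relative output factor
    onto the reference (here, scintillator/FOD) relative output factor."""
    return rof_reference / rof_detector

def ff_removal_response_ratio(rof_detector_fff, rof_detector_ff):
    """Ratio of a detector's relative response in FFF vs. FF beams for the same
    field size, used to gauge the effect of removing the flattening filter."""
    return rof_detector_fff / rof_detector_ff

# Hypothetical 5 mm field readings, each already normalized to the reference field:
print(detector_correction_factor(rof_reference=0.680, rof_detector=0.705))  # ~0.965
```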

  16. Consistency of Pilot Trainee Cognitive Ability, Personality, and Training Performance in Undergraduate Pilot Training

    DTIC Science & Technology

    2013-09-09

    multivariate correction method (Lawley, 1943) was used for all scores except the MAB FSIQ which used the univariate (Thorndike, 1949) method. FSIQ... Thorndike, R. L. (1949). Personnel selection. NY: Wiley. Tupes, E. C., & Christal, R. C. (1961). Recurrent personality factors based on trait ratings... (Thorndike, 1949). aThe correlations for 1995 were not corrected due to the small sample size (N = 17). *p < .05 Consistency of Pilot Attributes
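
    The snippet above cites the univariate (Thorndike, 1949) correction. As a hedged illustration, the function below implements the classic Case 2 formula for direct range restriction; whether the report used exactly this variant is not stated in the snippet, and the example values are invented.

```python
import math

def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Univariate correction for direct range restriction (Thorndike, 1949, Case 2).

    r_restricted    : correlation observed in the selected (restricted) sample
    sd_unrestricted : predictor SD in the applicant/reference population
    sd_restricted   : predictor SD in the selected sample"""
    u = sd_unrestricted / sd_restricted
    return r_restricted * u / math.sqrt(1.0 + r_restricted ** 2 * (u ** 2 - 1.0))

print(thorndike_case2(0.25, 10.0, 6.0))  # corrected validity estimate
```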

  17. Analysis of various factors affecting pupil size in patients with glaucoma.

    PubMed

    Park, Ji Woong; Kang, Bong Hui; Kwon, Ji Won; Cho, Kyong Jin

    2017-09-16

    Pupil size is an important factor in predicting post-operative satisfaction. We assessed the correlation between pupil size, measured by Humphrey static perimetry, and various affecting factors in patients with glaucoma. In total, 825 eyes of 415 patients were evaluated retrospectively. Pupil size was measured with Humphrey static perimetry. Comparisons of pupil size according to the presence of glaucoma were evaluated, as were correlations between pupil size and various factors, including age, logMAR best corrected visual acuity (BCVA), retinal nerve fiber layer (RNFL) thickness, spherical equivalent, intraocular pressure, axial length, central corneal thickness, white-to-white, and the kappa angle. Pupil size was significantly smaller in glaucoma patients than in glaucoma suspects (p < 0.001) or the normal group (p < 0.001). Pupil size decreased significantly as age (p < 0.001) and central cornea thickness (p = 0.007) increased, and increased significantly as logMAR BCVA (p = 0.02) became worse and spherical equivalent (p = 0.007) and RNFL thickness (p = 0.042) increased. In patients older than 50 years, pupil size was significantly larger in eyes with a history of cataract surgery. Humphrey static perimetry can be useful in measuring pupil size. Pupil size was significantly smaller in eyes with glaucoma. Other factors affecting pupil size can be used in a preoperative evaluation when considering cataract surgery or laser refractive surgery.

  18. Bias and uncertainty in regression-calibrated models of groundwater flow in heterogeneous media

    USGS Publications Warehouse

    Cooley, R.L.; Christensen, S.

    2006-01-01

    Groundwater models need to account for detailed but generally unknown spatial variability (heterogeneity) of the hydrogeologic model inputs. To address this problem we replace the large, m-dimensional stochastic vector β that reflects both small and large scales of heterogeneity in the inputs by a lumped or smoothed m-dimensional approximation Yβ*, where Y is an interpolation matrix and β* is a stochastic vector of parameters. Vector β* has small enough dimension to allow its estimation with the available data. The consequence of the replacement is that the model function f(Yβ*) written in terms of the approximate inputs is in error with respect to the same model function written in terms of β, f(β), which is assumed to be nearly exact. The difference f(β) - f(Yβ*), termed model error, is spatially correlated, generates prediction biases, and causes standard confidence and prediction intervals to be too small. Model error is accounted for in the weighted nonlinear regression methodology developed to estimate β* and assess model uncertainties by incorporating the second-moment matrix of the model errors into the weight matrix. Techniques developed by statisticians to analyze classical nonlinear regression methods are extended to analyze the revised method. The analysis develops analytical expressions for bias terms reflecting the interaction of model nonlinearity and model error, for correction factors needed to adjust the sizes of confidence and prediction intervals for this interaction, and for correction factors needed to adjust the sizes of confidence and prediction intervals for possible use of a diagonal weight matrix in place of the correct one. If terms expressing the degree of intrinsic nonlinearity for f(β) and f(Yβ*) are small, then most of the biases are small and the correction factors are reduced in magnitude. Biases, correction factors, and confidence and prediction intervals were obtained for a test problem for which model error is large to test robustness of the methodology. Numerical results conform with the theoretical analysis. © 2005 Elsevier Ltd. All rights reserved.

  19. Radiative nonrecoil nuclear finite size corrections of order α(Zα)^5 to the Lamb shift in light muonic atoms

    NASA Astrophysics Data System (ADS)

    Faustov, R. N.; Martynenko, A. P.; Martynenko, F. A.; Sorokin, V. V.

    2017-12-01

    On the basis of the quasipotential method in quantum electrodynamics we calculate nuclear finite size radiative corrections of order α(Zα)^5 to the Lamb shift in muonic hydrogen and helium. To construct the interaction potential of particles, which gives the necessary contributions to the energy spectrum, we use the method of projection operators to states with a definite spin. Separate analytic expressions for the contributions of the muon self-energy, the muon vertex operator and the amplitude with spanning photon are obtained. We also present numerical results for these contributions using modern experimental data on the electromagnetic form factors of light nuclei.

  20. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    PubMed Central

    Goodsitt, Mitchell M.; Shenoy, Apeksha; Shen, Jincheng; Howard, David; Schipper, Matthew J.; Wilderman, Scott; Christodoulou, Emmanuel; Chun, Se Young; Dewaraja, Yuni K.

    2014-01-01

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa. PMID:24784380
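
    A minimal sketch of what a three-equation, three-unknown dual-energy decomposition can look like: two linear mixing equations (one per kVp) plus a unity-sum constraint are solved for the bone, red-marrow, and fat volume fractions. The linear-mixing form, the calibration numbers, and the function name are illustrative assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def deqct_volume_fractions(hu_low, hu_high, cal_low, cal_high):
    """Solve the three-equation, three-unknown dual-energy decomposition for the
    volume fractions of bone, red marrow, and fat in a spongiosa ROI.

    hu_low, hu_high   : measured ROI values at the low and high kVp
    cal_low, cal_high : length-3 calibration values (bone, red marrow, fat) per kVp"""
    A = np.array([cal_low, cal_high, [1.0, 1.0, 1.0]])  # low kVp, high kVp, unity sum
    b = np.array([hu_low, hu_high, 1.0])
    return np.linalg.solve(A, b)  # (f_bone, f_marrow, f_fat)

# Hypothetical calibration values (HU) for bone, red marrow, and fat at 80/140 kVp:
print(deqct_volume_fractions(180.0, 120.0,
                             cal_low=[1200.0, 60.0, -100.0],
                             cal_high=[800.0, 55.0, -90.0]))
```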

  1. Determination of the thermodynamic correction factor of fluids confined in nano-metric slit pores from molecular simulation

    NASA Astrophysics Data System (ADS)

    Collell, Julien; Galliero, Guillaume

    2014-05-01

    The multi-component diffusive mass transport is generally quantified by means of the Maxwell-Stefan diffusion coefficients when using molecular simulations. These coefficients can be related to the Fick diffusion coefficients using the thermodynamic correction factor matrix, which requires to run several simulations to estimate all the elements of the matrix. In a recent work, Schnell et al. ["Thermodynamics of small systems embedded in a reservoir: A detailed analysis of finite size effects," Mol. Phys. 110, 1069-1079 (2012)] developed an approach to determine the full matrix of thermodynamic factors from a single simulation in bulk. This approach relies on finite size effects of small systems on the density fluctuations. We present here an extension of their work for inhomogeneous Lennard Jones fluids confined in slit pores. We first verified this extension by cross validating the results obtained from this approach with the results obtained from the simulated adsorption isotherms, which allows to determine the thermodynamic factor in porous medium. We then studied the effects of the pore width (from 1 to 15 molecular sizes), of the solid-fluid interaction potential (Lennard Jones 9-3, hard wall potential) and of the reduced fluid density (from 0.1 to 0.7 at a reduced temperature T* = 2) on the thermodynamic factor. The deviation of the thermodynamic factor compared to its equivalent bulk value decreases when increasing the pore width and becomes insignificant for reduced pore width above 15. We also found that the thermodynamic factor is sensitive to the magnitude of the fluid-fluid and solid-fluid interactions, which softens or exacerbates the density fluctuations.
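
    For orientation, the sketch below illustrates the bulk small-system idea credited to Schnell et al. above: particle-number fluctuations in embedded subvolumes give an apparent inverse thermodynamic factor that is extrapolated linearly in 1/L to the thermodynamic limit. The sampling scheme (one corner probe per frame, a fixed range of subvolume edges) is a simplification for illustration, and the paper's extension to slab-shaped subvolumes in slit pores is not shown.

```python
import numpy as np

def thermo_factor_from_fluctuations(positions_frames, box, n_bins=12):
    """Single-component thermodynamic factor estimated from particle-number
    fluctuations in cubic subvolumes of increasing edge length L, with the
    small-system values 1/Gamma(L) extrapolated to 1/L -> 0.

    positions_frames : iterable of (N, 3) coordinate arrays in [0, box)
    box              : edge length of the (cubic) simulation box"""
    edges = np.linspace(0.1 * box, 0.4 * box, n_bins)
    inv_gamma = []
    for L in edges:
        counts = np.array([np.sum(np.all(pos < L, axis=1)) for pos in positions_frames],
                          dtype=float)
        inv_gamma.append(counts.var() / counts.mean())  # 1/Gamma of the embedded subvolume
    slope, intercept = np.polyfit(1.0 / edges, inv_gamma, 1)  # linear in 1/L
    return 1.0 / intercept
```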

  2. Determination of the thermodynamic correction factor of fluids confined in nano-metric slit pores from molecular simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collell, Julien; Galliero, Guillaume, E-mail: guillaume.galliero@univ-pau.fr

    2014-05-21

    The multi-component diffusive mass transport is generally quantified by means of the Maxwell-Stefan diffusion coefficients when using molecular simulations. These coefficients can be related to the Fick diffusion coefficients using the thermodynamic correction factor matrix, which requires to run several simulations to estimate all the elements of the matrix. In a recent work, Schnell et al. [“Thermodynamics of small systems embedded in a reservoir: A detailed analysis of finite size effects,” Mol. Phys. 110, 1069–1079 (2012)] developed an approach to determine the full matrix of thermodynamic factors from a single simulation in bulk. This approach relies on finite size effects of small systems on the density fluctuations. We present here an extension of their work for inhomogeneous Lennard Jones fluids confined in slit pores. We first verified this extension by cross validating the results obtained from this approach with the results obtained from the simulated adsorption isotherms, which allows to determine the thermodynamic factor in porous medium. We then studied the effects of the pore width (from 1 to 15 molecular sizes), of the solid-fluid interaction potential (Lennard Jones 9-3, hard wall potential) and of the reduced fluid density (from 0.1 to 0.7 at a reduced temperature T* = 2) on the thermodynamic factor. The deviation of the thermodynamic factor compared to its equivalent bulk value decreases when increasing the pore width and becomes insignificant for reduced pore width above 15. We also found that the thermodynamic factor is sensitive to the magnitude of the fluid-fluid and solid-fluid interactions, which softens or exacerbates the density fluctuations.

  3. A 3D correction method for predicting the readings of a PinPoint chamber on the CyberKnife® M6™ machine

    NASA Astrophysics Data System (ADS)

    Zhang, Yongqian; Brandner, Edward; Ozhasoglu, Cihat; Lalonde, Ron; Heron, Dwight E.; Saiful Huq, M.

    2018-02-01

    The use of small fields in radiation therapy techniques has increased substantially in particular in stereotactic radiosurgery (SRS) and stereotactic body radiation therapy (SBRT). However, as field size reduces further still, the response of the detector changes more rapidly with field size, and the effects of measurement uncertainties become increasingly significant due to the lack of lateral charged particle equilibrium, spectral changes as a function of field size, detector choice, and subsequent perturbations of the charged particle fluence. This work presents a novel 3D dose volume-to-point correction method to predict the readings of a 0.015 cc PinPoint chamber (PTW 31014) for both small static-fields and composite-field dosimetry formed by fixed cones on the CyberKnife® M6™ machine. A 3D correction matrix is introduced to link the 3D dose distribution to the response of the PinPoint chamber in water. The parameters of the correction matrix are determined by modeling its 3D dose response in circular fields created using the 12 fixed cones (5 mm-60 mm) on a CyberKnife® M6™ machine. A penalized least-square optimization problem is defined by fitting the calculated detector reading to the experimental measurement data to generate the optimal correction matrix; the simulated annealing algorithm is used to solve the inverse optimization problem. All the experimental measurements are acquired for every 2 mm chamber shift in the horizontal planes for each field size. The 3D dose distributions for the measurements are calculated using the Monte Carlo calculation with the MultiPlan® treatment planning system (Accuray Inc., Sunnyvale, CA, USA). The performance evaluation of the 3D conversion matrix is carried out by comparing the predictions of the output factors (OFs), off-axis ratios (OARs) and percentage depth dose (PDD) data to the experimental measurement data. The discrepancy of the measurement and the prediction data for composite fields is also performed for clinical SRS plans. The optimization algorithm used for generating the optimal correction factors is stable, and the resulting correction factors were smooth in the spatial domain. The measurement and prediction of OFs agree closely with percentage differences of less than 1.9% for all the 12 cones. The discrepancies between the prediction and the measurement PDD readings at 50 mm and 80 mm depth are 1.7% and 1.9%, respectively. The percentage differences of OARs between measurement and prediction data are less than 2% in the low dose gradient region, and 2%/1 mm discrepancies are observed within the high dose gradient regions. The differences between the measurement and prediction data for all the CyberKnife based SRS plans are less than 1%. These results demonstrate the existence and efficiency of the novel 3D correction method for small field dosimetry. The 3D correction matrix links the 3D dose distribution and the reading of the PinPoint chamber. The comparison between the predicted reading and the measurement data for static small fields (OFs, OARs and PDDs) yield discrepancies within 2% for low dose gradient regions and 2%/1 mm for high dose gradient regions; the discrepancies between the predicted and the measurement data are less than 1% for all the SRS plans. The 3D correction method provides an access to evaluate the clinical measurement data and can be applied to non-standard composite fields intensity modulated radiation therapy point dose verification.

  4. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.

  5. Design and dosimetry characteristics of a commercial applicator system for intra-operative electron beam therapy utilizing ELEKTA Precise accelerator.

    PubMed

    Nevelsky, Alexander; Bernstein, Zvi; Bar-Deroma, Raquel; Kuten, Abraham; Orion, Itzhak

    2010-07-19

    The design concept and dosimetric characteristics of a new applicator system for intraoperative radiation therapy (IORT) are presented in this work. A new hard-docking commercial system includes polymethylmethacrylate (PMMA) applicators with different diameters and applicator end angles and a set of secondary lead collimators. A telescopic device allows the source-to-surface distance (SSD) to be changed. All measurements were performed for 6, 9, 12 and 18 MeV electron energies. Output factors and percentage depth doses (PDD) were measured in a water phantom using a plane-parallel ion chamber. Isodose contours and radiation leakage were measured using a solid water phantom and radiographic films. The dependence of PDD on SSD was checked for the applicators with the smallest and largest diameters. The SSD dependence of the output factors was measured. Hardcopies of PDD and isodose contours were prepared to help the team decide on the applicator size and energy to use during the procedure. Applicator output factors are a function of energy, applicator size and applicator type. Dependence of SSD correction factors on applicator size and applicator type was found to be weak. The same SSD correction will be applied for all applicators in use for each energy. The radiation leakage through the applicators is clinically acceptable. The applicator system enables effective collimation of electron beams for IORT. The data presented are sufficient for applicator, energy and monitor unit selection for IORT treatment of a patient.

  6. An advanced method to assess the diet of free-ranging large carnivores based on scats.

    PubMed

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P; Jago, Mark; Hofer, Heribert

    2012-01-01

    The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores.
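
    The saturating relationship described above (consumed prey mass per scat levelling off at large prey body mass) can be fitted with a simple exponential model, for example as below. The functional form, starting values, and data points are illustrative assumptions, not the authors' fitted parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def mass_per_scat(prey_mass, a, b):
    """Consumed prey mass needed to excrete one scat, rising to an asymptote a."""
    return a * (1.0 - np.exp(-b * prey_mass))

# Hypothetical feeding-experiment data: prey body mass (kg) vs. consumed mass per scat (kg).
prey_kg = np.array([1.0, 5.0, 15.0, 30.0, 60.0, 120.0])
per_scat_kg = np.array([0.4, 1.1, 1.9, 2.4, 2.7, 2.8])

params, _ = curve_fit(mass_per_scat, prey_kg, per_scat_kg, p0=(3.0, 0.05))
print(params)  # asymptote and rate of the saturating (correction factor 1) curve
```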

  7. An Advanced Method to Assess the Diet of Free-Ranging Large Carnivores Based on Scats

    PubMed Central

    Wachter, Bettina; Blanc, Anne-Sophie; Melzheimer, Jörg; Höner, Oliver P.; Jago, Mark; Hofer, Heribert

    2012-01-01

    Background The diet of free-ranging carnivores is an important part of their ecology. It is often determined from prey remains in scats. In many cases, scat analyses are the most efficient method but they require correction for potential biases. When the diet is expressed as proportions of consumed mass of each prey species, the consumed prey mass to excrete one scat needs to be determined and corrected for prey body mass because the proportion of digestible to indigestible matter increases with prey body mass. Prey body mass can be corrected for by conducting feeding experiments using prey of various body masses and fitting a regression between consumed prey mass to excrete one scat and prey body mass (correction factor 1). When the diet is expressed as proportions of consumed individuals of each prey species and includes prey animals not completely consumed, the actual mass of each prey consumed by the carnivore needs to be controlled for (correction factor 2). No previous study controlled for this second bias. Methodology/Principal Findings Here we use an extended series of feeding experiments on a large carnivore, the cheetah (Acinonyx jubatus), to establish both correction factors. In contrast to previous studies which fitted a linear regression for correction factor 1, we fitted a biologically more meaningful exponential regression model where the consumed prey mass to excrete one scat reaches an asymptote at large prey sizes. Using our protocol, we also derive correction factor 1 and 2 for other carnivore species and apply them to published studies. We show that the new method increases the number and proportion of consumed individuals in the diet for large prey animals compared to the conventional method. Conclusion/Significance Our results have important implications for the interpretation of scat-based studies in feeding ecology and the resolution of human-wildlife conflicts for the conservation of large carnivores. PMID:22715373

  8. Impact of correction factors in human brain lesion-behavior inference.

    PubMed

    Sperber, Christoph; Karnath, Hans-Otto

    2017-03-01

    Statistical voxel-based lesion-behavior mapping (VLBM) in neurological patients with brain lesions is frequently used to examine the relationship between structure and function of the healthy human brain. Only recently, two simulation studies noted reduced anatomical validity of this method, observing the results of VLBM to be systematically misplaced by about 16 mm. However, both simulation studies differed from VLBM analyses of real data in that they lacked the proper use of two correction factors: lesion size and "sufficient lesion affection." In simulation experiments on a sample of 274 real stroke patients, we found that the use of these two correction factors reduced misplacement markedly compared to uncorrected VLBM. Apparently, the misplacement is due to physiological effects of brain lesion anatomy. Voxel-wise topographies of collateral damage in the real data were generated and used to compute a metric for the inter-voxel relation of brain damage. "Anatomical bias" vectors that were solely calculated from these inter-voxel relations in the patients' real anatomical data successfully predicted the VLBM misplacement. The latter has the potential to help in the development of new VLBM methods that provide even higher anatomical validity than currently available by the proper use of correction factors. Hum Brain Mapp 38:1692-1701, 2017. © 2017 Wiley Periodicals, Inc.
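
    One common way to realize the two correction factors discussed above in a voxel-wise analysis is to test only voxels lesioned in a minimum number of patients ("sufficient lesion affection") and to include lesion size as a nuisance covariate; the sketch below does this with an ordinary least-squares model. It is an illustration, not the exact VLBM pipeline used in the study, and the threshold of 5 patients is an arbitrary example.

```python
import numpy as np
import statsmodels.api as sm

def vlbm_voxel_pvalues(damage, scores, lesion_size, min_affection=5):
    """Voxel-wise lesion-behavior test with two correction factors: a minimum
    lesion-affection threshold and lesion size as a nuisance covariate.

    damage      : (patients, voxels) binary lesion matrix
    scores      : (patients,) behavioral scores
    lesion_size : (patients,) total lesion volume per patient
    Returns an array of p-values (NaN where a voxel was skipped)."""
    pvals = np.full(damage.shape[1], np.nan)
    for v in range(damage.shape[1]):
        if damage[:, v].sum() < min_affection:
            continue  # insufficient lesion affection
        X = sm.add_constant(np.column_stack([damage[:, v], lesion_size]))
        pvals[v] = sm.OLS(scores, X).fit().pvalues[1]  # p-value for voxel damage
    return pvals
```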

  9. Fast sweeping method for the factored eikonal equation

    NASA Astrophysics Data System (ADS)

    Fomel, Sergey; Luo, Songting; Zhao, Hongkai

    2009-09-01

    We develop a fast sweeping method for the factored eikonal equation. The solution of a general eikonal equation is decomposed as the product of two factors: the first factor is the solution to a simple eikonal equation (such as distance) or a previously computed solution to an approximate eikonal equation; the second factor is a necessary modification/correction. Appropriate discretization and a fast sweeping strategy are designed for the equation of the correction part. The key idea is to enforce the causality of the original eikonal equation during the Gauss-Seidel iterations. Using extensive numerical examples we demonstrate that (1) the convergence behavior of the fast sweeping method for the factored eikonal equation is the same as for the original eikonal equation, i.e., the number of Gauss-Seidel iterations is independent of the mesh size, and (2) the numerical solution from the factored eikonal equation is more accurate than the numerical solution computed directly from the original eikonal equation, especially for point sources.
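
    The abstract does not write the factored equation out. Under the usual multiplicative factorization T(x) = T_0(x) τ(x) of the eikonal equation |∇T|² = s²(x), substituting and expanding gives the equation for the correction factor τ that the sweeping scheme discretizes (the notation is assumed here, not taken from the paper):

```latex
% multiplicative factorization of the eikonal equation, solved for the correction \tau
\left|\nabla\bigl(T_0\,\tau\bigr)\right|^2
  = \tau^{2}\,\lvert\nabla T_0\rvert^{2}
  + 2\,T_0\,\tau\,\bigl(\nabla T_0\cdot\nabla\tau\bigr)
  + T_0^{2}\,\lvert\nabla\tau\rvert^{2}
  = s^{2}(x)
```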

  10. Relativistic corrections to heavy quark fragmentation to S-wave heavy mesons

    NASA Astrophysics Data System (ADS)

    Sang, Wen-Long; Yang, Lan-Fei; Chen, Yu-Qi

    2009-07-01

    The relativistic corrections of order v^2 to the fragmentation functions for the heavy quark to S-wave heavy quarkonia are calculated in the framework of the nonrelativistic quantum chromodynamics factorization formula. We derive the fragmentation functions by using the Collins-Soper definition in both the Feynman gauge and the axial gauge. We also extract them through the process Z^0 → H q q̄ in the limit M_Z/m → ∞. We find that all results obtained by these two different methods and in different gauges are the same. We estimate the relative size of the relativistic corrections to the fragmentation functions.

  11. Relativistic corrections to heavy quark fragmentation to S-wave heavy mesons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sang Wenlong; Yang Lanfei; Chen Yuqi

    The relativistic corrections of order v^2 to the fragmentation functions for the heavy quark to S-wave heavy quarkonia are calculated in the framework of the nonrelativistic quantum chromodynamics factorization formula. We derive the fragmentation functions by using the Collins-Soper definition in both the Feynman gauge and the axial gauge. We also extract them through the process Z^0 → H q q̄ in the limit M_Z/m → ∞. We find that all results obtained by these two different methods and in different gauges are the same. We estimate the relative size of the relativistic corrections to the fragmentation functions.

  12. Do impression management and self-deception distort self-report measures with content of dynamic risk factors in offender samples? A meta-analytic review.

    PubMed

    Hildebrand, Martin; Wibbelink, Carlijn J M; Verschuere, Bruno

    Self-report measures provide an important source of information in correctional/forensic settings, yet at the same time the validity of that information is often questioned because self-reports are thought to be highly vulnerable to self-presentation biases. Primary studies in offender samples have provided mixed results with regard to the impact of socially desirable responding on self-reports. The main aim of the current study was therefore to investigate-via a meta-analytic review of published studies-the association between the two dimensions of socially desirable responding, impression management and self-deceptive enhancement, and self-report measures with content of dynamic risk factors using the Balanced Inventory of Desirable Responding (BIDR) in offender samples. These self-report measures were significantly and negatively related with self-deception (r = -0.120, p < 0.001; k = 170 effect sizes) and impression management (r = -0.158, p < 0.001; k = 157 effect sizes), yet there was evidence of publication bias for the impression management effect with the trim and fill method indicating that the relation is probably even smaller (r = -0.07). The magnitude of the effect sizes was small. Moderation analyses suggested that type of dynamic risk factor (e.g., antisocial cognition versus antisocial personality), incentives, and publication year affected the relationship between impression management and self-report measures with content of dynamic risk factors, whereas sample size, setting (e.g., incarcerated, community), and publication year influenced the relation between self-deception and these self-report measures. The results indicate that the use of self-report measures to assess dynamic risk factors in correctional/forensic settings is not inevitably compromised by socially desirable responding, yet caution is warranted for some risk factors (antisocial personality traits), particularly when incentives are at play. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Correcting the Relative Bias of Light Obscuration and Flow Imaging Particle Counters.

    PubMed

    Ripple, Dean C; Hu, Zhishang

    2016-03-01

    Industry and regulatory bodies desire more accurate methods for counting and characterizing particles. Measurements of proteinaceous-particle concentrations by light obscuration and flow imaging can differ by factors of ten or more. We propose methods to correct the diameters reported by light obscuration and flow imaging instruments. For light obscuration, diameters were rescaled based on characterization of the refractive index of typical particles and a light scattering model for the extinction efficiency factor. The light obscuration models are applicable for either homogeneous materials (e.g., silicone oil) or for chemically homogeneous, but spatially non-uniform aggregates (e.g., protein aggregates). For flow imaging, the method relied on calibration of the instrument with silica beads suspended in water-glycerol mixtures. These methods were applied to a silicone-oil droplet suspension and four particle suspensions containing particles produced from heat stressed and agitated human serum albumin, agitated polyclonal immunoglobulin, and abraded ethylene tetrafluoroethylene polymer. All suspensions were measured by two flow imaging and one light obscuration apparatus. Prior to correction, results from the three instruments disagreed by a factor ranging from 3.1 to 48 in particle concentration over the size range from 2 to 20 μm. Bias corrections reduced the disagreement from an average factor of 14 down to an average factor of 1.5. The methods presented show promise in reducing the relative bias between light obscuration and flow imaging.

  14. Challenges in Physical Characterization of Dim Space Objects: What Can We Learn from NEOs

    NASA Astrophysics Data System (ADS)

    Reddy, V.; Sanchez, J.; Thirouin, A.; Rivera-Valentin, E.; Ryan, W.; Ryan, E.; Mokovitz, N.; Tegler, S.

    2016-09-01

    Physical characterization of dim space objects in cis-lunar space can be a challenging task. Of particular interest to both natural and artificial space object behavior scientists are the properties beyond orbital parameters that can uniquely identify them. These properties include rotational state, size, shape, density and composition. A wide range of observational and non-observational factors affect our ability to characterize dim objects in cis-lunar space. Examples include phase angle (the Sun-target-observer angle), temperature, rotational variations, and particle size (for natural dim objects). Over the last two decades, space object behavior scientists studying natural dim objects have attempted to quantify and correct for a majority of these factors to enhance our situational awareness. These efforts have been primarily focused on developing laboratory spectral calibrations in a space-like environment. Calibrations developed for correcting spectral observations of natural dim objects could be applied to characterizing artificial objects, as the underlying physics is the same. The paper will summarize our current understanding of these observational and non-observational factors and present a case study showcasing the state of the art in characterization of natural dim objects.

  15. Influence of air shear and adjuvants on spray atomization

    USDA-ARS?s Scientific Manuscript database

    Droplet size is critical to maximizing pesticide efficacy and mitigating off-target movement, and correct selection and adjustment of nozzles and application equipment, as well as the use of adjuvants, can aid in this process. However, in aerial applications air shear tends to be the dominant factor ...

  16. Applications of multivariate modeling to neuroimaging group analysis: A comprehensive alternative to univariate general linear model

    PubMed Central

    Chen, Gang; Adleman, Nancy E.; Saad, Ziad S.; Leibenluft, Ellen; Cox, Robert W.

    2014-01-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance–covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse–Geisser and Huynh–Feldt) with MVT-WS. PMID:24954281

  17. Grainsize evolution and differential comminution in an experimental regolith

    NASA Technical Reports Server (NTRS)

    Horz, F.; Cintala, M.; See, T.

    1984-01-01

    The comminution of planetary surfaces by exposure to continuous meteorite bombardment was simulated by impacting the same fragmental gabbro target 200 times. The role of comminution and in situ gardening of planetary regoliths was addressed. Mean grain size continuously decreased with increasing shot number. Initially it decreased linearly with accumulated energy, but at some stage comminution efficiency started to decrease gradually. Point counting techniques, aided by the electron microprobe for mineral identification, were performed on a number of comminution products. Bulk chemical analyses of specific grain size fractions were also carried out. The finest sizes (<10 microns) display generally the strongest enrichment/depletion factors. Similar, if not exactly identical, trends are reported from lunar soils. It is, therefore, not necessarily correct to explain the chemical characteristics of various grain sizes via different admixtures of materials from distant source terrains. Differential comminution of local source rocks may be the dominating factor.

  18. Size misperception among overweight and obese families.

    PubMed

    Paul, Tracy K; Sciacca, Robert R; Bier, Michael; Rodriguez, Juviza; Song, Sharon; Giardina, Elsa-Grace V

    2015-01-01

    Perception of body size is a key factor driving health behavior. Mothers directly influence children's nutritional and exercise behaviors. Mothers of ethnic minority groups and lower socioeconomic status are less likely to correctly identify young children as overweight or obese. Little evaluation has been done of the inverse--the child's perception of the mother's weight. To determine awareness of weight status among mother-child dyads (n = 506). Cross-sectional study conducted in an outpatient pediatric dental clinic of Columbia University Medical Center, New York, NY. Primarily Hispanic (82.2 %) mothers (n = 253), 38.8 ± 7.5 years of age, and children (n = 253), 10.5 ± 1.4 years of age, responding to a questionnaire adapted from the validated Behavioral Risk Factor Surveillance System. Anthropometric measures-including height, weight, and waist circumference-and awareness of self-size and size of other generation were obtained. 71.4 % of obese adults and 35.1 % of overweight adults underestimated size, vs. 8.6 % of normal-weight (NW) adults (both p < 0.001). Among overweight and obese children, 86.3 % and 62.3 % underestimated their size, vs. 14.9 % NW children (both p < 0.001). Among mothers with overweight children, 80.0 % underestimated their child's weight, vs. 7.1 % of mothers with NW children (p < 0.001); 23.1 % of mothers with obese children also underestimated their child's weight (p < 0.01). Among children with obese mothers, only 13.0 % correctly classified the adult's size, vs. 76.5 % with NW mothers (p < 0.001). Among obese mothers, 20.8 % classified overweight body size as ideal, vs. 1.2 % among NW mothers (p < 0.001). Overweight/obese adults and children frequently underestimate their size. Adults misjudge overweight/obese children as being of normal weight, and children of obese mothers often underestimate the adult's size. Failure to recognize overweight/obesity status among adults and children can lead to prolonged exposure to obesity-related comorbidities.

  19. Characterization of an in vivo diode dosimetry system for clinical use

    PubMed Central

    Huang, Kai; Bice, William S.; Hidalgo‐Salvatierra, Oscar

    2003-01-01

    An in vivo dosimetry system that uses p‐type semiconductor diodes with buildup caps was characterized for clinical use on accelerators ranging in energy from 4 to 18 MV. The dose per pulse dependence was investigated. This was done by altering the source‐surface distance, field size, and wedge for photons. The off‐axis correction and effect of changing repetition rate were also investigated. A model was developed to fit the measured two‐dimensional diode correction factors. PACS number(s): 87.66.–a, 87.52.–g PMID:12777148

  20. Imprinting high-gradient topographical structures onto optical surfaces using magnetorheological finishing: manufacturing corrective optical elements for high-power laser applications.

    PubMed

    Menapace, Joseph A; Ehrmann, Paul E; Bayramian, Andrew J; Bullington, Amber; Di Nicola, Jean-Michel G; Haefner, Constantin; Jarboe, Jeffrey; Marshall, Christopher; Schaffers, Kathleen I; Smith, Cal

    2016-07-01

    Corrective optical elements form an important part of high-precision optical systems. We have developed a method to manufacture high-gradient corrective optical elements for high-power laser systems using deterministic magnetorheological finishing (MRF) imprinting technology. Several process factors need to be considered for polishing ultraprecise topographical structures onto optical surfaces using MRF. They include proper selection of MRF removal function and wheel sizes, detailed MRF tool and interferometry alignment, and optimized MRF polishing schedules. Dependable interferometry also is a key factor in high-gradient component manufacture. A wavefront attenuating cell, which enables reliable measurement of gradients beyond what is attainable using conventional interferometry, is discussed. The results of MRF imprinting a 23 μm deep structure containing gradients over 1.6 μm / mm onto a fused-silica window are presented as an example of the technique's capabilities. This high-gradient element serves as a thermal correction plate in the high-repetition-rate advanced petawatt laser system currently being built at Lawrence Livermore National Laboratory.

  1. Imprinting high-gradient topographical structures onto optical surfaces using magnetorheological finishing: Manufacturing corrective optical elements for high-power laser applications

    DOE PAGES

    Menapace, Joseph A.; Ehrmann, Paul E.; Bayramian, Andrew J.; ...

    2016-03-15

    Corrective optical elements form an important part of high-precision optical systems. We have developed a method to manufacture high-gradient corrective optical elements for high-power laser systems using deterministic magnetorheological finishing (MRF) imprinting technology. Several process factors need to be considered for polishing ultraprecise topographical structures onto optical surfaces using MRF. They include proper selection of MRF removal function and wheel sizes, detailed MRF tool and interferometry alignment, and optimized MRF polishing schedules. Dependable interferometry also is a key factor in high-gradient component manufacture. A wavefront attenuating cell, which enables reliable measurement of gradients beyond what is attainable using conventional interferometry, is discussed. The results of MRF imprinting a 23 μm deep structure containing gradients over 1.6 μm / mm onto a fused-silica window are presented as an example of the technique’s capabilities. As a result, this high-gradient element serves as a thermal correction plate in the high-repetition-rate advanced petawatt laser system currently being built at Lawrence Livermore National Laboratory.

  2. Determination of recombination and polarity correction factors, kS and kP, for small cylindrical ionization chambers PTW 31021 and PTW 31022 in pulsed filtered and unfiltered beams.

    PubMed

    Bruggmoser, Gregor; Saum, Rainer; Kranzer, Rafael

    2018-01-12

    The aim of this technical communication is to provide correction factors for the recombination and polarity effects for two new ionization chambers, the PTW PinPoint 3D (type 31022) and the PTW Semiflex 3D (type 31021). The correction factors provided are based on the German DIN 6800-2 dosimetry protocol and the AAPM TG-51 protocol. The measurements were made in filtered and unfiltered high-energy photon beams in a water-equivalent phantom at the depth of the PDD maximum, with a field size at the surface of 10 cm × 10 cm. The design of the new chamber types leads to an ion collection efficiency and a polarity effect that are well within the specifications requested by pertinent dosimetry protocols, including the addendum to TG-51. It was confirmed that the recombination effect of both chambers depends mainly on dose per pulse and is independent of the filtration of the photon beam. Copyright © 2018. Published by Elsevier GmbH.
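
    For context, the polarity and recombination corrections named above are commonly evaluated from paired chamber readings. The sketch below follows the generic AAPM TG-51 two-voltage and polarity formulas rather than the exact DIN 6800-2 expressions, and all readings are made-up numbers:

      def k_pol(m_plus, m_minus, m_routine):
          """Polarity correction factor: mean of the magnitudes of the readings
          at both polarities divided by the reading at the routinely used
          polarity (TG-51-style definition)."""
          return (abs(m_plus) + abs(m_minus)) / (2.0 * abs(m_routine))

      def k_sat_pulsed(m_high, m_low, v_high, v_low):
          """Two-voltage recombination (saturation) correction for pulsed beams
          in the TG-51 form; other protocols use related but not identical
          expressions."""
          vr = v_high / v_low
          return (1.0 - vr) / (m_high / m_low - vr)

      # Made-up readings (nC): +400 V, -400 V, and a reduced voltage of 100 V
      print(k_pol(20.05, -20.01, 20.05))                 # ~0.999
      print(k_sat_pulsed(20.05, 19.93, 400.0, 100.0))    # ~1.002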

  3. Monte Carlo simulated corrections for beam commissioning measurements with circular and MLC shaped fields on the CyberKnife M6 System: a study including diode, microchamber, point scintillator, and synthetic microdiamond detectors.

    PubMed

    Francescon, P; Kilby, W; Noll, J M; Masi, L; Satariano, N; Russo, S

    2017-02-07

    Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and a point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm × 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are <1% in all beams. Measured OF showed uncorrected inter-detector differences >15%, reducing to <3% after correction. PDD corrections at d > dmax were <2% for all detectors except the IBA Razor, where a maximum 4% correction was observed at 300 mm depth. OAR corrections were smaller inside the field than outside. At the beam edge, microchamber OAR corrections were up to 15%, mainly caused by density perturbations, which blur the measured penumbra. With larger beams and depths, PTW and IBA diode corrections outside the beam were up to 20%, while the Edge detector needed smaller corrections, although these did vary with orientation. These effects are most noticeable for large field size and depth, where they are dominated by fluence and stopping power perturbations. The microdiamond OAR corrections were <3% outside the beam. This paper provides OF corrections that can be used for commissioning new CyberKnife M6 Systems and retrospectively checking estimated corrections used previously. We recommend that the PDD and OAR corrections be used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.
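
    The workflow implied above (multiply each detector's measured output factor by its Monte Carlo correction factor, then compare the residual spread across detectors) can be sketched as follows; the numbers are placeholders, not values from the study:

      # Placeholder measured output factors and Monte Carlo correction factors
      # for a single small field; none of these numbers come from the study above.
      measured_of = {"diode": 0.660, "microchamber": 0.580, "microdiamond": 0.635}
      correction = {"diode": 0.955, "microchamber": 1.103, "microdiamond": 0.990}

      corrected = {det: of * correction[det] for det, of in measured_of.items()}

      def spread_percent(values):
          """Relative spread (max - min) / min across detectors, in percent."""
          return (max(values) - min(values)) / min(values) * 100.0

      print("uncorrected spread: %.1f%%" % spread_percent(list(measured_of.values())))
      print("corrected spread:   %.1f%%" % spread_percent(list(corrected.values())))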

  4. Monte Carlo simulated corrections for beam commissioning measurements with circular and MLC shaped fields on the CyberKnife M6 System: a study including diode, microchamber, point scintillator, and synthetic microdiamond detectors

    NASA Astrophysics Data System (ADS)

    Francescon, P.; Kilby, W.; Noll, J. M.; Masi, L.; Satariano, N.; Russo, S.

    2017-02-01

    Monte Carlo simulation was used to calculate correction factors for output factor (OF), percentage depth-dose (PDD), and off-axis ratio (OAR) measurements with the CyberKnife M6 System. These include the first such data for the InCise MLC. Simulated detectors include diodes, air-filled microchambers, a synthetic microdiamond detector, and a point scintillator. Individual perturbation factors were also evaluated. OF corrections show similar trends to previous studies. With a 5 mm fixed collimator the diode correction to convert a measured OF to the corresponding point dose ratio varies between -6.1% and -3.5% for the diode models evaluated, while in a 7.6 mm × 7.7 mm MLC field these are -4.5% to -1.8%. The corresponding microchamber corrections are +9.9% to +10.7% and +3.5% to +4.0%. The microdiamond corrections have a maximum of -1.4% for the 7.5 mm and 10 mm collimators. The scintillator corrections are <1% in all beams. Measured OF showed uncorrected inter-detector differences >15%, reducing to <3% after correction. PDD corrections at d > dmax were <2% for all detectors except the IBA Razor, where a maximum 4% correction was observed at 300 mm depth. OAR corrections were smaller inside the field than outside. At the beam edge, microchamber OAR corrections were up to 15%, mainly caused by density perturbations, which blur the measured penumbra. With larger beams and depths, PTW and IBA diode corrections outside the beam were up to 20%, while the Edge detector needed smaller corrections, although these did vary with orientation. These effects are most noticeable for large field size and depth, where they are dominated by fluence and stopping power perturbations. The microdiamond OAR corrections were <3% outside the beam. This paper provides OF corrections that can be used for commissioning new CyberKnife M6 Systems and retrospectively checking estimated corrections used previously. We recommend that the PDD and OAR corrections be used to guide detector selection and inform the evaluation of results rather than to explicitly correct measurements.

  5. Body Image Satisfaction among Blacks

    ERIC Educational Resources Information Center

    Gustat, Jeanette; Carton, Thomas W.; Shahien, Amir A.; Andersen, Lori

    2017-01-01

    Satisfaction with body image is a factor related to health outcomes. The purpose of this study is to examine the relationship between body image satisfaction and body size perception in an urban, Black community sample in New Orleans, Louisiana. Only 42.2% of respondents were satisfied with their body image and 44.1% correctly perceived their body…

  6. Stability and bias of classification rates in biological applications of discriminant analysis

    USGS Publications Warehouse

    Williams, B.K.; Titus, K.; Hines, J.E.

    1990-01-01

    We assessed the sampling stability of classification rates in discriminant analysis by using a factorial design with factors for multivariate dimensionality, dispersion structure, configuration of group means, and sample size. A total of 32,400 discriminant analyses were conducted, based on data from simulated populations with appropriate underlying statistical distributions. Simulation results indicated strong bias in correct classification rates when group sample sizes were small and when overlap among groups was high. We also found that stability of the correct classification rates was influenced by these factors, indicating that the number of samples required for a given level of precision increases with the amount of overlap among groups. In a review of 60 published studies, we found that 57% of the articles presented results on classification rates, though few of them mentioned potential biases in their results. Wildlife researchers should choose the total number of samples per group to be at least 2 times the number of variables to be measured when overlap among groups is low. Substantially more samples are required as the overlap among groups increases.
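
    A minimal simulation in the same spirit, using scikit-learn's linear discriminant analysis on two overlapping simulated groups, shows how the apparent (resubstitution) correct-classification rate is optimistically biased when group sample sizes are small; the dimensionality, separation, and sample sizes below are illustrative choices, not those of the study:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(0)
      n_vars, n_per_group, shift = 5, 10, 1.0   # illustrative settings only

      def apparent_rate():
          """Resubstitution correct-classification rate for two overlapping groups."""
          x0 = rng.normal(0.0, 1.0, size=(n_per_group, n_vars))
          x1 = rng.normal(shift, 1.0, size=(n_per_group, n_vars))
          X = np.vstack([x0, x1])
          y = np.array([0] * n_per_group + [1] * n_per_group)
          return LinearDiscriminantAnalysis().fit(X, y).score(X, y)

      rates = [apparent_rate() for _ in range(500)]
      print("mean apparent rate: %.3f (optimistically biased at this sample size)"
            % float(np.mean(rates)))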

  7. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; Zhang, Lei; Kang, Qinjun

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that for the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport mechanisms in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regime, with no Darcy flow regime observed.

  8. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: Permeability and diffusivity

    DOE PAGES

    Chen, Li; Zhang, Lei; Kang, Qinjun; ...

    2015-01-28

    Here, porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. We find that for the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport mechanisms in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regime, with no Darcy flow regime observed.

  9. Nanoscale simulation of shale transport properties using the lattice Boltzmann method: permeability and diffusivity

    PubMed Central

    Chen, Li; Zhang, Lei; Kang, Qinjun; Viswanathan, Hari S.; Yao, Jun; Tao, Wenquan

    2015-01-01

    Porous structures of shales are reconstructed using the Markov chain Monte Carlo (MCMC) method based on scanning electron microscopy (SEM) images of shale samples from Sichuan Basin, China. Characterization analysis of the reconstructed shales is performed, including porosity, pore size distribution, specific surface area and pore connectivity. The lattice Boltzmann method (LBM) is adopted to simulate fluid flow and Knudsen diffusion within the reconstructed shales. Simulation results reveal that the tortuosity of the shales is much higher than that commonly employed in the Bruggeman equation, and such high tortuosity leads to extremely low intrinsic permeability. Correction of the intrinsic permeability is performed based on the dusty gas model (DGM) by considering the contribution of Knudsen diffusion to the total flow flux, resulting in apparent permeability. The correction factor over a range of Knudsen number and pressure is estimated and compared with empirical correlations in the literature. For the wide pressure range investigated, the correction factor is always greater than 1, indicating that Knudsen diffusion always plays a role in shale gas transport mechanisms in the reconstructed shales. Specifically, we found that most of the values of the correction factor fall in the slip and transition regime, with no Darcy flow regime observed. PMID:25627247
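
    For comparison with such results, one widely cited empirical form of the Knudsen correction factor (a Beskok-Karniadakis-type expression, shown here with a constant rarefaction coefficient purely for illustration; it is not the correlation derived in the study above) relates apparent to intrinsic permeability as follows:

      def knudsen_correction(kn, alpha=1.2):
          """Beskok-Karniadakis-type correction factor f(Kn) such that
          k_apparent = f(Kn) * k_intrinsic. The rarefaction coefficient alpha is
          treated as a constant here for illustration; published correlations
          make it a function of Kn."""
          return (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))

      for kn in [0.001, 0.01, 0.1, 1.0]:
          print("Kn = %6.3f  ->  correction factor = %.3f" % (kn, knudsen_correction(kn)))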

  10. 77 FR 32013 - Truck Size and Weight; Technical Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-31

    ...-2012-0037] RIN 2125-AF45 Truck Size and Weight; Technical Correction AGENCY: Federal Highway...: This rule makes a technical correction to the regulations that govern Longer Combination Vehicles (LCV... INFORMATION CONTACT: John Nicholas, Truck Size and Weight Program Manager, Office of Freight Management and...

  11. Comparison of extended field-of-view reconstructions in C-arm flat-detector CT using patient size, shape or attenuation information.

    PubMed

    Kolditz, Daniel; Meyer, Michael; Kyriakou, Yiannis; Kalender, Willi A

    2011-01-07

    In C-arm-based flat-detector computed tomography (FDCT) it frequently happens that the patient exceeds the scan field of view (SFOV) in the transaxial direction because of the limited detector size. This results in data truncation and CT image artefacts. In this work three truncation correction approaches for extended field-of-view (EFOV) reconstructions have been implemented and evaluated. An FDCT-based method estimates the patient size and shape from the truncated projections by fitting an elliptical model to the raw data in order to apply an extrapolation. In a camera-based approach the patient is sampled with an optical tracking system and this information is used to apply an extrapolation. In a CT-based method the projections are completed by artificial projection data obtained from the CT data acquired in an earlier exam. For all methods the extended projections are filtered and backprojected with a standard Feldkamp-type algorithm. Quantitative evaluations have been performed by simulations of voxelized phantoms on the basis of the root mean square deviation and a quality factor Q (Q = 1 represents the ideal correction). Measurements with a C-arm FDCT system have been used to validate the simulations and to investigate the practical applicability using anthropomorphic phantoms which caused truncation in all projections. The proposed approaches enlarged the FOV to cover wider patient cross-sections. Thus, image quality inside and outside the SFOV has been improved. Best results have been obtained using the CT-based method, followed by the camera-based and the FDCT-based truncation correction. For simulations, quality factors up to 0.98 have been achieved. Truncation-induced cupping artefacts have been reduced, e.g., from 218% to less than 1% for the measurements. The proposed truncation correction approaches for EFOV reconstructions are an effective way to ensure accurate CT values inside the SFOV and to recover peripheral information outside the SFOV.
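
    A much-simplified sketch of the underlying idea (smoothly extending each truncated projection row before filtering and backprojection) is given below; it uses a plain cosine taper rather than the elliptical-model, camera-based, or CT-prior extrapolations evaluated in the study, and all sizes are arbitrary:

      import numpy as np

      def extend_row(row, pad):
          """Pad a truncated projection row on both sides with a cosine taper
          that falls smoothly from the edge value to zero over 'pad' extra
          detector bins. A crude stand-in for the model-based extrapolations
          described above."""
          t = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, pad)))  # 1 -> 0
          left = row[0] * t[::-1]     # rises from ~0 up to the left edge value
          right = row[-1] * t         # falls from the right edge value to ~0
          return np.concatenate([left, row, right])

      row = np.full(256, 1000.0)      # a projection row truncated on both sides
      extended = extend_row(row, pad=64)
      print(row.shape, "->", extended.shape)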

  12. Evaluation of attenuation and scatter correction requirements in small animal PET and SPECT imaging

    NASA Astrophysics Data System (ADS)

    Konik, Arda Bekir

    Positron emission tomography (PET) and single photon emission tomography (SPECT) are two nuclear emission-imaging modalities that rely on the detection of high-energy photons emitted from radiotracers administered to the subject. The majority of these photons are attenuated (absorbed or scattered) in the body, resulting in count losses or deviations from true detection, which in turn degrades the accuracy of images. In clinical emission tomography, sophisticated correction methods are often required employing additional x-ray CT or radionuclide transmission scans. Having proven their potential in both clinical and research areas, both PET and SPECT are being adapted for small animal imaging. However, despite the growing interest in small animal emission tomography, little scientific information exists about the accuracy of these correction methods on smaller size objects, and what level of correction is required. The purpose of this work is to determine the role of attenuation and scatter corrections as a function of object size through simulations. The simulations were performed using Interactive Data Language (IDL) and a Monte Carlo based package, Geant4 application for emission tomography (GATE). In IDL simulations, PET and SPECT data acquisition were modeled in the presence of attenuation. A mathematical emission and attenuation phantom approximating a thorax slice and slices from real PET/CT data were scaled to 5 different sizes (i.e., human, dog, rabbit, rat and mouse). The simulated emission data collected from these objects were reconstructed. The reconstructed images, with and without attenuation correction, were compared to the ideal (i.e., non-attenuated) reconstruction. Next, using GATE, scatter fraction values (the ratio of the scatter counts to the total counts) of PET and SPECT scanners were measured for various sizes of NEMA (cylindrical phantoms representing small animals and human), MOBY (realistic mouse/rat model) and XCAT (realistic human model) digital phantoms. In addition, PET projection files for different sizes of MOBY phantoms were reconstructed in 6 different conditions including attenuation and scatter corrections. Selected regions were analyzed for these different reconstruction conditions and object sizes. Finally, real mouse data from the real version of the same small animal PET scanner we modeled in our simulations were analyzed for similar reconstruction conditions. Both our IDL and GATE simulations showed that, for small animal PET and SPECT, even the smallest size objects (˜2 cm diameter) showed ˜15% error when both attenuation and scatter were not corrected. However, a simple attenuation correction using a uniform attenuation map and object boundary obtained from emission data significantly reduces this error in non-lung regions (˜1% for smallest size and ˜6% for largest size). In lungs, emissions values were overestimated when only attenuation correction was performed. In addition, we did not observe any significant improvement between the uses of uniform or actual attenuation map (e.g., only ˜0.5% for largest size in PET studies). The scatter correction was not significant for smaller size objects, but became increasingly important for larger sizes objects. These results suggest that for all mouse sizes and most rat sizes, uniform attenuation correction can be performed using emission data only. For smaller sizes up to ˜ 4 cm, scatter correction is not required even in lung regions. 
    For larger sizes, if accurate quantification is needed, an additional transmission scan may be required to estimate an accurate attenuation map for both attenuation and scatter corrections.
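
    The size dependence reported above can be illustrated with the attenuation correction factor for a PET line of response through a uniform water-like object, exp(mu*L) with mu of roughly 0.096 per cm at 511 keV; the chord lengths below are rough central-chord values chosen for illustration only:

      import math

      MU_511_KEV_WATER = 0.096  # cm^-1, approximate linear attenuation coefficient

      def attenuation_correction_factor(chord_cm, mu=MU_511_KEV_WATER):
          """ACF for a PET line of response through a uniform water-like object.
          Both annihilation photons must escape, so attenuation depends on the
          full chord length, regardless of where on the line the decay occurred."""
          return math.exp(mu * chord_cm)

      for label, diameter_cm in [("mouse", 2.0), ("rat", 5.0), ("human", 30.0)]:
          acf = attenuation_correction_factor(diameter_cm)
          print("%-6s %5.1f cm  ACF = %6.2f  (uncorrected counts low by %.0f%%)"
                % (label, diameter_cm, acf, 100.0 * (1.0 - 1.0 / acf)))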

  13. Improving Dose Determination Accuracy in Nonstandard Fields of the Varian TrueBeam Accelerator

    NASA Astrophysics Data System (ADS)

    Hyun, Megan A.

    In recent years, the use of flattening-filter-free (FFF) linear accelerators in radiation-based cancer therapy has gained popularity, especially for hypofractionated treatments (high doses of radiation given in few sessions). However, significant challenges to accurate radiation dose determination remain. If physicists cannot accurately determine radiation dose in a clinical setting, cancer patients treated with these new machines will not receive safe, accurate and effective treatment. In this study, an extensive characterization of two commonly used clinical radiation detectors (ionization chambers and diodes) and several potential reference detectors (thermoluminescent dosimeters, plastic scintillation detectors, and alanine pellets) has been performed to investigate their use in these challenging, nonstandard fields. From this characterization, reference detectors were identified for multiple beam sizes, and correction factors were determined to improve dosimetric accuracy for ionization chambers and diodes. A validated computational (Monte Carlo) model of the TrueBeam(TM) accelerator, including FFF beam modes, was also used to calculate these correction factors, which compared favorably to measured results. Small-field corrections of up to 18 % were shown to be necessary for clinical detectors such as microionization chambers. Because the impact of these large effects on treatment delivery is not well known, a treatment planning study was completed using actual hypofractionated brain, spine, and lung treatments that were delivered at the UW Carbone Cancer Center. This study demonstrated that improperly applying these detector correction factors can have a substantial impact on patient treatments. This thesis work has taken important steps toward improving the accuracy of FFF dosimetry through rigorous experimentally and Monte-Carlo-determined correction factors, the validation of an important published protocol (TG-51) for use with FFF reference fields, and a demonstration of the clinical significance of small-field correction factors. These results will facilitate the safe, accurate and effective use of this treatment modality in the clinic.

  14. Consideration of Kaolinite Interference Correction for Quartz Measurements in Coal Mine Dust

    PubMed Central

    Lee, Taekhee; Chisholm, William P.; Kashon, Michael; Key-Schwartz, Rosa J.; Harper, Martin

    2015-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed “deviation,” not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected. PMID:23767881

  15. Consideration of kaolinite interference correction for quartz measurements in coal mine dust.

    PubMed

    Lee, Taekhee; Chisholm, William P; Kashon, Michael; Key-Schwartz, Rosa J; Harper, Martin

    2013-01-01

    Kaolinite interferes with the infrared analysis of quartz. Improper correction can cause over- or underestimation of silica concentration. The standard sampling method for quartz in coal mine dust is size selective, and, since infrared spectrometry is sensitive to particle size, it is intuitively better to use the same size fractions for quantification of quartz and kaolinite. Standard infrared spectrometric methods for quartz measurement in coal mine dust correct interference from the kaolinite, but they do not specify a particle size for the material used for correction. This study compares calibration curves using as-received and respirable size fractions of nine different examples of kaolinite in the different correction methods from the National Institute for Occupational Safety and Health Manual of Analytical Methods (NMAM) 7603 and the Mine Safety and Health Administration (MSHA) P-7. Four kaolinites showed significant differences between calibration curves with as-received and respirable size fractions for NMAM 7603 and seven for MSHA P-7. The quartz mass measured in 48 samples spiked with respirable fraction silica and kaolinite ranged between 0.28 and 23% (NMAM 7603) and 0.18 and 26% (MSHA P-7) of the expected applied mass when the kaolinite interference was corrected with respirable size fraction kaolinite. This is termed "deviation," not bias, because the applied mass is also subject to unknown variance. Generally, the deviations in the spiked samples are larger when corrected with the as-received size fraction of kaolinite than with the respirable size fraction. Results indicate that if a kaolinite correction with reference material of respirable size fraction is applied in current standard methods for quartz measurement in coal mine dust, the quartz result would be somewhat closer to the true exposure, although the actual mass difference would be small. Most kinds of kaolinite can be used for laboratory calibration, but preferably, the size fraction should be the same as the coal dust being collected.

  16. Nuclear recoil effect on g-factor of heavy ions: prospects for tests of quantum electrodynamics in a new region

    NASA Astrophysics Data System (ADS)

    Malyshev, A. V.; Shabaev, V. M.; Glazov, D. A.; Tupitsyn, I. I.

    2017-12-01

    The nuclear recoil effect on the g-factor of H- and Li-like heavy ions is evaluated to all orders in αZ. The calculations include an approximate treatment of the nuclear size and the electron-electron interaction corrections to the recoil effect. As a result, the second largest contribution to the theoretical uncertainty of the g-factor values of 208Pb79+ and 238U89+ is strongly reduced. Special attention is paid to tests of the QED recoil effect on the g-factor in experiments with heavy ions. It is found that, while the QED recoil effect on the g-factor value is masked by the uncertainties of the nuclear size and nuclear polarization contributions, it can be probed on a few-percent level in the specific difference of the g-factors of H- and Li-like heavy ions. This provides a unique opportunity to test QED in a new region: the strong-coupling regime beyond the Furry picture.

  17. Compensation of X-ray mirror shape-errors using refractive optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sawhney, Kawal, E-mail: Kawal.sawhney@diamond.ac.uk; Laundy, David; Pape, Ian

    2016-08-01

    Focusing of X-rays to nanometre scale focal spots requires high precision X-ray optics. For nano-focusing mirrors, height errors in the mirror surface retard or advance the X-ray wavefront and, after propagation to the focal plane, this distortion of the wavefront causes blurring of the focus, resulting in a limit on the spatial resolution. We describe here the implementation of a method for correcting the wavefront that is applied before a focusing mirror using custom-designed refracting structures which locally cancel out the wavefront distortion from the mirror. We demonstrate in measurements on a synchrotron radiation beamline a reduction in the size of the focal spot of a characterized test mirror by a factor of more than 10. This technique could be used to correct existing synchrotron beamline focusing and nanofocusing optics, providing a highly stable wavefront with low distortion for obtaining smaller focus sizes. This method could also correct multilayer or focusing crystal optics, allowing larger numerical apertures to be used in order to reduce the diffraction-limited focal spot size.

  18. Fractography and estimates of fracture origin size from fracture mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinn, G.D.; Swab, J.J.

    1996-12-31

    Fracture mechanics should be used routinely in fractographic analyses in order to verify that the correct feature has been identified as the fracture origin. This was highlighted in a recent Versailles Advanced Materials and Standards (VAMAS) fractographic analysis round robin. The practice of using fracture mechanics as an aid to fractographic interpretation is codified in a new ASTM Standard Practice. Conversely, very good estimates for fracture toughness often come from fractographic analysis of strength-tested specimens. In many instances, however, the calculated flaw size is different from the empirically measured flaw size. This paper reviews the factors which may cause the discrepancies.
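
    The fracture-mechanics cross-check described above typically rests on the relation K_Ic = Y * sigma_f * sqrt(c), so the expected origin (flaw) size is c = (K_Ic / (Y * sigma_f))^2. A worked sketch follows; the material values and the geometry factor Y are illustrative assumptions, not data from the paper:

      import math

      def critical_flaw_size_um(k_ic_mpa_sqrt_m, strength_mpa, y=1.24):
          """Estimate the critical flaw size c (micrometres) from
          K_Ic = Y * sigma * sqrt(c). Y ~ 1.24 is a common choice for a small
          semicircular surface flaw; other flaw geometries use different Y."""
          c_m = (k_ic_mpa_sqrt_m / (y * strength_mpa)) ** 2
          return c_m * 1.0e6

      # Illustrative ceramic: K_Ic ~ 4 MPa*sqrt(m), fracture strength ~ 600 MPa
      print("estimated flaw size: %.0f um" % critical_flaw_size_um(4.0, 600.0))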

  19. Testing the performance of dosimetry measurement standards for calibrating area and personnel dosimeters

    NASA Astrophysics Data System (ADS)

    Walwyn-Salas, G.; Czap, L.; Gomola, I.; Tamayo-García, J. A.

    2016-07-01

    The cylindrical NE2575 and spherical PTW32002 chamber types were tested in this paper to determine their performance for different source-chamber distances, field sizes, and two radiation qualities. To ensure an accurate measurement, a correction factor needs to be applied to the NE2575 measurements at different distances because of differences found between the reference point defined by the manufacturer and the effective point of measurement. This correction factor for the NE2575 secondary standard from the Center for Radiation Protection and Hygiene of Cuba was assessed with a 0.3% uncertainty using the results of three methods. Laboratories that use NE2575 chambers should take into consideration the performance characteristics tested in this paper to obtain accurate measurements.

  20. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy.

    PubMed

    Gurka, Matthew J; Kuperminc, Michelle N; Busby, Marjorie G; Bennis, Jacey A; Grossberg, Richard I; Houlihan, Christine M; Stevenson, Richard D; Henderson, Richard C

    2010-02-01

    To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I-V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. Slaughter's equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat -9.6/100 [SD 6.2]; 95% confidence interval [CI] -11.0 to -8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI -1.0 to 1.3) than existing equations. A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP.

  1. Laser induced damage thresholds and laser safety levels. Do the units of measurement matter?

    NASA Astrophysics Data System (ADS)

    Wood, R. M.

    1998-04-01

    The commonly used units of measurement for laser-induced damage are those of peak energy or power density. However, the laser-induced damage thresholds (LIDTs) of all materials are well known to depend on absorption, wavelength, spot size, and pulse length. As workers using these values become divorced from the theory, it becomes increasingly important to use the correct units and to understand the correct scaling factors. This paper summarizes the theory and highlights the danger of using the wrong LIDT units in the context of potentially hazardous materials, laser safety eyewear, and laser safety screens.

  2. Effect of formulated glyphosate and adjuvant tank mixes on atomization from aerial application flat fan nozzles

    USDA-ARS?s Scientific Manuscript database

    This study was designed to determine if the present USDA ARS Spray Nozzle models based on water plus non-ionic surfactant spray solutions could be used to estimate spray droplet size data for different spray formulations through use of experimentally determined correction factors or if full spray fo...

  3. HIV-risk characteristics in community corrections.

    PubMed

    Clark, C Brendan; McCullumsmith, Cheryl B; Waesche, Matthew C; Islam, M Aminul; Francis, Reginald; Cropsey, Karen L

    2013-01-01

    Individuals in the criminal justice system engage in behaviors that put them at high risk for HIV. This study sought to identify characteristics of individuals who are under community corrections supervision (eg, probation) and at risk for HIV. Approximately 25,000 individuals under community corrections supervision were assessed for HIV risk, and 5059 participants were deemed high-risk or no-risk. Of those, 1519 exhibited high sexual-risk (SR) behaviors, 203 exhibited injection drug risk (IVR), 957 exhibited both types of risk (SIVR), and 2380 exhibited no risk. Sociodemographic characteristics and drug of choice were then examined using univariate and binary logistic regression. Having a history of sexual abuse, not having insurance, and selecting any drug of choice were associated with all forms of HIV risk. However, the effect sizes associated with the various drugs of choice varied significantly by group. Aside from those common risk factors, very different patterns emerged. Female gender was a risk factor for the SR group but was less likely to be associated with IVR. Younger age was associated with SR, whereas older age was associated with IVR. Black race was a risk factor for SR but had a negative association with IVR and SIVR. Living in a shelter, living with relatives/friends, and being unemployed were all risk factors for IVR but were protective factors for SR. Distinct sociodemographic and substance use characteristics were associated with sexual versus injection drug use risk for individuals under community corrections supervision who were at risk for HIV. Information from this study could help identify high-risk individuals and allow tailoring of interventions.

  4. Ellipsoidal corrections for geoid undulation computations using gravity anomalies in a cap

    NASA Technical Reports Server (NTRS)

    Rapp, R. H.

    1981-01-01

    Ellipsoidal correction terms have been derived for geoid undulation computations when the Stokes equation using gravity anomalies in a cap is combined with potential coefficient information. The correction terms are long wavelength and depend on the cap size in which its gravity anomalies are given. Using the regular Stokes equation, the maximum correction for a cap size of 20 deg is -33 cm, which reduces to -27 cm when the Stokes function is modified by subtracting the value of the Stokes function at the cap radius. Ellipsoidal correction terms were also derived for the well-known Marsh/Chang geoids. When no gravity was used, the correction could reach 101 cm, while for a cap size of 20 deg the maximum correction was -45 cm. Global correction maps are given for a number of different cases. For work requiring accurate geoid computations these correction terms should be applied.
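
    The modification mentioned above amounts to replacing the Stokes function S(psi) by S*(psi) = S(psi) - S(psi0), where psi0 is the cap radius. The sketch below evaluates the standard closed form of S(psi) and this modified version for illustration; the ellipsoidal correction terms themselves are not reproduced here:

      import math

      def stokes(psi):
          """Standard closed-form Stokes function S(psi), with psi in radians."""
          s = math.sin(psi / 2.0)
          return (1.0 / s - 6.0 * s + 1.0 - 5.0 * math.cos(psi)
                  - 3.0 * math.cos(psi) * math.log(s + s * s))

      def modified_stokes(psi, cap_deg=20.0):
          """Stokes function minus its value at the cap radius, as mentioned above."""
          return stokes(psi) - stokes(math.radians(cap_deg))

      for deg in [1.0, 5.0, 10.0, 19.0]:
          print("psi = %4.1f deg  S = %8.3f  S* = %8.3f"
                % (deg, stokes(math.radians(deg)), modified_stokes(math.radians(deg))))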

  5. SU-F-T-577: Comparison of Small Field Dosimetry Measurements in Fields Shaped with Conical Applicators On Two Different Accelerating Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B; McEwen, M; Belec, J

    2016-06-15

    Purpose: To investigate small field dosimetry measurements and associated uncertainties when conical applicators are used to shape treatment fields from two different accelerating systems. Methods: Output factor measurements are made in water in beams from the CyberKnife radiosurgery system, which uses conical applicators to shape fields from a (flattening filter-free) 6 MV beam, and in a 6 MV beam from the Elekta Precise linear accelerator (with flattening filter) with BrainLab external conical applicators fitted to shape the field. The measurements use various detectors: (i) an Exradin A16 ion chamber, (ii) two Exradin W1 plastic scintillation detectors, (iii) a Sun Nuclear Edge diode, and (iv) two PTW microDiamond synthetic diamond detectors. Profiles are used for accurate detector positioning and to specify field size (FWHM). Output factor measurements are corrected with detector specific correction factors taken from the literature where available and/or from Monte Carlo simulations using the EGSnrc code system. Results: Differences in measurements of up to 1.7% are observed with a given detector type in the same beam (i.e., intra-detector variability). Corrected results from different detectors in the same beam (inter-detector differences) show deviations up to 3 %. Combining data for all detectors and comparing results from the two accelerators results in a 5.9% maximum difference for the smallest field sizes (FWHM=5.2–5.6 mm), well outside the combined uncertainties (∼1% for the smallest beams) and/or differences among detectors. This suggests that the FWHM of a measured profile is not a good specifier to compare results from different small fields with the same nominal energy. Conclusion: Large differences in results for both intra-detector variability and inter-detector differences suggest potentially high uncertainties in detector-specific correction factors. Differences between the results measured in circular fields from different accelerating systems provide insight into sources of variability in small field dosimetric measurements reported in the literature.

  6. Experimental investigation of the effect of air cavity size in cylindrical ionization chambers on the measurements in 60Co radiotherapy beams

    NASA Astrophysics Data System (ADS)

    Swanpalmer, John; Johansson, Karl-Axel

    2011-11-01

    In the late 1970s, Johansson et al (1978 Int. Symp. National and International Standardization of Radiation Dosimetry (Atlanta 1977) vol 2 (Vienna: IAEA) pp 243-70) reported experimentally determined displacement correction factors (pdis) for cylindrical ionization chamber dosimetry in 60Co and high-energy photon beams. These pdis factors have been implemented and are currently in use in a number of dosimetry protocols. However, the accuracy of these factors has recently been questioned by Wang and Rogers (2009a Phys. Med. Biol. 54 1609-20), who performed Monte Carlo simulations of the experiments performed by Johansson et al. They reported that the inaccuracy of the pdis factors originated from the normalization procedure used by Johansson et al. In their experiments, Johansson et al normalized the measured depth-ionization curves at the depth of maximum ionization for each of the different ionization chambers. In this study, we experimentally investigated the effect of air cavity size of cylindrical ionization chambers in a PMMA phantom and 60Co γ-beam. Two different pairs of air-filled cylindrical ionization chambers were used. The chambers in each pair had identical construction and materials but different air cavity volume (diameter). A 20 MeV electron beam was utilized to determine the ratio of the mass of air in the cavity of the two chambers in each pair. This ratio of the mass of air in each pair was then used to compare the ratios of the ionizations obtained at different depths in the PMMA phantom and 60Co γ-beam using the two pairs of chambers. The diameter of the air cavity of cylindrical ionization chambers influences both the depth at which the maximum ionization is observed and the ionization per unit mass of air at this depth. The correction determined at depths of 50 mm and 100 mm is smaller than the correction currently used in many dosimetry protocols. The results presented here agree with the findings of Wang and Rogers' Monte Carlo simulations and show that the normalization procedure employed by Johansson et al is not correct.

  7. The influence of Monte Carlo source parameters on detector design and dose perturbation in small field dosimetry

    NASA Astrophysics Data System (ADS)

    Charles, P. H.; Crowe, S. B.; Kairn, T.; Knight, R.; Hill, B.; Kenny, J.; Langton, C. M.; Trapp, J. V.

    2014-03-01

    To obtain accurate Monte Carlo simulations of small radiation fields, it is important to model the initial source parameters (electron energy and spot size) accurately. However, recent studies have shown that small field dosimetry correction factors are insensitive to these parameters. The aim of this work is to extend this concept to test if these parameters affect dose perturbations in general, which is important for detector design and calculating perturbation correction factors. The EGSnrc C++ user code cavity was used for all simulations. Varying amounts of air between 0 and 2 mm were deliberately introduced upstream of a diode, and the dose perturbation caused by the air was quantified. These simulations were then repeated using a range of initial electron energies (5.5 to 7.0 MeV) and electron spot sizes (0.7 to 2.2 FWHM). The resultant dose perturbations were large. For example, 2 mm of air caused a dose reduction of up to 31% when simulated with a 6 mm field size. However, these values did not vary by more than 2% when simulated across the full range of source parameters tested. If a detector is modified by the introduction of air, one can be confident that the response of the detector will be the same across all similar linear accelerators and the Monte Carlo modelling of each machine is not required.

  8. Nuclear Recoil Effect on the g-Factor of Heavy Ions: Prospects for Tests of Quantum Electrodynamics in a New Region

    NASA Astrophysics Data System (ADS)

    Malyshev, A. V.; Shabaev, V. M.; Glazov, D. A.; Tupitsyn, I. I.

    2017-12-01

    The nuclear recoil effect on the g-factor of H- and Li-like heavy ions is evaluated to all orders in αZ. The calculations include an approximate treatment of the nuclear size and the electron-electron interaction corrections to the recoil effect. As a result, the second largest contribution to the theoretical uncertainty of the g-factor values of 208Pb79+ and 238U89+ is strongly reduced. Special attention is paid to tests of the QED recoil effect on the g-factor in experiments with heavy ions. It is found that, while the QED recoil effect on the g-factor value is masked by the uncertainties of the nuclear size and nuclear polarization contributions, it can be probed on a few-percent level in the specific difference of the g-factors of H- and Li-like heavy ions. This provides a unique opportunity to test QED in a new region of the strong-coupling regime beyond the Furry picture.

  9. Properties of a commercial PTW-60019 synthetic diamond detector for the dosimetry of small radiotherapy beams.

    PubMed

    Lárraga-Gutiérrez, José Manuel; Ballesteros-Zebadúa, Paola; Rodríguez-Ponce, Miguel; García-Garduño, Olivia Amanda; de la Cruz, Olga Olinca Galván

    2015-01-21

    A CVD-based radiation detector has recently become commercially available from the manufacturer PTW-Freiburg (Germany). This detector has a sensitive volume of 0.004 mm³, a nominal sensitivity of 1 nC Gy⁻¹ and operates at 0 V. Unlike natural-diamond-based detectors, the CVD diamond detector exhibits a low dose rate dependence. The dosimetric properties investigated in this work were dose rate and angular dependence as well as detector sensitivity and linearity. Also, percentage depth dose, off-axis dose profiles and total scatter ratios were measured and compared against equivalent measurements performed with a stereotactic diode. A Monte Carlo simulation was carried out to estimate the CVD small beam correction factors for a 6 MV photon beam. The small beam correction factors were compared with those obtained from a stereotactic diode and ionization chambers in the same irradiation conditions. The experimental measurements were performed in 6 and 15 MV photon beams with the following square field sizes: 10 × 10, 5 × 5, 4 × 4, 3 × 3, 2 × 2, 1.5 × 1.5, 1 × 1 and 0.5 × 0.5 cm. The CVD detector showed an excellent signal stability (<0.2%) and linearity, negligible dose rate dependence (<0.2%) and a lower angular dependence of response. The percentage depth dose and off-axis dose profile measurements were comparable (within 1%) to the measurements performed with ionization chamber and diode in both conventional and small radiotherapy beams. For the 0.5 × 0.5 cm field, the measurements performed with the CVD detector showed a partial volume effect for all the dosimetric quantities measured. The Monte Carlo simulation showed that the small beam correction factors were close to unity (within 1.0%) for field sizes ≥1 cm. The synthetic diamond detector had high linearity, low angular dependence and negligible dose rate dependence, and its response was energy independent within 1% for field sizes from 1.0 to 5.0 cm. This work provides new data showing the performance of the CVD detector compared against a high spatial resolution diode. It also presents a comparison of the CVD small beam correction factors with those of a diode and an ionization chamber for a 6 MV photon beam.

  10. Fluence correction factor for graphite calorimetry in a clinical high-energy carbon-ion beam.

    PubMed

    Lourenço, A; Thomas, R; Homer, M; Bouchard, H; Rossomme, S; Renaud, J; Kanai, T; Royle, G; Palmans, H

    2017-04-07

    The aim of this work is to develop and adapt a formalism to determine absorbed dose to water from graphite calorimetry measurements in carbon-ion beams. Fluence correction factors, [Formula: see text], needed when using a graphite calorimeter to derive dose to water, were determined in a clinical high-energy carbon-ion beam. Measurements were performed in a 290 MeV/n carbon-ion beam with a field size of 11 × 11 cm2, without modulation. In order to sample the beam, a plane-parallel Roos ionization chamber was chosen for its small collecting volume in comparison with the field size. Experimental information on fluence corrections was obtained from depth-dose measurements in water. This procedure was repeated with graphite plates in front of the water phantom. Fluence corrections were also obtained with Monte Carlo simulations through the implementation of three methods based on (i) the fluence distributions differential in energy, (ii) a ratio of calculated doses in water and graphite at equivalent depths and (iii) simulations of the experimental setup. The [Formula: see text] term increased in depth from 1.00 at the entrance toward 1.02 at a depth near the Bragg peak, and the average difference between experimental and numerical simulations was about 0.13%. Compared to proton beams, there was no reduction of the [Formula: see text] due to alpha particles because the secondary particle spectrum is dominated by projectile fragmentation. By developing a practical dose conversion technique, this work contributes to improving the determination of absolute dose to water from graphite calorimetry in carbon-ion beams.

  11. Design and experimental testing of air slab caps which convert commercial electron diodes into dual purpose, correction-free diodes for small field dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charles, P. H., E-mail: paulcharles111@gmail.com; Cranmer-Sargison, G.; Thwaites, D. I.

    2014-10-15

    Purpose: Two diodes which do not require correction factors for small field relative output measurements are designed and validated using experimental methodology. This was achieved by adding an air layer above the active volume of the diode detectors, which canceled out the increase in response of the diodes in small fields relative to standard field sizes. Methods: Due to the increased density of silicon and other components within a diode, additional electrons are created. In very small fields, a very small air gap acts as an effective filter of electrons with a high angle of incidence. The aim was to design a diode that balanced these perturbations to give a response similar to a water-only geometry. Three thicknesses of air were placed at the proximal end of a PTW 60017 electron diode (PTWe) using an adjustable “air cap”. A set of output ratios (OR_Det^fclin) for square field sizes of side length down to 5 mm was measured using each air thickness and compared to OR_Det^fclin measured using an IBA stereotactic field diode (SFD). The small field output correction factor k_Qclin,Qmsr^fclin,fmsr was transferred from the SFD to the PTWe diode and plotted as a function of air gap thickness for each field size. This enabled the optimal air gap thickness to be obtained by observing which thickness of air was required such that k_Qclin,Qmsr^fclin,fmsr was equal to 1.00 at all field sizes. A similar procedure was used to find the optimal air thickness required to make a modified Sun Nuclear EDGE detector (EDGEe) which is “correction-free” in small field relative dosimetry. In addition, the feasibility of experimentally transferring k_Qclin,Qmsr^fclin,fmsr values from the SFD to unknown diodes was tested by comparing the experimentally transferred k_Qclin,Qmsr^fclin,fmsr values for unmodified PTWe and EDGEe diodes to Monte Carlo simulated values. Results: 1.0 mm of air was required to make the PTWe diode correction-free. This modified diode (PTWe_air) produced output factors equivalent to those in water at all field sizes (5–50 mm). The optimal air thickness required for the EDGEe diode was found to be 0.6 mm. The modified diode (EDGEe_air) produced output factors equivalent to those in water, except at field sizes of 8 and 10 mm where it measured approximately 2% greater than the relative dose to water. The experimentally calculated k_Qclin,Qmsr^fclin,fmsr for both the PTWe and the EDGEe diodes (without air) matched Monte Carlo simulated results, thus proving that it is feasible to transfer k_Qclin,Qmsr^fclin,fmsr from one commercially available detector to another using experimental methods and the recommended experimental setup. Conclusions: It is possible to create a diode which does not require corrections for small field output factor measurements. This has been performed and verified experimentally. The ability of a detector to be “correction-free” depends strongly on its design and composition. A nonwater-equivalent detector can only be “correction-free” if competing perturbations of the beam cancel out at all field sizes. This should not be confused with true water equivalency of a detector.

  12. Atmospheric scattering corrections to solar radiometry

    NASA Technical Reports Server (NTRS)

    Box, M. A.; Deepak, A.

    1979-01-01

    Whenever a solar radiometer is used to measure direct solar radiation, some diffuse sky radiation invariably enters the detector's field of view along with the direct beam. Therefore, the atmospheric optical depth obtained by the use of Bouguer's transmission law (also called the Beer-Lambert law), which is valid only for direct radiation, needs to be corrected by taking account of the scattered radiation. This paper discusses the correction factors needed to account for the diffuse (i.e., singly and multiply scattered) radiation and the algorithms developed for retrieving aerosol size distribution from such measurements. For a radiometer with a small field of view (half-cone angle of less than 5 deg) and relatively clear skies (optical depths less than 0.4), it is shown that the total diffuse contribution represents approximately 1% of the total intensity.
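
    The correction described above can be illustrated with Bouguer's law: the optical depth is retrieved as tau = -(1/m) ln(V/V0), after an estimated diffuse contribution is removed from the measured signal V. The sketch below uses made-up calibration and airmass values and a nominal 1% diffuse fraction:

      import math

      def optical_depth(v_measured, v0, airmass, diffuse_fraction=0.0):
          """Retrieve total optical depth with Bouguer's (Beer-Lambert) law after
          removing an estimated diffuse-sky contribution from the measured signal."""
          v_direct = v_measured * (1.0 - diffuse_fraction)
          return -math.log(v_direct / v0) / airmass

      v0, m = 1.000, 2.0                    # calibration signal and airmass (illustrative)
      v = v0 * math.exp(-0.30 * m) * 1.01   # true tau = 0.30 plus ~1% diffuse light

      print("uncorrected tau: %.4f" % optical_depth(v, v0, m))
      print("corrected tau:   %.4f" % optical_depth(v, v0, m, diffuse_fraction=0.01))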

  13. SU-G-BRB-12: Polarity Effects in Small Volume Ionization Chambers in Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arora, V; Parsai, E; Mathew, D

    2016-06-15

    Purpose: Dosimetric quantities such as the polarity correction factor (Ppol) are important parameters for determining the absorbed dose and can influence the choice of dosimeter. Ppol has been shown to depend on beam energy, chamber design, and field size. This study investigates the field size and detector orientation dependence of Ppol in small fields for several commercially available micro-chambers. Methods: We evaluate the Exradin A26, Exradin A16, PTW 31014, PTW 31016, and two prototype IBA CC-01 micro-chambers in both horizontal and vertical orientations. Measurements were taken at 10 cm depth and 100 cm SSD in a Wellhofer BluePhantom2. Measurements were made at square fields of 0.6, 0.8, 1.0, 1.2, 1.4, 2.0, 2.4, 3.0, and 5.0 cm on each side using 6 MV with both ±300 VDC biases. Ppol was evaluated as described in TG-51, reported using −300 VDC bias for Mraw. Ratios of Ppol measured in the clinical field to the reference field are presented. Results: A field size dependence of Ppol was observed for all chambers, with increased variations when mounted vertically. The maximum variation observed in Ppol over all chambers mounted horizontally was <1%, and occurred at different field sizes for different chambers. Vertically mounted chambers demonstrated variations as large as 3.2%, always at the smallest field sizes. Conclusion: Large variations in Ppol were observed for vertically mounted chambers compared to horizontal mountings. Horizontal mountings demonstrated a complicated relationship between polarity variation and field size, probably relating to differing details in each chamber's construction. Vertically mounted chambers consistently demonstrated the largest Ppol variations for the smallest field sizes. Measurements obtained with a horizontal mounting appear not to need significant polarity corrections for relative measurements, while those obtained using a vertical mounting should be corrected for variations in Ppol.

  14. Mesopic pupil size in a refractive surgery population (13,959 eyes).

    PubMed

    Linke, Stephan J; Baviera, Julio; Munzer, Gur; Fricke, Otto H; Richard, Gisbert; Katz, Toam

    2012-08-01

    To evaluate factors that may affect mesopic pupil size in refractive surgery candidates. Medical records of 13,959 eyes of 13,959 refractive surgery candidates were reviewed, and one eye per subject was selected randomly for statistical analysis. Detailed ophthalmological examination data were obtained from medical records. Preoperative measurements included uncorrected distance visual acuity, corrected distance visual acuity, manifest and cycloplegic refraction, topography, slit lamp examination, and funduscopy. Mesopic pupil size measurements were performed with Colvard pupillometer. Relationship between mesopic pupil size and age, gender, refractive state, average keratometry, and pachymetry (thinnest point) were analyzed by means of ANOVA (+ANCOVA) and multivariate regression analyses. Overall mesopic pupil size was 6.45 ± 0.82 mm, and mean age was 36.07 years. Mesopic pupil size was 5.96 ± 0.8 mm in hyperopic astigmatism, 6.36 ± 0.83 mm in high astigmatism, and 6.51 ± 0.8 mm in myopic astigmatism. The difference in mesopic pupil size between all refractive subgroups was statistically significant (p < 0.001). Age revealed the strongest correlation (r = -0.405, p < 0.001) with mesopic pupil size. Spherical equivalent showed a moderate correlation (r = -0.136), whereas keratometry (r = -0.064) and pachymetry (r = -0.057) had a weak correlation with mesopic pupil size. No statistically significant difference in mesopic pupil size was noted regarding gender and ocular side. The sum of all analyzed factors (age, refractive state, keratometry, and pachymetry) can only predict the expected pupil size in <20% (R = 0.179, p < 0.001). Our analysis confirmed that age and refractive state are determinative factors on mesopic pupil size. Average keratometry and minimal pachymetry exhibited a statistically significant, but clinically insignificant, impact on mesopic pupil size.

  15. Conception and realization of a parallel-plate free-air ionization chamber for the absolute dosimetry of an ultrasoft X-ray beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groetz, J.-E., E-mail: jegroetz@univ-fcomte.fr; Mavon, C.; Fromm, M.

    2014-08-15

    We report the design of a millimeter-sized parallel plate free-air ionization chamber (IC) aimed at determining the absolute air kerma rate of an ultra-soft X-ray beam (E = 1.5 keV). The size of the IC was determined so that the measurement volume satisfies the condition of charged-particle equilibrium. The correction factors necessary to properly measure the absolute kerma using the IC have been established. Particular attention was given to the determination of the effective mean energy for the 1.5 keV photons using the PENELOPE code. Other correction factors were determined by means of computer simulation (COMSOL™ and FLUKA). Measurements of air kerma rates under specific operating parameters of the lab-bench X-ray source have been performed at various distances from that source and compared to Monte Carlo calculations. We show that the developed ionization chamber makes it possible to determine accurate photon fluence rates in routine work and will constitute substantial time-savings for future radiobiological experiments based on the use of ultra-soft X-rays.
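
    For orientation, the free-air chamber formalism converts the collected charge per unit mass of air into kerma and multiplies by a chain of correction factors. The sketch below shows that arithmetic with entirely hypothetical numbers; the paper's actual factors are not reproduced here.

```python
import math

# Schematic free-air ionization chamber relation:
#   K_air = (Q / m_air) * (W_air / e) * product(k_i)
# with Q the collected charge, m_air the mass of air in the measuring volume,
# W_air/e the mean energy expended per unit charge, and k_i the correction
# factors.  Every numerical value below is a placeholder.

W_OVER_E = 33.97           # J/C for dry air
rho_air = 1.204            # kg/m^3 (approx., 20 degC and 101.325 kPa)
volume_m3 = 2.0e-8         # hypothetical measuring volume (20 mm^3)
charge_C = 5.0e-12         # hypothetical charge collected during the measurement

# Hypothetical correction factors (attenuation, recombination, polarity,
# electric-field distortion, ...).
corrections = {"k_att": 1.012, "k_rec": 1.003, "k_pol": 1.001, "k_dist": 0.998}

k_total = math.prod(corrections.values())
kerma_Gy = (charge_C / (rho_air * volume_m3)) * W_OVER_E * k_total
print(f"air kerma = {kerma_Gy:.3e} Gy (combined correction {k_total:.4f})")
```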

  16. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    NASA Astrophysics Data System (ADS)

    La Haye, R. J.; Paz-Soldan, C.; Strait, E. J.

    2015-02-01

    DIII-D experiments show that fully penetrated resonant n = 1 error field locked modes in ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n = 2/1) static error fields are shielded in ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption. Error field correction (EFC) is performed on DIII-D (in ITER relevant shape and safety factor q95 ≳ 3) with either the n = 1 C-coil (no handedness) or the n = 1 I-coil (with ‘dominantly’ resonant field pitch). Despite EFC, which allows significantly lower plasma density (a ‘figure of merit’) before penetration occurs, the resulting saturated islands have similar large size; they differ only in the phase of the locked mode after typically being pulled (by up to 30° toroidally) in the electron diamagnetic drift direction as they grow to saturation. Island amplification and phase shift are explained by a second change-of-state in which the classical tearing index changes from stable to marginal by the presence of the island, which changes the current density profile. The eventual island size is thus governed by the inherent stability and saturation mechanism rather than the driving error field.

  17. Determination of the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for detectors used with an 800 MU/min CyberKnife(®) system equipped with fixed collimators and a study of detector response to small photon beams using a Monte Carlo method.

    PubMed

    Moignier, C; Huet, C; Makovicka, L

    2014-07-01

    In a previous work, output ratio (ORdet) measurements were performed for the 800 MU/min CyberKnife(®) at the Oscar Lambret Center (COL, France) using several commercially available detectors as well as using two passive dosimeters (EBT2 radiochromic film and micro-LiF TLD-700). The primary aim of the present work was to determine by Monte Carlo calculations the output factor in water (OFMC,w) and the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors. The secondary aim was to study the detector response in small beams using Monte Carlo simulation. The LINAC head of the CyberKnife(®) was modeled using the PENELOPE Monte Carlo code system. The primary electron beam was modeled using a monoenergetic source with a radial gaussian distribution. The model was adjusted by comparisons between calculated and measured lateral profiles and tissue-phantom ratios obtained with the largest field. In addition, the PTW 60016 and 60017 diodes, PTW 60003 diamond, and micro-LiF were modeled. Output ratios with modeled detectors (ORMC,det) and OFMC,w were calculated and compared to measurements, in order to validate the model for the smallest fields and to calculate the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors, respectively. For the study of the influence of detector characteristics on their response in small beams: first, the impact of the atomic composition and the mass density of silicon, LiF, and diamond materials was investigated; second, the material, the volume averaging, and the coating effects of detecting material on the detector responses were estimated. Finally, the influence of the size of the silicon chip on diode response was investigated. Looking at measurement ratios (uncorrected output factors) compared to the OFMC,w, the PTW 60016, 60017 and Sun Nuclear EDGE diodes systematically over-responded (about +6% for the 5 mm field), whereas the PTW 31014 Pinpoint chamber systematically under-responded (about -12% for the 5 mm field). ORdet measured with the SFD diode and PTW 60003 diamond detectors were in good agreement with OFMC,w except for the 5 mm field size (about -7.5% for the diamond and +3% for the SFD). A good agreement with OFMC,w was obtained with the EBT2 film and micro-LiF dosimeters (deviation less than 1.4% for all fields investigated). k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for several detectors used in this work have been calculated. The impact of atomic composition on the dosimetric response of detectors was found to be insignificant, unlike the mass density and size of the detecting material. The results obtained with the passive dosimeters showed that they can be used for small beam OF measurements without correction factors. The study of detector response showed that ORdet depends on the mass density, the volume averaging, and the coating effects of the detecting material. Each effect was quantified for the PTW 60016 and 60017 diodes, the micro-LiF, and the PTW 60003 diamond detectors. None of the active detectors used in this work can be recommended as a reference for small field dosimetry, but an improved diode detector with a smaller silicon chip coated with tissue-equivalent material is anticipated (by simulation) to be a reliable small field dosimetric detector in a nonequilibrium field.
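
    Once the Monte Carlo output factor in water and a detector's measured output ratio are known, the correction factor is their quotient and the corrected output factor follows by multiplication. The sketch below illustrates this relation with placeholder values, not the paper's results.

```python
# Sketch of how a detector-specific small-field correction factor is obtained
# from a Monte Carlo output factor in water (OF_MC,w) and a measured detector
# output ratio (OR_det):
#   k_{Qclin,Qmsr}^{fclin,fmsr} = OF_MC,w / OR_det
# All values below are illustrative placeholders.

field_mm = [5, 7.5, 10, 15, 25, 60]
of_mc_w  = {5: 0.66, 7.5: 0.80, 10: 0.86, 15: 0.92, 25: 0.96, 60: 1.00}

# Hypothetical uncorrected output ratios for a diode that over-responds
# in the smallest fields.
or_diode = {5: 0.70, 7.5: 0.82, 10: 0.87, 15: 0.92, 25: 0.96, 60: 1.00}

for f in field_mm:
    k = of_mc_w[f] / or_diode[f]
    print(f"{f:>4} mm field: k = {k:.3f}, corrected OF = {or_diode[f] * k:.3f}")
```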

  18. Molecular density functional theory of water describing hydrophobicity at short and long length scales

    NASA Astrophysics Data System (ADS)

    Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2013-10-01

    We present an extension of our recently introduced molecular density functional theory of water [G. Jeanmairet et al., J. Phys. Chem. Lett. 4, 619 (2013)] to the solvation of hydrophobic solutes of various sizes, going from angstroms to nanometers. The theory is based on the quadratic expansion of the excess free energy in terms of two classical density fields: the particle density and the multipolar polarization density. Its implementation requires as input a molecular model of water and three measurable bulk properties, namely, the structure factor and the k-dependent longitudinal and transverse dielectric susceptibilities. The fine three-dimensional water structure around small hydrophobic molecules is found to be well reproduced. In contrast, the computed solvation free-energies appear overestimated and do not exhibit the correct qualitative behavior when the hydrophobic solute is grown in size. These shortcomings are corrected, in the spirit of the Lum-Chandler-Weeks theory, by complementing the functional with a truncated hard-sphere functional acting beyond quadratic order in density, and making the resulting functional compatible with the Van-der-Waals theory of liquid-vapor coexistence at long range. Compared to available molecular simulations, the approach yields reasonable solvation structure and free energy of hard or soft spheres of increasing size, with a correct qualitative transition from a volume-driven to a surface-driven regime at the nanometer scale.

  19. Improving the accuracy of ionization chamber dosimetry in small megavoltage x-ray fields

    NASA Astrophysics Data System (ADS)

    McNiven, Andrea L.

    The dosimetry of small x-ray fields is difficult, but important, in many radiation therapy delivery methods. The accuracy of ion chambers for small field applications, however, is limited due to the relatively large size of the chamber with respect to the field size, leading to partial volume effects, lateral electronic disequilibrium and calibration difficulties. The goal of this dissertation was to investigate the use of ionization chambers for the purpose of dosimetry in small megavoltage photon beams with the aim of improving clinical dose measurements in stereotactic radiotherapy and helical tomotherapy. A new method for the direct determination of the sensitive volume of small-volume ion chambers using micro computed tomography (µCT) was investigated using four nominally identical small-volume (0.56 cm³) cylindrical ion chambers. Agreement between their measured relative volume and ionization measurements (within 2%) demonstrated the feasibility of volume determination through µCT. Cavity-gas calibration coefficients were also determined, demonstrating the promise for accurate ion chamber calibration based partially on µCT. The accuracy of relative dose factor measurements in 6 MV stereotactic x-ray fields (5 to 40 mm diameter) was investigated using a set of prototype plane-parallel ionization chambers (diameters of 2, 4, 10 and 20 mm). Chamber and field size specific correction factors (CSF_Q), which account for perturbation of the secondary electron fluence, were calculated using Monte Carlo simulation methods (BEAM/EGSnrc simulations). These correction factors (e.g., CSF_Q = 1.76 for the 2 mm chamber in a 5 mm field) allow for accurate relative dose factor (RDF) measurement when applied to ionization readings, under conditions of electronic disequilibrium. With respect to the dosimetry of helical tomotherapy, a novel application of the ion chambers was developed to characterize the fan beam size and effective dose rate. Characterization was based on an adaptation of the computed tomography dose index (CTDI), a concept normally used in diagnostic radiology. This involved experimental determination of the fan beam thickness using the ion chambers to acquire fan beam profiles and extrapolation to a 'zero-size' detector. In conclusion, improvements have been made in the accuracy of small field dosimetry measurements in stereotactic radiotherapy and helical tomotherapy. This was completed through introduction of an original technique involving micro-CT imaging for sensitive volume determination and potentially ion chamber calibration coefficients, the use of appropriate Monte Carlo derived correction factors for RDF measurement, and the exploitation of the partial volume effect for helical tomotherapy fan beam dosimetry. With improved dosimetry for a wide range of challenging small x-ray field situations, it is expected that the patient's radiation safety will be maintained, and that clinical trials will adopt calibration protocols specialized for modern radiotherapy with small fields or beamlets. Keywords: radiation therapy, ionization chambers, small field dosimetry, stereotactic radiotherapy, helical tomotherapy, micro-CT.

  20. Clinical implementation of MOSFET detectors for dosimetry in electron beams.

    PubMed

    Bloemen-van Gurp, Esther J; Minken, Andre W H; Mijnheer, Ben J; Dehing-Oberye, Cary J G; Lambin, Philippe

    2006-09-01

    To determine the factors converting the reading of a MOSFET detector placed on the patient's skin without additional build-up to the dose at the depth of dose maximum (D(max)) and investigate their feasibility for in vivo dose measurements in electron beams. Factors were determined to relate the reading of a MOSFET detector to D(max) for 4 - 15 MeV electron beams in reference conditions. The influence of variation in field size, SSD, angle and field shape on the MOSFET reading, obtained without additional build-up, was evaluated using 4, 8 and 15 MeV beams and compared to ionisation chamber data at the depth of dose maximum (z(max)). Patient entrance in vivo measurements included 40 patients, mostly treated for breast tumours. The MOSFET reading, converted to D(max), was compared to the dose prescribed at this depth. The factors to convert MOSFET reading to D(max) vary between 1.33 and 1.20 for the 4 and 15 MeV beams, respectively. The SSD correction factor is approximately 8% for a change in SSD from 95 to 100 cm, and 2% for each 5-cm increment above 100 cm SSD. A correction for fields having sides smaller than 6 cm and for irregular field shape is also recommended. For fields up to 20 x 20 cm(2) and for oblique incidence up to 45 degrees, a correction is not necessary. Patient measurements demonstrated deviations from the prescribed dose with a mean difference of -0.7% and a standard deviation of 2.9%. Performing dose measurements with MOSFET detectors placed on the patient's skin without additional build-up is a well suited technique for routine dose verification in electron beams, when applying the appropriate conversion and correction factors.
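
    The conversion described here is multiplicative: the surface reading is scaled by an energy-specific conversion factor and by correction factors for setup deviations. A hedged sketch follows; apart from the 4 MeV conversion factor of 1.33 quoted above, all values are invented placeholders.

```python
# Hedged sketch of converting a MOSFET reading taken on the skin (no build-up)
# to the dose at the depth of dose maximum, D(max).

def dose_at_dmax(reading_cGy: float, conversion: float, corrections: dict) -> float:
    """D(max) = reading * conversion factor * product of correction factors."""
    dose = reading_cGy * conversion
    for factor in corrections.values():
        dose *= factor
    return dose

reading = 148.0                 # hypothetical calibrated MOSFET reading (cGy)
cf_4mev = 1.33                  # conversion factor for the 4 MeV beam (from the abstract)
corr = {"ssd": 1.02, "field_size": 1.01, "field_shape": 1.00}   # hypothetical corrections

print(f"D(max) = {dose_at_dmax(reading, cf_4mev, corr):.1f} cGy")
```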

  1. Assessment and correction of skinfold thickness equations in estimating body fat in children with cerebral palsy

    PubMed Central

    GURKA, MATTHEW J; KUPERMINC, MICHELLE N; BUSBY, MARJORIE G; BENNIS, JACEY A; GROSSBERG, RICHARD I; HOULIHAN, CHRISTINE M; STEVENSON, RICHARD D; HENDERSON, RICHARD C

    2010-01-01

    AIM To assess the accuracy of skinfold equations in estimating percentage body fat in children with cerebral palsy (CP), compared with assessment of body fat from dual energy X-ray absorptiometry (DXA). METHOD Data were collected from 71 participants (30 females, 41 males) with CP (Gross Motor Function Classification System [GMFCS] levels I–V) between the ages of 8 and 18 years. Estimated percentage body fat was computed using established (Slaughter) equations based on the triceps and subscapular skinfolds. A linear model was fitted to assess the use of a simple correction to these equations for children with CP. RESULTS Slaughter’s equations consistently underestimated percentage body fat (mean difference compared with DXA percentage body fat −9.6/100 [SD 6.2]; 95% confidence interval [CI] −11.0 to −8.1). New equations were developed in which a correction factor was added to the existing equations based on sex, race, GMFCS level, size, and pubertal status. These corrected equations for children with CP agree better with DXA (mean difference 0.2/100 [SD=4.8]; 95% CI −1.0 to 1.3) than existing equations. INTERPRETATION A simple correction factor to commonly used equations substantially improves the ability to estimate percentage body fat from two skinfold measures in children with CP. PMID:19811518

  2. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    PubMed

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), as well as determines their influences on the quantification accuracy and partial volume effect (PVE). A special focus included the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations up to 24.06 % measured to true activity), whereas no influence of the emission activity on the CT attenuation correction was identified (deviations <3 % for measured to true activity). The quantification accuracy was substantially influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms and the count statistics. Depending on the reconstruction algorithm, only ∼30-40 % of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms uncovered substantially increased recovery values compared to the analytical and 2D iterative reconstruction algorithms (up to 70.46 % and 80.82 % recovery for the smallest and largest sphere using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction algorithm and the applied corrections. Particularly, the influence of the emission activity during the transmission measurement performed with a Co-57 source must be considered. To receive comparable results, also among different scanner configurations, standardization of the acquisition (imaging parameters, as well as applied reconstruction and correction protocols) is necessary.

  3. Backscatter correction factor for megavoltage photon beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Yida; Zhu, Timothy C.

    2011-10-15

    Purpose: For routine clinical dosimetry of photon beams, it is often necessary to know the minimum thickness of backscatter phantom material to ensure that full backscatter condition exists. Methods: In case of insufficient backscatter thickness, one can determine the backscatter correction factor, BCF(s,d,t), defined as the ratio of absorbed dose measured on the central-axis of a phantom with backscatter thickness of t to that with full backscatter for square field size s and forward depth d. Measurements were performed in SAD geometry for 6 and 15 MV photon beams using a 0.125 cc thimble chamber for field sizes between 10 x 10 and 30 x 30 cm at depths between d_max (1.5 cm for 6 MV and 3 cm for 15 MV) and 20 cm. Results: A convolution method was used to calculate BCF using Monte-Carlo simulated point-spread kernels generated for clinical photon beams for energies between Co-60 and 24 MV. The convolution calculation agrees with the experimental measurements to within 0.8% with the same physical trend. The value of BCF deviates more from 1 for lower energies and larger field sizes. According to our convolution calculation, the minimum BCF occurs at forward depth d_max and 40 x 40 cm field size, 0.970 for 6 MV and 0.983 for 15 MV. Conclusions: The authors concluded that backscatter thickness is 6.0 cm for 6 MV and 4.0 cm for 15 MV for field size up to 10 x 10 cm when BCF = 0.998. If 4 cm backscatter thickness is used, BCF is 0.997 and 0.983 for field size of 10 x 10 and 40 x 40 cm for 6 MV, and is 0.998 and 0.990 for 10 x 10 and 40 x 40 cm for 15 MV, respectively.
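
    The BCF definition above is a simple dose ratio, and applying it in reverse recovers the full-backscatter dose from a measurement with limited backscatter. A minimal sketch with hypothetical readings, loosely in the spirit of the quoted minimum value of 0.970, is shown below.

```python
# Minimal sketch of the backscatter correction factor (BCF) defined above:
# the ratio of dose measured with a finite backscatter thickness t to dose
# measured with full backscatter, for field size s and forward depth d.
# Readings are hypothetical.

def bcf(dose_with_thickness_t: float, dose_full_backscatter: float) -> float:
    return dose_with_thickness_t / dose_full_backscatter

d_thin = 97.0    # cGy measured with limited backscatter material
d_full = 100.0   # cGy measured with full backscatter

factor = bcf(d_thin, d_full)
print(f"BCF = {factor:.3f}")
print(f"full-backscatter dose inferred from the thin setup: {d_thin / factor:.1f} cGy")
```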

  4. Experimental investigation on the accuracy of plastic scintillators and of the spectrum discrimination method in small photon fields.

    PubMed

    Papaconstadopoulos, Pavlos; Archambault, Louis; Seuntjens, Jan

    2017-02-01

    To investigate the accuracy of output factor measurements using a commercial (Exradin W1, SI) and a prototype, "in-house" developed, plastic scintillation dosimeter (PSD) in small photon fields. Repetitive detector-specific output factor OF_det measurements were performed in water (parallel to the CAX) using two W1 PSDs (SI), a PTW microLion, a PTW microDiamond and an unshielded diode D1V (SI) to which Monte Carlo calculated correction factors were applied. Four sets of repetitive measurements were performed with the W1 PSD positioned parallel and perpendicular to the CAX, each set on a different day, and with analytically calculated volume averaging corrections applied. The W1 OF_det measurements were compared to measurements using an "in-house" developed PSD in water (CHUQ) and both were validated against a previously commissioned Monte Carlo beam model in small photon fields. The performance of the spectrum discrimination calibration procedure was evaluated under different fiber orientations and wavelength threshold choices and the impact on the respective OF_det was reported. For all detectors in the study an excellent agreement was observed down to a field size of 1 × 1 cm². For the smallest field size of 0.5 × 0.5 cm², the W1 PSDs presented OF_det readings higher by 3.8 to 5.0% relative to the mean corrected OF_det of the rest of the detectors and by 5.8 to 6.1% relative to the CHUQ PSD. The repetitive W1 OF_det measurements in water (parallel CAX) were higher by 3.9% relative to the OF_det measurements in Solid Water™ (perpendicular CAX) even after volume averaging corrections were applied, indicating a potential fiber orientation dependency in small fields. Uncertainties in jaw and detector repositioning as well as source variations with time were estimated to be less than 0.9% (1σ) for the W1 under both orientations. The CHUQ PSD agreed with the MC dose calculations in water, for the smallest field size, within 1.1-1.7% before any corrections and within 0.3-0.8% after volume averaging corrections. The spectrum discrimination method provided reproducible Cherenkov spectra under the different calibration set-ups with noisier spectra extracted if the calibration is performed in water and parallel to the CAX. The impact of fiber orientation and wavelength threshold during calibration on OF_det was in general minimal. Clinically relevant differences were observed between similar scintillator dosimeters in photon fields smaller than 1 × 1 cm². Further research on PSDs is needed that can explain the origin of these differences especially related to the Cherenkov spectrum dependencies on the optical fiber technical characteristics. © 2016 American Association of Physicists in Medicine.

  5. Analyses of factors of crash avoidance maneuvers using the general estimates system.

    PubMed

    Yan, Xuedong; Harb, Rami; Radwan, Essam

    2008-06-01

    Taking an effective corrective action to a critical traffic situation provides drivers an opportunity to avoid crash occurrence and minimize crash severity. The objective of this study is to investigate the relationship between the probability of taking corrective actions and the characteristics of drivers, vehicles, and driving environments. Using the 2004 GES crash database, this study classified drivers who encountered critical traffic events (identified as P_CRASH3 in the GES database) into two pre-crash groups: corrective avoidance actions group and no corrective avoidance actions group. Single and multiple logistic regression analyses were performed to identify potential traffic factors associated with the probability of drivers taking corrective actions. The regression results showed that the driver/vehicle factors associated with the probability of taking corrective actions include: driver age, gender, alcohol use, drug use, physical impairments, distraction, sight obstruction, and vehicle type. In particular, older drivers, female drivers, drug/alcohol use, physical impairment, distraction, or poor visibility may increase the probability of failing to attempt to avoid crashes. Moreover, drivers of larger size vehicles are 42.5% more likely to take corrective avoidance actions than passenger car drivers. On the other hand, the significant environmental factors correlated with the drivers' crash avoidance maneuver include: highway type, number of lanes, divided/undivided highway, speed limit, highway alignment, highway profile, weather condition, and surface condition. Some adverse highway environmental factors, such as horizontal curves, vertical curves, worse weather conditions, and slippery road surface conditions are correlated with a higher probability of crash avoidance maneuvers. These results may seem counterintuitive but they can be explained by the fact that motorists may be more likely to drive cautiously in those adverse driving environments. The analyses revealed that drivers' distraction could be the highest risk factor leading to the failure of attempting to avoid crashes. Further analyses entailing distraction causes (e.g., cellular phone use) and their possible countermeasures need to be conducted. The age and gender factors are overrepresented in the "no avoidance maneuver." A possible solution could involve the integration of a new function in the current ITS technologies. A personalized system, which could be related to the expected type of maneuver for a driver with certain characteristics, would assist different drivers with different characteristics to avoid crashes. Further crash database studies are recommended to investigate the association of drivers' emergency maneuvers such as braking, steering, or their combination with crash severity.

  6. Wavefront control of high-power laser beams in the National Ignition Facility (NIF)

    NASA Astrophysics Data System (ADS)

    Zacharias, Richard A.; Bliss, Erlan S.; Winters, Scott; Sacks, Richard A.; Feldman, Mark; Grey, Andrew; Koch, Jeffrey A.; Stolz, Christopher J.; Toeppen, John S.; Van Atta, Lewis; Woods, Bruce W.

    2000-04-01

    The use of lasers as the driver for inertial confinement fusion and weapons physics experiments is based on their ability to produce high-energy short pulses in a beam with low divergence. Indeed, the focusability of high quality laser beams far exceeds alternate technologies and is a major factor in the rationale for building high power lasers for such applications. The National Ignition Facility (NIF) is a large, 192-beam, high-power laser facility under construction at the Lawrence Livermore National Laboratory for fusion and weapons physics experiments. Its uncorrected minimum focal spot size is limited by laser system aberrations. The NIF includes a Wavefront Control System to correct these aberrations to yield a focal spot small enough for its applications. Sources of aberrations to be corrected include prompt pump-induced distortions in the laser amplifiers, previous-shot thermal distortions, beam off-axis effects, and gravity, mounting, and coating-induced optic distortions. Aberrations from gas density variations and optic-manufacturing figure errors are also partially corrected. This paper provides an overview of the NIF Wavefront Control System and describes the target spot size performance improvement it affords. It describes provisions made to accommodate the NIF's high fluence (laser beam and flashlamp), large wavefront correction range, wavefront temporal bandwidth, temperature and humidity variations, cleanliness requirements, and exception handling requirements (e.g. wavefront out-of-limits conditions).

  7. Estimating Patient Dose from X-ray Tube Output Metrics: Automated Measurement of Patient Size from CT Images Enables Large-scale Size-specific Dose Estimates

    PubMed Central

    Ikuta, Ichiro; Warden, Graham I.; Andriole, Katherine P.; Khorasani, Ramin

    2014-01-01

    Purpose To test the hypothesis that patient size can be accurately calculated from axial computed tomographic (CT) images, including correction for the effects of anatomy truncation that occur in routine clinical CT image reconstruction. Materials and Methods Institutional review board approval was obtained for this HIPAA-compliant study, with waiver of informed consent. Water-equivalent diameter (DW) was computed from the attenuation-area product of each image within 50 adult CT scans of the thorax and of the abdomen and pelvis and was also measured for maximal field of view (FOV) reconstructions. Linear regression models were created to compare DW with the effective diameter (Deff) used to select size-specific volume CT dose index (CTDIvol) conversion factors as defined in report 204 of the American Association of Physicists in Medicine. Linear regression models relating reductions in measured DW to a metric of anatomy truncation were used to compensate for the effects of clinical image truncation. Results In the thorax, DW versus Deff had an R2 of 0.51 (n = 200, 50 patients at four anatomic locations); in the abdomen and pelvis, R2 was 0.90 (n = 150, 50 patients at three anatomic locations). By correcting for image truncation, the proportion of clinically reconstructed images with an extracted DW within ±5% of the maximal FOV DW increased from 54% to 90% in the thorax (n = 3602 images) and from 95% to 100% in the abdomen and pelvis (6181 images). Conclusion The DW extracted from axial CT images is a reliable measure of patient size, and varying degrees of clinical image truncation can be readily corrected. Automated measurement of patient size combined with CT radiation exposure metrics may enable patient-specific dose estimation on a large scale. © RSNA, 2013 PMID:24086075
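
    The water-equivalent diameter mentioned above is computed from an attenuation-area product over the patient region of each axial image. The sketch below uses one common formulation (an AAPM-style definition is assumed here, not taken from the paper) on a synthetic circular "patient"; all values are illustrative.

```python
import numpy as np

# Hedged sketch of a water-equivalent diameter (DW) calculation from a single
# axial CT image, assuming the common formulation
#   A_w = (mean_HU / 1000 + 1) * A_patient,   DW = 2 * sqrt(A_w / pi)

def water_equivalent_diameter(hu_image: np.ndarray,
                              pixel_area_mm2: float,
                              patient_mask: np.ndarray) -> float:
    mean_hu = hu_image[patient_mask].mean()
    area_mm2 = patient_mask.sum() * pixel_area_mm2
    a_w = (mean_hu / 1000.0 + 1.0) * area_mm2        # water-equivalent area
    return 2.0 * np.sqrt(a_w / np.pi)

# Synthetic circular "patient" of soft tissue (~40 HU) on an air background.
n = 256
yy, xx = np.mgrid[:n, :n]
mask = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < 80 ** 2
image = np.full((n, n), -1000.0)
image[mask] = 40.0

dw_mm = water_equivalent_diameter(image, pixel_area_mm2=1.0, patient_mask=mask)
print(f"DW = {dw_mm:.1f} mm")
```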

  8. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Evaluation of dual energy quantitative CT for determining the spatial distributions of red marrow and bone for dosimetry in internal emitter radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodsitt, Mitchell M., E-mail: goodsitt@umich.edu; Shenoy, Apeksha; Howard, David

    2014-05-15

    Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25. Results for external calibrations exhibited much larger RMS errors than size matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30% or about 0.01 for the bone and 0.1 for the red marrow volume fractions would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.
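
    The "three-equation three-unknown" decomposition referred to above models the low- and high-kVp CT numbers of a region as volume-fraction-weighted sums of the calibration materials, with the fractions constrained to sum to one. The sketch below sets up and solves that small linear system; the calibration CT numbers and measured values are hypothetical placeholders.

```python
import numpy as np

# Calibration CT numbers (HU) of pure bone, red marrow, and fat at 80 / 140 kVp
# (hypothetical values, for illustration only).
cal_80  = np.array([1200.0, 60.0, -80.0])
cal_140 = np.array([ 700.0, 55.0, -90.0])

def volume_fractions(ct_80: float, ct_140: float) -> np.ndarray:
    """Return [f_bone, f_marrow, f_fat] for a measured ROI."""
    A = np.vstack([cal_80, cal_140, np.ones(3)])   # two mixture equations + sum-to-one
    b = np.array([ct_80, ct_140, 1.0])
    return np.linalg.solve(A, b)

f_bone, f_marrow, f_fat = volume_fractions(ct_80=132.0, ct_140=76.0)
print(f"bone {f_bone:.3f}, red marrow {f_marrow:.3f}, fat {f_fat:.3f}")
```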

  10. Finite Size Corrections to the Parisi Overlap Function in the GREM

    NASA Astrophysics Data System (ADS)

    Derrida, Bernard; Mottishaw, Peter

    2018-01-01

    We investigate the effects of finite size corrections on the overlap probabilities in the Generalized Random Energy Model in two situations where replica symmetry is broken in the thermodynamic limit. Our calculations do not use replicas, but shed some light on what the replica method should give for finite size corrections. In the gradual freezing situation, which is known to exhibit full replica symmetry breaking, we show that the finite size corrections lead to a modification of the simple relations between the sample averages of the overlaps Y_k between k configurations predicted by replica theory. This can be interpreted as fluctuations in the replica block size with a negative variance. The mechanism is similar to the one we found recently in the random energy model in Derrida and Mottishaw (J Stat Mech 2015(1): P01021, 2015). We also consider a simultaneous freezing situation, which is known to exhibit one step replica symmetry breaking. We show that finite size corrections lead to full replica symmetry breaking and give a more complete derivation of the results presented in Derrida and Mottishaw (Europhys Lett 115(4): 40005, 2016) for the directed polymer on a tree.

  11. Investigating the Relation between Sunspots and Umbral Dots

    NASA Astrophysics Data System (ADS)

    Yadav, Rahul; Louis, Rohan E.; Mathew, Shibu K.

    2018-03-01

    Umbral dots (UDs) are transient, bright features observed in the umbral region of a sunspot. We study the physical properties of UDs observed in sunspots of different sizes. The aim of our study is to relate the physical properties of UDs with the large-scale properties of sunspots. For this purpose, we analyze high-resolution G-band images of 42 sunspots observed by Hinode/SOT, located close to disk center. The images were corrected for instrumental stray light and restored with the modeled point-spread function. An automated multilevel tracking algorithm was employed to identify the UDs located in selected G-band images. Furthermore, we employed Solar Dynamics Observatory/HMI, limb-darkening-corrected, full-disk continuum images to estimate the sunspot phase and epoch for the selected sunspots. The number of UDs identified in different umbrae exhibits a linear relation to the umbral size. The observed filling factor ranges from 3% to 7% and increases with the mean umbral intensity. Moreover, the filling factor shows a decreasing trend with the umbral size. We also found that the observed mean and maximum intensities of UDs are correlated with the mean umbral intensity. However, we do not find any significant relationship between the mean (and maximum) intensity and effective diameter of UDs and the sunspot area, epoch, and decay rate. We suggest that this lack of relation could be due to either the distinct transition of spatial scales associated with overturning convection in the umbra or the shallow depth associated with UDs, or both.

  12. A Summary of The 2000-2001 NASA Glenn Lear Jet AM0 Solar Cell Calibration Program

    NASA Technical Reports Server (NTRS)

    Scheiman, David; Brinker, David; Snyder, David; Baraona, Cosmo; Jenkins, Phillip; Rieke, William J.; Blankenship, Kurt S.; Tom, Ellen M.

    2002-01-01

    Calibration of solar cells for space is extremely important for satellite power system design. Accurate prediction of solar cell performance is critical to solar array sizing, often required to be within 1%. The NASA Glenn Research Center solar cell calibration airplane facility has been in operation since 1963 with 531 flights to date. The calibration includes real data to Air Mass (AM) 0.2 and uses the Langley plot method plus an ozone correction factor to extrapolate to AM0. Comparison of the AM0 calibration data indicates that there is good correlation with Balloon and Shuttle flown solar cells. This paper will present a history of the airplane calibration procedure, flying considerations, and a brief summary of the previous flying season with some measurement results. This past flying season had a record 35 flights. It will also discuss efforts to more clearly define the ozone correction factor.
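
    The Langley plot method mentioned above is a straight-line fit of the logarithm of the cell signal against relative air mass, with the air-mass-zero value taken from the intercept and an ozone correction applied afterwards. A sketch with synthetic flight data and a purely hypothetical ozone factor follows.

```python
import numpy as np

# Hedged sketch of a Langley extrapolation to AM0.  All data are synthetic;
# the ozone correction factor is an invented placeholder.

air_mass = np.array([0.25, 0.30, 0.35, 0.40, 0.45])       # values reached in flight
isc_mA   = np.array([148.0, 146.5, 145.1, 143.7, 142.3])  # synthetic short-circuit currents

# ln(Isc) is linear in air mass under Bouguer's law; the intercept gives AM0.
slope, intercept = np.polyfit(air_mass, np.log(isc_mA), 1)
isc_am0 = np.exp(intercept)

ozone_correction = 1.01   # hypothetical multiplicative correction factor
print(f"extrapolated AM0 Isc = {isc_am0:.1f} mA")
print(f"with ozone correction: {isc_am0 * ozone_correction:.1f} mA")
```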

  13. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.

  14. Hydrophone area-averaging correction factors in nonlinearly generated ultrasonic beams

    NASA Astrophysics Data System (ADS)

    Cooling, M. P.; Humphrey, V. F.; Wilkens, V.

    2011-02-01

    The nonlinear propagation of an ultrasonic wave can be used to produce a wavefield rich in higher frequency components that is ideally suited to the calibration, or inter-calibration, of hydrophones. These techniques usually use a tone-burst signal, limiting the measurements to harmonics of the fundamental calibration frequency. Alternatively, using a short pulse enables calibration at a continuous spectrum of frequencies. Such a technique is used at PTB in conjunction with an optical measurement technique to calibrate devices. Experimental findings indicate that the area-averaging correction factor for a hydrophone in such a field demonstrates a complex behaviour, most notably varying periodically between frequencies that are harmonics of the centre frequency of the original pulse and frequencies that lie midway between these harmonics. The beam characteristics of such nonlinearly generated fields have been investigated using a finite difference solution to the nonlinear Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation for a focused field. The simulation results are used to calculate the hydrophone area-averaging correction factors for 0.2 mm and 0.5 mm devices. The results clearly demonstrate a number of significant features observed in the experimental investigations, including the variation with frequency, drive level and hydrophone element size. An explanation for these effects is also proposed.

  15. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 5 2010-10-01 2010-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  16. 49 CFR 325.75 - Ground surface correction factors. 1

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 5 2011-10-01 2011-10-01 false Ground surface correction factors. 1 325.75... MOTOR CARRIER NOISE EMISSION STANDARDS Correction Factors § 325.75 Ground surface correction factors. 1... account both the distance correction factors contained in § 325.73 and the ground surface correction...

  17. Correcting speckle contrast at small speckle size to enhance signal to noise ratio for laser speckle contrast imaging.

    PubMed

    Qiu, Jianjun; Li, Yangyang; Huang, Qin; Wang, Yang; Li, Pengcheng

    2013-11-18

    In laser speckle contrast imaging, it was usually suggested that speckle size should exceed two camera pixels to eliminate the spatial averaging effect. In this work, we show the benefit of enhancing signal to noise ratio by correcting the speckle contrast at small speckle size. Through simulations and experiments, we demonstrated that local speckle contrast, even at speckle size much smaller than one pixel size, can be corrected through dividing the original speckle contrast by the static speckle contrast. Moreover, we show a 50% higher signal to noise ratio of the speckle contrast image at speckle size below 0.5 pixel size than that at speckle size of two pixels. These results indicate the possibility of selecting a relatively large aperture to simultaneously ensure sufficient light intensity and high accuracy and signal to noise ratio, making the laser speckle contrast imaging more flexible.
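
    The correction proposed in this record is a division of the locally computed speckle contrast by the static speckle contrast, which compensates for spatial averaging when the speckle size is below one pixel. The sketch below illustrates that operation on synthetic frames; the gamma-distributed intensities and window size are assumptions for illustration, not the paper's processing pipeline.

```python
import numpy as np

def local_contrast(frame: np.ndarray, win: int = 7) -> np.ndarray:
    """Spatial speckle contrast K = std / mean in a sliding window (crude loop)."""
    h, w = frame.shape
    K = np.zeros((h - win + 1, w - win + 1))
    for i in range(K.shape[0]):
        for j in range(K.shape[1]):
            patch = frame[i:i + win, j:j + win]
            K[i, j] = patch.std() / patch.mean()
    return K

rng = np.random.default_rng(0)
# Synthetic stand-ins: a blurred (flowing) speckle frame and a fully developed
# static speckle frame recorded with the same optics.
dynamic_frame = rng.gamma(shape=4.0, scale=25.0, size=(64, 64))
static_frame  = rng.gamma(shape=1.0, scale=100.0, size=(64, 64))

K_dynamic = local_contrast(dynamic_frame)
K_static  = local_contrast(static_frame)
K_corrected = K_dynamic / K_static        # correction described in the abstract

print(f"mean K (raw)       = {K_dynamic.mean():.3f}")
print(f"mean K (corrected) = {K_corrected.mean():.3f}")
```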

  18. Poster — Thur Eve — 72: Clinical Subtleties of Flattening-Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corns, Robert; Thomas, Steven; Huang, Vicky

    2014-08-15

    Flattening-filter-free (fff) beams offer superior dose rates, reducing treatment times for important techniques that utilize small field sizes, such as stereotactic ablative radiotherapy (SABR). The impact of ion collection efficiency (P_ion) on the percent depth dose (PDD) has been discussed at length in the literature. Relative corrections of the order of 1%–2% are possible. In the process of commissioning 6fff and 10fff beams, we identified a number of other important details that influence commissioning. We looked at the absolute dose difference between corrected and uncorrected PDD. We discovered a curve with a broad maximum between 10 and 20 cm. We wondered about the consequences of this PDD correction on the absolute dose calibration of the linac because the TG-51 protocol does not correct the PDD curve. The quality factor k_Q depends on the PDD, so in principle, a correction to the PDD will alter the absolute calibration of the linac. Finally, there are other clinical tables, such as TMR, which are derived from PDD. Attention to details on how this computation is performed is important because different corrections are possible depending on the method of calculation.

  19. Intercalibration of research survey vessels on Lake Erie

    USGS Publications Warehouse

    Tyson, J.T.; Johnson, T.B.; Knight, C.T.; Bur, M.T.

    2006-01-01

    Fish abundance indices obtained from annual research trawl surveys are an integral part of fisheries stock assessment and management in the Great Lakes. It is difficult, however, to administer trawl surveys using a single vessel-gear combination owing to the large size of these systems, the jurisdictional boundaries that bisect the Great Lakes, and changes in vessels as a result of fleet replacement. When trawl surveys are administered by multiple vessel-gear combinations, systematic error may be introduced in combining catch-per-unit-effort (CPUE) data across vessels. This bias is associated with relative differences in catchability among vessel-gear combinations. In Lake Erie, five different research vessels conduct seasonal trawl surveys in the western half of the lake. To eliminate this systematic bias, the Lake Erie agencies conducted a side-by-side trawling experiment in 2003 to develop correction factors for CPUE data associated with different vessel-gear combinations. Correcting for systematic bias in CPUE data should lead to more accurate and comparable estimates of species density and biomass. We estimated correction factors for the 10 most commonly collected species age-groups for each vessel during the experiment. Most of the correction factors (70%) ranged from 0.5 to 2.0, indicating that the systematic bias associated with different vessel-gear combinations was not large. Differences in CPUE were most evident for vessels using different sampling gears, although significant differences also existed for vessels using the same gears. These results suggest that standardizing gear is important for multiple-vessel surveys, but there will still be significant differences in catchability stemming from the vessel effects, and agencies must correct for this. With standardized estimates of CPUE, the Lake Erie agencies will have the ability to directly compare and combine time series for species abundance. © Copyright by the American Fisheries Society 2006.

  20. People--things and data--ideas: bipolar dimensions?

    PubMed

    Tay, Louis; Su, Rong; Rounds, James

    2011-07-01

    We examined a longstanding assumption in vocational psychology that people-things and data-ideas are bipolar dimensions. Two minimal criteria for bipolarity were proposed and examined across 3 studies: (a) The correlation between opposite interest types should be negative; (b) after correcting for systematic responding, the correlation should be greater than -.40. In Study 1, a meta-analysis using 26 interest inventories with a sample size of 1,008,253 participants showed that meta-analytic correlations between opposite RIASEC (realistic, investigative, artistic, social, enterprising, conventional) types ranged from -.03 to .18 (corrected meta-analytic correlations ranged from -.23 to -.06). In Study 2, structural equation models (SEMs) were fit to the Interest Finder (IF; Wall, Wise, & Baker, 1996) and the Interest Profiler (IP; Rounds, Smith, Hubert, Lewis, & Rivkin, 1999) with sample sizes of 13,939 and 1,061, respectively. The correlations of opposite RIASEC types were positive, ranging from .17 to .53. No corrected correlation met the criterion of -.40 except for investigative-enterprising (r = -.67). Nevertheless, a direct estimate of the correlation between data-ideas end poles using targeted factor rotation did not reveal bipolarity. Furthermore, bipolar SEMs fit substantially worse than a multiple-factor representation of vocational interests. In Study 3, a two-way clustering solution on IF and IP respondents and items revealed a substantial number of individuals with interests in both people and things. We discuss key theoretical, methodological, and practical implications such as the structure of vocational interests, interpretation and scoring of interest measures for career counseling, and expert RIASEC ratings of occupations.

  1. SU-F-T-322: A Comparison of Two Si Detectors for in Vivo Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talarico, O; Krylova, T; Lebedenko, I

    Purpose: To compare two types of semiconductor detectors for in vivo dosimetry by their dependence on various parameters under different conditions. Methods: QED yellow (Sun Nuclear) and EDP (Scanditronix) Si detectors were irradiated with a Varian Clinac 2300 ix at 6 and 18 MV energies. A 10 cm thick water-equivalent phantom consisting of 30×30 cm² square plates was used for the experiments. Dose dependencies for different beam angles (0–180°), field sizes (3–40 cm), doses (50–300 MU), and dose rates (50–300 MU/min) were obtained and calibrated against a standard Farmer chamber (PTW). Results: Reproducibility, linearity, dose rate, angular dependence, and field size dependence were obtained for QED and EDP. They show no dose-rate dependence over the available clinical dose rate range (100–600 MU/min). Both diodes have a linear dependence with increasing dose. Therefore, even in the case of high-dose radiation therapy (including total body irradiation), it is not necessary to apply an additional correction during in vivo dosimetry. The diodes behave differently with respect to angular and field size dependencies. The QED diode showed that the dose value is stable for beam angles from 0 to 60°; for 60–180°, a correction factor has to be applied for each beam angle during in vivo measurements. For the EDP diode, the dose value is sensitive to beam angle over the whole range of angles. Conclusion: The study shows that the QED diode is more suitable for in vivo dosimetry due to the independence of the dose value from the incident beam angle in the range 0–60°. There is no need for correction factors for increasing dose and dose rate for either diode. The next step will be to carry out measurements under the non-standard conditions of total body irradiation. After this, modeling of these experiments with Monte Carlo simulation is planned in order to compare calculated and measured data.

  2. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This is a correction-based model that multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors accounted for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center, off-axis, field-size, and off-isocenter. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs at over 1,000 data points were taken at the time of the system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering a total of 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P−M]/M × 100%). Results: GACF was required because of up to 3.5% output variation dependence on the gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent differences were −0.03 ± 0.98% (mean ± SD) and the differences of all the measurements fell within ±3%. Conclusion: It is concluded that the model can be clinically used for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to the substantial output change caused by an irregular block shape.
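
    A correction-based output model of this kind is, in practice, a product of tabulated factors with interpolation in the commissioning tables and an inverse-square term for off-isocenter shifts. The sketch below shows that structure; all tables, factor values, and the source-to-axis distance are invented placeholders, not commissioning data for any specific machine.

```python
import numpy as np

ROF = 0.98                                   # relative output factor for the option (hypothetical)
modulation_cm = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
sobpf_table   = np.array([1.05, 1.02, 1.00, 0.98, 0.96])   # hypothetical SOBPF table
RSF = 1.01                                   # range shifter factor (hypothetical)
OCR = 1.00                                   # off-center ratio (hypothetical)
FSF = 0.99                                   # field size factor (hypothetical)
GACF = 1.005                                 # gantry angle correction factor (hypothetical)

def predicted_output(mod_cm: float, source_axis_cm: float = 230.0,
                     off_iso_cm: float = 0.0) -> float:
    """Output (cGy/MU) as a product of correction factors with 1D interpolation."""
    sobpf = np.interp(mod_cm, modulation_cm, sobpf_table)
    isf = (source_axis_cm / (source_axis_cm + off_iso_cm)) ** 2   # inverse-square term
    return ROF * sobpf * RSF * OCR * FSF * isf * GACF

print(f"predicted output = {predicted_output(5.0, off_iso_cm=2.0):.4f} cGy/MU")
```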

  3. Estimating effective population size from linkage disequilibrium between unlinked loci: theory and application to fruit fly outbreak populations.

    PubMed

    Sved, John A; Cameron, Emilie C; Gilchrist, A Stuart

    2013-01-01

    There is a substantial literature on the use of linkage disequilibrium (LD) to estimate effective population size using unlinked loci. The Ne estimates are extremely sensitive to the sampling process, and there is currently no theory to cope with the possible biases. We derive formulae for the analysis of idealised populations mating at random with multi-allelic (microsatellite) loci. The 'Burrows composite index' is introduced in a novel way with a 'composite haplotype table'. We show that in a sample of diploid size S, the mean value of x² or r² from the composite haplotype table is biased by a factor of 1 − 1/(2S−1)², rather than the usual factor 1 + 1/(2S−1) for a conventional haplotype table. But analysis of population data using these formulae leads to Ne estimates that are unrealistically low. We provide theory and simulation to show that this bias towards low Ne estimates is due to null alleles, and introduce a randomised permutation correction to compensate for the bias. We also consider the effect of introducing a within-locus disequilibrium factor to r², and find that this factor leads to a bias in the Ne estimate. However, this bias can be overcome using the same randomised permutation correction, to yield an altered r² with lower variance than the original r², and one that is also insensitive to null alleles. The resulting formulae are used to provide Ne estimates on 40 samples of the Queensland fruit fly, Bactrocera tryoni, from populations with widely divergent Ne expectations. Linkage relationships are known for most of the microsatellite loci in this species. We find that there is little difference in the estimated Ne values from using known unlinked loci as compared to using all loci, which is important for conservation studies where linkage relationships are unknown.
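
    The two sampling-bias factors quoted in this abstract are easy to compare numerically, and removing the composite-table bias from an observed mean r² is a single division. The sketch below does only that; the sample size and r² value are hypothetical, and no downstream Ne formula is implied.

```python
# Sample-size bias factors quoted above: 1 - 1/(2S-1)^2 for the composite
# haplotype table versus 1 + 1/(2S-1) for a conventional table, where S is
# the diploid sample size.

def composite_bias(S: int) -> float:
    return 1.0 - 1.0 / (2 * S - 1) ** 2

def conventional_bias(S: int) -> float:
    return 1.0 + 1.0 / (2 * S - 1)

S = 50                       # hypothetical diploid sample size
mean_r2_observed = 0.012     # hypothetical mean composite r^2 across locus pairs

# Removing the sampling bias before any downstream Ne estimation:
mean_r2_debiased = mean_r2_observed / composite_bias(S)

print(f"composite bias factor    : {composite_bias(S):.5f}")
print(f"conventional bias factor : {conventional_bias(S):.5f}")
print(f"debiased mean r^2        : {mean_r2_debiased:.5f}")
```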

  4. Decision-related factors in pupil old/new effects: Attention, response execution, and false memory.

    PubMed

    Brocher, Andreas; Graf, Tim

    2017-07-28

    In this study, we investigate the effects of decision-related factors on recognition memory in pupil old/new paradigms. In Experiment 1, we used an old/new paradigm with words and pseudowords and participants made lexical decisions during recognition rather than old/new decisions. Importantly, participants were instructed to focus on the nonword-likeness of presented items, not their word-likeness. We obtained no old/new effects. In Experiment 2, participants discriminated old from new words and old from new pseudowords during recognition, and they did so as quickly as possible. We found old/new effects for both words and pseudowords. In Experiment 3, we used materials and an old/new design known to elicit a large number of incorrect responses. For false alarms ("old" response for new word), we found larger pupils than for correctly classified new items, starting at the point at which response execution was allowed (2750ms post stimulus onset). In contrast, pupil size for misses ("new" response for old word) was statistically indistinguishable from pupil size in correct rejections. Taken together, our data suggest that pupil old/new effects result more from the intentional use of memory than from its automatic use. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. How often should we expect to be wrong? Statistical power, P values, and the expected prevalence of false discoveries.

    PubMed

    Marino, Michael J

    2018-05-01

    There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
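
    The expected prevalence of false discoveries described here follows from a standard closed-form expectation rather than from the paper's simulation; a small sketch, assuming a significance threshold, a power level, and a prior fraction of true hypotheses:

```python
def expected_false_discovery_rate(alpha, power, prior_true):
    """Fraction of 'significant' results that are false, under idealized testing.

    alpha       -- significance threshold (type I error rate)
    power       -- probability of detecting a real effect (1 - beta)
    prior_true  -- fraction of tested hypotheses that are actually true
    """
    false_pos = alpha * (1.0 - prior_true)
    true_pos = power * prior_true
    return false_pos / (false_pos + true_pos)

# Well-powered case: 80% power, half of tested hypotheses true -> ~6% false discoveries.
print(expected_false_discovery_rate(0.05, 0.80, 0.50))
# Underpowered case: 20% power, one in four hypotheses true -> ~43% false discoveries.
print(expected_false_discovery_rate(0.05, 0.20, 0.25))
```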

  6. Replacing the CCSDS Telecommand Protocol with Next Generation Uplink

    NASA Technical Reports Server (NTRS)

    Kazz, Greg; Burleigh, Scott; Greenberg, Ed

    2012-01-01

    Better performing Forward Error Correction on the forward link, along with adequate power in the data, opens an uplink operations trade space that enables missions to: command to greater distances in deep space (increased uplink margin); increase the size of the payload data (latency may be a factor); and provide space for the security header/trailer of the CCSDS Space Data Link Security Protocol. Note: these higher rates could be used to relieve emergency communication margins/rates and are not limited to improving top-end rate performance. A higher performance uplink could also reduce the requirements on flight emergency antenna size and/or the performance required from ground stations. Use of a selective repeat ARQ protocol may increase the uplink design requirements, but the resultant development is deemed acceptable due to the potential factor of 4 to 8 increase in uplink data rate.

  7. NAND Flash Qualification Guideline

    NASA Technical Reports Server (NTRS)

    Heidecker, Jason

    2012-01-01


  8. Interfacial ion solvation: Obtaining the thermodynamic limit from molecular simulations

    NASA Astrophysics Data System (ADS)

    Cox, Stephen J.; Geissler, Phillip L.

    2018-06-01

    Inferring properties of macroscopic solutions from molecular simulations is complicated by the limited size of systems that can be feasibly examined with a computer. When long-ranged electrostatic interactions are involved, the resulting finite size effects can be substantial and may attenuate very slowly with increasing system size, as shown by previous work on dilute ions in bulk aqueous solution. Here we examine corrections for such effects, with an emphasis on solvation near interfaces. Our central assumption follows the perspective of Hünenberger and McCammon [J. Chem. Phys. 110, 1856 (1999)]: Long-wavelength solvent response underlying finite size effects should be well described by reduced models like dielectric continuum theory, whose size dependence can be calculated straightforwardly. Applied to an ion in a periodic slab of liquid coexisting with vapor, this approach yields a finite size correction for solvation free energies that differs in important ways from results previously derived for bulk solution. For a model polar solvent, we show that this new correction quantitatively accounts for the variation of solvation free energy with volume and aspect ratio of the simulation cell. Correcting periodic slab results for an aqueous system requires an additional accounting for the solvent's intrinsic charge asymmetry, which shifts electric potentials in a size-dependent manner. The accuracy of these finite size corrections establishes a simple method for a posteriori extrapolation to the thermodynamic limit and also underscores the realism of dielectric continuum theory down to the nanometer scale.

  9. Analytical linear energy transfer model including secondary particles: calculations along the central axis of the proton pencil beam

    NASA Astrophysics Data System (ADS)

    Marsolat, F.; De Marzi, L.; Pouzoulet, F.; Mazal, A.

    2016-01-01

    In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm⁻¹. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis.
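
    The defining ratio can be illustrated with a toy dose-averaged LET calculation; the per-contribution energy deposits and LET values below are invented placeholders for quantities that would actually be scored per voxel in a GATE/GEANT4 simulation:

```python
import numpy as np

def dose_averaged_let(edep, let):
    """Dose-averaged LET: sum(edep_i * LET_i) / sum(edep_i)."""
    edep, let = np.asarray(edep, dtype=float), np.asarray(let, dtype=float)
    return np.sum(edep * let) / np.sum(edep)

# Hypothetical contributions at one depth: energy deposits (MeV) and LET (keV/um).
primary_edep, primary_let = [0.8, 1.1, 0.9], [0.45, 0.50, 0.47]      # primary protons
secondary_edep, secondary_let = [0.10, 0.05], [2.1, 3.4]             # secondary p and d

let_primary = dose_averaged_let(primary_edep, primary_let)
let_all = dose_averaged_let(primary_edep + secondary_edep,
                            primary_let + secondary_let)
L_sec = let_all / let_primary        # multiplies the analytical (primary-only) LET_d
print(let_primary, let_all, L_sec)
```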

  10. Parametric interactions in presence of different size colloids in semiconductor quantum plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanshpal, R., E-mail: ravivanshpal@gmail.com; Sharma, Uttam; Dubey, Swati

    2015-07-31

    Present work is an attempt to investigate the effect of different size colloids on parametric interaction in semiconductor quantum plasma. The quantum effect is included in this analysis through a quantum correction term in the classical hydrodynamic model of homogeneous semiconductor plasma. The effect is of purely quantum origin, described using the quantum Bohm potential and quantum statistics. Colloidal size and the quantum correction term modify the parametric dispersion characteristics of the ion-implanted semiconductor plasma medium. It is found that the quantum effect on colloids is inversely proportional to their size. Moreover, the critical size of implanted colloids for effective quantum correction is determined and is found to be equal to the lattice spacing of the crystal.

  11. Top-pair production at hadron colliders with next-to-next-to-leading logarithmic soft-gluon resummation

    NASA Astrophysics Data System (ADS)

    Cacciari, Matteo; Czakon, Michał; Mangano, Michelangelo; Mitov, Alexander; Nason, Paolo

    2012-04-01

    Incorporating all recent theoretical advances, we resum soft-gluon corrections to the total ttbar cross-section at hadron colliders at the next-to-next-to-leading logarithmic (NNLL) order. We perform the resummation in the well established framework of Mellin N-space resummation. We exhaustively study the sources of systematic uncertainty like renormalization and factorization scale variation, power suppressed effects and missing two- and higher-loop corrections. The inclusion of soft-gluon resummation at NNLL brings only a minor decrease in the perturbative uncertainty with respect to the NLL approximation, and a small shift in the central value, consistent with the quoted uncertainties. These numerical predictions agree with the currently available measurements from the Tevatron and LHC and have uncertainty of similar size. We conclude that significant improvements in the ttbar cross-sections can potentially be expected only upon inclusion of the complete NNLO corrections.

  12. Lack of dependence on resonant error field of locked mode island size in ohmic plasmas in DIII-D

    DOE PAGES

    Haye, R. J. La; Paz-Soldan, C.; Strait, E. J.

    2015-01-23

    DIII-D experiments show that fully penetrated resonant n=1 error field locked modes in Ohmic plasmas with safety factor q95 ≳ 3 grow to similar large disruptive size, independent of resonant error field correction. Relatively small resonant (m/n=2/1) static error fields are shielded in Ohmic plasmas by the natural rotation at the electron diamagnetic drift frequency. However, the drag from error fields can lower rotation such that a bifurcation results, from nearly complete shielding to full penetration, i.e., to a driven locked mode island that can induce disruption.

  13. Underwater and Dive Station Work-Site Noise Surveys

    DTIC Science & Technology

    2008-03-14

    dB(A) octave band noise measurements, dB(A) correction factors, dB(A) levels, MK-21 diving helmet attenuation correction factors, and overall in-helmet dB(A) levels.

  14. Assessment of Cracks in Stress Concentration Regions with Localized Plastic Zones

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, E.

    1998-11-25

    Many brittle fracture evaluation procedures include plasticity corrections to elastically computed stress intensity factors. These corrections, which are based on the existence of a plastic zone in the vicinity of the crack tip, can overestimate the plasticity effect for a crack embedded in a stress concentration region in which the elastically computed stress exceeds the yield strength of the material in a localized zone. The interactions between the crack, which acts to relieve the high stresses driving the crack, plasticity effects in the stress concentration region, and the nature and source of the loading are examined by formulating explicit flaw finite element models for a crack emanating from the root of a notch located in a panel subject to an applied tensile stress. The results of these calculations provide conditions under which a crack-tip plasticity correction based on the Irwin plastic zone size overestimates the plasticity effect. A failure assessment diagram (FAD) curve is used to characterize the effect of plasticity on the crack driving force and to define a less restrictive plasticity correction for cracks at notch roots when load-controlled boundary conditions are imposed. The explicit flaw finite element results also demonstrate that stress intensity factors associated with load-controlled boundary conditions, such as those inherent in the ASME Boiler and Pressure Vessel Code as well as in most handbooks of stress intensity factors, can be much higher than those associated with displacement-controlled conditions, such as those that produce residual or thermal stresses. Under certain conditions, the inclusion of plasticity effects for cracks loaded by displacement-controlled boundary conditions reduces the crack driving force, thus justifying the elimination of a plasticity correction for such loadings. The results of this study form the basis for removing unnecessary conservatism from flaw evaluation procedures that utilize plasticity corrections.
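
    For reference, the Irwin-type plastic zone correction referred to above has the standard first-order form sketched below; the geometry factor, load level, and fixed-point iteration are generic illustrative choices, not the explicit-flaw finite element procedure of the study:

```python
import math

def irwin_corrected_K(sigma, a, sigma_y, Y=1.0, plane_stress=True, iters=20):
    """First-order Irwin plasticity correction to an elastic stress intensity factor.

    sigma    -- remotely applied stress (Pa)
    a        -- physical crack length (m)
    sigma_y  -- yield strength (Pa)
    Y        -- geometry factor (1.0 assumed for a crack in a wide plate)
    """
    beta = 1.0 / (2.0 * math.pi) if plane_stress else 1.0 / (6.0 * math.pi)
    K = Y * sigma * math.sqrt(math.pi * a)          # elastic value
    r_y = 0.0
    for _ in range(iters):                          # fixed-point iteration on a_eff
        r_y = beta * (K / sigma_y) ** 2             # Irwin plastic zone size
        K = Y * sigma * math.sqrt(math.pi * (a + r_y))
    return K, r_y

K_eff, r_y = irwin_corrected_K(sigma=200e6, a=0.005, sigma_y=350e6)
print(K_eff / 1e6, r_y)   # MPa*sqrt(m), m
```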

  15. The Radiological Physics Center's standard dataset for small field size output factors.

    PubMed

    Followill, David S; Kry, Stephen F; Qin, Lihong; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Aguirre, Jose Francisco; Ibbott, Geoffrey S

    2012-08-08

    Delivery of accurate intensity-modulated radiation therapy (IMRT) or stereotactic radiotherapy depends on a multitude of steps in the treatment delivery process. These steps range from imaging of the patient to dose calculation to machine delivery of the treatment plan. Within the treatment planning system's (TPS) dose calculation algorithm, various unique small field dosimetry parameters are essential, such as multileaf collimator modeling and field size dependence of the output. One of the largest challenges in this process is determining accurate small field size output factors. The Radiological Physics Center (RPC), as part of its mission to ensure that institutions deliver comparable and consistent radiation doses to their patients, conducts on-site dosimetry review visits to institutions. As a part of the on-site audit, the RPC measures the small field size output factors as might be used in IMRT treatments, and compares the resulting field size dependent output factors to values calculated by the institution's treatment planning system (TPS). The RPC has gathered multiple small field size output factor datasets for X-ray energies ranging from 6 to 18 MV from Varian, Siemens and Elekta linear accelerators. These datasets were measured at 10 cm depth and ranged from 10 × 10 cm(2) to 2 × 2 cm(2). The field sizes were defined by the MLC, and for the Varian machines the secondary jaws were maintained at 10 × 10 cm(2). The RPC measurements were made with a micro-ion chamber whose volume was small enough to gather a full ionization reading even for the 2 × 2 cm(2) field size. The RPC-measured output factors are tabulated and are reproducible with standard deviations (SD) ranging from 0.1% to 1.5%, while the institutions' calculated values had a much larger SD range, ranging up to 7.9% [corrected]. The absolute average percent differences were greater for the 2 × 2 cm(2) than for the other field sizes. The RPC's measured small field output factors provide institutions with a standard dataset against which to compare their TPS calculated values. Any discrepancies noted between the standard dataset and calculated values should be investigated with careful measurements and with attention to the specific beam model.

  16. An Analysis of the Relationship Between Metabolism, Developmental Schedules, and Longevity Using Phylogenetic Independent Contrasts

    PubMed Central

    de Magalhães, João Pedro; Costa, Joana; Church, George M.

    2008-01-01

    Comparative studies of aging are often difficult to interpret because of the different factors that tend to correlate with longevity. We used the AnAge database to study these factors, particularly metabolism and developmental schedules, previously associated with longevity in vertebrate species. Our results show that, after correcting for body mass and phylogeny, basal metabolic rate does not correlate with longevity in eutherians or birds, although it negatively correlates with marsupial longevity and time to maturity. We confirm the idea that age at maturity is typically proportional to adult life span, and show that mammals that live longer for their body size, such as bats and primates, also tend to have a longer developmental time for their body size. Lastly, postnatal growth rates were negatively correlated with adult life span in mammals but not in birds. Our work provides a detailed view of factors related to species longevity with implications for how comparative studies of aging are interpreted. PMID:17339640

  17. Finite-size corrections to the excitation energy transfer in a massless scalar interaction model

    NASA Astrophysics Data System (ADS)

    Maeda, Nobuki; Yabuki, Tetsuo; Tobita, Yutaka; Ishikawa, Kenzo

    2017-05-01

    We study the excitation energy transfer (EET) for a simple model in which a massless scalar particle is exchanged between two molecules. We show that a finite-size effect appears in EET by the interaction energy due to overlapping of the quantum waves in a short time interval. The effect generates finite-size corrections to Fermi's golden rule and modifies EET probability from the standard formula in the Förster mechanism. The correction terms come from transition modes outside the resonance energy region and enhance EET probability substantially.

  18. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research

    PubMed Central

    Golino, Hudson F.; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), the maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using the walktrap, a random walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study. PMID:28594839
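
    A rough sketch of the EGA pipeline described here, with two simplifications: the regularization parameter is chosen by cross-validation instead of EBIC, and community detection uses python-igraph's walktrap implementation (recent scikit-learn and python-igraph versions are assumed):

```python
import numpy as np
import igraph as ig
from sklearn.covariance import GraphicalLassoCV

def ega_number_of_dimensions(X, steps=4):
    """Estimate the number of dimensions from a regularized partial-correlation network."""
    glasso = GraphicalLassoCV().fit(X)          # CV instead of EBIC (simplification)
    theta = glasso.precision_
    d = np.sqrt(np.diag(theta))
    pcor = -theta / np.outer(d, d)              # partial correlations
    np.fill_diagonal(pcor, 0.0)
    weights = np.abs(pcor)
    g = ig.Graph.Weighted_Adjacency(weights.tolist(), mode="undirected",
                                    attr="weight", loops=False)
    clusters = g.community_walktrap(weights=g.es["weight"], steps=steps).as_clustering()
    return len(clusters)

# Toy example: two latent factors, five items each.
rng = np.random.default_rng(1)
f = rng.normal(size=(500, 2))
loadings = np.zeros((2, 10)); loadings[0, :5] = 0.8; loadings[1, 5:] = 0.8
X = f @ loadings + rng.normal(scale=0.6, size=(500, 10))
print(ega_number_of_dimensions(X))   # expected: 2
```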

  19. SU-E-T-75: Commissioning Optically Stimulated Luminescence Dosimeters for Fast Neutron Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Young, L; Yang, F; Sandison, G

    Purpose: Fast neutron therapy, as used at the University of Washington, is clinically proven to be more effective than photon therapy in treating salivary gland and other cancers. A nanodot optically stimulated luminescence (OSL) system was chosen to be commissioned for patient in vivo dosimetry for neutron therapy. OSL-based radiation detectors are not susceptible to the radiation damage caused by neutrons, in contrast to diode or MOSFET systems. Methods: An InLight microStar OSL system was commissioned for in vivo use by irradiating Landauer nanodots with neutrons generated from 50.0 MeV protons accelerated onto a beryllium target. The OSLs were calibrated at the depth of maximum dose in solid water, localized to the 150 cm SAD isocenter, in a 10.3 cm square field. Linearity was tested over a typical clinical dose fractionation range, i.e., 0 to 150 neutron-cGy. Correction factors for transient signal fading, trap depletion, gantry angle, field size, and wedge factor dependencies were also evaluated. The OSLs were photo-bleached between irradiations using a tungsten-halogen lamp. Results: Landauer sensitivity factors published for each nanodot are valid for measuring photon and electron doses but do not apply to neutron irradiation. Individually calculated nanodot calibration factors exhibited a 2–5% improvement over calibration factors computed by the microStar InLight software. Transient fading effects had a significant impact on neutron dose reading accuracy compared to photon and electron in vivo dosimetry. Greater accuracy can be achieved by calibrating and reading each dosimeter within 1–2 hours after irradiation. No additional OSL correction factors were needed for field size, gantry angle, or wedge factors in solid water phantom measurements. Conclusion: OSL detectors are useful for neutron beam in vivo dosimetry verification. Dosimetric accuracy comparable to conventional diode systems can be achieved. Accounting for transient fading effects during neutron beam calibration is a critical component for achieving comparable accuracy.

  20. Exploratory graph analysis: A new approach for estimating the number of dimensions in psychological research.

    PubMed

    Golino, Hudson F; Epskamp, Sacha

    2017-01-01

    The estimation of the correct number of dimensions is a long-standing problem in psychometrics. Several methods have been proposed, such as parallel analysis (PA), Kaiser-Guttman's eigenvalue-greater-than-one rule, the multiple average partial procedure (MAP), the maximum-likelihood approaches that use fit indexes such as BIC and EBIC, and the less used and studied approach called very simple structure (VSS). In the present paper a new approach to estimating the number of dimensions is introduced and compared via simulation to the traditional techniques listed above. The approach proposed in the current paper is called exploratory graph analysis (EGA), since it is based on the graphical lasso with the regularization parameter specified using EBIC. The number of dimensions is verified using the walktrap, a random walk algorithm used to identify communities in networks. In total, 32,000 data sets were simulated to fit known factor structures, with the data sets varying across different criteria: number of factors (2 and 4), number of items (5 and 10), sample size (100, 500, 1000 and 5000) and correlation between factors (orthogonal, .20, .50 and .70), resulting in 64 different conditions. For each condition, 500 data sets were simulated using lavaan. The results show that EGA performs comparably to parallel analysis, EBIC, eBIC and the Kaiser-Guttman rule in a number of situations, especially when the number of factors was two. However, EGA was the only technique able to correctly estimate the number of dimensions in the four-factor structure when the correlation between factors was .70, showing an accuracy of 100% for a sample size of 5,000 observations. Finally, EGA was used to estimate the number of factors in a real dataset, in order to compare its performance with the other six techniques tested in the simulation study.

  1. Asymmetric collimation: Dosimetric characteristics, treatment planning algorithm, and clinical applications

    NASA Astrophysics Data System (ADS)

    Kwa, William

    1998-11-01

    In this thesis the dosimetric characteristics of asymmetric fields are investigated and a new computation method for the dosimetry of asymmetric fields is described and implemented into an existing treatment planning algorithm. Based on this asymmetric field treatment planning algorithm, the clinical use of asymmetric fields in cancer treatment is investigated, and new treatment techniques for conformal therapy are developed. Dose calculation is verified with thermoluminescent dosimeters in a body phantom. In this thesis, an analytical approach is proposed to account for the dose reduction when a corresponding symmetric field is collimated asymmetrically to a smaller asymmetric field. This is represented by a correction factor that uses the ratio of the equivalent field dose contributions between the asymmetric and symmetric fields. The same equation used in the expression of the correction factor can be used for a wide range of asymmetric field sizes, photon energies and linear accelerators. This correction factor will account for the reduction in scatter contributions within an asymmetric field, resulting in the dose profile of an asymmetric field resembling that of a wedged field. The output factors of some linear accelerators are dependent on the collimator settings and whether the upper or lower collimators are used to set the narrower dimension of a radiation field. In addition to this collimator exchange effect for symmetric fields, asymmetric fields are also found to exhibit some asymmetric collimator backscatter effect. The proposed correction factor is extended to account for these effects. A set of correction factors determined semi-empirically to account for the dose reduction in the penumbral region and outside the radiated field is established. Since these correction factors rely only on the output factors and the tissue maximum ratios, they can easily be implemented into an existing treatment planning system. There is no need to store either additional sets of asymmetric field profiles or databases for the implementation of these correction factors into an existing in-house treatment planning system. With this asymmetric field algorithm, the computation time is found to be 20 times faster than a commercial system. This computation method can also be generalized to the dose representation of a two-fold asymmetric field whereby both the field width and length are set asymmetrically, and the calculations are not limited to points lying on one of the principal planes. The dosimetric consequences of asymmetric fields on the dose delivery in clinical situations are investigated. Examples of the clinical use of asymmetric fields are given and the potential use of asymmetric fields in conformal therapy is demonstrated. An alternative head and neck conformal therapy is described, and the treatment plan is compared to the conventional technique. The dose distributions calculated for the standard and alternative techniques are confirmed with thermoluminescent dosimeters in a body phantom at selected dose points. (Abstract shortened by UMI.)
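
    A schematic of the kind of equivalent-field ratio correction described above (a sketch only: the scatter table, field sizes, and the use of Sterling's 4A/P equivalent square are illustrative assumptions, not the thesis' commissioned data or exact formalism):

```python
import numpy as np

def equivalent_square(width_cm, length_cm):
    """Sterling's 4A/P equivalent-square side for a rectangular field."""
    return 4.0 * width_cm * length_cm / (2.0 * (width_cm + length_cm))

# Hypothetical scatter-contribution table indexed by equivalent-square side (cm).
eq_sides = np.array([4.0, 6.0, 8.0, 10.0, 12.0, 15.0, 20.0])
scatter = np.array([0.955, 0.972, 0.986, 1.000, 1.011, 1.024, 1.040])

def asymmetric_correction_factor(sym_w, sym_l, asym_w, asym_l):
    """Ratio of equivalent-field scatter contributions, asymmetric over symmetric.

    Schematic form of the correction idea: the reduction in scatter when a
    symmetric field is collimated down to a smaller asymmetric segment is
    approximated by the ratio of scatter contributions looked up at the two
    equivalent-square sizes.  Table values are placeholders.
    """
    s_asym = np.interp(equivalent_square(asym_w, asym_l), eq_sides, scatter)
    s_sym = np.interp(equivalent_square(sym_w, sym_l), eq_sides, scatter)
    return s_asym / s_sym

# A 20 x 20 cm2 symmetric field collimated to a 6 x 20 cm2 asymmetric segment.
print(asymmetric_correction_factor(20.0, 20.0, 6.0, 20.0))
```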

  2. SU-F-T-281: Monte Carlo Investigation of Sources of Dosimetric Discrepancies with 2D Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Afifi, M; Deiab, N; El-Farrash, A

    2016-06-15

    Purpose: Intensity modulated radiation therapy (IMRT) poses a number of challenges for properly measuring commissioning data and quality assurance (QA). Understanding the limitations and use of dosimeters to measure these dose distributions is critical to safe IMRT implementation. In this work, we used Monte Carlo simulations to investigate the possible sources of discrepancy between our measurements with a 2D array system and our dose calculations using our treatment planning system (TPS). Material and Methods: The MCBEAM and MCSIM Monte Carlo codes were used for treatment head simulation and phantom dose calculation. Accurate modeling of a 6 MV beam from a Varian Trilogy machine was verified by comparing simulated and measured percentage depth doses and profiles. The dose distribution inside the 2D array was calculated using Monte Carlo simulations and our TPS. Cross profiles for different field sizes were then compared with actual measurements for 0° and 90° gantry angle setups. Through this analysis and comparison, we tried to determine the differences and quantify a possible angular calibration factor. Results: Minimal discrepancies were seen in the comparison between the simulated and the measured profiles for the zero gantry angle at all studied field sizes (4×4 cm², 10×10 cm², 15×15 cm², and 20×20 cm²). Discrepancies between our measurements and calculations increased dramatically for the cross beam profiles at the 90° gantry angle. This is ascribed mainly to the different attenuation caused by the layer of electronics at the base behind the ion chambers in the 2D array; the degree of attenuation will vary depending on the angle of beam incidence. Correction factors were implemented to correct the errors. Conclusion: Monte Carlo modeling of the 2D arrays and the derivation of angular dependence correction factors will allow for improved accuracy of the device for IMRT QA.

  3. Technical Factors Influencing Cone Packing Density Estimates in Adaptive Optics Flood Illuminated Retinal Images

    PubMed Central

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    Purpose To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Methods Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. Results The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. Conclusions The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic. PMID:25203681

  4. Technical factors influencing cone packing density estimates in adaptive optics flood illuminated retinal images.

    PubMed

    Lombardo, Marco; Serrao, Sebastiano; Lombardo, Giuseppe

    2014-01-01

    To investigate the influence of various technical factors on the variation of cone packing density estimates in adaptive optics flood illuminated retinal images. Adaptive optics images of the photoreceptor mosaic were obtained in fifteen healthy subjects. The cone density and Voronoi diagrams were assessed in sampling windows of 320×320 µm, 160×160 µm and 64×64 µm at 1.5 degree temporal and superior eccentricity from the preferred locus of fixation (PRL). The technical factors that have been analyzed included the sampling window size, the corrected retinal magnification factor (RMFcorr), the conversion from radial to linear distance from the PRL, the displacement between the PRL and foveal center and the manual checking of cone identification algorithm. Bland-Altman analysis was used to assess the agreement between cone density estimated within the different sampling window conditions. The cone density declined with decreasing sampling area and data between areas of different size showed low agreement. A high agreement was found between sampling areas of the same size when comparing density calculated with or without using individual RMFcorr. The agreement between cone density measured at radial and linear distances from the PRL and between data referred to the PRL or the foveal center was moderate. The percentage of Voronoi tiles with hexagonal packing arrangement was comparable between sampling areas of different size. The boundary effect, presence of any retinal vessels, and the manual selection of cones missed by the automated identification algorithm were identified as the factors influencing variation of cone packing arrangements in Voronoi diagrams. The sampling window size is the main technical factor that influences variation of cone density. Clear identification of each cone in the image and the use of a large buffer zone are necessary to minimize factors influencing variation of Voronoi diagrams of the cone mosaic.

  5. Universal thermal corrections to single interval entanglement entropy for two dimensional conformal field theories.

    PubMed

    Cardy, John; Herzog, Christopher P

    2014-05-02

    We consider single interval Rényi and entanglement entropies for a two dimensional conformal field theory on a circle at nonzero temperature. Assuming that the finite size of the system introduces a unique ground state with a nonzero mass gap, we calculate the leading corrections to the Rényi and entanglement entropy in a low temperature expansion. These corrections have a universal form for any two dimensional conformal field theory that depends only on the size of the mass gap and its degeneracy. We analyze the limits where the size of the interval becomes small and where it becomes close to the size of the spatial circle.

  6. Impact of and correction for instrument sensitivity drift on nanoparticle size measurements by single-particle ICP-MS

    PubMed Central

    El Hadri, Hind; Petersen, Elijah J.; Winchester, Michael R.

    2016-01-01

    The effect of ICP-MS instrument sensitivity drift on the accuracy of NP size measurements using single particle (sp)ICP-MS is investigated. Theoretical modeling and experimental measurements of the impact of instrument sensitivity drift are in agreement and indicate that drift can impact the measured size of spherical NPs by up to 25 %. Given this substantial bias in the measured size, a method was developed using an internal standard to correct for the impact of drift and was shown to accurately correct for a decrease in instrument sensitivity of up to 50 % for 30 nm and 60 nm gold nanoparticles. PMID:26894759
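
    A minimal sketch of an internal-standard drift correction of this kind, assuming the usual spICP-MS cube-root relation between particle signal and diameter; the event counts, timing, and calibration values are invented for illustration:

```python
import numpy as np

def drift_corrected_diameter(peak_counts, t_peak, is_times, is_signal,
                             diameter_ref_nm, counts_ref, t_ref=0.0):
    """Correct a single-particle ICP-MS size estimate for sensitivity drift.

    The particle signal is rescaled by the internal-standard intensity at the
    event time relative to its value at the calibration time, and the diameter
    follows from signal proportional to particle mass (mass proportional to d^3).
    Names and numbers here are illustrative, not the published procedure.
    """
    drift = np.interp(t_peak, is_times, is_signal) / np.interp(t_ref, is_times, is_signal)
    corrected_counts = peak_counts / drift
    return diameter_ref_nm * (corrected_counts / counts_ref) ** (1.0 / 3.0)

# Internal-standard intensity sampled over a 60-minute run (20% sensitivity decay).
is_times = np.linspace(0.0, 60.0, 7)
is_signal = np.linspace(1.0e5, 0.8e5, 7)
# A particle event detected at t = 45 min that would calibrate to 60 nm at t = 0.
print(drift_corrected_diameter(peak_counts=2.2e3, t_peak=45.0,
                               is_times=is_times, is_signal=is_signal,
                               diameter_ref_nm=60.0, counts_ref=2.6e3))
```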

  7. Small refractive errors--their correction and practical importance.

    PubMed

    Skrbek, Matej; Petrová, Sylvie

    2013-04-01

    Small refractive errors present a group of specific far-sighted refractive dispositions that are compensated by increased accommodative effort and are not manifested as a loss of visual acuity. This paper addresses several questions about their correction, following from the theoretical presumptions and expectations surrounding this dilemma. The main goal of this research was to confirm or refute the hypothesis about the convenience, efficiency and frequency of corrections that do not raise visual acuity (or where the improvement is not noticeable). The next goal was to examine the connection between this correction and other factors (age, size of the refractive error, etc.). The last aim was to describe the subjective personal rating of the correction of these small refractive errors, and to determine the minimal improvement of visual acuity that is attractive enough for the client to purchase the correction (glasses, contact lenses). It was confirmed that there is a considerable group of subjects with good visual acuity for whom the correction is applicable, although it does not improve visual acuity much; its main importance is to eliminate asthenopia. The prime reason for acceptance of the correction typically changes during life as accommodation declines. Young people prefer the correction on the grounds of asthenopia caused by a small refractive error or latent strabismus; elderly people acquire the correction because of the improvement in visual acuity. Overall, the correction was found useful in more than 30% of cases if the gain in visual acuity was at least 0.3 on the decimal scale.

  8. A corrected model for static and dynamic electromechanical instability of narrow nanotweezers: Incorporation of size effect, surface layer and finite dimensions

    NASA Astrophysics Data System (ADS)

    Koochi, Ali; Hosseini-Toudeshky, Hossein; Abadyan, Mohamadreza

    2018-03-01

    Herein, a corrected theoretical model is proposed for modeling the static and dynamic behavior of electrostatically actuated narrow-width nanotweezers considering the correction due to finite dimensions, size dependency and surface energy. The Gurtin-Murdoch surface elasticity in conjunction with the modified couple stress theory is employed to consider the coupling effect of surface stresses and size phenomenon. In addition, the model accounts for the external force corrections by incorporating the impact of narrow width on the distribution of Casimir attraction, van der Waals (vdW) force and the fringing field effect. The proposed model is beneficial for the precise modeling of the narrow nanotweezers in nano-scale.

  9. cgCorrect: a method to correct for confounding cell-cell variation due to cell growth in single-cell transcriptomics

    NASA Astrophysics Data System (ADS)

    Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.

    2017-06-01

    Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still imposes unresolved challenges with respect to normalization, visualization and modeling the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell’s volume during the cell cycle. cgCorrect can be used for both data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on both simulated data and single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells (and to quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells). We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analysis.
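
    A deterministic simplification of the cell-growth correction idea (the published cgCorrect method is probabilistic; this sketch only illustrates the underlying scaling assumption that the expected transcript number grows in proportion to cell volume):

```python
import numpy as np

def growth_corrected_counts(counts, cell_volume, reference_volume=None):
    """Rescale per-cell transcript counts by relative cell volume.

    If the expected number of transcripts grows proportionally to cell volume
    over the cell cycle, dividing by the relative volume removes that growth
    component.  This is a sketch of the scaling assumption only.
    """
    counts = np.asarray(counts, dtype=float)
    cell_volume = np.asarray(cell_volume, dtype=float)
    if reference_volume is None:
        reference_volume = cell_volume.mean()
    scale = cell_volume / reference_volume          # relative cell size
    return counts / scale[:, None]                  # cells in rows, genes in columns

counts = np.array([[20, 5], [40, 9], [62, 16]])     # 3 cells x 2 genes (made up)
volumes = np.array([1.0, 2.0, 3.0])                 # e.g. early, mid, late cell cycle
print(growth_corrected_counts(counts, volumes))
```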

  10. Deriving detector-specific correction factors for rectangular small fields using a scintillator detector.

    PubMed

    Qin, Yujiao; Zhong, Hualiang; Wen, Ning; Snyder, Karen; Huang, Yimei; Chetty, Indrin J

    2016-11-08

    The goal of this study was to investigate small field output factors (OFs) for flattening filter-free (FFF) beams on a dedicated stereotactic linear accelerator-based system. From this data, the collimator exchange effect was quantified, and detector-specific correction factors were generated. Output factors for 16 jaw-collimated small fields (from 0.5 to 2 cm) were measured using five different detectors including an ion chamber (CC01), a stereotactic field diode (SFD), a diode detector (Edge), Gafchromic film (EBT3), and a plastic scintillator detector (PSD, W1). Chamber, diodes, and PSD measurements were performed in a Wellhofer water tank, while films were irradiated in solid water at 100 cm source-to-surface distance and 10 cm depth. The collimator exchange effect was quantified for rectangular fields. Monte Carlo (MC) simulations of the measured configurations were also performed using the EGSnrc/DOSXYZnrc code. Output factors measured by the PSD and verified against film and MC calculations were chosen as the benchmark measurements. Compared with plastic scintillator detector (PSD), the small volume ion chamber (CC01) underestimated output factors by an average of -1.0% ± 4.9% (max. = -11.7% for 0.5 × 0.5 cm2 square field). The stereotactic diode (SFD) overestimated output factors by 2.5% ± 0.4% (max. = 3.3% for 0.5 × 1 cm2 rectangular field). The other diode detector (Edge) also overestimated the OFs by an average of 4.2% ± 0.9% (max. = 6.0% for 1 × 1 cm2 square field). Gafchromic film (EBT3) measurements and MC calculations agreed with the scintillator detector measurements within 0.6% ± 1.8% and 1.2% ± 1.5%, respectively. Across all the X and Y jaw combinations, the average collimator exchange effect was computed: 1.4% ± 1.1% (CC01), 5.8% ± 5.4% (SFD), 5.1% ± 4.8% (Edge diode), 3.5% ± 5.0% (Monte Carlo), 3.8% ± 4.7% (film), and 5.5% ± 5.1% (PSD). Small field detectors should be used with caution with a clear understanding of their behaviors, especially for FFF beams and small, elongated fields. The scintillator detector exhibited good agreement against Gafchromic film measurements and MC simulations over the range of field sizes studied. The collimator exchange effect was found to be important at these small field sizes. Detector-specific correction factors were computed using the scintillator measurements as the benchmark. © 2016 The Authors.
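
    The tabulation of detector-specific correction factors against a benchmark detector can be sketched as below; the output-factor numbers are placeholders, not the study's measured values:

```python
# Detector-specific correction factor k = OF_benchmark / OF_detector for each field.
# A corrected measurement is then: detector reading ratio * k_detector(field size).
benchmark_of = {"0.5x0.5": 0.62, "1x1": 0.78, "2x2": 0.90}      # PSD (placeholder)
detector_of = {
    "CC01": {"0.5x0.5": 0.55, "1x1": 0.77, "2x2": 0.90},        # placeholder readings
    "Edge": {"0.5x0.5": 0.64, "1x1": 0.82, "2x2": 0.91},
}

correction_factors = {
    det: {fs: benchmark_of[fs] / of for fs, of in readings.items()}
    for det, readings in detector_of.items()
}

print(correction_factors["Edge"]["1x1"])   # < 1: the diode over-responds
print(correction_factors["CC01"]["0.5x0.5"])   # > 1: the chamber under-responds
```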

  11. Small field output factors comparison between ion chambers and diode detectors for different photon energies

    NASA Astrophysics Data System (ADS)

    Tas, B.; Durmus, I. F.

    2018-02-01

    To compare small-field output factors of a linear accelerator measured using different ion chambers and diode detectors for different photon energies. We measured small-field (1×1 to 5×5 cm²) output factors using IBA® cc003 nano chamber, cc01 Razor, cc01, cc04, cc13, and fc65 ion chambers and SFD and Razor diode detectors for 6 MV, 10 MV, 15 MV, 6 MV FFF and 10 MV FFF energies. The most compatible output factors between an ion chamber and a diode detector for the 1×1 cm² field size were obtained with the cc003 nano ion chamber. Dose differences of less than 2% were found among the cc003 nano chamber, cc01 Razor, cc01, cc04 and cc13 ion chambers for field sizes from 2×2 to 5×5 cm². For the 1×1 cm² field size, the cc01 and cc13 ion chambers underestimated the dose by 12 ± 2% and 13 ± 1%, respectively, and the fc65 ion chamber underestimated the dose by 57 ± 2%, relative to the Razor diode. These results show that output factors for the 1×1 cm² field size should not be measured with the cc01, cc13 or fc65 ion chambers. The dose difference between the SFD and Razor diodes was less than 1.5%. If ion chambers are to be used for output measurements at field sizes of ≤1×1 cm², a correction factor should be applied when commissioning the linear accelerator; otherwise, the dose will be underestimated.

  12. Development and Evaluation of A Novel and Cost-Effective Approach for Low-Cost NO₂ Sensor Drift Correction.

    PubMed

    Sun, Li; Westerdahl, Dane; Ning, Zhi

    2017-08-19

    Emerging low-cost gas sensor technologies have received increasing attention in recent years for air quality measurements due to their small size and convenient deployment. However, in diverse applications these sensors face many technological challenges, including sensor drift over long-term deployment that cannot be easily addressed using mathematical correction algorithms or machine learning methods. This study aims to develop a novel approach to auto-correct the drift of a commonly used electrochemical nitrogen dioxide (NO₂) sensor, with a comprehensive evaluation of its application. The impact of environmental factors on the NO₂ electrochemical sensor in low-ppb concentration level measurements was evaluated in the laboratory, and the temperature and relative humidity correction algorithm was evaluated. An automated zeroing protocol was developed and assessed using a chemical absorbent to remove NO₂ as a means to perform zero correction in varying ambient conditions. The sensor system was operated in three different environments in which data were compared to a reference NO₂ analyzer. The results showed that the zero-calibration protocol effectively corrected the observed drift of the sensor output. This technique offers the ability to enhance the performance of low-cost sensor based systems, and these findings suggest extension of the approach to improve data quality from sensors measuring other gaseous pollutants in urban air.
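
    A minimal sketch of the auto-zero baseline correction concept, assuming periodic zero readings taken while the sampled air passes through an NO₂ absorbent; the times, offsets, and signal values below are illustrative:

```python
import numpy as np

def zero_corrected_no2(times, raw_ppb, zero_times, zero_ppb):
    """Subtract a drifting baseline estimated from periodic auto-zero readings.

    During an auto-zero event the sensor sees NO2-free air, so its output
    reflects only the zero offset.  The offset is linearly interpolated between
    zero events and subtracted from the ambient readings.
    """
    baseline = np.interp(times, zero_times, zero_ppb)
    return raw_ppb - baseline

times = np.arange(0.0, 121.0, 10.0)                     # minutes
raw = 25.0 + 0.05 * times + 3.0 * np.sin(times / 15.0)  # ambient signal + drift (made up)
zero_times = np.array([0.0, 60.0, 120.0])               # hourly auto-zero events
zero_ppb = np.array([0.0, 3.0, 6.0])                    # drifting zero offsets
print(zero_corrected_no2(times, raw, zero_times, zero_ppb))
```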

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, Elsayed

    Purpose: To characterize and correct for radiation-induced background (RIB) observed in the signals from a class of scanning water tanks. Methods: A method was developed to isolate the RIB through detector measurements in the background-free linac console area. Variation of the RIB against a large number of parameters was characterized, and its impact on basic clinical data for photon and electron beams was quantified. Different methods to minimize and/or correct for the RIB were proposed and evaluated. Results: The RIB is due to the presence of the electrometer and connection box in a low background radiation field (by design). The absolute RIB current with a biased detector is up to 2 pA, independent of the detector size, which is 0.6% and 1.5% of the central axis reference signal for a standard and a mini scanning chamber, respectively. The RIB monotonically increases with field size, is three times smaller for detectors that do not require a bias (e.g., diodes), is up to 80% larger for positive (versus negative) polarity, decreases with increasing photon energy, exhibits a single curve versus dose rate at the electrometer location, and is negligible for electron beams. Data after the proposed field-size correction method agree with point measurements from an independent system to within a few tenths of a percent for output factor, head scatter, depth dose at depth, and out-of-field profile dose. Manufacturer recommendations for electrometer placement are insufficient and sometimes incorrect. Conclusions: RIB in scanning water tanks can have a non-negligible effect on dosimetric data.

  14. Detector to detector corrections: a comprehensive experimental study of detector specific correction factors for beam output measurements for small radiotherapy beams.

    PubMed

    Azangwe, Godfrey; Grochowska, Paulina; Georg, Dietmar; Izewska, Joanna; Hopfgartner, Johannes; Lechner, Wolfgang; Andersen, Claus E; Beierholm, Anders R; Helt-Hansen, Jakob; Mizuno, Hideyuki; Fukumura, Akifumi; Yajima, Kaori; Gouldstone, Clare; Sharpe, Peter; Meghzifene, Ahmed; Palmans, Hugo

    2014-07-01

    The aim of the present study is to provide a comprehensive set of detector specific correction factors for beam output measurements for small beams, for a wide range of real time and passive detectors. The detector specific correction factors determined in this study may be potentially useful as a reference data set for small beam dosimetry measurements. Dose response of passive and real time detectors was investigated for small field sizes shaped with a micromultileaf collimator ranging from 0.6 × 0.6 cm(2) to 4.2 × 4.2 cm(2) and the measurements were extended to larger fields of up to 10 × 10 cm(2). Measurements were performed at 5 cm depth, in a 6 MV photon beam. Detectors used included alanine, thermoluminescent dosimeters (TLDs), stereotactic diode, electron diode, photon diode, radiophotoluminescent dosimeters (RPLDs), radioluminescence detector based on carbon-doped aluminium oxide (Al2O3:C), organic plastic scintillators, diamond detectors, liquid filled ion chamber, and a range of small volume air filled ionization chambers (volumes ranging from 0.002 cm(3) to 0.3 cm(3)). All detector measurements were corrected for the volume averaging effect and compared with dose ratios determined from alanine to derive detector correction factors that account for beam perturbation related to nonwater equivalence of the detector materials. For the detectors used in this study, volume averaging corrections ranged from unity for the smallest detectors such as the diodes, through 1.148 for the 0.14 cm(3) air filled ionization chamber, to as high as 1.924 for the 0.3 cm(3) ionization chamber. After applying volume averaging corrections, the detector readings were consistent among themselves and with alanine measurements for several small detectors but they differed for larger detectors, in particular for some small ionization chambers with volumes larger than 0.1 cm(3). The results demonstrate how important it is for the appropriate corrections to be applied to give consistent and accurate measurements for a range of detectors in small beam geometry. The results further demonstrate that depending on the choice of detectors, there is a potential for large errors when effects such as volume averaging, perturbation and differences in material properties of detectors are not taken into account. As the commissioning of small fields for clinical treatment has to rely on accurate dose measurements, the authors recommend the use of detectors that require relatively little correction, such as unshielded diodes, diamond detectors or microchambers, and solid state detectors such as alanine, TLD, Al2O3:C, or scintillators.
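
    The volume-averaging correction mentioned here can be illustrated in one dimension as the ratio of the central-axis dose to the profile dose averaged over the detector's sensitive extent; the profile shape and detector length below are illustrative:

```python
import numpy as np

def volume_averaging_correction(x_mm, dose, detector_extent_mm):
    """Ratio of central-axis dose to the mean dose over the detector extent (1D)."""
    mask = np.abs(x_mm) <= detector_extent_mm / 2.0
    mean_dose = dose[mask].mean()                  # uniform grid assumed
    central_dose = np.interp(0.0, x_mm, dose)
    return central_dose / mean_dose

x = np.linspace(-10.0, 10.0, 201)                  # lateral position (mm)
profile = np.exp(-(x / 4.0) ** 2)                  # narrow-beam-like profile (made up)
print(volume_averaging_correction(x, profile, detector_extent_mm=6.0))   # > 1
```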

  15. SU-E-T-104: An Examination of Dose in the Buildup and Build-Down Regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tome, W; Kuo, H; Phillips, J

    2015-06-15

    Purpose: To examine dose in the buildup and build-down regions and compare measurements made with various models and dosimeters. Methods: Dose was examined in a 30×30 cm² phantom of water-equivalent plastic with 10 cm of backscatter for various field sizes. Examination was performed with radiochromic film and optically-stimulated-luminescent-dosimeter (OSLD) chips, and compared against a plane-parallel chamber with a correction factor applied to approximate the response of an extrapolation chamber. For the build-down region, a correction factor to account for table absorption and chamber orientation in the posterior-anterior direction was applied. The measurement depths used for the film were halfway through their sensitive volumes, and a polynomial best fit curve was used to determine the dose to their surfaces. This chamber was also compared with the dose expected in a clinical kernel-based computer model, and a clinical Boltzmann-transport-equation-based (BTE) computer model. The two models were also compared against each other for cases with air gaps in the buildup region. Results: Within 3 mm, all dosimeters and models agreed with the chamber within 10% for all field sizes. At the entrance surface, film differed in comparison with the chamber from +90% to +15%, the BTE-model by +140% to +3%, and the kernel-based model by +20% to −25%, decreasing with increasing field size. At the exit surface, film differed in comparison with the chamber from −10% to −15%, the BTE-model by −53% to −50%, the kernel-based model by −55% to −57%, mostly independent of field size. Conclusion: The largest differences compared with the chamber were found at the surface for all field sizes. Differences decreased with increasing field size and increasing depth in phantom. Air gaps in the buildup region cause dose buildup to occur again post-gap, but the effect decreases with increasing phantom thickness prior to the gap.

  16. Dosimetric Characteristics of Wedged Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sidhu, N.P.S.; Breitman, Karen

    2015-01-15

    The beam characteristics of the wedged fields in the nonwedged planes (planes normal to the wedged planes) were studied for 6 MV and 15 MV x-ray beams. A method was proposed for determining the maximum field length of a wedged field that can be used in the nonwedged plane without introducing undesirable alterations in the dose distributions of these fields. The method requires very few measurements. The relative wedge factors of 6 MV and 15 MV X-rays were determined for wedge filters of nominal wedge angles of 15°, 30°, 45°, and 60° as a function of depth and field size. For a 6 MV beam the relative wedge factors determined for a field size of 10 × 10 cm² for 30°, 45°, and 60° wedge filters can be used for various field sizes ranging from 4 cm² to 20 cm² (except for the 60° wedge for which the maximum field size that can be used is 15 × 20 cm²) without introducing errors in the dosimetric calculations of more than 0.5% for depths up to 20 cm and 1% for depths up to 30 cm. For the 15° wedge filter the relative wedge factor for a field size of 10 × 10 cm² can be used over the same range of field sizes by introducing slightly higher error, 0.5% for depths up to 10 cm and 1% for depths up to 30 cm. For a 15 MV beam the maximum magnitude of the relative wedge factors for 45° and 60° lead wedges is of the order of 1%, and it is not important clinically to apply a correction of that magnitude. For a 15 MV beam the relative wedge factors determined for a field size of 6 × 6 cm² for the 15° and 30° steel wedges can be used over a range of field sizes from 4 cm² to 20 cm² without causing dosimetric errors greater than 0.5% for depths up to 10 cm.

  17. Experimental testing of four correction algorithms for the forward scattering spectrometer probe

    NASA Technical Reports Server (NTRS)

    Hovenac, Edward A.; Oldenburg, John R.; Lock, James A.

    1992-01-01

    Three number density correction algorithms and one size distribution correction algorithm for the Forward Scattering Spectrometer Probe (FSSP) were compared with data taken by the Phase Doppler Particle Analyzer (PDPA) and an optical number density measuring instrument (NDMI). Of the three number density correction algorithms, the one that compared best to the PDPA and NDMI data was the algorithm developed by Baumgardner, Strapp, and Dye (1985). The algorithm that corrects sizing errors in the FSSP that was developed by Lock and Hovenac (1989) was shown to be within 25 percent of the Phase Doppler measurements at number densities as high as 3000/cc.

  18. An evaluation of parturition indices in fishers

    USGS Publications Warehouse

    Frost, H.C.; York, E.C.; Krohn, W.B.; Elowe, K.D.; Decker, T.A.; Powell, S.M.; Fuller, T.K.

    1999-01-01

    Fishers (Martes pennanti) are important forest carnivores and furbearers that are susceptible to overharvest. Traditional indices used to monitor fisher populations typically overestimate litter size and proportion of females that give birth. We evaluated the usefulness of 2 indices of reproduction to determine proportion of female fishers that gave birth in a particular year. We used female fishers of known age and reproductive histories to compare appearance of placental scars with incidence of pregnancy and litter size. Microscopic observation of freshly removed reproductive tracts correctly identified pregnant fishers and correctly estimated litter size in 3 of 4 instances, but gross observation of placental scars failed to correctly identify pregnant fishers and litter size. Microscopic observations of reproductive tracts in carcasses that were not fresh also failed to identify pregnant animals and litter size. We evaluated mean sizes of anterior nipples to see if different reproductive classes could be distinguished. Mean anterior nipple size of captive and wild fishers correctly identified current-year breeders from nonbreeders. Former breeders were misclassified in 4 of 13 instances. Presence of placental scars accurately predicted parturition in a small sample size of fishers, but absence of placental scars did not signify that a female did not give birth. In addition to enabling the estimation of parturition rates in live animals more accurately than traditional indices, mean anterior nipple size also provided an estimate of the percentage of adult females that successfully raised young. Though using mean anterior nipple size to index reproductive success looks promising, additional data are needed to evaluate effects of using dried, stretched pelts on nipple size for management purposes.

  19. System-size corrections for self-diffusion coefficients calculated from molecular dynamics simulations: The case of CO2, n-alkanes, and poly(ethylene glycol) dimethyl ethers

    NASA Astrophysics Data System (ADS)

    Moultos, Othonas A.; Zhang, Yong; Tsimpanogiannis, Ioannis N.; Economou, Ioannis G.; Maginn, Edward J.

    2016-08-01

    Molecular dynamics simulations were carried out to study the self-diffusion coefficients of CO2, methane, propane, n-hexane, n-hexadecane, and various poly(ethylene glycol) dimethyl ethers (glymes in short, CH3O-(CH2CH2O)n-CH3 with n = 1, 2, 3, and 4, labeled as G1, G2, G3, and G4, respectively) at different conditions. Various system sizes were examined. The widely used Yeh and Hummer [J. Phys. Chem. B 108, 15873 (2004)] correction for the prediction of diffusion coefficient at the thermodynamic limit was applied and shown to be accurate in all cases compared to extrapolated values at infinite system size. The magnitude of correction, in all cases examined, is significant, with the smallest systems examined giving for some cases a self-diffusion coefficient approximately 15% lower than the infinite system-size extrapolated value. The results suggest that finite size corrections to computed self-diffusivities must be used in order to obtain accurate results.

  20. Correction: Influence of particle size and dielectric environment on the dispersion behaviour and surface plasmon in nickel nanoparticles.

    PubMed

    Sharma, Vikash; Chotia, Chanderbhan; Tarachand; Ganesan, Vedachalaiyer; Okram, Gunadhor S

    2017-07-21

    Correction for 'Influence of particle size and dielectric environment on the dispersion behaviour and surface plasmon in nickel nanoparticles' by Vikash Sharma et al., Phys. Chem. Chem. Phys., 2017, 19, 14096-14106.

  1. Dense arrays of millimeter-sized glass lenses fabricated at wafer-level.

    PubMed

    Albero, Jorge; Perrin, Stéphane; Bargiel, Sylwester; Passilly, Nicolas; Baranski, Maciej; Gauthier-Manuel, Ludovic; Bernard, Florent; Lullin, Justine; Froehly, Luc; Krauter, Johann; Osten, Wolfgang; Gorecki, Christophe

    2015-05-04

    This paper presents the study of a fabrication technique for lens arrays based on the reflow of glass inside cylindrical silicon cavities. Lenses whose sizes fall outside standard microfabrication ranges are considered. In particular, the case of high-fill-factor arrays is discussed in detail, since the proximity between lenses generates undesired effects. These effects, not experienced when lenses are sufficiently separated to be considered as single items, are corrected by properly designing the silicon cavities. Complete topographic as well as optical characterizations are reported. The compatibility of the materials with Micro-Opto-Electromechanical Systems (MOEMS) integration processes makes this technology attractive for the miniaturization of inspection systems, especially those devoted to imaging.

  2. Magnetic resonance imaging for staging and treatment planning in cervical cancer.

    PubMed

    López-Carballeira, A; Baleato-González, S; García-Figueiras, R; Otero-Estévez, I; Villalba-Martín, C

    2016-01-01

    To review the key points that are essential for the correct staging of cervical cancer by magnetic resonance imaging. Magnetic resonance imaging is the method of choice for locoregional staging of cervical cancer. Thorough evaluation of prognostic factors such as tumor size, invasion of adjacent structures, and the presence of lymph node metastases is fundamental for planning appropriate treatment.

  3. Library based x-ray scatter correction for dedicated cone beam breast CT

    PubMed Central

    Shi, Linxi; Karellas, Andrew; Zhu, Lei

    2016-01-01

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast-loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the geant4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require increase in radiation dose or hardware modifications, and it improves over the existing methods on implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging. PMID:27487870
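
    The correction step itself is simple once the library exists: pick the precomputed scatter map for the estimated breast diameter, shift it to align with the clinical projection, and subtract it. The sketch below illustrates that idea only; the library keys, the shift model, and the clipping are assumptions, not the authors' implementation.

```python
import numpy as np

def correct_projection(measured, scatter_library, breast_diameter_cm, shift_px):
    """Illustrative library-based scatter subtraction: pick the precomputed
    scatter map closest in breast diameter, shift it laterally to align with
    the clinical projection, and subtract it from the measured projection."""
    diam = min(scatter_library, key=lambda d: abs(d - breast_diameter_cm))
    scatter = np.roll(scatter_library[diam], shift_px, axis=1)  # crude spatial translation
    return np.clip(measured - scatter, a_min=0.0, a_max=None)   # avoid negative primaries

# Toy usage with synthetic data (64x64 projections, two library entries)
rng = np.random.default_rng(0)
proj = rng.uniform(0.5, 1.0, size=(64, 64))
library = {10.0: np.full((64, 64), 0.1), 14.0: np.full((64, 64), 0.2)}
corrected = correct_projection(proj, library, breast_diameter_cm=13.2, shift_px=3)
```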

  4. Child t-shirt size data set from 3D body scanner anthropometric measurements and a questionnaire.

    PubMed

    Pierola, A; Epifanio, I; Alemany, S

    2017-04-01

    A dataset of a fit assessment study in children is presented. Anthropometric measurements of 113 children were obtained using a 3D body scanner. Children tested a t-shirt of different sizes and a different model for boys and girls, and their fit was assessed by an expert. This expert labeled the fit as 0 (correct), -1 (if the garment was small for that child), or 1 (if the garment was large for that child) in an ordered factor called Size-fit. Moreover, the fit was numerically assessed from 1 (very poor fit) to 10 (perfect fit) in a variable called Expert evaluation. This data set contains the differences between the reference mannequin of the evaluated size and the child's anthropometric measurements for 27 variables. Besides these variables, in the data set, we can also find the gender, the size evaluated, and the size recommended by the expert, including if an intermediate, but nonexistent size between two consecutive sizes would have been the right size. In total, there are 232 observations. The analysis of these data can be found in Pierola et al. (2016) [2].

  5. Computer-assisted recording of tensile tests for the evaluation of serrated flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weinhandl, H.; Mitter, F.; Bernt, W.

    1994-12-01

    In a previous paper the authors pointed out the difficulties that arise in the evaluation of serrated flow curves when the applied tensile strain rates are just above "normal". The recording systems of tensile testing machines built, say, twenty years ago are not capable of recording the full size of the load drops due to the inertia of the recording pen. This handicap was then overcome by establishing correction factors which were determined by recording a small number of load drops with an oscilloscope. Modern testing machines are equipped with digital recording. The disadvantage of the common system is, however, its limited capacity, so that not enough space for data points is available. Consequently, the time intervals between data points are of the order of tenths of seconds. It will become obvious from the present results that such a time interval is too large for recording a correct serration size. This report is concerned with the recording of complete load-extension relations during tensile tests using a computer that is capable of storing the data at sufficiently small time intervals.

  6. Aneurysms of the anterior and posterior cerebral circulation: comparison of the morphometric features.

    PubMed

    Tykocki, Tomasz; Kostkiewicz, Bogusław

    2014-09-01

    Intracranial aneurysms (IAs) located in the posterior circulation are considered to have higher annual bleed rates than those in the anterior circulation. The aim of the study was to compare the morphometric factors differentiating between IAs located in the anterior and posterior cerebral circulation. A total of 254 IAs diagnosed between 2009 and 2012 were retrospectively analyzed. All patients qualified for diagnostic three-dimensional rotational angiography. IAs were assigned to either the anterior or posterior cerebral circulation subsets for the analysis. Means were compared with a t-test. Univariate and stepwise logistic regression analyses were used to determine the predictors of morphometric differences between the groups. For the defined predictors, ROC (receiver-operating characteristic) curves and interactive dot diagrams were calculated with the cutoff values of the morphometric factors. The number of anterior cerebral circulation IAs was 179 (70.5%); 141 (55.5%) aneurysms were ruptured. Significant differences between anterior and posterior circulation IAs were found for: the parent artery size (5.08 ± 1.8 mm vs. 3.95 ± 1.5 mm; p < 0.05), size ratio (2.22 ± 0.9 vs. 3.19 ± 1.8; p < 0.045) and aspect ratio (AR) (1.91 ± 0.8 vs. 2.75 ± 1.8; p = 0.02). Predicting factors differentiating anterior and posterior circulation IAs were the AR (OR = 2.20; 95% CI 1.80-2.70) and parent artery size (OR = 0.44; 95% CI 0.38-0.54). The cutoff point in the ROC curve was 2.185 for the AR and 4.89 mm for parent artery size. Aspect ratio and parent artery size were found to be predictive morphometric factors in differentiating between anterior and posterior cerebral IAs.

  7. Logical and Methodological Issues Affecting Genetic Studies of Humans Reported in Top Neuroscience Journals.

    PubMed

    Grabitz, Clara R; Button, Katherine S; Munafò, Marcus R; Newbury, Dianne F; Pernet, Cyril R; Thompson, Paul A; Bishop, Dorothy V M

    2018-01-01

    Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature-stringent statistical standards, use of large samples, and replication of findings-are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition, or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
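
    As one concrete example of the correlation-aware corrections recommended above, a Nyholt-style spectral-decomposition estimate of the effective number of independent tests can be computed from the phenotype (or genotype) correlation matrix. This is an illustration of the general idea only, not the procedure used by any of the reviewed studies.

```python
import numpy as np

def effective_number_of_tests(corr):
    """Nyholt-style estimate of the effective number of independent tests
    from the eigenvalues of a correlation matrix: M_eff = 1 + (M-1)*(1 - Var(lambda)/M)."""
    eigvals = np.linalg.eigvalsh(corr)
    m = corr.shape[0]
    return 1 + (m - 1) * (1 - np.var(eigvals, ddof=1) / m)

# Toy example: five moderately correlated phenotypes (r = 0.4 off-diagonal)
corr = 0.4 * np.ones((5, 5)) + 0.6 * np.eye(5)
m_eff = effective_number_of_tests(corr)
alpha_adjusted = 0.05 / m_eff   # Bonferroni-style threshold using M_eff instead of M
print(m_eff, alpha_adjusted)
```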

  8. Spectral method for the correction of the Cerenkov light effect in plastic scintillation detectors: A comparison study of calibration procedures and validation in Cerenkov light-dominated situations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guillot, Mathieu; Gingras, Luc; Archambault, Louis

    2011-04-15

    Purpose: The purposes of this work were: (1) To determine if a spectral method can accurately correct the Cerenkov light effect in plastic scintillation detectors (PSDs) for situations where the Cerenkov light is dominant over the scintillation light and (2) to develop a procedural guideline for accurately determining the calibration factors of PSDs. Methods: The authors demonstrate, by using the equations of the spectral method, that the condition for accurately correcting the effect of Cerenkov light is that the ratio of the two calibration factors must be equal to the ratio of the Cerenkov light measured within the two different spectral regions used for analysis. Based on this proof, the authors propose two new procedures to determine the calibration factors of PSDs, which were designed to respect this condition. A PSD that consists of a cylindrical polystyrene scintillating fiber (1.6 mm{sup 3}) coupled to a plastic optical fiber was calibrated by using these new procedures and the two reference procedures described in the literature. To validate the extracted calibration factors, relative dose profiles and output factors for a 6 MV photon beam from a medical linac were measured with the PSD and an ionization chamber. Emphasis was placed on situations where the Cerenkov light is dominant over the scintillation light and on situations dissimilar to the calibration conditions. Results: The authors found that the accuracy of the spectral method depends on the procedure used to determine the calibration factors of the PSD and on the attenuation properties of the optical fiber used. The results from the relative dose profile measurements showed that the spectral method can correct the Cerenkov light effect with an accuracy level of 1%. The results obtained also indicate that PSDs measure output factors that are lower than those measured with ionization chambers for square field sizes larger than 25x25 cm{sup 2}, in general agreement with previously published Monte Carlo results. Conclusions: The authors conclude that the spectral method can be used to accurately correct the Cerenkov light effect in PSDs. The authors confirmed the importance of maximizing the difference of Cerenkov light production between calibration measurements. The authors also found that the attenuation of the optical fiber, which is assumed to be constant in the original formulation of the spectral method, may cause a variation of the calibration factors in some experimental setups.
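
    For readers unfamiliar with the spectral (chromatic removal) approach, the core bookkeeping can be written as a two-band linear system: the dose is expressed as D = a·Q_1 + b·Q_2, with the calibration factors a and b obtained from two reference irradiations that differ strongly in Cerenkov content. The numbers below are invented purely to show the arithmetic; they are not from the calibration procedures compared in the paper.

```python
import numpy as np

# Two calibration irradiations with known dose but very different Cerenkov
# fractions: measured signals in two spectral windows (arbitrary units).
D_cal = np.array([1.00, 1.00])      # Gy, assumed calibration doses
Q_blue = np.array([8.2, 5.1])       # assumed band-1 signals
Q_green = np.array([3.9, 6.4])      # assumed band-2 signals

# Solve the 2x2 linear system D = a*Q_blue + b*Q_green for (a, b)
A = np.column_stack([Q_blue, Q_green])
a, b = np.linalg.solve(A, D_cal)

# Apply the factors to a new, Cerenkov-dominated measurement
dose = a * 7.0 + b * 5.5
print(f"a = {a:.4f}, b = {b:.4f}, dose = {dose:.3f} Gy")
```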

  9. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    PubMed

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.

  10. Capturing 'R&D excellence': indicators, international statistics, and innovative universities.

    PubMed

    Tijssen, Robert J W; Winnink, Jos J

    2018-01-01

    Excellent research may contribute to successful science-based technological innovation. We define 'R&D excellence' in terms of scientific research that has contributed to the development of influential technologies, where 'excellence' refers to the top segment of a statistical distribution based on internationally comparative performance scores. Our measurements are derived from frequency counts of literature references ('citations') from patents to research publications during the last 15 years. The 'D' part in R&D is represented by the top 10% most highly cited 'excellent' patents worldwide. The 'R' part is captured by research articles in international scholarly journals that are cited by these patented technologies. After analyzing millions of citing patents and cited research publications, we find very large differences between countries worldwide in terms of the volume of domestic science contributing to those patented technologies. Where the USA produces the largest numbers of cited research publications (partly because of database biases), Switzerland and Israel outperform the US after correcting for the size of their national science systems. To tease out possible explanatory factors, which may significantly affect or determine these performance differentials, we first studied high-income nations and advanced economies. Here we find that the size of R&D expenditure correlates with the sheer size of cited publications, as does the degree of university research cooperation with domestic firms. When broadening our comparative framework to 70 countries (including many medium-income nations) while correcting for size of national science systems, the important explanatory factors become the availability of human resources and quality of science systems. Focusing on the latter factor, our in-depth analysis of 716 research-intensive universities worldwide reveals several universities with very high scores on our two R&D excellence indicators. Confirming the above macro-level findings, an in-depth study of 27 leading US universities identifies research expenditure size as a prime determinant. Our analytical model and quantitative indicators provide a supplementary perspective to input-oriented statistics based on R&D expenditures. The country-level findings are indicative of significant disparities between national R&D systems. Comparing the performance of individual universities, we observe large differences within national science systems. The top ranking 'innovative' research universities contribute significantly to the development of advanced science-based technologies.

  11. Aberration correction results in the IBM STEM instrument.

    PubMed

    Batson, P E

    2003-09-01

    Results from the installation of aberration correction in the IBM 120 kV STEM argue that a sub-angstrom probe size has been achieved. Results and the experimental methods used to obtain them are described here. Some post-experiment processing is necessary to demonstrate the probe size of about 0.078 nm. While the promise of aberration correction is demonstrated, we remain at the very threshold of practicality, given the very stringent stability requirements.

  12. Astigmatism correction in x-ray scanning photoemission microscope with use of elliptical zone plate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ade, H.; Ko, C.; Anderson, E.

    1992-03-02

    We report the impact of an elliptical, high resolution zone plate on the performance of an initially astigmatic soft x-ray scanning photoemission microscope. A zone plate with carefully calibrated eccentricity has been used to eliminate astigmatism arising from transport optics, and an improvement of about a factor of 3 in spatial resolution was achieved. The resolution is still dominated by the source size and chromatic aberrations rather than by diffraction and coma, and a further gain of about a factor of 2 in resolution is possible. Sub 100 nm photoemission microscopy with primary photoelectrons is now within reach.

  13. Methods for obtaining true particle size distributions from cross section measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lord, Kristina Alyse

    2013-01-01

    Sectioning methods are frequently used to measure grain sizes in materials. These methods do not provide accurate grain sizes for two reasons. First, the sizes of features observed on random sections are always smaller than the true sizes of solid spherical shaped objects, as noted by Wicksell [1]. This is the case because the section very rarely passes through the center of solid spherical shaped objects randomly dispersed throughout a material. The sizes of features observed on random sections are inversely related to the distance of the center of the solid object from the section [1]. Second, on a plane section through the solid material, larger sized features are more frequently observed than smaller ones due to the larger probability for a section to come into contact with the larger sized portion of the spheres than the smaller sized portion. As a result, it is necessary to find a method that takes into account these reasons for inaccurate particle size measurements, while providing a correction factor for accurately determining true particle size measurements. I present a method for deducing true grain size distributions from those determined from specimen cross sections, either by measurement of equivalent grain diameters or linear intercepts.
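
    The simplest instance of the sectioning bias described above is the monosized-sphere case, where the mean diameter observed on random plane sections is π/4 times the true sphere diameter, so a single multiplicative correction of 4/π recovers the true size. The short sketch below demonstrates this; realistic grain-size work requires unfolding the whole distribution (e.g., a Saltykov-type scheme), which is not shown here.

```python
import numpy as np

# Monosized spheres of diameter D cut by a random plane at distance h from
# the center show circle diameters d = D*sqrt(1 - (2h/D)^2); E[d] = (pi/4)*D,
# so D can be recovered as (4/pi) * mean(d).
rng = np.random.default_rng(1)
D_true = 10.0                                    # true sphere diameter (arbitrary units)
h = rng.uniform(0.0, D_true / 2, size=100_000)   # random cut distances from the center
d_observed = D_true * np.sqrt(1.0 - (2 * h / D_true) ** 2)

D_estimated = (4.0 / np.pi) * d_observed.mean()
print(D_true, D_estimated)   # the corrected estimate approaches 10
```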

  14. Investigation of electron-loss and photon scattering correction factors for FAC-IR-300 ionization chamber

    NASA Astrophysics Data System (ADS)

    Mohammadi, S. M.; Tavakoli-Anbaran, H.; Zeinali, H. Z.

    2017-02-01

    The parallel-plate free-air ionization chamber termed FAC-IR-300 was designed at the Atomic Energy Organization of Iran, AEOI. This chamber is used for low- and medium-energy X-ray dosimetry at the primary standard level. In order to evaluate the air kerma, some correction factors such as the electron-loss correction factor (ke) and the photon-scattering correction factor (ksc) are needed. The ke factor corrects for charge loss from the collecting volume, and the ksc factor corrects for scattering of photons into the collecting volume. In this work, ke and ksc were estimated by Monte Carlo simulation. These correction factors were calculated for mono-energetic photons. Based on the simulation data, the ke and ksc values for the FAC-IR-300 ionization chamber are 1.0704 and 0.9982, respectively.
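
    To show where such factors enter, the sketch below applies the reported ke and ksc values within the generic free-air-chamber air-kerma expression K = (Q / (ρ_air·V))·(W/e)/(1 − g)·Πk_i. The charge, collecting volume, and g value are invented placeholders and are not FAC-IR-300 parameters.

```python
# Generic free-air-chamber air-kerma evaluation with multiplicative
# correction factors (numerical inputs are assumptions, not FAC-IR-300 data).
W_over_e = 33.97          # J/C, mean energy per ion pair in dry air
rho_air = 1.2045          # kg/m^3 at reference conditions
V = 2.0e-6                # m^3, assumed collecting volume
Q = 5.0e-10               # C, assumed collected charge
g = 0.0                   # radiative-loss fraction, negligible at these energies

k_e, k_sc = 1.0704, 0.9982    # values reported in the abstract
other_k = 1.0                 # remaining corrections lumped together here

K_air = (Q / (rho_air * V)) * W_over_e / (1.0 - g) * k_e * k_sc * other_k
print(f"Air kerma ≈ {K_air:.4e} Gy")
```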

  15. SU-F-T-490: Separating Effects Influencing Detector Response in Small MV Photon Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wegener, S; Sauer, O

    2016-06-15

    Purpose: Different detector properties influence their responses, especially in field sizes below the lateral electron range. Due to the finite active volume, the detector density, and electron perturbation by other structural parts, the response factor is in general field-size dependent. We aimed to visualize and separate the main effects contributing to detector behavior for a variety of detector types. This was achieved in an experimental setup shielding the field center, so that effects caused by scattered radiation could be examined separately. Methods: Signal ratios for field sizes down to 8 mm (SSD 90 cm, water depth 10 cm) of a 6 MV beam from a Siemens Primus LINAC were recorded with several detectors: PTW microDiamond and PinPoint ionization chamber, shielded diodes (PTW P-60008, IBA PFD and SNC Edge) and unshielded diodes (PTW E-60012 and IBA SFD). Measurements were carried out in open fields and with an aluminum pole of 4 mm diameter as a central block. The geometric volume effect was calculated from profiles obtained with Gafchromic EBT3 film, evaluated using FilmQA Pro software (Ashland, USA). Results: Volume corrections were 1.7% at maximum. After correction, in small open fields, unshielded diodes showed a lower response than the diamond, i.e. the diamond detector over-response seems to be higher than that of unshielded diodes. Beneath the block, this behavior was amplified by a factor of 2. For the shielded diodes, the over-response for small open fields could be confirmed. However, their lateral response behavior was strongly type dependent; e.g. the signal ratio dropped from 1.02 to 0.98 for the P-60008 diode. Conclusion: The lateral detector response was experimentally examined. Detector volume and density alone do not fully account for the field-size dependence of detector response. Detector construction details play a major role, especially for shielded diodes.
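
    The geometric volume effect mentioned above can be estimated by averaging a measured lateral profile over the detector's sensitive width and comparing with the central-axis value. The sketch below uses a synthetic Gaussian-like profile and a 1-D top-hat aperture as assumptions; the study itself derived the correction from EBT3 film profiles.

```python
import numpy as np

def volume_averaging_correction(x_mm, profile, detector_width_mm):
    """Ratio of the central-axis value to the profile averaged over the
    detector's sensitive width (1-D top-hat aperture assumed)."""
    inside = np.abs(x_mm) <= detector_width_mm / 2.0
    averaged = profile[inside].mean()            # uniform grid, so a plain mean suffices
    central = profile[np.argmin(np.abs(x_mm))]
    return central / averaged

# Synthetic small-field profile (Gaussian-like, sigma = 3.5 mm) and a
# detector with an assumed 2.2 mm sensitive width
x = np.linspace(-10.0, 10.0, 2001)
profile = np.exp(-0.5 * (x / 3.5) ** 2)
k_vol = volume_averaging_correction(x, profile, detector_width_mm=2.2)
print(f"Volume-averaging correction ≈ {k_vol:.4f}")  # ~1.016, same order as the 1.7% quoted
```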

  16. On the two-loop virtual QCD corrections to Higgs boson pair production in the standard model

    DOE PAGES

    Degrassi, Giuseppe; Giardino, Pier Paolo; Gröber, Ramona

    2016-07-21

    Here, we compute the next-to-leading order virtual QCD corrections to Higgs-pair production via gluon fusion. We also present analytic results for the two-loop contributions to the spin-0 and spin-2 form factors in the amplitude. The reducible contributions, given by the double-triangle diagrams, are evaluated exactly, while the two-loop irreducible diagrams are evaluated by an asymptotic expansion in the heavy top-quark mass up to and including terms of O(1/m_t^8). We estimate that mass effects can reduce the hadronic cross section by at most 10%, assuming that the finite top-quark mass effects are of similar size in the entire range of partonic energies.

  17. Simulation of reflecting surface deviations of centimeter-band parabolic space radiotelescope (SRT) with the large-size mirror

    NASA Astrophysics Data System (ADS)

    Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.

    2017-11-01

    The realization of astrophysical research requires the development of highly sensitive centimeter-band parabolic space radiotelescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure. Mesh structures of such size do not provide the reflecting-surface accuracy required for centimeter-band observations. Such a telescope with a 10 m diameter mirror is now being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermal-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space-environment factors; numerical simulation therefore becomes the basis for accepting the adopted designs. Such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. The article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during orbital operation. The factors that determine the deviations, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation errors; deformations caused by the behavior of composite materials in space), are analyzed. A finite-element model and a set of methods are developed that allow the reflecting-surface deviations caused by all these factors to be modeled computationally and that take into account deviation correction by the spacecraft orientation system. Modeling results for two operating modes (orientation toward the Sun) of the SRT are presented.

  18. Determination of macular hole size in relation to individual variabilities of fovea morphology.

    PubMed

    Shin, J Y; Chu, Y K; Hong, Y T; Kwon, O W; Byeon, S H

    2015-08-01

    To determine the preoperative anatomic factors in macular holes and their correlation to hole closure. Forty-six consecutive eyes with unilateral macular holes that had undergone surgery and been followed up for at least 6 months were enrolled. Optical coherence tomography images and best-corrected visual acuity (BCVA) within 2 weeks prior to operation and 6 months after surgery were analyzed. The maximal hole dimension, foveal degeneration factors (inner nuclear layer cysts, outer segment (OS) shortening) and the widest foveolar floor size of the fellow eyes were measured. To overcome preoperative individual variability in foveal morphology, an 'adjusted' hole size parameter (the ratio between the hole size and the fellow-eye foveolar floor size) was used, based on the fact that both eyes are morphologically symmetrical. Mean preoperative BCVA (logMAR) was 1.03±0.43 and the mean postoperative BCVA was 0.50±0.38 at 6 months. Preoperative BCVA was significantly associated with postoperative BCVA (P=0.0002). The average hole diameter was 448.9±196.8 μm and the average fellow-eye foveolar floor size was 461.3±128.4 μm. There was a correlation between hole diameter and the size of the fellow-eye foveolar floor (Pearson's coefficient=0.608, P<0.0001). The adjusted hole size parameter was 0.979±0.358 (0.761-2.336), which was a strong predictor of both anatomic (P=0.0281) and visual (P=0.0016) outcomes. When determining the extent of the preoperative hole size, the foveal morphologic variations among individuals have to be taken into consideration. Hole size may be related to the original foveal shape, especially in relation to the centrifugal retraction of the foveal tissues.
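
    The 'adjusted' hole size parameter is plain arithmetic, shown below with invented numbers (not patient data): the hole diameter divided by the fellow eye's foveolar floor width.

```python
# Illustrative values only, not data from the study
hole_diameter_um = 430.0            # maximal hole dimension in the affected eye
fellow_foveolar_floor_um = 455.0    # widest foveolar floor in the fellow eye

adjusted_hole_size = hole_diameter_um / fellow_foveolar_floor_um
print(f"Adjusted hole size = {adjusted_hole_size:.3f}")  # cohort mean was 0.979 ± 0.358
```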

  19. System-size corrections for self-diffusion coefficients calculated from molecular dynamics simulations: The case of CO{sub 2}, n-alkanes, and poly(ethylene glycol) dimethyl ethers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moultos, Othonas A.; Economou, Ioannis G.; Zhang, Yong

    Molecular dynamics simulations were carried out to study the self-diffusion coefficients of CO{sub 2}, methane, propane, n-hexane, n-hexadecane, and various poly(ethylene glycol) dimethyl ethers (glymes in short, CH{sub 3}O–(CH{sub 2}CH{sub 2}O){sub n}–CH{sub 3} with n = 1, 2, 3, and 4, labeled as G1, G2, G3, and G4, respectively) at different conditions. Various system sizes were examined. The widely used Yeh and Hummer [J. Phys. Chem. B 108, 15873 (2004)] correction for the prediction of diffusion coefficient at the thermodynamic limit was applied and shown to be accurate in all cases compared to extrapolated values at infinite system size. The magnitude of correction, in all cases examined, is significant, with the smallest systems examined giving for some cases a self-diffusion coefficient approximately 15% lower than the infinite system-size extrapolated value. The results suggest that finite size corrections to computed self-diffusivities must be used in order to obtain accurate results.

  20. Assessment of bias in US waterfowl harvest estimates

    USGS Publications Warehouse

    Padding, Paul I.; Royle, J. Andrew

    2012-01-01

    Context. North American waterfowl managers have long suspected that waterfowl harvest estimates derived from national harvest surveys in the USA are biased high. Survey bias can be evaluated by comparing survey results with like estimates from independent sources. Aims. We used band-recovery data to assess the magnitude of apparent bias in duck and goose harvest estimates, using mallards (Anas platyrhynchos) and Canada geese (Branta canadensis) as representatives of ducks and geese, respectively. Methods. We compared the number of reported mallard and Canada goose band recoveries, adjusted for band reporting rates, with the estimated harvests of banded mallards and Canada geese from the national harvest surveys. We used the results of those comparisons to develop correction factors that can be applied to annual duck and goose harvest estimates of the national harvest survey. Key results. National harvest survey estimates of banded mallards harvested annually averaged 1.37 times greater than those calculated from band-recovery data, whereas Canada goose harvest estimates averaged 1.50 or 1.63 times greater than comparable band-recovery estimates, depending on the harvest survey methodology used. Conclusions. Duck harvest estimates produced by the national harvest survey from 1971 to 2010 should be reduced by a factor of 0.73 (95% CI = 0.71–0.75) to correct for apparent bias. Survey-specific correction factors of 0.67 (95% CI = 0.65–0.69) and 0.61 (95% CI = 0.59–0.64) should be applied to the goose harvest estimates for 1971–2001 (duck stamp-based survey) and 1999–2010 (HIP-based survey), respectively. Implications. Although this apparent bias likely has not influenced waterfowl harvest management policy in the USA, it does have negative impacts on some applications of harvest estimates, such as indirect estimation of population size. For those types of analyses, we recommend applying the appropriate correction factor to harvest estimates.
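
    Applying the reported correction factors is a one-line multiplication per survey era. The raw totals below are invented for illustration, while the factors 0.73, 0.67 and 0.61 are those quoted above.

```python
# Hypothetical raw national harvest-survey estimates (illustrative only)
duck_estimate = 13_500_000          # raw duck harvest estimate, 1971-2010 era
goose_estimate_stamp = 2_400_000    # raw goose estimate, duck-stamp-based survey era
goose_estimate_hip = 2_900_000      # raw goose estimate, HIP-based survey era

corrected = {
    "ducks 1971-2010": duck_estimate * 0.73,
    "geese 1971-2001": goose_estimate_stamp * 0.67,
    "geese 1999-2010": goose_estimate_hip * 0.61,
}
print(corrected)
```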

  1. VizieR Online Data Catalog: PACS photometry of FIR faint stars (Klaas+, 2018)

    NASA Astrophysics Data System (ADS)

    Klaas, U.; Balog, Z.; Nielbock, M.; Mueller, T. G.; Linz, H.; Kiss, Cs.

    2018-01-01

    70, 100 and 160um photometry of FIR faint stars from PACS scan map and chop/nod measurements. For scan maps also the photometry of the combined scan and cross-scan maps (at 160um there are usually two scan and cross-scan maps each as complements to the 70 and 100um maps) is given. Note: Not all stars have measured fluxes in all three filters. Scan maps: The main observing mode was the point-source mini-scan-map mode; selected scan map parameters are given in column mparam. An outline of the data processing using the high-pass filter (HPF) method is presented in Balog et al. (2014ExA....37..129B). Processing proceeded from Herschel Science Archive SPG v13.1.0 level 1 products with HIPE version 15 build 165 for 70 and 100um maps and from Herschel Science Archive SPG v14.2.0 level 1 products with HIPE version 15 build 1480 for 160um maps. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 3.13, 2.76, and 4.12, respectively. The noise for the photometric aperture is calculated as sig_aper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. The final stellar flux is derived as fstar=faper*caper/cc. Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. Chop/nod observations: The chop/nod point-source mode is described in this paper. An outline of the data processing is presented in Nielbock et al. (2013ExA....36..631N). Processing proceeded from Herschel Science Archive SPG v11.1.0 level 1 products with HIPE version 13 build 2768. Gyro correction was applied for most of the cases to improve the pointing reconstruction performance. Fluxes faper were obtained by aperture photometry with aperture radii of 5.6, 6.8 and 10.7 arcsec for the 70, 100 and 160um filter, respectively. Noise per pixel sigpix was determined with the histogram method, described in this paper, for coverage values greater than or equal to 0.5*maximum coverage. The number of map pixels (1.1, 1.4, and 2.1 arcsec pixel size, respectively) inside the photometric aperture is Naper = 81.42, 74.12, and 81.56, respectively. The corresponding correction factors for correlated noise are fcorr = 6.33, 4.22, and 7.81, respectively. The noise for the photometric aperture is calculated as sigaper=sqrt(Naper)*fcorr*sigpix. Signal-to-noise ratios are determined as S/N=faper/sigaper. Aperture-correction factors to derive the total flux are caper = 1.61, 1.56 and 1.56 for the 70, 100 and 160um filter, respectively. Applied colour-correction factors for a 5000K black-body SED are cc = 1.016, 1.033, and 1.074 for the 70, 100, and 160um filter, respectively. Maximum and minimum FWHM of the star PSF are determined by an elliptical fit of the intensity profile. (7 data files).
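
    A small sketch of the noise and flux bookkeeping spelled out in these notes, for one filter: aperture noise from the per-pixel noise scaled by the correlated-noise factor, the signal-to-noise ratio, and the aperture- and colour-corrected stellar flux. The aperture flux and per-pixel noise in the example are placeholders, not catalog values.

```python
import math

def pacs_stellar_flux(faper, sigpix, n_aper, f_corr, c_aper, cc):
    """sig_aper = sqrt(Naper)*fcorr*sigpix, S/N = faper/sig_aper,
    fstar = faper*caper/cc, as described in the catalog notes."""
    sig_aper = math.sqrt(n_aper) * f_corr * sigpix
    snr = faper / sig_aper
    f_star = faper * c_aper / cc
    return sig_aper, snr, f_star

# Example for the 100 um scan-map case using the constants quoted above
sig_aper, snr, f_star = pacs_stellar_flux(
    faper=0.045,          # Jy, placeholder aperture flux
    sigpix=1.2e-4,        # Jy/pixel, placeholder per-pixel noise
    n_aper=74.12, f_corr=2.76, c_aper=1.56, cc=1.033,
)
print(sig_aper, snr, f_star)
```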

  2. Extrinsic Factors Influencing Fetal Deformations and Intrauterine Growth Restriction

    PubMed Central

    Moh, Wendy; Graham, John M.; Wadhawan, Isha; Sanchez-Lara, Pedro A.

    2012-01-01

    The causes of intrauterine growth restriction (IUGR) are multifactorial with both intrinsic and extrinsic influences. While many studies focus on the intrinsic pathological causes, the possible long-term consequences resulting from extrinsic intrauterine physiological constraints merit additional consideration and further investigation. Infants with IUGR can exhibit early symmetric or late asymmetric growth abnormality patterns depending on the fetal stage of development, of which the latter is most common occurring in 70–80% of growth-restricted infants. Deformation is the consequence of extrinsic biomechanical factors interfering with normal growth, functioning, or positioning of the fetus in utero, typically arising during late gestation. Biomechanical forces play a critical role in the normal morphogenesis of most tissues. The magnitude and direction of force impact the form of the developing fetus, with a specific tissue response depending on its pliability and stage of development. Major uterine constraining factors include primigravida, small maternal size, uterine malformation, uterine fibromata, early pelvic engagement of the fetal head, aberrant fetal position, oligohydramnios, and multifetal gestation. Corrective mechanical forces similar to those that gave rise to the deformation to reshape the deformed structures are often used and should take advantage of the rapid postnatal growth to correct form. PMID:22888434

  3. Children's accuracy of portion size estimation using digital food images: effects of interface design and size of image on computer screen.

    PubMed

    Baranowski, Tom; Baranowski, Janice C; Watson, Kathleen B; Martin, Shelby; Beltran, Alicia; Islam, Noemi; Dadabhoy, Hafza; Adame, Su-heyla; Cullen, Karen; Thompson, Debbe; Buday, Richard; Subar, Amy

    2011-03-01

    To test the effect of image size and presence of size cues on the accuracy of portion size estimation by children. Children were randomly assigned to seeing images with or without food size cues (utensils and checked tablecloth) and were presented with sixteen food models (foods commonly eaten by children) in varying portion sizes, one at a time. They estimated each food model's portion size by selecting a digital food image. The same food images were presented in two ways: (i) as small, graduated portion size images all on one screen or (ii) by scrolling across large, graduated portion size images, one per sequential screen. Laboratory-based with computer and food models. Volunteer multi-ethnic sample of 120 children, equally distributed by gender and ages (8 to 13 years) in 2008-2009. Average percentage of correctly classified foods was 60·3 %. There were no differences in accuracy by any design factor or demographic characteristic. Multiple small pictures on the screen at once took half the time to estimate portion size compared with scrolling through large pictures. Larger pictures had more overestimation of size. Multiple images of successively larger portion sizes of a food on one computer screen facilitated quicker portion size responses with no decrease in accuracy. This is the method of choice for portion size estimation on a computer.

  4. High-fidelity artifact correction for cone-beam CT imaging of the brain

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-02-01

    CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement with the nominal blood contrast of 50 HU. Although noise was amplified by the corrections, the contrast-to-noise ratio (CNR) of simulated bleeds was improved by nearly a factor of 3.5 (CNR = 0.54 without corrections and 1.91 after correction). The resulting image quality motivates further development and translation of the FPD-CBCT system for imaging of acute TBI.

  5. Resistivity Correction Factor for the Four-Probe Method: Experiment II

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo

    1989-05-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F can be applied to a system consisting of a disk sample and a four-probe array. Measurements are made on isotropic graphite disks and crystalline ITO films. Factor F can correct the apparent variations of the data and lead to reasonable resistivities and sheet resistances. Here factor F is compared to other correction factors, i.e., F_ASTM and F_JIS.
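
    For context, the correction factor multiplies the ideal infinite-sheet four-probe result; a hedged sketch of that textbook relation is shown below with invented measurement values. The value of F for a given disk diameter, probe spacing and probe position must still be taken from the appropriate table or formula (e.g., F, F_ASTM or F_JIS).

```python
import math

def resistivity_four_probe(voltage_V, current_A, thickness_m, F=1.0):
    """Collinear four-probe measurement on a thin sample: the ideal
    infinite-sheet result (pi/ln2)*(V/I) multiplied by a geometry-dependent
    correction factor F, then converted to resistivity via the thickness."""
    sheet_resistance = (math.pi / math.log(2)) * (voltage_V / current_A) * F
    resistivity = sheet_resistance * thickness_m
    return sheet_resistance, resistivity

# Illustrative numbers (not from the paper)
Rs, rho = resistivity_four_probe(voltage_V=2.3e-3, current_A=1.0e-3,
                                 thickness_m=200e-9, F=0.92)
print(f"Sheet resistance = {Rs:.2f} ohm/sq, resistivity = {rho:.3e} ohm·m")
```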

  6. Development and Evaluation of A Novel and Cost-Effective Approach for Low-Cost NO2 Sensor Drift Correction

    PubMed Central

    Sun, Li; Westerdahl, Dane; Ning, Zhi

    2017-01-01

    Emerging low-cost gas sensor technologies have received increasing attention in recent years for air quality measurements due to their small size and convenient deployment. However, in the diverse applications these sensors face many technological challenges, including sensor drift over long-term deployment that cannot be easily addressed using mathematical correction algorithms or machine learning methods. This study aims to develop a novel approach to auto-correct the drift of commonly used electrochemical nitrogen dioxide (NO2) sensor with comprehensive evaluation of its application. The impact of environmental factors on the NO2 electrochemical sensor in low-ppb concentration level measurement was evaluated in laboratory and the temperature and relative humidity correction algorithm was evaluated. An automated zeroing protocol was developed and assessed using a chemical absorbent to remove NO2 as a means to perform zero correction in varying ambient conditions. The sensor system was operated in three different environments in which data were compared to a reference NO2 analyzer. The results showed that the zero-calibration protocol effectively corrected the observed drift of the sensor output. This technique offers the ability to enhance the performance of low-cost sensor based systems and these findings suggest extension of the approach to improve data quality from sensors measuring other gaseous pollutants in urban air. PMID:28825633
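
    A minimal sketch of the auto-zeroing idea, assuming periodic scrubber readings are available: interpolate the zero-air baseline in time and subtract it from the raw signal. The scrubbing schedule, the synthetic drift, and the omission of temperature/RH compensation are all assumptions of this illustration, not the authors' protocol.

```python
import numpy as np

def drift_corrected_no2(raw_ppb, zero_times_h, zero_readings_ppb, times_h):
    """Interpolate the periodically measured 'zero air' readings (sensor
    sampling through an NO2 scrubber) and subtract the baseline from the
    raw signal."""
    baseline = np.interp(times_h, zero_times_h, zero_readings_ppb)
    return raw_ppb - baseline

# Toy data: a slowly drifting baseline superimposed on a 20 ppb signal
t = np.linspace(0, 48, 97)                      # hours
raw = 20 + 0.3 * t + np.random.default_rng(2).normal(0, 1, t.size)
zero_t = np.array([0, 12, 24, 36, 48])
zero_vals = 0.3 * zero_t                        # what the scrubbed sensor would read
corrected = drift_corrected_no2(raw, zero_t, zero_vals, t)
print(corrected.mean())                         # ~20 ppb after baseline removal
```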

  7. A Correction to the Stress-Strain Curve During Multistage Hot Deformation of 7150 Aluminum Alloy Using Instantaneous Friction Factors

    NASA Astrophysics Data System (ADS)

    Jiang, Fulin; Tang, Jie; Fu, Dinfa; Huang, Jianping; Zhang, Hui

    2018-04-01

    Multistage stress-strain curve correction based on an instantaneous friction factor was studied for axisymmetric uniaxial hot compression of 7150 aluminum alloy. Experimental friction factors were calculated based on continuous isothermal axisymmetric uniaxial compression tests at various deformation parameters. Then, an instantaneous friction factor equation was fitted by mathematic analysis. After verification by comparing single-pass flow stress correction with traditional average friction factor correction, the instantaneous friction factor equation was applied to correct multistage stress-strain curves. The corrected results were reasonable and validated by multistage relative softening calculations. This research provides a broad potential for implementing axisymmetric uniaxial compression in multistage physical simulations and friction optimization in finite element analysis.

  8. Factors affecting first return to work following a compensable occupational back injury.

    PubMed

    Oleinick, A; Gluck, J V; Guire, K

    1996-11-01

    Occupational back injuries produced $27 billion in direct and indirect costs in 1988. Predictors of prolonged disability have generally been identified in selected clinical populations, but there have been few population-based studies using statewide registries from workers' compensation systems. This study uses a 1986 cohort of 8,628 Michigan workers with compensable back injuries followed to March 1, 1990. Cox proportional hazards analyses with nine categorical covariates identified factors predicting missed worktime for the first disability episode following the injury. The model distinguished factors affecting the acute (< or = 8 weeks) and chronic disability periods (> 8 weeks). The first disability episode following injury contains 69.6% of the missed worktime observed through follow-up. In the acute phase, which contributes 15.2% of first episode missed worktime, gender, age, number of dependents, industry (construction), occupation, and type of accident predict continued work disability. Marital status, weekly wage compensation rate, and establishment size do not. Beyond 8 weeks, age, establishment size and, to a lesser degree, wage compensation rate predict duration of work disability. Graphs show the predicted disability course for injured workers with specific covariate patterns. Future efforts to reduce missed worktime may require modifications in current clinical practice by patient age group and the development of new strategies to encourage small and medium-size employers to find ways to return their injured employees to work sooner. Recent federal statutes covering disabled workers will only partially correct the strong effect of employer establishment size.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia-Mateo, Carlos, E-mail: cgm@cenim.csic.es

    Since the major strengthening mechanisms in nanocrystalline bainitic steels arise from the exceptionally small size of the bainitic ferrite plates, accurate determination of this parameter is fundamental for quantitatively relating the microstructure to the mechanical properties. In this work, the thickness of the bainitic ferrite subunits obtained by different bainitic heat treatments was determined in two steels, with carbon contents of 0.3 and 0.7 wt.%, from SEM and TEM micrographs. As these measurements were made on 2D images taken from random sections, the method includes some stereological correction factors to obtain accurate information. Finally, the determined thicknesses of bainitic ferrite plates were compared with the crystallite size calculated from the analysis of X-ray diffraction peak broadening. Although in some cases the values obtained for crystallite size and plate thickness can be similar, this study confirms that they are indeed two different parameters. - Highlights: •Bainitic microstructure in a nanostructured and sub-micron steel •Bainitic ferrite plate thickness measured by SEM and TEM •Crystallite size determined by X-ray analysis.

  10. Integral image rendering procedure for aberration correction and size measurement.

    PubMed

    Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion

    2014-05-20

    The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.

  11. Size, shape and flow characterization of ground wood chip and ground wood pellet particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rezaei, Hamid; Lim, C. Jim; Lau, Anthony

    Size, shape and density of biomass particles influence their transportation, fluidization, and rates of drying and thermal decomposition. Pelleting wood particles increases the particle density and reduces the variability of physical properties among biomass particles. In this study, pine chips prepared for pulping and commercially produced pine pellets were ground in a hammer mill using grinder screens with 3.2, 6.3, 12.7 and 25.4 mm perforations. Grinding pellets consumed about 7 times less specific grinding energy than grinding chips to produce particles of the same size. Grinding pellets produced smaller particles with a narrower size distribution than grinding chips. Shape factors derived from digital image analysis showed that chip particles were rectangular and had aspect ratios about one-third those of pellet particles; pellet particles were more circular in shape. Mechanical sieving underestimated the actual particle size and did not represent the size of particles correctly; digital imaging is preferred instead. Angle of repose and compressibility tests characterized the flow properties of the ground particles. Pellet particles formed a less compacted bulk, had lower cohesion and flowed more easily in a pile of particles. In conclusion, particle shape affected the flow properties more than particle size did.

  12. Size, shape and flow characterization of ground wood chip and ground wood pellet particles

    DOE PAGES

    Rezaei, Hamid; Lim, C. Jim; Lau, Anthony; ...

    2016-07-11

    Size, shape and density of biomass particles influence their transportation, fluidization, and rates of drying and thermal decomposition. Pelleting wood particles increases the particle density and reduces the variability of physical properties among biomass particles. In this study, pine chips prepared for pulping and commercially produced pine pellets were ground in a hammer mill using grinder screens with 3.2, 6.3, 12.7 and 25.4 mm perforations. Pellets consumed about 7 times less specific grinding energy than chips to produce the same size of particles. Grinding pellets produced smaller particles with a narrower size distribution than grinding chips. Shape factors derived from digital image analysis showed that chip particles were rectangular and had aspect ratios about one third those of pellet particles, whereas pellet particles were more circular in shape. Mechanical sieving underestimated the actual particle size and did not represent it correctly; digital imaging is preferred instead. Angle of repose and compressibility tests characterized the flow properties of the ground particles. Pellet particles formed a less compacted bulk, had lower cohesion and flowed more easily in a pile of particles. In conclusion, particle shape affected the flow properties more than particle size.

  13. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stankovic, Uros; Herk, Marcel van; Ploeger, Lennert S.

    Purpose: A medical linear accelerator-mounted cone beam CT (CBCT) scanner provides useful soft-tissue contrast for image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on the soft-tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually increase the contrast of the scan but simultaneously increase the noise. Therefore, and considering the other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing over a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. Methods: The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat-panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI 4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck region and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction adapted for the different acquisition scenarios. The parameters used in the phantom study were t(cup) for nonuniformity and contrast-to-noise ratio (CNR) for soft-tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated the soft-tissue visibility and uniformity of scans with and without the grid. Results: The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved, compared to no corrections, on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both the soft-tissue visibility and the nonuniformity of scans when the grid was used. Conclusions: The evaluated fiber-interspaced grid improved the image quality of the CBCT system over a broad range of imaging conditions. Clinical scans show significant improvement in soft-tissue visibility and uniformity without the need to increase the imaging dose.
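
    Contrast-to-noise ratios of the kind reported above are typically computed from mean pixel values in a contrast insert and a background region, divided by the background noise. A minimal sketch under that common definition, CNR = |mean(ROI) − mean(background)| / std(background), is given below with simulated ROI statistics; the exact definition used by the authors may differ.

    ```python
    import numpy as np

    def cnr(roi, background):
        """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)."""
        return abs(np.mean(roi) - np.mean(background)) / np.std(background)

    rng = np.random.default_rng(0)
    # Hypothetical HU samples from a soft-tissue insert and the surrounding background,
    # with and without scatter mitigation (the noise levels are illustrative only).
    roi_no_grid, bg_no_grid = rng.normal(40.0, 25.0, 500), rng.normal(0.0, 25.0, 500)
    roi_grid, bg_grid = rng.normal(40.0, 12.0, 500), rng.normal(0.0, 12.0, 500)

    print(f"CNR without grid: {cnr(roi_no_grid, bg_no_grid):.2f}")
    print(f"CNR with grid:    {cnr(roi_grid, bg_grid):.2f}")
    ```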

  14. Improved image quality of cone beam CT scans for radiotherapy image guidance using fiber-interspaced antiscatter grid.

    PubMed

    Stankovic, Uros; van Herk, Marcel; Ploeger, Lennert S; Sonke, Jan-Jakob

    2014-06-01

    A medical linear accelerator-mounted cone beam CT (CBCT) scanner provides useful soft-tissue contrast for image guidance in radiotherapy. The presence of extensive scattered radiation has a negative effect on the soft-tissue visibility and uniformity of CBCT scans. Antiscatter grids (ASGs) are used in the field of diagnostic radiography to mitigate the scatter. They usually increase the contrast of the scan but simultaneously increase the noise. Therefore, and considering the other scatter mitigation mechanisms present in a CBCT scanner, the applicability of ASGs with aluminum interspacing over a wide range of imaging conditions has been inconclusive in previous studies. In recent years, grids using fiber interspacers have appeared, providing higher scatter rejection while maintaining reasonable transmission of primary radiation. The purpose of this study was to evaluate the impact of one such grid on CBCT image quality. The grid used (Philips Medical Systems) had a ratio of 21:1, a frequency of 36 lp/cm, and a nominal selectivity of 11.9. It was mounted on the kV flat-panel detector of an Elekta Synergy linear accelerator and tested in a phantom and a clinical study. Due to the flex of the linac and the presence of gridline artifacts, an angle-dependent gain correction algorithm was devised to mitigate the resulting artifacts. Scan reconstruction was performed using XVI 4.5 augmented with in-house developed image lag correction and Hounsfield unit calibration. To determine the necessary parameters for Hounsfield unit calibration and software scatter correction, the Catphan 600 (The Phantom Laboratory) phantom was used. Image quality parameters were evaluated using the CIRS CBCT Image Quality and Electron Density Phantom (CIRS) in two different geometries: one modeling the head and neck region and the other the pelvic region. Phantoms were acquired with and without the grid and reconstructed with and without software correction adapted for the different acquisition scenarios. The parameters used in the phantom study were t(cup) for nonuniformity and contrast-to-noise ratio (CNR) for soft-tissue visibility. Clinical scans were evaluated in an observer study in which four experienced radiotherapy technologists rated the soft-tissue visibility and uniformity of scans with and without the grid. The proposed angle-dependent gain correction algorithm suppressed the visible ring artifacts. The grid had a beneficial impact on nonuniformity, contrast-to-noise ratio, and Hounsfield unit accuracy for both scanning geometries. The nonuniformity was reduced by 90% for the head-sized object and 91% for the pelvic-sized object. CNR improved, compared to no corrections, on average by a factor of 2.8 for the head-sized object and 2.2 for the pelvic-sized phantom. The grid outperformed software correction alone, but adding software correction to the grid was overall the best strategy. In the observer study, a significant improvement was found in both the soft-tissue visibility and the nonuniformity of scans when the grid was used. The evaluated fiber-interspaced grid improved the image quality of the CBCT system over a broad range of imaging conditions. Clinical scans show significant improvement in soft-tissue visibility and uniformity without the need to increase the imaging dose.

  15. Relationships between media use, body fatness and physical activity in children and youth: a meta-analysis.

    PubMed

    Marshall, S J; Biddle, S J H; Gorely, T; Cameron, N; Murdey, I

    2004-10-01

    To review the empirical evidence of associations between television (TV) viewing, video/computer game use and (a) body fatness, and (b) physical activity. Meta-analysis. Published English-language studies were located from computerized literature searches, bibliographies of primary studies and narrative reviews, and manual searches of personal archives. Included studies presented at least one empirical association between TV viewing, video/computer game use and body fatness or physical activity among samples of children and youth aged 3-18 y. The main outcome measure was the mean sample-weighted corrected effect size (Pearson r). Based on data from 52 independent samples, the mean sample-weighted effect size between TV viewing and body fatness was 0.066 (95% CI=0.056-0.078; total N=44,707). The sample-weighted fully corrected effect size was 0.084. Based on data from six independent samples, the mean sample-weighted effect size between video/computer game use and body fatness was 0.070 (95% CI=-0.048 to 0.188; total N=1,722). The sample-weighted fully corrected effect size was 0.128. Based on data from 39 independent samples, the mean sample-weighted effect size between TV viewing and physical activity was -0.096 (95% CI=-0.080 to -0.112; total N=141,505). The sample-weighted fully corrected effect size was -0.129. Based on data from 10 independent samples, the mean sample-weighted effect size between video/computer game use and physical activity was -0.104 (95% CI=-0.080 to -0.128; total N=119,942). The sample-weighted fully corrected effect size was -0.141. A statistically significant relationship exists between TV viewing and body fatness among children and youth, although it is likely to be too small to be of substantial clinical relevance. The relationship between TV viewing and physical activity is small but negative. The strength of these relationships remains virtually unchanged even after correcting for common sources of bias known to impact study outcomes. While the total amount of time per day engaged in sedentary behavior is inevitably prohibitive of physical activity, media-based inactivity may be unfairly implicated in recent epidemiologic trends of overweight and obesity among children and youth. Relationships between sedentary behavior and health are unlikely to be explained using single markers of inactivity, such as TV viewing or video/computer game use.
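
    The sample-weighted effect sizes reported above follow the usual meta-analytic recipe of weighting each study's correlation by its sample size. A minimal sketch of that computation with hypothetical study data is shown below; it does not reproduce the artifact corrections behind the "fully corrected" values.

    ```python
    import numpy as np

    # Hypothetical per-study Pearson correlations and sample sizes.
    r = np.array([0.05, 0.09, 0.03, 0.12, 0.07])
    n = np.array([1200, 450, 3100, 800, 2600])

    # Sample-weighted mean effect size: each study's r weighted by its sample size.
    r_bar = np.sum(n * r) / np.sum(n)
    print(f"sample-weighted mean r = {r_bar:.3f} (total N = {n.sum()})")
    ```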

  16. The effects of age, viewing distance, display type, font type, colour contrast and number of syllables on the legibility of Korean characters.

    PubMed

    Kong, Yong-Ku; Lee, Inseok; Jung, Myung-Chul; Song, Young-Woong

    2011-05-01

    This study evaluated the effects of age (20s and 60s), viewing distance (50 cm, 200 cm), display type (paper, monitor), font type (Gothic, Ming), colour contrast (black letters on white background, white letters on black background) and number of syllables (one, two) on the legibility of Korean characters, using four legibility measures (minimum letter size for 100% correctness, maximum letter size for 0% correctness, minimum letter size for the least discomfort and maximum letter size for the most discomfort). Ten subjects in each age group read the four letters presented on a slide (letter size varied from 80 pt to 2 pt). Subjects also subjectively rated the reading discomfort of the letters on a 4-point scale (1 = no discomfort, 4 = most discomfort). According to the ANOVA procedure, age, viewing distance and font type significantly affected the four dependent variables (p < 0.05), while the main effect of colour contrast was not statistically significant for any measure. For the two correctness measures, two-syllable letters were legible at smaller letter sizes than one-syllable letters. The younger group could read letter sizes about two times smaller than the older group could, and at a viewing distance of 50 cm legible letters were about three times smaller than at a 200 cm viewing distance. The Gothic font yielded smaller legible letter sizes than the Ming font, and monitors yielded smaller sizes than paper for the correctness measures and for the maximum letter size for the most discomfort. Comparing the results for correctness and discomfort, people generally preferred letter sizes larger than those they could read. The findings of this study may provide basic information for setting a global standard of letter size or font type to improve the legibility of characters written in Korean. STATEMENT OF RELEVANCE: Results obtained in this study will provide basic information and guidelines for setting standards of letter size and font type to improve the legibility of characters written in Korean. Also, the results might offer useful information for people who are working on the design of visual displays.

  17. Availability and capacity of substance abuse programs in correctional settings: A classification and regression tree analysis.

    PubMed

    Taxman, Faye S; Kitsantas, Panagiota

    2009-08-01

    OBJECTIVE TO BE ADDRESSED: The purpose of this study was to investigate the structural and organizational factors that contribute to the availability and increased capacity for substance abuse treatment programs in correctional settings. We used classification and regression tree statistical procedures to identify how multi-level data can explain the variability in availability and capacity of substance abuse treatment programs in jails and probation/parole offices. The data for this study combined the National Criminal Justice Treatment Practices (NCJTP) Survey and the 2000 Census. The NCJTP survey was a nationally representative sample of correctional administrators for jails and probation/parole agencies. The sample size included 295 substance abuse treatment programs that were classified according to the intensity of their services: high, medium, and low. The independent variables included jurisdictional-level structural variables, attributes of the correctional administrators, and program and service delivery characteristics of the correctional agency. The two most important variables in predicting the availability of all three types of services were stronger working relationships with other organizations and the adoption of a standardized substance abuse screening tool by correctional agencies. For high and medium intensive programs, the capacity increased when an organizational learning strategy was used by administrators and the organization used a substance abuse screening tool. Implications on advancing treatment practices in correctional settings are discussed, including further work to test theories on how to better understand access to intensive treatment services. This study presents the first phase of understanding capacity-related issues regarding treatment programs offered in correctional settings.

  18. Mean grain size detection of DP590 steel plate using a corrected method with electromagnetic acoustic resonance.

    PubMed

    Wang, Bin; Wang, Xiaokai; Hua, Lin; Li, Juanjuan; Xiang, Qing

    2017-04-01

    Electromagnetic acoustic resonance (EMAR) is a useful method for determining the mean grain size of metal materials with high precision. The basic ultrasonic attenuation theory used for mean grain size detection with EMAR comes from single-phase theory. In this paper, EMAR testing was carried out based on the ultrasonic attenuation theory. The detection results show that a double-peak phenomenon occurs in the EMAR testing of DP590 steel plate. The dual-phase structure of DP590 steel is the cause of this double-peak phenomenon. To address it, a corrected EMAR method was put forward to detect the mean grain size of dual-phase steel. Compared with the traditional attenuation evaluation method and the uncorrected EMAR method, the corrected EMAR method shows great effectiveness and superiority for the mean grain size detection of DP590 steel plate. Copyright © 2016. Published by Elsevier B.V.

  19. SHOEBOX Modulates Root Meristem Size in Rice through Dose-Dependent Effects of Gibberellins on Cell Elongation and Proliferation

    PubMed Central

    Li, Jintao; Zhao, Yu; Chu, Huangwei; Wang, Likai; Fu, Yanru; Liu, Ping; Upadhyaya, Narayana; Chen, Chunli; Mou, Tongmin; Feng, Yuqi; Kumar, Prakash; Xu, Jian

    2015-01-01

    Little is known about how the size of meristem cells is regulated and whether it participates in the control of meristem size in plants. Here, we report our findings on shoebox (shb), a mild gibberellin (GA) deficient rice mutant that has a short root meristem size. Quantitative analysis of cortical cell length and number indicates that shb has shorter, rather than fewer, cells in the root meristem until around the fifth day after sowing, from which the number of cortical cells is also reduced. These defects can be either corrected by exogenous application of bioactive GA or induced in wild-type roots by a dose-dependent inhibitory effect of paclobutrazol on GA biosynthesis, suggesting that GA deficiency is the primary cause of shb mutant phenotypes. SHB encodes an AP2/ERF transcription factor that directly activates transcription of the GA biosynthesis gene KS1. Thus, root meristem size in rice is modulated by SHB-mediated GA biosynthesis that regulates the elongation and proliferation of meristem cells in a developmental stage-specific manner. PMID:26275148

  20. SHOEBOX Modulates Root Meristem Size in Rice through Dose-Dependent Effects of Gibberellins on Cell Elongation and Proliferation.

    PubMed

    Li, Jintao; Zhao, Yu; Chu, Huangwei; Wang, Likai; Fu, Yanru; Liu, Ping; Upadhyaya, Narayana; Chen, Chunli; Mou, Tongmin; Feng, Yuqi; Kumar, Prakash; Xu, Jian

    2015-08-01

    Little is known about how the size of meristem cells is regulated and whether it participates in the control of meristem size in plants. Here, we report our findings on shoebox (shb), a mild gibberellin (GA) deficient rice mutant that has a short root meristem size. Quantitative analysis of cortical cell length and number indicates that shb has shorter, rather than fewer, cells in the root meristem until around the fifth day after sowing, from which the number of cortical cells is also reduced. These defects can be either corrected by exogenous application of bioactive GA or induced in wild-type roots by a dose-dependent inhibitory effect of paclobutrazol on GA biosynthesis, suggesting that GA deficiency is the primary cause of shb mutant phenotypes. SHB encodes an AP2/ERF transcription factor that directly activates transcription of the GA biosynthesis gene KS1. Thus, root meristem size in rice is modulated by SHB-mediated GA biosynthesis that regulates the elongation and proliferation of meristem cells in a developmental stage-specific manner.

  1. No rationale for 1 variable per 10 events criterion for binary logistic regression analysis.

    PubMed

    van Smeden, Maarten; de Groot, Joris A H; Moons, Karel G M; Collins, Gary S; Altman, Douglas G; Eijkemans, Marinus J C; Reitsma, Johannes B

    2016-11-24

    Ten events per variable (EPV) is a widely advocated minimal criterion for sample size considerations in logistic regression analysis. Of three previous simulation studies that examined this minimal EPV criterion, only one supports the use of a minimum of 10 EPV. In this paper, we examine the reasons for the substantial differences between these extensive simulation studies. The current study uses Monte Carlo simulations to evaluate small sample bias, coverage of confidence intervals and mean square error of logit coefficients. Logistic regression models fitted by maximum likelihood and a modified estimation procedure, known as Firth's correction, are compared. The results show that, besides EPV, the problems associated with low EPV depend on other factors such as the total sample size. It is also demonstrated that simulation results can be dominated by even a few simulated data sets for which the prediction of the outcome by the covariates is perfect ('separation'). We reveal that different approaches for identifying and handling separation lead to substantially different simulation results. We further show that Firth's correction can be used to improve the accuracy of regression coefficients and alleviate the problems associated with separation. The current evidence supporting EPV rules for binary logistic regression is weak. Given our findings, there is an urgent need for new research to provide guidance for supporting sample size considerations for binary logistic regression analysis.
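
    Events per variable is simply the number of outcome events (the count of the rarer outcome class) divided by the number of candidate predictor parameters. A minimal sketch of that check with hypothetical data is shown below; the threshold of 10 is the conventional rule of thumb questioned in the paper, not a recommendation.

    ```python
    import numpy as np

    def events_per_variable(y, n_candidate_parameters):
        """EPV = count of the rarer outcome class / number of candidate predictor parameters."""
        y = np.asarray(y)
        events = min(int(y.sum()), int(len(y) - y.sum()))
        return events / n_candidate_parameters

    # Hypothetical dataset: 500 subjects, 60 events, 8 candidate predictors.
    rng = np.random.default_rng(1)
    y = rng.permutation(np.r_[np.ones(60), np.zeros(440)])
    epv = events_per_variable(y, n_candidate_parameters=8)
    print(f"EPV = {epv:.1f} (conventional rule of thumb: >= 10)")
    ```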

  2. Detectability in Audio-Visual Surveys of Tropical Rainforest Birds: The Influence of Species, Weather and Habitat Characteristics.

    PubMed

    Anderson, Alexander S; Marques, Tiago A; Shoo, Luke P; Williams, Stephen E

    2015-01-01

    Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species.
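
    Effective strip width as used above converts raw counts into density estimates: with a fitted detection function g(x), ESW is its integral over perpendicular distance, and density is n / (2 · L · ESW) for a transect of length L. The sketch below assumes a half-normal detection function with a hypothetical scale parameter and truncation distance; it illustrates the calculation rather than the authors' fitted model.

    ```python
    import math

    # Half-normal detection function g(x) = exp(-x^2 / (2 * sigma^2)).
    sigma_m = 18.0        # hypothetical scale parameter (metres)
    truncation_m = 60.0   # hypothetical truncation distance (metres)

    # ESW = integral of g(x) from 0 to the truncation distance (analytic for the half-normal).
    esw_m = sigma_m * math.sqrt(math.pi / 2.0) * math.erf(truncation_m / (sigma_m * math.sqrt(2.0)))

    # Density estimate for a line transect: D = n / (2 * L * ESW).
    n_detections = 37
    transect_length_m = 220.0
    density_per_m2 = n_detections / (2.0 * transect_length_m * esw_m)

    print(f"ESW = {esw_m:.1f} m")
    print(f"density = {density_per_m2 * 1e4:.2f} birds per hectare")
    ```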

  3. Detectability in Audio-Visual Surveys of Tropical Rainforest Birds: The Influence of Species, Weather and Habitat Characteristics

    PubMed Central

    Anderson, Alexander S.; Marques, Tiago A.; Shoo, Luke P.; Williams, Stephen E.

    2015-01-01

    Indices of relative abundance do not control for variation in detectability, which can bias density estimates such that ecological processes are difficult to infer. Distance sampling methods can be used to correct for detectability, but in rainforest, where dense vegetation and diverse assemblages complicate sampling, information is lacking about factors affecting their application. Rare species present an additional challenge, as data may be too sparse to fit detection functions. We present analyses of distance sampling data collected for a diverse tropical rainforest bird assemblage across broad elevational and latitudinal gradients in North Queensland, Australia. Using audio and visual detections, we assessed the influence of various factors on Effective Strip Width (ESW), an intuitively useful parameter, since it can be used to calculate an estimate of density from count data. Body size and species exerted the most important influence on ESW, with larger species detectable over greater distances than smaller species. Secondarily, wet weather and high shrub density decreased ESW for most species. ESW for several species also differed between summer and winter, possibly due to seasonal differences in calling behavior. Distance sampling proved logistically intensive in these environments, but large differences in ESW between species confirmed the need to correct for detection probability to obtain accurate density estimates. Our results suggest an evidence-based approach to controlling for factors influencing detectability, and avenues for further work including modeling detectability as a function of species characteristics such as body size and call characteristics. Such models may be useful in developing a calibration for non-distance sampling data and for estimating detectability of rare species. PMID:26110433

  4. Missing female fetus: a micro level investigation of sex determination in a periurban area of Northern India.

    PubMed

    Ghosh, Rohini; Sharma, Arun Kumar

    2012-01-01

    A micro-level investigation of 983 pregnant women (aged 15-49 years) regarding sex determination and associated factors was carried out in a periurban region of Northern India. Among the women surveyed, 183 chose to use sex determination. The highest percentage of sex determination was among 30-39-year-old women, and general caste and family size were two risk factors associated with sex determination. Correcting imbalances in sex ratios at birth is a complex issue without easy answers, especially in patriarchal societies. Apart from raising awareness among decisionmakers, property rights in favor of women and strict vigilance and record of registration of ultrasound machines are necessary.

  5. Detection rates of geckos in visual surveys: Turning confounding variables into useful knowledge

    USGS Publications Warehouse

    Lardner, Bjorn; Rodda, Gordon H.; Yackel Adams, Amy A.; Savidge, Julie A.; Reed, Robert N.

    2016-01-01

    Transect surveys without some means of estimating detection probabilities generate population size indices prone to bias because survey conditions differ in time and space. Knowing what causes such bias can help guide the collection of relevant survey covariates, correct the survey data, anticipate situations where bias might be unacceptably large, and elucidate the ecology of target species. We used negative binomial regression to evaluate confounding variables for gecko (primarily Hemidactylus frenatus and Lepidodactylus lugubris) counts on 220-m-long transects surveyed at night, primarily for snakes, on 9,475 occasions. Searchers differed in gecko detection rates by up to a factor of six. The worst and best headlamps differed by a factor of at least two. Strong winds had a negative effect potentially as large as those of searchers or headlamps. More geckos were seen during wet weather conditions, but the effect size was small. Compared with a detection nadir during waxing gibbous (nearly full) moons above the horizon, we saw 28% more geckos during waning crescent moons below the horizon. A sine function suggested that we saw 24% more geckos at the end of the wet season than at the end of the dry season. Fluctuations on a longer timescale also were verified. Disturbingly, corrected data exhibited strong short-term fluctuations that covariates apparently failed to capture. Although some biases can be addressed with measured covariates, others will be difficult to eliminate as a significant source of error in long-term monitoring programs.
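
    Count models of this kind are commonly fitted as negative binomial regressions with survey covariates (searcher, wind, moonlight, and so on) as predictors. A minimal sketch using statsmodels on simulated data is shown below; the covariates, coefficients, and dispersion value are placeholders, not the study's actual model.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 300

    # Simulated survey covariates: searcher identity, wind speed, moonlight fraction.
    searcher = rng.integers(0, 2, n)      # 0/1 indicator for one of two searchers
    wind = rng.uniform(0.0, 10.0, n)      # wind speed (m/s)
    moon = rng.uniform(0.0, 1.0, n)       # fraction of moon illuminated

    # Simulated gecko counts generated with a log link.
    mu = np.exp(2.0 + 0.5 * searcher - 0.08 * wind - 0.2 * moon)
    counts = rng.negative_binomial(n=5, p=5 / (5 + mu))

    # Negative binomial GLM of counts on the survey covariates.
    X = sm.add_constant(np.column_stack([searcher, wind, moon]))
    result = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
    print(result.summary())
    ```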

  6. Integrability in AdS/CFT correspondence: quasi-classical analysis

    NASA Astrophysics Data System (ADS)

    Gromov, Nikolay

    2009-06-01

    In this review, we consider a quasi-classical method applicable to integrable field theories which is based on a classical integrable structure, the algebraic curve. We apply it to the Green-Schwarz superstring on the AdS5 × S5 space. We show that the proposed method reproduces perfectly the earlier results obtained by expanding the string action for some simple classical solutions. The construction is explicitly covariant and is not based on a particular parameterization of the fields and as a result is free from ambiguities. On the other hand, the finite size corrections in some particularly important scaling limit are studied in this paper for a system of Bethe equations. For the general superalgebra su(N|K), the result for the 1/L corrections is obtained. We find an integral equation which describes these corrections in a closed form. As an application, we consider the conjectured Beisert-Staudacher (BS) equations with the Hernandez-Lopez dressing factor, where the finite size corrections should reproduce quasi-classical results around a general classical solution. Indeed, we show that our integral equation can be interpreted as a sum of all physical fluctuations and thus prove the complete one-loop consistency of the BS equations. We demonstrate that any local conserved charge (including the AdS energy) computed from the BS equations is indeed given at one loop by the sum of the charges of fluctuations with an exponential precision for large S5 angular momentum of the string. As an independent result, the BS equations in an su(2) sub-sector were derived from Zamolodchikov's S-matrix. The paper is based on the author's PhD thesis.

  7. Recovery and radiation corrections and time constants of several sizes of shielded and unshielded thermocouple probes for measuring gas temperature

    NASA Technical Reports Server (NTRS)

    Glawe, G. E.; Holanda, R.; Krause, L. N.

    1978-01-01

    Performance characteristics were experimentally determined for several sizes of a shielded and an unshielded thermocouple probe design. The probes are of swaged construction and were made of type K wire with a stainless steel sheath and shield and MgO insulation. The wire sizes ranged from 0.03- to 1.02-mm diameter for the unshielded design and from 0.16- to 0.81-mm diameter for the shielded design. The probes were tested through a Mach number range of 0.2 to 0.9, through a temperature range of room ambient to 1420 K, and through a total-pressure range of 0.03 to 2.2 MPa (0.3 to 22 atm). Tables and graphs are presented to aid in selecting a particular type and size. Recovery corrections, radiation corrections, and time constants were determined.

  8. Thermalization Time Bounds for Pauli Stabilizer Hamiltonians

    NASA Astrophysics Data System (ADS)

    Temme, Kristan

    2017-03-01

    We prove a general lower bound to the spectral gap of the Davies generator for Hamiltonians that can be written as the sum of commuting Pauli operators. These Hamiltonians, defined on the Hilbert space of N qubits, serve as one of the most frequently considered candidates for a self-correcting quantum memory. A spectral gap bound on the Davies generator establishes an upper limit on the lifetime of such a quantum memory and can be used to estimate the time until the system relaxes to thermal equilibrium when brought into contact with a thermal heat bath. The bound can be shown to behave as λ ≥ O(N⁻¹ exp(−2β ε̄)), where ε̄ is a generalization of the well-known energy barrier for logical operators. Particularly in the low-temperature regime, we expect this bound to provide the correct asymptotic scaling of the gap with the system size up to a factor of N⁻¹. Furthermore, we discuss conditions and provide scenarios where this factor can be removed and a constant lower bound can be proven.

  9. Bias Corrections for Standardized Effect Size Estimates Used with Single-Subject Experimental Designs

    ERIC Educational Resources Information Center

    Ugille, Maaike; Moeyaert, Mariola; Beretvas, S. Natasha; Ferron, John M.; Van den Noortgate, Wim

    2014-01-01

    A multilevel meta-analysis can combine the results of several single-subject experimental design studies. However, the estimated effects are biased if the effect sizes are standardized and the number of measurement occasions is small. In this study, the authors investigated 4 approaches to correct for this bias. First, the standardized effect…

  10. Correcting for the effects of pupil discontinuities with the ACAD method

    NASA Astrophysics Data System (ADS)

    Mazoyer, Johan; Pueyo, Laurent; N'Diaye, Mamadou; Mawet, Dimitri; Soummer, Rémi; Norman, Colin

    2016-07-01

    The current generation of ground-based coronagraphic instruments uses deformable mirrors to correct for phase errors and to improve contrast levels at small angular separations. Building on these techniques, several space- and ground-based instruments currently under development use two deformable mirrors to correct for both phase and amplitude errors. However, as wavefront control techniques improve, more complex telescope pupil geometries (support structures, segmentation) will soon be a limiting factor for these next-generation coronagraphic instruments. The technique presented in this proceeding, the Active Correction of Aperture Discontinuities method, takes advantage of the fact that most future coronagraphic instruments will include two deformable mirrors, and proposes to find the mirror shapes and actuator movements that correct for the effects introduced by these complex pupil geometries. For any coronagraph previously designed for continuous apertures, this technique allows similar contrast performance to be obtained with a complex aperture (with segmentation and secondary mirror support structures), with high throughput and with flexibility to adapt to a changing pupil geometry (e.g., in case of segment failure or maintenance of the segments). We present the results of the parametric analysis performed on the WFIRST pupil, for which we obtained high contrast levels with several deformable mirror setups (mirror size, separation between them), coronagraphs (vortex charge 2, vortex charge 4, APLC) and spectral bandwidths. However, because contrast levels and separation are not the only metrics that maximize the scientific return of an instrument, we also included in this study the influence of these deformable mirror shapes on the throughput of the instrument and the sensitivity to pointing jitter. Finally, we present results obtained on another potential space-based segmented telescope aperture. The main result of this proceeding is that we now obtain performance comparable to the coronagraphs previously designed for WFIRST. First results from the parametric analysis strongly suggest that the two-deformable-mirror setup (mirror size and the distance between them) has an important impact on the contrast and throughput performance of the final instrument.

  11. Determination of Residual Stress Distributions in Polycrystalline Alumina using Fluorescence Microscopy

    PubMed Central

    Michaels, Chris A.; Cook, Robert F.

    2016-01-01

    Maps of residual stress distributions arising from anisotropic thermal expansion effects in a polycrystalline alumina are generated using fluorescence microscopy. The shifts of both the R1 and R2 ruby fluorescence lines of Cr in alumina are used to create maps with sub-µm resolution of either the local mean and shear stresses or local crystallographic a- and c-stresses in the material, with approximately ± 1 MPa stress resolution. The use of single crystal control materials and explicit correction for temperature and composition effects on line shifts enabled determination of the absolute values and distributions of values of stresses. Temperature correction is shown to be critical in absolute stress determination. Experimental determinations of average stress parameters in the mapped structure are consistent with assumed equilibrium conditions and with integrated large-area measurements. Average crystallographic stresses of order hundreds of MPa are determined with characteristic distribution widths of tens of MPa. The stress distributions reflect contributions from individual clusters of stress in the structure; the cluster size is somewhat larger than the grain size. An example application of the use of stress maps is shown in the calculation of stress-intensity factors for fracture in the residual stress field. PMID:27563163

  12. Pixel-super-resolved lensfree holography using adaptive relaxation factor and positional error correction

    NASA Astrophysics Data System (ADS)

    Zhang, Jialin; Chen, Qian; Sun, Jiasong; Li, Jiaji; Zuo, Chao

    2018-01-01

    Lensfree holography provides a new way to effectively bypass the intrinsic trade-off between the spatial resolution and field-of-view (FOV) of conventional lens-based microscopes. Unfortunately, due to the limited sensor pixel size, unpredictable disturbance during image acquisition, and sub-optimum solutions to the phase retrieval problem, typical lensfree microscopes only produce compromised imaging quality in terms of lateral resolution and signal-to-noise ratio (SNR). In this paper, we propose an adaptive pixel-super-resolved lensfree imaging (APLI) method to address the pixel aliasing problem by Z-scanning only, without resorting to subpixel shifting or beam-angle manipulation. Furthermore, an automatic positional error correction algorithm and an adaptive relaxation strategy are introduced to enhance the robustness and SNR of reconstruction significantly. Based on APLI, we perform full-FOV reconstruction of a USAF resolution target across a wide imaging area of 29.85 mm² and achieve a half-pitch lateral resolution of 770 nm, surpassing the theoretical Nyquist-Shannon sampling resolution limit imposed by the sensor pixel size (1.67 μm) by a factor of 2.17. A full-FOV imaging result of a typical dicot root is also provided to demonstrate the method's promising potential applications in biological imaging.

  13. Characterization of a cable-free system based on p-type MOSFET detectors for "in vivo" entrance skin dose measurements in interventional radiology.

    PubMed

    Falco, Maria Daniela; D'Andrea, Marco; Strigari, Lidia; D'Alessio, Daniela; Quagliani, Francesco; Santoni, Riccardo; Bosco, Alessia Lo

    2012-08-01

    During radiological interventional procedures (RIP) the skin of a patient under examination may undergo a prolonged x-ray exposure, receiving a dose as high as 5 Gy in a single session. This paper describes the use of the OneDose(TM) cable-free system based on p-type MOSFET detectors to determine the entrance skin dose (ESD) at selected points during RIP. First, some dosimetric characteristics of the detector, such as reproducibility, linearity, and fading, were investigated using a C-arm as the radiation source. The reference setting (RS) was: 80 kV energy, 40 cm × 40 cm field of view (FOV), current-time product of 50 mAs and source to skin distance (SSD) of 50 cm. A calibrated PMX III solid state detector was used as the reference detector, and Gafchromic(®) films were used as an independent dosimetric system to test the entire procedure. A calibration factor for the RS and correction factors as functions of tube voltage and FOV size were determined. Reproducibility ranged from 4% at low doses (around 10 cGy as measured by the reference detector) to about 1% for high doses (around 2 Gy). The system response was found to be linear with respect to both dose measured with the PMX III and tube voltage. The fading test showed that the maximum deviation from the optimal reading conditions (3 min after a single irradiation) was 9.1%, corresponding to four irradiations in one hour read 3 min after the last exposure. The calibration factor in the RS showed that the system response in the kV energy range is about four times larger than in the MV energy range. Fifth-order and fourth-order polynomial functions were found to provide correction factors for tube voltage and FOV size, respectively, in measurement settings different from the RS. ESDs measured with the system after applying the proper correction factors agreed within one standard deviation (SD) with the corresponding ESDs measured with the reference detector. The ESDs measured with Gafchromic(®) films were likewise in agreement within one SD with the ESDs measured using the OneDose(TM) system. The global uncertainty associated with the OneDose(TM) system, established in our experiments, ranged from 7% to 10%, depending on the duration of the RIP, due to fading. These values are much lower than the uncertainty commonly accepted for general diagnostic practices (20%) and about the same size as the uncertainty recommended for practices with a high risk of deterministic side effects (7%). The OneDose(TM) system has shown a high sensitivity in the kV energy range and has been found capable of measuring the entrance skin dose in RIP.
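
    In a workflow like the one described, the raw detector reading is converted to entrance skin dose by applying the reference-setting calibration factor together with multiplicative correction factors evaluated from fitted curves at the actual tube voltage and FOV size. The sketch below illustrates that chain with entirely hypothetical calibration data and low-order fits; the real coefficients, and the fifth/fourth polynomial orders, are those determined in the study.

    ```python
    import numpy as np

    # Hypothetical measured correction factors relative to the 80 kV, 40 cm x 40 cm
    # reference setting (where the factor is 1 by definition).
    kv_points = np.array([50.0, 60.0, 70.0, 80.0, 90.0, 100.0, 110.0])
    cf_kv = np.array([1.18, 1.11, 1.05, 1.00, 0.96, 0.93, 0.91])
    fov_points = np.array([10.0, 20.0, 30.0, 40.0, 48.0])
    cf_fov = np.array([0.92, 0.95, 0.98, 1.00, 1.02])

    # Fit polynomial correction-factor curves (orders chosen for this toy data only).
    kv_poly = np.polyfit(kv_points, cf_kv, deg=3)
    fov_poly = np.polyfit(fov_points, cf_fov, deg=2)

    CF_REFERENCE = 0.52  # hypothetical calibration factor at the reference setting (cGy per reading unit)

    def entrance_skin_dose(raw_reading, kv, fov_cm):
        """ESD = raw reading * reference calibration factor * f(kV) * f(FOV)."""
        return raw_reading * CF_REFERENCE * np.polyval(kv_poly, kv) * np.polyval(fov_poly, fov_cm)

    print(f"ESD = {entrance_skin_dose(raw_reading=250.0, kv=70.0, fov_cm=30.0):.1f} cGy")
    ```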

  14. Finding Mars-Sized Planets in Inner Orbits of Other Stars by Photometry

    NASA Technical Reports Server (NTRS)

    Borucki, W.; Cullers, K.; Dunham, E.; Koch, D.; Mena-Werth, J.; Cuzzi, Jeffrey N. (Technical Monitor)

    1995-01-01

    High precision photometry from a spaceborne telescope has the potential of discovering sub-Earth-sized inner planets. Model calculations by Wetherill indicate that Mars-sized planets can be expected to form throughout the range of orbits from that of Mercury to Mars. While a transit of an Earth-sized planet causes a 0.0084% decrease in brightness from a solar-like star, a transit of a planet as small as Mars causes a flux decrease of only 0.0023%. Stellar variability will be the limiting factor for transit measurements. Recent analysis of solar variability from the SOLSTICE experiment shows that much of the variability is in the UV at <400 nm. Combining this result with the total flux variability measured by the ACRIM-1 photometer implies that the Sun has relative amplitude variations of about 0.0007% in the 17-69 µHz bandpass and is presumably typical for solar-like stars. Tests were conducted at Lick Observatory to determine the photometric precision of CCD detectors in the 17-69 µHz bandpass. With frame-by-frame corrections of the image centroids, it was found that a precision of 0.001% could be readily achieved, corresponding to a signal-to-noise ratio of 1.4, provided the telescope aperture was sufficient to keep the statistical noise below 0.0006%. With 24 transits a planet as small as Mars should be reliably detectable. If Wetherill's models are correct in postulating that Mars-like planets are present in Mercury-like orbits, then a six-year search should be able to find them.

  15. Does filler database size influence identification accuracy?

    PubMed

    Bergold, Amanda N; Heaton, Paul

    2018-06-01

    Police departments increasingly use large photo databases to select lineup fillers using facial recognition software, but this technological shift's implications have been largely unexplored in eyewitness research. Database use, particularly if coupled with facial matching software, could enable lineup constructors to increase filler-suspect similarity and thus enhance eyewitness accuracy (Fitzgerald, Oriet, Price, & Charman, 2013). However, with a large pool of potential fillers, such technologies might theoretically produce lineup fillers too similar to the suspect (Fitzgerald, Oriet, & Price, 2015; Luus & Wells, 1991; Wells, Rydell, & Seelau, 1993). This research proposes a new factor-filler database size-as a lineup feature affecting eyewitness accuracy. In a facial recognition experiment, we select lineup fillers in a legally realistic manner using facial matching software applied to filler databases of 5,000, 25,000, and 125,000 photos, and find that larger databases are associated with a higher objective similarity rating between suspects and fillers and lower overall identification accuracy. In target present lineups, witnesses viewing lineups created from the larger databases were less likely to make correct identifications and more likely to select known innocent fillers. When the target was absent, database size was associated with a lower rate of correct rejections and a higher rate of filler identifications. Higher algorithmic similarity ratings were also associated with decreases in eyewitness identification accuracy. The results suggest that using facial matching software to select fillers from large photograph databases may reduce identification accuracy, and provides support for filler database size as a meaningful system variable. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. On geological interpretations of crystal size distributions: Constant vs. proportionate growth

    USGS Publications Warehouse

    Eberl, D.D.; Kile, D.E.; Drits, V.A.

    2002-01-01

    Geological interpretations of crystal size distributions (CSDs) depend on understanding the crystal growth laws that generated the distributions. Most descriptions of crystal growth, including a population-balance modeling equation that is widely used in petrology, assume that crystal growth rates at any particular time are identical for all crystals, and, therefore, independent of crystal size. This type of growth under constant conditions can be modeled by adding a constant length to the diameter of each crystal for each time step. This growth equation is unlikely to be correct for most mineral systems because it neither generates nor maintains the shapes of lognormal CSDs, which are among the most common types of CSDs observed in rocks. In an alternative approach, size-dependent (proportionate) growth is modeled approximately by multiplying the size of each crystal by a factor, an operation that maintains CSD shape and variance, and which is in accord with calcite growth experiments. The latter growth law can be obtained during supply controlled growth using a modified version of the Law of Proportionate Effect (LPE), an equation that simulates the reaction path followed by a CSD shape as mean size increases.
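
    The two growth laws contrasted above are easy to compare numerically: constant growth adds a fixed length increment to every crystal's diameter per time step, while proportionate growth multiplies each diameter by a factor, as described in the abstract. The sketch below, with hypothetical parameters, shows that only the proportionate rule preserves the shape of an initially lognormal CSD (the spread and skewness of ln(size) stay put), whereas constant growth distorts it.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical initial crystal size distribution: lognormal diameters (micrometres).
    sizes = rng.lognormal(mean=1.0, sigma=0.4, size=50_000)
    constant = sizes.copy()
    proportionate = sizes.copy()

    for _ in range(50):
        constant += 0.05        # constant (size-independent) growth: same increment for all
        proportionate *= 1.01   # proportionate (size-dependent) growth: same factor for all

    def log_shape(d):
        """Spread and skewness of ln(size); a shape-preserving law keeps these roughly constant."""
        logs = np.log(d)
        skew = float(((logs - logs.mean()) ** 3).mean() / logs.std() ** 3)
        return float(logs.std()), skew

    for label, d in (("initial", sizes), ("constant", constant), ("proportionate", proportionate)):
        sd, skew = log_shape(d)
        print(f"{label:13s} ln-size std = {sd:.3f}, ln-size skewness = {skew:+.3f}")
    ```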

  17. Finite-size corrections in simulation of dipolar fluids

    NASA Astrophysics Data System (ADS)

    Belloni, Luc; Puibasset, Joël

    2017-12-01

    Monte Carlo simulations of dipolar fluids are performed at different numbers of particles N = 100-4000. For each size of the cubic cell, the non-spherically symmetric pair distribution function g(r,Ω) is accumulated in terms of projections gmnl(r) onto rotational invariants. The observed N dependence is in very good agreement with the theoretical predictions for the finite-size corrections of different origins: the explicit corrections due to the absence of fluctuations in the number of particles within the canonical simulation and the implicit corrections due to the coupling between the environment around a given particle and that around its images in the neighboring cells. The latter dominates in fluids of strong dipolar coupling characterized by low compressibility and high dielectric constant. The ability to clean with great precision the simulation data from these corrections combined with the use of very powerful anisotropic integral equation techniques means that exact correlation functions both in real and Fourier spaces, Kirkwood-Buff integrals, and bridge functions can be derived from box sizes as small as N ≈ 100, even with existing long-range tails. In the presence of dielectric discontinuity with the external medium surrounding the central box and its replica within the Ewald treatment of the Coulombic interactions, the 1/N dependence of the gmnl(r) is shown to disagree with the, yet well-accepted, prediction of the literature.

  18. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:
      88 dB(A)  Uncorrected average of readings
      −3 dB(A)  Distance correction factor
      +2 dB(A)  Ground surface correction factor
      87 dB(A)  Corrected reading ...
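
    The regulation's arithmetic is simply the uncorrected average plus the applicable correction factors; a minimal sketch of that bookkeeping, using the values from the example in this record, is shown below.

    ```python
    # Correction-factor bookkeeping from the 49 CFR 325.79 example.
    uncorrected_avg_dba = 88.0
    corrections_dba = {
        "distance": -3.0,        # microphone farther than the standard distance
        "ground surface": +2.0,  # measurement surface differs from the standard one
    }

    corrected_dba = uncorrected_avg_dba + sum(corrections_dba.values())
    print(f"corrected reading: {corrected_dba:.0f} dB(A)")  # 87 dB(A)
    ```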

  19. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:
      88 dB(A)  Uncorrected average of readings
      −3 dB(A)  Distance correction factor
      +2 dB(A)  Ground surface correction factor
      87 dB(A)  Corrected reading ...

  20. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... microphone location point and the microphone target point is 60 feet (18.3 m) and that the measurement area... vehicle would be 87 dB(A), calculated as follows:
      88 dB(A)  Uncorrected average of readings
      −3 dB(A)  Distance correction factor
      +2 dB(A)  Ground surface correction factor
      87 dB(A)  Corrected reading ...

  1. Chance-corrected classification for use in discriminant analysis: Ecological applications

    USGS Publications Warehouse

    Titus, K.; Mosher, J.A.; Williams, B.K.

    1984-01-01

    A method for evaluating the classification table from a discriminant analysis is described. The statistic, kappa, is useful to ecologists in that it removes the effects of chance. It is useful even with equal group sample sizes although the need for a chance-corrected measure of prediction becomes greater with more dissimilar group sample sizes. Examples are presented.
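
    Cohen's kappa, the usual chance-corrected classification statistic, compares the observed agreement of a classification table with the agreement expected by chance: κ = (p_o − p_e) / (1 − p_e). A minimal sketch on a hypothetical 2×2 discriminant-analysis classification table is shown below.

    ```python
    import numpy as np

    def cohens_kappa(table):
        """Chance-corrected agreement for a square classification (confusion) table."""
        table = np.asarray(table, dtype=float)
        total = table.sum()
        p_observed = np.trace(table) / total
        p_expected = np.sum(table.sum(axis=0) * table.sum(axis=1)) / total ** 2
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Hypothetical classification table (rows: actual group, columns: predicted group).
    table = [[42, 8],
             [11, 39]]
    print(f"kappa = {cohens_kappa(table):.3f}")
    ```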

  2. Nomograms Predicting Progression-Free Survival, Overall Survival, and Pelvic Recurrence in Locally Advanced Cervical Cancer Developed From an Analysis of Identifiable Prognostic Factors in Patients From NRG Oncology/Gynecologic Oncology Group Randomized Trials of Chemoradiotherapy

    PubMed Central

    Rose, Peter G.; Java, James; Whitney, Charles W.; Stehman, Frederick B.; Lanciano, Rachelle; Thomas, Gillian M.; DiSilvestro, Paul A.

    2015-01-01

    Purpose To evaluate the prognostic factors in locally advanced cervical cancer limited to the pelvis and develop nomograms for 2-year progression-free survival (PFS), 5-year overall survival (OS), and pelvic recurrence. Patients and Methods We retrospectively reviewed 2,042 patients with locally advanced cervical carcinoma enrolled onto Gynecologic Oncology Group clinical trials of concurrent cisplatin-based chemotherapy and radiotherapy. Nomograms for 2-year PFS, 5-year OS, and pelvic recurrence were created as visualizations of Cox proportional hazards regression models. The models were validated by bootstrap-corrected, relatively unbiased estimates of discrimination and calibration. Results Multivariable analysis identified prognostic factors including histology, race/ethnicity, performance status, tumor size, International Federation of Gynecology and Obstetrics stage, tumor grade, pelvic node status, and treatment with concurrent cisplatin-based chemotherapy. The PFS, OS, and pelvic recurrence nomograms had bootstrap-corrected concordance indices of 0.62, 0.64, and 0.73, respectively, and were well calibrated. Conclusion Prognostic factors were used to develop nomograms for 2-year PFS, 5-year OS, and pelvic recurrence for locally advanced cervical cancer clinically limited to the pelvis treated with concurrent cisplatin-based chemotherapy and radiotherapy. These nomograms can be used to better estimate individual and collective outcomes. PMID:25732170

  3. Refractive optics to compensate x-ray mirror shape-errors

    NASA Astrophysics Data System (ADS)

    Laundy, David; Sawhney, Kawal; Dhamgaye, Vishal; Pape, Ian

    2017-08-01

    Elliptically profiled mirrors operating at glancing angle are frequently used at X-ray synchrotron sources to focus X-rays into sub-micrometer sized spots. Mirror figure error, defined as the height difference function between the actual mirror surface and the ideal elliptical profile, causes a perturbation of the X-ray wavefront for X-rays reflecting from the mirror. This perturbation, when propagated to the focal plane, results in an increase in the size of the focused beam. At Diamond Light Source we are developing refractive optics that can be used to locally cancel out the wavefront distortion caused by figure error from nano-focusing elliptical mirrors. These optics could be used to correct existing optical components on synchrotron radiation beamlines in order to give focused X-ray beam sizes approaching the theoretical diffraction limit. We present our latest results showing measurement of the X-ray wavefront error after reflection from X-ray mirrors and the translation of the measured wavefront into a design for refractive optical elements that correct the X-ray wavefront. We show measurement of the focused beam with and without the corrective optics inserted, showing a reduction in the size of the focus resulting from the correction to the wavefront.

  4. Small Sample Performance of Bias-corrected Sandwich Estimators for Cluster-Randomized Trials with Binary Outcomes

    PubMed Central

    Li, Peng; Redden, David T.

    2014-01-01

    SUMMARY The sandwich estimator in generalized estimating equations (GEE) approach underestimates the true variance in small samples and consequently results in inflated type I error rates in hypothesis testing. This fact limits the application of the GEE in cluster-randomized trials (CRTs) with few clusters. Under various CRT scenarios with correlated binary outcomes, we evaluate the small sample properties of the GEE Wald tests using bias-corrected sandwich estimators. Our results suggest that the GEE Wald z test should be avoided in the analyses of CRTs with few clusters even when bias-corrected sandwich estimators are used. With t-distribution approximation, the Kauermann and Carroll (KC)-correction can keep the test size to nominal levels even when the number of clusters is as low as 10, and is robust to the moderate variation of the cluster sizes. However, in cases with large variations in cluster sizes, the Fay and Graubard (FG)-correction should be used instead. Furthermore, we derive a formula to calculate the power and minimum total number of clusters one needs using the t test and KC-correction for the CRTs with binary outcomes. The power levels as predicted by the proposed formula agree well with the empirical powers from the simulations. The proposed methods are illustrated using real CRT data. We conclude that with appropriate control of type I error rates under small sample sizes, we recommend the use of GEE approach in CRTs with binary outcomes due to fewer assumptions and robustness to the misspecification of the covariance structure. PMID:25345738

  5. Library based x-ray scatter correction for dedicated cone beam breast CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linxi; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: The image quality of dedicated cone beam breast CT (CBBCT) is limited by substantial scatter contamination, resulting in cupping artifacts and contrast loss in reconstructed images. Such effects obscure the visibility of soft-tissue lesions and calcifications, which hinders breast cancer detection and diagnosis. In this work, we propose a library-based software approach to suppress scatter on CBBCT images with high efficiency, accuracy, and reliability. Methods: The authors precompute a scatter library on simplified breast models with different sizes using the GEANT4-based Monte Carlo (MC) toolkit. The breast is approximated as a semiellipsoid with a homogeneous glandular/adipose tissue mixture. For scatter correction on real clinical data, the authors estimate the breast size from a first-pass breast CT reconstruction and then select the corresponding scatter distribution from the library. The selected scatter distribution from the simplified breast models is spatially translated to match the projection data from the clinical scan and is subtracted from the measured projection for effective scatter correction. The method performance was evaluated using 15 sets of patient data, with a wide range of breast sizes representing about 95% of the general population. Spatial nonuniformity (SNU) and contrast to signal deviation ratio (CDR) were used as metrics for evaluation. Results: Since the time-consuming MC simulation for library generation is precomputed, the authors’ method efficiently corrects for scatter with minimal processing time. Furthermore, the authors find that a scatter library on a simple breast model with only one input parameter, i.e., the breast diameter, sufficiently guarantees improvements in SNU and CDR. For the 15 clinical datasets, the authors’ method reduces the average SNU from 7.14% to 2.47% in coronal views and from 10.14% to 3.02% in sagittal views. On average, the CDR is improved by a factor of 1.49 in coronal views and 2.12 in sagittal views. Conclusions: The library-based scatter correction does not require an increase in radiation dose or hardware modifications, and it improves over the existing methods in implementation simplicity and computational efficiency. As demonstrated through patient studies, the authors’ approach is effective and stable, and is therefore clinically attractive for CBBCT imaging.

  6. Rotational Diffusion Depends on Box Size in Molecular Dynamics Simulations.

    PubMed

    Linke, Max; Köfinger, Jürgen; Hummer, Gerhard

    2018-06-07

    We show that the rotational dynamics of proteins and nucleic acids determined from molecular dynamics simulations under periodic boundary conditions suffer from significant finite-size effects. We remove the box-size dependence of the rotational diffusion coefficients by adding a hydrodynamic correction k_B T/(6ηV), with k_B Boltzmann's constant, T the absolute temperature, η the solvent shear viscosity, and V the box volume. We show that this correction accounts for the finite-size dependence of the rotational diffusion coefficients of horse-heart myoglobin and a B-DNA dodecamer in aqueous solution. The resulting hydrodynamic radii are in excellent agreement with experiment.
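
    A minimal sketch of applying the stated finite-size correction, D_corrected = D_PBC + k_B T/(6ηV); the temperature, viscosity, box volume, and uncorrected coefficient below are placeholder values, not results from the paper.

        # Placeholder inputs; SI units throughout.
        KB = 1.380649e-23                      # Boltzmann constant, J/K

        def correct_rotational_diffusion(d_pbc, temperature, viscosity, box_volume):
            """Add the hydrodynamic finite-size term k_B*T/(6*eta*V) to a rotational
            diffusion coefficient estimated under periodic boundary conditions."""
            return d_pbc + KB * temperature / (6.0 * viscosity * box_volume)

        # e.g. an (8 nm)^3 box of water-like solvent at 300 K (assumed numbers)
        d_corr = correct_rotational_diffusion(d_pbc=2.0e7, temperature=300.0,
                                              viscosity=0.85e-3, box_volume=(8.0e-9) ** 3)
        print(f"{d_corr:.3e} 1/s")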

  7. Worldwide orthopaedic research activity 2010-2014: Publication rates in the top 15 orthopaedic journals related to population size and gross domestic product.

    PubMed

    Hohmann, Erik; Glatt, Vaida; Tetsworth, Kevin

    2017-06-18

    To perform a bibliometric analysis of publication rates in orthopaedics in the top 15 orthopaedic journals. Based on their 2015 impact factor, the fifteen highest-ranked orthopaedic journals were used to establish the total number of publications between January 2010 and December 2014; cumulative impact factor points (IF) per country were determined and normalized to population size, GDP, and GDP per capita, with comparison to the median country output and to the global leader. Twenty-three thousand and twenty-one orthopaedic articles were published, with contributions from 66 countries. The United States had 8149 publications, followed by the United Kingdom (1644) and Japan (1467). The highest IF was achieved by the United States (24744), United Kingdom (4776), and Japan (4053). Normalized by population size, Switzerland led. Normalized by GDP, Croatia was the top achiever. Adjusted for GDP per capita, China, India, and the United States led for both publications and IF. Adjusting for population size and GDP, 28 countries achieved numbers of publications at least equivalent to the median academic output. Adjusted for GDP per capita, only China and India reached the number of publications to be considered equivalent to the current global leader, the United States. Five countries were responsible for 60% of the orthopaedic research output over this 5-year period. After correcting for GDP per capita, only 28 of 66 countries achieved a publication rate equivalent to the median country. The United States, United Kingdom, South Korea, Japan, and Germany were the top five countries for both publication totals and cumulative impact factor points.

  8. Region of Interest Correction Factors Improve Reliability of Diffusion Imaging Measures Within and Across Scanners and Field Strengths

    PubMed Central

    Venkatraman, Vijay K; Gonzalez, Christopher E.; Landman, Bennett; Goh, Joshua; Reiter, David A.; An, Yang; Resnick, Susan M.

    2017-01-01

    Diffusion tensor imaging (DTI) measures are commonly used as imaging markers to investigate individual differences in relation to behavioral and health-related characteristics. However, the ability to detect reliable associations in cross-sectional or longitudinal studies is limited by the reliability of the diffusion measures. Several studies have examined reliability of diffusion measures within (i.e. intra-site) and across (i.e. inter-site) scanners with mixed results. Our study compares the test-retest reliability of diffusion measures within and across scanners and field strengths in cognitively normal older adults with a follow-up interval less than 2.25 years. Intra-class correlation (ICC) and coefficient of variation (CoV) of fractional anisotropy (FA) and mean diffusivity (MD) were evaluated in sixteen white matter and twenty-six gray matter bilateral regions. The ICC for intra-site reliability (0.32 to 0.96 for FA and 0.18 to 0.95 for MD in white matter regions; 0.27 to 0.89 for MD and 0.03 to 0.79 for FA in gray matter regions) and inter-site reliability (0.28 to 0.95 for FA in white matter regions, 0.02 to 0.86 for MD in gray matter regions) with longer follow-up intervals were similar to earlier studies using shorter follow-up intervals. Reliability for comparisons across field strengths was lower than intra- and inter-site reliability. Within- and across-scanner comparisons showed that diffusion measures were more stable in larger white matter regions (> 1500 mm3). For gray matter regions, the MD measure showed stability in specific regions and was not dependent on region size. A linear correction factor estimated from cross-sectional or longitudinal data improved the reliability across field strengths. Our findings indicate that investigations relating diffusion measures to external variables must consider variable reliability across the distinct regions of interest and that correction factors can be used to improve consistency of measurement across field strengths. An important result of this work is that inter-scanner and field strength effects can be partially mitigated with linear correction factors specific to regions of interest. These data-driven linear correction techniques can be applied in cross-sectional or longitudinal studies. PMID:26146196
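
    A brief sketch of the two reliability metrics named above (one-way random-effects ICC and coefficient of variation) for paired test-retest measurements in one region of interest; the FA values are invented for illustration, and the paper's region-specific linear correction would be an additional regression step.

        import numpy as np

        def icc_oneway(test, retest):
            """One-way random-effects ICC(1,1) for paired test-retest measurements."""
            data = np.column_stack([test, retest])
            n, k = data.shape
            ms_between = k * np.sum((data.mean(axis=1) - data.mean()) ** 2) / (n - 1)
            ms_within = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

        def cov_percent(values):
            """Coefficient of variation, in percent."""
            return 100.0 * np.std(values, ddof=1) / np.mean(values)

        # illustrative ROI-mean FA values from two sessions (assumed data)
        fa_test   = np.array([0.45, 0.50, 0.55, 0.60, 0.52, 0.48])
        fa_retest = np.array([0.47, 0.49, 0.57, 0.58, 0.54, 0.47])
        print(icc_oneway(fa_test, fa_retest), cov_percent(np.concatenate([fa_test, fa_retest])))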

  9. Incidental physiological sliding hiatal hernia: a single center comparison study between CT with water enema and CT colonography.

    PubMed

    Revelli, Matteo; Furnari, Manuele; Bacigalupo, Lorenzo; Paparo, Francesco; Astengo, Davide; Savarino, Edoardo; Rollandi, Gian Andrea

    2015-08-01

    Hiatal hernia is a well-known factor impacting most of the mechanisms underlying gastroesophageal reflux, and is related to the risk of developing complications such as erosive esophagitis, Barrett's esophagus and, ultimately, esophageal adenocarcinoma. It is our firm opinion that erroneous reporting of hiatal hernia in CT exams performed with colonic distention may trigger a consecutive diagnostic process that is not only unnecessary, inducing unmotivated anxiety in the patient, but also expensive and time-consuming for both the patient and the healthcare system. The purposes of our study were to determine whether colonic distention at CT with water enema (CT-WE) and CT colonography (CTC) can induce small sliding hiatal hernias and to determine whether hiatal hernia size modifications could be considered significant for both the water and gas distention techniques. We retrospectively evaluated 400 consecutive patients, 200 undergoing CT-WE and 200 undergoing CTC, including 59 subjects who also underwent a routine abdominal CT evaluation at a different time, used as internal controls, while a separate group of 200 consecutive patients who underwent abdominal CT evaluation was used as an external control. Two abdominal radiologists assessed the CT exams for the presence of a sliding hiatal hernia, grading the size as small, moderate, or large; the internal control groups were directly compared with the corresponding CT-WE or CTC study looking for a change in hernia size. We used Student's t test applying a size-specific correction factor in order to account for the effect of colonic distention: these "corrected" values were then individually compared with the external control group. A sliding hiatal hernia was present in 51 % (102/200) of the CT-WE patients and in 48.5 % (97/200) of the CTC patients. Internal control CT of the 31 patients with a hernia at CT-WE showed resolution of the hernia in 58.1 % (18/31) of patients, including 76.5 % (13/17) of small and 45.5 % (5/11) of moderate hernias. Comparison CT of the 28 patients with a hiatal hernia at CTC showed the absence of the hernia in 57.1 % (16/28) of patients, including 68.8 % (11/16) of small and 50 % (5/10) of moderate hernias. The prevalence of sliding hiatal hernias in the external control group was 22 % (44/200), significantly lower than the CT-WE and CTC cohorts' prevalence of 51 % (p < 0.0001) and 48.5 % (p < 0.0001). After applying the correction factors for the CT-WE and the CTC groups, the estimated residual prevalences (16 and 18.5 %, respectively) were much closer to that of the external control patients (p = 0.160 for CT-WE and p = 0.455 for CTC). We believe that incidental findings at CT-WE and CTC should be considered according to the clinical background, and that small sliding hiatal hernias should not be reported in patients undergoing CT-WE or CTC whose symptoms are not related to reflux disease: when encountering these findings, accurate anamnesis and review of the medical history looking for GERD-related symptoms are essential, in order to direct these patients toward the correct diagnostic pathway, taking advantage of more appropriate techniques such as endoscopy or functional testing.

  10. Violators of a child passenger safety law.

    PubMed

    Agran, Phyllis F; Anderson, Craig L; Winn, Diane G

    2004-07-01

    Nonuse of child car safety seats (CSSs) remains significant; in 2000, 47% of occupant fatalities among children <5 years of age involved unrestrained children. Nonusers and part-time users of CSSs represent small proportions of the US population that have not responded to intervention efforts. Our study examined the factors contributing to nonuse or part-time use of CSSs and the effects of exposure to a class for violators of the California Child Passenger Safety (CPS) law. Focus groups (in English and Spanish) were conducted with individuals cited for violation of the law (N = 24). A thematic analysis of notes made by an observer, supplemented by audiotapes of the sessions, was conducted. In addition, a study of the effects of exposure to a violator class on knowledge and correct CSS use was conducted among violators. Certified CPS technicians conducted the classes and interviews. Subjects were parents cited as the driver with a child of 20 to 40 pounds, between 12 and 47 months of age. One hundred subjects recruited from the class were compared with 50 subjects who did not attend a class. Follow-up home interviews, with inspection of CSS use, were conducted 3 months after payment of the fine and completion of all court requirements. Fisher's exact test was used for 2 x 2 tables, because some of the tables had small cell sizes. The Mann-Whitney rank sum test was used for child restraint use, knowledge, and correct use scales, because some of these variables were not normally distributed. Linear and logistic regression models were used to examine the effects of several variables on these parameters. Factors influencing CSS nonuse were 1) lifestyle factors, 2) transportation and trip circumstances, 3) nonparent or nondriver issues, 4) parenting style, 5) child's behavior, and 6) perceived risks of nonuse. Violator subjects were mostly Hispanic and female, with incomes of less than 30,000 dollars per year. Those exposed to the class (citation and education group) scored 1 point higher on a knowledge test and had 1 more item correct on a CSS use instrument than did the group not exposed to the class (citation-only group). In the logistic model, the citation and education group scored higher on the 2 items that were corrected by the instructor during the class. Our focus group study of CPS law violators revealed that multiple complex factors influence consistent use of a CSS. The interplay of the particular vehicle, the trip circumstances, and family/parent/child factors affected the use of a CSS at the time of parent citation. Addressing transportation issues and parenting skills in CPS programs is necessary. Among parents who had been ticketed for not restraining their children, exposure to a violator class demonstrated some benefit, compared with a fine alone. Correct CSS use improved most on items corrected by the instructor. Violator classes that include "hands-on" training show promise for improving rates of correct use of CSSs.
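
    A short sketch of the two tests named in the analysis (Fisher's exact test for a 2 x 2 table with small cells, and the Mann-Whitney rank-sum test for non-normal score scales); the counts and scores below are invented for illustration, not the study's data.

        from scipy.stats import fisher_exact, mannwhitneyu

        # Illustrative 2 x 2 table: rows = citation+education vs citation-only,
        # columns = correct vs incorrect CSS use at follow-up (assumed counts).
        odds_ratio, p_table = fisher_exact([[62, 38], [25, 25]])

        # Illustrative knowledge scores for the two groups (assumed values).
        class_scores    = [7, 8, 8, 9, 10, 6, 9]
        no_class_scores = [6, 7, 7, 8, 8, 5, 7]
        u_stat, p_scores = mannwhitneyu(class_scores, no_class_scores, alternative="two-sided")

        print(p_table, p_scores)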

  11. Communications between intraretinal and subretinal space on optical coherence tomography of neurosensory retinal detachment in diabetic macular edema

    PubMed Central

    Gupta, Aditi; Raman, Rajiv; Mohana, KP; Kulothungan, Vaitheeswaran; Sharma, Tarun

    2013-01-01

    Background: The pathogenesis of development and progression of neurosensory retinal detachment (NSD) in diabetic macular edema (DME) is not yet fully understood. The purpose of this study is to describe the spectral domain optical coherence tomography (SD-OCT) morphological characteristics of NSD associated with DME in the form of outer retinal communications and to assess the correlation between the size of communications and various factors. Materials and Methods: This was an observational retrospective nonconsecutive case series in a tertiary care eye institute. We imaged NSD and outer retinal communications in 17 eyes of 16 patients having NSD associated with DME using SD-OCT. We manually measured the size of the outer openings of these communications and studied its correlation with various factors. Statistical analysis (correlation test) was performed using the Statistical Package for the Social Sciences (SPSS) software (version 14.0). The main outcome measures were correlation of the size of communications with dimensions of NSD, presence of subretinal hyper-reflective dots, and best-corrected visual acuity (BCVA). Results: The communications were seen as focal defects of the outer layers of the elevated retina. With increasing size of communication, there was an increase in the height of NSD (r = 0.701, P = 0.002), the horizontal diameter of NSD (r = 0.695, P = 0.002), and the number of hyper-reflective dots in the subretinal space (r = 0.729, P = 0.002). The logarithm of the minimum angle of resolution (logMAR) BCVA increased with increasing size of communications (r = 0.827, P < 0.0001). Conclusions: Outer retinal communications between the intra- and subretinal space were noted in eyes having NSD associated with DME. The size of communications correlated positively with the size of NSD and the number of hyper-reflective dots in the subretinal space, and inversely with BCVA. PMID:24379554

  12. Communications between intraretinal and subretinal space on optical coherence tomography of neurosensory retinal detachment in diabetic macular edema.

    PubMed

    Gupta, Aditi; Raman, Rajiv; Mohana, Kp; Kulothungan, Vaitheeswaran; Sharma, Tarun

    2013-09-01

    The pathogenesis of development and progression of neurosensory retinal detachment (NSD) in diabetic macular edema (DME) is not yet fully understood. The purpose of this study is to describe the spectral domain optical coherence tomography (SD-OCT) morphological characteristics of NSD associated with DME in the form of outer retinal communications and to assess the correlation between the size of communications and various factors. This was an observational retrospective nonconsecutive case series in a tertiary care eye institute. We imaged NSD and outer retinal communications in 17 eyes of 16 patients having NSD associated with DME using SD-OCT. We manually measured the size of the outer openings of these communications and studied its correlation with various factors. Statistical analysis (correlation test) was performed using the Statistical Package for the Social Sciences (SPSS) software (version 14.0). The main outcome measures were correlation of the size of communications with dimensions of NSD, presence of subretinal hyper-reflective dots, and best-corrected visual acuity (BCVA). The communications were seen as focal defects of the outer layers of the elevated retina. With increasing size of communication, there was an increase in the height of NSD (r = 0.701, P = 0.002), the horizontal diameter of NSD (r = 0.695, P = 0.002), and the number of hyper-reflective dots in the subretinal space (r = 0.729, P = 0.002). The logarithm of the minimum angle of resolution (logMAR) BCVA increased with increasing size of communications (r = 0.827, P < 0.0001). Outer retinal communications between the intra- and subretinal space were noted in eyes having NSD associated with DME. The size of communications correlated positively with the size of NSD and the number of hyper-reflective dots in the subretinal space, and inversely with BCVA.

  13. Eye-size variability in deep-sea lanternfishes (Myctophidae): an ecological and phylogenetic study.

    PubMed

    de Busserolles, Fanny; Fitzpatrick, John L; Paxton, John R; Marshall, N Justin; Collin, Shaun P

    2013-01-01

    One of the most common visual adaptations seen in the mesopelagic zone (200-1000 m), where the amount of light diminishes exponentially with depth and where bioluminescent organisms predominate, is the enlargement of the eye and pupil area. However, it remains unclear how eye size is influenced by depth, other environmental conditions and phylogeny. In this study, we determine the factors influencing variability in eye size and assess whether this variability is explained by ecological differences in habitat and lifestyle within a family of mesopelagic fishes characterized by broad intra- and interspecific variance in depth range and luminous patterns. We focus our study on the lanternfish family (Myctophidae) and hypothesise that lanternfishes with a deeper distribution and/or a reduction of bioluminescent emissions have smaller eyes and that ecological factors rather than phylogenetic relationships will drive the evolution of the visual system. Eye diameter and standard length were measured in 237 individuals from 61 species of lanternfishes representing all the recognised tribes within the family in addition to compiling an ecological dataset including depth distribution during night and day and the location and sexual dimorphism of luminous organs. Hypotheses were tested by investigating the relationship between the relative size of the eye (corrected for body size) and variations in depth and/or patterns of luminous-organs using phylogenetic comparative analyses. Results show a great variability in relative eye size within the Myctophidae at all taxonomic levels (from subfamily to genus), suggesting that this character may have evolved several times. However, variability in eye size within the family could not be explained by any of our ecological variables (bioluminescence and depth patterns), and appears to be driven solely by phylogenetic relationships.
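
    The study corrects eye diameter for body size before the comparative analyses; a common way to do this is to take residuals from a log-log regression of eye diameter on standard length, sketched below with invented measurements (the paper's phylogenetic comparative methods, e.g. PGLS, are not reproduced here).

        import numpy as np

        def relative_eye_size(eye_diameter_mm, standard_length_mm):
            """Residuals of log10(eye diameter) on log10(standard length);
            positive values mean larger-than-expected eyes for that body size."""
            x = np.log10(standard_length_mm)
            y = np.log10(eye_diameter_mm)
            slope, intercept = np.polyfit(x, y, deg=1)
            return y - (slope * x + intercept)

        # illustrative measurements, not the study's data
        standard_length = np.array([45.0, 60.0, 72.0, 85.0, 110.0])
        eye_diameter    = np.array([2.1, 3.0, 3.2, 4.5, 4.9])
        print(relative_eye_size(eye_diameter, standard_length))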

  14. Eye-Size Variability in Deep-Sea Lanternfishes (Myctophidae): An Ecological and Phylogenetic Study

    PubMed Central

    de Busserolles, Fanny; Fitzpatrick, John L.; Paxton, John R.; Marshall, N. Justin; Collin, Shaun P.

    2013-01-01

    One of the most common visual adaptations seen in the mesopelagic zone (200–1000 m), where the amount of light diminishes exponentially with depth and where bioluminescent organisms predominate, is the enlargement of the eye and pupil area. However, it remains unclear how eye size is influenced by depth, other environmental conditions and phylogeny. In this study, we determine the factors influencing variability in eye size and assess whether this variability is explained by ecological differences in habitat and lifestyle within a family of mesopelagic fishes characterized by broad intra- and interspecific variance in depth range and luminous patterns. We focus our study on the lanternfish family (Myctophidae) and hypothesise that lanternfishes with a deeper distribution and/or a reduction of bioluminescent emissions have smaller eyes and that ecological factors rather than phylogenetic relationships will drive the evolution of the visual system. Eye diameter and standard length were measured in 237 individuals from 61 species of lanternfishes representing all the recognised tribes within the family in addition to compiling an ecological dataset including depth distribution during night and day and the location and sexual dimorphism of luminous organs. Hypotheses were tested by investigating the relationship between the relative size of the eye (corrected for body size) and variations in depth and/or patterns of luminous-organs using phylogenetic comparative analyses. Results show a great variability in relative eye size within the Myctophidae at all taxonomic levels (from subfamily to genus), suggesting that this character may have evolved several times. However, variability in eye size within the family could not be explained by any of our ecological variables (bioluminescence and depth patterns), and appears to be driven solely by phylogenetic relationships. PMID:23472203

  15. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The Bayesian decomposition of the posterior distribution was used to develop a likelihood function to correct bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimate selectivity parameters are discussed.

  16. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, based on 25-item dichotomous scales with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  17. Fixing Stellarator Magnetic Surfaces

    NASA Astrophysics Data System (ADS)

    Hanson, James D.

    1999-11-01

    Magnetic surfaces are a perennial issue for stellarators. The design heuristic of finding a magnetic field with zero perpendicular component on a specified outer surface often yields inner magnetic surfaces with very small resonant islands. However, magnetic fields in the laboratory are not design fields. Island-causing errors can arise from coil placement errors, stray external fields, and design inadequacies such as ignoring coil leads and incomplete characterization of current distributions within the coil pack. The problem addressed is how to eliminate such error-caused islands. I take a perturbation approach, where the zero order field is assumed to have good magnetic surfaces, and comes from a VMEC equilibrium. The perturbation field consists of error and correction pieces. The error correction method is to determine the correction field so that the sum of the error and correction fields gives zero island size at specified rational surfaces. It is particularly important to correctly calculate the island size for a given perturbation field. The method works well with many correction knobs, and a Singular Value Decomposition (SVD) technique is used to determine minimal corrections necessary to eliminate islands.

  18. The SAMI Galaxy Survey: can we trust aperture corrections to predict star formation?

    NASA Astrophysics Data System (ADS)

    Richards, S. N.; Bryant, J. J.; Croom, S. M.; Hopkins, A. M.; Schaefer, A. L.; Bland-Hawthorn, J.; Allen, J. T.; Brough, S.; Cecil, G.; Cortese, L.; Fogarty, L. M. R.; Gunawardhana, M. L. P.; Goodwin, M.; Green, A. W.; Ho, I.-T.; Kewley, L. J.; Konstantopoulos, I. S.; Lawrence, J. S.; Lorente, N. P. F.; Medling, A. M.; Owers, M. S.; Sharp, R.; Sweet, S. M.; Taylor, E. N.

    2016-01-01

    In the low-redshift Universe (z < 0.3), our view of galaxy evolution is primarily based on fibre optic spectroscopy surveys. Elaborate methods have been developed to address aperture effects when fixed aperture sizes only probe the inner regions for galaxies of ever decreasing redshift or increasing physical size. These aperture corrections rely on assumptions about the physical properties of galaxies. The adequacy of these aperture corrections can be tested with integral-field spectroscopic data. We use integral-field spectra drawn from 1212 galaxies observed as part of the SAMI Galaxy Survey to investigate the validity of two aperture correction methods that attempt to estimate a galaxy's total instantaneous star formation rate. We show that biases arise when assuming that instantaneous star formation is traced by broad-band imaging, and when the aperture correction is built only from spectra of the nuclear region of galaxies. These biases may be significant depending on the selection criteria of a survey sample. Understanding the sensitivities of these aperture corrections is essential for correct handling of systematic errors in galaxy evolution studies.

  19. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
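
    A minimal Python sketch of the FFT-based cross-correlation idea described above, reduced to integer-pixel shifts (the function's sub-pixel phase-plane fit and dynamic sample-size adjustment are omitted); the test images are synthetic.

        import numpy as np

        def measure_shift(reference, frame):
            """Integer-pixel (row, col) shift of `frame` relative to `reference`,
            taken from the peak of the FFT-based cross-correlation."""
            xc = np.fft.ifft2(np.conj(np.fft.fft2(reference)) * np.fft.fft2(frame))
            row, col = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
            if row > reference.shape[0] // 2:      # wrap to signed shifts
                row -= reference.shape[0]
            if col > reference.shape[1] // 2:
                col -= reference.shape[1]
            return row, col

        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))
        jittered = np.roll(np.roll(ref, 3, axis=0), -5, axis=1)
        print(measure_shift(ref, jittered))        # -> (3, -5); shift back by the negative to correct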

  20. Full self-consistency in the Fermi-orbital self-interaction correction

    NASA Astrophysics Data System (ADS)

    Yang, Zeng-hui; Pederson, Mark R.; Perdew, John P.

    2017-05-01

    The Perdew-Zunger self-interaction correction cures many common problems associated with semilocal density functionals, but suffers from a size-extensivity problem when Kohn-Sham orbitals are used in the correction. Fermi-Löwdin-orbital self-interaction correction (FLOSIC) solves the size-extensivity problem, allowing its use in periodic systems and resulting in better accuracy in finite systems. Although the previously published FLOSIC algorithm [Pederson et al., J. Chem. Phys. 140, 121103 (2014); doi:10.1063/1.4869581] appears to work well in many cases, it is not fully self-consistent. This would be particularly problematic for systems where the occupied manifold is strongly changed by the correction. In this paper, we demonstrate a different algorithm for FLOSIC to achieve full self-consistency with only marginal increase of computational cost. The resulting total energies are found to be lower than previously reported non-self-consistent results.

  1. Detector-specific correction factors in radiosurgery beams and their impact on dose distribution calculations.

    PubMed

    García-Garduño, Olivia A; Rodríguez-Ávila, Manuel A; Lárraga-Gutiérrez, José M

    2018-01-01

    Silicon-diode-based detectors are commonly used for the dosimetry of small radiotherapy beams due to their relatively small volumes and high sensitivity to ionizing radiation. Nevertheless, silicon-diode-based detectors tend to over-respond in small fields because of their high density relative to water. For that reason, detector-specific beam correction factors (k_Qclin,Qmsr^fclin,fmsr) have been recommended not only to correct the total scatter factors but also to correct the tissue-maximum and off-axis ratios. However, the application of these correction factors to in-depth and off-axis locations has not been studied. The goal of this work is to address the impact of the correction factors on the calculated dose distribution in static non-conventional photon beams (specifically, in stereotactic radiosurgery with circular collimators). To achieve this goal, the total scatter factors, tissue-maximum, and off-axis ratios were measured with a stereotactic field diode for 4.0-, 10.0-, and 20.0-mm circular collimators. The irradiation was performed with a Novalis® linear accelerator using a 6-MV photon beam. The detector-specific correction factors were calculated and applied to the experimental dosimetry data for in-depth and off-axis locations. The corrected and uncorrected dosimetry data were used to commission a treatment planning system for radiosurgery planning. Various plans were calculated with simulated lesions using the uncorrected and corrected dosimetry. The resulting dose calculations were compared using the gamma index test with several criteria. The results of this work led to important conclusions for the use of detector-specific beam correction factors in a treatment planning system. The use of these factors for total scatter factors has an important impact on monitor unit calculation. On the contrary, the use of these factors for tissue-maximum and off-axis ratios does not have an important impact on the dose distribution calculated by the treatment planning system. This conclusion is only valid for the combination of treatment planning system, detector, and correction factors used in this work; however, this technique can be applied to other treatment planning systems, detectors, and correction factors.
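
    A trivial sketch of how such factors enter the workflow: diode-measured total scatter (output) factors are multiplied by the cone-specific correction factor before being entered into the planning system. The k values and readings below are placeholders, not the factors used in the paper.

        # Placeholder detector-specific correction factors and diode-measured output
        # factors, keyed by circular-collimator diameter in mm (assumed values).
        k_factor     = {4.0: 0.96, 10.0: 0.99, 20.0: 1.00}
        measured_of  = {4.0: 0.684, 10.0: 0.862, 20.0: 0.915}

        corrected_of = {cone: of * k_factor[cone] for cone, of in measured_of.items()}
        for cone in sorted(corrected_of):
            print(f"{cone:5.1f} mm cone: corrected output factor = {corrected_of[cone]:.3f}")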

  2. Pore size distribution calculation from 1H NMR signal and N2 adsorption-desorption techniques

    NASA Astrophysics Data System (ADS)

    Hassan, Jamal

    2012-09-01

    The pore size distribution (PSD) of the nano-material MCM-41 is determined using two different approaches: N2 adsorption-desorption and the 1H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which deals with the known underestimation of pore size distribution for mesoporous materials such as MCM-41 by introducing a correction factor into the classical Kelvin equation. The second method employs the Gibbs-Thomson equation for the melting point depression of liquids in confined geometries, using NMR. The results show that both approaches give broadly similar pore size distributions, and that the NMR technique can be considered an alternative direct method to obtain quantitative results, especially for mesoporous materials. The pore diameter estimated for the nano-material used in this study was about 35 Å and 38 Å for the modified Kelvin and NMR methods, respectively. A comparison between these methods and the classical Kelvin equation is also presented.
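
    A small sketch of the Gibbs-Thomson estimate used in the NMR approach: pore diameter is inferred from the melting-point depression of the confined liquid, d ≈ K_GT/ΔT_m plus a non-freezing surface-layer term. The constant K_GT and layer thickness below are typical literature-style values assumed for illustration, not the calibration used in the paper.

        def pore_diameter_nm(delta_tm_kelvin, k_gt_K_nm=52.0, nonfreezing_layer_nm=0.6):
            """Gibbs-Thomson pore diameter: d = K_GT / dT_m + 2 * t_layer.
            k_gt_K_nm and nonfreezing_layer_nm are assumed calibration values."""
            return k_gt_K_nm / delta_tm_kelvin + 2.0 * nonfreezing_layer_nm

        # e.g. confined water melting ~20 K below the bulk melting point (illustrative)
        print(f"{pore_diameter_nm(20.0):.1f} nm")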

  3. Localized shape abnormalities in the thalamus and pallidum are associated with secondarily generalized seizures in mesial temporal lobe epilepsy.

    PubMed

    Yang, Linglin; Li, Hong; Zhu, Lujia; Yu, Xinfeng; Jin, Bo; Chen, Cong; Wang, Shan; Ding, Meiping; Zhang, Minming; Chen, Zhong; Wang, Shuang

    2017-05-01

    Mesial temporal lobe epilepsy (mTLE) is a common type of drug-resistant epilepsy and secondarily generalized tonic-clonic seizures (sGTCS) have devastating consequences for patients' safety and quality of life. To probe the mechanism underlying the genesis of sGTCS, we investigated the structural differences between patients with and without sGTCS in a cohort of mTLE with radiologically defined unilateral hippocampal sclerosis. We performed voxel-based morphometric analysis of cortex and vertex-wise shape analysis of subcortical structures (the basal ganglia and thalamus) on MRI of 39 patients (21 with and 18 without sGTCS). Comparisons were initially made between sGTCS and non-sGTCS groups, and subsequently made between uncontrolled-sGTCS and controlled-sGTCS subgroups. Regional atrophy of the ipsilateral ventral pallidum (cluster size=450 voxels, corrected p=0.047, Max voxel coordinate=107, 120, 65), medial thalamus (cluster size=1128 voxels, corrected p=0.049, Max voxel coordinate=107, 93, 67), middle frontal gyrus (cluster size=60 voxels, corrected p<0.05, Max voxel coordinate=-30, 49.5, 6), and contralateral posterior cingulate cortex (cluster size=130 voxels, corrected p<0.05, Max voxel coordinate=16.5, -57, 27) was found in the sGTCS group relative to the non-sGTCS group. Furthermore, the uncontrolled-sGTCS subgroup showed more pronounced atrophy of the ipsilateral medial thalamus (cluster size=1240 voxels, corrected p=0.014, Max voxel coordinate=107, 93, 67) than the controlled-sGTCS subgroup. These findings indicate a central role of thalamus and pallidum in the pathophysiology of sGTCS in mTLE. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Dilution correction equation revisited: The impact of stream slope, relief ratio and area size of basin on geochemical anomalies

    NASA Astrophysics Data System (ADS)

    Shahrestani, Shahed; Mokhtari, Ahmad Reza

    2017-04-01

    Stream sediment sampling is a well-known technique used to discover geochemical anomalies in regional exploration activities. In the upstream catchment basin of a stream sediment sample, the geochemical signals originating from probable mineralization can be diluted by mixing with weathering material coming from non-anomalous sources. Hawkes's equation (1976) attempted to overcome this problem by using the area of the catchment basin to remove dilution effects from geochemical anomalies. However, the metal content of a stream sediment sample is linked to several geomorphological, sedimentological, climatic and geological factors, and area alone is not a comprehensive representative of the dilution taking place in a catchment basin. The aim of the present study was to consider a number of geomorphological factors affecting sediment supply, transport processes, storage and, in general, the geochemistry of stream sediments, and to incorporate them in the dilution correction procedure. This was done by employing the concepts of sediment yield and sediment delivery ratio and linking these characteristics to the dilution phenomenon in a catchment basin. Main stream slope (MSS), relief ratio (RR) and area size (Aa) of the catchment basin were selected as the important proxies (PSDRa) for sediment delivery ratio and then entered into Hawkes's equation. Hawkes's and the new equations were then applied to the stream sediment dataset collected from the Takhte-Soleyman district, west of Iran, for Au, As and Sb values. A number of large and small gold, antimony and arsenic mineral occurrences were used to evaluate the results. Anomaly maps based on the new equations showed improved anomaly delineation when the spatial distribution of mineral deposits was taken into account, and could identify additional catchment basins containing known mineralization as the anomaly class, especially in the case of Au and As. Four catchment basins having Au and As mineralization were added to the anomaly class, and one catchment basin with a known As occurrence was highlighted as anomalous using the new approach. The results demonstrate the usefulness of considering geomorphological parameters when dealing with the dilution phenomenon in a catchment basin.
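
    For context, one common statement of Hawkes's (1976) idealized dilution model is Me_a*A_a = Me_m*A_m + Me_b*(A_a - A_m), where Me_a is the metal content measured at the sample site, A_a the catchment area, A_m the area of the anomalous source, and Me_b the background content. The sketch below solves it for the source concentration; the numbers are illustrative, and the paper's PSDRa-modified form is not reproduced here.

        def hawkes_source_concentration(me_sample_ppm, catchment_area_km2,
                                        me_background_ppm, source_area_km2):
            """Solve Me_a*A_a = Me_m*A_m + Me_b*(A_a - A_m) for Me_m,
            the metal content of the anomalous source area."""
            return (me_sample_ppm * catchment_area_km2
                    - me_background_ppm * (catchment_area_km2 - source_area_km2)) / source_area_km2

        # illustrative: 120 ppm As at the outlet of a 25 km^2 basin, 40 ppm background,
        # anomalous source assumed to cover 1 km^2
        print(hawkes_source_concentration(120.0, 25.0, 40.0, 1.0))   # -> 2040.0 ppm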

  5. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-01-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol−1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol−1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning. PMID:24320250

  6. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: an accurate correction scheme for electrostatic finite-size effects.

    PubMed

    Rocklin, Gabriel J; Mobley, David L; Dill, Ken A; Hünenberger, Philippe H

    2013-11-14

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol(-1)) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol(-1)). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  7. Calculating the binding free energies of charged species based on explicit-solvent simulations employing lattice-sum methods: An accurate correction scheme for electrostatic finite-size effects

    NASA Astrophysics Data System (ADS)

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.; Hünenberger, Philippe H.

    2013-11-01

    The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges -5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol-1) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non-periodic PB calculations for a given system, its dependence on the box size being analytical. The latter scheme also provides insight into the physical origin of the finite-size effects. These two schemes also encompass a correction for discrete solvent effects that persists even in the limit of infinite box sizes. Application of either scheme essentially eliminates the size dependence of the corrected charging free energies (maximal deviation of 1.5 kJ mol-1). Because it is simple to apply, the analytical correction scheme offers a general solution to the problem of finite-size effects in free-energy calculations involving charged solutes, as encountered in calculations concerning, e.g., protein-ligand binding, biomolecular association, residue mutation, pKa and redox potential estimation, substrate transformation, solvation, and solvent-solvent partitioning.

  8. Bias correction factors for near-Earth asteroids

    NASA Technical Reports Server (NTRS)

    Benedix, Gretchen K.; Mcfadden, Lucy Ann; Morrow, Esther M.; Fomenkova, Marina N.

    1992-01-01

    Knowledge of the population size and physical characteristics (albedo, size, and rotation rate) of near-Earth asteroids (NEA's) is biased by observational selection effects which are functions of the population's intrinsic properties and the size of the telescope, detector sensitivity, and search strategy used. The NEA population is modeled in terms of orbital and physical elements: a, e, i, omega, Omega, M, albedo, and diameter, and an asteroid search program is simulated using actual telescope pointings of right ascension, declination, date, and time. The position of each object in the model population is calculated at the date and time of each telescope pointing. The program tests to see if that object is within the field of view (FOV = 8.75 degrees) of the telescope and above the limiting magnitude (V = +1.65) of the film. The effect of the starting population on the outcome of the simulation's discoveries is compared to the actual discoveries in order to define a most probable starting population.
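
    A minimal sketch of the detectability test described above: an object counts as observable if it lies within the telescope field of view at the time of the pointing and is brighter than the limiting magnitude. The circular-FOV approximation and the limiting magnitude in the example are assumptions for illustration, not the survey's actual parameters.

        import math

        def detected(ra_obj_deg, dec_obj_deg, v_mag,
                     ra_point_deg, dec_point_deg, fov_deg, v_limit):
            """True if the object lies within a circular approximation of the field
            of view (fov_deg taken as the full angular width) and is brighter than
            the limiting magnitude."""
            d1, d2 = math.radians(dec_obj_deg), math.radians(dec_point_deg)
            dra = math.radians(ra_obj_deg - ra_point_deg)
            cos_sep = math.sin(d1) * math.sin(d2) + math.cos(d1) * math.cos(d2) * math.cos(dra)
            sep_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_sep))))
            return sep_deg <= fov_deg / 2.0 and v_mag <= v_limit

        # illustrative object and pointing; the limiting magnitude is an assumed placeholder
        print(detected(150.2, 12.1, 15.8, 151.0, 11.5, fov_deg=8.75, v_limit=16.5))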

  9. The determination of total burn surface area: How much difference?

    PubMed

    Giretzlehner, M; Dirnberger, J; Owen, R; Haller, H L; Lumenta, D B; Kamolz, L-P

    2013-09-01

    Burn depth and burn size are crucial determinants in assessing patients suffering from burns. A correct evaluation of these factors is therefore essential for selecting the appropriate treatment in modern burn care. Burn surface assessment is subject to considerable differences among clinicians. This work investigated the estimation accuracy among experts using conventional surface estimation methods (e.g. the "Rule of Palm", "Rule of Nines" or "Lund-Browder Chart"). The estimation results were compared to a computer-based evaluation method. Survey data were collected during one national and one international burn conference. The poll confirmed deviations of burn depth/size estimates of up to 62% relative to the mean value of all participants. In comparison to the computer-based method, overestimation of up to 161% was found. We suggest introducing improved methods for burn depth/size assessment into clinical routine in order to efficiently allocate and distribute the available resources for burn care. Copyright © 2013 Elsevier Ltd and ISBI. All rights reserved.
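
    A tiny sketch of the adult "Rule of Nines" named above, which assigns fixed percentages of total body surface area (TBSA) to body regions; the regional percentages follow the commonly taught adult scheme, and the example burn pattern is invented.

        # Commonly taught adult Rule-of-Nines regional percentages (sum to 100).
        RULE_OF_NINES = {
            "head_and_neck": 9.0,
            "anterior_trunk": 18.0,
            "posterior_trunk": 18.0,
            "right_arm": 9.0,
            "left_arm": 9.0,
            "right_leg": 18.0,
            "left_leg": 18.0,
            "perineum": 1.0,
        }

        def tbsa_percent(fraction_burned_by_region):
            """fraction_burned_by_region maps region name -> fraction of that region burned (0-1)."""
            return sum(RULE_OF_NINES[region] * frac
                       for region, frac in fraction_burned_by_region.items())

        # e.g. half of one arm plus the whole anterior trunk (illustrative)
        print(tbsa_percent({"right_arm": 0.5, "anterior_trunk": 1.0}))   # -> 22.5 % TBSA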

  10. Variational second order density matrix study of F3-: importance of subspace constraints for size-consistency.

    PubMed

    van Aggelen, Helen; Verstichel, Brecht; Bultinck, Patrick; Van Neck, Dimitri; Ayers, Paul W; Cooper, David L

    2011-02-07

    Variational second order density matrix theory under "two-positivity" constraints tends to dissociate molecules into unphysical fractionally charged products with too low energies. We aim to construct a qualitatively correct potential energy surface for F3- by applying subspace energy constraints on mono- and diatomic subspaces of the molecular basis space. Monoatomic subspace constraints do not guarantee correct dissociation: the constraints are thus geometry dependent. Furthermore, the number of subspace constraints needed for correct dissociation does not grow linearly with the number of atoms. The subspace constraints do impose correct chemical properties in the dissociation limit and size-consistency, but the structure of the resulting second order density matrix method does not exactly correspond to a system of noninteracting units.

  11. Resistivity Correction Factor for the Four-Probe Method: Experiment III

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Nishii, Toshifumi; Kurihara, Hiroshi; Enjoji, Hideo; Iwata, Atsushi

    1990-04-01

    Experimental verification of the theoretically derived resistivity correction factor F is presented. Factor F is applied to a system consisting of a rectangular parallelepiped sample and a square four-probe array. Resistivity and sheet resistance measurements are made on isotropic graphites and crystalline ITO films. Factor F corrects experimental data and leads to reasonable resistivity and sheet resistance.
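
    As a point of reference, the sketch below shows the familiar collinear four-probe relation R_s = F*(pi/ln 2)*(V/I) with a geometry-dependent correction factor F; the paper's factor F is derived for a square probe array on a rectangular parallelepiped and therefore takes a different form, so the values here are purely illustrative.

        import math

        def sheet_resistance_ohm_sq(voltage_V, current_A, F=1.0):
            """Collinear four-probe sheet resistance; F = 1 corresponds to an
            infinite thin sheet, finite geometries need the appropriate factor."""
            return F * (math.pi / math.log(2.0)) * voltage_V / current_A

        def resistivity_ohm_m(voltage_V, current_A, thickness_m, F=1.0):
            return sheet_resistance_ohm_sq(voltage_V, current_A, F) * thickness_m

        # illustrative: 1.2 mV across the inner probes at 1 mA on a 200 nm film
        print(resistivity_ohm_m(1.2e-3, 1.0e-3, 200e-9, F=0.95))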

  12. A physics investigation of deadtime losses in neutron counting at low rates with Cf252

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, Louise G; Croft, Stephen

    2009-01-01

    252Cf spontaneous fission sources are used for the characterization of neutron counters and the determination of calibration parameters, including both neutron coincidence counting (NCC) and neutron multiplicity deadtime (DT) parameters. Even at low event rates, temporally correlated neutron counting using 252Cf suffers a deadtime effect. This means that, in contrast to counting a random neutron source (e.g. AmLi, to a close approximation), DT losses do not vanish in the low-rate limit. This is because neutrons are emitted from spontaneous fission events in time-correlated 'bursts', and are detected over a short period commensurate with their lifetime in the detector (characterized by the system die-away time, tau). Thus, even when detected neutron events from different spontaneous fissions are unlikely to overlap in time, neutron events within the detected 'burst' are subject to intrinsic DT losses. Intrinsic DT losses for dilute Pu will be lower since the multiplicity distribution is softer, but real items also experience self-multiplication which can increase the 'size' of the bursts. Traditional NCC DT correction methods do not include the intrinsic (within-burst) losses. We have proposed new forms of the traditional NCC Singles and Doubles DT correction factors. In this work, we apply Monte Carlo neutron pulse train analysis to investigate the functional form of the deadtime correction factors for an updating deadtime. Modeling is based on a high-efficiency 3He neutron counter with a short die-away time, representing an ideal 3He-based detection system. The physics of deadtime losses at low rates is explored and presented. It is observed that the new forms are applicable and offer more accurate correction than the traditional forms.
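
    The paper's new NCC Singles/Doubles correction forms are not given in this record; for orientation, the sketch below shows the two classic single-channel deadtime corrections (non-paralyzable, and the updating/paralyzable model inverted by fixed-point iteration) with illustrative rates.

        import math

        def true_rate_nonparalyzable(measured_cps, tau_s):
            """Non-paralyzable model: n = m / (1 - m * tau)."""
            return measured_cps / (1.0 - measured_cps * tau_s)

        def true_rate_updating(measured_cps, tau_s, iterations=100):
            """Updating (paralyzable) model m = n * exp(-n * tau),
            inverted here by simple fixed-point iteration."""
            n = measured_cps
            for _ in range(iterations):
                n = measured_cps * math.exp(n * tau_s)
            return n

        m, tau = 5.0e4, 1.0e-6     # illustrative: 50 kcps measured, 1 microsecond dead time
        print(true_rate_nonparalyzable(m, tau), true_rate_updating(m, tau))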

  13. Roundness variation in JPEG images affects the automated process of nuclear immunohistochemical quantification: correction with a linear regression model.

    PubMed

    López, Carlos; Jaén Martinez, Joaquín; Lejeune, Marylène; Escrivà, Patricia; Salvadó, Maria T; Pons, Lluis E; Alvaro, Tomás; Baucells, Jordi; García-Rojo, Marcial; Cugat, Xavier; Bosch, Ramón

    2009-10-01

    The volume of digital image (DI) storage continues to be an important problem in computer-assisted pathology. DI compression enables the size of files to be reduced, but with the disadvantage of loss of quality. Previous results indicated that the efficiency of computer-assisted quantification of immunohistochemically stained cell nuclei may be significantly reduced when compressed DIs are used. This study attempts to show which morphometric parameters of immunohistochemically stained nuclei may be altered by different levels of JPEG compression and what the implications of these alterations are for automated nuclear counts, and further develops a method for correcting this discrepancy in the nuclear count. For this purpose, 47 DIs from different tissues were captured in uncompressed TIFF format and converted to 1:3, 1:23 and 1:46 compression JPEG images. Sixty-five positive objects were selected from these images, and six morphological parameters were measured and compared for each object in the TIFF images and in those of the different compression levels using a set of previously developed and tested macros. Roundness proved to be the only morphological parameter that was significantly affected by image compression. Factors to correct the discrepancy in the roundness estimate were derived from linear regression models for each compression level, thereby eliminating the statistically significant differences between measurements in the equivalent images. These correction factors were incorporated in the automated macros, where they reduced the nuclear quantification differences arising from image compression. Our results demonstrate that it is possible to carry out unbiased automated immunohistochemical nuclear quantification in compressed DIs with a methodology that could be easily incorporated in different systems of digital image analysis.
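
    A small sketch of the kind of per-compression-level linear correction described above: fit roundness measured on JPEG images against roundness measured on the uncompressed TIFF originals, then use the fitted line to correct new JPEG-derived values. The paired values are invented for illustration.

        import numpy as np

        def fit_roundness_correction(roundness_jpeg, roundness_tiff):
            """Least-squares fit roundness_tiff ~ a*roundness_jpeg + b for one
            compression level; returns a function correcting JPEG-derived values."""
            a, b = np.polyfit(roundness_jpeg, roundness_tiff, deg=1)
            return lambda r: a * np.asarray(r) + b

        # illustrative paired measurements of the same nuclei (assumed values)
        jpeg_vals = np.array([0.78, 0.81, 0.85, 0.88, 0.92])
        tiff_vals = np.array([0.74, 0.78, 0.83, 0.87, 0.91])
        correct = fit_roundness_correction(jpeg_vals, tiff_vals)
        print(correct([0.80, 0.90]))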

  14. Evaluation of the Actuator Line Model with coarse resolutions

    NASA Astrophysics Data System (ADS)

    Draper, M.; Usera, G.

    2015-06-01

    The aim of the present paper is to evaluate the Actuator Line Model (ALM) at spatial resolutions coarser than what is generally recommended, also using larger time steps. To accomplish this, the ALM has been implemented in the open-source code caffa3d.MBRi and validated against experimental measurements from two wind tunnel campaigns (a stand-alone wind turbine and two wind turbines in line, cases A and B respectively), taking into account two spatial resolutions: R/8 and R/15 (R is the rotor radius). A sensitivity analysis in case A was performed in order to gain some insight into the influence of the smearing factor (3D Gaussian distribution) and time step size on power and thrust, as well as on the wake, without applying a tip loss correction factor (TLCF), for one tip speed ratio (TSR). It is concluded that power increases as the smearing factor becomes larger or the time step size becomes smaller, but the velocity deficit is not affected as much. From this analysis, a smearing factor was obtained in order to calculate precisely the power coefficient for that TSR without applying a TLCF. Results with this approach were compared with another simulation choosing a larger smearing factor and applying Prandtl's TLCF, for three values of TSR. It is found that applying the TLCF improves the power estimation and weakens the influence of the smearing factor. Finally, these two alternatives were tested in case B, confirming that conclusion.
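
    For reference, Prandtl's tip loss correction factor mentioned above has the standard blade-element form F = (2/pi)*acos(exp(-f)) with f = B*(R - r)/(2*r*sin(phi)); the sketch below evaluates it for illustrative rotor parameters (not those of the wind tunnel campaigns).

        import math

        def prandtl_tip_loss(r, R, B, phi_rad):
            """Prandtl tip-loss factor: F = (2/pi) * acos(exp(-f)),
            f = B * (R - r) / (2 * r * sin(phi))."""
            f = B * (R - r) / (2.0 * r * math.sin(phi_rad))
            return (2.0 / math.pi) * math.acos(math.exp(-f))

        # illustrative: 3-bladed rotor, station at 95% span, 6 degree local inflow angle
        print(prandtl_tip_loss(r=0.95, R=1.0, B=3, phi_rad=math.radians(6.0)))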

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Lu, S; Qin, Y

    Purpose: To evaluate the dosimetric uncertainty associated with Gafchromic (EBT3) films and establish an absolute dosimetry protocol for Stereotactic Radiosurgery (SRS) and Stereotactic Body Radiotherapy (SBRT). Methods: EBT3 films were irradiated at each of seven different dose levels between 1 and 15 Gy with open fields, and standard deviations of dose maps were calculated at each color channel for evaluation. A scanner non-uniform response correction map was built by registering and comparing film doses to the reference diode array-based dose map delivered with the same doses. To determine the temporal dependence of EBT3 films, the average correction factors of different dose levels as a function of time were evaluated up to four days after irradiation. An integrated film dosimetry protocol was developed for dose calibration, calibration curve fitting, dose mapping, and profile/gamma analysis. Patient specific quality assurance (PSQA) was performed for 93 SRS/SBRT treatment plans. Results: The scanner response varied within 1% for the field sizes less than 5 × 5 cm², and up to 5% for the field sizes of 10 × 10 cm². The scanner correction method was able to remove visually evident, irregular detector responses found for larger field sizes. The dose response of the film changed rapidly (∼10%) in the first two hours and plateaued afterwards, ∼3% change between 2 and 24 hours. The mean uncertainties (mean of the standard deviations) were <0.5% over the dose range 1∼15 Gy for all color channels for the OD response curves. The percentage of points passing the 3%/1mm gamma criteria based on absolute dose analysis, averaged over all tests, was 95.0 ± 4.2. Conclusion: We have developed an absolute film dose dosimetry protocol using EBT3 films. The overall uncertainty has been established to be approximately 1% for SRS and SBRT PSQA. The work was supported by a Research Scholar Grant, RSG-15-137-01-CCE from the American Cancer Society.
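
    The record does not give the calibration function used; a commonly used radiochromic-film calibration form maps net optical density to dose as D = a*netOD + b*netOD^n, sketched below with invented calibration points and starting coefficients.

        import numpy as np
        from scipy.optimize import curve_fit

        def net_od(pixel_unexposed, pixel_exposed):
            """Net optical density from scanner pixel values of one color channel."""
            return np.log10(pixel_unexposed / pixel_exposed)

        def dose_model(net_od_values, a, b, n):
            """Commonly used film calibration form: D = a*netOD + b*netOD**n."""
            return a * net_od_values + b * net_od_values ** n

        # illustrative calibration points (assumed): netOD vs delivered dose in Gy
        nod  = np.array([0.05, 0.10, 0.18, 0.28, 0.38, 0.47, 0.55])
        dose = np.array([1.0, 2.0, 4.0, 7.0, 10.0, 13.0, 15.0])
        (a, b, n), _ = curve_fit(dose_model, nod, dose, p0=[20.0, 30.0, 2.5])
        print(dose_model(np.array([0.30]), a, b, n))   # dose estimate for netOD = 0.30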

  16. Effect of Peer Instruction on the Likelihood for Choosing the Correct Response to a Physiology Question

    ERIC Educational Resources Information Center

    Relling, Alejandro E.; Giuliodori, Mauricio J.

    2015-01-01

    The aims of the present study were to measure the effects of individual answer (correct vs. incorrect), individual answer of group members (no vs. some vs. all correct), self-confidence about the responses (low vs. mid vs. high), sex (female vs. male students), and group size (2-4 students) on the odds for change and for correctness after peer…

  17. Semiempirical evaluation of post-Hartree-Fock diagonal-Born-Oppenheimer corrections for organic molecules.

    PubMed

    Mohallem, José R

    2008-04-14

    Recent post-Hartree-Fock calculations of the diagonal-Born-Oppenheimer correction empirically show that it behaves quite similarly to atomic nuclear mass corrections. An almost constant contribution per electron is identified, which converges with system size for specific series of organic molecules. This feature permits pocket-calculator evaluation of the corrections within thermochemical accuracy (10⁻¹ mhartree or kcal/mol).

  18. Spot diameters for scanning photorefractive keratectomy: a comparative study

    NASA Astrophysics Data System (ADS)

    Manns, Fabrice; Parel, Jean-Marie A.

    1998-06-01

    Purpose: The purpose of this study was to use computer simulations to compare the duration, smoothness, and accuracy of scanning photorefractive keratectomy with spot diameters ranging from 0.2 to 1 mm. Methods: We calculated the number of pulses per diopter of flattening for spot sizes varying from 0.2 to 1 mm. We also computed the corneal shape after the correction of 4 diopters of myopia and 4 diopters of astigmatism with a 6 mm ablation zone and a spot size of 0.4 mm with 600 mJ/cm² peak radiant exposure and 0.8 mm with 300 mJ/cm² peak radiant exposure. The accuracy and smoothness of the ablations were compared. Results: The repetition rate required to produce corrections of myopia with a 6 mm ablation zone in a duration of 5 s per diopter is on the order of 1 kHz for spot sizes smaller than 0.5 mm, and of 100 Hz for spot sizes larger than 0.5 mm. The accuracy and smoothness after the correction of myopia and astigmatism with small and large spot sizes were not significantly different. Conclusions: This study seems to indicate that there is no theoretical advantage to using either smaller spots with higher radiant exposures or larger spots with lower radiant exposures. However, at fixed radiant exposure, treatments with smaller spots require a longer duration of surgery but provide better accuracy for the correction of astigmatism.
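
    The order-of-magnitude repetition rates quoted in the results can be reproduced with a simple volume argument; the Munnerlyn-type central depth of S²D/3 µm and the assumed 0.25 µm ablation per pulse below are illustrative choices, not values quoted from the paper.

      import numpy as np

      def pulses_per_diopter(zone_mm=6.0, spot_mm=0.5, depth_per_pulse_um=0.25):
          """Estimate pulses per diopter for a parabolic (Munnerlyn-type)
          myopic ablation profile t(r) = t0 * (1 - (r/a)^2)."""
          a = zone_mm / 2.0                               # ablation-zone radius (mm)
          t0_um = zone_mm**2 / 3.0                        # central depth per diopter (um)
          r = np.linspace(0.0, a, 2000)
          depth = t0_um * (1.0 - (r / a) ** 2)            # um
          lenticule_volume = np.trapz(depth * 2.0 * np.pi * r, r)        # um * mm^2
          volume_per_pulse = np.pi * (spot_mm / 2.0) ** 2 * depth_per_pulse_um
          return lenticule_volume / volume_per_pulse

      n = pulses_per_diopter(spot_mm=0.4)
      print(n, n / 5.0, "Hz for 5 s per diopter")   # roughly 1 kHz, as in the abstract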

  19. Sediment size fractionation and focusing in the equatorial Pacific: Effect on 230Th normalization and paleoflux measurements

    NASA Astrophysics Data System (ADS)

    Lyle, Mitchell; Marcantonio, Franco; Moore, Willard S.; Murray, Richard W.; Huh, Chih-An; Finney, Bruce P.; Murray, David W.; Mix, Alan C.

    2014-07-01

    We use flux, dissolution, and excess 230Th data from the Joint Global Ocean Flux Study and Manganese Nodule Project equatorial Pacific study Site C to assess the extent of sediment focusing in the equatorial Pacific. Measured mass accumulation rates (MAR) from sediment cores were compared to reconstructed MAR by multiplying the particulate rain caught in sediment traps by the 230Th focusing factor and subtracting measured dissolution. CaCO3 MAR is severely overestimated when the 230Th focusing factor correction is large but is estimated correctly when the focusing factor is small. In contrast, Al fluxes in the sediment fine fraction are well matched when the focusing correction is used. Since CaCO3 is primarily a coarse sediment component, we propose that there is significant sorting of fine and coarse sediments during lateral sediment transport by weak currents. Because CaCO3 does not move with 230Th, normalization typically overcorrects the CaCO3 MAR; and because CaCO3 is 80% of the total sediment, 230Th normalization overestimates lateral sediment flux. Fluxes of 230Th in particulate rain caught in sediment traps agree with the water column production-sorption model, except within 500 m of the bottom. Near the bottom, 230Th flux measurements are as much as 3 times higher than model predictions. There is also evidence for lateral near-bottom 230Th transport in the bottom nepheloid layer since 230Th fluxes caught by near-bottom sediment traps are higher than predicted by resuspension of surface sediments alone. Resuspension and nepheloid layer transport under weak currents need to be better understood in order to use 230Th within a quantitative model of lateral sediment transport.
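
    The 230Th-normalization arithmetic underlying the comparison above can be sketched as follows, assuming the commonly used production constant of about 0.0267 dpm per cubic meter of seawater per year; the example numbers are illustrative and are not data from Site C.

      BETA = 0.0267  # 230Th production from 234U decay, dpm m^-3 yr^-1 (assumed value)

      def th_normalized_flux(xs230th_dpm_g, water_depth_m):
          """Preserved vertical rain rate in g m^-2 yr^-1."""
          return BETA * water_depth_m / xs230th_dpm_g

      def focusing_factor(measured_mar_g_m2_yr, xs230th_dpm_g, water_depth_m):
          """Psi > 1 indicates lateral focusing, Psi < 1 winnowing."""
          return measured_mar_g_m2_yr * xs230th_dpm_g / (BETA * water_depth_m)

      print(th_normalized_flux(xs230th_dpm_g=4.0, water_depth_m=4400.0))
      print(focusing_factor(measured_mar_g_m2_yr=60.0, xs230th_dpm_g=4.0,
                            water_depth_m=4400.0))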

  20. Distribution of Attenuation Factor Beneath the Japanese Islands

    NASA Astrophysics Data System (ADS)

    Fujihara, S.; Hashimoto, M.

    2001-12-01

    In this research, we tried to estimate the distribution of the attenuation factor of seismic waves, which is closely related to the above-mentioned inelastic parameters. Velocity records of events from the Freesia network and the J-array network were used. The events were selected based on the following criteria: (a) events with JMA magnitudes from 3.8 to 5.0 and hypocentral distances from 20 km to 200 km, (b) events with JMA magnitudes from 5.1 to 6.8 and hypocentral distances from 200 km to 10_?, (c) all events with depths greater than 30 km and S/N ratios greater than 2. After correcting for the instrument response, P-wave spectra were estimated. Following Boatwright (1991), the observed spectra were modeled by theoretical spectra assuming the relation A_ij(f) = S_i(f) P_ij(f) C_j(f), where A_ij(f), S_i(f), P_ij(f), and C_j(f) are the observed spectrum, source spectrum, propagation effect, and site effect, respectively. Brune's model (1970) was assumed for the source. Frequency dependence of the attenuation factor was not assumed. The global standard velocity model (AK135) is used for ray tracing, and ellipticity corrections and station elevation corrections are also applied. The block sizes are 50 km by 50 km laterally and increase vertically. As a result of the analysis, the attenuation structure beneath the Japanese Islands down to a depth of 180 km was reconstructed with relatively good resolution. Low Q is clearly seen in central Hokkaido, western Hokkaido, the Tohoku region, the Hida region, the Izu region, and southern Kyushu. A relatively sharp decrease in Q associated with the asthenosphere can be seen below a depth of 70 km.
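
    The spectral factorization and the stated assumptions (Brune source, frequency-independent Q) can be sketched as below; the corner frequency, travel time, and Q values are illustrative only.

      import numpy as np

      def brune_source(f, omega0, fc):
          """Brune (1970) omega-square source spectrum."""
          return omega0 / (1.0 + (f / fc) ** 2)

      def path_attenuation(f, travel_time_s, Q):
          """Whole-path attenuation for frequency-independent Q: exp(-pi f t / Q)."""
          return np.exp(-np.pi * f * travel_time_s / Q)

      f = np.linspace(0.5, 20.0, 200)   # Hz
      model = brune_source(f, omega0=1.0, fc=2.0) * path_attenuation(f, 30.0, Q=300.0)
      # The site term C_j(f) would multiply this before comparison with observed spectra.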

  1. Refractive Outcomes, Contrast Sensitivity, HOAs, and Patient Satisfaction in Moderate Myopia: Wavefront-Optimized Versus Tissue-Saving PRK.

    PubMed

    Nassiri, Nader; Sheibani, Kourosh; Azimi, Abbas; Khosravi, Farinaz Mahmoodi; Heravian, Javad; Yekta, Abasali; Moghaddam, Hadi Ostadi; Nassiri, Saman; Yasseri, Mehdi; Nassiri, Nariman

    2015-10-01

    To compare refractive outcomes, contrast sensitivity, higher-order aberrations (HOAs), and patient satisfaction after photorefractive keratectomy for correction of moderate myopia with two methods: tissue saving versus wavefront optimized. In this prospective, comparative study, 152 eyes (80 patients) with moderate myopia with and without astigmatism were randomly divided into two groups: the tissue-saving group (Technolas 217z Zyoptix laser; Bausch & Lomb, Rochester, NY) (76 eyes of 39 patients) or the wavefront-optimized group (WaveLight Allegretto Wave Eye-Q laser; Alcon Laboratories, Inc., Fort Worth, TX) (76 eyes of 41 patients). Preoperative and 3-month postoperative refractive outcomes, contrast sensitivity, HOAs, and patient satisfaction were compared between the two groups. The mean spherical equivalent was -4.50 ± 1.02 diopters. No statistically significant differences were detected between the groups in terms of uncorrected and corrected distance visual acuity and spherical equivalent preoperatively and 3 months postoperatively. No statistically significant differences were seen in the amount of preoperative to postoperative contrast sensitivity changes between the two groups in photopic and mesopic conditions. HOAs and Q factor increased in both groups postoperatively (P = .001), with the tissue-saving method causing more increases in HOAs (P = .007) and Q factor (P = .039). Patient satisfaction was comparable between both groups. Both platforms were effective in correcting moderate myopia with or without astigmatism. No difference in refractive outcome, contrast sensitivity changes, and patient satisfaction between the groups was observed. Postoperatively, the tissue-saving method caused a higher increase in HOAs and Q factor compared to the wavefront-optimized method, which could be due to larger optical zone sizes in the tissue-saving group. Copyright 2015, SLACK Incorporated.

  2. Influence of CT-based depth correction of renal scintigraphy in evaluation of living kidney donors on side selection and postoperative renal function: is it necessary to know the relative renal function?

    PubMed

    Weinberger, Sarah; Klarholz-Pevere, Carola; Liefeldt, Lutz; Baeder, Michael; Steckhan, Nico; Friedersdorff, Frank

    2018-03-22

    To analyse the influence of CT-based depth correction in the assessment of split renal function in potential living kidney donors. In 116 consecutive living kidney donors, preoperative split renal function was assessed using CT-based depth correction. The influence on donor side selection and on postoperative renal function of the living kidney donors was analyzed. Linear regression analysis was performed to identify predictors of postoperative renal function. A left-versus-right kidney depth variation of more than 1 cm was found in 40/114 donors (35%). Eleven patients (10%) had a difference of more than 5% in relative renal function after depth correction. Kidney depth variation and changes in relative renal function after depth correction would have influenced side selection in 30 of 114 living kidney donors. CT depth correction did not improve the predictability of postoperative renal function of the living kidney donor. In general, it was not possible to predict the postoperative renal function from the preoperative total and relative renal function. In multivariate linear regression analysis, age and BMI were identified as the most important predictors of postoperative renal function of the living kidney donors. Our results clearly indicate that, concerning the postoperative renal function of living kidney donors, the relative renal function of the donated kidney seems to be less important than other factors. A multimodal assessment considering all available results, including kidney size, location of the kidney, and split renal function, remains necessary.

  3. Lymph node size as a simple prognostic factor in node negative colon cancer and an alternative thesis to stage migration.

    PubMed

    Märkl, Bruno; Schaller, Tina; Kokot, Yuriy; Endhardt, Katharina; Kretsinger, Hallie; Hirschbühl, Klaus; Aumann, Georg; Schenkirsch, Gerhard

    2016-10-01

    Stage migration is an accepted explanation for the association between lymph node (LN) yield and outcome in colon cancer. To investigate whether the alternative thesis of immune response is more likely, we performed a retrospective study. We enrolled 239 cases of node negative cancers, which were categorized according to the number of LNs with diameters larger than 5 mm (LN5) into the groups LN5-very low (0 to 1 LN5), LN5-low (2 to 5 LN5), and LN5-high (≥6 LN5). Significant differences were found in pT3/4 cancers with median survival times of 40, 57, and 71 months (P = .022) in the LN5-very low, LN5-low, and LN5-high groups, respectively. Multivariable analysis revealed that LN5 number and infiltration type were independent prognostic factors. LN size is prognostic in node negative colon cancer. The correct explanation for outcome differences associated with LN harvest is probably the activation status of LNs. Copyright © 2015 Elsevier Inc. All rights reserved.

  4. An aerial survey method to estimate sea otter abundance

    USGS Publications Warehouse

    Bodkin, James L.; Udevitz, Mark S.; Garner, Gerald W.; Amstrup, Steven C.; Laake, Jeffrey L.; Manly, Bryan F.J.; McDonald, Lyman L.; Robertson, Donna G.

    1999-01-01

    Sea otters (Enhydra lutris) occur in shallow coastal habitats and can be highly visible on the sea surface. They generally rest in groups and their detection depends on factors that include sea conditions, viewing platform, observer technique and skill, distance, habitat and group size. While visible on the surface, they are difficult to see while diving and may dive in response to an approaching survey platform. We developed and tested an aerial survey method that uses intensive searches within portions of strip transects to adjust for availability and sightability biases. Correction factors are estimated independently for each survey and observer. In tests of our method using shore-based observers, we estimated detection probabilities of 0.52-0.72 in standard strip-transects and 0.96 in intensive searches. We used the survey method in Prince William Sound, Alaska to estimate a sea otter population size of 9,092 (SE = 1422). The new method represents an improvement over various aspects of previous methods, but additional development and testing will be required prior to its broad application.
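
    A ratio-style estimator in the spirit of the adjustment described above is sketched below; it is not the exact survey estimator, and the counts, detection probability, and sampled fraction are made-up numbers.

      def adjusted_abundance(strip_counts, detection_prob, sampled_fraction):
          """strip_counts: otters counted on strip transects; detection_prob: from
          intensive searches (e.g., 0.52-0.72); sampled_fraction: strip area / study area."""
          corrected = sum(strip_counts) / detection_prob
          return corrected / sampled_fraction

      print(adjusted_abundance([310, 254, 198], detection_prob=0.65, sampled_fraction=0.13))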

  5. Occlusion properties of prosthetic contact lenses for the treatment of amblyopia.

    PubMed

    Collins, Randall S; McChesney, Megan E; McCluer, Craig A; Schatz, Martha P

    2008-12-01

    The efficacy of opaque contact lenses as occlusion therapy for amblyopia has been established in the literature. Prosthetic contact lenses use similar tints to improve cosmesis in scarred or deformed eyes and may be an alternative for occlusion therapy. To test this idea, we determined the degree of vision penalization elicited by prosthetic contact lenses and their effect on peripheral fusion. We tested 19 CIBA Vision DuraSoft 3 Prosthetic soft contact lenses with varying iris prints, underprints, and opaque pupil sizes in 10 volunteers with best-corrected Snellen distance visual acuity of 20/20 or better in each eye. Snellen visual acuity and peripheral fusion using the Worth 4-Dot test at near were measured on each subject wearing each of the 19 lenses. Results were analyzed with 3-factor analysis of variance. Mean visual acuity through the various lenses ranged from 20/79 to 20/620. Eight lenses allowed preservation of peripheral fusion in 50% or more of the subjects tested. Iris print pattern and opaque pupil size were significant factors in determining visual acuity (p < 0.05). Sufficient vision penalization can be achieved to make occlusion with prosthetic contact lenses a viable therapy for amblyopia. The degree of penalization can be varied with different iris print patterns and pupil sizes, and peripheral fusion can be preserved with some lenses. Prosthetic contact lenses can be more cosmetically appealing and more tolerable than other amblyopia treatment modalities. These factors may improve compliance with occlusion therapy.

  6. Validation of two-dimensional and three-dimensional measurements of subpleural alveolar size parameters by optical coherence tomography

    PubMed Central

    Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.

    2012-01-01

    Abstract. Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue–air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo. PMID:23235834

  7. Species-specific differences in relative eye size are related to patterns of edge avoidance in an Amazonian rainforest bird community

    PubMed Central

    Martínez-Ortega, Cristina; Santos, Eduardo SA; Gil, Diego

    2014-01-01

    Eye size shows a large degree of variation among species, even after correcting for body size. In birds, relatively larger eyes have been linked to predation risk, capture of mobile prey, and nocturnal habits. Relatively larger eyes enhance visual acuity and also allow birds to forage and communicate in low-light situations. Complex habitats such as tropical rain forests provide a mosaic of diverse lighting conditions, including differences among forest strata and at different distances from the forest edge. We examined in an Amazonian forest bird community whether microhabitat occupancy (defined by edge avoidance and forest stratum) was a predictor of relative eye size. We found that relative eye size increased with edge avoidance, but did not differ according to forest stratum. Nevertheless, the relationship between edge avoidance and relative eye size showed a nonsignificant positive trend for species that inhabit lower forest strata. Our analysis shows that birds that avoid forest edges have larger eyes than those living in lighter parts. We expect that this adaptation may allow birds to increase their active daily period in dim areas of the forest. The pattern that we found raises the question of what factors may limit the evolution of large eyes. PMID:25614788

  8. Validation of two-dimensional and three-dimensional measurements of subpleural alveolar size parameters by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Unglert, Carolin I.; Warger, William C.; Hostens, Jeroen; Namati, Eman; Birngruber, Reginald; Bouma, Brett E.; Tearney, Guillermo J.

    2012-12-01

    Optical coherence tomography (OCT) has been increasingly used for imaging pulmonary alveoli. Only a few studies, however, have quantified individual alveolar areas, and the validity of alveolar volumes represented within OCT images has not been shown. To validate quantitative measurements of alveoli from OCT images, we compared the cross-sectional area, perimeter, volume, and surface area of matched subpleural alveoli from microcomputed tomography (micro-CT) and OCT images of fixed air-filled swine samples. The relative change in size between different alveoli was extremely well correlated (r>0.9, P<0.0001), but OCT images underestimated absolute sizes compared to micro-CT by 27% (area), 7% (perimeter), 46% (volume), and 25% (surface area) on average. We hypothesized that the differences resulted from refraction at the tissue-air interfaces and developed a ray-tracing model that approximates the reconstructed alveolar size within OCT images. Using this model and OCT measurements of the refractive index for lung tissue (1.41 for fresh, 1.53 for fixed), we derived equations to obtain absolute size measurements of superellipse and circular alveoli with the use of predictive correction factors. These methods and results should enable the quantification of alveolar sizes from OCT images in vivo.
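
    As a point of reference for the refraction effect described in these two records, the first-order axial correction simply divides the measured optical depth by the tissue refractive index; the paper's ray-tracing model additionally handles distortion at curved air-tissue interfaces. The sketch below shows only that first-order step.

      def physical_depth(optical_depth_um, n_tissue=1.41):
          """OCT reports optical path length n*z, so divide by n to get physical depth.
          Refractive indices from the abstract: 1.41 fresh, 1.53 fixed lung tissue."""
          return optical_depth_um / n_tissue

      print(physical_depth(140.0))          # fresh tissue
      print(physical_depth(140.0, 1.53))    # fixed tissue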

  9. OBT analysis method using polyethylene beads for limited quantities of animal tissue.

    PubMed

    Kim, S B; Stuart, M

    2015-08-01

    This study presents a polyethylene beads method for OBT determination in animal tissues and animal products for cases where the amount of water recovered by combustion is limited by sample size or quantity. In the method, the amount of water recovered after combustion is enhanced by adding tritium-free polyethylene beads to the sample prior to combustion in an oxygen bomb. The method reduces process time by allowing the combustion water to be easily collected with a pipette. Sufficient water recovery was achieved using the polyethylene beads method when 2 g of dry animal tissue or animal product were combusted with 2 g of polyethylene beads. Correction factors, which account for the dilution due to the combustion water of the beads, are provided for beef, chicken, pork, fish and clams, as well as egg, milk and cheese. The method was tested by comparing its OBT results with those of the conventional method using animal samples collected on the Chalk River Laboratories (CRL) site. The results determined that the polyethylene beads method added no more than 25% uncertainty when appropriate correction factors are used. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.
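
    The dilution correction implied by the beads method might be sketched as below; the multiplicative form and the numbers are assumptions for illustration, since the paper provides empirically determined factors per sample type.

      def corrected_tritium(measured_bq_per_l, water_from_sample_g, water_from_beads_g):
          """The beads contribute tritium-free combustion water, so the measured
          concentration is scaled back up by the total-to-sample water ratio."""
          dilution = (water_from_sample_g + water_from_beads_g) / water_from_sample_g
          return measured_bq_per_l * dilution

      print(corrected_tritium(measured_bq_per_l=5.2, water_from_sample_g=1.1,
                              water_from_beads_g=2.4))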

  10. Application of modern radiative transfer tools to model laboratory quartz emissivity

    NASA Astrophysics Data System (ADS)

    Pitman, Karly M.; Wolff, Michael J.; Clayton, Geoffrey C.

    2005-08-01

    Planetary remote sensing of regolith surfaces requires use of theoretical models for interpretation of constituent grain physical properties. In this work, we review and critically evaluate past efforts to strengthen numerical radiative transfer (RT) models with comparison to a trusted set of nadir incidence laboratory quartz emissivity spectra. By first establishing a baseline statistical metric to rate successful model-laboratory emissivity spectral fits, we assess the efficacy of hybrid computational solutions (Mie theory + numerically exact RT algorithm) to calculate theoretical emissivity values for micron-sized α-quartz particles in the thermal infrared (2000-200 cm⁻¹) wave number range. We show that Mie theory, a widely used but poor approximation to irregular grain shape, fails to produce the single scattering albedo and asymmetry parameter needed to arrive at the desired laboratory emissivity values. Through simple numerical experiments, we show that corrections to single scattering albedo and asymmetry parameter values generated via Mie theory become more necessary with increasing grain size. We directly compare the performance of diffraction subtraction and static structure factor corrections to the single scattering albedo, asymmetry parameter, and emissivity for dense packing of grains. Through these sensitivity studies, we provide evidence that, assuming RT methods work well given sufficiently well-quantified inputs, assumptions about the scatterer itself constitute the most crucial aspect of modeling emissivity values.

  11. Filling the gap: Calibration of the low molar-mass range of cellulose in size exclusion chromatography with cello-oligomers.

    PubMed

    Oberlerchner, J T; Vejdovszky, P; Zweckmair, T; Kindler, A; Koch, S; Rosenau, T; Potthast, A

    2016-11-04

    Degraded celluloses are becoming increasingly important as part of product streams coming from various biorefinery scenarios. Analysis of the molar mass distribution of such fractions is a challenge, since neither established methods for mono- or disaccharides nor common methods for polysaccharide characterization cover the intermediate oligomer range appropriately. Size exclusion chromatography (SEC) with multi-angle laser light scattering (MALLS), the standard approach for celluloses, suffers from decreased scattering intensities in the lower molar-mass range. The limitation in the low-molar-mass range can, in principle, be overcome by calibration, but calibration standards for such "short" celluloses are either not readily available or structurally remote and thus questionable. In this paper, we present the calibration of an SEC system - for the first time - with monodisperse cello-oligomer standards up to about 3400 g mol⁻¹. These cello-oligomers are "short-chain celluloses" and can be seen as the "true" standard compounds, by contrast to commonly used standards that are chemically different from cellulose, such as pullulan, dextran, polystyrene, or poly(methyl methacrylate). The calibration is compared against those commercial standards and correction factors are calculated. Calibrations with non-cellulose standards can now be adjusted to yield better fitting results, and data already available can be corrected retrospectively. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Empirical Derivation of Correction Factors for Human Spiral Ganglion Cell Nucleus and Nucleolus Count Units.

    PubMed

    Robert, Mark E; Linthicum, Fred H

    2016-01-01

    The profile count method for estimating cell number in sectioned tissue applies a correction factor for double counting (resulting from transection during sectioning) of the count units selected to represent the cell. For human spiral ganglion cell counts, we attempted to address apparent confusion between published correction factors for nucleus and nucleolus count units that are identical despite the role of count unit diameter in a commonly used correction factor formula. We examined a portion of human cochlea to empirically derive correction factors for the 2 count units, using 3-dimensional reconstruction software to identify double counts. The study was performed at the Neurotology and House Histological Temporal Bone Laboratory at the University of California, Los Angeles. Using a fully sectioned and stained human temporal bone, we identified and generated digital images of sections of the modiolar region of the lower first turn of the cochlea, identified count units with a light microscope, labeled them on corresponding digital sections, and used 3-dimensional reconstruction software to identify double-counted count units. For 25 consecutive sections, we determined that double-count correction factors for the nucleus count unit (0.91) and the nucleolus count unit (0.92) matched the published factors. We discovered that nuclei, and therefore spiral ganglion cells, were undercounted by 6.3% when using nucleolus count units. We determined that correction factors for count units must include an element for undercounting spiral ganglion cells as well as the double-count element. We recommend a correction factor of 0.91 for the nucleus count unit and 0.98 for the nucleolus count unit when using 20-µm sections. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
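
    For comparison, the classical Abercrombie-type double-count factor has the form T/(T + d); the paper derives its factors empirically from 3-dimensional reconstruction rather than from this formula, so the sketch below is only a plausibility cross-check.

      def abercrombie_factor(section_thickness_um, count_unit_diameter_um):
          """Fraction of counted profiles that correspond to unique cells."""
          return section_thickness_um / (section_thickness_um + count_unit_diameter_um)

      # For 20-um sections, a roughly 2-um count unit gives a factor close to the
      # published 0.91 for the nucleus count unit.
      print(abercrombie_factor(20.0, 2.0))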

  13. Numerical algorithms for scatter-to-attenuation reconstruction in PET: empirical comparison of convergence, acceleration, and the effect of subsets.

    PubMed

    Berker, Yannick; Karp, Joel S; Schulz, Volkmar

    2017-09-01

    The use of scattered coincidences for attenuation correction of positron emission tomography (PET) data has recently been proposed. For practical applications, convergence speeds require further improvement, yet there exists a trade-off between convergence speed and the risk of non-convergence. In this respect, a maximum-likelihood gradient-ascent (MLGA) algorithm and a two-branch back-projection (2BP), which was previously proposed, were evaluated. MLGA was combined with the Armijo step size rule; and accelerated using conjugate gradients, Nesterov's momentum method, and data subsets of different sizes. In 2BP, we varied the subset size, an important determinant of convergence speed and computational burden. We used three sets of simulation data to evaluate the impact of a spatial scale factor. The Armijo step size allowed 10-fold increased step sizes compared to native MLGA. Conjugate gradients and Nesterov momentum lead to slightly faster, yet non-uniform convergence; improvements were mostly confined to later iterations, possibly due to the non-linearity of the problem. MLGA with data subsets achieved faster, uniform, and predictable convergence, with a speed-up factor equivalent to the number of subsets and no increase in computational burden. By contrast, 2BP computational burden increased linearly with the number of subsets due to repeated evaluation of the objective function, and convergence was limited to the case of many (and therefore small) subsets, which resulted in high computational burden. Possibilities of improving 2BP appear limited. While general-purpose acceleration methods appear insufficient for MLGA, results suggest that data subsets are a promising way of improving MLGA performance.
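
    The Armijo step-size rule named above can be sketched generically as backtracking until a sufficient-increase condition holds; the objective and gradient below are toy placeholders, not the PET scatter-to-attenuation likelihood.

      import numpy as np

      def gradient_ascent_armijo(x0, f, grad, step0=1.0, beta=0.5, c=1e-4, iters=50):
          x = x0.copy()
          for _ in range(iters):
              g = grad(x)
              step = step0
              # shrink the step until the Armijo sufficient-increase condition holds
              while f(x + step * g) < f(x) + c * step * np.dot(g, g):
                  step *= beta
              x = x + step * g
          return x

      # toy concave objective: f(x) = -||x - 1||^2, maximized at x = 1
      f = lambda x: -np.sum((x - 1.0) ** 2)
      grad = lambda x: -2.0 * (x - 1.0)
      print(gradient_ascent_armijo(np.zeros(3), f, grad))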

  14. Parallel/Vector Integration Methods for Dynamical Astronomy

    NASA Astrophysics Data System (ADS)

    Fukushima, T.

    Progress in parallel/vector computers has driven us to develop numerical integrators that utilize their computational power to the full extent while remaining independent of the size of the system to be integrated. Unfortunately, parallel versions of Runge-Kutta-type integrators are known to be not very efficient. Recently we developed a parallel version of the extrapolation method (Ito and Fukushima 1997), which allows variable timesteps and still gives an acceleration factor of 3-4 for general problems, while vector-mode usage of the Picard-Chebyshev method (Fukushima 1997a, 1997b) can lead to an acceleration factor of order 1000 for smooth problems such as planetary/satellite orbit integration. The success of the multiple-correction PECE mode of the time-symmetric implicit Hermitian integrator (Kokubo 1998) draws attention to Milankar's so-called "pipelined predictor corrector method", which is expected to yield an acceleration factor of 3-4. We will review these directions and discuss future prospects.

  15. Heterogeneity of Mosquito (Diptera: Culicidae) Control Community Size, Research Productivity, and Arboviral Diseases Across the United States.

    PubMed

    Hamer, Gabriel L

    2016-05-01

    Multiple factors lead to extensive variation in mosquito and mosquito-borne virus control programs throughout the United States. This variation is related to differences in budgets, number of personnel, operational activities targeting nuisance or vector species, integration of Geographical Information Systems, and the degree of research and development to improve management interventions through collaboration with academic institutions. To highlight this heterogeneity, the current study evaluates associations among the size of a mosquito control community, the research productivity, and the mosquito-borne virus human disease burden among states within the continental United States. I used the attendance at state mosquito and vector control meetings as a proxy for the size of the mosquito control community in each state. To judge research productivity, I used all peer-reviewed publications on mosquitoes and mosquito-borne viruses using data originating in each state over a 5- and 20-yr period. Total neuroinvasive human disease cases caused by mosquito-borne viruses were aggregated for each state. These data were compared directly and after adjusting for differences in human population size for each state. Results revealed that mean meeting attendance was positively correlated with the number of publications in each state, but not after correcting for the size of the population in each state. Additionally, human disease cases were positively correlated with the number of publications in each state. Finally, mean meeting attendance and human disease cases were only marginally positively associated, and no correlation existed after correcting for human population size. These analyses indicated that the mosquito control community size, research productivity, and mosquito-borne viral human disease burden varied greatly among states. The mechanisms resulting in this variation were discussed and the consequences of this variation are important given the constantly changing environment due to invasive mosquito species and arboviruses, urbanization, immigration, global travel, and climate change. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  16. What determines species richness of parasitic organisms? A meta-analysis across animal, plant and fungal hosts.

    PubMed

    Kamiya, Tsukushi; O'Dwyer, Katie; Nakagawa, Shinichi; Poulin, Robert

    2014-02-01

    Although a small set of external factors account for much of the spatial variation in plant and animal diversity, the search continues for general drivers of variation in parasite species richness among host species. Qualitative reviews of existing evidence suggest idiosyncrasies and inconsistent predictive power for all proposed determinants of parasite richness. Here, we provide the first quantitative synthesis of the evidence using a meta-analysis of 62 original studies testing the relationship between parasite richness across animal, plant and fungal hosts, and each of its four most widely used presumed predictors: host body size, host geographical range size, host population density, and latitude. We uncover three universal predictors of parasite richness across host species, namely host body size, geographical range size and population density, applicable regardless of the taxa considered and independently of most aspects of study design. A proper match in the primary studies between the focal predictor and both the spatial scale of study and the level at which parasite species richness was quantified (i.e. within host populations or tallied across a host species' entire range) also affected the magnitude of effect sizes. By contrast, except for a couple of indicative trends in subsets of the full dataset, there was no strong evidence for an effect of latitude on parasite species richness; where found, this effect ran counter to the general latitude gradient in diversity, with parasite species richness tending to be higher further from the equator. Finally, the meta-analysis also revealed a negative relationship between the magnitude of effect sizes and the year of publication of original studies (i.e. a time-lag bias). This temporal bias may be due to the increasing use of phylogenetic correction in comparative analyses of parasite richness over time, as this correction yields more conservative effect sizes. Overall, these findings point to common underlying processes of parasite diversification fundamentally different from those controlling the diversity of free-living organisms. © 2013 The Authors. Biological Reviews © 2013 Cambridge Philosophical Society.

  17. Stabilization and Reconstruction Operations Doctrine and Theory

    DTIC Science & Technology

    2014-05-22

    best definition of security for our purposes, which is the reduction of civil violence to a level manageable by HN law enforcement authorities or...manner. The literature suggests determining the correct size and scope of the S&R force. Two concepts outlined in assessing correct size and scope...tolerant of political dissent. Should this be the case and the insurgency manages to create a party with a united front, then the insurgency is

  18. Calibration correction of an active scattering spectrometer probe to account for refractive index of stratospheric aerosols

    NASA Technical Reports Server (NTRS)

    Pueschel, R. F.; Overbeck, V. R.; Snetsinger, K. G.; Russell, P. B.; Ferry, G. V.

    1990-01-01

    The use of the active scattering spectrometer probe (ASAS-X) to measure sulfuric acid aerosols on U-2 and ER-2 research aircraft has yielded results that are at times ambiguous due to the dependence of particles' optical signatures on refractive index as well as physical dimensions. The calibration correction of the ASAS-X optical spectrometer probe for stratospheric aerosol studies is validated through an independent and simultaneous sampling of the particles with impactors; sizing and counting of particles on SEM images yields total particle areas and volumes. Upon correction of calibration in light of these data, spectrometer results averaged over four size distributions are found to agree with similarly averaged impactor results to within a few percent, indicating that the optical properties or chemical composition of the sample aerosol must be known in order to achieve accurate optical aerosol spectrometer size analysis.

  19. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    PubMed

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
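
    A sketch in the spirit of Baseline Corrected Tau is given below, assuming Theil-Sen regression on the baseline phase followed by Kendall's tau between phase membership and the detrended series; the published procedure additionally pretests for baseline trend, which is omitted here.

      import numpy as np
      from scipy.stats import kendalltau, theilslopes

      def baseline_corrected_tau(y_baseline, y_treatment):
          t_base = np.arange(len(y_baseline))
          slope, intercept, _, _ = theilslopes(y_baseline, t_base)   # robust baseline trend
          t_all = np.arange(len(y_baseline) + len(y_treatment))
          y_all = np.concatenate([y_baseline, y_treatment])
          detrended = y_all - (intercept + slope * t_all)            # remove baseline trend
          phase = np.r_[np.zeros(len(y_baseline)), np.ones(len(y_treatment))]
          return kendalltau(phase, detrended)                        # (tau, p-value)

      print(baseline_corrected_tau([3, 4, 4, 5, 5], [7, 8, 8, 9, 10, 9]))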

  20. Increasing conclusiveness of clinical breath analysis by improved baseline correction of multi capillary column - ion mobility spectrometry (MCC-IMS) data.

    PubMed

    Szymańska, Ewa; Tinnevelt, Gerjen H; Brodrick, Emma; Williams, Mark; Davies, Antony N; van Manen, Henk-Jan; Buydens, Lutgarde M C

    2016-08-05

    Current challenges of clinical breath analysis include large data size and non-clinically relevant variations observed in exhaled breath measurements, which should be urgently addressed with competent scientific data tools. In this study, three different baseline correction methods are evaluated within a previously developed data size reduction strategy for multi capillary column - ion mobility spectrometry (MCC-IMS) datasets. Introduced for the first time in breath data analysis, the Top-hat method is presented as the optimum baseline correction method. A refined data size reduction strategy is employed in the analysis of a large breathomic dataset on a healthy and respiratory disease population. New insights into MCC-IMS spectra differences associated with respiratory diseases are provided, demonstrating the additional value of the refined data analysis strategy in clinical breath analysis. Copyright © 2016 Elsevier B.V. All rights reserved.
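
    Morphological Top-hat filtering, identified above as the best-performing baseline correction, can be applied to a 2-D spectrum with a standard image-processing routine; the array and the structuring-element size below are placeholders, not the study's settings.

      import numpy as np
      from scipy.ndimage import white_tophat

      # Stand-in for a retention-time x drift-time MCC-IMS spectrum sitting on a baseline
      spectrum = np.random.rand(200, 300) + 5.0
      # White top-hat = data minus its morphological opening: the slowly varying
      # baseline is suppressed while narrow peaks are retained.
      corrected = white_tophat(spectrum, size=(15, 25))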

  1. Influence of spatial and temporal spot distribution on the ocular surface quality and maximum ablation depth after photoablation with a 1050 Hz excimer laser system.

    PubMed

    Mrochen, Michael; Schelling, Urs; Wuellner, Christian; Donitzky, Christof

    2009-02-01

    To investigate the effect of temporal and spatial distributions of laser spots (scan sequences) on the corneal surface quality after ablation and the maximum ablation of a given refractive correction after photoablation with a high-repetition-rate scanning-spot laser. IROC AG, Zurich, Switzerland, and WaveLight AG, Erlangen, Germany. Bovine corneas and poly(methyl methacrylate) (PMMA) plates were photoablated using a 1050 Hz excimer laser prototype for corneal laser surgery. Four temporal and spatial spot distributions (scan sequences) with different temporal overlapping factors were created for 3 myopic, 3 hyperopic, and 3 phototherapeutic keratectomy ablation profiles. Surface quality and maximum ablation depth were measured using a surface profiling system. The surface quality factor increased (rough surfaces) as the amount of temporal overlapping in the scan sequence and the amount of correction increased. The rise in surface quality factor was less for bovine corneas than for PMMA. The scan sequence might cause systematic substructures at the surface of the ablated material depending on the overlapping factor. The maximum ablation varied within the scan sequence. The temporal and spatial distribution of the laser spots (scan sequence) during a corneal laser procedure affected the surface quality and maximum ablation depth of the ablation profile. Corneal laser surgery could theoretically benefit from smaller spot sizes and higher repetition rates. The temporal and spatial spot distributions are relevant to achieving these aims.

  2. Worldwide orthopaedic research activity 2010-2014: Publication rates in the top 15 orthopaedic journals related to population size and gross domestic product

    PubMed Central

    Hohmann, Erik; Glatt, Vaida; Tetsworth, Kevin

    2017-01-01

    AIM To perform a bibliometric analysis of publication rates in orthopaedics in the top 15 orthopaedic journals. METHODS Based on their 2015 impact factor, the fifteen highest ranked orthopaedic journals between January 2010 and December 2014 were used to establish the total number of publications; cumulative impact factor points (IF) per country were determined and normalized to population size, GDP, and GDP/capita, with comparison to the median country output and the global leader. RESULTS Twenty-three thousand and twenty-one orthopaedic articles were published, with 66 countries publishing. The United States had 8149 publications, followed by the United Kingdom (1644) and Japan (1467). The highest IF was achieved by the United States (24744), the United Kingdom (4776), and Japan (4053). Normalized by population size, Switzerland led. Normalized by GDP, Croatia was the top achiever. Adjusting for GDP/capita, China, India, and the United States were the leaders for both publications and IF. Adjusting for population size and GDP, 28 countries achieved numbers of publications to be considered at least equivalent to the median academic output. Adjusting for GDP/capita, only China and India reached the number of publications to be considered equivalent to the current global leader, the United States. CONCLUSION Five countries were responsible for 60% of the orthopaedic research output over this 5-year period. After correcting for GDP/capita, only 28 of 66 countries achieved a publication rate equivalent to the median country. The United States, United Kingdom, South Korea, Japan, and Germany were the top five countries for both publication totals and cumulative impact factor points. PMID:28660144

  3. Accuracy of Spencer-Attix cavity theory and calculations of fluence correction factors for the air kerma formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Russa, D. J.; Rogers, D. W. O.

    EGSnrc calculations of ion chamber response and Spencer-Attix (SA) restricted stopping-power ratios are used to test the assumptions of the SA cavity theory and to assess the accuracy of this theory as it applies to the air kerma formalism for 60Co beams. Consistent with previous reports, the EGSnrc calculations show that the SA cavity theory, as it is normally applied, requires a correction for the perturbation of the charged particle fluence (K_fl) by the presence of the cavity. The need for K_fl corrections arises from the fact that the standard prescription for choosing the low-energy threshold Δ in the SA restricted stopping-power ratio consistently underestimates the values of Δ needed if no perturbation to the fluence is assumed. The use of fluence corrections can be avoided by appropriately choosing Δ, but it is not clear how Δ can be calculated from first principles. Values of Δ required to avoid K_fl corrections were found to be consistently higher than Δ values obtained using the conventional approach and are also observed to depend on the composition of the wall in addition to the cavity size. Values of K_fl have been calculated for many of the graphite-walled ion chambers used by the national metrology institutes around the world and found to be within 0.04% of unity in all cases, with an uncertainty of about 0.02%.
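
    For orientation, the cavity relation that the fluence correction enters can be written schematically as below; this is a generic textbook-style form with assumed notation, not an equation quoted from the paper, and it omits the further wall, stem, and central-electrode corrections of the full air kerma formalism.

      D_{\mathrm{wall}} \;=\; D_{\mathrm{gas}}\,
        \left(\frac{\overline{L}_{\Delta}}{\rho}\right)_{\mathrm{gas}}^{\mathrm{wall}} K_{\mathrm{fl}},
      \qquad
      D_{\mathrm{gas}} \;=\; \frac{Q}{m_{\mathrm{gas}}}\left(\frac{\overline{W}}{e}\right)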

  4. SU-E-T-17: A Mathematical Model for PinPoint Chamber Correction in Measuring Small Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, T; Zhang, Y; Li, X

    2014-06-01

    Purpose: For small field dosimetry, such as measuring the cone output factor for stereotactic radiosurgery, ion chambers often underestimate the dose, due to both the volume averaging effect and the lack of electron equilibrium. The purpose of this work is to develop a mathematical model, specifically for the PinPoint chamber, to calculate the correction factors corresponding to different types of small fields, including single cone-based circular fields and non-standard composite fields. Methods: A PTW 0.015 cc PinPoint chamber was used in the study. Its response in a given field was modeled as the total contribution of many small beamlets, each with a different response factor depending on the relative strength, radial distance to the chamber axis, and beam angle. To get these factors, 12 cone-shaped circular fields (5 mm, 7.5 mm, 10 mm, 12.5 mm, 15 mm, 20 mm, 25 mm, 30 mm, 35 mm, 40 mm, 50 mm, 60 mm) were irradiated and measured with the PinPoint chamber. For each field size, hundreds of readings were recorded for every 2 mm chamber shift in the horizontal plane. These readings were then compared with the theoretical doses obtained with Monte Carlo calculation. A penalized least-squares optimization algorithm was developed to find the beamlet response factors. After the parameter fitting, the established mathematical model was validated with the same MC code for other non-circular fields. Results: The optimization algorithm used for parameter fitting was stable and the resulting response factors were smooth in the spatial domain. After correction with the mathematical model, the chamber readings matched the Monte Carlo calculation for all tested fields to within 2%. Conclusion: A novel mathematical model has been developed for the PinPoint chamber for dosimetric measurement of small fields. The current model is applicable only when the beam axis is perpendicular to the chamber axis. It can be applied to non-standard composite fields. Further validation with other types of detectors is being conducted.
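
    The beamlet-response model and penalized least-squares fit described above can be sketched as a linear inverse problem; the design matrix, smoothness penalty, and synthetic data below are placeholders rather than the authors' parameterization.

      import numpy as np

      def fit_response_factors(A, readings, lam=1e-2):
          """Solve min ||A w - readings||^2 + lam ||D2 w||^2 for beamlet response
          factors w, where D2 is a second-difference (smoothness) operator."""
          n = A.shape[1]
          D2 = np.diff(np.eye(n), n=2, axis=0)
          lhs = A.T @ A + lam * D2.T @ D2
          rhs = A.T @ readings
          return np.linalg.solve(lhs, rhs)

      rng = np.random.default_rng(0)
      A = rng.random((200, 30))                 # 200 chamber readings, 30 beamlet bins
      w_true = np.exp(-np.linspace(0, 3, 30))   # smoothly decaying response factors
      readings = A @ w_true + 0.01 * rng.normal(size=200)
      print(fit_response_factors(A, readings)[:5])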

  5. Neutron Capture and the Antineutrino Yield from Nuclear Reactors.

    PubMed

    Huber, Patrick; Jaffke, Patrick

    2016-03-25

    We identify a new, flux-dependent correction to the antineutrino spectrum as produced in nuclear reactors. The abundance of certain nuclides, whose decay chains produce antineutrinos above the threshold for inverse beta decay, has a nonlinear dependence on the neutron flux, unlike the vast majority of antineutrino producing nuclides, whose decay rate is directly related to the fission rate. We have identified four of these so-called nonlinear nuclides and determined that they result in an antineutrino excess at low energies below 3.2 MeV, dependent on the reactor thermal neutron flux. We develop an analytic model for the size of the correction and compare it to the results of detailed reactor simulations for various real existing reactors, spanning 3 orders of magnitude in neutron flux. In a typical pressurized water reactor the resulting correction can reach ∼0.9% of the low energy flux which is comparable in size to other, known low-energy corrections from spent nuclear fuel and the nonequilibrium correction. For naval reactors the nonlinear correction may reach the 5% level by the end of cycle.

  6. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    PubMed

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that the permutation test with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found that reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.

  7. Generalized Effective Medium Theory for Particulate Nanocomposite Materials

    PubMed Central

    Siddiqui, Muhammad Usama; Arif, Abul Fazal M.

    2016-01-01

    The thermal conductivity of particulate nanocomposites is strongly dependent on the size, shape, orientation and dispersion uniformity of the inclusions. To correctly estimate the effective thermal conductivity of the nanocomposite, all these factors should be included in the prediction model. In this paper, the formulation of a generalized effective medium theory for the determination of the effective thermal conductivity of particulate nanocomposites with multiple inclusions is presented. The formulated methodology takes into account all the factors mentioned above and can be used to model nanocomposites with multiple inclusions that are randomly oriented or aligned in a particular direction. The effect of inclusion dispersion non-uniformity is modeled using a two-scale approach. The applications of the formulated effective medium theory are demonstrated using previously published experimental and numerical results for several particulate nanocomposites. PMID:28773817
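
    For a concrete point of reference, the classical Maxwell-Garnett estimate for dilute spherical inclusions is sketched below; the generalized theory in the paper additionally accounts for inclusion size, shape, orientation, interface effects, and dispersion non-uniformity, none of which appear in this simple form.

      def maxwell_garnett_k(k_matrix, k_particle, vol_fraction):
          """Effective thermal conductivity for spherical inclusions (Maxwell-Garnett)."""
          num = k_particle + 2.0 * k_matrix + 2.0 * vol_fraction * (k_particle - k_matrix)
          den = k_particle + 2.0 * k_matrix - vol_fraction * (k_particle - k_matrix)
          return k_matrix * num / den

      # e.g. alumina-like particles (30 W/m-K) in a polymer matrix (0.2 W/m-K) at 10 vol%
      print(maxwell_garnett_k(k_matrix=0.2, k_particle=30.0, vol_fraction=0.1))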

  8. Quantum Simulation of Tunneling in Small Systems

    PubMed Central

    Sornborger, Andrew T.

    2012-01-01

    A number of quantum algorithms have been performed on small quantum computers; these include Shor's prime factorization algorithm, error correction, Grover's search algorithm and a number of analog and digital quantum simulations. Because of the number of gates and qubits necessary, however, digital quantum particle simulations remain untested. A contributing factor to the system size required is the number of ancillary qubits needed to implement matrix exponentials of the potential operator. Here, we show that a set of tunneling problems may be investigated with no ancillary qubits and a cost of one single-qubit operator per time step for the potential evolution, eliminating at least half of the quantum gates required for the algorithm and more than that in the general case. Such simulations are within reach of current quantum computer architectures. PMID:22916333

  9. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    PubMed

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  10. The influence of the atmosphere on geoid and potential coefficient determinations from gravity data

    NASA Technical Reports Server (NTRS)

    Rummel, R.; Rapp, R. H.

    1976-01-01

    For the precise computation of geoid undulations the effect of the attraction of the atmosphere on the solution of the basic boundary value problem of gravimetric geodesy must be considered. This paper extends the theory of Moritz for deriving an atmospheric correction to the case when the undulations are computed by combining anomalies in a cap surrounding the computation point with information derived from potential coefficients. The correction term is a function of the cap size and the topography within the cap. It reaches a value of 3.0 m for a cap size of 30 deg, variations on the decimeter level being caused by variations in the topography. The effect of the atmospheric correction terms on potential coefficients is found to be small, reaching a maximum of 0.0055 millionths at n = 2, m = 2 when terrestrial gravity data are considered. The magnitude of this correction indicates that in future potential coefficient determination from gravity data the atmospheric correction should be made to such data.

  11. SU-F-T-492: The Impact of Water Temperature On Absolute Dose Calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Islam, N; Podgorsak, M; Roswell Park Cancer Institute, Buffalo, NY

    Purpose: The Task Group 51 (TG 51) protocol prescribes that dose calibration of photon beams be done by irradiating an ionization chamber in a water tank at pre-defined depths. Methodologies are provided to account for variations in measurement conditions by applying correction factors. However, the protocol does not completely account for the impact of water temperature. It is well established that water temperature will influence the density of air in the ion chamber collecting volume. Water temperature, however, will also influence the size of the collecting volume via thermal expansion of the cavity wall and the density of the water in the tank. In this work the overall effect of water temperature on absolute dosimetry has been investigated. Methods: Dose measurements were made using a Farmer-type ion chamber for 6 and 23 MV photon beams with water temperatures ranging from 10 to 40°C. A reference ion chamber was used to account for fluctuations in beam output between successive measurements. Results: For the same beam output, the dose determined using TG 51 was dependent on the temperature of the water in the tank. A linear regression of the data suggests that the dependence is statistically significant with p-values of the slope equal to 0.003 and 0.01 for 6 and 23 MV beams, respectively. For a 10 degree increase in water phantom temperature, the absolute dose determined with TG 51 increased by 0.27% and 0.31% for 6 and 23 MV beams, respectively. Conclusion: There is a measurable effect of water temperature on absolute dose calibration. To account for this effect, a reference temperature can be defined and a correction factor applied to account for deviations from this reference temperature during beam calibration. Such a factor is expected to be of similar magnitude to most of the existing TG 51 correction factors.
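
    For context, the air-density correction that TG-51 does prescribe is the temperature-pressure factor below; the point of the abstract is that this factor does not capture the additional water-temperature effect of roughly 0.3% per 10 °C.

      def p_tp(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.33):
          """Standard TG-51 temperature-pressure correction for chamber air density."""
          return ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

      print(p_tp(temp_c=30.0, pressure_kpa=99.0))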

  12. Correction factors to convert microdosimetry measurements in silicon to tissue in 12C ion therapy

    NASA Astrophysics Data System (ADS)

    Bolst, David; Guatelli, Susanna; Tran, Linh T.; Chartier, Lachlan; Lerch, Michael L. F.; Matsufuji, Naruhiro; Rosenfeld, Anatoly B.

    2017-03-01

    Silicon microdosimetry is a promising technology for heavy ion therapy (HIT) quality assurance, because of its sub-mm spatial resolution and capability to determine radiation effects at a cellular level in a mixed radiation field. A drawback of silicon is not being tissue-equivalent, thus the need to convert the detector response obtained in silicon to tissue. This paper presents a method for converting silicon microdosimetric spectra to tissue for a therapeutic 12C beam, based on Monte Carlo simulations. The energy deposition spectra in a 10 μm-sized silicon cylindrical sensitive volume (SV) were found to be equivalent to those measured in a tissue SV, with the same shape, but with dimensions scaled by a factor κ equal to 0.57 and 0.54 for muscle and water, respectively. A low energy correction factor was determined to account for the enhanced response in silicon at low energy depositions, produced by electrons. The concept of the mean path length ⟨l_Path⟩ to calculate the lineal energy was introduced as an alternative to the mean chord length ⟨l⟩, because it was found that adopting Cauchy's formula for ⟨l⟩ was not appropriate for the radiation field typical of HIT, which is very directional. ⟨l_Path⟩ can be determined from the peak of the lineal energy distribution produced by the incident carbon beam. Furthermore, it was demonstrated that the thickness of the SV along the direction of the incident 12C ion beam can be adopted as ⟨l_Path⟩. The tissue equivalence conversion method and ⟨l_Path⟩ were adopted to determine the RBE10, calculated using a modified microdosimetric kinetic model, applied to the microdosimetric spectra resulting from the simulation study. Comparison of the RBE10 along the Bragg peak to experimental TEPC measurements at HIMAC, NIRS, showed good agreement. Such agreement demonstrates the validity of the developed tissue equivalence correction factors and of the determination of ⟨l_Path⟩.
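
    For orientation, the lineal energy used throughout this record is the energy imparted per event divided by a mean path length, here taken as the SV thickness along the beam, as the abstract proposes for ⟨l_Path⟩. The sketch below computes frequency-mean and dose-mean lineal energies from a list of event energies; the way the tissue-conversion factor κ is applied to the spectrum, and all names and numbers, are assumptions for illustration rather than a reproduction of the paper's conversion.

    ```python
    import numpy as np

    def mean_lineal_energies(event_energies_keV, l_path_um, kappa=1.0):
        """Frequency-mean (y_F) and dose-mean (y_D) lineal energies in keV/um.

        event_energies_keV: energy imparted per single event in the SV.
        l_path_um: mean path length, taken here as the SV thickness along the beam.
        kappa: placeholder tissue-conversion scaling; how the paper applies its
               factor (0.57 muscle, 0.54 water) is not reproduced, so this usage
               is only an assumption.
        """
        eps = np.asarray(event_energies_keV, dtype=float)
        y = kappa * eps / l_path_um           # lineal energy of each event
        y_f = y.mean()                        # frequency-mean lineal energy
        y_d = (y**2).sum() / y.sum()          # dose-mean lineal energy
        return y_f, y_d

    # Toy example: three events in a 10 um thick SV.
    print(mean_lineal_energies([50.0, 120.0, 300.0], l_path_um=10.0))
    ```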

  13. Upscaling gas permeability in tight-gas sandstones

    NASA Astrophysics Data System (ADS)

    Ghanbarian, B.; Torres-Verdin, C.; Lake, L. W.; Marder, M. P.

    2017-12-01

    Klinkenberg-corrected gas permeability (k) estimation in tight-gas sandstones is essential for gas exploration and production in low-permeability porous rocks. Most models for estimating k are a function of porosity (ϕ), tortuosity (τ), pore shape factor (s) and a characteristic length scale (lc). Estimation of the latter, however, has been the subject of debate in the literature. Here we invoke two different upscaling approaches from statistical physics: (1) the effective-medium approximation (EMA) and (2) critical path analysis (CPA) to estimate lc from the pore throat-size distribution derived from the mercury intrusion capillary pressure (MICP) curve. τ is approximated from: (1) concepts of percolation theory and (2) formation resistivity factor measurements (F = τ/ϕ). We then estimate k of eighteen tight-gas sandstones from lc, τ, and ϕ by assuming two different pore shapes: cylindrical and slit-shaped. Comparison with Klinkenberg-corrected k measurements showed that τ was estimated more accurately from F measurements than from percolation theory. Generally speaking, our results implied that the EMA estimated k within a factor of two of the measurements and more precisely than CPA. We further found that the assumption of cylindrical pores yielded more accurate k estimates when τ was estimated from concepts of percolation theory than the assumption of slit-shaped pores. However, the EMA with slit-shaped pores estimated k more precisely than that with cylindrical pores when τ was estimated from F measurements.
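
    The abstract notes that most permeability models combine porosity, tortuosity, a pore shape factor, and a characteristic length. A hedged sketch of one generic combination of these quantities is shown below; the specific form k = ϕ·lc²/(s·τ) and the numbers are illustrative assumptions, not the EMA or CPA estimates used in the study.

    ```python
    def permeability_estimate(phi, tau, shape_factor, l_c_m):
        """Generic k = phi * l_c**2 / (shape_factor * tau), returned in m^2.

        This particular combination is an assumption for illustration; the paper's
        EMA/CPA estimation of l_c from MICP data is not reproduced here.
        """
        return phi * l_c_m**2 / (shape_factor * tau)

    # Example with made-up, tight-sandstone-like values (phi=0.06, tau=10, s=8, l_c=0.5 um).
    k = permeability_estimate(0.06, 10.0, 8.0, 0.5e-6)
    print(f"{k:.3e} m^2 (~{k / 9.869e-16:.3f} mD)")
    ```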

  14. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is shown to be effective and suitable for real-time implementation.
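
    The core of the correction described above is to approximate the inter-frame homography by an affine model, read off its overall scale via singular value decomposition, and reset that scale to 1 so frames pasted into the mosaic keep their size. The NumPy sketch below illustrates the idea; taking the geometric mean of the two singular values as "the scale" is an assumption about the procedure, not a reproduction of it.

    ```python
    import numpy as np

    def remove_affine_scale(H):
        """Return a copy of a 3x3 homography whose 2x2 linear part has unit scale.

        The scale is taken as the geometric mean of the singular values of the
        2x2 linear block; this particular choice is an assumption for illustration.
        """
        H = np.asarray(H, dtype=float) / H[2, 2]   # normalize so H[2, 2] = 1
        A = H[:2, :2]                              # affine (linear) part
        s = np.linalg.svd(A, compute_uv=False)     # singular values
        scale = np.sqrt(s[0] * s[1])               # overall scale factor, usually close to 1
        H_fixed = H.copy()
        H_fixed[:2, :2] = A / scale                # reset the scale to 1
        return H_fixed, scale

    H = np.array([[0.97, 0.02, 5.0],
                  [-0.01, 0.96, -3.0],
                  [1e-5, 2e-5, 1.0]])
    H_fixed, scale = remove_affine_scale(H)
    print(scale)   # slightly below 1: successive frames were being shrunk
    ```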

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nagesh, S Setlur; Rana, R; Russ, M

    Purpose: CMOS-based aSe detectors compared to CsI-TFT-based flat panels have the advantages of higher spatial sampling due to smaller pixel size and decreased blurring characteristic of direct rather than indirect detection. For systems with such detectors, the limiting factor degrading image resolution then becomes the focal-spot geometric unsharpness. This effect can seriously limit the use of such detectors in areas such as cone beam computed tomography, clinical fluoroscopy and angiography. In this work a technique to remove the effect of focal-spot blur is presented for a simulated aSe detector. Method: To simulate images from an aSe detector affected with focal-spot blur, first a set of high-resolution images of a stent (FRED from Microvention, Inc.) were acquired using a 75µm pixel size Dexela-Perkin-Elmer detector and averaged to reduce quantum noise. Then the averaged image was blurred with a known Gaussian blur at two different magnifications to simulate an idealized focal spot. The blurred images were then deconvolved with a set of different Gaussian blurs to remove the effect of focal-spot blurring using a threshold-based, inverse-filtering method. Results: The blur was removed by deconvolving the images using a set of Gaussian functions for both magnifications. Selecting the correct function resulted in an image close to the original; however, selection of too wide a function would cause severe artifacts. Conclusion: Experimentally, focal-spot blur at different magnifications can be measured using a pin hole with a high resolution detector. This spread function can be used to deblur the input images that are acquired at corresponding magnifications to correct for the focal spot blur. For CBCT applications, the magnification of specific objects can be obtained using initial reconstructions then corrected for focal-spot blurring to improve resolution. Similarly, if object magnification can be determined such correction may be applied in fluoroscopy and angiography.
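
    The deblurring step described above, dividing the image spectrum by a Gaussian blur spectrum wherever that spectrum is above a threshold, can be sketched with NumPy FFTs as below. The threshold value, the decision to leave low-magnitude frequencies untouched, and all names are assumptions for illustration; the authors' exact inverse-filtering implementation is not reproduced.

    ```python
    import numpy as np

    def gaussian_psf(shape, sigma_px):
        """Centered, normalized 2-D Gaussian point spread function."""
        ny, nx = shape
        yy, xx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2, indexing="ij")
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_px**2))
        return psf / psf.sum()

    def threshold_inverse_filter(blurred, psf, threshold=0.05):
        """Divide by the OTF only where its magnitude exceeds the threshold."""
        otf = np.fft.fft2(np.fft.ifftshift(psf))                  # centered PSF -> OTF
        g = np.fft.fft2(blurred)
        safe_otf = np.where(np.abs(otf) > threshold, otf, 1.0)    # avoid dividing by ~0
        f_hat = np.where(np.abs(otf) > threshold, g / safe_otf, g)
        return np.real(np.fft.ifft2(f_hat))

    # Round trip on a toy image: blur with a known Gaussian PSF, then deblur.
    img = np.zeros((64, 64)); img[30:34, 20:44] = 1.0             # a crude "stent strut"
    psf = gaussian_psf(img.shape, sigma_px=2.0)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))
    restored = threshold_inverse_filter(blurred, psf)
    ```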

  16. SU-F-T-23: Correspondence Factor Correction Coefficient for Commissioning of Leipzig and Valencia Applicators with the Standard Imaging IVB 1000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donaghue, J; Gajdos, S

    Purpose: To determine the correction factor of the correspondence factor for the Standard Imaging IVB 1000 well chamber for commissioning of Elekta’s Leipzig and Valencia skin applicators. Methods: The Leipzig and Valencia applicators are designed to treat small skin lesions by collimating irradiation to the treatment area. Published output factors are used to calculate dose rates for clinical treatments. To validate onsite applicators, a correspondence factor (CFrev) is measured and compared to published values. The published CFrev is based on well chamber model SI HDR 1000 Plus. The CFrev is determined by correlating raw values of the source calibration setup (Rcal,raw) and values taken when each applicator is mounted on the same well chamber with an adapter (Rapp,raw). The CFrev is calculated by using the equation CFrev = Rapp,raw/Rcal,raw. The CFrev was measured for each applicator in both the SI HDR 1000 Plus and the SI IVB 1000. A correction factor, CFIVB for the SI IVB 1000 was determined by finding the ratio of CFrev (SI IVB 1000) and CFrev (SI HDR 1000 Plus). Results: The average correction factors at dwell position 1121 were found to be 1.073, 1.039, 1.209, 1.091, and 1.058 for the Valencia V2, Valencia V3, Leipzig H1, Leipzig H2, and Leipzig H3 respectively. There were no significant variations in the correction factor for dwell positions 1119 through 1121. Conclusion: By using the appropriate correction factor, the correspondence factors for the Leipzig and Valencia surface applicators can be validated with the Standard Imaging IVB 1000. This allows users to correlate their measurements with the Standard Imaging IVB 1000 to the published data. The correction factor is included in the equation for the CFrev as follows: CFrev = Rapp,raw/(CFIVB*Rcal,raw). Each individual applicator has its own correction factor, so care must be taken that the appropriate factor is used.
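
    Put as arithmetic, the workflow above reduces to a ratio of raw readings with the chamber-specific factor in the denominator. A minimal sketch, with made-up readings and the reported Leipzig H1 factor used only as an example input:

    ```python
    def correspondence_factor(r_app_raw, r_cal_raw, cf_ivb=1.0):
        """CFrev = Rapp,raw / (CF_IVB * Rcal,raw).

        With cf_ivb = 1.0 this is the uncorrected definition (SI HDR 1000 Plus);
        passing an applicator-specific CF_IVB converts an SI IVB 1000 measurement.
        The readings below are made up for illustration.
        """
        return r_app_raw / (cf_ivb * r_cal_raw)

    cf_rev = correspondence_factor(r_app_raw=2.10e-9, r_cal_raw=1.45e-8, cf_ivb=1.209)
    print(round(cf_rev, 4))
    ```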

  17. [Evaluation of cross-calibration of (123)I-MIBG H/M ratio, with the IDW scatter correction method, on different gamma camera systems].

    PubMed

    Kittaka, Daisuke; Takase, Tadashi; Akiyama, Masayuki; Nakazawa, Yasuo; Shinozuka, Akira; Shirai, Muneaki

    2011-01-01

    (123)I-MIBG Heart-to-Mediastinum activity ratio (H/M) is commonly used as an indicator of relative myocardial (123)I-MIBG uptake. H/M ratios reflect myocardial sympathetic nerve function and are therefore a useful parameter for assessing regional myocardial sympathetic denervation in various cardiac diseases. However, H/M ratio values differ by site, gamma camera system, position and size of the region of interest (ROI), and collimator. In addition to these factors, the 529 keV scatter component may also affect the (123)I-MIBG H/M ratio. In this study, we examined whether the H/M ratio correlates between two different gamma camera systems and sought an H/M ratio conversion formula. Moreover, we assessed the feasibility of the (123)I Dual Window (IDW) method, a scatter correction method, and compared H/M ratios with and without the IDW method. The H/M ratio displayed a good correlation between the two gamma camera systems. Additionally, we were able to create a new H/M calculation formula. These results indicate that the IDW method is a useful scatter correction method for calculating (123)I-MIBG H/M ratios.

  18. Applications of multivariate modeling to neuroimaging group analysis: a comprehensive alternative to univariate general linear model.

    PubMed

    Chen, Gang; Adleman, Nancy E; Saad, Ziad S; Leibenluft, Ellen; Cox, Robert W

    2014-10-01

    All neuroimaging packages can handle group analysis with t-tests or general linear modeling (GLM). However, they are quite hamstrung when there are multiple within-subject factors or when quantitative covariates are involved in the presence of a within-subject factor. In addition, sphericity is typically assumed for the variance-covariance structure when there are more than two levels in a within-subject factor. To overcome such limitations in the traditional AN(C)OVA and GLM, we adopt a multivariate modeling (MVM) approach to analyzing neuroimaging data at the group level with the following advantages: a) there is no limit on the number of factors as long as sample sizes are deemed appropriate; b) quantitative covariates can be analyzed together with within-subject factors; c) when a within-subject factor is involved, three testing methodologies are provided: traditional univariate testing (UVT) with sphericity assumption (UVT-UC) and with correction when the assumption is violated (UVT-SC), and within-subject multivariate testing (MVT-WS); d) to correct for sphericity violation at the voxel level, we propose a hybrid testing (HT) approach that achieves equal or higher power via combining traditional sphericity correction methods (Greenhouse-Geisser and Huynh-Feldt) with MVT-WS. To validate the MVM methodology, we performed simulations to assess the controllability for false positives and power achievement. A real FMRI dataset was analyzed to demonstrate the capability of the MVM approach. The methodology has been implemented into an open source program 3dMVM in AFNI, and all the statistical tests can be performed through symbolic coding with variable names instead of the tedious process of dummy coding. Our data indicates that the severity of sphericity violation varies substantially across brain regions. The differences among various modeling methodologies were addressed through direct comparisons between the MVM approach and some of the GLM implementations in the field, and the following two issues were raised: a) the improper formulation of test statistics in some univariate GLM implementations when a within-subject factor is involved in a data structure with two or more factors, and b) the unjustified presumption of uniform sphericity violation and the practice of estimating the variance-covariance structure through pooling across brain regions. Published by Elsevier Inc.

  19. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    ERIC Educational Resources Information Center

    Nicewander, W. Alan

    2018-01-01

    Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either-or-both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient which translates into "increasing the reliability of…
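
    For reference, the classical Spearman correction that the abstract starts from divides the observed correlation by the square root of the product of the two reliabilities; the partial-correction modification itself is not reproduced here. A minimal sketch:

    ```python
    from math import sqrt

    def disattenuated_correlation(r_xy, rel_x, rel_y):
        """Classical Spearman correction: r_true = r_xy / sqrt(rel_x * rel_y)."""
        return r_xy / sqrt(rel_x * rel_y)

    # Example: an observed r of 0.42 with reliabilities 0.80 and 0.70.
    print(round(disattenuated_correlation(0.42, 0.80, 0.70), 3))  # ~0.561
    ```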

  20. The effect of low-energy electrons on the response of ion chambers to ionizing photon beams

    NASA Astrophysics Data System (ADS)

    La Russa, Daniel J.

    Cavity ionization chambers are one of the most popular and widely used devices for quantifying ionizing photon beams. This popularity originates from the precision of these devices and the relative ease with which ionization measurements are converted to quantities of interest in therapeutic radiology or radiation protection, collectively referred to as radiation dosimetry. The formalisms used for these conversions, known as cavity theory, make several assumptions about the electron spectrum in the low-energy range resulting from the incident photon beam. These electrons often account for a significant fraction of the ion chamber response. An inadequate treatment of low-energy electrons can therefore significantly affect calculated quantities of interest. This thesis sets out to investigate the effect of low-energy electrons on (1) the use of Spencer-Attix cavity theory with 60Co beams; and (2) the standard temperature-pressure correction factor, P_TP, used to relate the measured ionization to a set of reference temperature and pressure conditions for vented ion chambers. Problems with the P_TP correction are shown to arise when used with kilovoltage x rays, where ionization measurements are due primarily to electrons that do not have enough energy to cross the cavity. A combination of measurements and Monte Carlo calculations using the EGSnrc Monte Carlo code demonstrate the breakdown of P_TP in these situations when used with non-air-equivalent chambers. The extent of the breakdown is shown to depend on cavity size, energy of the incident photons, and the composition of the chamber. In the worst case, the standard P_TP factor overcorrects the response of an aluminum chamber by ≈12% at an air density typical of Mexico City. The response of a more common graphite-walled chamber with similar dimensions at the same air density is undercorrected by ≈2%. The EGSnrc Monte Carlo code is also used to investigate Spencer-Attix cavity theory as it is used in the formalism to determine the air kerma for a 60Co beam. Following a comparison with measurements in the literature, the air kerma formalism is shown to require a fluence correction factor, K_fl, to ensure the accuracy of the formalism regardless of chamber composition and cavity size. The need for such a correction stems from the fact that the cavity clearly distorts the fluence for mismatched cavity and wall materials, and from the inability to select the appropriate "cut-off" energy, Δ, in the Spencer-Attix stopping-power ratio. A discussion of this issue is followed by detailed calculations of K_fl values for several of the graphite ionization chambers used at national metrology institutes, which range between 0.9999 and 0.9994 with a one standard deviation uncertainty of +/- 0.0002.
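
    The temperature-pressure correction under discussion rescales the chamber reading to reference air density; its usual form (temperature in °C, pressure in kPa, referenced to 22 °C and 101.33 kPa) is sketched below so the later discussion of its breakdown has a concrete target. The reference values shown are the common TG-51 choices and should be checked against the protocol actually in use.

    ```python
    def p_tp(temp_c, pressure_kpa, temp_ref_c=22.0, pressure_ref_kpa=101.33):
        """Standard temperature-pressure correction for a vented ion chamber.

        P_TP = (273.2 + T) / (273.2 + T_ref) * (P_ref / P); 22 degC and 101.33 kPa
        are the usual TG-51 reference conditions.
        """
        return (273.2 + temp_c) / (273.2 + temp_ref_c) * (pressure_ref_kpa / pressure_kpa)

    # Example: roughly Mexico City air pressure (~78 kPa) at 22 degC, where the
    # correction is large and the breakdown discussed in the thesis matters most.
    print(round(p_tp(22.0, 78.0), 3))
    ```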

  1. Computed tomographic assessment of the causal factors of unsuccessful medialization thyroplasty.

    PubMed

    Iwahashi, Toshihiko; Ogawa, Makoto; Hosokawa, Kiyohito; Mochizuki, Ryuichi; Inohara, Hidenori

    2015-03-01

    The present results demonstrate that a small implant size, undercorrection of the vocal fold, antero-posterior implant malposition, and the use of expanded polytetrafluoroethylene (ePTFE) are the primary factors that cause a poor outcome of medialization thyroplasty (MT). To assess the postoperative laryngeal condition using computed tomography (CT) in patients with unilateral vocal fold paralysis who underwent MT alone, and to identify the primary causal factors in terms of the surgical procedures that affect the outcomes of MT. Twenty-two patients who underwent MT alone were divided into two groups based on either the maximal phonation time or the perceived vocal breathiness. Two laryngologists assessed the postoperative laryngeal CT images during sustained vowel phonation and judged whether there were abnormalities of the arytenoid cartilage position, window position, implant size, and implant position, as well as the degree of correction of the vocal fold. As implant material, a silicone block, ePTFE, and hydroxyapatite had been inserted in 2, 9, and 11 patients, respectively. Comparisons of the prevalence of abnormalities in the abovementioned factors between the different outcomes and between the types of material used for the implant were performed. Twelve patients with a poor outcome and 10 with a good outcome showed 36 and 18 abnormal findings identified by either of the two laryngologists, respectively. In the poor outcome group, a smaller implant size and undercorrection of the vocal fold showed both high kappa values and a significantly higher prevalence than those in the good outcome group (p < 0.001 and p < 0.05), respectively. The comparison between material types demonstrated that the sheet-like material (ePTFE) group exhibited a significantly higher prevalence of undercorrection than the block-like material group (p < 0.05).

  2. Class Enumeration and Parameter Recovery of Growth Mixture Modeling and Second-Order Growth Mixture Modeling in the Presence of Measurement Noninvariance between Latent Classes

    PubMed Central

    Kim, Eun Sook; Wang, Yan

    2017-01-01

    Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691

  3. Stability of radiomic features in CT perfusion maps

    NASA Astrophysics Data System (ADS)

    Bogowicz, M.; Riesterer, O.; Bundschuh, R. A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.

    2016-12-01

    This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models for local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow and mean transit time). Radiomics robustness was investigated with respect to the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation (ICC). To gain added value for our model, radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations, and for each group the parameter with the highest ICC was included in the final set. The acceptance level was 0.9 and 0.7 for the ICC and correlation, respectively. The image discretization method using a fixed number of bins or fixed intervals gave a similar number of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into radiomic parameters than the non-standardizable ones, with 56-98% and 43-58% instability rates, respectively. The highest variability was observed for voxel size (instability rate >97% for both patient cohorts). Without standardization of CTP calculation factors, none of the studied radiomic parameters were stable. After standardization with respect to non-standardizable factors, ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations. Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.
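
    The final selection step described above (keep only features with ICC at or above 0.9, group features whose pairwise Spearman correlation exceeds 0.7, and retain the highest-ICC member of each group) can be sketched as a simple greedy pass. The greedy ordering is an assumption about the procedure, and the ICC values are taken as a precomputed input.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    def select_stable_features(values, icc, icc_min=0.9, corr_max=0.7):
        """Greedy selection of stable, non-redundant radiomic features.

        values: (n_patients, n_features) matrix of feature values.
        icc:    per-feature ICC with respect to the perfusion-calculation factors
                (computed elsewhere and passed in).
        """
        candidates = [j for j in range(values.shape[1]) if icc[j] >= icc_min]
        candidates.sort(key=lambda j: icc[j], reverse=True)  # best ICC first (assumed ordering)
        selected = []
        for j in candidates:
            redundant = False
            for k in selected:
                rho, _ = spearmanr(values[:, j], values[:, k])
                if abs(rho) > corr_max:
                    redundant = True
                    break
            if not redundant:
                selected.append(j)
        return selected
    ```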

  4. 75 FR 5536 - Pipeline Safety: Control Room Management/Human Factors, Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-03

    ... DEPARTMENT OF TRANSPORTATION Pipeline and Hazardous Materials Safety Administration 49 CFR Parts...: Control Room Management/Human Factors, Correction AGENCY: Pipeline and Hazardous Materials Safety... following correcting amendments: PART 192--TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM...

  5. Transfer of extensor carpi radialis brevis as an extensor to extensor motor transfer (EEMT) in ulnar nerve palsy.

    PubMed

    Jamali, Allah Rakha; Bhatti, Anisuddin; Mehboob, Ghulam

    2006-07-01

    To evaluate functional outcome and correction of deformity with extensor carpi radialis brevis motor transfer to replace the intrinsic muscles as an extensor to extensor motor transfer (EEMT). This was a prospective observational study with a one-group pretest-posttest design conducted from 1996 to 2004. A convenience sampling technique was used and the sample size was twenty-one. The independent variable was transfer of extensor carpi radialis brevis to replace the intrinsic muscles. The dependent variable was functional outcome and the correction of deformity. The extraneous variables were age, sex, interval between injury and transfer, as well as local factors related to wound and grafts used. The average follow up was 22.5 months. The mean preoperative unassisted extensor lag was 56.79 +/- 10.39, which improved to 9.6% +/- 5.4 (correction of 83%) at six months after surgery. With open hand assessment, 76.19% reported good to excellent results, while 79.89% achieved good to excellent results with closed hand assessment. The mechanism of closing was good to excellent in 89.42% of cases; however, only 71.42% of patients considered their hands good to excellent. Significant problems were seen with use of tendoachilles as a graft. Extensor carpi radialis brevis transfer to replace the intrinsic muscles as an extensor to extensor motor transfer can achieve good functional outcome as well as correction of deformity despite shortcomings in physical rehabilitation.

  6. Performance prediction using geostatistics and window reservoir simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontanilla, J.P.; Al-Khalawi, A.A.; Johnson, S.G.

    1995-11-01

    This paper is the first window model study in the northern area of a large carbonate reservoir in Saudi Arabia. It describes window reservoir simulation with geostatistics to model uneven water encroachment in the southwest producing area of the northern portion of the reservoir. In addition, this paper describes performance predictions that investigate the sweep efficiency of the current peripheral waterflood. A 50 x 50 x 549 (240 m. x 260 m. x 0.15 m. average grid block size) geological model was constructed with geostatistics software. Conditional simulation was used to obtain spatial distributions of porosity and volume of dolomite. Core data transforms were used to obtain horizontal and vertical permeability distributions. Simple averaging techniques were used to convert the 549-layer geological model to a 50 x 50 x 10 (240 m. x 260 m. x 8 m. average grid block size) window reservoir simulation model. Flux injectors and flux producers were assigned to the outermost grid blocks. Historical boundary flux rates were obtained from a coarsely-gridded full-field model. Pressure distribution, water cuts, GORs, and recent flowmeter data were history matched. Permeability correction factors and numerous parameter adjustments were required to obtain the final history match. The permeability correction factors were based on pressure transient permeability-thickness analyses. The prediction phase of the study evaluated the effects of infill drilling, the use of artificial lifts, workovers, horizontal wells, producing rate constraints, and tight zone development to formulate depletion strategies for the development of this area. The window model will also be used to investigate day-to-day reservoir management problems in this area.

  7. Expression of Glial Cell Line-Derived Neurotrophic Factor (GDNF) and the GDNF Family Receptor Alpha Subunit 1 in the Paravaginal Ganglia of Nulliparous and Primiparous Rabbits.

    PubMed

    García-Villamar, Verónica; Hernández-Aragón, Laura G; Chávez-Ríos, Jesús R; Ortega, Arturo; Martínez-Gómez, Margarita; Castelán, Francisco

    2018-01-01

    To evaluate the expression of glial cell line-derived neurotrophic factor (GDNF) and its receptor, GDNF family receptor alpha subunit 1 (GFRα-1) in the pelvic (middle third) vagina and, particularly, in the paravaginal ganglia of nulliparous and primiparous rabbits. Chinchilla-breed female rabbits were used. Primiparas were killed on postpartum day 3 and nulliparas upon reaching a similar age. The vaginal tracts were processed for histological analyses or frozen for Western blot assays. We measured the ganglionic area, the Abercrombie-corrected number of paravaginal neurons, the cross-sectional area of the neuronal somata, and the number of satellite glial cells (SGCs) per neuron. The relative expression of both GDNF and GFRα-1 was assessed by Western blotting, and the immunostaining was semiquantitated. Unpaired two-tailed Student t-test or Wilcoxon test was used to identify statistically significant differences (P≤0.05) between the groups. Our findings demonstrated that the ganglionic area, neuronal soma size, Abercrombie-corrected number of neurons, and number of SGCs per neuron were similar in nulliparas and primiparas. The relative expression of both GDNF and GFRα-1 was similar. Immunostaining for both GDNF and GFRα-1 was observed in several vaginal layers, and no differences were detected regarding GDNF and GFRα-1 immunostaining between the 2 groups. In the paravaginal ganglia, the expression of GDNF was increased in neurons, while that of GFRα-1 was augmented in the SGCs of primiparous rabbits. The present findings suggest an ongoing regenerative process related to the recovery of neuronal soma size in the paravaginal ganglia, in which GDNF and GFRα-1 could be involved in cross-talk between neurons and SGCs.

  8. High transport efficiency of nanoparticles through a total-consumption sample introduction system and its beneficial application for particle size evaluation in single-particle ICP-MS.

    PubMed

    Miyashita, Shin-Ichi; Mitsuhashi, Hiroaki; Fujii, Shin-Ichiro; Takatsu, Akiko; Inagaki, Kazumi; Fujimoto, Toshiyuki

    2017-02-01

    In order to facilitate reliable and efficient determination of both the particle number concentration (PNC) and the size of nanoparticles (NPs) by single-particle ICP-MS (spICP-MS) without the need to correct for the particle transport efficiency (TE, a possible source of bias in the results), a total-consumption sample introduction system consisting of a large-bore, high-performance concentric nebulizer and a small-volume on-axis cylinder chamber was utilized. Such a system potentially permits a particle TE of 100 %, meaning that there is no need to include a particle TE correction when calculating the PNC and the NP size. When the particle TE through the sample introduction system was evaluated by comparing the frequency of sharp transient signals from the NPs in a measured NP standard of precisely known PNC to the particle frequency for a measured NP suspension, the TE for platinum NPs with a nominal diameter of 70 nm was found to be very high (i.e., 93 %), and showed satisfactory repeatability (relative standard deviation of 1.0 % for four consecutive measurements). These results indicated that employing this total consumption system allows the particle TE correction to be ignored when calculating the PNC. When the particle size was determined using a solution-standard-based calibration approach without an NP standard, the particle diameters of platinum and silver NPs with nominal diameters of 30-100 nm were found to agree well with the particle diameters determined by transmission electron microscopy, regardless of whether a correction was performed for the particle TE. Thus, applying the proposed system enables NP size to be accurately evaluated using a solution-standard-based calibration approach without the need to correct for the particle TE.
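
    The transport-efficiency check described above compares the number of particle events actually detected with the number of particles delivered to the nebulizer, computed from the known number concentration, sample uptake rate, and acquisition time. A minimal sketch of that bookkeeping, with made-up values chosen only to echo the ~93% result:

    ```python
    def transport_efficiency(detected_events, pnc_per_ml, uptake_ml_per_min, acq_time_s):
        """TE = detected particle events / particles delivered to the nebulizer."""
        particles_delivered = pnc_per_ml * uptake_ml_per_min * (acq_time_s / 60.0)
        return detected_events / particles_delivered

    # Example (made-up numbers): 9.3e3 events from a 1e5 /mL standard
    # at 0.02 mL/min over a 300 s acquisition.
    print(round(transport_efficiency(9.3e3, 1e5, 0.02, 300.0), 3))
    ```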

  9. Assessment and Mapping of Forest Parcel Sizes

    Treesearch

    Brett J. Butler; Susan L. King

    2005-01-01

    A method for analyzing and mapping forest parcel sizes in the Northeastern United States is presented. A decision tree model was created that predicts forest parcel size from spatially explicit predictor variables: population density, State, percentage forest land cover, and road density. The model correctly predicted parcel size for 60 percent of the observations in a...

  10. Gustaf: Detecting and correctly classifying SVs in the NGS twilight zone.

    PubMed

    Trappe, Kathrin; Emde, Anne-Katrin; Ehrlich, Hans-Christian; Reinert, Knut

    2014-12-15

    The landscape of structural variation (SV) including complex duplication and translocation patterns is far from resolved. SV detection tools usually exhibit low agreement, are often geared toward certain types or size ranges of variation and struggle to correctly classify the type and exact size of SVs. We present Gustaf (Generic mUlti-SpliT Alignment Finder), a sound generic multi-split SV detection tool that detects and classifies deletions, inversions, dispersed duplications and translocations of ≥ 30 bp. Our approach is based on a generic multi-split alignment strategy that can identify SV breakpoints with base pair resolution. We show that Gustaf correctly identifies SVs, especially in the range from 30 to 100 bp, which we call the next-generation sequencing (NGS) twilight zone of SVs, as well as larger SVs >500 bp. Gustaf performs better than similar tools in our benchmark and is furthermore able to correctly identify size and location of dispersed duplications and translocations, which otherwise might be wrongly classified, for example, as large deletions. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  11. Population entropies estimates of proteins

    NASA Astrophysics Data System (ADS)

    Low, Wai Yee

    2017-05-01

    The Shannon entropy equation provides a way to estimate variability of amino acids sequences in a multiple sequence alignment of proteins. Knowledge of protein variability is useful in many areas such as vaccine design, identification of antibody binding sites, and exploration of protein 3D structural properties. In cases where the population entropies of a protein are of interest but only a small sample size can be obtained, a method based on linear regression and random subsampling can be used to estimate the population entropy. This method is useful for comparisons of entropies where the actual sequence counts differ and thus, correction for alignment size bias is needed. In the current work, an R based package named EntropyCorrect that enables estimation of population entropy is presented and an empirical study on how well this new algorithm performs on simulated dataset of various combinations of population and sample sizes is discussed. The package is available at https://github.com/lloydlow/EntropyCorrect. This article, which was originally published online on 12 May 2017, contained an error in Eq. (1), where the summation sign was missing. The corrected equation appears in the Corrigendum attached to the pdf.
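
    The approach summarized above (estimate entropy at several subsample sizes and extrapolate to the population by linear regression) can be sketched as below. Regressing the plug-in entropy against 1/n and taking the intercept is one standard way to implement such a bias correction, but whether EntropyCorrect does exactly this is an assumption; only the general idea is illustrated.

    ```python
    import numpy as np
    from collections import Counter

    def shannon_entropy(column):
        """Plug-in Shannon entropy (bits) of a list of amino-acid symbols."""
        counts = np.array(list(Counter(column).values()), dtype=float)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def extrapolated_entropy(column, sizes=(20, 40, 80, 160), reps=200, seed=0):
        """Estimate the population entropy by regressing subsample entropy on 1/n.

        The 1/n regression and intercept extrapolation are assumptions about the
        method; the plug-in estimator itself is standard.
        """
        rng = np.random.default_rng(seed)
        column = np.asarray(column)
        mean_h, inv_n = [], []
        for n in sizes:
            if n > len(column):
                continue
            hs = [shannon_entropy(rng.choice(column, size=n, replace=False)) for _ in range(reps)]
            mean_h.append(np.mean(hs))
            inv_n.append(1.0 / n)
        slope, intercept = np.polyfit(inv_n, mean_h, 1)
        return intercept  # extrapolation to 1/n -> 0, i.e., infinite sample size
    ```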

  12. An analysis of the ArcCHECK-MR diode array's performance for ViewRay quality assurance.

    PubMed

    Ellefson, Steven T; Culberson, Wesley S; Bednarz, Bryan P; DeWerd, Larry A; Bayouth, John E

    2017-07-01

    The ArcCHECK-MR diode array utilizes a correction system with a virtual inclinometer to correct the angular response dependencies of the diodes. However, this correction system cannot be applied to measurements on the ViewRay MR-IGRT system due to the virtual inclinometer's incompatibility with the ViewRay's multiple simultaneous beams. Additionally, the ArcCHECK's current correction factors were determined without magnetic field effects taken into account. In the course of performing ViewRay IMRT quality assurance with the ArcCHECK, measurements were observed to be consistently higher than the ViewRay TPS predictions. The goals of this study were to quantify the observed discrepancies and test whether applying the current factors improves the ArcCHECK's accuracy for measurements on the ViewRay. Gamma and frequency analysis were performed on 19 ViewRay patient plans. Ion chamber measurements were performed at a subset of diode locations using a PMMA phantom with the same dimensions as the ArcCHECK. A new method for applying directionally dependent factors utilizing beam information from the ViewRay TPS was developed in order to analyze the current ArcCHECK correction factors. To test the current factors, nine ViewRay plans were altered to be delivered with only a single simultaneous beam and were measured with the ArcCHECK. The current correction factors were applied using both the new and current methods. The new method was also used to apply corrections to the original 19 ViewRay plans. It was found that the ArcCHECK systematically reports doses higher than those actually delivered by the ViewRay. Application of the current correction factors by either method did not consistently improve measurement accuracy. As dose deposition and diode response have both been shown to change under the influence of a magnetic field, it can be concluded that the current ArcCHECK correction factors are invalid and/or inadequate to correct measurements on the ViewRay system. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  13. Finite-density effects in the Fredrickson-Andersen and Kob-Andersen kinetically-constrained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teomy, Eial, E-mail: eialteom@post.tau.ac.il; Shokef, Yair, E-mail: shokef@tau.ac.il

    2014-08-14

    We calculate the corrections to the thermodynamic limit of the critical density for jamming in the Kob-Andersen and Fredrickson-Andersen kinetically-constrained models, and find them to be finite-density corrections, and not finite-size corrections. We do this by introducing a new numerical algorithm, which requires negligible computer memory since, contrary to alternative approaches, it generates at each point only the necessary data. The algorithm starts from a single unfrozen site and at each step randomly generates the neighbors of the unfrozen region and checks whether they are frozen or not. Our results correspond to systems of size greater than 10⁷ × 10⁷, much larger than any simulated before, and are consistent with the rigorous bounds on the asymptotic corrections. We also find that the average number of sites that seed a critical droplet is greater than 1.

  14. Exact Derivation of a Finite-Size Scaling Law and Corrections to Scaling in the Geometric Galton-Watson Process

    PubMed Central

    Corral, Álvaro; Garcia-Millan, Rosalba; Font-Clos, Francesc

    2016-01-01

    The theory of finite-size scaling explains how the singular behavior of thermodynamic quantities in the critical point of a phase transition emerges when the size of the system becomes infinite. Usually, this theory is presented in a phenomenological way. Here, we exactly demonstrate the existence of a finite-size scaling law for the Galton-Watson branching processes when the number of offspring of each individual follows either a geometric distribution or a generalized geometric distribution. We also derive the corrections to scaling and the limits of validity of the finite-size scaling law away from the critical point. A mapping between branching processes and random walks allows us to establish that these results also hold for the latter case, for which the order parameter turns out to be the probability of hitting a distant boundary. PMID:27584596
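
    For context, the kind of law being derived has the generic finite-size scaling form shown below, in which the size distribution collapses onto a single scaling function once sizes are rescaled by a power of the system size; the specific exponents, scaling function, and correction terms for the geometric offspring distribution are worked out in the paper and are not reproduced here, so the expression should be read only as the general ansatz.

    ```latex
    % Generic finite-size scaling ansatz (illustrative form only; the exponents and
    % the scaling function \mathcal{G} are model-specific):
    P(s; N) \simeq s^{-\tau}\, \mathcal{G}\!\left(\frac{s}{N^{D}}\right),
    \qquad
    \text{with corrections to scaling entering as } \bigl[1 + c_1 s^{-\omega} + \dots \bigr].
    ```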

  15. Factors controlling threshold friction velocity in semiarid and arid areas of the United States

    USGS Publications Warehouse

    Marticorena, Beatrice; Bergametti, G.; Belnap, Jayne

    1997-01-01

    A physical model was developed to explain threshold friction velocities u*t for particles of size 60–120 μm lying on a rough surface in loose soils for semiarid and arid parts of the United States. The model corrected for the effect of momentum absorption by the nonerodible roughness. For loose or disturbed soils the most important parameter that controls u*t is the aerodynamic roughness height z0. For physical crusts damaged by wind the size of erodible crust pieces is important along with the roughness. The presence of cyanobacterial-lichen soil crusts roughens the surface, and the biological fibrous growth aggregates soil particles. Only undisturbed sandy soils and disturbed soils of all types would be expected to be erodible in normal wind storms. Therefore disturbance of soils by both cattle and humans is very important in predicting wind erosion as confirmed by our measurements.

  16. Hybrid seine for full fish community collections

    USGS Publications Warehouse

    McKenna, James E.; Waldt, Emily M.; Abbett, Ross; David, Anthony; Snyder, James

    2013-01-01

    Seines are simple and effective fish collection gears, but the net mesh size influences how well the catch represents the fish communities. We designed and tested a hybrid seine with a dual-mesh bag (1/4″ and 1/8″) and compared the fish assemblage collected by each mesh. The fine-mesh net retained three times as many fish and collected more species (as many as eight), including representatives of several rare species, than did the coarser mesh. The dual-mesh bag permitted us to compare both sizes and species retained by each layer and to develop species-specific abundance correction factors, which allowed comparison of catches with the coarse-mesh seine used for earlier collections. The results indicate that a hybrid seine with coarse-mesh wings and a fine-mesh bag would enhance future studies of fish communities, especially when small-bodied fishes or early life stages are the research focus.

  17. Asymptotics of empirical eigenstructure for high dimensional spiked covariance.

    PubMed

    Wang, Weichen; Fan, Jianqing

    2017-06-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies.

  18. Asymptotics of empirical eigenstructure for high dimensional spiked covariance

    PubMed Central

    Wang, Weichen

    2017-01-01

    We derive the asymptotic distributions of the spiked eigenvalues and eigenvectors under a generalized and unified asymptotic regime, which takes into account the magnitude of spiked eigenvalues, sample size, and dimensionality. This regime allows high dimensionality and diverging eigenvalues and provides new insights into the roles that the leading eigenvalues, sample size, and dimensionality play in principal component analysis. Our results are a natural extension of those in Paul (2007) to a more general setting and solve the rates of convergence problems in Shen et al. (2013). They also reveal the biases of estimating leading eigenvalues and eigenvectors by using principal component analysis, and lead to a new covariance estimator for the approximate factor model, called shrinkage principal orthogonal complement thresholding (S-POET), that corrects the biases. Our results are successfully applied to outstanding problems in estimation of risks of large portfolios and false discovery proportions for dependent test statistics and are illustrated by simulation studies. PMID:28835726

  19. Special electronic distance meter calibration for precise engineering surveying industrial applications

    NASA Astrophysics Data System (ADS)

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf

    2015-05-01

    All surveying instruments and their measurements suffer from errors. To refine the results, it is necessary either to use procedures that restrict the influence of instrument errors on the measured values or to apply numerical corrections. In precise engineering surveying and industrial applications, the accuracy of distances, usually realized over relatively short ranges, is a key parameter limiting the resulting accuracy of the determined values (coordinates, etc.). To determine the size of the systematic and random errors of the measured distances, tests were carried out with the idea of suppressing the random error by averaging repeated measurements and reducing the influence of systematic errors by identifying their absolute size on the absolute baseline realized in the geodetic laboratory of the Faculty of Civil Engineering, CTU in Prague. Sixteen concrete pillars with forced centerings were set up, and the absolute distances between the points were determined with a standard deviation of 0.02 mm using a Leica Absolute Tracker AT401. For any distance measured by the calibrated instruments (up to the length of the testing baseline, i.e., 38.6 m), the error correction of the distance meter can now be determined in two ways: first, by interpolation on the raw data; or second, by using a correction function derived beforehand via FFT. The quality of this calibration and correction procedure was tested experimentally on three instruments (Trimble S6 HP, Topcon GPT-7501, Trimble M3) against the Leica Absolute Tracker AT401. The correction procedure reduced the standard deviation of the measured distances significantly, to less than 0.6 mm. For the Topcon GPT-7501, the nominal standard deviation is 2 mm; 2.8 mm was achieved without corrections and 0.55 mm after corrections. For the Trimble M3, the nominal standard deviation is 3 mm; 1.1 mm was achieved without corrections and 0.58 mm after corrections. For the Trimble S6, the nominal standard deviation is 1 mm; 1.2 mm was achieved without corrections and 0.51 mm after corrections. The proposed calibration and correction procedure is, in our opinion, well suited to increasing the accuracy of electronic distance measurement and allows a common surveying instrument to achieve uncommonly high precision.
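
    The first of the two correction routes, interpolating the distance-meter errors observed on the calibration baseline and applying them to field measurements, is sketched below; the FFT-based correction function is not reproduced. The residual values and variable names are illustrative.

    ```python
    import numpy as np

    # Baseline calibration data: reference distances from the absolute tracker (m) and
    # the corresponding errors of the tested distance meter (measured - reference, mm).
    # The error values below are made up for illustration.
    baseline_d_m = np.array([1.2, 4.8, 9.6, 14.4, 19.3, 24.1, 28.9, 33.7, 38.6])
    baseline_err_mm = np.array([0.4, -0.6, 0.9, -0.3, 0.7, -0.8, 0.5, -0.2, 0.6])

    def correct_distance(measured_m):
        """Subtract the interpolated systematic error (mm) from a measured distance (m)."""
        err_mm = np.interp(measured_m, baseline_d_m, baseline_err_mm)
        return measured_m - err_mm / 1000.0

    print(correct_distance(12.0))  # e.g. a 12.000 m reading corrected by the interpolated error
    ```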

  20. Correction to the plant canopy gap-size analysis theory used by the Tracing Radiation and Architecture of Canopies instrument

    NASA Astrophysics Data System (ADS)

    Leblanc, Sylvain G.

    2002-12-01

    A plant canopy gap-size analyzer, the Tracing Radiation and Architecture of Canopies (TRAC), developed by Chen and Cihlar [Appl. Opt. 34, 6211 (1995)] and commercialized by 3rd Wave Engineering (Nepean, Canada), has been used around the world to quantify the fraction of photosynthetically active radiation absorbed by plant canopies, the leaf area index (LAI), and canopy architectural parameters. The TRAC is walked under a canopy along transects to measure sunflecks that are converted into a gap-size distribution. A numerical gap-removal technique is performed to remove gaps that are not theoretically possible in a random canopy. The resulting reduced gap-size distribution is used to quantify the heterogeneity of the canopy and to improve LAI measurements. It is explicitly shown here that the original derivation of the clumping index was missing a normalization factor. For a very clumped canopy with a large gap fraction, the resulting LAI can be more than 100% smaller than previously estimated. A test case is used to demonstrate that the new clumping index derivation allows a more accurate change of LAI to be measured.

  1. The perturbation correction factors for cylindrical ionization chambers in high-energy photon beams.

    PubMed

    Yoshiyama, Fumiaki; Araki, Fujio; Ono, Takeshi

    2010-07-01

    In this study, we calculated perturbation correction factors for cylindrical ionization chambers in high-energy photon beams by using Monte Carlo simulations. We modeled four Farmer-type cylindrical chambers with the EGSnrc/Cavity code and calculated the cavity or electron fluence correction factor, P_cav, the displacement correction factor, P_dis, the wall correction factor, P_wall, the stem correction factor, P_stem, the central electrode correction factor, P_cel, and the overall perturbation correction factor, P_Q. The calculated P_dis values for PTW30010/30013 chambers were 0.9967 +/- 0.0017, 0.9983 +/- 0.0019, and 0.9980 +/- 0.0019, respectively, for (60)Co, 4 MV, and 10 MV photon beams. The value for a (60)Co beam was about 1.0% higher than the 0.988 value recommended by the IAEA TRS-398 protocol. The P_dis values had a substantial discrepancy compared to those of IAEA TRS-398 and AAPM TG-51 at all photon energies. The P_wall values were from 0.9994 +/- 0.0020 to 1.0031 +/- 0.0020 for PTW30010 and from 0.9961 +/- 0.0018 to 0.9991 +/- 0.0017 for PTW30011/30012, in the range of (60)Co-10 MV. The P_wall values for PTW30011/30012 were around 0.3% lower than those of the IAEA TRS-398. Also, the chamber response with and without a 1 mm PMMA water-proofing sleeve agreed within their combined uncertainty. The calculated P_stem values ranged from 0.9945 +/- 0.0014 to 0.9965 +/- 0.0014, but they are not considered in current dosimetry protocols. These values showed no significant dependence on beam quality. P_cel for a 1 mm aluminum electrode agreed within 0.3% with that of IAEA TRS-398. The overall perturbation factors agreed within 0.4% with those for IAEA TRS-398.
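
    For orientation, in the usual cavity-chamber formalism the overall perturbation correction is written as the product of the individual factors discussed above; including P_stem as one more multiplicative term, as the abstract implies, is a mild assumption on top of that convention.

    ```latex
    P_{Q} \;=\; P_{\mathrm{cav}}\, P_{\mathrm{dis}}\, P_{\mathrm{wall}}\, P_{\mathrm{cel}}\, P_{\mathrm{stem}}
    ```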

  2. Assessing the failure of continuum formula for solid-solid drag force using discrete element method in large size ratios

    NASA Astrophysics Data System (ADS)

    Jalali, Payman; Hyppänen, Timo

    2017-06-01

    In loose or moderately-dense particle mixtures, the contact forces between particles due to successive collisions create an average volumetric solid-solid drag force between different granular phases (of different particle sizes). The derivation of the mathematical formula for this drag force is based on the homogeneity of the mixture within the computational control volume. This assumption fails especially when the size ratio of particles grows to a large value of 10 or greater. The size-driven inhomogeneity is responsible for the deviation of the intergranular force from the continuum formula. In this paper, we have implemented discrete element method (DEM) simulations to obtain the volumetric mean force exchanged between granular phases with size ratios greater than 10. First, the force is calculated directly from DEM, averaged over a proper time window. Second, the continuum formula is applied to calculate the drag forces using the DEM quantities. We have shown that the two volumetric forces are in good agreement as long as the homogeneity condition is maintained. However, the relative motion of larger particles in a cloud of finer particles imposes an inhomogeneous distribution of finer particles around the larger ones. We have presented correction factors to the volumetric force from the continuum formula.

  3. Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction

    PubMed Central

    Friedman, Matt

    2009-01-01

    Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates—fishes—remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims. PMID:19276106

  4. Ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction.

    PubMed

    Friedman, Matt

    2009-03-31

    Despite the attention focused on mass extinction events in the fossil record, patterns of extinction in the dominant group of marine vertebrates-fishes-remain largely unexplored. Here, I demonstrate ecomorphological selectivity among marine teleost fishes during the end-Cretaceous extinction, based on a genus-level dataset that accounts for lineages predicted on the basis of phylogeny but not yet sampled in the fossil record. Two ecologically relevant anatomical features are considered: body size and jaw-closing lever ratio. Extinction intensity is higher for taxa with large body sizes and jaws consistent with speed (rather than force) transmission; resampling tests indicate that victims represent a nonrandom subset of taxa present in the final stage of the Cretaceous. Logistic regressions of the raw data reveal that this nonrandom distribution stems primarily from the larger body sizes of victims relative to survivors. Jaw mechanics are also a significant factor for most dataset partitions but are always less important than body size. When data are corrected for phylogenetic nonindependence, jaw mechanics show a significant correlation with extinction risk, but body size does not. Many modern large-bodied, predatory taxa currently suffering from overexploitation, such as billfishes and tunas, first occur in the Paleocene, when they appear to have filled the functional space vacated by some extinction victims.

  5. Determination of the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for detectors used with an 800 MU/min CyberKnife® system equipped with fixed collimators and a study of detector response to small photon beams using a Monte Carlo method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moignier, C., E-mail: cyril.moignier@free.fr; Huet, C.; Makovicka, L.

    Purpose: In a previous work, output ratio (OR_det) measurements were performed for the 800 MU/min CyberKnife® at the Oscar Lambret Center (COL, France) using several commercially available detectors as well as using two passive dosimeters (EBT2 radiochromic film and micro-LiF TLD-700). The primary aim of the present work was to determine by Monte Carlo calculations the output factor in water (OF_MC,w) and the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors. The secondary aim was to study the detector response in small beams using Monte Carlo simulation. Methods: The LINAC head of the CyberKnife® was modeled using the PENELOPE Monte Carlo code system. The primary electron beam was modeled using a monoenergetic source with a radial gaussian distribution. The model was adjusted by comparisons between calculated and measured lateral profiles and tissue-phantom ratios obtained with the largest field. In addition, the PTW 60016 and 60017 diodes, PTW 60003 diamond, and micro-LiF were modeled. Output ratios with modeled detectors (OR_MC,det) and OF_MC,w were calculated and compared to measurements, in order to validate the model for the smallest fields and to calculate the k_{Qclin,Qmsr}^{fclin,fmsr} correction factors, respectively. For the study of the influence of detector characteristics on their response in small beams: first, the impact of the atomic composition and the mass density of silicon, LiF, and diamond materials was investigated; second, the material, the volume averaging, and the coating effects of the detecting material on the detector responses were estimated. Finally, the influence of the size of the silicon chip on diode response was investigated. Results: Looking at measurement ratios (uncorrected output factors) compared to OF_MC,w, the PTW 60016, 60017 and Sun Nuclear EDGE diodes systematically over-responded (about +6% for the 5 mm field), whereas the PTW 31014 Pinpoint chamber systematically under-responded (about −12% for the 5 mm field). OR_det measured with the SFD diode and PTW 60003 diamond detectors were in good agreement with OF_MC,w except for the 5 mm field size (about −7.5% for the diamond and +3% for the SFD). A good agreement with OF_MC,w was obtained with the EBT2 film and micro-LiF dosimeters (deviation less than 1.4% for all fields investigated). k_{Qclin,Qmsr}^{fclin,fmsr} correction factors for several detectors used in this work have been calculated. The impact of atomic composition on the dosimetric response of detectors was found to be insignificant, unlike the mass density and size of the detecting material. Conclusions: The results obtained with the passive dosimeters showed that they can be used for small beam OF measurements without correction factors. The study of detector response showed that OR_det depends on the mass density, the volume averaging, and the coating effects of the detecting material. Each effect was quantified for the PTW 60016 and 60017 diodes, the micro-LiF, and the PTW 60003 diamond detectors. None of the active detectors used in this work can be recommended as a reference for small field dosimetry, but an improved diode detector with a smaller silicon chip coated with tissue-equivalent material is anticipated (by simulation) to be a reliable small field dosimetric detector in a nonequilibrium field.

  6. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile…
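
    The record above is truncated, but the core idea (resampling cases to put an interval around an estimated quantity) is easy to illustrate. The sketch below is an illustration added here rather than the authors' procedure: it bootstraps a plain Pearson correlation with a percentile interval, whereas the paper additionally handles factor rotation, loading alignment, and bias-corrected variants.

      # Minimal sketch of a percentile bootstrap confidence interval, using a
      # Pearson correlation as a stand-in for a rotated factor loading.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200
      x = rng.normal(size=n)
      y = 0.5 * x + rng.normal(size=n)          # toy data

      def stat(xs, ys):
          return np.corrcoef(xs, ys)[0, 1]

      boot = np.empty(2000)
      for b in range(boot.size):
          idx = rng.integers(0, n, n)           # resample cases with replacement
          boot[b] = stat(x[idx], y[idx])

      lo, hi = np.percentile(boot, [2.5, 97.5]) # 95% percentile interval
      print(f"r = {stat(x, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")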

  7. A novel method for correcting scanline-observational bias of discontinuity orientation

    PubMed Central

    Huang, Lei; Tang, Huiming; Tan, Qinwen; Wang, Dingjian; Wang, Liangqing; Ez Eldin, Mutasim A. M.; Li, Changdong; Wu, Qiong

    2016-01-01

    Scanline observation is known to introduce an angular bias into the probability distribution of orientation in three-dimensional space. In this paper, numerical solutions expressing the functional relationship between the scanline-observational distribution (in one-dimensional space) and the inherent distribution (in three-dimensional space) are derived using probability theory and calculus under the independence hypothesis of dip direction and dip angle. Based on these solutions, a novel method for obtaining the inherent distribution (also for correcting the bias) is proposed, an approach which includes two procedures: 1) Correcting the cumulative probabilities of orientation according to the solutions, and 2) Determining the distribution of the corrected orientations using approximation methods such as the one-sample Kolmogorov-Smirnov test. The inherent distribution corrected by the proposed method can be used for discrete fracture network (DFN) modelling, which is applied to such areas as rockmass stability evaluation, rockmass permeability analysis, rockmass quality calculation and other related fields. To maximize the correction capacity of the proposed method, the observed sample size is suggested through effectiveness tests for different distribution types, dispersions and sample sizes. The performance of the proposed method and the comparison of its correction capacity with existing methods are illustrated with two case studies. PMID:26961249
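
    The paper's numerical solutions are not reproduced here. As a rough illustration of the same kind of angular de-biasing, the sketch below applies the classical Terzaghi-style weighting (each observed discontinuity weighted by 1/|cos θ| between its normal and the scanline direction); this is a deliberately simplified stand-in, not the cumulative-probability correction proposed by the authors.

      # Terzaghi-style weighting sketch for scanline orientation bias;
      # observations nearly parallel to the scanline get larger weights.
      import numpy as np

      def unit(v):
          v = np.asarray(v, float)
          return v / np.linalg.norm(v)

      def terzaghi_weights(normals, scanline, max_weight=10.0):
          s = unit(scanline)
          cos_t = np.abs(np.asarray([unit(n) @ s for n in normals]))
          w = 1.0 / np.maximum(cos_t, 1.0 / max_weight)   # cap blow-up near 90 deg
          return w / w.sum()                              # normalized weights

      # toy usage: three discontinuity normals observed along an E-W scanline
      normals = [[1, 0, 0], [0.7, 0.7, 0], [0.1, 0.99, 0.1]]
      print(terzaghi_weights(normals, scanline=[1, 0, 0]))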

  8. Factors Associated With Early Loss of Hallux Valgus Correction.

    PubMed

    Shibuya, Naohiro; Kyprios, Evangelos M; Panchani, Prakash N; Martin, Lanster R; Thorud, Jakob C; Jupiter, Daniel C

    Recurrence is common after hallux valgus corrective surgery. Although many investigators have studied the risk factors associated with a suboptimal hallux position at the end of long-term follow-up, few have evaluated the factors associated with actual early loss of correction. We conducted a retrospective cohort study to identify the predictors of lateral deviation of the hallux during the postoperative period. We evaluated the demographic data, preoperative severity of the hallux valgus, other angular measurements characterizing underlying deformities, amount of hallux valgus correction, and postoperative alignment of the corrected hallux valgus for associations with recurrence. After adjusting for the covariates, the only factor associated with recurrence was the postoperative tibial sesamoid position. The recurrence rate was ~50% and ~60% when the postoperative tibial sesamoid position was >4 and >5 on the 7-point scale, respectively. Published by Elsevier Inc.

  9. Correcting intensity loss errors in the absence of texture-free reference samples during pole figure measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saleh, Ahmed A., E-mail: asaleh@uow.edu.au

    Even with the use of X-ray polycapillary lenses, sample tilting during pole figure measurement results in a decrease in the recorded X-ray intensity. The magnitude of this error is affected by the sample size and/or the finite detector size. These errors can be typically corrected by measuring the intensity loss as a function of the tilt angle using a texture-free reference sample (ideally made of the same alloy as the investigated material). Since texture-free reference samples are not readily available for all alloys, the present study employs an empirical procedure to estimate the correction curve for a particular experimental configuration.more » It involves the use of real texture-free reference samples that pre-exist in any X-ray diffraction laboratory to first establish the empirical correlations between X-ray intensity, sample tilt and their Bragg angles and thereafter generate correction curves for any Bragg angle. It will be shown that the empirically corrected textures are in very good agreement with the experimentally corrected ones. - Highlights: •Sample tilting during X-ray pole figure measurement leads to intensity loss errors. •Texture-free reference samples are typically used to correct the pole figures. •An empirical correction procedure is proposed in the absence of reference samples. •The procedure relies on reference samples that pre-exist in any texture laboratory. •Experimentally and empirically corrected textures are in very good agreement.« less

  10. An improved bias correction method of daily rainfall data using a sliding window technique for climate change impact assessment

    NASA Astrophysics Data System (ADS)

    Smitha, P. S.; Narasimhan, B.; Sudheer, K. P.; Annamalai, H.

    2018-01-01

    Regional climate models (RCMs) are used to downscale the coarse resolution General Circulation Model (GCM) outputs to a finer resolution for hydrological impact studies. However, RCM outputs often deviate from the observed climatological data, and therefore need bias correction before they are used for hydrological simulations. While there are a number of methods for bias correction, most of them use monthly statistics to derive correction factors, which may cause errors in the rainfall magnitude when applied on a daily scale. This study proposes a sliding-window-based derivation of daily correction factors that helps build reliable daily rainfall data from climate models. The procedure is applied to five existing bias correction methods, and is tested on six watersheds in different climatic zones of India for assessing the effectiveness of the corrected rainfall and the consequent hydrological simulations. The bias correction was performed on rainfall data downscaled using the Conformal Cubic Atmospheric Model (CCAM) to 0.5° × 0.5° from two different CMIP5 models (CNRM-CM5.0, GFDL-CM3.0). The India Meteorological Department (IMD) gridded (0.25° × 0.25°) observed rainfall data was considered to test the effectiveness of the proposed bias correction method. The quantile-quantile (Q-Q) plots and Nash-Sutcliffe efficiency (NSE) were employed for evaluation of different methods of bias correction. The analysis suggested that the proposed method effectively corrects the daily bias in rainfall as compared to using monthly factors. The methods such as local intensity scaling, modified power transformation and distribution mapping, which adjusted the wet day frequencies, performed superior compared to the other methods, which did not consider adjustment of wet day frequencies. The distribution mapping method with daily correction factors was able to replicate the daily rainfall pattern of observed data with NSE value above 0.81 over most parts of India. Hydrological simulations forced using the bias corrected rainfall (distribution mapping and modified power transformation methods that used the proposed daily correction factors) were similar to those simulated by the IMD rainfall. The results demonstrate that the methods and the time scales used for bias correction of RCM rainfall data have a larger impact on the accuracy of the daily rainfall and consequently the simulated streamflow. The analysis suggests that the distribution mapping with daily correction factors can be preferred for adjusting RCM rainfall data irrespective of seasons or climate zones for realistic simulation of streamflow.
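
    As a rough sketch of the windowing idea (not the paper's full set of five methods), the code below derives a multiplicative daily correction factor from observed and modelled rainfall falling inside a ±15-day calendar window around each day of year; the window length and the simple linear scaling are assumptions made for illustration.

      # Window-based daily correction factors via linear scaling (sketch).
      import numpy as np

      def daily_scaling_factors(obs, mod, doy, half_window=15):
          """obs, mod: daily rainfall arrays; doy: day-of-year (1..366) per sample."""
          factors = np.ones(366)
          for d in range(1, 367):
              # circular day-of-year window around day d
              dist = np.minimum(np.abs(doy - d), 365 - np.abs(doy - d))
              sel = dist <= half_window
              if mod[sel].sum() > 0:
                  factors[d - 1] = obs[sel].mean() / max(mod[sel].mean(), 1e-9)
          return factors

      def apply_correction(mod, doy, factors):
          # scale each modelled day by the factor of its calendar day
          return mod * factors[doy - 1]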

  11. [Accuracy and relevance of CT volumetry in open ocular injuries with intraocular foreign bodies].

    PubMed

    Maneschg, O A; Volek, E; Lohinai, Z; Resch, M D; Papp, A; Korom, C; Karlinger, K; Németh, J

    2015-04-01

    The aim of the study was to evaluate the volume of intraocular foreign bodies (IOFB) using computed tomography (CT) volumetry as a prognostic factor for clinical outcome in open ocular injuries. This study compared the volume of 11 IOFBs more than 5 mm(3) in size based on CT volumetry with the real size determined by in vitro measurement. A retrospective evaluation of clinical data, visual acuity, complications and relation of size of IOFBs with clinical outcome in 33 patients (mean age 41.0 ± 13.5 years) with open ocular injuries treated at our department between January 2005 and December 2010 was carried out. No significant differences were found between pairwise in vitro measurement and CT volumetric size (p = 0.07). All patients were surgically treated by pars plana vitrectomy. The mean follow-up time was 7.6± 6.2 months and the mean preoperative best corrected visual acuity (BCVA) was 0.063 ± 0.16 (logMAR 1.2 ± 0.79). Postoperatively, a mean BCVA of 0.25 ± 0.2 (logMAR 0.6 ± 0.69) could be achieved. Clinical outcomes were significantly better in injuries with small IOFBs measuring < 15 mm(3) (p = 0.0098). The use of CT volumetry is an accurate method for measurement of IOFBs. Exact data about the size and measurement of volume are also an important factor for the prognosis of clinical outcome in open ocular injuries with IOFBs and CT volumetry can also provide important information about the localization of IOFBs.

  12. SU-F-BRE-01: A Rapid Method to Determine An Upper Limit On a Radiation Detector's Correction Factor During the QA of IMRT Plans

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamio, Y; Bouchard, H

    2014-06-15

    Purpose: Discrepancies in the verification of the absorbed dose to water from an IMRT plan using a radiation dosimeter can be caused either by 1) detector-specific nonstandard field correction factors, as described by the formalism of Alfonso et al., or 2) inaccurate delivery of the DQA plan. The aim of this work is to develop a simple/fast method to determine an upper limit on the contribution of composite field correction factors to these discrepancies. Methods: Indices that characterize the non-flatness of the symmetrised collapsed delivery (VSC) of IMRT fields over detector-specific regions of interest were shown to be correlated with IMRT field correction factors. The indices introduced are the uniformity index (UI) and the mean fluctuation index (MF). Each of these correlation plots has 10 000 fields generated with a stochastic model. A total of eight radiation detectors were investigated in the radial orientation. An upper bound on the correction factors was evaluated by fitting values of high correction factors for a given index value. Results: These fitted curves can be used to compare the performance of radiation dosimeters in composite IMRT fields. Highly water-equivalent dosimeters like the scintillating detector (Exradin W1) and a generic alanine detector have been found to have corrections under 1% over a broad range of field modulations (0 – 0.12 for MF and 0 – 0.5 for UI). Other detectors have been shown to have corrections of a few percent over this range. Finally, full Monte Carlo simulations of 18 clinical and nonclinical IMRT fields showed good agreement with the fitted curve for the A12 ionization chamber. Conclusion: This work proposes a rapid method to evaluate an upper bound on the contribution of correction factors to discrepancies found in the verification of DQA plans.

  13. Size distribution and volume fraction of T(1) phase precipitates from TEM images: Direct measurements and related correction.

    PubMed

    Dorin, Thomas; Donnadieu, Patricia; Chaix, Jean-Marc; Lefebvre, Williams; Geuser, Frédéric De; Deschamps, Alexis

    2015-11-01

    Transmission Electron Microscopy (TEM) can be used to measure the size distribution and volume fraction of fine scale precipitates in metallic systems. However, such measurements suffer from a number of artefacts that need to be accounted for, related to the finite thickness of the TEM foil and to the projected observation in two dimensions of the microstructure. We present a correction procedure to describe the 3D distribution of disc-like particles and apply this method to the plate-like T1 precipitates in an Al-Li-Cu alloy in two ageing conditions showing different particle morphologies. The precipitates were imaged in a High-Angular Annular Dark Field Microscope (HAADF-STEM). The corrected size distribution is further used to determine the precipitate volume fraction. Atom probe tomography (APT) is finally utilised as an alternative way to measure the precipitate volume fraction and test the validity of the electron microscopy results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Factors that influence search termination decisions in free recall: an examination of response type and confidence.

    PubMed

    Unsworth, Nash; Brewer, Gene A; Spillers, Gregory J

    2011-09-01

    In three experiments search termination decisions were examined as a function of response type (correct vs. incorrect) and confidence. It was found that the time between the last retrieved item and the decision to terminate search (exit latency) was related to the type of response and confidence in the last item retrieved. Participants were willing to search longer when the last retrieved item was a correct item vs. an incorrect item and when the confidence was high in the last retrieved item. It was also found that the number of errors retrieved during the recall period was related to search termination decisions such that the more errors retrieved, the more likely participants were to terminate the search. Finally, it was found that knowledge of overall search set size influenced the time needed to search for items, but did not influence search termination decisions. Copyright © 2011 Elsevier B.V. All rights reserved.

  15. Viscous compressible flow direct and inverse computation and illustrations

    NASA Technical Reports Server (NTRS)

    Yang, T. T.; Ntone, F.

    1986-01-01

    An algorithm for laminar and turbulent viscous compressible two dimensional flows is presented. For the application of precise boundary conditions over an arbitrary body surface, a body-fitted coordinate system is used in the physical plane. A thin-layer approximation of the Navier-Stokes equations is introduced to keep the viscous terms relatively simple. The flow field computation is performed in the transformed plane. A factorized, implicit scheme is used to facilitate the computation. Sample calculations, for Couette flow, developing pipe flow, an isolated airfoil, two dimensional compressor cascade flow, and segmental compressor blade design are presented. To a certain extent, the effective use of the direct solver depends on the user's skill in setting up the gridwork, the time step size and the choice of the artificial viscosity. The design feature of the algorithm, an iterative scheme to correct geometry for a specified surface pressure distribution, works well for subsonic flows. A more elaborate correction scheme is required in treating transonic flows where local shock waves may be involved.

  16. Factors Influencing the Design, Establishment, Administration, and Governance of Correctional Education for Females

    ERIC Educational Resources Information Center

    Ellis, Johnica; McFadden, Cheryl; Colaric, Susan

    2008-01-01

    This article summarizes the results of a study conducted to investigate factors influencing the organizational design, establishment, administration, and governance of correctional education for females. The research involved interviews with correctional and community college administrators and practitioners representing North Carolina female…

  17. Improving satellite retrievals of NO2 in biomass burning regions

    NASA Astrophysics Data System (ADS)

    Bousserez, N.; Martin, R. V.; Lamsal, L. N.; Mao, J.; Cohen, R. C.; Anderson, B. E.

    2010-12-01

    The quality of space-based nitrogen dioxide (NO2) retrievals from solar backscatter depends on a priori knowledge of the NO2 profile shape as well as the effects of atmospheric scattering. These effects are characterized by the air mass factor (AMF) calculation. Calculation of the AMF combines a radiative transfer calculation together with a priori information about aerosols and about NO2 profiles (shape factors), which are usually taken from a chemical transport model. In this work we assess the impact of biomass burning emissions on the AMF using the LIDORT radiative transfer model and a GEOS-Chem simulation based on a daily fire emissions inventory (FLAMBE). We evaluate the GEOS-Chem aerosol optical properties and NO2 shape factors using in situ data from the ARCTAS summer 2008 (North America) and DABEX winter 2006 (western Africa) experiments. Sensitivity studies are conducted to assess the impact of biomass burning on the aerosols and the NO2 shape factors used in the AMF calculation. The mean aerosol correction over boreal fires is negligible (+3%), in contrast with a large reduction (-18%) over African savanna fires. The change in sign and magnitude over boreal forest and savanna fires appears to be driven by the shielding effects that arise from the greater biomass burning aerosol optical thickness (AOT) above the African biomass burning NO2. In agreement with previous work, the single scattering albedo (SSA) also affects the aerosol correction. We further investigated the effect of clouds on the aerosol correction. For a fixed AOT, the aerosol correction can increase from 20% to 50% when cloud fraction increases from 0 to 30%. Over both boreal and savanna fires, the greatest impact on the AMF is from the fire-induced change in the NO2 profile (shape factor correction), which decreases the AMF by 38% over the boreal fires and by 62% over the savanna fires. Combining the aerosol and shape factor corrections together results in small differences compared to the shape factor correction alone (without the aerosol correction), indicating that a shape factor-only correction is a good approximation of the total AMF correction associated with fire emissions. We use this result to define a measurement-based correction of the AMF based on the relationship between the slant column variability and the variability of the shape factor in the lower troposphere. This method may be generalized to other types of emission sources.

  18. Utilizing an Energy Management System with Distributed Resources to Manage Critical Loads and Reduce Energy Costs

    DTIC Science & Technology

    2014-09-01

    peak shaving, conducting power factor correction, matching critical load to most efficient distributed resource, and islanding a system during...photovoltaic arrays during islanding, and power factor correction, the implementation of the ESS by itself is likely to prove cost prohibitive. The DOD...

  19. Seismic Yield Estimates of UTTR Surface Explosions

    NASA Astrophysics Data System (ADS)

    Hayward, C.; Park, J.; Stump, B. W.

    2016-12-01

    Since 2007 the Utah Test and Training Range (UTTR) has used explosive demolition as a method to destroy excess solid rocket motors ranging in size from 19 tons to less than 2 tons. From 2007 to 2014, 20 high quality seismic stations within 180 km recorded most of the more than 200 demolitions. This provides an interesting dataset to examine seismic source scaling for surface explosions. Based upon observer records, shots were of 4 sizes, corresponding to the size of the rocket motors. Instrument corrections for the stations were quality controlled by examining the P-wave amplitudes of all magnitude 6.5-8 earthquakes from 30 to 90 degrees away. For each station recording, the instrument corrected RMS seismic amplitude in the first 10 seconds after the P-onset was calculated. Waveforms at any given station for all the observed explosions are nearly identical. The observed RMS amplitudes were fit to a model including a term for combined distance and station correction, a term for observed RMS amplitude, and an error term for the actual demolition size. The observed seismic yield relationship is RMS = k*Weight^(2/3). Estimated yields for the largest shots vary by about 50% from the stated weights, with a nearly normal distribution.
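
    The quoted scaling RMS = k*Weight^(2/3) can be fitted and inverted in a few lines; the sketch below uses made-up numbers purely to illustrate the algebra, not the UTTR data.

      # Fit k in RMS = k * W**(2/3) (log-space mean) and invert for yield.
      import numpy as np

      weights = np.array([2.0, 5.0, 10.0, 19.0])        # tons (illustrative)
      rms_obs = np.array([0.8, 1.5, 2.3, 3.6])          # station-corrected RMS (arb.)

      k = np.exp(np.mean(np.log(rms_obs) - (2.0 / 3.0) * np.log(weights)))

      def yield_estimate(rms, k=k):
          return (rms / k) ** 1.5                       # invert RMS = k * W^(2/3)

      print(f"k = {k:.3f}, est. yield for RMS=2.3: {yield_estimate(2.3):.1f} tons")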

  20. Repeatability and heritability of reproductive traits in free-ranging snakes.

    PubMed

    Brown, G P; Shine, R

    2007-03-01

    The underlying genetic basis of life-history traits in free-ranging animals is critical to the effects of selection on such traits, but logistical constraints mean that such data are rarely available. Our long-term ecological studies on free-ranging oviparous snakes (keelbacks, Tropidonophis mairii (Gray, 1841), Colubridae) on an Australian floodplain provide the first such data for any tropical reptile. All size-corrected reproductive traits (egg mass, clutch size, clutch mass and post-partum maternal mass) were moderately repeatable between pairs of clutches produced by 69 female snakes after intervals of 49-1152 days, perhaps because maternal body condition was similar between clutches. Parent-offspring regression of reproductive traits of 59 pairs of mothers and daughters revealed high heritability for egg mass (h2= 0.73, SE=0.24), whereas heritability for the other three traits was low (< 0.37). The estimated heritability of egg mass may be inflated by maternal effects such as differential allocation of yolk steroids to different-sized eggs. High heritability of egg size may be maintained (rather than eroded by stabilizing selection) because selection acts on a trait (hatchling size) that is determined by the interaction between egg size and incubation substrate rather than by egg size alone. Variation in clutch size was mainly because of environmental factors (h2=0.04), indicating that one component of the trade-off between egg size and clutch size is under much tighter genetic control than the other. Thus, the phenotypic trade-off between egg size and egg number in keelback snakes occurs because each female snake must allocate a finite amount of energy into eggs of a genetically determined size.
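
    As a minimal sketch of the parent-offspring regression behind these heritability estimates, the code below doubles the mother-daughter regression slope (the standard single-parent estimator); the data are synthetic and the paper's size correction of the traits is not reproduced.

      # Single-parent (mother-daughter) regression estimate: h2 = 2 * slope.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 59
      mothers = rng.normal(1.0, 0.1, n)       # e.g., size-corrected egg mass
      daughters = 0.36 * (mothers - mothers.mean()) + 1.0 + rng.normal(0, 0.08, n)

      slope = np.polyfit(mothers, daughters, 1)[0]
      h2 = 2.0 * slope                        # double the single-parent slope
      print(f"slope = {slope:.2f}, h2 ~ {h2:.2f}")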

  1. Convergence analysis of two-node CMFD method for two-group neutron diffusion eigenvalue problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong, Yongjin; Park, Jinsu; Lee, Hyun Chul

    2015-12-01

    In this paper, the nonlinear coarse-mesh finite difference method with two-node local problem (CMFD2N) is proven to be unconditionally stable for neutron diffusion eigenvalue problems. The explicit current correction factor (CCF) is derived based on the two-node analytic nodal method (ANM2N), and a Fourier stability analysis is applied to the linearized algorithm. It is shown that the analytic convergence rate obtained by the Fourier analysis compares very well with the numerically measured convergence rate. It is also shown that the theoretical convergence rate is only governed by the converged second harmonic buckling and the mesh size. It is also notedmore » that the convergence rate of the CCF of the CMFD2N algorithm is dependent on the mesh size, but not on the total problem size. This is contrary to expectation for eigenvalue problem. The novel points of this paper are the analytical derivation of the convergence rate of the CMFD2N algorithm for eigenvalue problem, and the convergence analysis based on the analytic derivations.« less

  2. Methodological basis for the optimization of a marine sea-urchin embryo test (SET) for the ecological assessment of coastal water quality.

    PubMed

    Saco-Alvarez, Liliana; Durán, Iria; Ignacio Lorenzo, J; Beiras, Ricardo

    2010-05-01

    The sea-urchin embryo test (SET) has been frequently used as a rapid, sensitive, and cost-effective biological tool for marine monitoring worldwide, but the selection of a sensitive, objective, and automatically readable endpoint, a stricter quality control to guarantee optimum handling and biological material, and the identification of confounding factors that interfere with the response have hampered its widespread routine use. Size increase in a minimum of n=30 individuals per replicate, either normal larvae or earlier developmental stages, was preferred to observer-dependent, discontinuous responses as test endpoint. Control size increase after 48 h incubation at 20 degrees C must meet an acceptability criterion of 218 microm. In order to avoid false positives, minimums of 32 per thousand salinity, 7 pH and 2 mg/L oxygen, and a maximum of 40 microg/L NH(3) (NOEC) are required in the incubation media. For in situ testing, size increase rates must be corrected on a degree-day basis using 12 degrees C as the developmental threshold. Copyright 2010 Elsevier Inc. All rights reserved.
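
    The degree-day correction mentioned for in situ testing can be expressed compactly; the sketch below assumes hourly temperature readings and uses the 12 degrees C threshold quoted above.

      # Size-increase rate normalized per degree-day above a 12 degC threshold.
      import numpy as np

      def degree_days(temps_c, hours_per_reading=1.0, threshold=12.0):
          temps_c = np.asarray(temps_c, float)
          return np.sum(np.maximum(temps_c - threshold, 0.0)) * hours_per_reading / 24.0

      def corrected_growth_rate(size_increase_um, temps_c):
          dd = degree_days(temps_c)
          return size_increase_um / dd if dd > 0 else np.nan   # microns per degree-day

      # toy usage: 48 hourly readings around 20 degC
      print(corrected_growth_rate(218.0, np.full(48, 20.0)))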

  3. Factors Associated with the Performance and Cost-Effectiveness of Using Lymphatic Filariasis Transmission Assessment Surveys for Monitoring Soil-Transmitted Helminths: A Case Study in Kenya

    PubMed Central

    Smith, Jennifer L.; Sturrock, Hugh J. W.; Assefa, Liya; Nikolay, Birgit; Njenga, Sammy M.; Kihara, Jimmy; Mwandawiro, Charles S.; Brooker, Simon J.

    2015-01-01

    Transmission assessment surveys (TAS) for lymphatic filariasis have been proposed as a platform to assess the impact of mass drug administration (MDA) on soil-transmitted helminths (STHs). This study used computer simulation and field data from pre- and post-MDA settings across Kenya to evaluate the performance and cost-effectiveness of the TAS design for STH assessment compared with alternative survey designs. Variations in the TAS design and different sample sizes and diagnostic methods were also evaluated. The district-level TAS design correctly classified more districts compared with standard STH designs in pre-MDA settings. Aggregating districts into larger evaluation units in a TAS design decreased performance, whereas age group sampled and sample size had minimal impact. The low diagnostic sensitivity of Kato-Katz and mini-FLOTAC methods was found to increase misclassification. We recommend using a district-level TAS among children 8–10 years of age to assess STH but suggest that key consideration is given to evaluation unit size. PMID:25487730

  4. Photovoltaic static concentrator analysis

    NASA Astrophysics Data System (ADS)

    Almonacid, G.; Luque, A.; Molledo, A. G.

    1984-12-01

    Ray tracing is the basis of the present analysis of truncated bifacial compound parabolic concentrators filled with a dielectric substance, which are of interest in photovoltaic applications where the bifacial cells allow higher static concentrations to be achieved. Among the figures of merit for this type of concentrator, the directional intercept factor plays a major role and is defined as the ratio of the power of the collector to that at the entry aperture, in a lossless concentrator illuminated by light arriving from a given direction. A procedure for measuring outdoor, full size panels has been developed, and a correction method for avoiding the effect of unwanted diffuse radiation during the measurements is presented.

  5. Association of HMO penetration and other credit quality factors with tax-exempt bond yields.

    PubMed

    McCue, M J

    1997-01-01

    This paper evaluates the relationship of HMO penetration, as well as other credit quality measures of market, institutional, operational, and financial traits, to tax-exempt bond yields. The study analyzed more than 1,500 bond issues from 1990 through 1993 and corrected for simultaneous relationships between bond size and yield and selection bias. The study found lower bond yields for hospitals located in markets with no HMO penetration. Lower yields for bond issues also were found for facilities generating higher numbers of days cash on hand and greater debt service coverage. Finally, results show that hospitals with higher occupancy rates achieve a lower yield.

  6. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bergstrom, P

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons), and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  7. A linear diode array (JFD-5) for match line in vivo dosimetry in photon and electron beams; evaluation for a chest wall irradiation technique.

    PubMed

    Essers, M; van Battum, L; Heijmen, B J

    2001-11-01

    In vivo dosimetry using thermoluminescence detectors (TLD) is routinely performed in our institution to determine dose inhomogeneities in the match line region during chest wall irradiation. However, TLDs have some drawbacks: online in vivo dosimetry cannot be performed; generally, doses delivered by the contributing fields are not measured separately; measurement analysis is time consuming. To overcome these problems, the Joined Field Detector (JFD-5), a detector for match line in vivo dosimetry based on diodes, has been developed. This detector and its characteristics are presented. The JFD-5 is a linear array of 5 p-type diodes. The middle three diodes, used to measure the dose in the match line region, are positioned at 5-mm intervals. The outer two diodes, positioned at 3-cm distance from the central diode, are used to measure the dose in the two contributing fields. For three JFD-5 detectors, calibration factors for different energies, and sensitivity correction factors for non-standard field sizes, patient skin temperature, and oblique incidence have been determined. The accuracy of penumbra and match line dose measurements has been determined in phantom studies and in vivo. Calibration factors differ significantly between diodes and between photon and electron beams. However, conversion factors between energies can be applied. The correction factor for temperature is 0.35%/degree C, and for oblique incidence 2% at maximum. The penumbra measured with the JFD-5 agrees well with film and linear diode array measurements. JFD-5 in vivo match line dosimetry reproducibility was 2.0% (1 SD) while the agreement with TLD was 0.999+/-0.023 (1 SD). The JFD-5 can be used for accurate, reproducible, and fast on-line match line in vivo dosimetry.
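
    As an illustration of how such calibration and sensitivity corrections might be chained for a single diode reading, the sketch below divides the reading by a temperature-dependent sensitivity factor (0.35% per degree C, as quoted) and multiplies by the remaining factors; the 22 degrees C reference temperature and the numerical values in the example are assumptions, not values from the paper.

      # Chain calibration and sensitivity corrections for one diode reading.
      def diode_dose(reading, calib_factor, skin_temp_c, field_size_factor=1.0,
                     obliquity_factor=1.0, ref_temp_c=22.0, temp_slope=0.0035):
          # sensitivity rises ~0.35%/degC, so divide the reading by this factor
          temp_factor = 1.0 + temp_slope * (skin_temp_c - ref_temp_c)
          return reading * calib_factor * field_size_factor * obliquity_factor / temp_factor

      # toy usage: a reading of 100 (arb. units) at 32 degC skin temperature
      print(diode_dose(100.0, calib_factor=0.85, skin_temp_c=32.0))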

  8. Higher Flexibility and Better Immediate Spontaneous Correction May Not Gain Better Results for Nonstructural Thoracic Curve in Lenke 5C AIS Patients

    PubMed Central

    Zhang, Yanbin; Lin, Guanfeng; Wang, Shengru; Zhang, Jianguo; Shen, Jianxiong; Wang, Yipeng; Guo, Jianwei; Yang, Xinyu; Zhao, Lijuan

    2016-01-01

    Study Design. Retrospective study. Objective. To study the behavior of the unfused thoracic curve in Lenke type 5C during the follow-up and to identify risk factors for its correction loss. Summary of Background Data. Few studies have focused on the spontaneous behaviors of the unfused thoracic curve after selective thoracolumbar or lumbar fusion during the follow-up and the risk factors for spontaneous correction loss. Methods. We retrospectively reviewed 45 patients (41 females and 4 males) with AIS who underwent selective TL/L fusion from 2006 to 2012 in a single institution. The follow-up averaged 36 months (range, 24–105 months). Patients were divided into two groups. Thoracic curves in group A improved or maintained their curve magnitude after spontaneous correction, with a negative or no correction loss during the follow-up. Thoracic curves in group B deteriorated after spontaneous correction with a positive correction loss. Univariate analysis and multivariate analysis were performed to identify the risk factors for correction loss of the unfused thoracic curves. Results. The minor thoracic curve was 26° preoperatively. It was corrected to 13° immediately with a spontaneous correction of 48.5%. At final follow-up it was 14° with a correction loss of 1°. Thoracic curves did not deteriorate after spontaneous correction in 23 cases in group A, while 22 cases were identified with thoracic curve progressing in group B. In multivariate analysis, two risk factors were independently associated with thoracic correction loss: higher flexibility and better immediate spontaneous correction rate of thoracic curve. Conclusion. Posterior selective TL/L fusion with pedicle screw constructs is an effective treatment for Lenke 5C AIS patients. Nonstructural thoracic curves with higher flexibility or better immediate correction are more likely to progress during the follow-up and close attention must be paid to these patients in case of decompensation. Level of Evidence: 4 PMID:27831989

  9. Two-Dimensional Thermal Boundary Layer Corrections for Convective Heat Flux Gauges

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Haddad, George

    2007-01-01

    This work presents a CFD (Computational Fluid Dynamics) study of two-dimensional thermal boundary layer correction factors for convective heat flux gauges mounted in flat plate subjected to a surface temperature discontinuity with variable properties taken into account. A two-equation k - omega turbulence model is considered. Results are obtained for a wide range of Mach numbers (1 to 5), gauge radius ratio, and wall temperature discontinuity. Comparisons are made for correction factors with constant properties and variable properties. It is shown that the variable-property effects on the heat flux correction factors become significant

  10. Modeling of stress distributions on the microstructural level in Alloy 600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kozaczek, K.J.; Petrovic, B.G.; Ruud, C.O.

    1995-04-01

    Stress distribution in a random polycrystalline material (Alloy 600) was studied using a topologically correct microstructural model. Distributions of von Mises and hydrostatic stresses at the grain vertices, which could be important in intergranular stress corrosion cracking, were analyzed as functions of microstructure, grain orientations and loading conditions. Grain size, shape, and orientation had a more pronounced effect on stress distribution than loading conditions. At grain vertices the stress concentration factor was higher for hydrostatic stress (1.7) than for von Mises stress (1.5). The stress/strain distribution in the volume (grain interiors) is a normal distribution and does not depend on the location of the studied material volume, i.e., surface vs. bulk. The analysis of stress distribution in the volume showed a von Mises stress concentration of 1.75 and a stress concentration of 2.2 for the hydrostatic pressure. The observed stress concentration is high enough to cause localized plastic microdeformation, even when the polycrystalline aggregate is in the macroscopic elastic regime. Modeling of stresses and strains in polycrystalline materials can identify the microstructures (grain size distributions, texture) intrinsically susceptible to stress/strain concentrations and justify the correctness of the applied stress state during the stress corrosion cracking tests. Also, it supplies the information necessary to formulate the local failure criteria and interpret nondestructive stress measurements.

  11. Modeling aboveground biomass of Tamarix ramosissima in the Arkansas River Basin of Southeastern Colorado, USA

    USGS Publications Warehouse

    Evangelista, P.; Kumar, S.; Stohlgren, T.J.; Crall, A.W.; Newman, G.J.

    2007-01-01

    Predictive models of aboveground biomass of nonnative Tamarix ramosissima of various sizes were developed using destructive sampling techniques on 50 individuals and four 100-m2 plots. Each sample was measured for average height (m) of stems and canopy area (m2) prior to cutting, drying, and weighing. Five competing regression models (P < 0.05) were developed to estimate aboveground biomass of T. ramosissima using average height and/or canopy area measurements and were evaluated using Akaike's Information Criterion corrected for small sample size (AICc). Our best model (AICc = -148.69, ΔAICc = 0) successfully predicted T. ramosissima aboveground biomass (R2 = 0.97) and used average height and canopy area as predictors. Our 2nd-best model, using the same predictors, was also successful in predicting aboveground biomass (R2 = 0.97, AICc = -131.71, ΔAICc = 16.98). A 3rd model demonstrated high correlation between only aboveground biomass and canopy area (R2 = 0.95), while 2 additional models found high correlations between aboveground biomass and average height measurements only (R2 = 0.90 and 0.70, respectively). These models illustrate how simple field measurements, such as height and canopy area, can be used in allometric relationships to accurately predict aboveground biomass of T. ramosissima. Although a correction factor may be necessary for predictions at larger scales, the models presented will prove useful for many research and management initiatives.
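
    A minimal sketch of this kind of allometric model comparison is shown below: a log-log regression of biomass on height and canopy area, scored with AICc. The data are synthetic and the coefficients are not those of the published models.

      # Log-log allometric fit plus AICc for model comparison (synthetic data).
      import numpy as np

      def aicc(rss, n, k):
          aic = n * np.log(rss / n) + 2 * k
          return aic + 2 * k * (k + 1) / (n - k - 1)

      rng = np.random.default_rng(2)
      n = 50
      height = rng.uniform(0.5, 4.0, n)            # m
      canopy = rng.uniform(0.2, 12.0, n)           # m^2
      biomass = np.exp(0.8 * np.log(height) + 0.9 * np.log(canopy)
                       + rng.normal(0, 0.2, n))

      X = np.column_stack([np.ones(n), np.log(height), np.log(canopy)])
      beta, rss, *_ = np.linalg.lstsq(X, np.log(biomass), rcond=None)
      print("coefficients:", beta, "AICc:", aicc(rss[0], n, k=X.shape[1] + 1))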

  12. Dye shift: a neglected source of genotyping error in molecular ecology.

    PubMed

    Sutton, Jolene T; Robertson, Bruce C; Jamieson, Ian G

    2011-05-01

    Molecular ecologists must be vigilant in detecting and accounting for genotyping error, yet potential errors stemming from dye-induced mobility shift (dye shift) may be frequently neglected and largely unknown to researchers who employ 3-primer systems with automated genotyping. When left uncorrected, dye shift can lead to mis-scoring alleles and even to falsely calling new alleles if different dyes are used to genotype the same locus in subsequent reactions. When we used four different fluorophore labels from a standard dye set to genotype the same set of loci, differences in the resulting size estimates for a single allele ranged from 2.07 bp to 3.68 bp. The strongest effects were associated with the fluorophore PET, and relative degree of dye shift was inversely related to locus size. We found little evidence in the literature that dye shift is regularly accounted for in 3-primer studies, despite knowledge of this phenomenon existing for over a decade. However, we did find some references to erroneous standard correction factors for the same set of dyes that we tested. We thus reiterate the need for strict quality control when attempting to reduce possible sources of genotyping error, and in cases where different dyes are applied to a single locus, perhaps mistakenly, we strongly discourage researchers from assuming generic correction patterns. © 2011 Blackwell Publishing Ltd.

  13. The Additional Secondary Phase Correction System for AIS Signals

    PubMed Central

    Wang, Xiaoye; Zhang, Shufang; Sun, Xiaowen

    2017-01-01

    This paper looks at the development and implementation of the additional secondary phase factor (ASF) real-time correction system for the Automatic Identification System (AIS) signal. A large number of test data were collected using the developed ASF correction system and the propagation characteristics of the AIS signal that transmits at sea and the ASF real-time correction algorithm of the AIS signal were analyzed and verified. Accounting for the different hardware of the receivers in the land-based positioning system and the variation of the actual environmental factors, the ASF correction system corrects original measurements of positioning receivers in real time and provides corrected positioning accuracy within 10 m. PMID:28362330

  14. Genetic and environmental influence on thyroid gland volume and thickness of thyroid isthmus: a twin study.

    PubMed

    Tarnoki, Adam Domonkos; Tarnoki, David Laszlo; Speer, Gabor; Littvay, Levente; Bata, Pal; Garami, Zsolt; Berczi, Viktor; Karlinger, Kinga

    2015-12-01

    Decreased thyroid volume has been related to increased prevalence of thyroid cancer. One hundred and fourteen Hungarian adult twin pairs (69 monozygotic, 45 dizygotic) with or without known thyroid disorders underwent thyroid ultrasound. Thickness of the thyroid isthmus was measured at the thickest portion of the gland in the midline using electronic calipers at the time of scanning. Volume of the thyroid lobe was computed according to the following formula: thyroid height*width*depth*correction factor (0.63). Age-, sex-, body mass index- and smoking-adjusted heritability of the thickness of thyroid isthmus was 50% (95% confidence interval [CI], 35 to 66%). Neither left nor right thyroid volume showed additive genetic effects, but shared environments were 68% (95% CI, 48 to 80%) and 79% (95% CI, 72 to 87%), respectively. Magnitudes of monozygotic and dizygotic co-twin correlations were not substantially impacted by the correction of covariates of body mass index and smoking. Unshared environmental effects showed a moderate influence on dependent parameters (24-50%). Our analysis supports that familial factors are important for thyroid measures in a general twin population. A larger sample size is needed to show whether this is because of common environmental (e.g. intrauterine effects, regional nutrition habits, iodine supply) or genetic effects.
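
    The lobe-volume formula quoted above is simple enough to state as code; the example dimensions below are illustrative only.

      # Lobe volume = height * width * depth * 0.63, with dimensions in cm (result in mL).
      def thyroid_lobe_volume(height_cm, width_cm, depth_cm, correction=0.63):
          return height_cm * width_cm * depth_cm * correction

      # toy usage: a 4.5 x 1.8 x 1.6 cm lobe
      print(f"{thyroid_lobe_volume(4.5, 1.8, 1.6):.1f} mL")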

  15. Development of 1D Liner Compression Code for IDL

    NASA Astrophysics Data System (ADS)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code is developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively-driven metal liners. The driver circuit, magnetic field, joule heating, and liner dynamics calculations are performed at each time step in sequence to couple these effects in the code. To obtain more realistic magnetic field results for a given drive coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through use of a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with the results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  16. SU-E-T-123: Anomalous Altitude Effect in Permanent Implant Brachytherapy Seeds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watt, E; Spencer, DP; Meyer, T

    Purpose: Permanent seed implant brachytherapy procedures require the measurement of the air kerma strength of seeds prior to implant. This is typically accomplished using a well-type ionization chamber. Previous measurements (Griffin et al., 2005; Bohm et al., 2005) of several low-energy seeds using the air-communicating HDR 1000 Plus chamber have demonstrated that the standard temperature-pressure correction factor, P{sub TP}, may overcompensate for air density changes induced by altitude variations by up to 18%. The purpose of this work is to present empirical correction factors for two clinically-used seeds (IsoAid ADVANTAGE™ {sup 103}Pd and Nucletron selectSeed {sup 125}I) for which empirical altitude correction factors do not yet exist in the literature when measured with the HDR 1000 Plus chamber. Methods: An in-house constructed pressure vessel containing the HDR 1000 Plus well chamber and a digital barometer/thermometer was pumped or evacuated, as appropriate, to a variety of pressures from 725 to 1075 mbar. Current measurements, corrected with P{sub TP}, were acquired for each seed at these pressures and normalized to the reading at ‘standard’ pressure (1013.25 mbar). Results: Measurements in this study have shown that utilization of P{sub TP} can overcompensate in the corrected current reading by up to 20% and 17% for the IsoAid Pd-103 and the Nucletron I-125 seed respectively. Compared to literature correction factors for other seed models, the correction factors in this study diverge by up to 2.6% and 3.0% for iodine (with silver) and palladium respectively, indicating the need for seed-specific factors. Conclusion: The use of seed specific altitude correction factors can reduce uncertainty in the determination of air kerma strength. The empirical correction factors determined in this work can be applied in clinical quality assurance measurements of air kerma strength for two previously unpublished seed designs (IsoAid ADVANTAGE™ {sup 103}Pd and Nucletron selectSeed {sup 125}I) with the HDR 1000 Plus well chamber.
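
    For reference, the standard temperature-pressure factor P{sub TP} discussed above has the usual form shown below; the reference conditions of 22 degrees C and 101.33 kPa are the commonly used values and are an assumption here. The abstract's point is that this factor alone over-corrects for low-energy seeds, so the empirical seed-specific factors are still required.

      # Standard air-density correction for an air-communicating chamber.
      def p_tp(temp_c, pressure_kpa, ref_temp_c=22.0, ref_pressure_kpa=101.33):
          return ((273.2 + temp_c) / (273.2 + ref_temp_c)) * (ref_pressure_kpa / pressure_kpa)

      # toy usage: a well-chamber reading at altitude (20 degC, 84 kPa)
      print(f"P_TP = {p_tp(20.0, 84.0):.3f}")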

  17. Laser diffraction particle sizing in STRESS

    NASA Astrophysics Data System (ADS)

    Agrawal, Y. C.; Pottsmith, H. C.

    1994-08-01

    An autonomous instrument system for measuring particle size spectra in the sea is described. The instrument records the small-angle scattering characteristics of the particulate ensemble present in water. The small-angle scattering distribution is inverted into size spectra. The discussion of the instrument in this paper is included with a review of the information content of the data. It is noted that the inverse problem is sensitive to the forward model for light scattering employed in the construction of the matrix. The instrument system is validated using monodisperse polystyrene and NIST standard distributions of glass spheres. Data from a long-term deployment on the California shelf during the field experiment Sediment Transport Events on Shelves and Slopes (STRESS) are included. The size distribution in STRESS, measured at a fixed height-above-bed 1.2 m, showed significant variability over time. In particular, the volume distribution sometimes changed from mono-modal to bi-modal during the experiment. The data on particle-size distribution are combined with friction velocity measurements in the current boundary layer to produce a size-dependent estimate of the suspended mass at 10 cm above bottom. It is argued that these concentrations represent the reference concentration at the bed for the smaller size classes. The suspended mass at all sizes shows a strong correlation with wave variance. Using the size distribution, corrections in the optical transmissometry calibration factor are estimated for the duration of the experiment. The change in calibration at 1.2 m above bed (mab) is shown to have a standard error of 30% over the duration of the experiment with a range of 1.8-0.8.

  18. Holographic Phase Correction.

    DTIC Science & Technology

    1987-06-01

    functions, so that, for example, the device could function as a combined beam splitter/multifocus lens/mirror. Offset against these advantages are...illustrated in Figure 7. Here the reconstructed, phase-corrected wave is interfered with a plane wave introduced after the hologram, via a beam splitter...the recording medium). c. The phase correction can be combined with other beam forming functions. This can result in further savings in size and weight

  19. Relativistic corrections to the form factors of Bc into P-wave orbitally excited charmonium

    NASA Astrophysics Data System (ADS)

    Zhu, Ruilin

    2018-06-01

    We investigated the form factors of the Bc meson into P-wave orbitally excited charmonium using the nonrelativistic QCD effective theory. Through the analytic computation, the next-to-leading order relativistic corrections to the form factors were obtained, and the asymptotic expressions were studied in the infinite bottom quark mass limit. Employing the general form factors, we discussed the exclusive decays of the Bc meson into P-wave orbitally excited charmonium and a light meson. We found that the relativistic corrections lead to a large correction for the form factors, which makes the branching ratios of the decay channels B (Bc ± →χcJ (hc) +π± (K±)) larger. These results are useful for the phenomenological analysis of the Bc meson decays into P-wave charmonium, which shall be tested in the LHCb experiments.

  20. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    NASA Technical Reports Server (NTRS)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  1. Power corrections to TMD factorization for Z-boson production

    DOE PAGES

    Balitsky, I.; Tarasov, A.

    2018-05-24

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to TMD factorization formula for Z-boson production and Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  2. Power corrections to TMD factorization for Z-boson production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balitsky, I.; Tarasov, A.

    A typical factorization formula for production of a particle with a small transverse momentum in hadron-hadron collisions is given by a convolution of two TMD parton densities with cross section of production of the final particle by the two partons. For practical applications at a given transverse momentum, though, one should estimate at what momenta the power corrections to the TMD factorization formula become essential. In this work, we calculate the first power corrections to TMD factorization formula for Z-boson production and Drell-Yan process in high-energy hadron-hadron collisions. At the leading order in N_c, power corrections are expressed in terms of leading power TMDs by QCD equations of motion.

  3. Accurate and fast multiple-testing correction in eQTL studies.

    PubMed

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm

    2015-06-04

    In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck in eQTL studies. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
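
    The sketch below illustrates the multivariate-normal idea in its simplest sampling form: the gene-level p value is the chance that the maximum of correlated null statistics (correlation taken from local LD) exceeds the observed best statistic. This is an illustration only; the published method uses more efficient numerics and a small-sample correction rather than brute-force sampling.

      # Gene-level p value from the null distribution of the max |z| over
      # variants whose correlation matrix mirrors local LD (sampling sketch).
      import numpy as np
      from scipy import stats

      def gene_level_p(z_obs, ld_corr, n_draws=100_000, seed=0):
          rng = np.random.default_rng(seed)
          z_null = rng.multivariate_normal(np.zeros(len(z_obs)), ld_corr, size=n_draws)
          best_obs = np.max(np.abs(z_obs))
          return np.mean(np.max(np.abs(z_null), axis=1) >= best_obs)

      # toy usage: three variants in moderate LD
      ld = np.array([[1.0, 0.6, 0.3], [0.6, 1.0, 0.5], [0.3, 0.5, 1.0]])
      z = stats.norm.isf(np.array([0.004, 0.03, 0.2]) / 2)   # two-sided p -> |z|
      print(gene_level_p(z, ld))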

  4. A comparative study of the effects of cone-plate and parallel-plate geometries on rheological properties under oscillatory shear flow

    NASA Astrophysics Data System (ADS)

    Song, Hyeong Yong; Salehiyan, Reza; Li, Xiaolei; Lee, Seung Hak; Hyun, Kyu

    2017-11-01

    In this study, the effects of cone-plate (C/P) and parallel-plate (P/P) geometries were investigated on the rheological properties of various complex fluids, e.g. single-phase (polymer melts and solutions) and multiphase systems (polymer blend and nanocomposite, and suspension). Small amplitude oscillatory shear (SAOS) tests were carried out to compare linear rheological responses, while nonlinear responses were compared using large amplitude oscillatory shear (LAOS) tests at different frequencies. Moreover, the Fourier-transform (FT) rheology method was used to analyze the nonlinear responses under LAOS flow. Experimental results were compared with predictions obtained by single-point correction and shear rate correction. For all systems, SAOS data measured by C/P and P/P coincide with each other, but results showed discordance between C/P and P/P measurements in the nonlinear regime. For all systems except xanthan gum solutions, first-harmonic moduli were corrected using a single horizontal shift factor, whereas FT rheology-based nonlinear parameters (I3/1, I5/1, Q3, and Q5) were corrected using vertical shift factors that are well predicted by single-point correction. Xanthan gum solutions exhibited anomalous corrections. Their first-harmonic Fourier moduli were superposed using a horizontal shift factor predicted by the shear rate correction applicable to highly shear-thinning fluids. Distinct corrections were observed for the FT rheology-based nonlinear parameters: I3/1 and I5/1 were superposed by horizontal shifts, while the other systems displayed vertical shifts of I3/1 and I5/1. Q3 and Q5 of xanthan gum solutions were corrected using both horizontal and vertical shift factors. In particular, the obtained vertical shift factors for Q3 and Q5 were twice as large as the predictions made by single-point correction. Such larger values are rationalized by the definitions of Q3 and Q5. These results highlight the significance of horizontal shift corrections in nonlinear oscillatory shear data.
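
    As a schematic illustration of the shift-factor bookkeeping described above, the short Python sketch below overlays a parallel-plate nonlinear parameter (for example I3/1 versus strain amplitude) onto cone-plate data using a horizontal factor applied to the strain axis and a vertical factor applied to the parameter itself. The numerical values are placeholders, not the single-point or shear-rate corrections reported in the study.

    ```python
    import numpy as np

    def apply_shifts(strain_amplitude, values, a_horizontal=1.0, b_vertical=1.0):
        """Return (shifted strain amplitudes, shifted values) so that a
        parallel-plate curve can be superposed onto cone-plate data."""
        return strain_amplitude * a_horizontal, values * b_vertical

    # Hypothetical parallel-plate I3/1 curve versus strain amplitude
    gamma0 = np.logspace(-1, 1, 20)
    i31_pp = 0.02 * gamma0**2 / (1 + 0.05 * gamma0**2)

    # Example: vertical shift only, with a placeholder factor of 1.3
    gamma0_s, i31_s = apply_shifts(gamma0, i31_pp, a_horizontal=1.0, b_vertical=1.3)
    ```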

  5. Polymer nanomechanics: Separating the size effect from the substrate effect in nanoindentation

    NASA Astrophysics Data System (ADS)

    Li, Le; Encarnacao, Lucas M.; Brown, Keith A.

    2017-01-01

    While the moduli of thin polymer films are known to deviate dramatically from their bulk values, there is not a consensus regarding the nature of this size effect. In particular, indenting experiments appear to contradict results from both buckling experiments and molecular dynamics calculations. In this letter, we present a combined computational and experimental method for measuring the modulus of nanoindented soft films on rigid substrates that reconciles this discrepancy. Through extensive finite element simulation, we determine a correction to the Hertzian contact model that separates the substrate effect from the thickness-dependent modulus of the film. Interestingly, this correction only depends upon a dimensionless film thickness and the Poisson ratio of the film. To experimentally test this approach, we prepared poly(methyl methacrylate), polystyrene, and parylene films with thicknesses ranging from 20 to 300 nm and studied these films using atomic force microscope-based nanoindenting. Strikingly, when experiments were interpreted using the computationally derived substrate correction, sub-70 nm films were found to be softer than bulk, in agreement with buckling experiments and molecular dynamics studies. This correction can serve as a general method for unambiguously determining the size effect of thin polymer films and ultimately lead to the ability to quantitatively image the mechanical properties of heterogeneous materials such as composites.
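
    A minimal Python sketch of the workflow implied above: extract an apparent modulus from a spherical Hertz fit, then divide out a substrate-stiffening factor that depends only on a dimensionless thickness. The stiffening function used here is a stand-in; the finite-element-derived correction from the paper is not given in the abstract, and the Poisson-ratio dependence is folded into the placeholder.

    ```python
    import numpy as np

    def hertz_modulus(force, depth, tip_radius, nu=0.35):
        """Apparent film modulus from a spherical Hertz fit, F = (4/3)*E*sqrt(R)*d**1.5,
        assuming a rigid tip."""
        e_reduced = 3.0 * force / (4.0 * np.sqrt(tip_radius) * depth**1.5)
        return e_reduced * (1.0 - nu**2)

    def substrate_corrected_modulus(e_apparent, contact_radius, thickness, stiffening_fn):
        """Remove the rigid-substrate contribution by dividing by a stiffening
        factor that depends on a dimensionless thickness (here a/t);
        stiffening_fn is a placeholder for the FEA-derived correction."""
        return e_apparent / stiffening_fn(contact_radius / thickness)

    # Illustrative (not the paper's) stiffening: grows as the contact radius
    # becomes comparable to the film thickness
    stiffening = lambda x: 1.0 + 1.3 * x

    F, d, R, t = 20e-9, 5e-9, 30e-9, 50e-9        # N, m, m, m
    E_app = hertz_modulus(F, d, R)
    a = np.sqrt(R * d)                            # Hertzian contact radius
    print(E_app, substrate_corrected_modulus(E_app, a, t, stiffening))
    ```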

  6. Hounsfield Unit inaccuracy in computed tomography lesion size and density, diagnostic quality vs attenuation correction

    NASA Astrophysics Data System (ADS)

    Szczepura, Katy; Thompson, John; Manning, David

    2017-03-01

    In computed tomography the Hounsfield Units (HU) are used as an indicator of the tissue type based on the linear attenuation coefficients of the tissue. HU accuracy is essential when this metric is used in any form to support diagnosis. In hybrid imaging, such as SPECT/CT and PET/CT, the information is used for attenuation correction (AC) of the emission images. This work investigates the HU accuracy of nodules of known size and HU, comparing diagnostic quality (DQ) images with images used for AC.

  7. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  8. 49 CFR 325.79 - Application of correction factors.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... illustrate the application of correction factors to sound level measurement readings: (1) Example 1—Highway operations. Assume that a motor vehicle generates a maximum observed sound level reading of 86 dB(A) during a... of the test site is acoustically “hard.” The corrected sound level generated by the motor vehicle...

  9. Factors influencing workplace violence risk among correctional health workers: insights from an Australian survey.

    PubMed

    Cashmore, Aaron W; Indig, Devon; Hampton, Stephen E; Hegney, Desley G; Jalaludin, Bin B

    2016-11-01

    Little is known about the environmental and organisational determinants of workplace violence in correctional health settings. This paper describes the views of health professionals working in these settings on the factors influencing workplace violence risk. All employees of a large correctional health service in New South Wales, Australia, were invited to complete an online survey. The survey included an open-ended question seeking the views of participants about the factors influencing workplace violence in correctional health settings. Responses to this question were analysed using qualitative thematic analysis. Participants identified several factors that they felt reduced the risk of violence in their workplace, including: appropriate workplace health and safety policies and procedures; professionalism among health staff; the presence of prison guards and the quality of security provided; and physical barriers within clinics. Conversely, participants perceived workplace violence risk to be increased by: low health staff-to-patient and correctional officer-to-patient ratios; high workloads; insufficient or underperforming security staff; and poor management of violence, especially horizontal violence. The views of these participants should inform efforts to prevent workplace violence among correctional health professionals.

  10. [Comparison of annual risk for tuberculosis infection (1994-2001) in school children in Djibouti: methodological limitations and epidemiological value in a hyperendemic context].

    PubMed

    Bernatas, J J; Mohamed Ali, I; Ali Ismaël, H; Barreh Matan, A

    2008-12-01

    The purpose of this report was to describe a tuberculin survey conducted in 2001 to assess the trend in the annual risk for tuberculosis infection in Djibouti and compare the resulting data with those obtained in a previous survey conducted in 1994. In 2001, cluster sampling allowed selection of 5599 school children between the ages of 6 and 10 years, including 31.2% (1747/5599) without a BCG vaccination scar. In this sample, the annual risk of infection (ARI) estimated using cutoff points of 6 mm, 10 mm, and 14 mm corrected by a factor of 1/0.82, and using a mode value (18 mm) determined according to the "mirror" method, was 4.67%, 3.64%, 3.19% and 2.66%, respectively. The distribution of positive tuberculin skin reaction sizes was significantly different from a normal distribution. In 1994, a total of 5257 children were selected using the same method. The distribution of positive reactions was not significantly different from a Gaussian distribution, and 28.6% (1505/5257) did not have a BCG scar. The ARI estimated using cutoff points of 6 mm, 10 mm, and 14 mm corrected by a factor of 1/0.82 and a mode value (17 mm) determined according to the "mirror" method was 2.68%, 2.52%, 2.75% and 3.32%, respectively. Tuberculin skin reaction size among positive skin test reactors was correlated with the presence of a BCG scar, and its mean was significantly higher among children with a BCG scar. The proportion of positive skin test reactors was also higher in the BCG scar group regardless of the cutoff point selected. Comparison of prevalence rates and ARI values did not allow any clear conclusion to be drawn, mainly because of a drastic difference in the positive-reaction distribution profiles between the two studies. The distribution of skin test reaction sizes in the 1994 study could be modelled by a Gaussian distribution, whereas that in the 2001 study could not. A partial explanation for the positive-reaction distribution observed in the 2001 study might be the existence of cross-reactions with environmental mycobacteria.
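
    For readers unfamiliar with the calculation, the sketch below shows the standard way an annual risk of infection is derived from a tuberculin survey: the prevalence of positive reactors (optionally corrected by the 1/0.82 sensitivity factor mentioned above) is converted to an annual risk through ARI = 1 − (1 − P)^(1/mean age). The numbers in the example are illustrative, not taken from either survey.

    ```python
    def annual_risk_of_infection(n_positive, n_children, mean_age, sens_correction=1 / 0.82):
        """Estimate the annual risk of TB infection (ARI) from a tuberculin survey,
        correcting the prevalence for test sensitivity and applying the standard
        relation ARI = 1 - (1 - P)**(1 / mean_age)."""
        p = min(1.0, (n_positive / n_children) * sens_correction)
        return 1.0 - (1.0 - p) ** (1.0 / mean_age)

    # Illustrative numbers only (not the survey results)
    print(annual_risk_of_infection(n_positive=450, n_children=5599, mean_age=8.0))
    ```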

  11. SU-F-T-70: A High Dose Rate Total Skin Electron Irradiation Technique with A Specific Inter-Film Variation Correction Method for Very Large Electron Beam Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, X; Rosenfield, J; Dong, X

    2016-06-15

    Purpose: Rotational total skin electron irradiation (RTSEI) is used in the treatment of cutaneous T-cell lymphoma. Because of inter-film uniformity variations, dosimetry of a very large, very low-energy electron beam is challenging. This work provides a method to improve the accuracy of flatness and symmetry measurements for the very large, low-energy treatment field used in dual-beam RTSEI. Methods: RTSEI is delivered with dual fields at gantry angles of 270° ± 20° to cover the upper and lower halves of the patient body with acceptable beam uniformity. The field size is on the order of 230 cm in vertical height and 120 cm in horizontal width, and the beam energy is a degraded 6 MeV (6 mm PMMA spoiler). We utilized parallel-plate chambers, Gafchromic films and OSLDs as measuring devices for absolute dose, B-factor, stationary and rotational percent depth dose, and beam uniformity. To reduce inter-film dosimetric variation, we introduced a new correction method for analyzing beam uniformity. This method uses image-processing techniques that combine film values before and after irradiation to compensate for dose-response differences among films. Results: Stationary and rotational depth-dose measurements demonstrated that Rp is 2 cm for the rotational delivery and that the depth of maximum dose is shifted toward the surface (3 mm). Phantom dosimetry showed that dose non-uniformity was reduced to 3.01% for vertical flatness and 2.35% for horizontal flatness after correction, achieving better flatness and uniformity. The absolute dose readings of calibrated films after our correction matched the OSLD readings. Conclusion: The proposed correction method for Gafchromic films will be a useful tool for correcting inter-film dosimetric variation in future clinical film dosimetry verification of very large fields, allowing the optimization of other parameters.

  12. The combination of the error correction methods of GAFCHROMIC EBT3 film

    PubMed Central

    Li, Yinghui; Chen, Lixin; Zhu, Jinhan; Liu, Xiaowei

    2017-01-01

    Purpose The aim of this study was to combine a set of methods for use of radiochromic film dosimetry, including calibration, correction for lateral effects and a proposed triple-channel analysis. These methods can be applied to GAFCHROMIC EBT3 film dosimetry for radiation field analysis and verification of IMRT plans. Methods A single-film exposure was used to achieve dose calibration, and the accuracy was verified based on comparisons with the square-field calibration method. Before performing the dose analysis, the lateral effects on pixel values were corrected. The position dependence of the lateral effect was fitted by a parabolic function, and the curvature factors of different dose levels were obtained using a quadratic formula. After lateral effect correction, a triple-channel analysis was used to reduce disturbances and convert scanned images from films into dose maps. The dose profiles of open fields were measured using EBT3 films and compared with the data obtained using an ionization chamber. Eighteen IMRT plans with different field sizes were measured and verified with EBT3 films, applying our methods, and compared to TPS dose maps, to check the correct implementation of the film dosimetry proposed here. Results The uncertainty of lateral effects can be reduced to ±1 cGy. Compared with the results of Micke et al., the residual disturbances of the proposed triple-channel method at 48, 176 and 415 cGy are 5.3%, 20.9% and 31.4% smaller, respectively. Compared with the ionization chamber results, the differences in the off-axis ratio and percentage depth dose are within 1% and 2%, respectively. For IMRT verification, there was no difference between the two triple-channel methods. Compared with correction by the triple-channel method alone, the IMRT results of the combined method (including lateral effect correction and the present triple-channel method) show a 2% improvement for large IMRT fields with the 3%/3 mm criteria. PMID:28750023
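
    The parabolic lateral-effect correction can be sketched as follows in Python: fit the left-right scanner dependence of the pixel value with a second-order polynomial and rescale every pixel to the value it would read at the scanner centre. This is a simplified, single-dose-level version; in the paper the curvature itself varies with dose and is modelled by a quadratic formula.

    ```python
    import numpy as np

    def fit_lateral_parabola(x_positions, pixel_values):
        """Fit the lateral scanner dependence of the pixel value with a parabola;
        returns (curvature, slope, offset)."""
        return np.polyfit(x_positions, pixel_values, 2)

    def correct_lateral_effect(x, pixel_values, coeffs, x_center=0.0):
        """Rescale pixel values to what they would read at the scanner centre."""
        expected_here = np.polyval(coeffs, x)
        expected_center = np.polyval(coeffs, x_center)
        return pixel_values * expected_center / expected_here

    # Toy example: a uniformly exposed strip scanned across the flatbed
    x = np.linspace(-10, 10, 21)                     # cm from scanner centre
    pv_meas = 30000.0 * (1 - 0.0008 * x**2)          # synthetic lateral falloff
    coeffs = fit_lateral_parabola(x, pv_meas)
    pv_corr = correct_lateral_effect(x, pv_meas, coeffs)
    ```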

  13. Fluence correction factors for graphite calorimetry in a low-energy clinical proton beam: I. Analytical and Monte Carlo simulations.

    PubMed

    Palmans, H; Al-Sulaiti, L; Andreo, P; Shipley, D; Lühr, A; Bassler, N; Martinkovič, J; Dobrovodský, J; Rossomme, S; Thomas, R A S; Kacperek, A

    2013-05-21

    The conversion of absorbed dose-to-graphite in a graphite phantom to absorbed dose-to-water in a water phantom is performed by water to graphite stopping power ratios. If, however, the charged particle fluence is not equal at equivalent depths in graphite and water, a fluence correction factor, kfl, is required as well. This is particularly relevant to the derivation of absorbed dose-to-water, the quantity of interest in radiotherapy, from a measurement of absorbed dose-to-graphite obtained with a graphite calorimeter. In this work, fluence correction factors for the conversion from dose-to-graphite in a graphite phantom to dose-to-water in a water phantom for 60 MeV mono-energetic protons were calculated using an analytical model and five different Monte Carlo codes (Geant4, FLUKA, MCNPX, SHIELD-HIT and McPTRAN.MEDIA). In general the fluence correction factors are found to be close to unity and the analytical and Monte Carlo codes give consistent values when considering the differences in secondary particle transport. When considering only protons the fluence correction factors are unity at the surface and increase with depth by 0.5% to 1.5% depending on the code. When the fluence of all charged particles is considered, the fluence correction factor is about 0.5% lower than unity at shallow depths predominantly due to the contributions from alpha particles and increases to values above unity near the Bragg peak. Fluence correction factors directly derived from the fluence distributions differential in energy at equivalent depths in water and graphite can be described by kfl = 0.9964 + 0.0024·zw-eq with a relative standard uncertainty of 0.2%. Fluence correction factors derived from a ratio of calculated doses at equivalent depths in water and graphite can be described by kfl = 0.9947 + 0.0024·zw-eq with a relative standard uncertainty of 0.3%. These results are of direct relevance to graphite calorimetry in low-energy protons but given that the fluence correction factor is almost solely influenced by non-elastic nuclear interactions the results are also relevant for plastic phantoms that consist of carbon, oxygen and hydrogen atoms as well as for soft tissues.
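
    The two depth-dependent fits quoted above are simple enough to evaluate directly; the sketch below assumes the water-equivalent depth z_w-eq is expressed in centimetres, which is consistent with the roughly 3 cm range of 60 MeV protons.

    ```python
    def k_fl_fluence(z_w_eq_cm):
        """Fluence correction factor from the fluence-based fit quoted in the
        abstract for 60 MeV protons (graphite-to-water conversion)."""
        return 0.9964 + 0.0024 * z_w_eq_cm

    def k_fl_dose_ratio(z_w_eq_cm):
        """Alternative fit derived from ratios of calculated doses at equivalent depths."""
        return 0.9947 + 0.0024 * z_w_eq_cm

    for z in (0.5, 1.5, 2.5):                      # water-equivalent depths in cm
        print(z, round(k_fl_fluence(z), 4), round(k_fl_dose_ratio(z), 4))
    ```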

  14. SU-F-T-67: Correction Factors for Monitor Unit Verification of Clinical Electron Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haywood, J

    Purpose: Monitor units calculated by electron Monte Carlo treatment planning systems are often higher than TG-71 hand calculations for a majority of patients. Here I've calculated tables of geometry and heterogeneity correction factors for correcting electron hand calculations. Method: A flat water phantom with spherical volumes having radii ranging from 3 to 15 cm was created. The spheres were centered with respect to the flat water phantom, and all shapes shared a surface at 100 cm SSD. Dmax dose at 100 cm SSD was calculated for each cone and energy on the flat phantom and for the spherical volumes in the absence of the flat phantom. The ratio of dose in the sphere to dose in the flat phantom defined the geometrical correction factor. The heterogeneity factors were then calculated from the unrestricted collisional stopping power for tissues encountered in electron beam treatments. These factors were then used in patient second-check calculations. Patient curvature was estimated by the largest sphere that aligns to the patient contour, and the appropriate tissue density was read from the physical properties provided by the CT. The resulting MU were compared to those calculated by the treatment planning system and TG-71 hand calculations. Results: The geometry and heterogeneity correction factors range from ∼(0.8–1.0) and ∼(0.9–1.01), respectively, for the energies and cones presented. Percent differences for TG-71 hand calculations drop from ∼(3–14)% to ∼(0–2)%. Conclusion: Monitor units calculated with the correction factors typically decrease the percent difference to under actionable levels, < 5%. While these correction factors work for a majority of patients, there are some patient anatomies that do not fit the assumptions made. Using these factors in hand calculations is a first step in bringing the verification monitor units into agreement with the treatment planning system MU.
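
    A schematic Python version of how such factors enter a TG-71-style hand calculation is given below: the geometry and heterogeneity correction factors sit in the denominator alongside the output and cone factors, so values below unity raise the hand-calculated MU toward the Monte Carlo TPS result. The dosimetric quantities are placeholders rather than the author's commissioning data.

    ```python
    def electron_mu(dose_cgy, output_cgy_per_mu, cone_factor, ssd_factor=1.0,
                    geometry_cf=1.0, heterogeneity_cf=1.0):
        """Schematic TG-71-style electron monitor-unit hand calculation with the
        geometry and heterogeneity correction factors applied in the denominator
        (factors below unity increase the MU). Placeholder beam data only."""
        return dose_cgy / (output_cgy_per_mu * cone_factor * ssd_factor
                           * geometry_cf * heterogeneity_cf)

    mu_flat = electron_mu(200, 1.0, 0.98)                                       # flat-phantom calc
    mu_corr = electron_mu(200, 1.0, 0.98, geometry_cf=0.93, heterogeneity_cf=0.97)
    print(round(mu_flat, 1), round(mu_corr, 1))   # corrected MU is higher
    ```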

  15. Percolation of fracture networks and stereology

    NASA Astrophysics Data System (ADS)

    Thovert, Jean-Francois; Mourzenko, Valeri; Adler, Pierre

    2017-04-01

    The overall properties of fractured porous media depend on the percolative character of the fracture network in a crucial way. The most important examples are permeability and transport. In a recent systematic study, a very wide range of regular, irregular and random fracture shapes is considered, in monodisperse or polydisperse networks containing fractures with different shapes and/or sizes. A simple and new model involving a dimensionless density and a new shape factor is proposed for the percolation threshold, which accounts very efficiently for the influence of the fracture shape. It applies with very good accuracy to monodisperse or moderately polydisperse networks, and provides a good first estimation in other situations. A polydispersity index is shown to control the need for a correction, and the corrective term is modelled for the investigated size distributions. Moreover, and this is crucial for practical applications, the relevant quantities which are present in the expression of the percolation threshold can all be determined from trace maps. An exact and complete set of relations can be derived when the fractures are assumed to be Identical, Isotropically Oriented and Uniformly Distributed (I2OUD). Therefore, the dimensionless density of such networks can be derived directly from the trace maps and its percolating character can be a priori predicted. These relations involve the first five moments of the trace lengths. It is clear that the higher order moments are sensitive to truncation due to the boundaries of the sampling domain. However, it can be shown that the truncation effect can be fully taken into account and corrected, for any fracture shape, size and orientation distributions, if the fractures are spatially uniformly distributed. Systematic applications of these results are made to real fracture networks that we previously analyzed by other means and to numerically simulated networks. It is important to know if the stereological results and their applications can be extended to networks which are not I2OUD. In other words, for a given trace map, an equivalent I2OUD network is defined whose percolating character and permeability are readily deduced. The conditions under which these predicted properties are not too far from the real properties are under investigation.

  16. Lateral response heterogeneity of Bragg peak ionization chambers for narrow-beam photon and proton dosimetry

    NASA Astrophysics Data System (ADS)

    Kuess, Peter; Böhlen, Till T.; Lechner, Wolfgang; Elia, Alessio; Georg, Dietmar; Palmans, Hugo

    2017-12-01

    Large area ionization chambers (LAICs) can be used to measure output factors of narrow beams. Dose area product measurements are proposed as an alternative to central-axis point dose measurements. Using such detectors requires detailed information on the uniformity of the response along the sensitive area. Eight LAICs were investigated in this study: four of type PTW-34070 (LAICThick) and four of type PTW-34080 (LAICThin). Measurements were performed in an x-ray unit using peak voltages of 100-200 kVp and a collimated beam of 3.1 mm (FWHM). The LAICs were moved with a step size of 5 mm to measure the chamber response at lateral positions. To account for beam positions where only a fraction of the beam impinged within the sensitive area of the LAICs, a corrected response was calculated, which was the basis for calculating the relative response. The impact of a heterogeneous LAIC response, based on the obtained response maps, was then investigated for proton pencil beams and small-field photon beams. A pronounced heterogeneity of the responses was observed in the investigated LAICs. The response of LAICThick generally decreased with increasing radius, resulting in a response correction of up to 5%. This correction was more pronounced and more diverse (up to 10%) for LAICThin. For a proton pencil beam, the systematic offset for reference dosimetry was 2.4-4.1% for LAICThick and -9.5 to 9.4% for LAICThin. For relative dosimetry (e.g. integral depth-dose curves), systematic response variations of 0.8-1.9% were found. For a decreasing photon field size, the systematic offset for absolute dose measurements showed a 2.5-4.5% overestimation of the response for 6 × 6 mm2 field sizes for LAICThick. For LAICThin the response varied even over a range of 20%. This study highlights the need for chamber-dependent response maps when using LAICs for absolute and relative dosimetry with proton pencil beams or small photon beams.

  17. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    PubMed Central

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg−1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  18. Radiative corrections to the η(') Dalitz decays

    NASA Astrophysics Data System (ADS)

    Husek, Tomáš; Kampf, Karol; Novotný, Jiří; Leupold, Stefan

    2018-05-01

    We provide the complete set of radiative corrections to the Dalitz decays η(')→ℓ+ℓ-γ beyond the soft-photon approximation, i.e., over the whole range of the Dalitz plot and with no restrictions on the energy of a radiative photon. The corrections inevitably depend on the η(')→ γ*γ(*) transition form factors. For the singly virtual transition form factor appearing, e.g., in the bremsstrahlung correction, recent dispersive calculations are used. For the one-photon-irreducible contribution at the one-loop level (for the doubly virtual form factor), we use a vector-meson-dominance-inspired model while taking into account the η -η' mixing.

  19. Impact of creatine kinase correction on the predictive value of S-100B after mild traumatic brain injury.

    PubMed

    Bazarian, Jeffrey J; Beck, Christopher; Blyth, Brian; von Ahsen, Nicolas; Hasselblatt, Martin

    2006-01-01

    To validate a correction factor for the extracranial release of the astroglial protein, S-100B, based on concomitant creatine kinase (CK) levels. The CK–S-100B relationship in non-head injured marathon runners was used to derive a correction factor for the extracranial release of S-100B. This factor was then applied to a separate cohort of 96 mild traumatic brain injury (TBI) patients in whom both CK and S-100B levels were measured. Corrected S-100B was compared to uncorrected S-100B for the prediction of initial head CT, three-month headache and three-month post concussive syndrome (PCS). Corrected S-100B resulted in a statistically significant improvement in the prediction of 3-month headache (area under curve [AUC] 0.46 vs 0.52, p=0.02), but not PCS or initial head CT. Using a cutoff that maximizes sensitivity (≥90%), corrected S-100B improved the prediction of initial head CT scan (negative predictive value from 75% [95% CI, 2.6%, 67.0%] to 96% [95% CI: 83.5%, 99.8%]). Although S-100B is overall poorly predictive of outcome, a correction factor using CK is a valid means of accounting for extracranial release. By increasing the proportion of mild TBI patients correctly categorized as low risk for abnormal head CT, CK-corrected S-100B can further reduce the number of unnecessary brain CT scans performed after this injury.
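
    One plausible reading of the CK-based correction, sketched in Python below, is a regression of S-100B on CK in the non-head-injured runners followed by subtraction of the CK-predicted extracranial contribution in the TBI cohort. The abstract does not give the functional form actually used, and the data here are synthetic.

    ```python
    import numpy as np

    def fit_extracranial_model(ck_runners, s100b_runners):
        """Linear fit of S-100B on CK in non-head-injured athletes (one plausible
        way to model the extracranial contribution)."""
        slope, intercept = np.polyfit(ck_runners, s100b_runners, 1)
        return slope, intercept

    def ck_corrected_s100b(s100b, ck, slope, intercept):
        """Subtract the CK-predicted extracranial S-100B, floored at zero."""
        return np.maximum(0.0, s100b - (slope * ck + intercept))

    # Synthetic illustration only
    rng = np.random.default_rng(1)
    ck_run = rng.uniform(200, 2000, 50)
    s100b_run = 0.05 + 1.5e-4 * ck_run + rng.normal(0, 0.02, 50)
    m, b = fit_extracranial_model(ck_run, s100b_run)
    print(ck_corrected_s100b(s100b=0.45, ck=1500, slope=m, intercept=b))
    ```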

  20. Extension of the Haseman-Elston regression model to longitudinal data.

    PubMed

    Won, Sungho; Elston, Robert C; Park, Taesung

    2006-01-01

    We propose an extension to longitudinal data of the Haseman and Elston regression method for linkage analysis. The proposed model is a mixed model having several random effects. As response variable, we investigate the sibship sample mean corrected cross-product (smHE) and the BLUP-mean corrected cross product (pmHE), comparing them with the original squared difference (oHE), the overall mean corrected cross-product (rHE), and the weighted average of the squared difference and the squared mean-corrected sum (wHE). The proposed model allows for the correlation structure of longitudinal data. Also, the model can test for gene x time interaction to discover genetic variation over time. The model was applied in an analysis of the Genetic Analysis Workshop 13 (GAW13) simulated dataset for a quantitative trait simulating systolic blood pressure. Independence models did not preserve the test sizes, while the mixed models with both family and sibpair random effects tended to preserve size well. Copyright 2006 S. Karger AG, Basel.

  1. Stahl ear correction using the third crus cartilage flap.

    PubMed

    Gundeslioglu, A Ozlem; Ince, Bilsev

    2013-12-01

    Stahl ear is usually associated with the existence of a third crus that traverses the scapha. The absence of the superior crus of the antihelix, a broadened scapha, and an unfolded, long helical rim may contribute to the formation of the deformity. The method presented in this article is a combined technique intended to recreate the absent superior crus using the third crus as a cartilage flap, along with elimination of the third crus, reduction of the scapha size, and correction of helical rim deformities by skin and cartilage excisions. One bilateral and three unilateral cases were operated on with this technique (five auricles). The first unilateral case was performed without the scapha reduction and helix correction that were used in the other three cases with successful results. This procedure can be used to correct Stahl ear variations in the case of an absent superior crus when a third crus is present, the scapha is of increased size, and the helix is long without a fold.

  2. Clearing the waters: Evaluating the need for site-specific field fluorescence corrections based on turbidity measurements

    USGS Publications Warehouse

    Saraceno, John F.; Shanley, James B.; Downing, Bryan D.; Pellerin, Brian A.

    2017-01-01

    In situ fluorescent dissolved organic matter (fDOM) measurements have gained increasing popularity as a proxy for dissolved organic carbon (DOC) concentrations in streams. One challenge to accurate fDOM measurements in many streams is light attenuation due to suspended particles. Downing et al. (2012) evaluated the need for corrections to compensate for particle interference on fDOM measurements using a single sediment standard in a laboratory study. The application of those results to a large river improved unfiltered field fDOM accuracy. We tested the same correction equation in a headwater tropical stream and found that it overcompensated fDOM when turbidity exceeded ∼300 formazin nephelometric units (FNU). Therefore, we developed a site-specific, field-based fDOM correction equation through paired in situ fDOM measurements of filtered and unfiltered streamwater. The site-specific correction increased fDOM accuracy up to a turbidity as high as 700 FNU, the maximum observed in this study. The difference in performance between the laboratory-based correction equation of Downing et al. (2012) and our site-specific, field-based correction equation likely arises from differences in particle size distribution between the sediment standard used in the lab (silt) and that observed in our study (fine to medium sand), particularly during high flows. Therefore, a particle interference correction equation based on a single sediment type may not be ideal when field sediment size is significantly different. Given that field fDOM corrections for particle interference under turbid conditions are a critical component in generating accurate DOC estimates, we describe a way to develop site-specific corrections.
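
    The site-specific correction strategy can be sketched as follows: use paired filtered and unfiltered in situ fDOM readings across a range of turbidity to fit the fractional signal loss, then divide routine unfiltered readings by that fitted attenuation. The polynomial form and the synthetic numbers below are illustrative; the published site-specific equation may differ.

    ```python
    import numpy as np

    def fit_fdom_turbidity_correction(turbidity, fdom_unfiltered, fdom_filtered, degree=2):
        """Fit the fractional fDOM signal loss as a polynomial in turbidity using
        paired filtered/unfiltered measurements."""
        attenuation = fdom_unfiltered / fdom_filtered     # drops below 1 as turbidity rises
        return np.polyfit(turbidity, attenuation, degree)

    def correct_fdom(fdom_measured, turbidity, coeffs):
        """Divide the in situ (unfiltered) fDOM reading by the fitted attenuation."""
        return fdom_measured / np.polyval(coeffs, turbidity)

    # Synthetic pairing for illustration
    turb = np.linspace(0, 700, 30)                                # FNU
    true_fdom = np.full_like(turb, 80.0)                          # QSU
    meas_fdom = true_fdom * (1 - 6e-4 * turb + 1.5e-7 * turb**2)
    coeffs = fit_fdom_turbidity_correction(turb, meas_fdom, true_fdom)
    print(correct_fdom(meas_fdom[-1], turb[-1], coeffs))          # ≈ 80
    ```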

  3. Use of Social Media by Spanish Hospitals: Perceptions, Difficulties, and Success Factors

    PubMed Central

    Bermúdez-Tamayo, Clara; Jiménez-Pernett, Jaime; García Gutiérrez, José-Francisco; Traver-Salcedo, Vicente; Yubraham-Sánchez, David

    2013-01-01

    This exploratory study has two aims: (1) to find out if and how social media (SM) applications are used by hospitals in Spain and (2) to assess hospital managers' perception of these applications in terms of their evaluation of them, reasons for use, success factors, and difficulties encountered during their implementation. A cross-sectional survey has been carried out using Spanish hospitals as the unit of analysis. Geographical differences in the use of SM were found. Social networks are used most often by larger hospitals (30% by medium-size, 28% by large-size). They are also more frequently used by public hospitals (19%, p<0.01) than by private ones. Respondents with a negative perception of SM felt that there is a chance they may be abused by healthcare professionals, whereas those with a positive perception believed that they can be used to improve communication both within and outside the hospital. Reasons for the use of SM include the idea of maximizing exposure of the hospital. The results show that Spanish hospitals are only just beginning to use SM applications and that hospital type can influence their use. The perceptions, reasons for use, success factors, and difficulties encountered during the implementation of SM mean that it is very important for healthcare professionals to use SM correctly and adequately. PMID:23368890

  4. The accuracy of climate models' simulated season lengths and the effectiveness of grid scale correction factors

    DOE PAGES

    Winterhalter, Wade E.

    2011-09-01

    Global climate change is expected to impact biological populations through a variety of mechanisms including increases in the length of their growing season. Climate models are useful tools for predicting how season length might change in the future. However, the accuracy of these models tends to be rather low at regional geographic scales. Here, I determined the ability of several atmosphere and ocean general circulating models (AOGCMs) to accurately simulate historical season lengths for a temperate ectotherm across the continental United States. I also evaluated the effectiveness of regional-scale correction factors to improve the accuracy of these models. I found that both the accuracy of simulated season lengths and the effectiveness of the correction factors to improve the model's accuracy varied geographically and across models. These results suggest that regional specific correction factors do not always adequately remove potential discrepancies between simulated and historically observed environmental parameters. As such, an explicit evaluation of the correction factors' effectiveness should be included in future studies of global climate change's impact on biological populations.
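
    As a concrete example of the kind of regional-scale correction factor being evaluated, the sketch below computes a per-grid-cell ratio of historically observed to simulated season length and applies it to a future simulation. This is one common bias-correction choice, offered for illustration; the paper assesses the effectiveness of such factors rather than prescribing this particular form.

    ```python
    import numpy as np

    def ratio_correction_factor(observed_days, simulated_days):
        """Grid-cell correction factor: ratio of historically observed to
        AOGCM-simulated season length."""
        return np.asarray(observed_days) / np.asarray(simulated_days)

    def apply_correction(simulated_future_days, factor):
        """Scale future simulated season lengths by the historical factor."""
        return np.asarray(simulated_future_days) * factor

    obs_hist = np.array([180., 200., 150.])      # days, three illustrative grid cells
    sim_hist = np.array([165., 210., 170.])
    sim_future = np.array([190., 225., 185.])
    cf = ratio_correction_factor(obs_hist, sim_hist)
    print(apply_correction(sim_future, cf))
    ```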

  5. Sample size, power calculations, and their implications for the cost of thorough studies of drug induced QT interval prolongation.

    PubMed

    Malik, Marek; Hnatkova, Katerina; Batchvarov, Velislav; Gang, Yi; Smetana, Peter; Camm, A John

    2004-12-01

    Regulatory authorities require new drugs to be investigated using a so-called "thorough QT/QTc study" to identify compounds with a potential of influencing cardiac repolarization in man. Presently drafted regulatory consensus requires these studies to be powered for the statistical detection of QTc interval changes as small as 5 ms. Since this translates into a noticeable drug development burden, strategies need to be identified allowing the size and thus the cost of thorough QT/QTc studies to be minimized. This study investigated the influence of QT and RR interval data quality and the precision of heart rate correction on the sample sizes of thorough QT/QTc studies. In 57 healthy subjects (26 women, age range 19-42 years), a total of 4,195 drug-free digital electrocardiograms (ECG) were obtained (65-84 ECGs per subject). All ECG parameters were measured manually using the most accurate approach with reconciliation of measurement differences between different cardiologists and aligning the measurements of corresponding ECG patterns. From the data derived in this measurement process, seven different levels of QT/RR data quality were obtained, ranging from the simplest approach of measuring 3 beats in one ECG lead to the most exact approach. Each of these QT/RR data-sets was processed with eight different heart rate corrections ranging from Bazett and Fridericia corrections to the individual QT/RR regression modelling with optimization of QT/RR curvature. For each combination of data quality and heart rate correction, standard deviation of individual mean QTc values and mean of individual standard deviations of QTc values were calculated and used to derive the size of thorough QT/QTc studies with an 80% power to detect 5 ms QTc changes at the significance level of 0.05. Irrespective of data quality and heart rate corrections, the necessary sample sizes of studies based on between-subject comparisons (e.g., parallel studies) are very substantial requiring >140 subjects per group. However, the required study size may be substantially reduced in investigations based on within-subject comparisons (e.g., crossover studies or studies of several parallel groups each crossing over an active treatment with placebo). While simple measurement approaches with ad-hoc heart rate correction still lead to requirements of >150 subjects, the combination of best data quality with most accurate individualized heart rate correction decreases the variability of QTc measurements in each individual very substantially. In the data of this study, the average of standard deviations of QTc values calculated separately in each individual was only 5.2 ms. Such a variability in QTc data translates to only 18 subjects per study group (e.g., the size of a complete one-group crossover study) to detect 5 ms QTc change with an 80% power. Cost calculations show that by involving the most stringent ECG handling and measurement, the cost of a thorough QT/QTc study may be reduced to approximately 25%-30% of the cost imposed by the simple ECG reading (e.g., three complexes in one lead only).
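
    The arithmetic behind the quoted study sizes can be reproduced approximately with a standard two-sided power calculation, shown below for a crossover (within-subject) design: with an individual QTc standard deviation of about 5.2 ms, detecting a 5 ms change at 80% power requires on the order of 17-18 subjects. The sketch ignores the additional gain from averaging several ECGs per treatment period, so it is an approximation rather than the authors' exact calculation.

    ```python
    from math import ceil, sqrt
    from scipy.stats import norm

    def crossover_sample_size(delta_ms, sd_within_ms, alpha=0.05, power=0.80):
        """Subjects needed in a crossover thorough QT/QTc study to detect a mean
        QTc change of delta_ms, treating the per-subject treatment-placebo
        difference as having standard deviation sqrt(2)*sd_within_ms."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        sd_diff = sqrt(2) * sd_within_ms
        return ceil((z * sd_diff / delta_ms) ** 2)

    print(crossover_sample_size(5.0, 5.2))    # ≈ 17-18 subjects, in line with the abstract
    print(crossover_sample_size(5.0, 15.0))   # noisier QTc data inflates the study markedly
    ```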

  6. Diminished growth and lower adiposity in hyperglycemic very low birth weight neonates at 4 months corrected age.

    PubMed

    Scheurer, J M; Gray, H L; Demerath, E W; Rao, R; Ramel, S E

    2016-02-01

    Characterize the relationship between neonatal hyperglycemia and growth and body composition at 4 months corrected age (CA) in very low birth weight (VLBW) preterm infants. A prospective study of VLBW appropriate-for-gestation infants (N=53). All blood glucose measurements in the first 14 days and nutritional intake and illness markers until discharge were recorded. Standard anthropometrics and body composition via air displacement plethysmography were measured near term CA and 4 months CA. Relationships between hyperglycemia and anthropometrics and body composition were examined using multivariate linear regression. Infants with >5 days of hyperglycemia were lighter (5345 vs 6455 g, P⩽0.001), shorter (57.9 vs 60.9 cm, P⩽0.01), had smaller occipital-frontal head circumference (39.4 vs 42.0 cm, P⩽0.05) and were leaner (percent body fat 15.0 vs 23.8, P⩽0.01) at 4 months CA than those who did not have hyperglycemia, including after correcting for nutritional and illness factors. Neonatal hyperglycemia in VLBW infants is associated with decreased body size and lower adiposity at 4 months CA independent of nutritional deficit, insulin use and illness. Downregulation of the growth hormone axis may be responsible. These changes may influence long-term growth and cognitive development.

  7. Fuzzy cluster analysis of simple physicochemical properties of amino acids for recognizing secondary structure in proteins.

    PubMed Central

    Mocz, G.

    1995-01-01

    Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues with approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882

  8. Hallux valgus surgery may produce early improvements in balance control: results of a cross-sectional pilot study.

    PubMed

    Sadra, Saba; Fleischer, Adam; Klein, Erin; Grewal, Gurtej S; Knight, Jessica; Weil, Lowell Scott; Weil, Lowell; Najafi, Bijan

    2013-01-01

    Hallux valgus (HV) is associated with poorer performance during gait and balance tasks and is an independent risk factor for falls in older adults. We sought to assess whether corrective HV surgery improves gait and balance. Using a cross-sectional study design, gait and static balance data were obtained from 40 adults: 19 patients with HV only (preoperative group), 10 patients who recently underwent successful HV surgery (postoperative group), and 11 control participants. Assessments were made in the clinic using body-worn sensors. Patients in the preoperative group generally demonstrated poorer static balance control compared with the other two groups. Despite similar age and body mass index, postoperative patients exhibited 29% and 63% less center of mass sway than preoperative patients during double- and single-support balance assessments, respectively (analysis of variance P = .17 and P = .14, respectively [both eyes open condition]). Overall, gait performance was similar among the groups, except for speed during gait initiation, where lower speeds were encountered in the postoperative group compared with the preoperative group (Scheffe P = .049). This study provides supportive evidence regarding the benefits of corrective lower-extremity surgery on certain aspects of balance control. Patients seem to demonstrate early improvements in static balance after corrective HV surgery, whereas gait improvements may require a longer recovery time. Further research using a longitudinal study design and a larger sample size capable of assessing the long-term effects of HV surgical correction on balance and gait is probably warranted.

  9. Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.

    PubMed

    Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M

    2016-01-01

    Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundary. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling size of training set, differences in head coil usage, and amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans only decreased the Dice coefficient ≤0.002 for the cerebellum and ≤ 0.005 for the brainstem compared to the use of training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01. These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
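
    For reference, the Dice coefficient used throughout the study is simply twice the shared volume divided by the total volume of the two segmentations; a minimal Python version follows, applied to toy masks standing in for the manual and automatically corrected labels.

    ```python
    import numpy as np

    def dice_coefficient(seg_a, seg_b):
        """Spatial overlap between two binary segmentations:
        Dice = 2|A ∩ B| / (|A| + |B|)."""
        a = np.asarray(seg_a, dtype=bool)
        b = np.asarray(seg_b, dtype=bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    # Tiny toy masks standing in for manual vs automatically corrected labels
    manual = np.zeros((10, 10), dtype=bool); manual[2:8, 2:8] = True
    auto = np.zeros((10, 10), dtype=bool); auto[3:8, 2:8] = True
    print(dice_coefficient(manual, auto))
    ```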

  10. Sample size, library composition, and genotypic diversity among natural populations of Escherichia coli from different animals influence accuracy of determining sources of fecal pollution.

    PubMed

    Johnson, LeeAnn K; Brown, Mary B; Carruthers, Ethan A; Ferguson, John A; Dombek, Priscilla E; Sadowsky, Michael J

    2004-08-01

    A horizontal, fluorophore-enhanced, repetitive extragenic palindromic-PCR (rep-PCR) DNA fingerprinting technique (HFERP) was developed and evaluated as a means to differentiate human from animal sources of Escherichia coli. Box A1R primers and PCR were used to generate 2,466 rep-PCR and 1,531 HFERP DNA fingerprints from E. coli strains isolated from fecal material from known human and 12 animal sources: dogs, cats, horses, deer, geese, ducks, chickens, turkeys, cows, pigs, goats, and sheep. HFERP DNA fingerprinting reduced within-gel grouping of DNA fingerprints and improved alignment of DNA fingerprints between gels, relative to that achieved using rep-PCR DNA fingerprinting. Jackknife analysis of the complete rep-PCR DNA fingerprint library, done using Pearson's product-moment correlation coefficient, indicated that animal and human isolates were assigned to the correct source groups with an 82.2% average rate of correct classification. However, when only unique isolates were examined, isolates from a single animal having a unique DNA fingerprint, Jackknife analysis showed that isolates were assigned to the correct source groups with a 60.5% average rate of correct classification. The percentages of correctly classified isolates were about 15 and 17% greater for rep-PCR and HFERP, respectively, when analyses were done using the curve-based Pearson's product-moment correlation coefficient, rather than the band-based Jaccard algorithm. Rarefaction analysis indicated that, despite the relatively large size of the known-source database, genetic diversity in E. coli was very great and is most likely accounting for our inability to correctly classify many environmental E. coli isolates. Our data indicate that removal of duplicate genotypes within DNA fingerprint libraries, increased database size, proper methods of statistical analysis, and correct alignment of band data within and between gels improve the accuracy of microbial source tracking methods.

  12. A comparison of two nano-sized particle air filtration tests in the diameter range of 10 to 400 nanometers

    NASA Astrophysics Data System (ADS)

    Japuntich, Daniel A.; Franklin, Luke M.; Pui, David Y.; Kuehn, Thomas H.; Kim, Seong Chan; Viner, Andrew S.

    2007-01-01

    Two different air filter test methodologies are discussed and compared for challenges in the nano-sized particle range of 10-400 nm. Included in the discussion are test procedure development, factors affecting variability and comparisons between results from the tests. One test system which gives a discrete penetration for a given particle size is the TSI 8160 Automated Filter tester (updated and commercially available now as the TSI 3160) manufactured by the TSI, Inc., Shoreview, MN. Another filter test system was developed utilizing a Scanning Mobility Particle Sizer (SMPS) to sample the particle size distributions downstream and upstream of an air filter to obtain a continuous percent filter penetration versus particle size curve. Filtration test results are shown for fiberglass filter paper of intermediate filtration efficiency. Test variables affecting the results of the TSI 8160 for NaCl and dioctyl phthalate (DOP) particles are discussed, including condensation particle counter stability and the sizing of the selected particle challenges. Filter testing using a TSI 3936 SMPS sampling upstream and downstream of a filter is also shown with a discussion of test variables and the need for proper SMPS volume purging and filter penetration correction procedure. For both tests, the penetration versus particle size curves for the filter media studied follow the theoretical Brownian capture model of decreasing penetration with decreasing particle diameter down to 10 nm with no deviation. From these findings, the authors can say with reasonable confidence that there is no evidence of particle thermal rebound in the size range.

  13. Estimation of Food Guide Pyramid Serving Sizes by College Students.

    ERIC Educational Resources Information Center

    Knaust, Gretchen; Foster, Irene M.

    2000-01-01

    College students (n=158) used the Food Guide Pyramid to select serving sizes on a questionnaire (73% had been instructed in its use). Overall mean scores (31% correct) indicated they generally did not know recommended serving sizes. Those who had read about or received instruction in the pyramid had higher mean scores. (SK)

  14. Correction factors in determining speed of sound among freshmen in undergraduate physics laboratory

    NASA Astrophysics Data System (ADS)

    Lutfiyah, A.; Adam, A. S.; Suprapto, N.; Kholiq, A.; Putri, N. P.

    2018-03-01

    This paper aims to identify the correction factors in the determination of the speed of sound by freshmen in an undergraduate physics laboratory. The results are compared with the speed of sound determined by a senior student. Both used the same instrument, namely a resonance tube with its apparatus. The speed of sound obtained by the senior student was 333.38 m s-1, with a deviation from theory of about 3.98%. For the freshmen, the speed-of-sound results were categorised into three groups: accurate values (52.63%), intermediate values (31.58%) and low values (15.79%). Based on the analysis, several correction factors were suggested: human error in identifying the first and second harmonics, the end correction related to the tube diameter, and environmental factors such as temperature, humidity, density, and pressure.
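
    A minimal Python version of the resonance-tube calculation is shown below; using the first two resonance lengths of a tube closed at one end cancels the end correction, and the implied end correction itself (roughly 0.3 of the tube diameter) can be recovered from the same two readings. The readings in the example are illustrative, not the students' data.

    ```python
    def speed_of_sound(freq_hz, l1_m, l2_m):
        """Speed of sound from the first two resonance lengths of a closed tube:
        lambda = 2*(L2 - L1), v = f*lambda."""
        return freq_hz * 2.0 * (l2_m - l1_m)

    def end_correction(l1_m, l2_m):
        """Implied end correction e = (L2 - 3*L1)/2."""
        return (l2_m - 3.0 * l1_m) / 2.0

    f, L1, L2 = 512.0, 0.159, 0.492        # Hz, m, m (illustrative readings)
    print(speed_of_sound(f, L1, L2))       # ≈ 341 m/s
    print(end_correction(L1, L2))          # ≈ 7.5 mm, ~0.3 of a 25 mm tube diameter
    ```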

  15. Digital templating for THA: a simple computer-assisted application for complex hip arthritis cases.

    PubMed

    Hafez, Mahmoud A; Ragheb, Gad; Hamed, Adel; Ali, Amr; Karim, Said

    2016-10-01

    Total hip arthroplasty (THA) is the standard procedure for end-stage arthritis of the hip. Its technical success relies on preoperative planning of the surgical procedure and virtual setup of the operative performance. Digital hip templating is one methodology of preoperative planning for THA, requiring a digital preoperative radiograph and a computer with special software. This is a prospective study involving 23 patients (25 hips) who were candidates for complex THA surgery (unilateral or bilateral). Digital templating was done by radiographic assessment comprising radiographic magnification correction, leg-length discrepancy and correction measurements, acetabular and femoral component templating, and neck resection measurement. The overall accuracy for templating the exact stem implant size was 81%. This percentage increased to 94% when sizing within one size was considered. Digital templating has proven to be an effective, reliable and essential technique for preoperative planning and accurate prediction of THA sizing and alignment.
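
    The magnification-correction step at the heart of digital templating can be sketched very simply: a calibration marker of known diameter placed at the level of the hip gives the radiographic magnification, and all templated dimensions are scaled back to life size by that factor. The 25 mm marker below is an illustrative choice; the abstract does not specify the marker or software used.

    ```python
    def magnification(marker_measured_mm, marker_true_mm=25.0):
        """Radiographic magnification from a calibration marker of known diameter
        placed at the level of the hip (25 mm ball used as an illustrative default)."""
        return marker_measured_mm / marker_true_mm

    def true_dimension(measured_mm, mag):
        """Scale a dimension measured on the digital radiograph back to life size."""
        return measured_mm / mag

    mag = magnification(30.0)                    # marker images at 30 mm -> 20% magnification
    print(round(true_dimension(60.0, mag), 1))   # a 60 mm measured canal width is really 50 mm
    ```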

  16. Altitudinal patterns of plant diversity on the Jade Dragon Snow Mountain, southwestern China.

    PubMed

    Xu, Xiang; Zhang, Huayong; Tian, Wang; Zeng, Xiaoqiang; Huang, Hai

    2016-01-01

    Understanding altitudinal patterns of biological diversity and their underlying mechanisms is critically important for biodiversity conservation in mountainous regions. The contribution of area to plant diversity patterns is widely acknowledged and may mask the effects of other determinant factors. In this context, it is important to examine altitudinal patterns of corrected taxon richness by eliminating the area effect. Here we adopt two methods to correct observed taxon richness: a power-law relationship between richness and area, hereafter "method 1"; and richness counted in equal-area altitudinal bands, hereafter "method 2". We compare these two methods on the Jade Dragon Snow Mountain, which is the nearest large-scale altitudinal gradient to the Equator in the Northern Hemisphere. We find that seed plant species richness, genus richness, family richness, and species richness of trees, shrubs, herbs and Groups I-III (species with elevational range size <150, between 150 and 500, and >500 m, respectively) display distinct hump-shaped patterns along the equal-elevation altitudinal gradient. The corrected taxon richness based on method 2 (TRcor2) also shows hump-shaped patterns for all plant groups, while the one based on method 1 (TRcor1) does not. As for the abiotic factors influencing the patterns, mean annual temperature, mean annual precipitation, and mid-domain effect explain a larger part of the variation in TRcor2 than in TRcor1. In conclusion, for biodiversity patterns on the Jade Dragon Snow Mountain, method 2 preserves the significant influences of abiotic factors to the greatest degree while eliminating the area effect. Our results thus reveal that although the classical method 1 has earned more attention and approval in previous research, method 2 can perform better under certain circumstances. We not only confirm the essential contribution of method 1 in community ecology, but also highlight the significant role of method 2 in eliminating the area effect, and call for more application of method 2 in further macroecological studies.
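
    A minimal sketch of the power-law correction ('method 1') follows: fit S = c·A^z across the altitudinal bands and take the residual log-richness as the area-corrected richness, while 'method 2' simply tallies richness within bands of equal area. The richness and area values below are illustrative, not the Jade Dragon data.

    ```python
    import numpy as np

    def area_corrected_richness(richness, band_area):
        """Fit the power-law species-area relationship S = c * A**z across bands
        and return the residuals of log-richness after removing the area effect."""
        log_a, log_s = np.log(band_area), np.log(richness)
        z, log_c = np.polyfit(log_a, log_s, 1)
        return log_s - (log_c + z * log_a)

    # Illustrative richness and band areas (km^2) for six elevation bands
    S = np.array([220, 410, 520, 480, 300, 120])
    A = np.array([90, 260, 310, 240, 150, 60])
    print(np.round(area_corrected_richness(S, A), 2))
    ```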

  17. Application of commercial MOSFET detectors for in vivo dosimetry in the therapeutic x-ray range from 80 kV to 250 kV

    NASA Astrophysics Data System (ADS)

    Ehringfeld, Christian; Schmid, Susanne; Poljanc, Karin; Kirisits, Christian; Aiginger, Hannes; Georg, Dietmar

    2005-01-01

    The purpose of this study was to investigate the dosimetric characteristics (energy dependence, linearity, fading, reproducibility, etc) of MOSFET detectors for in vivo dosimetry in the kV x-ray range. The experience of MOSFET in vivo dosimetry in a pre-clinical study using the Alderson phantom and in clinical practice is also reported. All measurements were performed with a Gulmay D3300 kV unit and TN-502RDI MOSFET detectors. For the determination of correction factors different solid phantoms and a calibrated Farmer-type chamber were used. The MOSFET signal was linear with applied dose in the range from 0.2 to 2 Gy for all energies. Due to fading it is recommended to read the MOSFET signal during the first 15 min after irradiation. For long time intervals between irradiation and readout the fading can vary largely with the detector. The temperature dependence of the detector signal was small (0.3% °C-1) in the temperature range between 22 and 40 °C. The variation of the measuring signal with beam incidence amounts to ±5% and should be considered in clinical applications. Finally, for entrance dose measurements energy-dependent calibration factors, correction factors for field size and irradiated cable length were applied. The overall accuracy, for all measurements, was dominated by reproducibility as a function of applied dose. During the pre-clinical in vivo study, the agreement between MOSFET and TLD measurements was well within 3%. The results of MOSFET measurements, to determine the dosimetric characteristics as well as clinical applications, showed that MOSFET detectors are suitable for in vivo dosimetry in the kV range. However, some energy-dependent dosimetry effects need to be considered and corrected for. Due to reproducibility effects at low dose levels accurate in vivo measurements are only possible if the applied dose is equal to or larger than 2 Gy.

  18. Application of commercial MOSFET detectors for in vivo dosimetry in the therapeutic x-ray range from 80 kV to 250 kV.

    PubMed

    Ehringfeld, Christian; Schmid, Susanne; Poljanc, Karin; Kirisits, Christian; Aiginger, Hannes; Georg, Dietmar

    2005-01-21

    The purpose of this study was to investigate the dosimetric characteristics (energy dependence, linearity, fading, reproducibility, etc) of MOSFET detectors for in vivo dosimetry in the kV x-ray range. The experience of MOSFET in vivo dosimetry in a pre-clinical study using the Alderson phantom and in clinical practice is also reported. All measurements were performed with a Gulmay D3300 kV unit and TN-502RDI MOSFET detectors. For the determination of correction factors different solid phantoms and a calibrated Farmer-type chamber were used. The MOSFET signal was linear with applied dose in the range from 0.2 to 2 Gy for all energies. Due to fading it is recommended to read the MOSFET signal during the first 15 min after irradiation. For long time intervals between irradiation and readout the fading can vary largely with the detector. The temperature dependence of the detector signal was small (0.3% degrees C(-1)) in the temperature range between 22 and 40 degrees C. The variation of the measuring signal with beam incidence amounts to +/-5% and should be considered in clinical applications. Finally, for entrance dose measurements energy-dependent calibration factors, correction factors for field size and irradiated cable length were applied. The overall accuracy, for all measurements, was dominated by reproducibility as a function of applied dose. During the pre-clinical in vivo study, the agreement between MOSFET and TLD measurements was well within 3%. The results of MOSFET measurements, to determine the dosimetric characteristics as well as clinical applications, showed that MOSFET detectors are suitable for in vivo dosimetry in the kV range. However, some energy-dependent dosimetry effects need to be considered and corrected for. Due to reproducibility effects at low dose levels accurate in vivo measurements are only possible if the applied dose is equal to or larger than 2 Gy.

  19. Resistivity Correction Factor for the Four-Probe Method: Experiment I

    NASA Astrophysics Data System (ADS)

    Yamashita, Masato; Yamaguchi, Shoji; Enjoji, Hideo

    1988-05-01

    Experimental verification of the theoretically derived resistivity correction factor (RCF) is presented. Resistivity and sheet resistance measurements by the four-probe method are made on three samples: isotropic graphite, ITO film and Au film. It is indicated that the RCF can correct the apparent variations of experimental data to yield reasonable resistivities and sheet resistances.
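
    For context, the sketch below shows how a geometric resistivity correction factor (RCF) is typically applied on top of the ideal infinite-sheet four-probe formula; the RCF value and the voltage, current and thickness readings are illustrative assumptions, not data from this experiment.

      # Sheet resistance from a collinear four-probe measurement with a geometric
      # resistivity correction factor (RCF). The infinite-sheet factor pi/ln(2) is
      # standard; the finite-sample RCF value and readings below are made-up examples.
      import math

      def sheet_resistance(voltage_V: float, current_A: float, rcf: float = 1.0) -> float:
          """R_s = (pi / ln 2) * (V / I) * RCF, in ohms per square."""
          return (math.pi / math.log(2.0)) * (voltage_V / current_A) * rcf

      def resistivity(voltage_V: float, current_A: float, thickness_m: float, rcf: float = 1.0) -> float:
          """Bulk resistivity rho = R_s * t for a thin, uniform film."""
          return sheet_resistance(voltage_V, current_A, rcf) * thickness_m

      R_s = sheet_resistance(1.2e-3, 1.0e-3, rcf=0.92)          # RCF < 1 for a small, finite sample
      print(round(R_s, 3), "ohm/sq")
      print(resistivity(1.2e-3, 1.0e-3, 200e-9, rcf=0.92), "ohm*m")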

  20. Consumer understanding of calorie amounts and serving size: implications for nutritional labelling.

    PubMed

    Vanderlee, Lana; Goodman, Samantha; Sae Yang, Wiworn; Hammond, David

    2012-07-18

    Increased consumption of sugar-sweetened beverages has contributed to rising obesity levels. Under Canadian law, calories for pre-packaged foods and beverages are presented by serving size; however, serving sizes differ across products and even for the same product in different containers. This study examined consumer understanding of calorie amounts for government nutrition labels and industry labelling schemes. A national sample of 687 Canadian adults completed an online survey. Participants were randomized to view images of Coke® bottles that displayed different serving sizes and calorie amounts. Participants viewed either the regulated nutrition information on the "back" of containers, or the voluntary calorie symbols displayed on the "front" of Coke® products. Participants were asked to determine how many calories the bottle contained. Across all conditions, 54.2% of participants correctly identified the number of calories in the beverage. Participants who viewed government-mandated nutrition information were more likely to answer correctly (59.0%) than those who saw industry labelling (49.1%) (OR=5.3, 95% CI: 2.6-10.6). Only 11.8% who viewed the Coke® bottle with calorie amounts per serving correctly identified the calorie amount, compared to 91.8% who saw calorie amounts per container, regardless of whether information was presented in the Nutrition Facts Table or the front-of-pack symbol (OR=242.9, 95% CI: 112.1-526.2). Few individuals can use nutrition labels to correctly identify calorie content when presented per serving or using industry labelling schemes. The findings highlight the importance of revising labelling standards and indicate that industry labelling initiatives warrant greater scrutiny.

  1. Fatigue Crack Growth Rate and Stress-Intensity Factor Corrections for Out-of-Plane Crack Growth

    NASA Technical Reports Server (NTRS)

    Forth, Scott C.; Herman, Dave J.; James, Mark A.

    2003-01-01

    Fatigue crack growth rate testing is performed by automated data collection systems that assume straight crack growth in the plane of symmetry and use standard polynomial solutions to compute crack length and stress-intensity factors from compliance or potential drop measurements. Visual measurements used to correct the collected data typically include only the horizontal crack length, which for cracks that propagate out-of-plane, under-estimates the crack growth rates and over-estimates the stress-intensity factors. The authors have devised an approach for correcting both the crack growth rates and stress-intensity factors based on two-dimensional mixed mode-I/II finite element analysis (FEA). The approach is used to correct out-of-plane data for 7050-T7451 and 2025-T6 aluminum alloys. Results indicate the correction process works well for high ΔK levels but fails to capture the mixed-mode effects at ΔK levels approaching threshold (da/dN approximately 10⁻¹⁰ m/cycle).

  2. Does the size of the rod affect the surgical results in adolescent idiopathic scoliosis? 5.5-mm versus 6.35-mm rod.

    PubMed

    Huang, Tsung-Hsi; Ma, Hsiao-Li; Wang, Shih-Tien; Chou, Po-Hsin; Ying, Szu-Han; Liu, Chien-Lin; Yu, Wing-Kwong; Chang, Ming-Chau

    2014-08-01

    Favorable clinical outcomes of surgical treatment with Cotrel-Dubousset instrumentation (CDI) or instrumentations that follow the principles of CDI, for adolescent idiopathic scoliosis (AIS) have been reported. However, there are few studies concerning the results with rods of different sizes. To find out whether the rod size affects the surgical results for AIS. A retrospective cohort study based on the same spinal system with different sizes of rod. A consecutive series of 93 patients, who underwent posterior correction with posterior instrumentation and fusion for AIS, were included and retrospectively analyzed. Postoperative radiologic outcomes were evaluated using coronal curves, percentage of curve correction, and coronal global balance. Ninety-three patients treated during the period January 2000 to December 2008 were included in this study; 48 patients were treated with the Cotrel-Dubousset Horizon (CDH) M10 system with a 6.35-mm rod from January 2000 through December 2004, and a CDH M8 was used with a 5.5-mm rod in another 45 patients from January 2005 through December 2008. The Cobb angle, Risser grade, coronal curves, flexibility of curve, percentage of curve correction, coronal global balance, operative time, and estimated blood loss were measured and analyzed. The same parameters were used when the patient was followed at the OPD. All of the patients underwent regular follow-up for at least 2 years. No statistical significance was observed in the demographic data, including age, sex, BMI, and Risser grade, between these 2 groups. The overall average percentage of correction was 60.0%±12.7%: 60.7%±12.5% for the CDH M10 group, and 59%±13.1% for the CDH M8 group. At the final follow-up, the overall average loss of correction was 4.8±3.9° for the CDH M10 group, and 4.3±4.0° for the CDH M8 group. The average percentage of correction at the final follow-up was 50.9%±15.1% for the CDH M10 group, and 51.1%±16.1% for the M8 group. No statistical significance could be observed in the radiologic parameters between these 2 groups. The radiologic results for the 5.5-mm rod and the 6.35-mm rod were comparable in terms of correction, loss of correction, and coronal global balance. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features.

    PubMed

    McDonald, Linda S; Panozzo, Joseph F; Salisbury, Phillip A; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective.
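
    A hedged sketch of the classification workflow described above, using scikit-learn's LinearDiscriminantAnalysis on per-sample median colour, shape and size features; the feature matrix and grade labels below are random stand-ins, so the printed accuracy only demonstrates the mechanics, not the study's results.

      # Hedged sketch of the classification workflow: LDA on per-sample median
      # colour/shape/size features. The features and grade labels are random
      # stand-ins, so the printed accuracy only demonstrates the mechanics.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 9))        # e.g. 6 colour intensities + 3 shape/size traits per sample
      y = rng.integers(0, 3, size=120)     # three hypothetical market grades

      X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

      lda = LinearDiscriminantAnalysis().fit(X_cal, y_cal)
      print("validation accuracy:", round(lda.score(X_val, y_val), 2))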

  4. Discriminant Analysis of Defective and Non-Defective Field Pea (Pisum sativum L.) into Broad Market Grades Based on Digital Image Features

    PubMed Central

    McDonald, Linda S.; Panozzo, Joseph F.; Salisbury, Phillip A.; Ford, Rebecca

    2016-01-01

    Field peas (Pisum sativum L.) are generally traded based on seed appearance, which subjectively defines broad market-grades. In this study, we developed an objective Linear Discriminant Analysis (LDA) model to classify market grades of field peas based on seed colour, shape and size traits extracted from digital images. Seeds were imaged in a high-throughput system consisting of a camera and laser positioned over a conveyor belt. Six colour intensity digital images were captured (under 405, 470, 530, 590, 660 and 850nm light) for each seed, and surface height was measured at each pixel by laser. Colour, shape and size traits were compiled across all seed in each sample to determine the median trait values. Defective and non-defective seed samples were used to calibrate and validate the model. Colour components were sufficient to correctly classify all non-defective seed samples into correct market grades. Defective samples required a combination of colour, shape and size traits to achieve 87% and 77% accuracy in market grade classification of calibration and validation sample-sets respectively. Following these results, we used the same colour, shape and size traits to develop an LDA model which correctly classified over 97% of all validation samples as defective or non-defective. PMID:27176469

  5. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    PubMed

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x² should be selected if, over the entire concentration range, σ is a constant, σ² is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x² should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors as the concentrations were always reported with the passing curves which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x² and 1/y² was discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with weighted least-squares regression algorithm.
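
    The recommended 1/x² weighting can be applied with ordinary tools; the sketch below uses numpy's weighted polyfit (whose weights multiply the residuals before squaring, so w = 1/x corresponds to a 1/x² weighting of the squared residuals). The concentrations and responses are synthetic.

      # Weighted least-squares calibration with a 1/x**2 weighting factor.
      # numpy's polyfit weights multiply the residuals before squaring, so passing
      # w = 1/x implements a 1/x**2 weighting of the squared residuals.
      # Concentrations and responses below are synthetic.
      import numpy as np

      conc = np.array([1, 2, 5, 10, 50, 100, 500, 1000], dtype=float)            # ng/mL
      response = 0.02 * conc + np.random.default_rng(1).normal(0, 0.002 * conc)  # noise grows with x

      slope, intercept = np.polyfit(conc, response, deg=1, w=1.0 / conc)         # 1/x**2 weighting
      back_calc = (response - intercept) / slope                                  # back-calculated conc.
      print("slope, intercept:", round(slope, 4), round(intercept, 5))
      print("back-calculated accuracy (%):", np.round(100 * back_calc / conc, 1))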

  6. Adjuvant Radioactive iodine 131 ablation in papillary microcarcinoma of thyroid: Saudi Arabian experience [corrected].

    PubMed

    Al-Qahtani, Khalid Hussain; Al Asiri, Mushabbab; Tunio, Mutahir A; Aljohani, Naji J; Bayoumi, Yasser; Fatani, Hanadi; AlHadab, Abdulrehman

    2015-12-01

    Papillary microcarcinoma (PMC) of the thyroid is a rare type of differentiated thyroid cancer (DTC) which, according to the World Health Organization, measures 1.0 cm or less. The gold standard of treatment for PMC remains controversial. Our aim was to contribute to resolving the debate on the therapeutic choices of surgical and adjuvant I-131 (RAI) treatment in PMC. From 2000 to 2012, 326 patients were found to have PMC and were retrospectively reviewed for clinicopathological characteristics, treatment outcomes and prognostic factors. The mean age of the cohort was 42.6 years (range: 18-76) and the mean tumor size was 0.61 ± 0.24 cm; lymph node involvement was seen in 12.9% of cases. The median follow-up period was 8.05 years (1.62-11.4). A total of 23 all-site recurrences (7.13%) were observed, more often in patients without I-131 ablation (p < 0.0001). The ten-year DFS rate was 89.6%. Cox regression analysis revealed size, histopathologic variants, multifocality, extrathyroidal extension, lymphovascular space invasion, nodal status, and adjuvant RAI ablation to be important prognostic factors affecting DFS. Despite excellent DFS rates, a small proportion of patients with PMC develop recurrences after treatment. Adjuvant RAI therapy improves DFS in PMC patients with aggressive histopathologic variants, multifocality, ETE, LVSI, tumor size (> 0.5 cm) and lymph node involvement. The failure of RAI ablation to decrease risk in N1a/b disease supports prophylactic central neck dissection during thyroidectomy; however, more trials are warranted. Adjuvant I-131 ablation following thyroidectomy in PMC patients, particularly those with poor prognostic factors, improves DFS rates.

  7. Preoperative short hookwire placement for small pulmonary lesions: evaluation of technical success and risk factors for initial placement failure.

    PubMed

    Iguchi, Toshihiro; Hiraki, Takao; Matsui, Yusuke; Fujiwara, Hiroyasu; Masaoka, Yoshihisa; Tanaka, Takashi; Sato, Takuya; Gobara, Hideo; Toyooka, Shinichi; Kanazawa, Susumu

    2018-05-01

    To retrospectively evaluate the technical success of computed tomography fluoroscopy-guided short hookwire placement before video-assisted thoracoscopic surgery and to identify the risk factors for initial placement failure. In total, 401 short hookwire placements for 401 lesions (mean diameter 9.3 mm) were reviewed. Technical success was defined as correct positioning of the hookwire. Possible risk factors for initial placement failure (i.e., the requirement to place an additional hookwire or to abort the attempt) were evaluated using logistic regression analysis for all procedures, and for procedures performed via the conventional route separately. Of the 401 initial placements, 383 were successful and 18 failed. Short hookwires were finally placed for 399 of 401 lesions (99.5%). Univariate logistic regression analyses revealed that in all 401 procedures only the transfissural approach was a significant independent predictor of initial placement failure (odds ratio, OR, 15.326; 95% confidence interval, CI, 5.429-43.267; p < 0.001) and for the 374 procedures performed via the conventional route only lesion size was a significant independent predictor of failure (OR 0.793, 95% CI 0.631-0.996; p = 0.046). The technical success of preoperative short hookwire placement was extremely high. The transfissural approach was a predictor of initial placement failure for all procedures, and small lesion size was a predictor of initial placement failure for procedures performed via the conventional route. • Technical success of preoperative short hookwire placement was extremely high. • The transfissural approach was a significant independent predictor of initial placement failure for all procedures. • Small lesion size was a significant independent predictor of initial placement failure for procedures performed via the conventional route.

  8. Fisheye camera method for spatial non-uniformity corrections in luminous flux measurements with integrating spheres

    NASA Astrophysics Data System (ADS)

    Kokka, Alexander; Pulli, Tomi; Poikonen, Tuomas; Askola, Janne; Ikonen, Erkki

    2017-08-01

    This paper presents a fisheye camera method for determining spatial non-uniformity corrections in luminous flux measurements with integrating spheres. Using a fisheye camera installed into a port of an integrating sphere, the relative angular intensity distribution of the lamp under test is determined. This angular distribution is used for calculating the spatial non-uniformity correction for the lamp when combined with the spatial responsivity data of the sphere. The method was validated by comparing it to a traditional goniophotometric approach when determining spatial correction factors for 13 LED lamps with different angular spreads. The deviations between the spatial correction factors obtained using the two methods ranged from −0.15% to +0.15%. The mean magnitude of the deviations was 0.06%. For a typical LED lamp, the expanded uncertainty (k = 2) for the spatial non-uniformity correction factor was evaluated to be 0.28%. The fisheye camera method removes the need for goniophotometric measurements in determining spatial non-uniformity corrections, thus resulting in considerable system simplification. Generally, no permanent modifications to existing integrating spheres are required.
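
    A hedged sketch of how an angular intensity distribution can be combined with a sphere spatial responsivity map to form a spatial correction factor; the grids, the toy responsivity map and the normalization convention are assumptions for illustration, not the authors' implementation.

      # Hedged sketch: weight the sphere's spatial responsivity map with the angular
      # intensity distributions of the reference lamp and the lamp under test to form
      # a spatial correction factor. Grids, the toy responsivity map and the
      # normalization convention are assumptions, not the authors' implementation.
      import numpy as np

      def spatial_correction_factor(srdf, i_test, i_ref, theta):
          """srdf, i_test, i_ref: (n_theta, n_phi) arrays; theta in radians."""
          w = np.sin(theta)[:, None]                             # solid-angle weight
          num = np.sum(srdf * i_ref * w) / np.sum(i_ref * w)     # response to the reference lamp
          den = np.sum(srdf * i_test * w) / np.sum(i_test * w)   # response to the lamp under test
          return num / den

      theta = np.linspace(0.0, np.pi, 91)
      phi = np.linspace(0.0, 2.0 * np.pi, 181)
      srdf = 1.0 + 0.02 * np.cos(theta)[:, None] * np.ones(phi.size)             # toy responsivity map
      i_ref = np.ones((theta.size, phi.size))                                    # near-isotropic reference
      i_test = np.clip(np.cos(theta), 0.0, None)[:, None] * np.ones(phi.size)    # forward-peaked LED lamp

      print(round(spatial_correction_factor(srdf, i_test, i_ref, theta), 4))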

  9. Model selection for semiparametric marginal mean regression accounting for within-cluster subsampling variability and informative cluster size.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2018-03-13

    We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.

  10. Correction factors to convert microdosimetry measurements in silicon to tissue in 12C ion therapy.

    PubMed

    Bolst, David; Guatelli, Susanna; Tran, Linh T; Chartier, Lachlan; Lerch, Michael L F; Matsufuji, Naruhiro; Rosenfeld, Anatoly B

    2017-03-21

    Silicon microdosimetry is a promising technology for heavy ion therapy (HIT) quality assurance, because of its sub-mm spatial resolution and capability to determine radiation effects at a cellular level in a mixed radiation field. A drawback of silicon is not being tissue-equivalent, thus the need to convert the detector response obtained in silicon to tissue. This paper presents a method for converting silicon microdosimetric spectra to tissue for a therapeutic 12C beam, based on Monte Carlo simulations. The energy deposition spectra in a 10 μm sized silicon cylindrical sensitive volume (SV) were found to be equivalent to those measured in a tissue SV, with the same shape, but with dimensions scaled by a factor κ equal to 0.57 and 0.54 for muscle and water, respectively. A low energy correction factor was determined to account for the enhanced response in silicon at low energy depositions, produced by electrons. The concept of the mean path length [Formula: see text] to calculate the lineal energy was introduced as an alternative to the mean chord length [Formula: see text] because it was found that adopting Cauchy's formula for the [Formula: see text] was not appropriate for the radiation field typical of HIT as it is very directional. [Formula: see text] can be determined based on the peak of the lineal energy distribution produced by the incident carbon beam. Furthermore it was demonstrated that the thickness of the SV along the direction of the incident 12C ion beam can be adopted as [Formula: see text]. The tissue equivalence conversion method and [Formula: see text] were adopted to determine the RBE10, calculated using a modified microdosimetric kinetic model, applied to the microdosimetric spectra resulting from the simulation study. Comparison of the RBE10 along the Bragg peak to experimental TEPC measurements at HIMAC, NIRS, showed good agreement. Such agreement demonstrates the validity of the developed tissue equivalence correction factors and of the determination of [Formula: see text].

  11. Re-evaluation of the correction factors for the GROVEX

    NASA Astrophysics Data System (ADS)

    Ketelhut, Steffen; Meier, Markus

    2018-04-01

    The GROVEX (GROssVolumige EXtrapolationskammer, large-volume extrapolation chamber) is the primary standard for the dosimetry of low-dose-rate interstitial brachytherapy at the Physikalisch-Technische Bundesanstalt (PTB). In the course of setup modifications and re-measuring of several dimensions, the correction factors have been re-evaluated in this work. The correction factors for scatter and attenuation have been recalculated using the Monte Carlo software package EGSnrc, and a new expression has been found for the divergence correction. The obtained results decrease the measured reference air kerma rate by approximately 0.9% for the representative example of a seed of type Bebig I25.S16C. This lies within the expanded uncertainty (k  =  2).

  12. Characterization of Scattered X-Ray Photons in Dental Cone-Beam Computed Tomography.

    PubMed

    Yang, Ching-Ching

    2016-01-01

    Scatter is a very important artifact causing factor in dental cone-beam CT (CBCT), which has a major influence on the detectability of details within images. This work aimed to improve the image quality of dental CBCT through scatter correction. Scatter was estimated in the projection domain from the low frequency component of the difference between the raw CBCT projection and the projection obtained by extrapolating the model fitted to the raw projections acquired with 2 different sizes of axial field-of-view (FOV). The function for curve fitting was optimized by using Monte Carlo simulation. To validate the proposed method, an anthropomorphic phantom and a water-filled cylindrical phantom with rod inserts simulating different tissue materials were scanned using 120 kVp, 5 mA and 9-second scanning time covering an axial FOV of 4 cm and 13 cm. The detectability of the CT image was evaluated by calculating the contrast-to-noise ratio (CNR). Beam hardening and cupping artifacts were observed in CBCT images without scatter correction, especially in those acquired with 13 cm FOV. These artifacts were reduced in CBCT images corrected by the proposed method, demonstrating its efficacy on scatter correction. After scatter correction, the image quality of CBCT was improved in terms of target detectability which was quantified as the CNR for rod inserts in the cylindrical phantom. Hopefully the calculations performed in this work can provide a route to reach a high level of diagnostic image quality for CBCT imaging used in oral and maxillofacial structures whilst ensuring patient dose as low as reasonably achievable, which may ultimately make CBCT scan a reliable and safe tool in clinical practice.
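
    A heavily simplified sketch of the scatter-correction idea (low-pass filtering the difference between a measured projection and a fitted, scatter-poor projection, then subtracting the estimate); the "fitted" projection, the filter width and the data below are placeholders, not the paper's algorithm.

      # Heavily simplified sketch of the scatter-correction idea: take the
      # low-frequency part of the difference between the measured projection and a
      # fitted, scatter-poor projection as the scatter estimate, then subtract it.
      # The "fitted" projection, filter width and data are placeholders.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def correct_projection(proj_measured, proj_fitted, sigma_px=25.0):
          scatter_est = gaussian_filter(proj_measured - proj_fitted, sigma=sigma_px)
          scatter_est = np.clip(scatter_est, 0.0, None)       # scatter cannot be negative
          return proj_measured - scatter_est

      rng = np.random.default_rng(2)
      raw = rng.uniform(0.2, 1.0, size=(256, 256))            # stand-in raw projection
      fitted = gaussian_filter(raw, 5) * 0.9                  # stand-in for the model-based extrapolation
      print(correct_projection(raw, fitted).shape)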

  13. Visual acuity, endothelial cell density and polymegathism after iris-fixated lens implantation.

    PubMed

    Nassiri, Nader; Ghorbanhosseini, Saeedeh; Jafarzadehpur, Ebrahim; Kavousnezhad, Sara; Nassiri, Nariman; Sheibani, Kourosh

    2018-01-01

    The purpose of this study was to evaluate the visual acuity as well as endothelial cell density (ECD) and polymegathism after iris-fixated lens (Artiflex® AC 401) implantation for correction of moderate to high myopia. In this retrospective cross-sectional study, 55 eyes from 29 patients undergoing iris-fixated lens implantation for correction of myopia (-5.00 to -15.00 D) from 2007 to 2014 were evaluated. Uncorrected visual acuity, best spectacle-corrected visual acuity, refraction, ECD and polymegathism (coefficient of variation [CV] in the sizes of endothelial cells) were measured preoperatively and 6 months postoperatively. In the sixth month of follow-up, the uncorrected visual acuity was 20/25 or better in 81.5% of the eyes. The best-corrected visual acuity was 20/30 or better in 96.3% of the eyes, and more than 92% of the eyes had a refraction score of ±1 D from the target refraction. The mean corneal ECD of patients before surgery was 2,803±339 cells/mm², which changed to 2,744±369 cells/mm² six months after surgery (p = 0.142). CV in the sizes of endothelial cells before the surgery was 25.7%±7.1% and six months after surgery it was 25.9%±5.4% (p = 0.857). Artiflex iris-fixated lens implantation is a suitable and predictable method for correction of moderate to high myopia. There was no statistically significant change in ECD and polymegathism (CV in the sizes of endothelial cells) after 6 months of follow-up.

  14. SU-E-CAMPUS-I-04: Automatic Skin-Dose Mapping for An Angiographic System with a Region-Of-Interest, High-Resolution Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vijayan, S; Rana, V; Setlur Nagesh, S

    2014-06-15

    Purpose: Our real-time skin dose tracking system (DTS) has been upgraded to monitor dose for the micro-angiographic fluoroscope (MAF), a high-resolution, small field-of-view x-ray detector. Methods: The MAF has been mounted on a changer on a clinical C-Arm gantry so it can be used interchangeably with the standard flat-panel detector (FPD) during neuro-interventional procedures when high resolution is needed in a region-of-interest. To monitor patient skin dose when using the MAF, our DTS has been modified to automatically account for the change in scatter for the very small MAF FOV and to provide separated dose distributions for each detector. The DTS is able to provide a color-coded mapping of the cumulative skin dose on a 3D graphic model of the patient. To determine the correct entrance skin exposure to be applied by the DTS, a correction factor was determined by measuring the exposure at the entrance surface of a skull phantom with an ionization chamber as a function of entrance beam size for various beam filters and kVps. Entrance exposure measurements included primary radiation, patient backscatter and table forward scatter. To allow separation of the dose from each detector, a parameter log is kept that allows a replay of the procedure exposure events and recalculation of the dose components. The graphic display can then be constructed showing the dose distribution from the MAF and FPD separately or together. Results: The DTS is able to provide separate displays of dose for the MAF and FPD with field-size specific scatter corrections. These measured corrections change from about 49% down to 10% when changing from the FPD to the MAF. Conclusion: The upgraded DTS allows identification of the patient skin dose delivered when using each detector in order to achieve improved dose management as well as to facilitate peak skin-dose reduction through dose spreading. Research supported in part by Toshiba Medical Systems Corporation and NIH Grants R43FD0158401, R44FD0158402 and R01EB002873.

  15. Detector signal correction method and system

    DOEpatents

    Carangelo, Robert M.; Duran, Andrew J.; Kudman, Irwin

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  16. Detector signal correction method and system

    DOEpatents

    Carangelo, R.M.; Duran, A.J.; Kudman, I.

    1995-07-11

    Corrective factors are applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factors may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  17. Interleaved segment correction achieves higher improvement factors in using genetic algorithm to optimize light focusing through scattering media

    NASA Astrophysics Data System (ADS)

    Li, Runze; Peng, Tong; Liang, Yansheng; Yang, Yanlong; Yao, Baoli; Yu, Xianghua; Min, Junwei; Lei, Ming; Yan, Shaohui; Zhang, Chunmin; Ye, Tong

    2017-10-01

    Focusing and imaging through scattering media have been proved possible with high-resolution wavefront shaping. A completely scrambled scattering field can be corrected by applying a correction phase mask on a phase-only spatial light modulator (SLM) and thereby the focusing quality can be improved. The correction phase is often found by global searching algorithms, among which the Genetic Algorithm (GA) stands out for its parallel optimization process and high performance in noisy environments. However, the convergence of GA slows down gradually with the progression of optimization, causing the improvement factor of optimization to reach a plateau eventually. In this report, we propose an interleaved segment correction (ISC) method that can significantly boost the improvement factor with the same number of iterations compared with the conventional all-segment correction method. In the ISC method, all the phase segments are divided into a number of interleaved groups; GA optimization procedures are performed individually and sequentially among each group of segments. The final correction phase mask is formed by applying correction phases of all interleaved groups together on the SLM. The ISC method has proved significantly useful in practice because of its ability to achieve better improvement factors when noise is present in the system. We have also demonstrated that the imaging quality is improved as better correction phases are found and applied on the SLM. Additionally, the ISC method lowers the demand on the dynamic range of detection devices. The proposed method holds potential in applications, such as high-resolution imaging in deep tissue.
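
    The interleaved grouping itself is easy to express in code; the sketch below splits the SLM phase segments into interleaved groups by index modulo the group count and optimizes them sequentially, with a crude random-search routine standing in for the genetic algorithm and a synthetic focus metric replacing the real intensity measurement.

      # Sketch of interleaved segment correction: split the SLM phase segments into
      # interleaved groups (index modulo the group count) and optimize them group by
      # group. A crude random search stands in for the genetic algorithm, and a
      # synthetic focus metric replaces the real intensity measurement.
      import numpy as np

      N_SEG, N_GROUPS, PHASE_LEVELS = 32 * 32, 4, 16
      rng = np.random.default_rng(0)
      TARGET = np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, N_SEG)   # pretend "ideal" correction phases

      def focus_metric(mask):
          # stand-in for the measured focal intensity: maximal when mask matches TARGET
          return float(np.abs(np.mean(np.exp(1j * (mask - TARGET)))))

      def optimize_group(mask, idx, iterations=200):
          """Random-search placeholder for the per-group GA optimization."""
          best = focus_metric(mask)
          n_mut = max(1, idx.size // 10)
          for _ in range(iterations):
              trial = mask.copy()
              chosen = rng.choice(idx, size=n_mut, replace=False)
              trial[chosen] = rng.integers(0, PHASE_LEVELS, size=n_mut) * 2.0 * np.pi / PHASE_LEVELS
              score = focus_metric(trial)
              if score > best:
                  mask, best = trial, score
          return mask

      mask = np.zeros(N_SEG)
      groups = [np.flatnonzero(np.arange(N_SEG) % N_GROUPS == g) for g in range(N_GROUPS)]
      for idx in groups:                            # sequential optimization of interleaved groups
          mask = optimize_group(mask, idx)
      print("final focus metric:", round(focus_metric(mask), 3))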

  18. A systematic approach to designing statistically powerful heteroscedastic 2 × 2 factorial studies while minimizing financial costs.

    PubMed

    Jan, Show-Li; Shieh, Gwowen

    2016-08-31

    The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor had only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
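
    The existing allocation rule quoted above (the one this paper improves upon) can be written directly; the sketch below encodes that rule only, and the numbers are illustrative.

      # The existing allocation rule quoted above, in code: n1/n2 equals the ratio of
      # standard deviations divided by the square root of the ratio of unit sampling
      # costs. Numbers are illustrative only.
      import math

      def optimal_ratio(sigma1, sigma2, cost1, cost2):
          """n1/n2 under the existing heteroscedastic cost-allocation rule."""
          return (sigma1 / sigma2) / math.sqrt(cost1 / cost2)

      def allocate(n_total, sigma1, sigma2, cost1, cost2):
          r = optimal_ratio(sigma1, sigma2, cost1, cost2)
          n2 = n_total / (1.0 + r)
          return round(n_total - n2), round(n2)

      print(optimal_ratio(sigma1=12.0, sigma2=6.0, cost1=4.0, cost2=1.0))   # -> 1.0
      print(allocate(120, 12.0, 6.0, 4.0, 1.0))                             # -> (60, 60)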

  19. Geometric morphometric analysis of Colombian Anopheles albimanus (Diptera: Culicidae) reveals significant effect of environmental factors on wing traits and presence of a metapopulation

    PubMed Central

    Gómez, Giovan F.; Márquez, Edna J.; Gutiérrez, Lina A.; Conn, Jan E.; Correa, Margarita M.

    2015-01-01

    Anopheles albimanus is a major malaria mosquito vector in Colombia. In the present study, wing variability (size and shape) in An. albimanus populations from Colombian Maracaibo and Chocó bio-geographical eco-regions and the relationship of these phenotypic traits with environmental factors were evaluated. Microsatellite and morphometric data facilitated a comparison of the genetic and phenetic structure of this species. Wing size was influenced by elevation and relative humidity, whereas wing shape was affected by these two variables and also by rainfall, latitude, temperature and eco-region. Significant differences in mean shape between populations and eco-regions were detected, but they were smaller than those at the intra-population level. Correct assignment based on wing shape was low at the population level (<58%) and only slightly higher (>70%) at the eco-regional level, supporting the low population structure inferred from microsatellite data. Wing size was similar among populations with no significant differences between eco-regions. Population relationships in the genetic tree did not agree with those from the morphometric data; however, both datasets consistently reinforced a panmictic population of An. albimanus. Overall, site-specific population differentiation is not strongly supported by wing traits or genotypic data. We hypothesize that the metapopulation structure of An. albimanus throughout these Colombian eco-regions is favoring plasticity in wing traits, a relevant characteristic of species living under variable environmental conditions and colonizing new habitats. PMID:24704285

  20. Recombination in Streptococcus pneumoniae Lineages Increase with Carriage Duration and Size of the Polysaccharide Capsule

    PubMed Central

    Andam, Cheryl P.; Harris, Simon R.; Cornick, Jennifer E.; Yang, Marie; Bricio-Moreno, Laura; Kamng’ona, Arox W.; French, Neil; Heyderman, Robert S.; Kadioglu, Aras; Everett, Dean B.; Bentley, Stephen D.

    2016-01-01

    Streptococcus pneumoniae causes a high burden of invasive pneumococcal disease (IPD) globally, especially in children from resource-poor settings. Like many bacteria, the pneumococcus can import DNA from other strains or even species by transformation and homologous recombination, which has allowed the pneumococcus to evade clinical interventions such as antibiotics and pneumococcal conjugate vaccines (PCVs). Pneumococci are enclosed in a complex polysaccharide capsule that determines the serotype; the capsule varies in size and is associated with properties including carriage prevalence and virulence. We determined and quantified the association between capsule and recombination events using genomic data from a diverse collection of serotypes sampled in Malawi. We determined both the amount of variation introduced by recombination relative to mutation (the relative rate) and how many individual recombination events occur per isolate (the frequency). Using univariate analyses, we found an association between both recombination measures and multiple factors associated with the capsule, including duration and prevalence of carriage. Because many capsular factors are correlated, we used multivariate analysis to correct for collinearity. Capsule size and carriage duration remained positively associated with recombination, although with a reduced P value, and this effect may be mediated through some unassayed additional property associated with larger capsules. This work describes an important impact of serotype on recombination that has been previously overlooked. While the details of how this effect is achieved remain to be determined, it may have important consequences for the serotype-specific response to vaccines and other interventions. PMID:27677790

  1. Calculation of the Pitot tube correction factor for Newtonian and non-Newtonian fluids.

    PubMed

    Etemad, S Gh; Thibault, J; Hashemabadi, S H

    2003-10-01

    This paper presents the numerical investigation performed to calculate the correction factor for Pitot tubes. The purely viscous non-Newtonian fluids with the power-law model constitutive equation were considered. It was shown that the power-law index, the Reynolds number, and the distance between the impact and static tubes have a major influence on the Pitot tube correction factor. The problem was solved for a wide range of these parameters. It was shown that employing Bernoulli's equation could lead to large errors, which depend on the magnitude of the kinetic energy and energy friction loss terms. A neural network model was used to correlate the correction factor of a Pitot tube as a function of these three parameters. This correlation is valid for most Newtonian, pseudoplastic, and dilatant fluids at low Reynolds number.
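
    For reference, a Pitot correction factor is commonly applied to the ideal Bernoulli velocity as shown below; the correction value here is a made-up example, and in the low-Reynolds-number, power-law regime studied above it would come from a correlation such as the authors' neural-network model, which is not reproduced here.

      # A Pitot correction factor C applied to the ideal Bernoulli velocity
      # v = sqrt(2*dP/rho). The value of C here is a made-up example; in the
      # low-Reynolds-number, power-law regime studied above it would come from a
      # correlation such as the authors' neural-network model (not reproduced here).
      import math

      def pitot_velocity(delta_p_pa: float, density_kg_m3: float, correction: float = 1.0) -> float:
          return correction * math.sqrt(2.0 * delta_p_pa / density_kg_m3)

      print(round(pitot_velocity(250.0, 1050.0), 3))                   # ideal, C = 1
      print(round(pitot_velocity(250.0, 1050.0, correction=0.93), 3))  # with a hypothetical C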

  2. Axial Length Variation Impacts on Superficial Retinal Vessel Density and Foveal Avascular Zone Area Measurements Using Optical Coherence Tomography Angiography.

    PubMed

    Sampson, Danuta M; Gong, Peijun; An, Di; Menghini, Moreno; Hansen, Alex; Mackey, David A; Sampson, David D; Chen, Fred K

    2017-06-01

    To evaluate the impact of image magnification correction on superficial retinal vessel density (SRVD) and foveal avascular zone area (FAZA) measurements using optical coherence tomography angiography (OCTA). Participants with healthy retinas were recruited for ocular biometry, refraction, and RTVue XR Avanti OCTA imaging with the 3 × 3-mm protocol. The foveal and parafoveal SRVD and FAZA were quantified with custom software before and after correction for magnification error using the Littmann and the modified Bennett formulae. Relative changes between corrected and uncorrected SRVD and FAZA were calculated. Forty subjects were enrolled and the median (range) age of the participants was 30 (18-74) years. The mean (range) spherical equivalent refractive error was -1.65 (-8.00 to +4.88) diopters and mean (range) axial length was 24.42 mm (21.27-28.85). Images from 13 eyes were excluded due to poor image quality, leaving 67 for analysis. Relative changes in foveal and parafoveal SRVD and FAZA after correction ranged from -20% to +10%, -3% to +2%, and -20% to +51%, respectively. Image size correction in measurements of foveal SRVD and FAZA was greater than 5% in 51% and 74% of eyes, respectively. In contrast, 100% of eyes had less than 5% correction in measurements of parafoveal SRVD. Ocular biometry should be performed with OCTA to correct image magnification error induced by axial length variation. We advise caution when interpreting interocular and interindividual comparisons of SRVD and FAZA derived from OCTA without image size correction.
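
    A hedged sketch of axial-length-based lateral scaling using Bennett's approximation q = 0.01306*(AL - 1.82); the instrument's assumed axial length used below is an illustrative value, and the exact device constants used in the study are not reproduced here.

      # Hedged sketch of axial-length-based lateral scaling using Bennett's
      # approximation q = 0.01306 * (AL - 1.82). The instrument's assumed axial
      # length (24.46 mm here) is an illustrative value, not the study's constant.
      def bennett_q(axial_length_mm: float) -> float:
          return 0.01306 * (axial_length_mm - 1.82)

      def corrected_length(measured_mm: float, axial_length_mm: float,
                           assumed_axial_length_mm: float = 24.46) -> float:
          """Rescale a lateral OCTA measurement for the subject's true axial length."""
          return measured_mm * bennett_q(axial_length_mm) / bennett_q(assumed_axial_length_mm)

      def corrected_area(measured_mm2: float, axial_length_mm: float,
                         assumed_axial_length_mm: float = 24.46) -> float:
          scale = bennett_q(axial_length_mm) / bennett_q(assumed_axial_length_mm)
          return measured_mm2 * scale ** 2             # areas such as FAZA scale with the square

      print(round(corrected_area(0.30, axial_length_mm=26.5), 3))   # a longer eye: true FAZA is larger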

  3. Aircraft Survivability: Susceptibility Reduction, Spring 2003

    DTIC Science & Technology

    2003-01-01

    approach to implementing the real-time nonuniformity correction (NUC) hardware. Packaging and size constraints would not prohibit the future... [The remainder of this excerpt is a fragment of an imager specification table: MWIR #3 imager, 640x512 InSb FPA, 3 µm–5 µm band, 24 µm pixel size, 92 Hz max frame rate; LWIR imager, 640x512 HgCdTe FPA, 3 µm–5 µm band.]

  4. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  5. Twelve- to 14-Month-Old Infants Can Predict Single-Event Probability with Large Set Sizes

    ERIC Educational Resources Information Center

    Denison, Stephanie; Xu, Fei

    2010-01-01

    Previous research has revealed that infants can reason correctly about single-event probabilities with small but not large set sizes (Bonatti, 2008; Teglas "et al.", 2007). The current study asks whether infants can make predictions regarding single-event probability with large set sizes using a novel procedure. Infants completed two trials: A…

  6. Maturity assessment of harumanis mango using thermal camera sensor

    NASA Astrophysics Data System (ADS)

    Sa'ad, F. S. A.; Shakaff, A. Y. Md.; Zakaria, A.; Abdullah, A. H.; Ibrahim, M. F.

    2017-03-01

    The perceived quality of fruits such as mangoes depends greatly on parameters such as ripeness, shape and size, and is influenced by other factors such as harvesting time. Unfortunately, manual fruit grading has several drawbacks, including subjectivity, tediousness and inconsistency. Automating the procedure, together with developing a new classification technique, may solve these problems. This paper presents novel work on using infrared imaging as a tool for quality monitoring of Harumanis mangoes. The histogram of the infrared image was used to distinguish and classify the level of ripeness of the fruits, based on the colour spectrum, by week. The proposed thermal-imaging approach achieved 90.5% correct classification.

  7. LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies

    PubMed Central

    Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.

    2015-01-01

    Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
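
    A toy illustration of the LD Score regression idea (regress chi-square statistics on per-SNP LD Scores and use the intercept as a correction factor); real implementations use weighted regression and block jackknife standard errors, and the simulated numbers below are placeholders.

      # Toy illustration of the LD Score regression idea: regress GWAS chi-square
      # statistics on per-SNP LD Scores; the intercept estimates confounding and can
      # serve as a correction factor in place of genomic control. Real implementations
      # use weighted regression and a block jackknife; the data here are simulated.
      import numpy as np

      rng = np.random.default_rng(0)
      M, N, h2, confounding = 50_000, 100_000, 0.4, 1.05     # SNPs, sample size, heritability, bias

      ld_scores = rng.gamma(shape=2.0, scale=50.0, size=M)
      expected = confounding + N * h2 * ld_scores / (M * ld_scores.mean())
      chi2 = expected * rng.chisquare(df=1, size=M)          # noisy chi-square statistics

      slope, intercept = np.polyfit(ld_scores, chi2, 1)
      print("intercept (confounding estimate):", round(intercept, 3))
      print("mean chi2 before / after correction:", round(chi2.mean(), 3), round((chi2 / intercept).mean(), 3))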

  8. Validity of photographs for food portion estimation in a rural West African setting.

    PubMed

    Huybregts, L; Roberfroid, D; Lachat, C; Van Camp, J; Kolsteren, P

    2008-06-01

    To validate food photographs for food portion size estimation of frequently consumed dishes, to be used in a 24-hour recall food consumption study of pregnant women in a rural environment in Burkina Faso. This food intake study is part of an intervention evaluating the efficacy of prenatal micronutrient supplementation on birth outcomes. Women of childbearing age (15-45 years). A food photograph album containing four photographs of food portions per food item was compiled for eight selected food items. Subjects were presented two food items each in the morning and two in the afternoon. These foods were weighed to the exact weight of a food depicted in one of the photographs and were in the same receptacles. The next day another fieldworker presented the food photographs to the subjects to test their ability to choose the correct photograph. The correct photograph out of the four proposed was chosen in 55% of 1028 estimations. For each food, proportions of underestimating and overestimating participants were balanced, except for rice and couscous. On a group level, mean differences between served and estimated portion sizes were between -8.4% and 6.3%. Subjects who attended school were almost twice as likely to choose the correct photograph. The portion size served (small vs. largest sizes) had a significant influence on the portion estimation ability. The results from this study indicate that in a West African rural setting, food photographs can be a valuable tool for the quantification of food portion size on group level.

  9. Size of the Dynamic Bead in Polymers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Agapov, Alexander L; Sokolov, Alexei P

    2010-01-01

    The presented analysis of neutron, mechanical, and MD simulation data available in the literature demonstrates that the dynamic bead size (the smallest subchain that still exhibits the Rouse-like dynamics) in most of the polymers is significantly larger than the traditionally defined Kuhn segment. Moreover, our analysis emphasizes that even the static bead size (e.g., chain statistics) disagrees with the Kuhn segment length. We demonstrate that the deficiency of the Kuhn segment definition is based on the assumption of a chain being completely extended inside a single bead. The analysis suggests that representation of a real polymer chain by the bead-and-spring model with a single parameter C cannot be correct. One needs more parameters to correctly reflect the details of the chain structure in the bead-and-spring model.

  10. Conductivity Cell Thermal Inertia Correction Revisited

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2012-12-01

    Salinity measurements made with a CTD (conductivity-temperature-depth instrument) rely on accurate estimation of water temperature within their conductivity cell. Lueck (1990) developed a theoretical framework for heat transfer between the cell body and water passing through it. Based on this model, Lueck and Picklo (1990) introduced the practice of correcting for cell thermal inertia by filtering a temperature time series using two parameters, an amplitude α and a decay time constant τ, a practice now widely used. Typically these two parameters are chosen for a given cell configuration and internal flushing speed by a statistical method applied to a particular data set. Here, thermal inertia correction theory has been extended to apply to flow speeds spanning well over an order of magnitude, both within and outside a conductivity cell, to provide predictions of α and τ from cell geometry and composition. The extended model enables thermal inertia correction for the variable flows encountered by conductivity cells on autonomous gliders and floats, as well as tethered platforms. The length scale formed as the product of cell encounter speed of isotherms, α, and τ can be used to gauge the size of the temperature correction for a given thermal stratification. For cells flushed by dynamic pressure variation induced by platform motion, this length varies by less than a factor of 2 over more than a decade of speed variation. The magnitude of correction for free-flow flushed sensors is comparable to that of pumped cells, but at an order of magnitude in energy savings. Flow conditions around a cell's exterior are found to be of comparable importance to thermal inertia response as flushing speed. Simplification of cell thermal response to a single normal mode is most valid at slow speed. Error in thermal inertia estimation arises from both neglect of higher modes and numerical discretization of the correction scheme, both of which can be easily quantified. Consideration of thermal inertia correction enables assessment of various CTD sampling schemes. Spot sampling by pumping a cell intermittently provides particular challenges, and may lead to biases in inferred salinity that are comparable to climate signals reported from profiling float arrays.
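
    A sketch of the recursive alpha-tau thermal-inertia filter in the Lueck and Picklo tradition, written as it is commonly implemented in CTD processing; the alpha and tau values, the sampling interval and the sign/application convention (temperature versus conductivity) are assumptions that vary between instruments.

      # Recursive alpha-tau thermal-inertia filter in the Lueck & Picklo tradition,
      # as commonly implemented in CTD processing. It estimates the temperature error
      # of the water inside the conductivity cell from the measured temperature
      # series; alpha, tau, the sampling interval and the sign/application convention
      # (temperature versus conductivity) are assumptions that vary by instrument.
      import numpy as np

      def thermal_inertia_error(temp_C, dt_s, alpha=0.03, tau_s=7.0):
          """Recursive estimate of the cell thermal-inertia temperature error."""
          fn = 1.0 / (2.0 * dt_s)                               # Nyquist frequency
          a = 4.0 * fn * alpha * tau_s / (1.0 + 4.0 * fn * tau_s)
          b = 1.0 - 2.0 * a / alpha
          err = np.zeros_like(temp_C)
          for n in range(1, len(temp_C)):
              err[n] = -b * err[n - 1] + a * (temp_C[n] - temp_C[n - 1])
          return err

      t = np.arange(0.0, 60.0, 0.25)                            # 4 Hz sampling through a thermocline
      temp = 15.0 - 3.0 / (1.0 + np.exp(-(t - 30.0)))           # smooth 3 degC step
      err = thermal_inertia_error(temp, dt_s=0.25)
      print("peak thermal-inertia temperature error (degC):", round(float(err.min()), 3))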

  11. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE PAGES

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen; ...

    2017-08-12

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: Both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.

  12. CT acquisition technique and quantitative analysis of the lung parenchyma: variability and corrections

    NASA Astrophysics Data System (ADS)

    Zheng, Bin; Leader, J. K.; Coxson, Harvey O.; Scuirba, Frank C.; Fuhrman, Carl R.; Balkan, Arzu; Weissfeld, Joel L.; Maitz, Glenn S.; Gur, David

    2006-03-01

    The fraction of lung voxels below a pixel value "cut-off" has been correlated with pathologic estimates of emphysema. We performed a "standard" quantitative CT (QCT) lung analysis using a -950 HU cut-off to determine the volume fraction of emphysema (below the cut-off) and a "corrected" QCT analysis after removing small groups (5 and 10 pixels) of connected pixels ("blobs") below the cut-off. CT examinations of two datasets of 15 subjects each, with a range of visible emphysema and pulmonary obstruction, were acquired at low dose and at conventional dose for the same subjects and reconstructed using a high-spatial-frequency kernel at 2.5 mm section thickness. The "blob" size (i.e., the number of connected pixels) removed was inversely related to the computed fraction of emphysema. The slopes of emphysema fraction versus blob size were 0.013, 0.009, and 0.005 for subjects with both no emphysema and no pulmonary obstruction, moderate emphysema and pulmonary obstruction, and severe emphysema and severe pulmonary obstruction, respectively. The slopes of emphysema fraction versus blob size were 0.008 and 0.006 for low-dose and conventional CT examinations, respectively. The small blobs of pixels removed are most likely CT image artifacts and do not represent actual emphysema. The magnitude of the blob correction was appropriately associated with COPD severity. The blob correction appears to be applicable to QCT analysis in low-dose and conventional CT exams.
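
    A sketch of the "blob" correction described above using connected-component labelling; the threshold and blob sizes follow the text, but the image is random stand-in data and the helper function is not the authors' code.

      # Sketch of the "blob" correction: threshold at -950 HU, label connected groups
      # of low-attenuation pixels, discard groups of 5 or 10 pixels or fewer, and
      # recompute the emphysema fraction. The image is random stand-in data, not CT.
      import numpy as np
      from scipy import ndimage

      def emphysema_fraction(hu_slice, lung_mask, cutoff=-950, min_blob_px=0):
          low = (hu_slice < cutoff) & lung_mask
          if min_blob_px > 0:
              labels, n = ndimage.label(low)
              sizes = ndimage.sum(low, labels, index=np.arange(1, n + 1))   # pixels per blob
              low = np.isin(labels, np.flatnonzero(sizes > min_blob_px) + 1)
          return low.sum() / lung_mask.sum()

      rng = np.random.default_rng(0)
      hu = rng.normal(-870, 60, size=(256, 256))     # synthetic "lung" attenuation values
      mask = np.ones_like(hu, dtype=bool)
      for blob in (0, 5, 10):
          print(blob, "px blobs removed:", round(float(emphysema_fraction(hu, mask, min_blob_px=blob)), 4))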

  13. Assessing the capability of continuum and discrete particle methods to simulate gas-solids flow using DNS predictions as a benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Liqiang; Liu, Xiaowen; Li, Tingwen

    For this study, gas–solids flow in a three-dimensional periodic domain was numerically investigated by direct numerical simulation (DNS), the computational fluid dynamics-discrete element method (CFD-DEM) and the two-fluid model (TFM). DNS data obtained by finely resolving the flow around every particle are used as a benchmark to assess the validity of coarser DEM and TFM approaches. The CFD-DEM predicts the correct cluster size distribution and under-predicts the macro-scale slip velocity even with a grid size as small as twice the particle diameter. The TFM approach predicts larger cluster size and lower slip velocity with a homogeneous drag correlation. Although the slip velocity can be matched by a simple modification to the drag model, the predicted voidage distribution is still different from DNS: Both CFD-DEM and TFM over-predict the fraction of particles in dense regions and under-predict the fraction of particles in regions of intermediate void fractions. Also, the cluster aspect ratio of DNS is smaller than CFD-DEM and TFM. Since a simple correction to the drag model can predict a correct slip velocity, it is hoped that drag corrections based on more elaborate theories that consider voidage gradient and particle fluctuations may be able to improve the current predictions of cluster distribution.

  14. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement dispersion (statistical) of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range-based merit function ω_m, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb, dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for 65Cu/63Cu and 61Ni/62Ni. One of the internal standardization drift correction factors uses a least-squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not by two linearly correlated quantities, which is the usual interpretation of such graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric-curve effect.
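
    A minimal sketch (our own illustration, with assumed variable names) of one simple internal-standardization drift correction of the linear type described above: the analyte ratio is regressed against the simultaneously measured internal-standard ratio across the run, and each measurement is then corrected back to the internal standard's reference value.

      import numpy as np

      def drift_correct(r_analyte, r_internal_std, r_is_reference):
          """Correct measured analyte isotope ratios using an internal-standard ratio."""
          r_an = np.asarray(r_analyte, dtype=float)
          r_is = np.asarray(r_internal_std, dtype=float)
          # least-squares fit of the analyte ratio as a linear function of the IS ratio
          slope = np.polyfit(r_is, r_an, deg=1)[0]
          # remove the component of the drift that is correlated with the internal standard
          return r_an - slope * (r_is - r_is_reference)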

  15. Method of absorbance correction in a spectroscopic heating value sensor

    DOEpatents

    Saveliev, Alexei; Jangale, Vilas Vyankatrao; Zelepouga, Sergeui; Pratapas, John

    2013-09-17

    A method and apparatus for absorbance correction in a spectroscopic heating value sensor in which a reference light intensity measurement is made on a non-absorbing reference fluid, a light intensity measurement is made on a sample fluid, and a measured light absorbance of the sample fluid is determined. A corrective light intensity measurement at a non-absorbing wavelength of the sample fluid is made on the sample fluid from which an absorbance correction factor is determined. The absorbance correction factor is then applied to the measured light absorbance of the sample fluid to arrive at a true or accurate absorbance for the sample fluid.
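
    A hedged sketch of the correction idea as we read the abstract (the patent's exact formulation may differ; variable names are ours): the apparent absorbance at a wavelength where the sample fluid does not absorb captures non-chemical losses such as scattering, window fouling or source drift, and is removed from the measured absorbance.

      import numpy as np

      def corrected_absorbance(i_sample, i_ref, i_sample_nonabs, i_ref_nonabs):
          a_measured = -np.log10(i_sample / i_ref)                 # absorbance at the analytical wavelength
          a_baseline = -np.log10(i_sample_nonabs / i_ref_nonabs)   # apparent absorbance at a non-absorbing wavelength
          return a_measured - a_baseline                           # corrected ("true") absorbance of the sample fluid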

  16. New Correction Factors Based on Seasonal Variability of Outdoor Temperature for Estimating Annual Radon Concentrations in UK.

    PubMed

    Daraktchieva, Z

    2017-06-01

    Indoor radon concentrations generally vary with season. Radon gas enters buildings from beneath because of a small air-pressure difference between the inside of a house and outdoors. This underpressure, which draws soil gas including radon into the house, depends on the difference between the indoor and outdoor temperatures. In a typical UK house, the mean indoor radon concentration reaches a maximum in January and a minimum in July. Sine functions were used to model the indoor radon data and the monthly average outdoor temperatures covering the period between 2005 and 2014. The analysis showed a strong negative correlation between the modelled indoor radon data and outdoor temperature. This correlation was used to calculate new correction factors for estimating the annual radon concentration in UK homes. A comparison between results obtained with the new correction factors and the previously published correction factors showed that the new factors perform consistently better on the selected data sets.
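
    A minimal sketch (our own construction, not the published UK procedure) of how seasonal correction factors can be derived from a sinusoidal model of indoor radon: a short-term measurement is scaled by the ratio of the modelled annual mean to the model's mean over the measurement window.

      import numpy as np
      from scipy.optimize import curve_fit

      def sine_model(month, annual_mean, amplitude, phase):
          # 12-month sinusoid: maximum in winter, minimum in summer
          return annual_mean + amplitude * np.sin(2.0 * np.pi * month / 12.0 + phase)

      def seasonal_correction_factor(months, radon, start_month, n_months):
          """Fit monthly mean radon data, then scale a short measurement toward the annual mean."""
          p0 = [np.mean(radon), np.ptp(radon) / 2.0, 0.0]
          (annual_mean, amplitude, phase), _ = curve_fit(sine_model, months, radon, p0=p0)
          window = (start_month - 1 + np.arange(n_months)) % 12 + 1
          return annual_mean / sine_model(window, annual_mean, amplitude, phase).mean()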

  17. Analysis of diffuse radiation data for Beer Sheva: Measured (shadow ring) versus calculated (global-horizontal beam) values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kudish, A.I.; Ianetz, A.

    1993-12-01

    The authors have utilized concurrently measured global, normal incidence beam, and diffuse radiation data, the latter measured by means of a shadow ring pyranometer, to study the relative magnitude of the anisotropic contribution (circumsolar region and nonuniform sky conditions) to the diffuse radiation. In the case of Beer Sheva, the monthly average hourly anisotropic correction factor varies from 2.9 to 20.9%, whereas the "standard" geometric correction factor varies from 5.6 to 14.0%. The monthly average hourly overall correction factor (combined anisotropic and geometric factors) varies from 8.9 to 37.7%. The data have also been analyzed using a simple model of sky radiance developed by Steven in 1984. His anisotropic correction factor is a function of the relative strength and angular width of the circumsolar radiation region. The results of this analysis are in agreement with those previously reported for Quidron on the Dead Sea, viz. the anisotropy and relative strength of the circumsolar radiation are significantly greater than at any of the sites analyzed by Steven. In addition, the data have been utilized to validate a model developed by LeBaron et al. in 1990 for correcting shadow ring diffuse radiation data. The monthly average deviation between the corrected and true diffuse radiation values varies from 4.55 to 7.92%.
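
    As a hedged illustration of how these correction factors combine (a generic form in our notation, not the specific LeBaron et al. model): the shadow ring obscures a fraction f of the diffuse sky, so an isotropic-sky geometric factor multiplies the ring reading, and an anisotropic factor accounts for the circumsolar excess.

      D_{\mathrm{true}} \;\approx\; C_{\mathrm{aniso}}\, C_{\mathrm{geo}}\, D_{\mathrm{ring}},
      \qquad C_{\mathrm{geo}} = \frac{1}{1 - f},

    where f depends on the ring geometry, site latitude and solar declination, and C_aniso >= 1 when a strong circumsolar component is present.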

  18. Testing the Perey effect

    DOE PAGES

    Titus, L. J.; Nunes, Filomena M.

    2014-03-12

    Here, the effects of non-local potentials have historically been approximately included by applying a correction factor to the solution of the corresponding equation for the local equivalent interaction. This is usually referred to as the Perey correction factor. In this work we investigate the validity of the Perey correction factor for single-channel bound and scattering states, as well as in transfer (p, d) cross sections. Method: We solve the scattering and bound state equations for non-local interactions of the Perey-Buck type, through an iterative method. Using the distorted wave Born approximation, we construct the T-matrix for (p,d) on 17O, 41Ca, 49Ca, 127Sn, 133Sn, and 209Pb at 20 and 50 MeV. As a result, we found that for bound states, the Perey corrected wave function resulting from the local equation agreed well with that from the non-local equation in the interior region, but discrepancies were found in the surface and peripheral regions. Overall, the Perey correction factor was adequate for scattering states, with the exception of a few partial waves corresponding to the grazing impact parameters. These differences proved to be important for transfer reactions. In conclusion, the Perey correction factor does offer an improvement over taking a direct local equivalent solution. However, if the desired accuracy is to be better than 10%, the exact solution of the non-local equation should be pursued.
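
    For reference (the standard form for a Perey-Buck non-local potential of range β; sign and prefactor conventions vary slightly between references), the Perey correction factor relates the non-local and local-equivalent single-channel wave functions approximately as

      \psi^{\mathrm{NL}}(r) \;\approx\; F(r)\,\psi^{\mathrm{LE}}(r),
      \qquad
      F(r) = \left[\,1 - \frac{\mu\beta^{2}}{2\hbar^{2}}\,U^{\mathrm{LE}}(r)\right]^{-1/2},

    where μ is the reduced mass and U^LE the local-equivalent potential; the factor approaches unity where the potential vanishes.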

  19. S-NPP VIIRS thermal emissive band gain correction during the blackbody warm-up-cool-down cycle

    NASA Astrophysics Data System (ADS)

    Choi, Taeyoung J.; Cao, Changyong; Weng, Fuzhong

    2016-09-01

    The Suomi National Polar-orbiting Partnership (S-NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) has onboard calibrators, a blackbody (BB) and a Space View (SV), for Thermal Emissive Band (TEB) radiometric calibration. In normal operation, the BB temperature is set to 292.5 K, providing one radiance level. In NOAA's Integrated Calibration and Validation System (ICVS) monitoring, the TEB calibration factors (F-factors) have been trended and show very stable responses; however, the BB Warm-Up-Cool-Down (WUCD) cycles provide measurements of the detectors' gain and temperature-dependent sensitivity. Since the launch of S-NPP, the NOAA Sea Surface Temperature (SST) group has noticed unexpected global SST anomalies during the WUCD cycles. In this study, the TEB F-factors are calculated during the WUCD cycle on June 17, 2015. The TEB F-factors are analyzed by identifying the VIIRS On-Board Calibrator Intermediate Product (OBCIP) files as Warm-Up or Cool-Down granules. To correct the SST anomaly, an F-factor correction parameter is calculated from modified C1 (or b1) values, which are derived from the linear portion of the C1 coefficient during the WUCD. The correction is applied back to the original VIIRS SST bands, significantly reducing the F-factor changes. Obvious improvements are observed in M12, M14 and M16, but correction effects are hardly seen in M16. Further investigation is needed to find the source of the F-factor oscillations during the WUCD.

  20. Electrical four-point probing of spherical metallic thin films coated onto micron sized polymer particles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettersen, Sigurd R., E-mail: sigurd.r.pettersen@ntnu.no, E-mail: jianying.he@ntnu.no; Stokkeland, August Emil; Zhang, Zhiliang

    Micron-sized metal-coated polymer spheres are frequently used as filler particles in conductive composites for electronic interconnects. However, the intrinsic electrical resistivity of the spherical thin films has not been attainable due to the lack of methods that eliminate the effect of contact resistance. In this work, a four-point probing method using vacuum-compatible piezo-actuated micro robots was developed to directly investigate the electric properties of individual silver-coated spheres under real-time observation in a scanning electron microscope. Poly(methyl methacrylate) spheres with a diameter of 30 μm and four different film thicknesses (270 nm, 150 nm, 100 nm, and 60 nm) were investigated. By multiplying the experimental results by geometrical correction factors obtained using finite element models, the resistivities of the thin films were estimated for the four thicknesses. These were higher than the resistivity of bulk silver.

  1. Computer modelling of the surface tension of the gas-liquid and liquid-liquid interface.

    PubMed

    Ghoufi, Aziz; Malfreyt, Patrice; Tildesley, Dominic J

    2016-03-07

    This review presents the state of the art in molecular simulations of interfacial systems and of the calculation of the surface tension from the underlying intermolecular potential. We provide a short account of different methodological factors (size effects, truncation procedures, long-range corrections and potential models) that can affect the results of the simulations. Accurate calculations of the surface tension as a function of temperature, pressure and composition are presented for the planar gas-liquid interface of a range of molecular fluids. In particular, we consider the challenging problems of reproducing the interfacial tension of salt solutions as a function of the salt molality; the simulation of spherical interfaces, including the calculation of the sign and size of the Tolman length for a spherical droplet; the use of coarse-grained models in the calculation of the interfacial tension of liquid-liquid surfaces; and the mesoscopic simulation of oil-water-surfactant interfacial systems.
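
    For context, the mechanical (Kirkwood-Buff) route commonly used in such planar-slab simulations obtains the surface tension from the anisotropy of the pressure tensor (standard form; the factor of 1/2 assumes a slab with two interfaces normal to z):

      \gamma = \frac{L_z}{2}\left[\langle P_{zz}\rangle - \frac{\langle P_{xx}\rangle + \langle P_{yy}\rangle}{2}\right],

    where L_z is the box length normal to the interfaces; for truncated potentials a long-range (tail) correction to γ is normally added, which is one of the methodological factors discussed in the review.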

  2. Propagation of flexural and membrane waves with fluid loaded NASTRAN plate and shell elements

    NASA Technical Reports Server (NTRS)

    Kalinowski, A. J.; Wagner, C. A.

    1983-01-01

    Modeling of flexural and membrane-type waves in various submerged (or in vacuo) plate and/or shell finite element models that are excited with steady-state harmonic loadings proportional to e^(iωt) is discussed. Only thin-walled plates and shells are treated, wherein rotary inertia and shear correction factors are not included. More specifically, the issue of determining the shell or plate mesh size needed to represent the spatial distribution of the plate or shell response is of prime importance for successfully representing the solution to the problem at hand. To this end, a procedure is presented for establishing guidelines for determining the mesh size based on a simple test model that can be used for a variety of plate and shell configurations, such as cylindrical shells with water loading, cylindrical shells in vacuo, plates with water loading, and plates in vacuo. The procedure for these four cases is given, with specific numerical examples presented only for the cylindrical shell case.
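
    A minimal sketch of the kind of mesh-size estimate such guidelines address (our own illustration; the elements-per-wavelength rule of thumb and the in-vacuo thin-plate assumption are ours, not the report's procedure):

      import numpy as np

      def flexural_wavelength(E, nu, rho, h, freq_hz):
          """Bending wavelength of a thin (Kirchhoff) plate in vacuo."""
          D = E * h**3 / (12.0 * (1.0 - nu**2))        # bending stiffness
          omega = 2.0 * np.pi * freq_hz
          k_f = (omega**2 * rho * h / D) ** 0.25       # flexural wavenumber
          return 2.0 * np.pi / k_f

      def max_element_size(E, nu, rho, h, freq_hz, elems_per_wavelength=6):
          # common rule of thumb: resolve each bending wavelength with ~6-10 elements
          return flexural_wavelength(E, nu, rho, h, freq_hz) / elems_per_wavelength

      # e.g. a 10 mm steel plate driven at 1 kHz:
      print(max_element_size(E=210e9, nu=0.3, rho=7850.0, h=0.01, freq_hz=1.0e3))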

  3. Patient-specific positioning guides versus manual instrumentation for total knee arthroplasty: an intraoperative comparison.

    PubMed

    Kassab, Safa; Pietrzak, William S

    2014-01-01

    Traditional manual instruments for total knee arthroplasty are associated with a malalignment rate of nearly 30%. Patient-specific positioning guides, developed to help address alignment, may also influence other intraoperative factors. This study compared a consecutive series of 270 Vanguard total knee replacements performed with Signature patient-specific positioning guides (study group) to a consecutive series of 595 similar knee replacements performed with manual instrumentation (control group). The study group averaged 16.7 fewer minutes in the operating room (p < .001), utilized tibial inserts that averaged 0.4 mm thinner with a smaller proportion of "thick" tibial inserts (14-18 mm) (p < .001), and required fewer transfusions (p = .022). The Signature-derived surgical plan accurately predicted correct femoral and tibial component sizes in 86.3% and 70.3% of the cases, respectively. These rates increased to 99.3% and 99.2%, respectively, for accuracy to within one size of the surgical plan, similar to published values for manual instrumentation.

  4. Experimental setup for the determination of the correction factors of the neutron doseratemeters in fast neutron fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliescu, Elena; Bercea, Sorin; Dudu, Dorin

    2013-12-16

    The use of the U-120 Cyclotron of IFIN-HH made it possible to set up a fast-neutron testing bench for determining the correction factors of doseratemeters dedicated to neutron measurement. This paper deals with the research performed to develop this irradiation facility based on the fast-neutron flux generated at the cyclotron. The facility is presented, together with the results obtained in determining the correction factor for a doseratemeter dedicated to neutron dose equivalent rate measurement.
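
    For orientation (the usual convention; the paper may define the ratio the other way around), the correction factor for such an instrument is the ratio of the reference dose equivalent rate delivered by the facility to the value indicated by the doseratemeter:

      k = \frac{\dot H^{*}(10)_{\mathrm{reference}}}{\dot H^{*}(10)_{\mathrm{indicated}}},

    so that multiplying routine readings by k corrects them to the reference quantity in the fast-neutron field.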

  5. Understanding the atmospheric measurement and behavior of perfluorooctanoic acid.

    PubMed

    Webster, Eva M; Ellis, David A

    2012-09-01

    The recently reported quantification of the atmospheric sampling artifact for perfluorooctanoic acid (PFOA) was applied to existing gas and particle concentration measurements. Specifically, gas phase concentrations were increased by a factor of 3.5 and particle-bound concentrations by a factor of 0.1. The correlation constants in two particle-gas partition coefficient (K(QA)) estimation equations were determined for multiple studies with and without correcting for the sampling artifact. Correction for the sampling artifact gave correlation constants with improved agreement to those reported for other neutral organic contaminants, thus supporting the application of the suggested correction factors for perfluorinated carboxylic acids. Applying the corrected correlation constant to a recent multimedia modeling study improved model agreement with corrected, reported, atmospheric concentrations. This work confirms that there is sufficient partitioning to the gas phase to support the long-range atmospheric transport of PFOA.

  6. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, Robert M.; Hamblen, David G.; Brouillette, Carl R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects.

  7. Method and system for photoconductive detector signal correction

    DOEpatents

    Carangelo, R.M.; Hamblen, D.G.; Brouillette, C.R.

    1992-08-04

    A corrective factor is applied so as to remove anomalous features from the signal generated by a photoconductive detector, and to thereby render the output signal highly linear with respect to the energy of incident, time-varying radiation. The corrective factor may be applied through the use of either digital electronic data processing means or analog circuitry, or through a combination of those effects. 5 figs.

  8. Local concurrent error detection and correction in data structures using virtual backpointers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, C.C.J.; Chen, P.P.; Fuchs, W.K.

    1989-11-01

    A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.

  9. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y; Tian, Z; Song, T

    Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: An FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam's dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the individual beamlets calculated with the fitted profile parameters and scaled using the scaling factors, these factors could be determined by solving an optimization problem which minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned an FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6 cm², 10×10 cm², 15×15 cm² and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms, providing sufficient accuracy of beamlet dose calculation for IMRT optimization.
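
    A minimal sketch of the two-stage idea described above (our own illustration; the erf-edge profile model, parameter names and the purely linear second stage are assumptions, not the tool's actual implementation):

      import numpy as np
      from scipy.optimize import least_squares
      from scipy.special import erf

      # Stage 1: fit the kernel width to a measured broad-beam penumbra.
      # Assumed model: a field edge of half-width fs_half blurred by a Gaussian kernel of width sigma.
      def edge_profile(x, sigma, fs_half):
          return 0.5 * (erf((fs_half - x) / (np.sqrt(2.0) * sigma)) +
                        erf((fs_half + x) / (np.sqrt(2.0) * sigma)))

      def fit_kernel_width(x, measured_profile, fs_half, sigma0=2.0):
          res = least_squares(lambda p: edge_profile(x, p[0], fs_half) - measured_profile,
                              x0=[sigma0], method="lm")   # Levenberg-Marquardt
          return res.x[0]

      # Stage 2: with the broad-beam dose a linear superposition of scaled beamlet kernels
      # (dose = A @ w), the scaling factors w follow from a linear least-squares fit.
      def fit_scaling_factors(A, reference_dose):
          w, *_ = np.linalg.lstsq(A, reference_dose, rcond=None)
          return w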

  10. A comparison of quality of present-day heat flow obtained from BHTs, Horner Plots of Malay Basin

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waples, D.W.; Mahadir, R.

    1994-07-01

    Reconciling temperature data obtained from measurements of single BHTs, multiple BHTs at a single depth, RFTs, and DSTs is very difficult. Data quality varied widely; however, DST data were assumed to be the most reliable. Data from 87 wells were used in this study, but only 47 wells had DST data. The BASINMOD program was used to calculate the present-day heat flow, using measured thermal conductivity and calibrated against the DST data. The heat flows obtained from the DST data were assumed to be correct and representative throughout the basin. Then, heat flows were calculated using (1) uncorrected RFT data, (2) multiple BHT data corrected by the Horner plot method, and (3) single BHT values corrected upward by a standard 10%. All three of these heat-flow populations had standard deviations identical to that of the DST population, but significantly lower mean values. Correction factors were calculated to give each of the three erroneous populations the same mean value as the DST population. Heat flows calculated from RFT data had to be corrected upward by a factor of 1.12 to be equivalent to DST data, those from Horner plot data by a factor of 1.18, and those from single BHT data by a factor of 1.2. These results suggest that present-day subsurface temperatures based on RFT, Horner plot, and BHT data are considerably lower than they should be. The authors suspect qualitatively similar results would be found in other areas. Hence, they recommend that significant corrections be routinely made until local calibration factors are established.

  11. SU-E-T-552: Monte Carlo Calculation of Correction Factors for a Free-Air Ionization Chamber in Support of a National Air-Kerma Standard for Electronic Brachytherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mille, M; Bergstrom, P

    2015-06-15

    Purpose: To use Monte Carlo radiation transport methods to calculate correction factors for a free-air ionization chamber in support of a national air-kerma standard for low-energy, miniature x-ray sources used for electronic brachytherapy (eBx). Methods: The NIST is establishing a calibration service for well-type ionization chambers used to characterize the strength of eBx sources prior to clinical use. The calibration approach involves establishing the well-chamber's response to an eBx source whose air-kerma rate at a 50 cm distance is determined through a primary measurement performed using the Lamperti free-air ionization chamber. However, the free-air chamber measurements of charge or current can only be related to the reference air-kerma standard after applying several corrections, some of which are best determined via Monte Carlo simulation. To this end, a detailed geometric model of the Lamperti chamber was developed in the EGSnrc code based on the engineering drawings of the instrument. The egs-fac user code in EGSnrc was then used to calculate energy-dependent correction factors which account for missing or undesired ionization arising from effects such as: (1) attenuation and scatter of the x-rays in air; (2) primary electrons escaping the charge collection region; (3) lack of charged particle equilibrium; (4) atomic fluorescence and bremsstrahlung radiation. Results: Energy-dependent correction factors were calculated assuming a monoenergetic point source with the photon energy ranging from 2 keV to 60 keV in 2 keV increments. Sufficient photon histories were simulated so that the Monte Carlo statistical uncertainty of the correction factors was less than 0.01%. The correction factors for a specific eBx source will be determined by integrating these tabulated results over its measured x-ray spectrum. Conclusion: The correction factors calculated in this work are important for establishing a national standard for eBx which will help ensure that dose is accurately and consistently delivered to patients.
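
    For context (the standard free-air-chamber formalism, not text from the abstract), the measured ionization current is converted to an air-kerma rate by applying the product of such correction factors:

      \dot K_{\mathrm{air}} = \frac{I}{\rho_{\mathrm{air}} V_{\mathrm{eff}}}\cdot\frac{\bar W}{e}\cdot\frac{1}{1-\bar g}\cdot\prod_i k_i,

    where I is the ionization current, ρ_air V_eff the mass of air in the effective collecting volume, W̄/e the mean energy expended per unit charge of ions produced in dry air, ḡ the mean fraction of electron energy lost to radiative processes, and the k_i the correction factors (air attenuation and scatter, electron loss, fluorescence and bremsstrahlung, etc.) of the kind computed in this work.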

  12. Dual ring multilayer ionization chamber and theory-based correction technique for scanning proton therapy.

    PubMed

    Takayanagi, Taisuke; Nihongi, Hideaki; Nishiuchi, Hideaki; Tadokoro, Masahiro; Ito, Yuki; Nakashima, Chihiro; Fujitaka, Shinichiro; Umezawa, Masumi; Matsuda, Koji; Sakae, Takeji; Terunuma, Toshiyuki

    2016-07-01

    To develop a multilayer ionization chamber (MLIC) and a correction technique that suppresses differences between MLIC and water phantom measurements in order to achieve fast and accurate depth dose measurements in pencil beam scanning proton therapy. The authors distinguish between a calibration procedure and an additional correction: 1) the calibration for variations in the air gap thickness and the electrometer gains is addressed without involving measurements in water; 2) the correction is addressed to suppress the difference between depth dose profiles in water and in the MLIC materials, due to the nuclear interaction cross sections, with a semiempirical model tuned using measurements in water. In the correction technique, raw MLIC data are obtained for each energy layer and integrated after multiplying them by the correction factor, because the correction factor depends on incident energy. The MLIC described here has been designed especially for pencil beam scanning proton therapy. This MLIC is called a dual ring multilayer ionization chamber (DRMLIC). The shape of the electrodes allows the DRMLIC to measure both the percentage depth dose (PDD) and the integrated depth dose (IDD) because ionization electrons are collected from the inner and outer air gaps independently. IDDs for beam energies of 71.6, 120.6, 159, 180.6, and 221.4 MeV were measured and compared with water phantom results. Furthermore, the measured PDDs along the central axis of the proton field with a nominal field size of 10 × 10 cm² were compared. The spread-out Bragg peak was 20 cm for fields with a range of 30.6 cm and 3 cm for fields with a range of 6.9 cm. The IDDs measured with the DRMLIC using the correction technique were consistent with those of the water phantom; except for the beam energy of 71.6 MeV, all of the points satisfied the 1% dose/1 mm distance-to-agreement criterion of the gamma index. The 71.6 MeV depth dose profile showed slight differences in the shallow region, but 94.5% of the points satisfied the 1%/1 mm criterion. The 90% ranges, defined at the 90% dose position in the distal fall-off, were in good agreement with those in the water phantom, and the range differences from the water phantom were less than ±0.3 mm. The PDDs measured with the DRMLIC were also consistent with those of the water phantom; 97% of the points passed the 1%/1 mm criterion. It was demonstrated that the new correction technique suppresses the difference between the depth dose profiles obtained with the MLIC and those obtained from a water phantom, and a DRMLIC enabling fast measurements of both IDD and PDD was developed. The IDDs and PDDs measured with the DRMLIC using the correction technique were in good agreement with those of the water phantom, and it was concluded that the correction technique and DRMLIC are useful for depth dose profile measurements in pencil beam scanning proton therapy.
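
    A minimal sketch of the per-energy-layer correction described above (array and function names are ours; the correction is shown as a single scalar per layer, whereas the actual semiempirical model may be more detailed):

      import numpy as np

      def corrected_integrated_depth_dose(raw_by_layer, energies_mev, correction_factor):
          """raw_by_layer: (n_layers, n_channels) MLIC readings, one row per energy layer.
          correction_factor: callable returning the factor for a given incident energy."""
          raw = np.asarray(raw_by_layer, dtype=float)
          factors = np.array([correction_factor(e) for e in energies_mev])
          # multiply each layer by its energy-dependent factor, then integrate over layers
          return (factors[:, None] * raw).sum(axis=0)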

  13. Dual ring multilayer ionization chamber and theory-based correction technique for scanning proton therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Takayanagi, Taisuke, E-mail: taisuke.takayanagi.wd

    2016-07-15

    Purpose: To develop a multilayer ionization chamber (MLIC) and a correction technique that suppresses differences between MLIC and water phantom measurements in order to achieve fast and accurate depth dose measurements in pencil beam scanning proton therapy. Methods: The authors distinguish between a calibration procedure and an additional correction: 1) the calibration for variations in the air gap thickness and the electrometer gains is addressed without involving measurements in water; 2) the correction is addressed to suppress the difference between depth dose profiles in water and in the MLIC materials, due to the nuclear interaction cross sections, with a semiempirical model tuned using measurements in water. In the correction technique, raw MLIC data are obtained for each energy layer and integrated after multiplying them by the correction factor, because the correction factor depends on incident energy. The MLIC described here has been designed especially for pencil beam scanning proton therapy. This MLIC is called a dual ring multilayer ionization chamber (DRMLIC). The shape of the electrodes allows the DRMLIC to measure both the percentage depth dose (PDD) and the integrated depth dose (IDD) because ionization electrons are collected from the inner and outer air gaps independently. Results: IDDs for beam energies of 71.6, 120.6, 159, 180.6, and 221.4 MeV were measured and compared with water phantom results. Furthermore, the measured PDDs along the central axis of the proton field with a nominal field size of 10 × 10 cm² were compared. The spread-out Bragg peak was 20 cm for fields with a range of 30.6 cm and 3 cm for fields with a range of 6.9 cm. The IDDs measured with the DRMLIC using the correction technique were consistent with those of the water phantom; except for the beam energy of 71.6 MeV, all of the points satisfied the 1% dose/1 mm distance-to-agreement criterion of the gamma index. The 71.6 MeV depth dose profile showed slight differences in the shallow region, but 94.5% of the points satisfied the 1%/1 mm criterion. The 90% ranges, defined at the 90% dose position in the distal fall-off, were in good agreement with those in the water phantom, and the range differences from the water phantom were less than ±0.3 mm. The PDDs measured with the DRMLIC were also consistent with those of the water phantom; 97% of the points passed the 1%/1 mm criterion. Conclusions: It was demonstrated that the new correction technique suppresses the difference between the depth dose profiles obtained with the MLIC and those obtained from a water phantom, and a DRMLIC enabling fast measurements of both IDD and PDD was developed. The IDDs and PDDs measured with the DRMLIC using the correction technique were in good agreement with those of the water phantom, and it was concluded that the correction technique and DRMLIC are useful for depth dose profile measurements in pencil beam scanning proton therapy.

  14. Quadratic electroweak corrections for polarized Moller scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    A. Aleksejevs, S. Barkanova, Y. Kolomensky, E. Kuraev, V. Zykunov

    2012-01-01

    The paper discusses the two-loop (NNLO) electroweak radiative corrections to the parity violating electron-electron scattering asymmetry induced by squaring one-loop diagrams. The calculations are relevant for the ultra-precise 11 GeV MOLLER experiment planned at Jefferson Laboratory and experiments at high-energy future electron colliders. The imaginary parts of the amplitudes are taken into consideration consistently in both the infrared-finite and divergent terms. The size of the obtained partial correction is significant, which indicates a need for a complete study of the two-loop electroweak radiative corrections in order to meet the precision goals of future experiments.

  15. Heavy quark form factors at two loops

    NASA Astrophysics Data System (ADS)

    Ablinger, J.; Behring, A.; Blümlein, J.; Falcioni, G.; De Freitas, A.; Marquard, P.; Rana, N.; Schneider, C.

    2018-05-01

    We compute the two-loop QCD corrections to the heavy quark form factors in the case of the vector, axial-vector, scalar and pseudoscalar currents up to second order in the dimensional parameter ε = (4 - D)/2. These terms are required in the renormalization of the higher-order corrections to these form factors.

  16. Determination of correction factors in beta radiation beams using Monte Carlo method.

    PubMed

    Polo, Ivón Oramas; Santos, William de Souza; Caldas, Linda V E

    2018-06-15

    The absorbed dose rate is the main characterization quantity for beta radiation. The extrapolation chamber is considered the primary standard instrument. To determine absorbed dose rates in beta radiation beams, it is necessary to establish several correction factors. In this work, the correction factors for the backscatter due to the collecting electrode and to the guard ring, and the correction factor for Bremsstrahlung in beta secondary standard radiation beams are presented. For this purpose, the Monte Carlo method was applied. The results obtained are considered acceptable, and they agree within the uncertainties. The differences between the backscatter factors determined by the Monte Carlo method and those of the ISO standard were 0.6%, 0.9% and 2.04% for the 90Sr/90Y, 85Kr and 147Pm sources, respectively. The differences between the Bremsstrahlung factors determined by the Monte Carlo method and those of the ISO standard were 0.25%, 0.6% and 1% for the 90Sr/90Y, 85Kr and 147Pm sources, respectively.
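
    Schematically (our summary of the usual extrapolation-chamber practice, not an equation from the paper), such factors enter the absorbed-dose evaluation multiplicatively, so the corrected reading is

      \dot D_{\mathrm{corr}} = \dot D_{\mathrm{raw}}\; k_{\mathrm{back}}\, k_{\mathrm{brems}} \prod_j k_j ,

    where k_back corrects for backscatter from the collecting electrode and guard ring, k_brems for the bremsstrahlung contribution, and the remaining k_j cover the other influence quantities required by the ISO standard cited above.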

  17. Variation in polyp size estimation among endoscopists and impact on surveillance intervals.

    PubMed

    Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren

    2014-10-01

    Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Web-based survey. A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or of all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Survey design with the use of video clips. Substantial overestimation and underestimation of polyp size occurs with visual estimation leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates.

  18. From bead to rod: Comparison of theories by measuring translational drag coefficients of micron-sized magnetic bead-chains in Stokes flow

    PubMed Central

    Lu, Chen; Zhao, Xiaodan; Kawamura, Ryo

    2017-01-01

    Frictional drag force on an object in Stokes flow follows a linear relationship with the velocity of translation and a translational drag coefficient. This drag coefficient is related to the size, shape, and orientation of the object. For rod-like objects, analytical solutions for the drag coefficients have been proposed based on three rough approximations of the rod geometry, namely the bead model, ellipsoid model, and cylinder model. These theories all agree that the translational drag coefficients of rod-like objects are functions of the rod length and aspect ratio, but differ from one another in the correction-factor terms of the equations. By tracking the displacement of the particles through stationary fluids of calibrated viscosity in a magnetic tweezers setup, we experimentally measured the drag coefficients of micron-sized beads and their bead-chain formations with chain lengths of 2 to 27. We verified our methodology against analytical solutions for dimers of two touching beads, and compared our measured drag coefficient values for rod-like objects with theoretical calculations. Our comparison reveals several analytical solutions that used more appropriate approximations and derived formulae that agree better with our measurements. PMID:29145447
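
    For orientation (generic slender-body forms in our notation; the end-correction constants are exactly where the three models differ, so we leave them symbolic):

      F = \zeta\,v, \qquad
      \zeta_{\parallel} = \frac{2\pi\eta L}{\ln p + \gamma_{\parallel}}, \qquad
      \zeta_{\perp} = \frac{4\pi\eta L}{\ln p + \gamma_{\perp}},

    where η is the fluid viscosity, L the rod length, p = L/d the aspect ratio, and γ_∥, γ_⊥ the model-dependent end-correction terms that distinguish the bead, ellipsoid and cylinder models; for a single sphere of radius R the coefficient reduces to the Stokes value ζ = 6πηR.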

  19. Simulation of drift of pesticides: development and validation of a model.

    PubMed

    Brusselman, E; Spanoghe, P; Van der Meeren, P; Gabriels, D; Steurbaut, W

    2003-01-01

    Over the last decade, drift of pesticides has been recognized as a major environmental problem. Large fractions of pesticides can be transported through the air and deposited in neighbouring ecosystems during and after application. A new two-step computer drift model was developed: FYDRIMO, or F(ph)Ysical DRift MOdel. In the first step the droplet size spectrum of a nozzle is analysed, giving the volume percentage of droplets of each size class. In the second step the model predicts the deposition of each droplet of a given size. This second part of the model runs in MATLAB and is based on a combination of two physical contributions: the gravity force and friction (drag) forces. At this stage of development, corrections are included for evaporation and for the wind force following a measured wind profile. For validation, wind tunnel experiments were performed. Salt solutions were sprayed at two wind velocities and at variable distances above the floor. Small gutters in the floor filled with filter paper were used to collect the sprayed droplets. After analysing the wind tunnel results and comparing them with the model predictions, FYDRIMO appears to have good predictive capacity.
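
    A sketch of the force balance such a model integrates for each droplet (our notation; the abstract does not give FYDRIMO's exact drag law, so Stokes drag on a droplet of diameter d is assumed here):

      m\frac{d\mathbf{v}}{dt} = m\,\mathbf{g} + 3\pi\mu\,d\,(\mathbf{u}_{\mathrm{wind}} - \mathbf{v}),

    where m is the droplet mass, μ the dynamic viscosity of air, and u_wind the local wind velocity taken from the measured profile; evaporation enters through the time dependence of d and m.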

  20. Revealing strong bias in common measures of galaxy properties using new inclination-independent structures

    NASA Astrophysics Data System (ADS)

    Devour, Brian M.; Bell, Eric F.

    2017-06-01

    Accurate measurement of galaxy structure is a prerequisite for quantitative investigation of galaxy properties or evolution. Yet the impact of galaxy inclination and dust on commonly used metrics of galaxy structure is poorly quantified. We use infrared data sets to select inclination-independent samples of disc and flattened elliptical galaxies. These samples show strong variation in Sérsic index, concentration and half-light radius with inclination. We develop novel inclination-independent galaxy structures by collapsing the near-infrared light distribution onto the major axis, yielding inclination-independent 'linear' measures of size and concentration. With these new metrics we select a sample of Milky Way analogue galaxies with similar stellar masses, star formation rates, sizes and concentrations. Optical luminosities, light distributions and spectral properties are all found to vary strongly with inclination: when galaxies are inclined to edge-on, r-band luminosities dim by >1 magnitude, sizes decrease by a factor of 2, 'dust-corrected' estimates of the star formation rate drop threefold, metallicities decrease by 0.1 dex, and edge-on galaxies are half as likely to be classified as star forming. These systematic effects should be accounted for in analyses of galaxy properties.
